

IEEE Trans Med Robot Bionics. Author manuscript; available in PMC 2021 May 1.

Published in final edited form as:

PMCID: PMC7243456

NIHMSID: NIHMS1576714

Learning the Complete Shape of Concentric Tube Robots

Alan Kuntz, Member, IEEE, Armaan Sethi, Robert J. Webster, III, Member, IEEE, and Ron Alterovitz, Fellow, IEEE

Alan Kuntz

Robotics Center and the School of Computing, University of Utah, Salt Lake City, UT 84112 USA

Armaan Sethi

Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA

Robert J. Webster, III

Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, 37235 USA

Ron Alterovitz

Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA

Abstract

Concentric tube robots, composed of nested pre-curved tubes, have the potential to perform minimally invasive surgery at hard-to-reach sites in the human body. In order to plan motions that safely perform surgeries in constrained spaces that require avoiding sensitive structures, the ability to accurately estimate the entire shape of the robot is needed. Many state-of-the-art physics-based shape models are unable to account for complex physical phenomena and are therefore less accurate than is required for safe surgery. In this work, we present a learned model that can estimate the entire shape of a concentric tube robot. The learned model is based on a deep neural network that is trained using a mixture of simulated and physical data. We evaluate multiple network architectures and demonstrate the model's ability to compute the full shape of a concentric tube robot with high accuracy.

Keywords: Concentric Tube Robots, Continuum Surgical Robots, Shape Modeling, Machine Learning, Deep Neural Networks

I. Introduction

CONCENTRIC tube robots are needle-diameter robots composed of nested pre-curved tubes [1]. By rotating and translating the tubes with respect to one another, the robot's shaft takes curvilinear shapes. This enables these robots to curve around anatomical obstacles to perform surgical procedures at difficult-to-reach sites. Concentric tube robots have the potential to enable less invasive surgeries in many areas of the human body, including the skull base, the lungs, and the heart [2].

Motion planning can enable concentric tube robots to move safely through the body, reaching desired surgical targets while avoiding collisions with sensitive anatomical obstacles, such as blood vessels, nerves, and organs [3], [4], [5]. In order to plan safe motions for concentric tube robots that automatically avoid unintended collisions with the patient's anatomy, motion planning algorithms simulate robot motion, performing collision detection to ensure that the robot's geometry does not collide with obstacles [6]. In order to perform collision detection, an accurate geometric model of the robot's entire shape is required.

Accurate prediction of the entire shape of a concentric tube robot from its control inputs is challenging, and current state-of-the-art shape models are often unable to accurately account for complex and unpredictable physical phenomena such as inconsistent friction between tubes, non-homogeneous material properties, and imprecisely shaped tubes [7], [8].

In this work, we present a data-driven, deep-neural-network-based approach that learns a function that accurately models the entire shape of the concentric tube robot, for a given set of tubes, as a function of its configuration (see Fig. 1). The neural network takes as input the robot's configuration, and the network outputs coefficients for orthonormal polynomial basis functions in x, y, and z parameterized by arc length along the robot's tubular shaft. In this way, a function representing the entire shape of the robot can be produced by one feed-forward pass through the neural network.

Fig. 1. Given a concentric tube robot configuration defined by the translations and rotations of the tubes (upper left), our neural network (upper right) outputs coefficients for a set of polynomial basis functions (lower left) that are combined to model the backbone of the robot's 3D shape (lower right).

The key insight behind our parameterization is that the uncertainty in the physics-based shape models is due mainly to uncertainty in curvature and torsion. The arc length of the robot's shape, however, is independent of these and as such is generally not subject to the same sources of uncertainty. We can leverage this knowledge by parameterizing our shape function by arc length. Additionally, we leverage the machine learning technique known as transfer learning [9]. Transfer learning allows our model to first utilize a large simulation data set to learn the general structure of concentric tube robot kinematics and then to leverage a smaller real-world data set to learn the differences between the simulated data and the real robot's kinematics. We demonstrate that pre-training our network on simulated data produced by a physics-based model, then fine-tuning the model on real-world, sensed concentric tube robot shapes, enables the model to perform more accurately than models trained on either simulated or real-world data alone.

This paper extends work previously presented in [10]. We extend that work by performing additional analysis on the method via the evaluation of multiple network architectures, comparing against models that do not utilize our pre-training strategy, and comparing the computation time required to compute shapes by our method with the physics-based method.

II. Related Work

Concentric tube robots may reduce the invasiveness of a variety of surgical tasks [2], potentially improving patient outcomes. The extremely unintuitive manual control of these devices has motivated research into their teleoperation [4], [11]. Teleoperation requires some form of computational actuation of the robot, a role that has been served by both traditional control methods and by motion planning.

Traditional control applied to concentric tube robots has primarily focused on computing controls that are based on desired tip motions. For example, methods have been developed that compute controls based on the robot's Jacobian [7], [12]. Additionally, Fourier-series-based approximations of the robot's kinematics have been utilized for control [13]. However, in the presence of complex anatomy, such local control methods can struggle to compute collision-free motions that require complex trajectories.

By contrast, motion planning enables concentric tube robots to take a global view of motion in constrained anatomy, allowing for more complex motions that avoid obstacles. To do so, a few approaches have been proposed. In order to enable fast computation, simplified kinematics have been used in motion planning [14], [15]. Sampling-based motion planning methods for concentric tube robots have also been proposed [3], [16], [4], [11], [5].

In both cases, whether local control or motion planning is utilized, in order to ensure the concentric tube robot does not collide with patient anatomy in unexpected ways, an accurate shape model, mapping the control variables of the robot to its physical geometry in the world, is required.

The shape of continuum robots can be sensed online using sensors such as fiber Bragg grating (FBG) sensors [17], [18]. FBG sensors use specialized embedded optical fibers to sense the shape of curved rods. FBG sensors have been used with concentric tube robots to estimate their curvature both in bending and in torsion [19]. These methods sense the shape of the concentric tube robot as it moves in the physical world. However, in order to plan safe motions for the robot, we require a model that can anticipate the shape of the robot in simulation, prior to execution of the motion on the physical robot. For this reason, we require a method that computes the shape of the robot in advance.

Most existing shape computation methods represent concentric tube robots using the Cosserat rod equations [20], [21], which define a system of ordinary differential equations that, when solved, provide the shape of the concentric tube robot's backbone. Such models have increased in complexity over time, with the most advanced modeling physical effects such as torsion [13], [7]. Other physical phenomena, such as friction between tubes, have been investigated but not yet fully and effectively integrated into such physics-based models [22].

Machine learning methods have been applied to both the forward kinematics and inverse kinematics of continuum robots. For example, the inverse kinematics of tendon-driven robots have been computed using various data-driven methods [23] and feed-forward neural networks [24]. Neural networks have also been applied to learning the inverse kinematics of pneumatically actuated continuum robots [25]. Neural network models have been successfully used to more accurately model the forward kinematics and inverse kinematics of concentric tube robots [26], [27]. An ensemble method has been applied to learn and adapt a forward kinematics model for concentric tube robots online [8]. However, these models only consider the pose of the robot's tip. In order to successfully plan and execute motions that avoid unwanted collisions between the robot's shaft and patient anatomy, a model must accurately predict the entire shape of the robot.

III. Method

In order to safely plan motions for concentric tube robots operating around sensitive anatomical structures, we must be able to anticipate the shape that the robot takes in the body along its entire length, not only at its tip, as we actuate the robot. For this reason, we consider the problem of accurately mapping a concentric tube robot's configuration, defined by each of its tubes' rotations and translations, to the robot's geometry, along its entire length, in the world.

Our neural network model, trained on a combination of simulated and sensed, real-world data, takes the robot's configuration as input and outputs coefficients for an arc-length-parameterized space-curve function that represents the robot's backbone. This function, combined with knowledge of the cross-sectional radii of the robot's tubes, represents knowledge of the robot's shape in the world at that configuration (see Fig. 1).

A. Ground Truth Data Generation

In order to learn a shape function for the concentric tube robot, data representing the robot's shape as a function of its configuration must be gathered. To gather shape data, we apply a multi-view 3D computer vision technique called shape from silhouette [28], in which multiple images of the robot's shape for a given configuration are collected from cameras with known position and orientation (see Fig. 2).

Fig. 2. We train the neural network using data from a physical robot. By taking images from multiple cameras (blue arrows), the shape of the robot's shaft (pink arrows) can be reconstructed in 3D using shape from silhouette.

We place two cameras at roughly orthogonal angles around the robot such that the robot's shaft is in the field of view of both cameras. We then move the robot to a sequence of randomly sampled configurations and image the robot at each configuration with both cameras simultaneously (see Fig. 3a). Then, for each pair of images, we automatically segment the robot's shaft in each image using color thresholding (see Fig. 3b). For each pixel in the segmentation, a ray is traced out from the camera's position through its image plane (we calibrate the cameras' intrinsic and extrinsic parameters using MATLAB's Computer Vision Toolbox). These rays then pass through a voxelized representation of the world, and voxels that are intersected by rays from both cameras represent the robot's shape in the world (see Fig. 3c). We then fit a 3D space curve to the voxels using ordinary least squares, resulting in a curve that represents the true, sensed backbone of the robot. We then train the neural network using points along the sensed backbone as ground truth (see Fig. 3d).
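The voxel-intersection step can be expressed compactly by projecting candidate voxel centers into each camera and testing them against the silhouette masks, which is equivalent to the per-pixel ray tracing described above. The sketch below is a minimal illustration, assuming calibrated 3×4 projection matrices (e.g., exported from the MATLAB calibration) and boolean segmentation masks; all names are ours, not from the paper's implementation.

```python
import numpy as np

def carve_visual_hull(masks, projections, voxel_centers):
    """Keep the voxel centers whose image projections fall inside every
    camera's silhouette mask (equivalent to intersecting the rays
    back-projected from both segmentations)."""
    keep = np.ones(len(voxel_centers), dtype=bool)
    pts_h = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for mask, P in zip(masks, projections):      # P: 3x4 camera matrix
        uvw = pts_h @ P.T                        # pinhole projection
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = mask.shape
        in_img = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[in_img] = mask[v[in_img], u[in_img]] # inside this silhouette?
        keep &= hit                              # must be inside all of them
    return voxel_centers[keep]
```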

Fig. 3. To generate training data from the physical robot, (a) we take an image of the robot's shape with two cameras with known positions relative to the robot. (b) We then employ a color-thresholding technique to automatically segment out the robot's shape (shown in red) from the green background in the images. (c) We apply the shape from silhouette algorithm to generate a set of voxels in 3D space that correspond to the robot's shape in the world (shown in blue). (d) We generate a set of evenly spaced points that best approximate the set of voxels, resulting in a discretized version of the robot's backbone in the world (shown in blue).

B. Neural Network Model

Our neural network architecture consists of a feed-forward, fully connected network. We choose a feed-forward, fully connected network due to its simplicity and because the inputs and outputs are clearly defined, allowing us to learn a mapping without imposing any additional constraints associated with more complex models. We utilize the parametric rectified linear unit (PReLU) as our non-linear activation function between layers, which we noted provided a slight performance improvement over the standard rectified linear unit.

1) Input to the network: For a robot consisting of k tubes, we parameterize the i-th tube's state as

γ_i := {γ_{1,i}, γ_{2,i}, γ_{3,i}} = {cos(α_i), sin(α_i), β_i},

where α_i ∈ (−π, π] is the i-th tube's rotation and β_i ∈ ℝ is the i-th tube's translation, as in [27]. We then parameterize the robot's configuration as the concatenation

q := {γ_1, γ_2, …, γ_k} ∈ ℝ^{3k}.

This serves as the input to the neural network.
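Concretely, the encoding maps each tube's rotation to a cosine/sine pair, which avoids the discontinuity where α_i wraps around at ±π. A minimal sketch of the encoding, with illustrative sample values:

```python
import numpy as np

def encode_config(alphas, betas):
    """Encode each tube's rotation alpha_i and translation beta_i as
    (cos(alpha_i), sin(alpha_i), beta_i); concatenating over the k tubes
    yields the 3k-dimensional network input q."""
    return np.concatenate([[np.cos(a), np.sin(a), b]
                           for a, b in zip(alphas, betas)])

# Illustrative values for a 3-tube robot -> a 9-dimensional input vector.
q = encode_config(alphas=[0.3, -1.2, 2.0], betas=[-0.10, -0.05, -0.02])
```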

2) Output of the network: The network outputs 15 coefficients,

c_1^x, c_2^x, …, c_5^x, c_1^y, c_2^y, …, c_5^y, c_1^z, c_2^z, …, c_5^z,

which serve as coefficients for a set of 5 orthonormal polynomial basis functions in x, y, and z parameterized by arc length. This results in three functions, x(q, s), y(q, s), and z(q, s), where

x(q, s) = len(q) × (c_1^x P_1(s) + c_2^x P_2(s) + … + c_5^x P_5(s)),

where s is a normalized arc length parameter between 0 and 1, and len(q) is the total arc length of the robot's backbone in configuration q. The functions y(q, s) and z(q, s) are defined similarly with their respective coefficients. The resulting shape function is

shape(q, s) = ⟨x(q, s), y(q, s), z(q, s)⟩.

To evaluate the shape of the robot at a given configuration, the neural network can be evaluated at q, and the resulting coefficients define a space-curve function that can then be evaluated at any desired arc length. This, combined with knowledge of the robot's radius as a function of arc length, results in a prediction of the robot's geometry in the world.
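One way to realize this architecture and the shape evaluation is sketched below in PyTorch. The class and helper names are ours; the default depth and width match the best-performing architecture reported later (3 hidden layers of 30 nodes), and the total arc length len(q) is assumed to be supplied, since it is determined by the tube translations rather than learned.

```python
import torch
import torch.nn as nn

class CTRShapeNet(nn.Module):
    """Fully connected network: 3k configuration values in, 15 coefficients
    out (5 basis-function coefficients for each of x, y, and z)."""
    def __init__(self, n_tubes=3, width=30, depth=3):
        super().__init__()
        layers, d = [], 3 * n_tubes
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.PReLU()]
            d = width
        layers.append(nn.Linear(d, 15))
        self.net = nn.Sequential(*layers)

    def forward(self, q):              # q: (batch, 3k)
        return self.net(q)             # (batch, 15)

def backbone_points(coeffs, arc_len, basis_mat):
    """Evaluate the predicted backbone at sampled arc lengths.
    coeffs: (batch, 15); arc_len: (batch,) total arc lengths len(q);
    basis_mat: (n_pts, 5) torch tensor of [P1(s_j) ... P5(s_j)].
    Returns (batch, n_pts, 3) points <x, y, z>."""
    C = coeffs.view(-1, 3, 5)                          # per-axis coefficients
    pts = torch.einsum('js,nvs->njv', basis_mat, C)    # combine basis functions
    return arc_len.view(-1, 1, 1) * pts                # scale by len(q)
```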

The orthonormal polynomial basis functions, generated using Gram-Schmidt orthogonalization, are visualized in Fig. 4, and the coefficients that define the polynomials are shown in Table I. For example, P1(s) := 1.7321s, P2(s) := −6.7082s + 8.9443s², etc.

Fig. 4. The orthonormal polynomial basis functions generated using Gram-Schmidt orthogonalization.

TABLE I

Coefficients for the orthonormal polynomial basis functions.

         s         s²        s³         s⁴         s⁵
P1(s)    1.7321    0         0          0          0
P2(s)    −6.7082   8.9443    0          0          0
P3(s)    15.8745   −52.915   39.6863    0          0
P4(s)    −30.0     180.0     −315.0     168.0      0
P5(s)    49.7494   −464.33   1392.98    −1671.6    696.4912
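The basis can be regenerated numerically. The sketch below runs Gram-Schmidt on the monomials s, s², …, s⁵ under the L² inner product on [0, 1] (there is no constant term, so every basis function vanishes at the robot's base) and reproduces the Table I coefficients up to rounding; the function name and grid resolution are our choices.

```python
import numpy as np

def orthonormal_poly_basis(n=5, grid=2001):
    """Gram-Schmidt on the monomials s, s^2, ..., s^n over [0, 1].
    Returns an (n, n) array whose row i holds the coefficients of P_{i+1}
    on (s, s^2, ..., s^n); matches Table I up to rounding."""
    s = np.linspace(0.0, 1.0, grid)
    mono = np.stack([s ** (i + 1) for i in range(n)])      # monomial values
    inner = lambda f, g: np.trapz(f * g, s)                # L2 inner product
    coef = np.eye(n)
    for i in range(n):
        for j in range(i):                                 # remove projections
            coef[i] -= inner(coef[i] @ mono, coef[j] @ mono) * coef[j]
        coef[i] /= np.sqrt(inner(coef[i] @ mono, coef[i] @ mono))
    return coef

# Evaluate P1..P5 at the 20 arc-length samples used during training,
# giving the (20, 5) matrix basis_mat used in the other sketches.
B = orthonormal_poly_basis()
s20 = np.linspace(0.0, 1.0, 20)
basis_mat = np.stack([s20 ** (i + 1) for i in range(5)], axis=1) @ B.T
```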

C. Training the Model

We first pretrained our model on 100,000 data points (configuration and backbone pairs), where the configuration was sampled uniformly at random from the robot's configuration space and the backbone was generated by the physics-based model presented in [7], a mechanics-based kinematics model based on the Cosserat rod equations. Such pretraining allows us to apply a large amount of simulation data in order to prevent overfitting on the smaller amount of sensed, real-world data. Additionally, this allows the model to learn general characteristics of how the concentric tube robot's shapes are defined by the configuration from the simulation model, and then to learn the ways in which the physical robot differs from simulation during fine-tuning.

We generated real-world data by sampling configurations uniformly at random and pairing them with the backbone they produced in the real world, which we sensed via shape from silhouette (see Figs. 2 and 3). We then separate our real-world data into three sets: a training set of 7,000 data points, used to fine-tune the model; a validation set of 1,000 data points, used during training to evaluate convergence; and a test set of 1,000 data points, which we left out for evaluating the performance of the network. We utilize a pointwise sum-of-squared-distances loss function and the Adam optimizer [29] during training. We utilize early stopping of training at convergence, as evaluated on our validation set, in order to prevent overfitting on the training set. If 10,000 epochs of training have passed without the model improving its performance as evaluated on the validation set, training is stopped and the best version of the model found up to that time is used.
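A compact sketch of one training stage is below. It reuses backbone_points from the earlier sketch, uses full-batch gradient steps for brevity (the batch size is our assumption; the paper does not specify one), and implements the 10,000-epoch early-stopping rule; for the Sim+Real models, the same routine is called first on the simulated set and then on the real set.

```python
import copy
import torch

def fit(model, train_data, val_data, basis_mat,
        max_epochs=1_000_000, patience=10_000):
    """One training stage with Adam and validation-based early stopping.
    train_data/val_data: (q, pts, lens) tensors holding configurations,
    (N, 20, 3) ground-truth backbone points, and total arc lengths."""
    q, pts, lens = train_data
    qv, pv, lv = val_data
    opt = torch.optim.Adam(model.parameters())
    best_val, best_state, stale = float('inf'), None, 0
    for _ in range(max_epochs):
        # pointwise sum-of-squared-distances loss over the backbone points
        loss = ((backbone_points(model(q), lens, basis_mat) - pts) ** 2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            val = ((backbone_points(model(qv), lv, basis_mat) - pv) ** 2).sum().item()
        if val < best_val:
            best_val, best_state, stale = val, copy.deepcopy(model.state_dict()), 0
        else:
            stale += 1
            if stale >= patience:      # 10,000 epochs without improvement
                break
    model.load_state_dict(best_state)  # keep the best model found
```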

We demonstrate the motivation for the transfer learning approach in this application in Fig. 5, in which we plot, in the workspace, evenly spaced points along the backbone of every configuration in our simulation training set (Fig. 5a) and our real-world training set (Fig. 5b). The simulation data set provides greater density throughout the workspace of the robot than the real-world data set.

Fig. 5. We generate training data both in simulation and from the physical robot by sampling uniformly at random in the robot's configuration space. To demonstrate how these samples map to the robot's workspace, we plot points along the backbone of each data point from (a) the simulation data set, and (b) the real-world data set.

IV. Results

The specifications of the robot's component tubes used in the experiments are shown in Table II.

TABLE II

Tube parameters for the 3-tube concentric tube robot.

Tube     Outer Diameter (mm)   Inner Diameter (mm)   Straight Length (mm)   Curved Length (mm)   Curvature (m⁻¹)
Inner    1.3                   1.0                   245.9                  66.6                 9.3354
Middle   1.9                   1.6                   163.1                  45.6                 4.6270
Outer    2.5                   2.2                   95.2                   36.4                 7.4184

A. Evaluation of Polynomial Basis Functions

First, in order to evaluate how well the polynomial basis functions are able to approximate the shape, we computed the optimal set of coefficients for the 100,000 pre-training data points using ordinary least squares. We then evaluated how well the resulting shape representation approximated the training data at 20 equally spaced points along the backbone of the shape, as in the training process. We then calculated the maximum L2 distance over the 20 points along the backbone of the robot between the polynomial representation and the ground truth for each of the 100,000 configurations. Over the 100,000 configurations, the mean of the maximum L2 distance along the backbone was 0.044 ± 0.037 mm. This demonstrates that the polynomial basis functions are capable of representing the shape of a concentric tube robot with accuracy well into the sub-millimeter range.
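Because the representation is linear in the coefficients, the optimal per-configuration coefficients have a closed-form ordinary-least-squares solution. A minimal sketch, with basis_mat the 20×5 matrix of basis-function values from the earlier basis sketch (array shapes and names are ours):

```python
import numpy as np

def fit_coefficients(points, basis_mat, total_len):
    """Ordinary-least-squares fit of the 5 coefficients per axis to the
    20 ground-truth backbone points. points: (20, 3); basis_mat: (20, 5)."""
    coeffs, *_ = np.linalg.lstsq(basis_mat, points / total_len, rcond=None)
    recon = total_len * basis_mat @ coeffs          # best polynomial fit
    max_dev = np.linalg.norm(recon - points, axis=1).max()
    return coeffs, max_dev                          # coeffs: (5, 3)
```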

B. Evaluation of Varying Network Architectures

Next, we evaluate how well our model is able to learn the shape of real-world concentric tube robot configurations across multiple network architectures. We train multiple networks with the number of hidden layers varying from 3 to 7 and the number of nodes per hidden layer varying from 15 to 60. We also evaluate networks trained with simulated data alone (Sim), trained with real data alone (Real), and pre-trained with simulated data and fine-tuned with real data (Sim+Real). We evaluate each model's error on the test set of 1,000 data points. For each configuration of our 3-tube robot we perform a forward pass through each network to determine the coefficients for the shape function. We then evaluate that function and the ground truth from the vision system at 20 evenly spaced points along the robot's shaft. We evaluate the results using three different error metrics.

  1. Maximum deviation along the robot's shaft: the L2 distance of the point that deviates from the ground truth the most. This error value presents a maximum divergence between the predicted shape and the ground truth, a useful metric when considering safety related to anatomical obstacle avoidance.

  2. Mean squared error along the robot's shaft: the squared L2 distance from the ground truth, averaged along the robot's shaft.

  3. Sum of the deviations along the robot's shaft: the L2 distance from the ground truth, summed along the robot's shaft.
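Assuming pred and truth hold the 20 predicted and ground-truth backbone points for one configuration, the three metrics reduce to a few lines (a minimal sketch; the names are ours):

```python
import numpy as np

def shape_errors(pred, truth):
    """pred, truth: (20, 3) arrays of points along the shaft (mm)."""
    d = np.linalg.norm(pred - truth, axis=1)    # pointwise L2 distances
    return {'max_deviation': d.max(),
            'mean_squared_error': (d ** 2).mean(),
            'sum_of_deviations': d.sum()}
```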

We present the results of this analysis in Tables III, IV, and V. For each data type and architecture we report the mean and standard deviation of the error (in mm) across all test data points. The best performing model across all data types and architectures (Sim+Real, 3x30) is discussed below.

TABLE III

Average Learned Model Accuracy Results (Mean ± Std in mm), Maximum Deviation Along Shaft

Model Architecture (# hidden layers x nodes per layer)
Data Type   3x15         3x30         3x60         5x15         5x30         5x60         7x15         7x30         7x60
Sim         6.22 ± 3.95  6.28 ± 3.93  6.32 ± 3.95  6.28 ± 3.94  6.30 ± 3.95  6.31 ± 3.94  6.32 ± 3.98  6.31 ± 3.94  6.32 ± 3.96
Real        3.93 ± 2.50  3.66 ± 2.40  4.08 ± 2.41  4.38 ± 2.43  3.81 ± 2.35  4.11 ± 2.46  3.95 ± 2.34  3.65 ± 2.31  4.68 ± 2.58
Sim+Real    3.69 ± 2.45  3.35 ± 2.39  3.49 ± 2.42  4.68 ± 2.90  3.48 ± 2.49  3.58 ± 2.49  3.56 ± 2.50  3.53 ± 2.48  3.40 ± 2.41

TABLE IV

Average Learned Model Accuracy Results (Mean ± Std in mm), Mean Squared Error Along Shaft

Model Architecture (# hidden layers x nodes per layer)
Data Type   3x15         3x30         3x60         5x15         5x30         5x60         7x15         7x30         7x60
Sim         12.2 ± 16.4  12.4 ± 16.4  12.3 ± 16.5  12.1 ± 16.5  12.3 ± 16.6  12.3 ± 16.6  12.3 ± 16.7  12.3 ± 16.5  12.3 ± 16.7
Real        4.73 ± 6.29  3.50 ± 5.13  5.03 ± 6.61  4.44 ± 4.89  3.49 ± 4.72  4.06 ± 5.21  3.96 ± 4.81  3.54 ± 4.64  4.96 ± 5.08
Sim+Real    3.60 ± 5.44  3.03 ± 4.84  3.31 ± 5.27  5.94 ± 8.56  3.25 ± 5.53  3.47 ± 5.86  3.36 ± 5.63  3.36 ± 5.66  3.11 ± 5.24

TABLE V

Average Learned Model Accuracy Results (Mean ± Std in mm), Sum of Deviations Along Shaft

Model Architecture (# hidden layers x nodes per layer)
Data Type   3x15         3x30         3x60         5x15         5x30         5x60         7x15         7x30         7x60
Sim         48.7 ± 21.8  48.9 ± 22.0  48.8 ± 21.8  48.6 ± 21.7  48.8 ± 22.0  48.8 ± 21.8  48.8 ± 22.0  48.7 ± 21.8  48.8 ± 22.0
Real        31.2 ± 13.7  25.8 ± 10.9  33.2 ± 12.0  30.1 ± 11.5  25.8 ± 10.6  28.4 ± 10.7  28.6 ± 11.3  26.8 ± 10.7  31.8 ± 11.5
Sim+Real    25.7 ± 11.4  23.2 ± 10.9  24.5 ± 11.2  33.5 ± 14.3  24.3 ± 11.0  25.1 ± 11.5  24.7 ± 10.9  24.8 ± 11.1  23.8 ± 10.8

Both the Real and the Sim+Real data types outperform the simulation-only models across all architectures for all error metrics. For all three error metrics, Sim+Real outperforms Real across all architectures except 5x15, though by a smaller margin. This result aligns with the theory behind transfer learning [9], which holds that pre-training on a large data set from a related domain and then fine-tuning on a smaller data set from the exact domain of interest allows the model to learn the general properties of the related domain first, and then to be refined on the nuances of the exact domain. The best model for all three error metrics is trained on Sim+Real data and has 3 hidden layers of 30 nodes each.

C. Accuracy Comparison to Physics-Based Model

Next, we compare the accuracy of the best model from the previous analysis to the physics-based model presented in [7]. In Fig. 6 we plot a histogram of the errors across the 1,000 test configurations for the maximum deviation error metric.

Fig. 6. A histogram of the maximum error along the robot's shaft for the learned model and the physics-based model, for each of the 1,000 test points. The distribution is shifted to the left for the learned model (Sim+Real 3x30), indicating that it is more likely to produce lower error values.

In Tables VI, VII, and VIII, we present further statistics of the error values over the 1,000 test configurations for both the physics-based model and the learned model for the three error metrics. In the histogram in Fig. 6, it can be seen that the error distribution of the learned model is shifted to the left compared with that of the physics-based model, indicating that the learned model is more likely to produce lower error values than the physics-based model. Additionally, the learned model produces lower minimum, maximum, and mean error values for all metrics.

TABLE VI

Error value statistics for the physics-based model and our learned model across the 1,000 test configurations, Maximum Deviation Along Shaft.

          Physics-Based (mm)   Learned (mm)
Minimum   1.11                 0.61
Maximum   46.68                30.18
Mean      6.32 ± 3.95          3.35 ± 2.39

TABLE VII

Error value statistics for the physics-based model and our learned model across the 1,000 test configurations, Mean Squared Error Along Shaft.

          Physics-Based (mm)   Learned (mm)
Minimum   0.30                 0.05
Maximum   246.62               82.58
Mean      12.32 ± 16.60        3.03 ± 4.84

TABLE VIII

Error value statistics for the physics-based model and our learned model across the 1,000 test configurations, Sum of Deviations Along Shaft.

          Physics-Based (mm)   Learned (mm)
Minimum   7.79                 3.11
Maximum   181.63               85.89
Mean      48.79 ± 21.87        23.23 ± 10.90

In Fig. 7, we plot the error of our model and the physics-based model as a function of the position along the robot's backbone, averaged over the 1,000 test configurations. We note that for both the physics-based model and our learned model the error increases closer to the robot's tip.

Fig. 7. The L2 distance of points along the robot's shaft between the ground truth and those computed by our learned model (blue) and between the ground truth and those computed by the physics-based model (red), plotted over the length of the robot's shaft, indexed from the first point at the base of the robot (where the error is 0 for both models) to the twentieth point at the robot's tip. The error for both models is greatest at the tip of the robot. The values are averaged over the 1,000 test data points.

In Fig. 8 we show three shapes computed by our learned model compared to the ground-truth shapes. We plot in Fig. 8a the configuration whose error is closest to one standard deviation below the average error (maximum deviation error metric), Fig. 8b shows the configuration whose error is closest to the average error (maximum deviation error metric), and Fig. 8c shows the configuration whose error is closest to one standard deviation above the average error (maximum deviation error metric).

Fig. 8. We show three examples of the shape our learned model computes compared with the ground truth shape sensed in the same configuration. (a) We plot the shape whose error is closest to one standard deviation below the average error, (b) the shape whose error is closest to the average error, and (c) the shape whose error is closest to one standard deviation above the average error (using the maximum deviation along the robot's shaft error metric).

D. Timing Comparison to Physics-Based Model

Both control and motion planning for concentric tube robots require many shape computations to happen in a very short amount of time. This necessitates a shape model that is sufficiently fast to compute. One advantage of using neural networks for our learned shape model is that computation can be batched: multiple passes through the network (representing multiple shape computations) can be computed simultaneously in nearly the same time as a single pass. This process, called batching, allows many shape computations to happen very quickly in parallel, a property that the physics-based shape model does not currently share.

In Fig. 9 we present the time required by each of the Sim+Real networks to perform shape computations. We perform 100,000 shape computations for each, with batch sizes varying from 1 (i.e., no batching) to 100,000 (i.e., computing all shapes simultaneously). Included in the timing results are the time required to make the passes through the network as well as the time required to evaluate each resulting shape at 20 evenly spaced points along the backbone of the robot. We present the average time per shape computation.
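A timing harness along these lines can be sketched as follows; it reuses CTRShapeNet and backbone_points from the earlier sketches and times batched inference on synthetic placeholder inputs (the hardware and input distribution of the paper's experiments are not specified here, so absolute numbers will differ):

```python
import time
import torch

def avg_time_per_shape(model, basis_mat, n=100_000, batch=1_000, n_tubes=3):
    """Average wall-clock time per shape: batched forward passes plus
    evaluation at the 20 backbone points, on synthetic placeholder inputs."""
    q = torch.rand(n, 3 * n_tubes)             # random configurations
    lens = torch.full((n,), 0.25)              # placeholder arc lengths (m)
    start = time.perf_counter()
    with torch.no_grad():
        for i in range(0, n, batch):
            backbone_points(model(q[i:i + batch]), lens[i:i + batch], basis_mat)
    return (time.perf_counter() - start) / n
```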

Fig. 9. The average time taken to compute the shape of the concentric tube robot at a specified configuration, for 20 evenly spaced points along its backbone. We present the average for each network topology (Sim+Real), for varying batch sizes. For comparison, the physics-based model averages 1.73 ms per shape computation.

As can be seen, the models of lower complexity (fewer hidden layers and fewer nodes per layer) are faster. Additionally, increasing the batch size dramatically reduces the time per shape computed, with the fastest (batch size 100,000) taking on average ≈ 0.01 ms per shape computation across all models. Without batching, the models range between 0.27 ms and 0.51 ms per shape computation. These values represent a significant speed-up compared to the physics-based model, which takes on average 1.73 ms per shape computation. Without batching, the learned models are between 3.39 and 6.4 times faster than the physics-based model, and with the largest batch size, the learned models are ≈ 173 times faster than the physics-based model.

While it is not as easy to take advantage of batching during traditional control of concentric tube robots, due to the iterative nature of the required shape computations, batching can be leveraged to great effect during sampling-based motion planning.

V. Conclusion

In this work we present a learned, neural network model that outputs an arc-length-parameterized space curve. This allows us to take a data-driven approach to modeling the shape of the concentric tube robot and improve upon a physics-based model. This may allow for safer motion planning and control of these devices in surgical settings that require avoiding anatomical obstacles, as a shape that deviates less from the shape predicted in computation will be less likely to unintentionally collide with the patient's anatomy.

We note many areas of potential future investigation. In this work, the model is only trained on cases where the robot is operating in free space. There exist many surgical tasks that require minimal force to be exerted by the robot on tissue, for which our method would be well suited, such as using the robot as an endoscope via a chip-tip camera, utilizing the robot to deliver energy-based ablation probes, and utilizing the robot as a suction or irrigation catheter. However, many surgical tasks will require non-trivial tissue interaction forces. In the future, we intend to train models that account for the robot's interaction with tissue both at its tip and along the shaft of the robot.

There are also many avenues to investigate additional machine learning techniques and parameterizations related to the method. For instance, we note the relatively small improvement we observed when combining simulated and real training data compared to using only real training data. While in a minimally invasive surgical setting any improvement to model accuracy is important, we intend to further investigate ways to leverage the combination of real and simulated training data to identify the role that simulated training data may play in model accuracy. It may also be valuable to investigate the integration of learning with the physics-based concentric tube robot models, via a hybrid model-based and data-driven approach. We also believe it would be valuable to investigate different loss functions during training as well as the use of different neural network paradigms, such as recurrent networks and hybrid networks. We will also experimentally investigate how the sampling distributions of the training sets affect the quality of the learned model. We also plan on investigating whether the ideal network structure varies depending on the number of tubes or the tube parameters.

We also intend to augment the learned model to account for other sources of uncertainty in concentric tube robot shape modeling, including hysteresis. There also exist other choices of basis functions. We chose orthonormal polynomials due to their fast evaluation, but we intend to investigate other types of bases for our shape representation. These and other model parameter choices may have implications for the types of shapes that the model is able to predict well and the types that it predicts less well. We consider a future characterization of this relationship to be important.

Further, we plan to integrate the learned model with a motion planner and evaluate its use in automatic obstacle avoidance during teleoperation or automatic execution of surgical tasks.

Finally, nosotros believe that this approach is not express to concentric tube robots, but could be used for other forms of continuum robots that accept known arc-lengths, such every bit tendon actuated robots. We intend to extend the method to work with such systems.

Acknowledgments

This research was supported in part by the U.S. National Institutes of Health (NIH) under award R01EB024864.

Contributor Information

Alan Kuntz, Robotics Center and the School of Computing, University of Utah, Salt Lake City, UT 84112 USA.

Armaan Sethi, Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA.

Robert J. Webster, III, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN 37235 USA.

Ron Alterovitz, Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA.

References

[1] Gilbert HB, Rucker DC, and Webster III RJ, "Concentric tube robots: The state of the art and future directions," in Int. Symp. Robotics Research (ISRR), Dec. 2013.

[2] Burgner-Kahrs J, Rucker DC, and Choset H, "Continuum robots for medical applications: A survey," IEEE Trans. Robotics, vol. 31, no. 6, pp. 1261–1280, 2015.

[3] Torres LG and Alterovitz R, "Motion planning for concentric tube robots using mechanics-based models," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Sep. 2011, pp. 5153–5159.

[4] Torres LG, Kuntz A, Gilbert HB, Swaney PJ, Hendrick RJ, Webster III RJ, and Alterovitz R, "A motion planning approach to automatic obstacle avoidance during concentric tube robot teleoperation," in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), May 2015, pp. 2361–2367.

[5] Kuntz A, Fu M, and Alterovitz R, "Planning high-quality motions for concentric tube robots in point clouds via parallel sampling and optimization," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Nov. 2019, pp. 2205–2212.

[6] LaValle SM, Planning Algorithms. Cambridge, U.K.: Cambridge University Press, 2006.

[7] Rucker DC, "The mechanics of continuum robots: Model-based sensing and control," Ph.D. dissertation, Vanderbilt University, 2011.

[8] Fagogenis G, Bergeles C, and Dupont PE, "Adaptive nonparametric kinematic modeling of concentric tube robots," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Oct. 2016, pp. 4324–4329.

[9] Pan SJ and Yang Q, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2009.

[10] Kuntz A, Sethi A, and Alterovitz R, "Estimating the complete shape of concentric tube robots via learning," in Hamlyn Symposium on Medical Robotics, June 2019.

[11] Leibrandt K, Bergeles C, and Yang G-Z, "Concentric tube robots: Rapid, stable path-planning and guidance for surgical use," IEEE Robotics & Automation Magazine, vol. 24, no. 2, pp. 42–53, 2017.

[12] Xu R, Asadian A, Naidu AS, and Patel RV, "Position control of concentric-tube continuum robots using a modified Jacobian-based approach," in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), May 2013, pp. 5793–5798.

[13] Dupont PE, Lock J, Itkowitz B, and Butler E, "Design and control of concentric-tube robots," IEEE Trans. Robotics, vol. 26, no. 2, pp. 209–225, Apr. 2010.

[14] Lyons LA, Webster III RJ, and Alterovitz R, "Motion planning for active cannulas," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Oct. 2009, pp. 801–806.

[15] Trovato K and Popovic A, "Collision-free 6D non-holonomic planning for nested cannulas," in Proc. SPIE Medical Imaging, vol. 7261, Mar. 2009.

[16] Torres LG, Baykal C, and Alterovitz R, "Interactive-rate motion planning for concentric tube robots," in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), May 2014, pp. 1915–1921.

[17] Ryu SC and Dupont PE, "FBG-based shape sensing tubes for continuum robots," in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), May 2014, pp. 3531–3537.

[18] Kim B, Ha J, Park FC, and Dupont PE, "Optimizing curvature sensor placement for fast, accurate shape sensing of continuum robots," in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), May 2014, pp. 5374–5379.

[19] Xu R, Yurkewich A, and Patel RV, "Shape sensing for torsionally compliant concentric-tube robots," in Optical Fibers and Sensors for Medical Diagnostics and Treatment Applications XVI, vol. 9702. International Society for Optics and Photonics, 2016, p. 97020V.

[20] Rubin MB, Cosserat Theories: Shells, Rods and Points. Springer Science & Business Media, 2000.

[21] Antman SS, Nonlinear Problems of Elasticity. Springer, 1995.

[22] Gilbert HB, Rucker DC, and Webster III RJ, "Concentric tube robots: The state of the art and future directions," in Robotics Research. Springer, 2016, pp. 253–269.

[23] Xu W, Chen J, Lau HY, and Ren H, "Data-driven methods towards learning the highly nonlinear inverse kinematics of tendon-driven surgical manipulators," The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 13, no. 3, p. e1774, 2017.

[24] Giorelli M, Renda F, Calisti M, Arienti A, Ferri G, and Laschi C, "Neural network and Jacobian method for solving the inverse statics of a cable-driven soft arm with nonconstant curvature," IEEE Transactions on Robotics, vol. 31, no. 4, pp. 823–834, 2015.

[25] Melingui A, Merzouki R, Mbede JB, Escande C, and Benoudjit N, "Neural networks based approach for inverse kinematic modeling of a compact bionic handling assistant trunk," in Proc. IEEE 23rd Int. Symp. Industrial Electronics (ISIE), 2014, pp. 1239–1244.

[26] Bergeles C, Lin FY, and Yang GZ, "Concentric tube robot kinematics using neural networks," in Hamlyn Symposium on Medical Robotics, June 2015, pp. 1–2.

[27] Grassmann R, Modes V, and Burgner-Kahrs J, "Learning the forward and inverse kinematics of a 6-DOF concentric tube continuum robot in SE(3)," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018, pp. 5125–5132.

[28] Baker S and Kanade T, "Shape-from-silhouette across time part I: Theory and algorithms," International Journal of Computer Vision, vol. 62, no. 3, pp. 221–247, 2005.

[29] Kingma DP and Ba J, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
