
Inverse kinematics for a flexible surgical arm (2015)

Videos

Recorded in March 2014, in simulation, using the ROS environment.

Recorded in December 2015, in simulation, using the ROS environment.

Context

These simulations present position-based control of the system. The configuration of the modules is computed so as to reach a desired tip pose (position and orientation). To do so, an inverse kinematics framework is used. In the first video, the configuration of each flexible module is defined through the position of the module tip expressed in its base frame. From the obtained system configuration, the chamber lengths of each module are deduced under the constant curvature assumption.
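
As an illustration of that last deduction step, here is a minimal sketch of the constant-curvature relation between a module configuration and the lengths of three chambers placed at 120 degrees around the backbone; the chamber offset d and the function name are illustrative assumptions, not the actual STIFF-FLOP parameters.

```python
import numpy as np

def chamber_lengths(arc_length, curvature, bend_angle, d=0.005):
    """Chamber lengths of a 3-chamber module under constant curvature.

    arc_length : backbone length of the module (m)
    curvature  : backbone curvature kappa (1/m)
    bend_angle : orientation phi of the bending plane (rad)
    d          : radial offset of each chamber from the backbone (m),
                 an illustrative value, not the real STIFF-FLOP geometry
    """
    # Chambers are assumed evenly spaced at 120 degrees around the backbone.
    betas = np.array([0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0])
    # A fiber offset by d*cos(beta - phi) from the neutral axis is shortened
    # (or lengthened) in proportion to the backbone curvature.
    return arc_length * (1.0 - curvature * d * np.cos(betas - bend_angle))

# Example: a 50 mm module bent with kappa = 10 1/m in the plane phi = 0.
print(chamber_lengths(0.05, 10.0, 0.0))
```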

In the second experiment, the constant curvature assumption is no longer made. The dynamics of each module are expressed through beam theory. Nevertheless, the general inverse kinematics framework remains the same. This video also includes the estimation of the pose of the STIFF-FLOP arm holder (shown as the gray tube). The holder must take into account the single-point insertion constraint (visualized with the red disc). This constraint is modeled and embedded within the global inverse kinematics using an adaptation of spherical coordinates.
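
As a rough illustration of the spherical-coordinate idea (and not the actual formulation used in the project), a straight holder constrained to pass through a fixed insertion point can be parameterized by two angles and an insertion depth, so that the constraint is satisfied by construction:

```python
import numpy as np

def holder_pose(insertion_point, theta, phi, depth):
    """Pose of a straight holder constrained to pass through a fixed point.

    insertion_point : 3D trocar position (the red disc in the video)
    theta, phi      : azimuth and polar angles of the holder axis (rad)
    depth           : insertion depth along the axis (m)

    Returns the holder tip position and its unit axis. Any choice of
    (theta, phi, depth) keeps the axis through the insertion point, so an
    IK solver can optimize these three scalars instead of a full 6-DOF
    pose while never violating the single-point insertion constraint.
    """
    axis = np.array([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)])
    tip = np.asarray(insertion_point) + depth * axis
    return tip, axis
```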

More information is available on the research page and on the European project website.

In collaboration with Fabrice Morin, Asier Fernandez, Julius Klein and Alfonso Dominguez.

SAM, a domestic robot from CEA-LIST (2007)

Videos

YouTube (direct access)

Recorded during technical evaluations in January 2008 at Morvan Hospital, Brest, France.

Windows Media Video format, 11 MB

YouTube (direct access)

Recorded during the ITEA ANSO evaluation, September 13, 2007, at CEA Fontenay-aux-Roses.

Context

This video presents the general behavior of the mobile robotic platform SAM, developed by CEA-LIST to improve the well-being of disabled people.

The mobile platform makes it possible to fetch everyday objects in a domestic environment. Through a remote, user-friendly man-machine interface, the user asks the robot to reach a topological room (such as the kitchen). Once the mobile platform is in that room, the user selects the object to bring back, and the robotic arm performs the grasping task.

This global application naturally combines several high-level technologies. One original aspect, from the user's point of view, is that the grasping task is both very easy to start and generic. Easy, because only two image clicks are needed to select the object (they define a box containing the object of interest). Generic, because no a priori information about the object to grasp, such as a 3D model, is used. Any textured object can thus be handled without changing the program.

More information is available on the research page.

The robotic arm is designed by Exact Dynamics. The mobile platform and its navigation software are products of Neobotix.

In collaboration with Christophe Leroux, Gérard Chalubert, Martine Guerrand, Aline Chansavang, Céline Teulière and Matthieu Le Cam.


Qualitative visual servoing: application to the visibility constraint (2006)

MPEG-4 format, 10 MB

MPEG-4 format, 8 MB

Context

This video presents an application of qualitative visual servoing. Qualitative features are used to define a visibility constraint during a positioning task performed by visual servoing.

Qualitative visual servoing relies on a weighting matrix that activates or deactivates features within the control law. Each weight is obtained with a function that realizes a continuous Heaviside-like transition between 0 (total deactivation) and 1 (full activation).

In these two experiments, a qualitative feature is associated with each tracked point used within the control law. This qualitative feature corresponds to the distance between the considered point and a confident area defined within the image plane. The feature is activated only when the point gets close to the border of the confident area. This additional constraint makes the camera modify its motion so as to keep the point inside its field of view.
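
A minimal sketch of this weighting, assuming a rectangular confident area and a smoothstep transition (the exact transition function used in the original method may differ):

```python
import numpy as np

def smooth_transition(t):
    """Continuous Heaviside-like transition: 0 for t <= 0, 1 for t >= 1."""
    t = np.clip(t, 0.0, 1.0)
    return 3.0 * t**2 - 2.0 * t**3  # smoothstep, one common choice

def visibility_weight(point, area, margin):
    """Activation weight of the visibility feature for one image point.

    area   : (xmin, ymin, xmax, ymax) confident area in the image plane
    margin : distance to the border below which the feature activates

    The weight is 0 while the point stays well inside the confident area
    (the feature is ignored by the control law) and rises continuously
    to 1 as the point approaches the border.
    """
    x, y = point
    xmin, ymin, xmax, ymax = area
    # Distance from the point to the closest border of the area.
    dist = min(x - xmin, xmax - x, y - ymin, ymax - y)
    return smooth_transition(1.0 - dist / margin)

# The weights of all features are stacked into a diagonal matrix W, and
# the control law is computed from the weighted error W @ (s - s_star).
```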

Both videos present the trajectory of the camera with and without the qualitative features. Without the visibility constraint, some features leave the camera field of view in the first video, and the positioning task fails in the second one (a classical problem with large rotations around the optical axis). When the visibility constraint is enforced with qualitative features, no point is lost in the first experiment, and the second one succeeds.

With: Nicolas Mansard and François Chaumette.


Navigation from a visual memory, virtual 3D environment (2004)

MPEG-1 format, 18 MB

MPEG-1 format, 15 MB

Context

These videos illustrate navigation within a virtual 3D environment. The simulations make it possible to study the behavior of the control scheme while skipping the tracking step.

In the first video, five degrees of freedom of the camera are controlled (rotation around the optical axis is not considered). In the second video, the camera moves on a plane, like a holonomic mobile robot moving along a corridor.

With: Patrick Gros and François Chaumette.


Navigation from a visual memory, planar case (2004)

Windows Media Video format, 4 MB

Context

This video illustrates a vision-based navigation task. The environment is planar here, and the camera is mounted on the end effector of a Cartesian arm.

An image database is used to describe the environment the camera should observe during its navigation. The features matched between each pair of consecutive images are used to control the motion of the robotic system (in the video, the displayed indexes correspond to the pair of images of the path from which these features were initially extracted and matched).

All the displayed crosses are tracked in real time with an exhaustive correlation. The green features are the ones used to control the robotic arm. A planar homography is used to estimate the positions of new features entering the camera field of view.
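
For reference, transferring a point through a planar homography is a short operation in homogeneous coordinates; the sketch below assumes a 3x3 matrix H estimated elsewhere (e.g., from the matched features):

```python
import numpy as np

def transfer_points(H, points):
    """Project 2D image points through a 3x3 planar homography H.

    points : (N, 2) array of coordinates in the source image
    returns: (N, 2) array of predicted coordinates in the target image
    """
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]  # back to Cartesian coordinates
```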

At the end of the sequence, a classical image-based visual servoing is performed. The blue crosses correspond to the final desired point positions.
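
The control law mentioned here is the standard one; below is a minimal sketch for point features, assuming the point depths Z are known or approximated:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 interaction matrix of a normalized image point (x, y)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, gain=0.5):
    """Camera velocity v = -lambda * L^+ (s - s*), the classical IBVS law."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```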

Work with: Patrick Gros and François Chaumette.


Large displacements by visual servoing (2002)

Windows Media Video format, 1 MB

Windows Media Video format, 1 MB

Context

These two videos illustrate my first work on extending classical visual servoing to perform large displacements.

An image path (obtained by image retrieval and shortest-path search) describes the visual environment the camera should observe during the navigation. At each instant, an image-based visual servoing makes the visual features converge toward their positions observed in the next image of the path.

Here, we exploit the fact that the scene is planar. A homography is estimated between the current and desired positions of the features. This homography is used to project onto the current image plane the features matched with the next image of the path. When enough of them are visible, the current servoing is stopped and the next one is started. Thus, the camera does not have to converge toward each image of the path.
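
A minimal sketch of this switching rule, with the visibility threshold as an illustrative parameter:

```python
import numpy as np

def should_switch(H, next_features, width, height, min_visible=8):
    """Decide whether to start servoing on the next image of the path.

    H             : homography mapping the next path image to the current one
    next_features : (N, 2) features matched with the next image of the path
    min_visible   : illustrative threshold on the number of visible points
    """
    # Project the next image's features into the current image plane.
    pts = np.hstack([next_features, np.ones((len(next_features), 1))])
    proj = pts @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    # Count how many of them fall inside the camera field of view.
    inside = ((proj[:, 0] >= 0) & (proj[:, 0] < width) &
              (proj[:, 1] >= 0) & (proj[:, 1] < height))
    return int(inside.sum()) >= min_visible
```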

In the second video, a path planning step is performed at the beginning of each visual servoing phase. It estimates, for each instant, the best image positions of the features. By using these positions as the desired features, one obtains a visual servoing with constant velocity.
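
One way to picture the effect (a schematic approximation, not the actual planning used here): if the desired features advance by a bounded image-space step at each iteration, the error norm, and hence the commanded velocity, stays roughly constant:

```python
import numpy as np

def planned_desired(s_current, s_goal, step=2.0):
    """Move the desired features toward the goal by at most `step` pixels.

    Keeping the desired positions a bounded distance ahead of the current
    features keeps the error norm, and thus the servoing velocity,
    approximately constant along the path. The step size is an assumed
    tuning parameter.
    """
    delta = np.asarray(s_goal) - np.asarray(s_current)
    norms = np.linalg.norm(delta, axis=1, keepdims=True)
    scale = np.minimum(1.0, step / np.maximum(norms, 1e-9))
    return np.asarray(s_current) + scale * delta
```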

The red points are the tracked points. The blue points are their desired positions. The green points are the desired positions of points that are not yet visible. In the second video, the blue points are the desired positions estimated by path planning.

Carried out on the Afma6 robotic arm of the Lagadic project, IRISA.

With: Patrick Gros, François Chaumette and Youcef Mezouar.
