Visual Servoing for Humanoid Robots

To execute reaching, grasping, or manipulation motions, a humanoid robot must be able to cope with inaccurate object localization, noisy sensor data, and a dynamic environment. We therefore use techniques based on Position-Based Visual Servoing (PBVS) to enable robust and reactive interaction with the environment. By fusing the sensor channels from motors, vision, and haptics, the visual servoing framework enables the humanoid robots of the ARMAR series to grasp objects and to open doors in a kitchen.
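At its core, PBVS computes a Cartesian velocity command from the 6-D pose error between the current hand pose and the desired pose at the target, both estimated in a common reference frame. The following minimal sketch illustrates the classical proportional PBVS law; it is not the actual ARMAR implementation, and the frame conventions, gain, and function names are illustrative assumptions.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def pbvs_velocity(T_hand, T_target, gain=1.0):
        # Proportional PBVS law (illustrative sketch): compute a 6-D
        # Cartesian velocity that drives the hand pose T_hand towards
        # the target pose T_target.  Both are 4x4 homogeneous matrices
        # expressed in the robot base frame.  Returns (v, w).
        # Translational error in the base frame.
        e_t = T_target[:3, 3] - T_hand[:3, 3]
        # Rotational error as a rotation vector (axis * angle) of the
        # relative rotation from the hand to the target orientation.
        R_err = T_target[:3, :3] @ T_hand[:3, :3].T
        e_w = R.from_matrix(R_err).as_rotvec()
        # Classical proportional control: velocity = gain * error.
        return gain * e_t, gain * e_w

The commanded velocity is then mapped to joint velocities, e.g. via the arm's Jacobian, and the loop repeats with updated pose estimates from vision.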
To exploit the full grasping capabilities of a humanoid robot, robust execution of bimanual grasping and manipulation motions must also be possible. The bimanual visual servoing framework developed at H²T at KIT enables the robots to robustly execute dual-arm grasping and manipulation tasks. To this end, the target objects and both hands are tracked alternately, and a combined open-/closed-loop controller positions the hands with respect to the targets. This control framework for reactive positioning of both hands by position-based visual servoing fuses the sensor data streams from the vision system, the joint encoders, and the force/torque sensors.
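A single cycle of such a combined open-/closed-loop scheme for both arms might look as follows, reusing pbvs_velocity from the sketch above. The hand pose comes from visual tracking when available (closed loop) and from forward kinematics of the joint encoders otherwise (open loop), while the force/torque sensors gate motion on contact. All interfaces, names, and thresholds here are hypothetical placeholders, not the actual API of the H²T framework.

    import numpy as np

    def bimanual_step(robot, tracker, targets, gain=0.8, contact_force=5.0):
        # One control cycle for both arms.  'robot' and 'tracker' are
        # hypothetical placeholders for the real actuator and
        # perception interfaces.
        for arm in ("left", "right"):
            # Stop this arm as soon as the force/torque sensor reports
            # contact (e.g. the hand touches the object or a door).
            if np.linalg.norm(robot.wrench(arm)[:3]) > contact_force:
                robot.set_cartesian_velocity(arm, np.zeros(6))
                continue
            # Closed loop: prefer the visually tracked hand pose;
            # open loop: fall back to forward kinematics from the joint
            # encoders when vision does not currently see the hand.
            T_hand = tracker.hand_pose(arm)  # None if not tracked
            if T_hand is None:
                T_hand = robot.forward_kinematics(arm)
            T_target = tracker.object_pose(targets[arm])
            v, w = pbvs_velocity(T_hand, T_target, gain)
            robot.set_cartesian_velocity(arm, np.concatenate([v, w]))

Alternating the tracker between the target objects and the two hands keeps the closed loop updated for both arms without requiring every entity to be visible in every camera frame.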

Videos

  • A Position-Based Visual Servoing controller enables the humanoid robot ARMAR-III to hand over a cup from its left to its right hand.