Here is a bimanual teleoperation system controlled using input from a cheap RGB-D sensor (such as the ASUS Xtion PRO). Such an approach lets operators command robots from a distance without having to wear bulky exoskeletons or motion-capture gear. As the developers explain:
"We implemented a 3D version of the OpenPose package. The first stage of our method contains the execution of the OpenPose Convolutional Neural Network (CNN), using a sequence of RGB images as input. The extracted human skeleton pose localization in two dimensions (2D) is followed by the mapping of the extracted joint location estimations into their 3D pose in the camera frame. The output of this process is then used as input to drive the end-pose of the robotic hands relative to the human hand movements, through a whole-body inverse kinematics process in the Cartesian space."
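The 2D-to-3D mapping step they describe is essentially pinhole back-projection: each OpenPose keypoint gives a pixel coordinate, and the sensor's depth image supplies the missing range. A minimal sketch of that step might look like the following (the intrinsics shown are illustrative values for a VGA-resolution sensor, not the authors' calibration, and the function name is my own):

```python
import numpy as np

def keypoints_2d_to_3d(keypoints_uv, depth_image, fx, fy, cx, cy):
    """Back-project 2D keypoints into 3D camera-frame points.

    keypoints_uv : iterable of (u, v) pixel coordinates (e.g. from OpenPose)
    depth_image  : HxW array of depth values in meters (aligned to the RGB frame)
    fx, fy, cx, cy : pinhole camera intrinsics
    Returns an Nx3 array of (X, Y, Z) points in the camera frame.
    """
    points = []
    for u, v in keypoints_uv:
        # Look up depth at the keypoint's pixel (nearest-neighbor sampling).
        z = float(depth_image[int(round(v)), int(round(u))])
        # Standard pinhole back-projection.
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return np.array(points)

# Example: one keypoint near the image center of a 640x480 depth frame.
depth = np.full((480, 640), 2.0)  # synthetic flat depth of 2 m
pts = keypoints_2d_to_3d([(320.0, 240.0)], depth,
                         fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

In practice one would also filter keypoints with low OpenPose confidence and smooth the resulting 3D trajectories before feeding them to the whole-body inverse-kinematics solver that drives the robot's hand poses.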
The approach was tested on the Centauro robot.
[HT]