In the near future, robots equipped with advanced AI will be able to take on all kinds of tasks, including surgery. In fact, researchers from Google Brain, Intel AI Lab, and UC Berkeley have developed an approach, called Motion2Vec, that teaches a robot surgical subtasks such as needle passing, insertion, and knot tying from videos of surgeries.
As the researchers explain:
Motion2Vec learns a motion-centric representation from video observations by segmenting them into actions/sub-goals/options in a semi-supervised manner. […] Motion2vec leverages upon a few labeled demonstrations to semantically align the embedding space, in contrast to time-driven unsupervised/self-supervised approaches.
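To make the quoted idea concrete, here is a minimal, hypothetical sketch of the semi-supervised ingredient: a handful of labeled frames anchor per-action centroids in an embedding space, and every remaining frame is then segmented by its nearest centroid. This is an illustration of the general principle only, not the authors' actual model (Motion2Vec uses learned deep video embeddings; the toy 2-D features, cluster centers, and label indices below are invented for the example).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D "embedding" features for three surgical sub-actions
# (a hypothetical stand-in for real learned video embeddings).
centers = np.array([[0.0, 0.0], [4.0, 4.0], [8.0, 0.0]])
frames = np.vstack([c + rng.normal(scale=0.5, size=(50, 2)) for c in centers])
true_labels = np.repeat([0, 1, 2], 50)

# Only a handful of labeled demonstration frames, as in the
# semi-supervised setting the quote describes.
labeled_idx = np.array([0, 1, 50, 51, 100, 101])

# Use the few labels to anchor one centroid per action class...
centroids = np.vstack([
    frames[labeled_idx[true_labels[labeled_idx] == k]].mean(axis=0)
    for k in range(3)
])

# ...then pseudo-label (segment) every frame by its nearest centroid.
dists = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=2)
pseudo = dists.argmin(axis=1)

accuracy = (pseudo == true_labels).mean()
print(f"segmentation accuracy: {accuracy:.2f}")
```

The contrast the researchers draw is that purely time-driven self-supervision would group frames by when they occur, whereas even a few labels let the embedding be organized by what action is being performed.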