In the near future, robots equipped with advanced AI will be able to take on all kinds of tasks, including surgery. In fact, researchers from Google Brain, Intel AI Lab, and UC Berkeley have developed an approach that teaches a robot surgical tasks such as needle passing, needle insertion, and knot tying by learning from surgical videos.
Motion2Vec: semi-supervised representation learning from surgical videos
As the researchers explain:
Motion2Vec learns a motion-centric representation from video observations by segmenting them into actions/sub-goals/options in a semi-supervised manner. […] Motion2vec leverages upon a few labeled demonstrations to semantically align the embedding space, in contrast to time-driven unsupervised/self-supervised approaches.
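The core idea — using a handful of labeled demonstrations to semantically align an embedding space so that frames from the same action cluster together — is commonly implemented with a metric-learning objective. The sketch below is purely illustrative (the paper's actual model is a siamese network; the data, names, and dimensions here are hypothetical stand-ins), showing how a triplet margin loss pulls same-action embeddings together and pushes different actions apart:

```python
# Illustrative sketch only: hypothetical stand-in for the kind of
# metric-learning objective used to align an embedding space with
# a few action labels. Not the authors' implementation.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Penalize embeddings where a different-action frame (negative)
    is closer to the anchor than a same-action frame (positive)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
# Fake frame embeddings for two labeled action segments.
needle_a = rng.normal(0.0, 0.1, size=8)
needle_b = needle_a + rng.normal(0.0, 0.05, size=8)  # same action, nearby
knot = rng.normal(3.0, 0.1, size=8)                  # different action, far

# A well-separated embedding space incurs zero loss;
# a confused one (positive and negative swapped) is penalized.
good = triplet_loss(needle_a, needle_b, knot)
bad = triplet_loss(needle_a, knot, needle_b)
```

Minimizing this loss over the few labeled demonstrations, while the unlabeled frames are organized by unsupervised structure, is what makes the approach semi-supervised.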