Here is LucidSim, a framework for training quadruped robots using only RGB camera input. It creates diverse, physically accurate image sequences from the robot's perspective, and all training happens in simulated environments: generative models supply the realistic visual data, which helps the robot generalize across different environments.
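To make the idea concrete, here is a minimal PyTorch sketch of one plausible version of such a pipeline, assuming the common sim-to-real pattern of distilling a privileged expert into an image-based student: the simulator provides geometry (a depth render here) plus expert action labels, a generative model conditioned on that geometry produces a realistic RGB frame, and the visual policy trains only on the generated frames. This is an illustration under assumptions, not LucidSim's actual code; `render_depth`, `generate_rgb`, and `expert_action` are hypothetical stand-ins.

```python
# Hypothetical sketch of the "train on generated images" loop.
# Not LucidSim's API: render_depth / generate_rgb / expert_action are stubs.
import torch
import torch.nn as nn

class VisualPolicy(nn.Module):
    """Small CNN mapping an RGB observation to an action vector."""
    def __init__(self, action_dim: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, action_dim),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.net(rgb)

def render_depth(batch: int, h: int = 64, w: int = 64) -> torch.Tensor:
    """Stand-in for the simulator's depth render (random here)."""
    return torch.rand(batch, 1, h, w)

def generate_rgb(depth: torch.Tensor) -> torch.Tensor:
    """Stand-in for a generative image model. A real pipeline would call
    a depth-conditioned generator with a randomized scene prompt."""
    noise = 0.1 * torch.randn(depth.size(0), 3, *depth.shape[2:])
    return depth.repeat(1, 3, 1, 1) + noise

def expert_action(depth: torch.Tensor, action_dim: int = 12) -> torch.Tensor:
    """Stand-in for action labels from a privileged expert policy."""
    return torch.tanh(depth.mean(dim=(1, 2, 3))).unsqueeze(1).repeat(1, action_dim)

policy = VisualPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for step in range(100):
    depth = render_depth(batch=8)   # geometry comes from the simulator
    rgb = generate_rgb(depth)       # realism comes from the generative model
    target = expert_action(depth)   # supervision comes from the expert
    loss = nn.functional.mse_loss(policy(rgb), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the structure: the policy never sees a raw simulator render, only realistic generated frames, so replacing the `generate_rgb` stub with a real depth-conditioned image generator is what turns this toy into the actual technique.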
[Video: Robots that learn from machine dreams]
This approach makes it possible to train robots entirely in simulation to perform challenging tasks such as visual parkour. The video above explains how the approach works in more detail.
[HT: Alan Yu, Ge Yang, Ran Choi, Yajvan Ravan, John Leonard, and Phillip Isola]