This is ArrayBot: a distributed manipulation system built on a 16 × 16 array of vertically sliding pillars with tactile sensors, able to manipulate and support objects on a tabletop. Researchers used reinforcement learning to pull this off. As they explain:
“we train RL agents that can relocate diverse objects through tactile observations only… Surprisingly, we find that the discovered policy can not only generalize to unseen object shapes in the simulator but also transfer to the physical robot without any domain randomization.”
[ICRA2024] ArrayBot: Reinforcement Learning for Generalizable Distributed Manipulation through Touch
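To picture what a tactile-only policy on such an array might look like, here is a minimal sketch (not the authors' code): a toy function that maps a 16 × 16 tactile pressure map to per-pillar height adjustments. The grid size comes from the post; the thresholds, function names, and the simple nudge-toward-contact rule are illustrative assumptions standing in for the learned RL policy.

```python
# Hedged sketch, NOT the paper's implementation: a stand-in policy that takes a
# 16 x 16 tactile observation and returns a 16 x 16 map of pillar height deltas.
import numpy as np

GRID = 16  # 16 x 16 array of sliding pillars (from the article)

def toy_policy_step(tactile: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Map a tactile pressure map to height deltas for each pillar.

    A trained RL agent would learn this mapping; here we just nudge pillars
    under contact upward and let the rest settle, to show the input/output shape.
    """
    assert tactile.shape == (GRID, GRID)
    contact = tactile > 0.1                     # assumed contact threshold
    deltas = np.where(contact, 0.01, -0.005)    # raise contacted pillars, lower others
    deltas += rng.normal(0.0, 0.001, size=(GRID, GRID))  # exploration-style noise
    return deltas

rng = np.random.default_rng(0)
tactile_obs = np.zeros((GRID, GRID))
tactile_obs[6:9, 6:9] = 1.0                     # pretend an object rests on the center
print(toy_policy_step(tactile_obs, rng).shape)  # (16, 16)
```

The point of the sketch is only the interface: observations and actions both live on the same 16 × 16 grid, which is what lets a policy trained purely on touch generalize across object shapes.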
With this approach, the system can relocate two melons in parallel, reposition a dragon fruit, and handle plenty more.