This is ArrayBot: a distributed manipulation system built on a 16 × 16 array of vertically sliding pillars with tactile sensors, which can support and manipulate objects on a tabletop. The researchers used reinforcement learning to pull this off. As they explain:
“we train RL agents that can relocate diverse objects through tactile observations only… Surprisingly, we find that the discovered policy can not only generalize to unseen object shapes in the simulator but also transfer to the physical robot without any domain randomization.”
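To make the "tactile observations only" setup concrete, here is a minimal toy sketch of what such an environment interface might look like. This is not ArrayBot's actual code; the class name, grid shapes, and clipping limits are all illustrative assumptions.

```python
# Hypothetical sketch of a tactile-only pillar-array environment.
# Assumption: the policy observes a 16 x 16 grid of per-pillar contact
# pressures and acts by commanding per-pillar height deltas.

GRID = 16  # 16 x 16 array of sliding pillars

class TactileArrayEnv:
    """Toy stand-in for a distributed-manipulation environment where
    the policy sees only tactile readings, never the object's pose."""

    def __init__(self):
        # Pillar heights and per-pillar tactile pressure, both GRID x GRID.
        self.heights = [[0.0] * GRID for _ in range(GRID)]
        self.pressure = [[0.0] * GRID for _ in range(GRID)]

    def observe(self):
        # Tactile-only observation, mirroring the setup quoted above.
        return [row[:] for row in self.pressure]

    def step(self, action):
        # action: GRID x GRID height deltas, clipped to a small range
        # so each pillar moves smoothly between its travel limits.
        for i in range(GRID):
            for j in range(GRID):
                delta = max(-0.01, min(0.01, action[i][j]))
                self.heights[i][j] = max(0.0, min(0.1, self.heights[i][j] + delta))
        # A real environment would simulate object dynamics here and
        # update self.pressure from the resulting contacts.
        return self.observe()

env = TactileArrayEnv()
obs = env.step([[0.02] * GRID for _ in range(GRID)])
print(len(obs), len(obs[0]))  # → 16 16
```

An RL agent trained against an interface like this never receives the object's shape or position directly, which is one plausible reason the learned policy can generalize across object geometries, as the authors report.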
The resulting policies can relocate two melons in parallel, reposition a dragon fruit, and handle much more besides.