Robots could become much better learners thanks to a ground-breaking method devised by Dyson-backed researchers — removing traditional complexities in teaching robots how to perform tasks will make them even more human



One of the biggest hurdles in teaching robots new skills is converting complex, high-dimensional data, such as images from onboard RGB cameras, into actions that accomplish specific goals. Existing methods typically rely on 3D representations that require accurate depth information, or on hierarchical predictions used alongside motion planners or separate policies.

Researchers at Imperial College London and the Dyson Robot Learning Lab have unveiled a novel approach that could address this problem. The “Render and Diffuse” (R&D) method aims to bridge the gap between high-dimensional observations and low-level robotic actions, especially when data is scarce.

R&D, detailed in a paper published on the arXiv preprint server, tackles the problem by using virtual renders of a 3D model of the robot. By representing low-level actions within the same image space as the observations, the researchers were able to simplify the learning process.
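The core idea — rendering candidate actions into the observation space and iteratively refining them, diffusion-style — can be illustrated with a toy sketch. This is not the authors' code: the action here is a hypothetical 2D gripper position, `render_action` stands in for the paper's 3D robot-model renders, and `toy_denoiser` stands in for the trained diffusion model.

```python
import random

GRID = 8  # toy image resolution

def render_action(action):
    """Render a 2D action (x, y in [0, 1]) as a one-hot mask in image space."""
    img = [[0] * GRID for _ in range(GRID)]
    row = min(GRID - 1, int(action[1] * GRID))
    col = min(GRID - 1, int(action[0] * GRID))
    img[row][col] = 1
    return img

def toy_denoiser(rendered, noisy_action, target):
    """Stand-in for the learned model: nudge the action toward the target.

    A trained diffusion model would predict this update from the camera
    observation and the rendered action; here it is faked for illustration."""
    return [a + 0.5 * (t - a) for a, t in zip(noisy_action, target)]

def render_and_diffuse(target, steps=10, seed=0):
    """Start from a noisy action and iteratively render-then-denoise it."""
    rng = random.Random(seed)
    action = [rng.random(), rng.random()]  # initial action sampled from noise
    for _ in range(steps):
        rendered = render_action(action)                  # action -> image space
        action = toy_denoiser(rendered, action, target)   # one denoising step
    return action

goal = [0.7, 0.3]  # hypothetical goal position
print(render_and_diffuse(goal))  # converges near the goal after a few steps
```

Because both the camera image and the rendered action live in the same pixel space, the denoiser can reason about them jointly — which is the gap-bridging trick the method relies on.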

Image: a robot putting a toilet seat down (Image credit: Vosylius et al)

Imagining their actions within an image


