Applying deep learning to robotics is difficult for many reasons; challenges include collecting data at scale, reproducing experiments, and training models that are robust to varying environments. In this talk, I will discuss our new object stacking benchmark task. We generate a challenging and diverse set of objects, selected to require strategies beyond a simple “pick-and-place” solution. In a large experimental study based on this benchmark, we investigate what choices matter for learning vision-based agents in simulated environments, and what factors affect transfer from the simulated to the real robot. Finally, we develop reinforcement learning algorithms to efficiently transfer behaviours from one set of objects to another and from simulation to the real world, given a fixed data budget.
Coline Devin is a research scientist at DeepMind. Her research focuses on reinforcement learning and imitation learning for robotics, both in simulation and in the real world. She received her PhD from the University of California, Berkeley, where she worked on deep learning methods for compositional robotic agents.