Micro-data: the next frontier in robot learning?

  • 14:00 - 14:15 - Opening
  • 14:15 - 15:00 - Keynote Talk: Oliver Brock (TU Berlin, Germany) - It’s all about priors -- and loads of micro-data
  • 15:00 - 15:20 - Noemie Jaquier (IDIAP/EPFL, Switzerland) - Improving micro-data learning by exploiting the structure and geometry of data: application to prosthetic hands
  • 15:20 - 15:40 - Konstantinos Chatzilygeroudis (Inria, France) - Combining model learning and model identification for fast robot learning
  • 15:40 - 16:00 - Franzi Meier (Max Planck Institute for Intelligent Systems, Germany) - Learning to Learn While Learning: Meta-Learning for Robotics
  • 16:00 - 16:30 - Coffee break
  • 16:30 - 16:50 - Nadia Figueroa Fernandez (EPFL, Switzerland)
  • 16:50 - 17:10 - Akshara Rai (Carnegie Mellon University, USA) - How can we use simulations to speed up learning on hardware?
  • 17:10 - 17:30 - Matteo Saveriano (Technical University of Munich, Germany) - Data-Efficient Control Policy Search using Residual Dynamics Learning
  • 17:30 - 18:00 - Round-table discussion with the invited speakers and the audience

Many fields are now snowed under by an avalanche of data, which raises considerable challenges for computer scientists. Meanwhile, robotics (among other fields) often has only a few dozen data points to work with, because acquiring them is expensive, time-consuming, and potentially dangerous. How can a robot learn with only a few data points?

Watching a child learn reveals how well humans can learn: a child may need only a few examples of a concept to “learn it”. By contrast, the impressive results achieved with modern machine learning (in particular, by deep learning) are made possible largely by the use of huge datasets. For instance, the ImageNet database used in image recognition contains about 1.2 million labelled examples; DeepMind's AlphaGo was trained on more than 38 million positions to play Go; and the same company used more than 38 days of play to train a neural network to play Atari 2600 games, such as Space Invaders or Breakout.

Like children, robots have to face the real world, in which trying something might take seconds, hours, or days, and seeing the consequences of that trial might take much longer. When robots share our world, they are expected to learn like humans or animals, that is, in far fewer than a million trials. Robots are not alone in being cursed by the price of data: any learning process that involves physical tests or precise simulations (e.g., computational fluid dynamics) faces the same issue. In short, while data might be abundant in the virtual world, it is often a scarce resource in the physical world. We refer to this challenge as "micro-data learning".

Tremendous progress has been made in robot learning during the last two decades. However, these methods have not been widely adopted by the robotics community. We believe that one of the main reasons is that robot learning still needs hundreds, if not thousands, of trials: in most cases, this is not a practical solution, which explains why most robot learning experiments are carried out primarily in simulation, often with only a few demonstrations on actual robots.

Researchers in robot learning are well aware that learning algorithms for robots need to be as data-efficient as possible. Still, they are only starting to address the micro-data challenge explicitly. The objective of this workshop is to directly address the challenge of learning with a robot in only a handful of trials / a few seconds of interaction time. We hope to organize this workshop every year in order to build a community of researchers interested in this question.
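
To make this target concrete: a classic route to learning in a handful of trials is to place a probabilistic surrogate over episode returns and use Bayesian optimization to choose each next trial. The sketch below is a minimal illustration of that idea in plain numpy; the toy "robot" (episode_return), the kernel length-scale, and the ten-trial budget are assumptions made for this example, not material from any of the talks.

    import numpy as np

    def episode_return(theta):
        # Stand-in for one expensive real-world trial: in practice this would
        # run the policy with parameter theta on hardware and return a reward.
        return -(theta - 0.3) ** 2 + 0.05 * np.random.randn()

    def rbf(a, b, lengthscale=0.2):
        # Squared-exponential kernel between two 1-D arrays of parameters.
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

    rng = np.random.default_rng(0)
    thetas = list(rng.uniform(0.0, 1.0, size=2))   # two bootstrap trials
    returns = [episode_return(t) for t in thetas]

    candidates = np.linspace(0.0, 1.0, 200)
    for trial in range(8):                          # 10 real trials in total
        X, y = np.array(thetas), np.array(returns)
        K = rbf(X, X) + 1e-4 * np.eye(len(X))       # GP posterior over returns
        Ks = rbf(candidates, X)
        mu = Ks @ np.linalg.solve(K, y)
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        # Upper confidence bound: optimism in the face of uncertainty.
        ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))
        nxt = candidates[np.argmax(ucb)]            # most informative next trial
        thetas.append(nxt)
        returns.append(episode_return(nxt))

    print("best parameter after 10 trials:", thetas[int(np.argmax(returns))])

Because the Gaussian process interpolates between the few trials already run, and the upper confidence bound favours parameters that are either promising or untested, every single trial carries information: this is the essence of data efficiency.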

Main questions

  • What are the most promising ideas to design data-efficient machine learning algorithms?
  • How can we create priors? How can we use priors "in the right way"? (see the sketch after this list)
  • What tasks can be learned in a few trials / a few seconds of interaction time?
  • What are the best benchmarks for micro-data learning?
  • What are the representations and structures required for micro-data learning?
  • How can different forms of learning be combined to cope with the scarcity of data?
  • What is the role of transfer learning in minimizing data use?
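
On the priors question above, one recurring answer (echoed in the talks on simulation and on residual dynamics learning) is to let a cheap but imperfect model carry most of the knowledge, and to reserve the scarce real trials for learning the difference between model and reality. The sketch below, again built on an invented one-parameter task, uses a simulator's prediction as the prior mean of a Gaussian process and fits only the sim-to-real residual from three real trials.

    import numpy as np

    def simulated_return(theta):
        # Cheap but systematically biased model (e.g., an idealized simulator).
        return -(theta - 0.25) ** 2

    def real_return(theta):
        # Expensive, noisy real-world trial; the true optimum is elsewhere.
        return -(theta - 0.35) ** 2 + 0.02 * np.random.randn()

    def rbf(a, b, lengthscale=0.2):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

    X = np.array([0.1, 0.5, 0.9])                    # only three real trials
    residuals = np.array([real_return(t) - simulated_return(t) for t in X])

    grid = np.linspace(0.0, 1.0, 200)
    K = rbf(X, X) + 1e-4 * np.eye(len(X))
    correction = rbf(grid, X) @ np.linalg.solve(K, residuals)
    prediction = simulated_return(grid) + correction  # prior mean + learned residual

    print("predicted best parameter:", grid[np.argmax(prediction)])

With the simulator as prior mean, the Gaussian process only has to explain a small, smooth residual; this is exactly the regime in which three data points can be enough.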