Reinforcement Learning from Very Scarce Data

Abstract

In many practical applications of machine learning, we face the challenge of data-efficient learning, i.e., learning from scarce data. Application areas include healthcare, climate science, and autonomous robotics. There are many approaches to learning from scarce data; in this talk, I will discuss a few of them in the context of reinforcement learning. First, I will motivate probabilistic, model-based approaches to reinforcement learning, which allow us to reduce the effect of model errors. Second, I will discuss a meta-learning approach that generalizes knowledge across tasks to enable few-shot learning. Finally, I will show how structural prior knowledge can speed up learning further: by exploiting Lie group structures, we can learn predictive models from high-dimensional observations with very little data.

Key references

  • Marc P. Deisenroth, Dieter Fox, Carl E. Rasmussen, Gaussian Processes for Data-Efficient Learning in Robotics and Control, IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 37, pp. 408–423, 2015
  • Steindór Sæmundsson, Katja Hofmann, Marc P. Deisenroth, Meta Reinforcement Learning with Latent Variable Gaussian Processes, Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2018
  • Steindór Sæmundsson, Katja Hofmann, Marc P. Deisenroth, Variational Integrator Networks for Physically Meaningful Embeddings, arXiv:1910.09349, 2019
Event: Seminar
Location: RIKEN, Tokyo, Japan
Speaker: Marc Deisenroth, DeepMind Chair in Artificial Intelligence