Bayesian Inference for Data-Efficient Reinforcement Learning

Abstract

In many high-impact areas of machine learning, we face the challenge of data-efficient learning, i.e., learning from scarce data. This includes healthcare, climate science, and autonomous robots. There are many approaches to learning from scarce data. In this talk, I will discuss a few of them in the context of reinforcement learning. First, I will motivate probabilistic, model-based approaches to reinforcement learning, which allow us to reduce the effect of model errors. Second, I will discuss an approach based on model-predictive control that allows us to speed up learning while accounting for safety constraints. Finally, I will discuss a meta-learning approach that allows us to generalize knowledge across tasks to enable few-shot learning.

Key references

Marc P. Deisenroth, Dieter Fox, Carl E. Rasmussen, Gaussian Processes for Data-Efficient Learning in Robotics and Control, IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 37, pp. 408–423, 2015

Sanket Kamthe, Marc P. Deisenroth, Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control, Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2018

Steindór Sæmundsson, Katja Hofmann, Marc P. Deisenroth, Meta Reinforcement Learning with Latent Variable Gaussian Processes, Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2018
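To give a flavour of the first idea in the abstract (a probabilistic dynamics model whose uncertainty can be taken into account during planning), here is a minimal, illustrative sketch. It is not the talk's PILCO or GP-MPC implementation; it assumes scikit-learn is available, uses a toy one-dimensional dynamics function, and the variable names (e.g. true_dynamics) are hypothetical.

```python
# Minimal sketch: fit a Gaussian process dynamics model to a few transitions and
# obtain predictions with uncertainty, the ingredient that lets model-based RL
# avoid treating model errors as ground truth. Toy example, not the talk's code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical toy dynamics: next state is a nonlinear function of (state, action) plus noise.
def true_dynamics(s, a):
    return np.sin(s) + 0.5 * a + 0.05 * rng.standard_normal(s.shape)

# Collect a small batch of transitions (the data-efficient setting: few samples).
states = rng.uniform(-2.0, 2.0, size=50)
actions = rng.uniform(-1.0, 1.0, size=50)
next_states = true_dynamics(states, actions)

X = np.column_stack([states, actions])   # model inputs: (state, action)
y = next_states - states                 # model targets: state differences

# GP dynamics model: hyperparameters are learned from data; the posterior gives
# both a mean prediction and a predictive standard deviation.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[1.0, 1.0]) + WhiteKernel(noise_level=1e-2),
    normalize_y=True,
)
gp.fit(X, y)

# The predictive uncertainty would be propagated through planned rollouts;
# here we just query a single (state, action) pair.
s, a = 0.3, -0.2
mean, std = gp.predict(np.array([[s, a]]), return_std=True)
print(f"predicted next state: {s + mean[0]:.3f} +/- {std[0]:.3f}")
```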

Date
Event: Seminar
Location: Spotify (virtual)
Speaker: Marc Deisenroth, DeepMind Chair of Machine Learning and Artificial Intelligence