Controlling Mechanical Systems with Learned Models: A Machine Learning Approach

Abstract

Optimal control has seen many success stories over the past decades. However, when it comes to autonomous systems in open-ended settings, we require methods that allow for automatic learning from data. Reinforcement learning is a principled mathematical framework for autonomously learning good control strategies from trial and error. Unfortunately, reinforcement learning suffers from data inefficiency, i.e., the learning system often requires a large amount of data before it learns anything useful. Such extensive data collection is usually impractical when working with mechanical systems, such as robots. In this talk, I will outline two approaches toward data-efficient reinforcement learning and draw connections to the optimal control setting. First, I will detail a model-based reinforcement learning method that exploits probabilistic models for fast learning. Second, I will discuss a model-predictive control approach with learned models, which allows us to provide some theoretical guarantees.
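
To make the model-based idea concrete, the following is a minimal sketch (in Python, using scikit-learn, not the implementation discussed in the talk): a Gaussian-process dynamics model is fitted to a small batch of transitions from a damped pendulum and then used inside a simple random-shooting model-predictive controller. The pendulum dynamics, cost function, horizon, and planner are illustrative assumptions, not details taken from the references below.

```python
# Minimal sketch: a GP dynamics model plus random-shooting MPC.
# The system, cost, and planner are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def true_dynamics(x, u, dt=0.05):
    """Unknown ground-truth system (damped pendulum), used only to generate data."""
    theta, omega = x
    omega_new = omega + dt * (-9.81 * np.sin(theta) - 0.1 * omega + u)
    return np.array([theta + dt * omega_new, omega_new])

# Collect a small batch of random transitions (data efficiency is the point).
rng = np.random.default_rng(0)
X, Y = [], []
x = np.zeros(2)
for _ in range(50):
    u = rng.uniform(-2.0, 2.0)
    x_next = true_dynamics(x, u)
    X.append(np.r_[x, u])
    Y.append(x_next - x)  # model the state difference, not the next state
    x = x_next if abs(x_next[1]) < 10 else np.zeros(2)

# Probabilistic dynamics model: one GP per state dimension.
kernel = RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(1e-3)
models = [
    GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    .fit(np.array(X), np.array(Y)[:, d])
    for d in range(2)
]

def predict(x, u):
    """Mean next state plus per-dimension predictive std (model uncertainty)."""
    z = np.r_[x, u].reshape(1, -1)
    means, stds = zip(*(m.predict(z, return_std=True) for m in models))
    return x + np.array(means).ravel(), np.array(stds).ravel()

def plan(x0, horizon=10, n_candidates=200):
    """Random-shooting MPC: pick the first action of the cheapest predicted rollout."""
    best_u, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        us = rng.uniform(-2.0, 2.0, size=horizon)
        x, cost = x0, 0.0
        for u in us:
            x, _ = predict(x, u)
            cost += (np.pi - abs(x[0])) ** 2 + 0.01 * u ** 2  # swing-up cost
        if cost < best_cost:
            best_u, best_cost = us[0], cost
    return best_u

x = np.zeros(2)
for t in range(5):
    u = plan(x)
    x = true_dynamics(x, u)
    print(f"step {t}: u = {u:+.2f}, state = {x}")
```

The sketch only exposes the GP's predictive uncertainty at single inputs; a data-efficient method such as PILCO additionally propagates that uncertainty through the whole planning horizon when evaluating a policy.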

Key references

  • Marc P. Deisenroth, Dieter Fox, Carl E. Rasmussen, Gaussian Processes for Data-Efficient Learning in Robotics and Control, IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 37, pp. 408–423, 2015
  • Sanket Kamthe, Marc P. Deisenroth, Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control, Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2018

Event: Seminar
Location: Kyoto University, Japan
Speaker: Marc Deisenroth, DeepMind Chair in Artificial Intelligence