Gaussian Processes for Data-Efficient Learning in Robotics and Control

Abstract

Autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is impractical for real systems such as robots, where gathering many interactions is time-consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. We follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in challenging real robot and control tasks.

Citation: https://www.computer.org/csdl/journal/tp/2015/02/06654139/13rRUILLkEU
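The core ingredient is a GP transition model whose predictive uncertainty is available at every query, so planning need not trust a single point estimate of the dynamics. The sketch below is a minimal illustration of fitting such a model with scikit-learn: inputs are state-action pairs, targets are state differences, and the toy pendulum-like dynamics, data sizes, and variable names are assumptions for demonstration only. It is not the paper's implementation, which additionally propagates uncertainty analytically through long-term predictions.

```python
# Minimal sketch (not the authors' code) of a GP transition model for
# model-based RL: learn (state, action) -> state difference with
# per-prediction uncertainty. The pendulum-like dynamics are a toy
# stand-in for a real system.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)

def step(s, a):
    # Hypothetical noisy pendulum-like dynamics, used only to generate data.
    theta, omega = s
    omega_next = omega + 0.1 * (a - 9.81 * np.sin(theta))
    theta_next = theta + 0.1 * omega_next
    return np.array([theta_next, omega_next]) + rng.normal(0.0, 0.01, 2)

# Collect a small batch of transitions (s_t, a_t, s_{t+1}).
states = rng.uniform(-np.pi, np.pi, size=(50, 2))
actions = rng.uniform(-2.0, 2.0, size=(50, 1))
next_states = np.array([step(s, a[0]) for s, a in zip(states, actions)])

X = np.hstack([states, actions])  # GP inputs: state-action pairs
Y = next_states - states          # GP targets: state differences

# One GP per output dimension, with an ARD RBF kernel and learned noise.
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X.shape[1])) \
    + WhiteKernel(noise_level=1e-2)
models = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
          .fit(X, Y[:, d]) for d in range(Y.shape[1])]

# Each prediction comes with a standard deviation, the model uncertainty
# that planning and controller learning can take into account.
x_query = np.array([[0.1, 0.0, 1.0]])  # (theta, omega, action)
for d, gp in enumerate(models):
    mean, std = gp.predict(x_query, return_std=True)
    print(f"delta[{d}]: {mean[0]:+.4f} +/- {std[0]:.4f}")
```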

Event
International Conference on Artificial Intelligence and Statistics (AISTATS)
Location
Reykjavik, Iceland
Marc Deisenroth
DeepMind Chair in Artificial Intelligence