
Abstract

On our path toward fully autonomous systems, i.e., systems that operate in the real world without significant human intervention, reinforcement learning (RL) is a promising framework for learning to solve problems by trial and error. While RL has seen many recent successes, a practical challenge is its data inefficiency: in real-world problems (e.g., robotics), it is not always possible to conduct millions of experiments, due to time or hardware constraints, for instance. In this talk, I will outline three approaches that explicitly address the data-efficiency challenge in reinforcement learning using probabilistic models. First, I will give a brief overview of a model-based RL algorithm that can learn from small datasets. Second, I will describe an idea based on model predictive control that allows us to learn even faster while respecting state or control constraints, which is important for safe exploration. Finally, I will introduce a latent-variable approach to meta-learning in the context of model-based RL.
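To make the model-based RL loop behind these ideas concrete, the Python sketch below fits a probabilistic dynamics model (a Gaussian process) to a small dataset and then plans actions with simple random-shooting model predictive control under a control constraint. This is a minimal, hypothetical illustration, not the speaker's algorithm (e.g., PILCO or its successors); the toy 1-D system, cost, horizon, and all parameter values are assumptions made for this example.

# Illustrative sketch (not the speaker's method): learn a probabilistic
# dynamics model from few transitions, then plan with random-shooting MPC
# under a control constraint |a| <= a_max.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def true_dynamics(s, a):
    """Unknown ground-truth dynamics of a toy 1-D system (used only for data)."""
    return 0.9 * s + 0.5 * a + 0.01 * rng.standard_normal()

# Small dataset of (state, action) -> next-state transitions.
S = rng.uniform(-1, 1, size=(30, 1))
A = rng.uniform(-1, 1, size=(30, 1))
X = np.hstack([S, A])
y = np.array([true_dynamics(s[0], a[0]) for s, a in zip(S, A)])

# Probabilistic dynamics model: a Gaussian process over (s, a) -> s'.
model = GaussianProcessRegressor(normalize_y=True).fit(X, y)

def mpc_action(s0, horizon=5, n_candidates=100, a_max=1.0):
    """Random-shooting MPC: sample constrained action sequences, roll them
    out through the learned model's mean, return the first action of the
    lowest-cost sequence."""
    best_a, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        actions = rng.uniform(-a_max, a_max, size=horizon)  # control constraint
        s, cost = s0, 0.0
        for a in actions:
            s = model.predict(np.array([[s, a]]))[0]
            cost += s ** 2  # quadratic cost: drive the state to the origin
        if cost < best_cost:
            best_a, best_cost = actions[0], cost
    return best_a

# Closed-loop control: replan at every step from the current state.
s = 1.0
for t in range(10):
    s = true_dynamics(s, mpc_action(s))
print("final state:", s)

Because the model is learned from only 30 transitions and replanning happens at every step, this kind of loop can get by with far less real-world data than model-free trial and error, which is the data-efficiency point the abstract makes.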


Bio

Professor Marc Deisenroth is the DeepMind Chair in Artificial Intelligence at University College London. He also holds a visiting faculty position at the University of Johannesburg. From 2014 to 2019, Marc was a faculty member in the Department of Computing, Imperial College London. Marc’s research interests center on data-efficient machine learning, probabilistic modeling, and autonomous decision making.

Marc was Program Chair of EWRL 2012 and Workshops Chair of RSS 2013, and he received Best Paper Awards at ICRA 2014 and ICCAS 2016. In 2019, Marc co-organized the Machine Learning Summer School in London. In 2018, he was awarded the President’s Award for Outstanding Early Career Researcher at Imperial College. He is a recipient of a Google Faculty Research Award and a Microsoft PhD Grant. He is co-author of the book Mathematics for Machine Learning, published by Cambridge University Press.