Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control

File: 1706.06491v1.pdf (Published version, 396.45 kB, Adobe PDF)
Title: Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control
Authors: Kamthe, S; Deisenroth, MP
Item Type: Working Paper
Abstract: Trial-and-error based reinforcement learning (RL) has seen rapid advancements in recent times, especially with the advent of deep neural networks. However, the majority of autonomous RL algorithms either rely on engineered features or a large number of interactions with the environment. Such a large number of interactions may be impractical in many real-world applications. For example, robots are subject to wear and tear and, hence, millions of interactions may change or damage the system. Moreover, practical systems have limitations in the form of the maximum torque that can be safely applied. To reduce the number of system interactions while naturally handling constraints, we propose a model-based RL framework based on Model Predictive Control (MPC). In particular, we propose to learn a probabilistic transition model using Gaussian Processes (GPs) to incorporate model uncertainties into long-term predictions, thereby reducing the impact of model errors. We then use MPC to find a control sequence that minimises the expected long-term cost. We provide theoretical guarantees of first-order optimality in the GP-based transition models with deterministic approximate inference for long-term planning. The proposed framework demonstrates superior data efficiency and learning rates compared to the current state of the art.
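The abstract's core loop — learn a probabilistic GP transition model from a few interactions, then plan with MPC under input constraints — can be sketched in miniature. The following is an illustrative toy, not the paper's method: it uses plain GP-mean rollouts and random-shooting MPC on a hypothetical 1D linear system, whereas the paper propagates full predictive uncertainty with deterministic approximate inference and solves the open-loop problem with gradient-based optimisation. All names (`true_step`, `GPModel`, `mpc_action`) and parameter values are invented for this sketch.

```python
import numpy as np

def true_step(x, u):
    # Hypothetical 1D plant used only for this sketch: x' = x + 0.1*u.
    return x + 0.1 * u

def rbf_kernel(A, B, ls=1.0, var=1.0):
    # Squared-exponential kernel between row-wise input sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

class GPModel:
    """Minimal GP regression on (state, action) -> next-state delta."""
    def __init__(self, X, y, noise=1e-4):
        self.X = X
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, y)

    def predict(self, Xs):
        # Posterior mean only; the paper additionally propagates variance.
        return rbf_kernel(Xs, self.X) @ self.alpha

def mpc_action(gp, x0, horizon=5, n_candidates=200, u_max=1.0, rng=None):
    """Random-shooting MPC: sample action sequences within the torque
    limit, roll out the GP mean, return the first action of the best
    sequence (receding-horizon control)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    U = rng.uniform(-u_max, u_max, size=(n_candidates, horizon))
    costs = np.zeros(n_candidates)
    x = np.full(n_candidates, float(x0))
    for t in range(horizon):
        Xs = np.stack([x, U[:, t]], axis=1)
        x = x + gp.predict(Xs)     # predicted next state via learned model
        costs += x**2              # quadratic state cost, target x = 0
    return U[np.argmin(costs), 0]

# Collect a handful of random interactions, fit the GP transition model.
rng = np.random.default_rng(1)
xs = rng.uniform(-2, 2, 30)
us = rng.uniform(-1, 1, 30)
X = np.stack([xs, us], axis=1)
y = np.array([true_step(x, u) - x for x, u in zip(xs, us)])
gp = GPModel(X, y)

# Closed-loop control: replan at every step from the current state.
x = 2.0
for _ in range(40):
    u = mpc_action(gp, x, rng=rng)
    x = true_step(x, u)
print(abs(x) < 1.0)
```

Because the input constraint is enforced by construction (candidate actions are sampled inside the torque limit), MPC handles the safety limits the abstract mentions without any penalty terms; the data efficiency comes from planning against the learned model rather than querying the real system.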
Issue Date: 31-Dec-2017
URI: http://hdl.handle.net/10044/1/52898
Copyright Statement: © The Author
Keywords: cs.SY; stat.ML
Appears in Collections: Faculty of Engineering; Computing



