
Gaussian Processes in Reinforcement Learning

Author(s):

C. Frei
Conference/Journal:

Semester Thesis, HS15 (10469)
Abstract:

We present a study and implementation of the Probabilistic Inference for Learning Control (PILCO) algorithm introduced in the paper "Gaussian Processes for Data-Efficient Learning in Robotics and Control" [1]. PILCO is a learning algorithm that uses Gaussian Processes (GPs) to account for uncertainty while learning a dynamics model. In [1], the authors use PILCO to learn the model and design a controller simultaneously, considering both linear and nonlinear control policies. In this work, we derive explicit expressions for the implementation of PILCO for linear policies only. The resulting code can be used to illustrate how the PILCO learning framework works and to evaluate its performance.
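
To make the loop described above concrete, below is a minimal, self-contained sketch of a PILCO-style iteration with a linear state-feedback policy u = -Kx. It is not the thesis implementation: PILCO's analytic moment matching is replaced here with Monte Carlo rollouts through the learned GP model, and the plant (a noisy double integrator), cost weights, horizons, and sample counts are all illustrative assumptions.

# Illustrative PILCO-style loop with a linear policy u = -K x (sketch only).
# PILCO's analytic moment matching is replaced by Monte Carlo rollouts through
# the learned GP model; the plant, cost, and all constants are toy assumptions.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy "true" dynamics, unknown to the learner: x' = A x + B u + noise.
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])
B_true = np.array([[0.0], [0.1]])

def step(x, u):
    return A_true @ x + B_true @ u + 0.01 * rng.standard_normal(2)

def rollout(policy, x0, horizon=20):
    """Apply a policy to the real system and record (x, u, x') transitions."""
    X, U, Xn = [], [], []
    x = x0.copy()
    for _ in range(horizon):
        u = policy(x)
        xn = step(x, u)
        X.append(x); U.append(u); Xn.append(xn)
        x = xn
    return np.array(X), np.array(U), np.array(Xn)

def fit_gp_model(X, U, Xn):
    """Fit one GP per state dimension mapping (x, u) -> next state."""
    Z = np.hstack([X, U])
    kernel = RBF(length_scale=np.ones(Z.shape[1])) + WhiteKernel(1e-4)
    return [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(Z, Xn[:, d])
            for d in range(Xn.shape[1])]

def expected_cost(K_flat, gps, x0, horizon=20, n_samples=5):
    """Monte Carlo estimate of the cumulative quadratic cost under the GP model."""
    K = K_flat.reshape(1, -1)
    total = 0.0
    for _ in range(n_samples):
        x = x0.copy()
        for _ in range(horizon):
            u = -K @ x
            z = np.hstack([x, u]).reshape(1, -1)
            # Sample the next state from the per-dimension GP posteriors.
            preds = [gp.predict(z, return_std=True) for gp in gps]
            x = np.array([m[0] + s[0] * rng.standard_normal() for m, s in preds])
            total += x @ x + 0.1 * float(u @ u)
    return total / n_samples

# PILCO-style iterations: interact, refit the GP model, re-optimize the policy.
x0 = np.array([1.0, 0.0])
K = np.zeros(2)
data = rollout(lambda x: rng.standard_normal(1), x0)   # initial random exploration
for it in range(3):
    gps = fit_gp_model(*data)
    res = minimize(expected_cost, K, args=(gps, x0),
                   method="Nelder-Mead", options={"maxiter": 60})
    K = res.x
    new_data = rollout(lambda x: -K.reshape(1, -1) @ x, x0)  # run the learned policy
    data = tuple(np.vstack([a, b]) for a, b in zip(data, new_data))
    print(f"iteration {it}: estimated cost {res.fun:.2f}, K = {K}")

PILCO itself propagates a full Gaussian state distribution analytically through the GP dynamics via moment matching, which the thesis derives explicitly for the linear-policy case; the sampling above is only a shortcut to keep the sketch short and dependency-light.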

Supervisors: Marius Schmitt, Chithrupa Ramesh, Paul Beuchat, Florian Dörfler

Year:

2016
Type of Publication:

Semester/Bachelor Thesis
Supervisor:

F. Dörfler

File Download:

Request a copy of this publication.
% Autogenerated BibTeX entry
@PhdThesis { Xxx:2016:IFA_5379,
    author = {C. Frei},
    title = {Gaussian Processes in Reinforcement Learning},
    year = {2016}
}