
Stochastic Model Predictive Control: Tractability and constraint satisfaction

Abstract:
Exploiting advances in optimization, especially convex and multi-parametric optimization, model predictive control (MPC) for deterministic systems has matured into a powerful methodology with a wide range of applications. Recent activity in robust optimization has also enabled the formulation and solution of robust MPC problems for systems subject to various kinds of worst-case uncertainty. For systems subject to stochastic uncertainty, however, the formulation and solution of MPC problems still pose fundamental conceptual challenges. Optimization over open-loop controls, for example, tends to lead to excessively conservative solutions, so optimization over an appropriate class of feedback policies is often necessary. As in the case of robust MPC, the class of policies one considers is crucial and represents a trade-off between the tractability of the optimization problem and the optimality of the solution. Moreover, in the presence of stochastic disturbances, hard state and input constraints need to be re-interpreted as chance constraints, or integrated chance constraints, which may be violated with a certain tolerance. This interpretation, however, makes it difficult to enforce hard input constraints dictated by the capabilities of the system and the actuators, especially if one considers desirable classes of feedback policies such as affine policies. And what guarantees can one provide in the infinite horizon case, given that the system evolution is obtained by solving an infinite sequence of finite horizon problems, each of which may violate its constraints with a finite probability? This talk will outline these challenges and propose solutions for some of them. The resulting stochastic MPC methods will be illustrated on benchmark problems and compared with alternatives.
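To make the chance-constraint idea concrete, the following is a minimal sketch (not from the talk) of the standard deterministic tightening for a scalar toy system x⁺ = x + u + w with Gaussian noise w ~ N(0, σ²): the chance constraint P(x⁺ ≤ x_max) ≥ 1 − ε is equivalent to a hard constraint on the nominal successor state, backed off by a quantile of the disturbance. The system, bounds, and tolerance below are illustrative assumptions.

```python
# Minimal sketch: Gaussian chance-constraint tightening for a scalar system
# x+ = x + u + w, with w ~ N(0, sigma^2). All numbers are illustrative.
from scipy.stats import norm


def tightened_bound(x_max: float, sigma: float, eps: float) -> float:
    """Deterministic bound on the nominal successor state x + u implied by
    the chance constraint P(x + u + w <= x_max) >= 1 - eps.

    Since w is Gaussian, this holds iff
        x + u <= x_max - sigma * Phi^{-1}(1 - eps),
    where Phi^{-1} is the standard normal quantile function.
    """
    return x_max - sigma * norm.ppf(1 - eps)


# Example: with x_max = 1.0, sigma = 0.1 and a 5% violation tolerance,
# the controller must keep the nominal successor state below ~0.8355.
print(tightened_bound(1.0, 0.1, 0.05))
```

The back-off term σ·Φ⁻¹(1 − ε) shows the trade-off the abstract alludes to: shrinking the tolerance ε inflates the back-off without bound, which is one reason hard input constraints are difficult to reconcile with unbounded disturbances under affine policies.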

http://control.ee.ethz.ch/~jlygeros/
Type of Seminar:
Optimization and Applications Seminar
Speaker:
Prof. John Lygeros
Automatic Control Laboratory, ETH Zurich
Date/Time:
Dec 07, 2009   16:30-18:00
Location:

ETH Zentrum, Rämistrasse 101, HG G 19.1
Biographical Sketch:
John Lygeros received a B.Eng. degree in Electrical Engineering and an M.Sc. degree in Automatic Control from Imperial College, London, U.K., in 1990 and 1991, respectively. He then received a Ph.D. degree from the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, in 1996. He held a series of postdoctoral research appointments with the National Automated Highway Systems Consortium, the Massachusetts Institute of Technology, and the University of California, Berkeley. In parallel, he was also a part-time Research Engineer with SRI International, Menlo Park, CA, and a Visiting Professor with the Department of Mathematics, Université de Bretagne Occidentale, Brest, France. Between 2000 and 2003, he was a university lecturer with the Department of Engineering, University of Cambridge, Cambridge, U.K., and a fellow of Churchill College. Between 2003 and 2006, he was an assistant professor with the Department of Electrical and Computer Engineering, University of Patras, Patras, Greece. In July 2006, he joined the Automatic Control Laboratory, ETH Zurich, Switzerland, as an associate professor; he is currently serving as the Head of the laboratory. His research interests include modeling, analysis, and control of hierarchical, hybrid and stochastic systems, with applications to biochemical networks and large-scale engineering systems such as power networks and air traffic management.