Convex approximation schemes

We focus on linear dynamical systems driven by stochastic noise and a control input, and consider the problem of finding a control policy that minimizes an expected cost function while simultaneously satisfying constraints on the control input and on the state evolution. In general, no control policy exists that guarantees satisfaction of deterministic (hard) constraints over the whole infinite horizon. One way to cope with this issue is to relax the constraints into probabilistic (soft) constraints. This amounts to requiring that the constraints are satisfied with sufficiently high probability or, alternatively, that an expected reward for their fulfillment is kept sufficiently large.
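For a state constraint x_t ∈ X, the two soft-constraint formulations can be written as follows (the notation, including the violation levels α and β, is assumed here for illustration):

```latex
% Chance constraint: the state satisfies the constraint with high probability
\Pr\left( x_t \in \mathcal{X} \right) \ge 1 - \alpha

% Expectation-type constraint: the expected constraint violation
% (distance to the constraint set) is kept small
\mathbb{E}\!\left[ \mathrm{dist}\left( x_t, \mathcal{X} \right) \right] \le \beta
```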

In stochastic MPC, at every time t a finite-horizon approximation of the infinite-horizon problem is solved, but only the first control of the resulting policy is implemented. At the next time t+1 a new problem is considered, the control policy is updated, and the process is repeated in a receding horizon fashion. (Under time-invariance assumptions, the finite-horizon optimal control problem is the same at all times, giving rise to a stationary optimal control policy that can be computed offline.)
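The receding-horizon loop can be sketched as follows. The system matrices, costs, horizon, and noise level below are illustrative assumptions, and the soft constraints are omitted, so each subproblem reduces to an unconstrained finite-horizon LQ problem solved by a backward Riccati recursion:

```python
import numpy as np

# Hypothetical double-integrator dynamics and quadratic stage costs.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

def finite_horizon_gains(A, B, Q, R, N):
    """Backward Riccati recursion for the N-step LQ subproblem."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains[0] is the gain for the first step

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
for t in range(100):
    # At every time t, solve the finite-horizon problem ...
    K0 = finite_horizon_gains(A, B, Q, R, N=20)[0]
    # ... but implement only the first control input of the policy.
    u = -K0 @ x
    # Apply the input; Gaussian process noise enters additively.
    x = A @ x + B @ u + 0.01 * rng.standard_normal(2)
```

Since the subproblem here is time-invariant, the first-step gain is identical at every t and could be computed once offline, matching the stationarity remark above.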

Two considerations lead to the reformulation of an infinite-horizon problem in terms of subproblems of finite horizon length. First, given any bounded set (e.g. a safe set), the state of a linear stochastic dynamical system is guaranteed to exit the set at some future time with probability one, regardless of the control policy. Therefore, soft constraints may turn the original (infeasible) hard-constrained optimization problem into a feasible problem only if the horizon length is finite. Second, even if the constraints are reformulated so that an admissible infinite-horizon policy exists, the computation of such a policy is generally intractable.
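A quick Monte Carlo experiment illustrates the first point. The scalar system, feedback gain, and bounded set below are hypothetical: even under stabilizing feedback, additive Gaussian noise eventually drives the state out of the set in almost every simulated trajectory.

```python
import numpy as np

# Closed-loop scalar system x_{t+1} = x_t - 0.5 x_t + w_t with
# w_t ~ N(0, 1), i.e. a stable loop with persistent Gaussian noise.
# We count how many trajectories leave the bounded set [-3, 3]
# within T steps.
rng = np.random.default_rng(1)
n_traj, T, bound = 500, 1000, 3.0

exited = 0
for _ in range(n_traj):
    x = 0.0
    for t in range(T):
        x = x - 0.5 * x + rng.standard_normal()
        if abs(x) > bound:
            exited += 1
            break

exit_fraction = exited / n_traj  # approaches 1 as T grows
```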

The aim of this research is to establish the convexity of certain stochastic finite-horizon control problems with soft constraints. Convexity is central for establishing uniqueness of the optimal solution (under strict convexity), and for the fast computation of the solution by numerical procedures.

We have developed several classes of convex relaxations of finite-horizon stochastic optimal control problems for linear systems with Gaussian disturbances, subject to chance constraints or integrated chance constraints on the states and/or control inputs. The results are currently under review.
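As one example of the kind of convex reformulation involved (the problem data below are illustrative, not taken from the work under review), a chance constraint on a linear function of a Gaussian random vector admits a well-known deterministic second-order cone reformulation: for x ~ N(μ, Σ), the constraint P(aᵀx ≤ b) ≥ 1 − ε holds if and only if aᵀμ + Φ⁻¹(1 − ε)·√(aᵀΣa) ≤ b, which is convex in (μ, b).

```python
import numpy as np
from scipy.stats import norm

# Illustrative Gaussian data for the chance constraint P(a^T x <= b) >= 1 - eps.
eps = 0.05
a = np.array([1.0, 1.0])
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.2], [0.2, 1.0]])

# Deterministic left-hand side of the equivalent convex condition:
#   a^T mu + Phi^{-1}(1 - eps) * sqrt(a^T Sigma a) <= b
lhs = a @ mu + norm.ppf(1 - eps) * np.sqrt(a @ Sigma @ a)

# Monte Carlo check: choosing b = lhs should make the constraint
# hold with probability (approximately) 1 - eps.
rng = np.random.default_rng(2)
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
violation = np.mean(samples @ a > lhs)  # empirical violation probability
```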
