
  

Stochastic Control up to a Hitting Time: Optimality and Rolling-horizon Implementation

Author(s):

D. Chatterjee, E. Cinquemani, G. Chaloulos, J. Lygeros
Conference/Journal:

Technical Report no. AUT08-10. Also available at: http://arxiv.org/abs/0806.3008
Abstract:

We present a dynamic programming-based solution to a stochastic optimal control problem up to a hitting time for a discrete-time Markov control process. First we determine an optimal control policy that steers the process toward a compact target set while simultaneously minimizing an expected discounted cost. We then provide a rolling-horizon strategy for approximating the optimal policy, together with a quantitative characterization of its sub-optimality with respect to the optimal policy. Finally, we address related issues of asymptotic discount-optimality of the value-iteration policy. Both the state space and the action space are assumed to be Polish.
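To make the setting concrete, the following is a minimal illustrative sketch of value iteration for a discounted cost accrued up to the hitting time of a target set, together with a rolling-horizon action selector. It assumes a finite Markov decision process (the paper itself works on general Polish state and action spaces), and all names here (P, c, alpha, target, horizon) are hypothetical placeholders, not notation from the report.

```python
# Illustrative sketch only: finite-state/finite-action stand-in for the
# Polish-space setting of the paper. P, c, alpha, target are hypothetical.
import numpy as np

def value_iteration(P, c, alpha, target, n_iter=500, tol=1e-8):
    """Approximate the optimal expected discounted cost up to the hitting
    time of the target set, and a greedy (value-iteration) policy.

    P      : array (A, S, S), P[a, x, y] = Prob(next state = y | state x, action a)
    c      : array (S, A), nonnegative stage cost
    alpha  : discount factor in (0, 1)
    target : boolean array (S,), True for states in the target set K
    """
    V = np.zeros(c.shape[0])
    for _ in range(n_iter):
        # Q[x, a] = c(x, a) + alpha * E[V(next state) | x, a]
        Q = c + alpha * np.einsum('axy,y->xa', P, V)
        V_new = Q.min(axis=1)
        V_new[target] = 0.0  # cost is accrued only until the hitting time
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q.argmin(axis=1)

def rolling_horizon_action(x, P, c, alpha, target, horizon):
    """Pick one action at state x by solving a `horizon`-step truncation of
    the problem; re-solving at every visited state gives the rolling-horizon
    (receding-horizon) policy the abstract refers to. Requires horizon >= 1."""
    V = np.zeros(c.shape[0])
    for _ in range(horizon):
        Q = c + alpha * np.einsum('axy,y->xa', P, V)
        V = Q.min(axis=1)
        V[target] = 0.0
    return int(Q[x].argmin())
```

In this sketch the rolling-horizon controller only ever optimizes over a finite lookahead and reapplies the first action at each step; the report's contribution is to quantify how far such a strategy can be from the true infinite-horizon optimum up to the hitting time.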

Year:

2008
Type of Publication:

Technical Report
Supervisor:

J. Lygeros

File Download:

Request a copy of this publication.
% Autogenerated BibTeX entry
@TechReport { ChaEtal:2008:IFA_3274,
    author={D. Chatterjee and E. Cinquemani and G. Chaloulos and J. Lygeros},
    title={{Stochastic Control up to a Hitting Time: Optimality and
	  Rolling-horizon Implementation}},
    institution={},
    year={2008},
    number={AUT08-10},
    address={},
    month=jun,
    url={http://control.ee.ethz.ch/index.cgi?page=publications;action=details;id=3274}
}