
A Comparison of ADMM and AMA for MPC

Author(s):

P. Vlachas
Conference/Journal:

Master Thesis, SS16 (10487)
Abstract:

Interest in model predictive control (MPC) has risen over the last decades, as it can outperform traditional control techniques in handling input and state constraints. Recent advances in the computational power of embedded platforms enable the application of MPC to systems with fast dynamics. First-order optimization methods have gained attention because they rely on low-complexity operations, are parallelizable, and can be implemented in fixed-point arithmetic, enabling low-cost and efficient implementations on embedded platforms such as Field Programmable Gate Arrays (FPGAs) or Digital Signal Processors (DSPs). We consider the Alternating Direction Method of Multipliers (ADMM) and the Alternating Minimization Algorithm (AMA).
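To illustrate the kind of low-complexity iterations involved, the following is a minimal MATLAB sketch of ADMM applied to a box-constrained quadratic program of the form minimize 0.5*z'*H*z + f'*z subject to lb <= z <= ub. The splitting, variable names, step-size rho, and stopping rule are assumptions made for illustration and do not reproduce the thesis's exact formulations.

% Minimal ADMM sketch for a box-constrained QP:
%   minimize 0.5*z'*H*z + f'*z   subject to   lb <= z <= ub
% (illustrative only; names and the stopping rule are assumptions)
function [z, iter] = admm_box_qp(H, f, lb, ub, rho, maxIter, tol)
    n = numel(f);
    x = zeros(n,1); z = zeros(n,1); u = zeros(n,1);  % primal, consensus, scaled dual
    R = chol(H + rho*eye(n));                        % factor once, reuse every iteration
    for iter = 1:maxIter
        % x-update: solve (H + rho*I)*x = rho*(z - u) - f
        x = R \ (R' \ (rho*(z - u) - f));
        % z-update: projection onto the box [lb, ub]
        zOld = z;
        z = min(max(x + u, lb), ub);
        % scaled dual update
        u = u + x - z;
        % simple residual-based stopping criterion
        if norm(x - z) < tol && rho*norm(z - zOld) < tol
            break;
        end
    end
end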

Current research does not give a conclusive answer to the question of which method needs fewer iterations to solve quadratic problems. This thesis presents a comparison of the two methods applied to MPC problems with quadratic objectives and box constraints. To guarantee a fair comparison, both methods must be tuned for their best performance. This includes using a suitable MPC problem reformulation, a carefully chosen step-size parameter, and additional convergence acceleration techniques. To this end, we use condensed and uncondensed problem formulations, optimal step-size selection, Nesterov acceleration, over-relaxation, constraint conditioning and space scaling. We also show how techniques available for only one method can be transferred to the other, and we draw a parallel between the optimal step-sizes of ADMM and AMA in each problem formulation.
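For contrast, here is a minimal AMA sketch for the same box-constrained QP. The key structural difference from ADMM is that AMA minimizes the plain (unaugmented) Lagrangian in the first subproblem, which requires H to be positive definite. Again, names, the step-size rho, and the stopping rule are illustrative assumptions, not the thesis's formulation.

% Minimal AMA sketch for the same box-constrained QP
% (H must be positive definite so the first subproblem is well posed)
function [z, iter] = ama_box_qp(H, f, lb, ub, rho, maxIter, tol)
    n = numel(f);
    lam = zeros(n,1); z = zeros(n,1);
    R = chol(H);                          % plain Lagrangian in x, no penalty term
    for iter = 1:maxIter
        % x-update: solve H*x = -(f + lam)
        x = -(R \ (R' \ (f + lam)));
        % z-update: projection onto the box [lb, ub]
        zOld = z;
        z = min(max(x + lam/rho, lb), ub);
        % multiplier (dual) update
        lam = lam + rho*(x - z);
        if norm(x - z) < tol && rho*norm(z - zOld) < tol
            break;
        end
    end
end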

In order to compare the convergence speed of the methods in practice, we consider a variety of MPC problems, including quadruple-tank, quadrotor, and buffer-management control, as well as randomly generated problems. Comparison metrics are the number of iterations each method needs to solve the problems, the number of operations per iteration (e.g., matrix-vector multiplications), the implementation effort, and the availability of theoretical convergence guarantees. In all considered problems, the ADMM methods outperform their AMA counterparts in terms of convergence speed. The benchmarking framework created in MATLAB can be used to identify the formulation, the method, and the scaling/conditioning strategy that needs the fewest iterations to solve problems stemming from a specific application.
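As a hypothetical usage example, the two sketches above could be compared on a randomly generated box-constrained QP by counting iterations, in the spirit of the benchmarking framework described here. The step size rho = 1.0 is an arbitrary choice for illustration; in the thesis the step sizes are selected optimally per formulation.

% Illustrative comparison on a randomly generated box-constrained QP
% (assumes admm_box_qp and ama_box_qp from the sketches above are on the path)
n  = 20;
A  = randn(n);  H = A'*A + eye(n);        % positive definite Hessian
f  = randn(n,1); lb = -ones(n,1); ub = ones(n,1);
[~, itAdmm] = admm_box_qp(H, f, lb, ub, 1.0, 5000, 1e-6);
[~, itAma ] = ama_box_qp (H, f, lb, ub, 1.0, 5000, 1e-6);
fprintf('ADMM: %d iterations, AMA: %d iterations\n', itAdmm, itAma);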

Supervisor: Felix Rey, John Lygeros

Year:

2016
Type of Publication:

Diploma/Master Thesis

% Autogenerated BibTeX entry
@MastersThesis { Xxx:2016:IFA_5477,
    author = {P. Vlachas},
    title  = {A Comparison of ADMM and AMA for MPC},
    year   = {2016}
}