Model predictive control (MPC)

Deficiencies of precomputed (open-loop) optimal control

In the previous section we learnt how to compute an optimal control sequence on a finite time horizon using numerical methods for solving nonlinear programs (NLP), and quadratic programs (QP) in particular. There are two major deficiencies of such an approach:

  • The control sequence was computed under the assumption that the mathematical model is perfectly accurate. As soon as the reality deviates from the model, either because of some unmodelled dynamics or because of the presence of (external) disturbances, the performance of the system will deteriorate. We need a way to turn the presented open-loop (also feedforward) control scheme into a feedback one.

  • The control sequence was computed for a finite time horizon. It is commonly required to consider an infinite time horizon, which is not possible with the presented approach based on solving finite-dimensional mathematical programs.

There are several ways to address these issues. Here we introduce one of them, known as Model Predictive Control (MPC), also called Receding Horizon Control (RHC). A few more are presented in the next two sections (one based on the indirect approach, the other on dynamic programming).

Model predictive control (MPC) as a way to turn open-loop control into feedback control

The idea is to compute an optimal control sequence on a finite time horizon using the material presented in the previous section, apply only the first control action to the system, and then repeat the procedure with the time horizon shifted by one time step.
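To make the loop concrete, here is a minimal sketch in Python. It is only an illustration under assumed data: the double-integrator model, the weights, the horizon length and the initial state are made-up choices, and the constraints are dropped so that each finite-horizon problem has a closed-form least-squares solution; with constraints present, a QP solver of the kind discussed in the previous section would be called instead.

```python
# A minimal receding-horizon (MPC) loop for a discrete-time linear system
# x_{k+1} = A x_k + B u_k with a quadratic cost and no constraints, so each
# finite-horizon problem has a closed-form (least-squares) solution.
import numpy as np

# Illustrative model: double integrator sampled with period Ts
Ts = 0.1
A = np.array([[1.0, Ts],
              [0.0, 1.0]])
B = np.array([[0.5 * Ts**2],
              [Ts]])
n, m = B.shape

N = 20                      # prediction horizon (illustrative)
Q = np.diag([10.0, 1.0])    # state weight (illustrative)
R = 0.1 * np.eye(m)         # input weight (illustrative)

def mpc_control(x0):
    """Solve the finite-horizon problem for the current state x0 and
    return only the first control action (receding-horizon principle)."""
    # Condensed ("batch") prediction over the horizon: X = Phi x0 + Gamma U
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    # The quadratic cost in U is proportional to 0.5 U'H U + f'U (plus a
    # constant); its unconstrained minimizer solves the linear system H U = -f
    H = Gamma.T @ Qbar @ Gamma + Rbar
    f = Gamma.T @ Qbar @ Phi @ x0
    U = np.linalg.solve(H, -f)
    return U[:m]            # apply only the first control action

# Closed-loop simulation: re-solve on the shifted horizon at every step
x = np.array([1.0, 0.0])    # initial state (illustrative)
for k in range(50):
    u = mpc_control(x)
    x = A @ x + B @ u       # here the "plant" coincides with the model
print("final state:", x)
```

Re-solving the problem at every step with the current state as the initial condition is precisely what turns the open-loop optimizer into a feedback controller: in practice the measured (or estimated) state of the real system, rather than the model prediction, would be fed back into mpc_control.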

Although the name “model predictive control” is the one commonly used in the control community, the other, perhaps a bit less popular, name “receding horizon control” is equally descriptive, if not more so.

Note

It may take a few moments to digest the idea, but it is actually quite natural. As a matter of fact, this is the way most of us control our lives every day. We plan our actions on a finite time horizon, and while building this plan we use our understanding (model) of the world. We then perform the first action from our plan, observe the impact of our action and possibly a change in the environment, and update our plan accordingly on a new (shifted) time horizon. We repeat this procedure over and over again. It is crucial that the prediction horizon be long enough for the full impact of our actions to be observed.

Linear vs nonlinear MPC

When the model is linear and the cost quadratic (possibly with linear inequality constraints), the optimization problem solved at every time step is a QP, for which fast and reliable solvers exist; this is linear MPC. With a nonlinear model (or a nonquadratic cost, or nonlinear constraints) the problem becomes a general NLP and we speak of nonlinear MPC, which is computationally more demanding and typically yields only locally optimal solutions.

Prediction horizon vs control horizon

The prediction horizon is the number of time steps over which the evolution of the state is predicted and penalized in the cost. The control horizon is the (possibly shorter) number of steps over which the control inputs are treated as free decision variables; beyond the control horizon the input is typically kept constant, which reduces the number of optimization variables.

Hard constraints vs soft constraints

Hard constraints must be satisfied exactly, which can make the optimization problem infeasible, for example after a large disturbance has pushed the state outside the feasible region. Soft constraints are relaxed through nonnegative slack variables that are penalized in the cost, so that (small) violations are tolerated but discouraged; a standard reformulation is sketched below.
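As a rough illustration (the symbols are chosen here just for this example), a hard upper bound on a scalar state can be softened as

$$
x_k \leq x_\mathrm{max} \quad\longrightarrow\quad x_k \leq x_\mathrm{max} + \epsilon_k, \qquad \epsilon_k \geq 0,
$$

with a penalty term such as $\rho\,\epsilon_k$ or $\rho\,\epsilon_k^2$, $\rho > 0$, added to the cost function. With a sufficiently large weight on the linear (exact) penalty, the soft constraint is not violated whenever the original hard-constrained problem is feasible.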

Literature

(Rawlings, Mayne, and Diehl 2017), (Borrelli, Bemporad, and Morari 2017), (Gros and Diehl 2022), (Bemporad 2021)

References

Bemporad, Alberto. 2021. “Model Predictive Control.” Lecture Slides. http://cse.lab.imtlucca.it/~bemporad/teaching/mpc/imt/1-linear_mpc.pdf.
Borrelli, Francesco, Alberto Bemporad, and Manfred Morari. 2017. Predictive Control for Linear and Hybrid Systems. Cambridge, New York: Cambridge University Press. http://cse.lab.imtlucca.it/~bemporad/publications/papers/BBMbook.pdf.
Gros, Sebastien, and Moritz Diehl. 2022. “Numerical Optimal Control (Draft).” Systems Control and Optimization Laboratory, IMTEK, Faculty of Engineering, University of Freiburg. https://www.syscop.de/files/2020ss/NOC/book-NOCSE.pdf.
Rawlings, James B., David Q. Mayne, and Moritz M. Diehl. 2017. Model Predictive Control: Theory, Computation, and Design. 2nd ed. Madison, Wisconsin: Nob Hill Publishing, LLC. http://www.nobhillpublishing.com/mpc-paperback/index-mpc.html.