Herceg, Domagoj
(2020)
*Stochastic model predictive control of nonlinear and uncertain systems.*
Advisor: Bemporad, Prof. Alberto. Coadvisor: Sopasakis, Dr. Pantelis. pp. 149.
[IMT PhD Thesis]

Text (Doctoral thesis)
Herceg_phdthesis.pdf - Published Version. Available under License Creative Commons Attribution Non-commercial Share Alike.

## Abstract

This thesis attempts to shed additional light on pressing questions regarding the control of uncertain systems. Special focus is given to systems with uncertain uncertainty (i.e., an inexactly known probability distribution), to numerical optimization methods that make the proposed advanced control schemes usable in practice, and to systems driven by an economic controller, where stability is not always the primary objective. Current state-of-the-art methods often neglect that the underlying uncertainty in a stochastic model is itself uncertain, in the sense that its probability distribution is unknown and can only be (inaccurately) estimated from data. Hence, theoretical guarantees obtained by such methods, e.g., mean-square stability, may not hold in practice. Moreover, many control methods use convex costs as the performance index to be optimized, which may not be the most descriptive choice for real-world problems. Here, we endeavour to remedy the shortcomings of such methods, focusing on three extensions in particular: i) theoretical developments to deal with non-convex performance indices in stochastic optimal control problems, ii) novel methods to deal with “uncertainty in the uncertainty” in a rigorous and theoretically sound way, and iii) numerical optimization methods to solve these problems efficiently.

Model predictive control (MPC) is an advanced control method that has found its way into many practical applications. Since its introduction and popularization in the process industry in the 1980s, it has made its way to automotive applications, large-scale networks and robotics. MPC uses a mathematical model of a system to predict its possible future trajectories. A sequence of control actions is then calculated by solving an optimization problem that minimizes a performance index of the state and input costs along the predicted trajectories.
When the system moves to a new state, the state is measured and the whole procedure is repeated. Part of MPC's popularity stems from the fact that the framework naturally incorporates state and input constraints and handles multiple-input multiple-output systems.

Stochastic economic model predictive control is concerned with problems with non-convex costs, which arise readily in real-world applications. Rather than minimizing a deviation from a prescribed (optimal) set-point or a tracking reference, the main objective is to optimize a given economic cost functional. The control paradigm that optimizes the process economics within the MPC formulation is usually known as economic MPC (EMPC). Several research directions have discussed the closed-loop properties of EMPC-controlled deterministic systems; uncertain systems, however, have received far less attention. In this thesis we propose EMPC formulations for nonlinear Markovian switching systems which guarantee recursive feasibility, asymptotic performance bounds and constrained mean-square (MS) stability. For nonlinear systems we provide design guidelines based on the system linearization, using only mild assumptions on the system dynamics and the stage cost function.

Risk-averse model predictive control is an approach that bridges the gap between two popular control strategies, stochastic and robust MPC. In robust MPC, modeling errors and disturbances are assumed to be unknown-but-bounded quantities, and the performance index is minimized with respect to the worst-case realization of the uncertainty (the min-max approach). However, such worst-case events are unlikely to occur in practice, and this renders robust MPC severely conservative, since all statistical information, typically available from past measurements, is completely ignored. On the other hand, stochastic MPC assumes that the underlying uncertainty is a random vector following some probability distribution.
In reality, the probability distribution can not always be accurately estimated from available data, nor does it remain constant in time. Nonetheless, the theoretical guarantees of such algorithms hinge on this unrealistic assumption. Using the theory of risk measures, which originated in the field of stochastic finance, we devise novel algorithmic and theoretical solutions that combine the advantages of robust and stochastic optimal control in a unifying framework that extends both and contains both as special cases. In this thesis, we propose risk-averse formulations where the total cost of the MPC problem is expressed as a nested composition of conditional risk mappings. We focus on constrained nonlinear Markovian switching systems and derive Lyapunov-type risk-averse stability conditions. Moreover, for the nonlinear system we prescribe a linearization-based controller design procedure and show that the nonlinear system locally inherits the stability properties of its linearized counterpart. Finally, we propose a splitting for risk-averse problems which makes them amenable to proximal algorithms. Risk-averse problems are usually solved using stochastic dual dynamic programming or generic interior-point solvers, neither of which scales well to problems of large dimension. We show, however, that risk-averse problems possess a rich structure that can be exploited to devise very efficient and massively parallelisable solution methods.
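The average value-at-risk (AVaR, also known as CVaR), a standard coherent risk measure, illustrates how risk-averse formulations interpolate between the stochastic and robust extremes. The sketch below is illustrative only and not taken from the thesis; it computes the empirical AVaR of a sample of stage costs via the well-known variational formula AVaR_α(Z) = min_t { t + (1/α) E[(Z − t)₊] }, where α = 1 recovers the expectation (risk-neutral, stochastic MPC) and α → 0 the worst case (robust MPC).

```python
def avar(costs, alpha):
    """Empirical average value-at-risk of a list of cost samples.

    alpha = 1.0 gives the plain average (risk-neutral attitude);
    alpha -> 0 gives the maximum (worst-case attitude).
    Uses the variational formula AVaR_a(Z) = min_t { t + E[(Z - t)+] / a };
    for equally weighted samples the minimizer is attained at a sample
    point, so it suffices to scan t over the samples themselves.
    """
    if alpha <= 0.0:
        return max(costs)
    n = len(costs)
    best = float("inf")
    for t in costs:
        tail = sum(max(z - t, 0.0) for z in costs) / n
        best = min(best, t + tail / alpha)
    return best

costs = [1.0, 2.0, 3.0, 4.0]
print(avar(costs, 1.0))   # 2.5  (expectation)
print(avar(costs, 0.5))   # 3.5  (mean of the worst half)
print(avar(costs, 0.01))  # 4.0  (approaches the worst case)
```

In a risk-averse MPC cost such mappings are not applied once to the total cost but composed stage by stage along the scenario tree, which yields the nested conditional risk mappings mentioned above.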

| Item Type | IMT PhD Thesis |
|---|---|
| Subjects | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| PhD Course | Computer science and systems engineering |
| Identification Number | https://doi.org/10.6092/imtlucca/e-theses/309 |
| NBN Number | urn:nbn:it:imtlucca-27024 |
| Date Deposited | 07 Apr 2020 07:50 |
| URI | http://e-theses.imtlucca.it/id/eprint/309 |
