References

Dynamic programming for continuous-time systems is typically not covered in standard texts on dynamic programming, because those focus mainly on discrete-time systems. But there is no shortage of discussions of the HJB equation in control theory texts. Our introductory treatment here is based on Section 6.3 of Lewis, Vrabie, and Syrmos (2012).

The classical Anderson and Moore (2007) uses the HJB equation as the main tool for solving various versions of the LQR problem.

Liberzon (2011) discusses the HJB equation in Chapter 5. In Section 5.2, he also discusses the connection with Pontryagin's principle.



Anderson, Brian D. O., and John B. Moore. 2007. Optimal Control: Linear Quadratic Methods. Reprint of the 1989 edition. Dover Publications. http://users.cecs.anu.edu.au/~john/papers/BOOK/B03.PDF.
Lewis, Frank L., Draguna Vrabie, and Vassilis L. Syrmos. 2012. Optimal Control. 3rd ed. John Wiley & Sons. https://lewisgroup.uta.edu/FL%20books/Lewis%20optimal%20control%203rd%20edition%202012.pdf.
Liberzon, Daniel. 2011. Calculus of Variations and Optimal Control Theory: A Concise Introduction. Princeton University Press. http://liberzon.csl.illinois.edu/teaching/cvoc/cvoc.html.