References

Dynamic programming for continuous-time systems is typically not covered in standard texts on dynamic programming, which focus mainly on discrete-time systems. There is, however, no shortage of discussions of the HJB equation in control theory texts. Our introductory treatment here is based on Section 6.3 in [1].

The classic [2] uses the HJB equation as the main tool for solving various versions of the LQR problem.

[3] discusses the HJB equation in Chapter 5. In Section 5.2 it also discusses the connection with Pontryagin's principle.



[1]
F. L. Lewis, D. Vrabie, and V. L. Syrmos, Optimal Control, 3rd ed. John Wiley & Sons, 2012. Available: https://lewisgroup.uta.edu/FL%20books/Lewis%20optimal%20control%203rd%20edition%202012.pdf
[2]
B. D. O. Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods, Reprint of the 1989 edition. Dover Publications, 2007. Available: http://users.cecs.anu.edu.au/~john/papers/BOOK/B03.PDF
[3]
D. Liberzon, Calculus of Variations and Optimal Control Theory: A Concise Introduction. Princeton University Press, 2011. Available: http://liberzon.csl.illinois.edu/teaching/cvoc/cvoc.html