
What is a Pontryagin System? | Pontryagin Maximum Principle, Optimal Control Theory, & Dynamic Optimization

Jon Law
3 min read · Jun 6, 2025
[Figure: Pontryagin — Optimal Control Framework]

A Pontryagin system is a system of equations from optimal control theory used to find optimal controls for dynamical systems. In this article, I’ll break down the concept, its background (the maximum principle), and its use cases, with no prior experience needed!

First, optimal control theory is a branch of control theory, which uses equations and algorithms to guide the behavior of dynamic systems (meaning systems with states that change over time; you can learn more here) toward some desired outcome. It’s applied in all kinds of fields, from flight control systems that keep aircraft stable to industrial robotics and autonomous vehicles.

Optimal control theory, in turn, focuses specifically on optimizing an objective function in a dynamic system, a dynamic system being a model that describes how the states of a system change over time. An objective function, meanwhile, is an expression defining a goal to be mathematically maximized (gain, profit, utility, etc.) or minimized (error, loss, unemployment, time, etc.). So, simply, if we frame this like a game, optimal control theory is about finding the best way to play the game, given the rules and the goal (get the most points, survive the longest, don’t lose to your little brother, and so on).
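
For readers who like symbols, a generic finite-horizon problem can be written as follows (a standard textbook formulation; the particular symbols are illustrative, not tied to any one application):

```latex
\max_{u(\cdot)} \; J = \varphi\big(x(T)\big) + \int_0^T L\big(x(t), u(t), t\big)\, dt
\quad \text{subject to} \quad \dot{x}(t) = f\big(x(t), u(t), t\big), \quad x(0) = x_0
```

Here x(t) is the state, u(t) is the control you get to choose, f describes the dynamics, L is the running payoff, and φ is a terminal payoff.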

With all that in mind, we can get to Pontryagin’s maximum principle, formulated in 1956 by the Soviet mathematician Lev Pontryagin and his students. It’s an approach, or tool of sorts, used in optimal control theory: you define a Hamiltonian function (a problem setup of sorts if you’re not familiar, comparable for the sake of understanding to a Lagrangian) and maximize it along the optimal trajectory, which is in turn determined by solving a two-point boundary value problem.
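
Concretely, for the problem above, the Hamiltonian bundles the running payoff together with the dynamics through the costate vector λ (sign conventions vary across textbooks; this is one common choice for a maximization problem):

```latex
H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\top} f(x, u, t),
\qquad u^{*}(t) = \arg\max_{u}\, H\big(x^{*}(t), u, \lambda(t), t\big)
```

The second condition is the “maximum” in the maximum principle: at every instant, the optimal control is the one that maximizes the Hamiltonian along the optimal trajectory.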

A Pontryagin system is this principle in equation form: a framework, or system of equations, used to find an optimal control. Like Lagrangians, first-order conditions, derivative tests, and any number of other mathematical processes, it’s a defined approach to solving complex, dynamic problems in an environment where the states and state transition paths are known.

Specifically, the system of equations involves these components, which we’ve already introduced (see the figure above), each written out in full just after the list:

  • Dynamic System
  • Hamiltonian
  • Costate Equations
  • Boundary Conditions
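
Putting the pieces together, the full Pontryagin system for the problem above reads (again under one common sign convention; the exact boundary conditions depend on the problem):

```latex
\dot{x} = \frac{\partial H}{\partial \lambda} = f(x, u^{*}, t), \qquad x(0) = x_0 \quad \text{(dynamic system)}
\dot{\lambda} = -\frac{\partial H}{\partial x} \quad \text{(costate equations)}
u^{*}(t) = \arg\max_{u}\, H(x, u, \lambda, t) \quad \text{(Hamiltonian maximization)}
\lambda(T) = \frac{\partial \varphi}{\partial x}\big(x(T)\big) \quad \text{(boundary/transversality condition)}
```

The state runs forward from x(0) while the costate runs backward from λ(T), which is exactly the two-point boundary value problem mentioned earlier.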

In this setup, the state-change rules feed directly into the Hamiltonian, and the costate equations are a set of first-order differential equations whose solutions, the costate variables (the dynamic analogue of Lagrange multipliers), capture the marginal value of each state variable. As the states evolve forward in time, this marginal effect (shadow price) is propagated backward from the terminal (boundary) condition, thus identifying the impact of one more unit of an input (state variable) on the objective function (reward).
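
To make that forward/backward structure concrete, here’s a minimal numerical sketch in Python using SciPy, solving the boundary value problem by “shooting” on the initial costate. The toy problem and every number in it (the horizon T, the boundary values, the bracketing interval) are assumptions chosen purely for illustration:

```python
# Minimal sketch: solve a Pontryagin system numerically by "shooting".
# Toy problem (all numbers are illustrative assumptions):
#   maximize J = integral of -u(t)^2 / 2 dt   subject to   x'(t) = u(t), x(0) = 0, x(T) = 1.
# Hamiltonian: H = -u^2/2 + lam*u. Maximizing over u: dH/du = -u + lam = 0, so u* = lam.
# Costate equation: lam' = -dH/dx = 0, so lam(t) is constant here.
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

T = 2.0  # time horizon (assumed value)

def rhs(t, y):
    """State and costate dynamics under the optimal control u* = lam."""
    x, lam = y
    u = lam            # optimality (maximum) condition
    return [u, 0.0]    # [x', lam']

def shoot(lam0):
    """Integrate forward from a guessed initial costate; return the terminal miss x(T) - 1."""
    sol = solve_ivp(rhs, (0.0, T), [0.0, lam0], rtol=1e-8)
    return sol.y[0, -1] - 1.0

# Find the initial costate for which the terminal boundary condition x(T) = 1 holds.
lam0 = brentq(shoot, -10.0, 10.0)
print(f"lam(0) = {lam0:.4f}, optimal control u = {lam0:.4f} (analytic answer: 1/T = {1/T:.4f})")
```

In this toy case the costate happens to be constant, so the optimal control is constant too; the shooting step is what enforces the boundary condition at time T, just as the terminal condition pins down the shadow prices in the description above.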

As stated, this approach is used in all kinds of fields—whether helping a rocket reach a target orbit, an economy maximize GDP growth, or a person minimize the time needed to get to work.

Again, if you need a refresher on basic building blocks like states, actions, and dynamic optimization, check out this recent article.

Otherwise, if you’re into (or studying!) economics, you’re in the right place; make sure to check out my other articles here. Thanks for reading!

Written by Jon Law

5x author - economics - 700k on YouTube @jonlawedu!
