Optimal control: Grail or guff?
Janet Yellen’s confirmation as the new US Federal Reserve Chairwoman by the US Senate two weeks ago puts the concept of “optimal control” in a prominent position within the toolkit and jargon of central banking. The concept has been central to Yellen’s reflections on monetary policy in important speeches over the last couple of years.
What is “optimal control”? If the term has a Cold War echo for you, your instincts are good; that was the context in which it was first applied. Developed independently by mathematicians Lev Pontryagin of the Soviet Union and Richard Bellman of the US in the early 1950s, the tools of optimal control theory are prominently used in guiding rockets and missiles. They rapidly found their way into applied economics and econometrics, where they were especially promoted and developed by Princeton Professor Gregory Chow in the late 1960s and early 1970s.
Optimal control tries to determine the evolution of so-called control variables (such as the amount of fuel a rocket burns or the level of Fed fund rates) to achieve an optimal evolution for so-called state variables (the flight trajectory of the rocket or, in the case of an economy, inflation and unemployment rates). To do so, optimal control theory relies on rather complex models, in which a value function that depends on the state variables is maximized or minimized (in the latter case the value function is called a loss function).
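In stylized textbook treatments — a sketch of the general setup, not the Fed’s actual model, and with purely illustrative symbols — the central bank’s problem is often written as minimizing an expected discounted quadratic loss:

```latex
\min_{\{i_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t
\Big[ (\pi_t - \pi^*)^2 + \lambda\,(u_t - u^*)^2 \Big]
```

subject to a model of the economy linking the control variable $i_t$ (the policy rate) to the state variables $\pi_t$ (inflation) and $u_t$ (unemployment). Here $\pi^*$ and $u^*$ are the targets, $\beta$ is a discount factor, and $\lambda$ is the relative weight placed on unemployment versus inflation.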
Since heavy mathematics and computing are usually involved, the methodology can be intimidating. However, despite a highly scientific and hence trustworthy appearance, optimal control theory comes with serious caveats when used to steer the economy.
Chief among them is the question of how correct the model is with which a value function is maximized or a loss function minimized.
I don’t want to dispute Yellen’s forecasting abilities – which are better than average according to many who have worked with her. Neither do I want to question the professionalism of the roughly 250 PhDs in economics working at the Fed (let alone the thousands of economists associated through grants, visiting scholarships and other assignments).
However, Ben Bernanke’s statement of June 2008, “the risk that the economy has entered a substantial downturn appears to have diminished over the past month or so,” suggests how difficult, if not impossible, forecasting ultimately is.
Moreover, even if by sheer luck the model used for optimal control is “right” at some point in time, there is no guarantee that it will also be right in the future. Model parameters or even the complete model structure might change over time, not least because the behavior of economic agents might change. It might even change because people adapt their behavior in anticipation of Fed actions.
Charles Plosser, the Philadelphia Fed President and this year a voting member of the Federal Open Market Committee, has, for example, recently questioned how Fed policy is currently conducted. Is potential US GDP really so far away from actual GDP? Is there really so much “slack” in the economy? And is the “non-accelerating inflation rate of unemployment,” the NAIRU, really as low as the Fed forecasts imply?
Finally, perhaps the most devastating critique of optimal control, and the reason it does not differ much from other tools and rules of monetary policy: the loss function itself, i.e. the relative weights assigned to inflation and unemployment, is not a given. In fact, it will differ according to the political inclinations of each individual.
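The point can be made concrete with a minimal, purely illustrative sketch: two policymakers share the same (hypothetical) model of the economy — a simple Phillips-curve relation and a quadratic loss — but weight unemployment differently, and so arrive at different “optimal” policies. All parameter values below are invented for illustration; none are Fed estimates.

```python
# Illustrative only: same model, different loss-function weights,
# different "optimal" policy. All numbers are hypothetical.

def unemployment(inflation, natural_rate=6.0, expected_inflation=2.0, slope=0.5):
    """Simple Phillips-curve relation: inflation above expectations
    lowers unemployment below its natural rate."""
    return natural_rate - slope * (inflation - expected_inflation)

def loss(inflation, weight_on_unemployment,
         inflation_target=2.0, unemployment_target=5.0):
    """Quadratic loss: squared deviations of inflation and
    unemployment from their targets."""
    u = unemployment(inflation)
    return ((inflation - inflation_target) ** 2
            + weight_on_unemployment * (u - unemployment_target) ** 2)

def optimal_inflation(weight_on_unemployment):
    """Inflation rate minimizing the loss (coarse grid search)."""
    grid = [i / 1000 for i in range(0, 8000)]  # 0.000% to 7.999%
    return min(grid, key=lambda pi: loss(pi, weight_on_unemployment))

hawk = optimal_inflation(weight_on_unemployment=0.25)  # cares mostly about inflation
dove = optimal_inflation(weight_on_unemployment=4.0)   # cares mostly about unemployment

print(f"hawk chooses inflation of {hawk:.2f}%, dove chooses {dove:.2f}%")
```

With these hypothetical numbers the “dove” tolerates noticeably higher inflation than the “hawk” — not because either has a better model, but simply because of the weights chosen, which is exactly the subjective, political element the critique points at.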
Optimal control can certainly help to streamline the decision process of a central bank. However, despite being rather complex and scientific, it is not at all immune to the political sphere and remains ultimately dependent on the subjective choices of a decision maker.