Noah Smith writing at Noahpinion offers up “The Most Damning Critique of DSGE”.
This is heady but fascinating stuff for students on the borders of macroeconomics and finance. The comment thread is where the real action is, and it drew in some big-name people.
Here’s some background. First there was Keynes. Decades later, Keynesians built big macroeconometric models that didn’t work well, but were better than nothing. These go by a number of names: FRB-MIT and Klein are common. Klein won a Nobel Prize in 1980, in part, for developing these models.
In the 1970’s, academic macroeconomists started shooting holes in Keynesian theory and in Klein-type macroeconometric models. The big hole was shot by Lucas with what is now known as the Lucas critique. It says that to be useful, an econometric model needs to estimate coefficients that can be treated as constants and used for future policy decisions. For example, a Keynesian might argue that an extra dollar of government spending always causes GDP to go up by 2 dollars. Lucas argued that the theory underlying the Keynesian models led to econometrics in which you’d get that estimate of 2, but that acting on it would cause the coefficient to change to, say, 3 … so that your policy never did what you thought it would. [DIGRESSION: You may have noticed that politicians have a lot of trouble devising policies that work according to plan. Maybe Lucas was on to something.]
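The Lucas critique can be seen in a few lines of simulation. This is a minimal sketch, with all numbers invented for illustration: agents consume out of expected lifetime income, so when the “policy regime” changes the persistence of income shocks, the regression coefficient relating consumption to income changes too, even though the agents’ behavior never changed.

```python
import random

def simulate_slope(rho, beta=0.95, n=5000, seed=0):
    """Slope of consumption on income when income is AR(1) with
    persistence rho and agents consume a constant fraction of the
    present value of expected future income (permanent-income logic).
    All parameter values here are invented for illustration."""
    rng = random.Random(seed)
    k = (1 - beta) / (1 - beta * rho)  # agents' true response to income
    y, ys, cs = 0.0, [], []
    for _ in range(n):
        y = rho * y + rng.gauss(0, 1)
        c = k * y + rng.gauss(0, 0.01)  # tiny measurement noise
        ys.append(y)
        cs.append(c)
    my, mc = sum(ys) / n, sum(cs) / n
    cov = sum((a - my) * (b - mc) for a, b in zip(ys, cs))
    var = sum((a - my) ** 2 for a in ys)
    return cov / var

# Estimate the "marginal propensity to consume" under persistent income,
# then change the regime so shocks are transitory: the coefficient moves.
slope_persistent = simulate_slope(rho=0.9)
slope_transitory = simulate_slope(rho=0.0)
print(round(slope_persistent, 2), round(slope_transitory, 2))
```

The estimated coefficient is not a structural constant: it is a mixture of true behavior (`beta`) and the policy environment (`rho`), which is exactly Lucas’ point.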
By the late 1980’s there were two strains of thought about how to go forward that we still use.
One was based on Sims’ vector autoregression (VAR for short, or its sibling, the VECM). Sims won a Nobel Prize for this work in 2011. The second was Kydland and Prescott’s dynamic stochastic general equilibrium model (DSGE). Kydland and Prescott won a Nobel Prize for this work in 2004.
The thing is, none of these three methods work very well. You may have noticed that macroeconomics can be really complex, and this is probably why.
So the view at Noahpinion is that DSGE models have failed a market test. If they were better, they’d have been widely adopted by financial firms trying to gain an edge and earn higher returns. He then asked professionals and academics to chip in with their thoughts. This is where it gets interesting.
What follows is a lot of (seriously) informed, troll-free discussion of how seriously we should take macroeconomics. You may not see it all clearly, but for me it touches on about a dozen different parts of the text I wrote for your class. Here’s a primer:
- Is the macroeconomy well-forecastable at all? No one is saying that it can’t be forecast, but everyone says that our ability to forecast it well is lousy. Yet no one thinks that weather forecasts are useless just because they’re not very accurate. So perhaps we need to take the same approach to macroeconomics: the problem isn’t the models and their forecasts but our expectations of what forecasters are able to produce for us.
- Should we expect a macroeconomy to be forecastable? This is related to efficiency in financial markets. If we can figure out what will happen in the future, and then take action to avoid what we don’t like, then it will never happen … and our forecasts are wrong. This is odd, but it’s no different than asking why you didn’t forecast your last traffic accident: sharp people recognize that the accident occurred because it couldn’t be forecast, and the accidents that didn’t occur are the ones we could forecast. Perhaps the problem is our insistence that we should be able to forecast the unforecastable. There’s a fascinating World War II story about how this came up in statistics in the footnote.*
- Can we make passable and somewhat useful forecasts without thinking too hard about the theory, by just being observant instead? We do this all the time: you don’t need to understand meteorology, or even check a weather report, to know when to take a coat with you. By the same token, can financial professionals get a lot of the benefit that’s to be had by incorporating macroeconomics into their financial decisions … by just knowing a little bit about the data, the trends, and which data go together? Formally, these are called unconditional forecasts. Often, they are based on reduced forms (regressions showing the correlations between data that don’t impose any structure on how the series relate to each other), or charts.
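A reduced-form unconditional forecast can be sketched in a few lines. This is illustrative only: the series below is simulated, not real GDP data, and the persistence parameter is invented. The point is that regressing a series on its own lag and extrapolating imposes no economic structure at all.

```python
import random

# A reduced-form "unconditional" forecast: regress a series on its own
# lag and extrapolate. No economic theory is imposed. The series is
# simulated for illustration; it is not real data.
rng = random.Random(1)
growth = [2.0]
for _ in range(400):
    # assumed true process: mean 2.0, persistence 0.5
    growth.append(2.0 + 0.5 * (growth[-1] - 2.0) + rng.gauss(0, 0.5))

x, y = growth[:-1], growth[1:]                  # lag and lead
mx, my = sum(x) / len(x), sum(y) / len(y)
b = sum((a - c) * (d - my) for a, c, d in zip(x, [mx] * len(x), y)) / \
    sum((a - mx) ** 2 for a in x)
a0 = my - b * mx                                # OLS intercept
forecast = a0 + b * growth[-1]                  # next-period forecast
print(round(b, 2), round(forecast, 2))
```

Nothing here says *why* the series is persistent, which is exactly why this kind of forecast is cheap to make and useless for analyzing a policy change.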
- Is it useful to impose Keynesian structure to understand how the data works? Seventy years ago, economists working for the Cowles Commission recognized that the data and relationships we observe are consistent with more than one story of the underlying causality. In football, this insight would be that winning teams run the football; but do they run to win the game, or run because they are winning the game? In econometrics, this is called an identification problem. Large-scale (hundreds of equations) macroeconometric models became available in the 1960’s that solved the identification problem by imposing structure from Keynesian theory. These are better than nothing, but their forecasting performance wasn’t great and plateaued early on. One of the first shots at Keynesian macroeconomics was made by Monetarists working at the Federal Reserve Bank of St. Louis in the late 1960’s, who showed that you could match the performance of a huge and complex Keynesian model with a small and simple Monetarist model. Later, these ideas merged in the FRB-MIT-Penn model; the FRB is the Monetarist part, while the Keynesian part came out of MIT and the University of Pennsylvania. Part of the gist of the comment thread is that a lot of private firms, and most governments, still use either this model or its cousin, Klein’s structural Keynesian model (known as the Wharton model, and still marketed by WEFA, a division of IHS Global Insight).
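The identification problem is easy to demonstrate with the classic supply-and-demand example. This is a toy sketch with invented coefficients: demand slopes down and supply slopes up by construction, but a regression on the equilibrium prices and quantities recovers neither curve.

```python
import random

# The identification problem in miniature: demand is q = -p + u_d and
# supply is q = p + u_s (slopes -1 and +1 by construction, both shocks
# invented). Regressing equilibrium q on p recovers neither slope.
rng = random.Random(4)
prices, quantities = [], []
for _ in range(5000):
    u_d = rng.gauss(0, 1)                # demand shock
    u_s = rng.gauss(0, 1)                # supply shock
    p = (u_d - u_s) / 2                  # solve the two curves jointly
    q = (u_d + u_s) / 2
    prices.append(p)
    quantities.append(q)

mp = sum(prices) / len(prices)
mq = sum(quantities) / len(quantities)
slope = sum((a - mp) * (b - mq) for a, b in zip(prices, quantities)) / \
        sum((a - mp) ** 2 for a in prices)
print(round(slope, 2))   # near 0: neither the demand nor the supply slope
```

With equally noisy shocks the fitted slope is a blend of the two curves, close to zero. Imposing structure (for example, assuming a variable shifts only the supply curve) is what lets you pull one curve out of the cloud, and that is precisely what the Keynesian models did.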
- Academics started discarding these Keynesian models in the 1970’s, and by the 1980’s had started to develop dynamic stochastic general equilibrium models (DSGEs). They had recognized that there were theoretical problems with the underlying Keynesian macroeconomics in those big models, and they reworked the theory from the bottom up to be robust to the Lucas critique. The football analogy is that in those big models the offensive coach drew up the play on the chalkboard, but it didn’t work out as well in the game. DSGEs address this by working out how the defense is going to respond to the play the offensive coach drew up, which changes how that coach would draw the play, which changes how the defense will respond, and so on. Solve that out far enough, and you have a better description of the structure underlying the data you observe. The thing is, it’s a lot of work. Noahpinion is asserting that the work wasn’t worth it because there wasn’t enough improvement in performance for private firms and government agencies to switch over to these models. Later in the semester, when we build a growth model, we’re starting down a path that ends with DSGEs.
- Both of the above approaches are structural, and they produce what Noahpinion calls policy-conditional forecasts (which I’ll just call conditional for short). They’re called conditional because they depend on the underlying theory being correct. Forecasts are unconditional when they use less (or no) theory to relate the data together. John Cochrane’s comment argues that unconditional forecasts are OK for figuring out how to invest, but that you need a conditional forecast to figure out how the variables are going to respond to a change in policy (e.g., introducing Obamacare). The football analogy is that you can probably bet on football without knowing much about the game and do OK, and someone who digs deeper into the football data might get some edge, but not much, because unconditional forecasts work well. But you can’t win a football game (as opposed to just betting on it) without knowing something about the structure of how the game works and making conditional forecasts: for example, the quick kick is still legal but has largely disappeared as a football play because it doesn’t offer an advantage in the contemporary game, which is structured to make it easier for offenses to earn yardage.
- Sims was involved in the early part of the research program that eventually produced DSGEs, but split off quite early. His position was that the assumptions necessary to impose structure on the data were never likely to be realistic, no matter what the theory. It’s like an econometric model is a water balloon: if the Lucas critique is one end of the balloon, and you squeeze it to hold it still, you create a problem at the other end … and you can’t squeeze all parts of the water balloon at the same time. His approach was to impose a minimal structure on the reduced forms to produce a somewhat improved unconditional forecast. The football analogy is that a minimal structure might be that a team runs the ball early in the game because they think it will help them win, but runs at the end of the game because they are already winning. And you don’t need to know a lot about the structure of football — what running play to call, what blocking scheme to use, and so on — to use that insight. For about 30 years it’s been known that the resulting VARs can match the forecasting ability of either kind of structural model, with a lot less work. When we do time series analysis in class, we are on the path that leads to VARs.
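A VAR is just the reduced-form regression idea extended to several series at once: each variable is regressed on the lags of all of them. Here is a minimal two-variable VAR(1) sketch, with a made-up coefficient matrix; in practice you would estimate something like this with a package (e.g., statsmodels’ VAR) on real data.

```python
import random

# A minimal two-variable VAR(1), estimated equation by equation with OLS.
# The coefficient matrix A is invented for the simulation.
rng = random.Random(2)
A = [[0.5, 0.2],
     [0.1, 0.4]]          # assumed true dynamics
y = [0.0, 0.0]
data = []
for _ in range(2000):
    y = [A[i][0] * y[0] + A[i][1] * y[1] + rng.gauss(0, 1) for i in range(2)]
    data.append(y)

def ols2(x1, x2, target):
    """Solve the two-regressor normal equations by Cramer's rule."""
    s11 = sum(a * a for a in x1)
    s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    t1 = sum(a * b for a, b in zip(x1, target))
    t2 = sum(a * b for a, b in zip(x2, target))
    det = s11 * s22 - s12 * s12
    return (t1 * s22 - t2 * s12) / det, (s11 * t2 - s12 * t1) / det

x1 = [row[0] for row in data[:-1]]
x2 = [row[1] for row in data[:-1]]
A_hat = [ols2(x1, x2, [row[i] for row in data[1:]]) for i in range(2)]
forecast = [A_hat[i][0] * data[-1][0] + A_hat[i][1] * data[-1][1]
            for i in range(2)]
print(A_hat, forecast)
```

Notice what is missing: no theory says which variable causes which. That is Sims’ minimal-structure bargain, and it is why a VAR is so much less work than a structural model.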
- If macroeconomics is both hard to understand, and hard to get something useful out of, why bother with it? Heck, why bother with macroeconomists like Tufte? This comes up in the middle of the comment thread, and leads to this other post on Noahpinion. There’s an aphorism that if your performance is going to be measured, you should give the evaluator a ruler of your choosing for them to use … because otherwise you don’t know what ruler they’ll choose. A constructive view of this problem is found by noting that people are going to discuss policy and make policy decisions whether or not there are macroeconomists around … and those policymakers often have some pretty goofy ideas.
… So if there were no academic and Fed macroeconomists around to advise policymakers, who would policymakers listen to on economic matters?
My guess: Some very dangerous people.
For all the talk of academic macro being politicized, it's much less politicized than the macroeconomic discussion outside of the research community. My own experience is that most macroeconomists are pretty apolitical, and research supports that...but even if my sample is biased, macro's interventionist and laissez-faire schools are pretty close to each other ideologically, compared to, say A) armchair-theorizing politicians, B) TV commentators, C) the denizens of internet forums. It really is a jungle out there. You have David Stockman. You have Ron Paul and his followers. You have David Graeber and his followers. And worse. You have "Austrians" who think all of economics can be deduced from some vague derp. You have Marxists who think - well, I'm not sure, because they tend to denounce and vilify you if you even ask them what they mean, but it sounds nuts. In short you have a cavalcade of vast unending wackitude, often with a proven track record of wrecking economies and societies.
So it's possible to see macroeconomists as doing plenty of good, simply by sitting there not being absolute wackaloons. A million DSGE models from which it is impossible to select sounds a lot better to me than three or four totally nutcase worldviews, the selection of any one of which is likely to cause human tragedy on a vast scale. (Note: This idea, of macroeconomists as a vaccine against macro-lunacy, was first suggested to me by Justin Wolfers.)
- A parallel point is that perhaps the advances in macro models aren’t used by people in finance because they are far more incompetent than we’re willing to admit, and they can’t conceive that the macro models can improve on what they already “know”:
… Financial companies are run by people who don't have a very good intuition for (macro)economics. …
DSGEs will only really be accepted if they match these managers' intuitions, which will only happen if they are also broken and useless.
- Maybe financial firms don’t use macroeconometric models because macroeconomists aren’t building them to sell. I think this view is a bit childish, but there’s a big name economist in the thread making it. One of the commenters notes that his career is based on selling the output of Klein-style models, and he can’t find anyone coming out of school even trained to use them.
- Perhaps all we want is stories that seem plausible, rather than theory and data that take work. The weather analogy might work well here: why are so many TV weather people either “big personalities” or unusually attractive eye candy? Maybe it’s because we know the weather is somewhat unpredictable, so why not get a plausible story from someone we like to listen to or watch, rather than the deeper analysis you’d find on The Weather Channel? So, in the realm of policy, perhaps Obama is exactly the sort of macroeconomist that many people want.
* In World War II, England hired a statistician to help figure out how to keep their planes from being shot down. Prior to this, the planes had come back full of holes, they’d added extra armor where the holes were, sent the planes out again … and many of them were still shot down. I’m not making this up: the statistician immediately said that they’d done the armoring backwards. The places where the returning planes had holes were where a bullet hit could be survived. They weren’t seeing any bullet holes in the other spots because hits there were leading to immediate crashes. The British military was incredulous, but followed the advice and shifted armor to the spots without holes … and increased the rate at which their planes returned. This is a similar argument to why you should look for an edge by applying macroeconomics to finance, despite the fact that using macro is unlikely to lead to an edge.
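The survivorship bias in that story is easy to reproduce. This is a toy simulation with invented hit and survival probabilities: bullets strike “engine” and “fuselage” zones equally often, but engine hits usually down the plane, so counting holes only on planes that return reverses the picture.

```python
import random

# Toy version of the aircraft story. Hit zones and probabilities are
# invented for illustration: hits split evenly between zones, but each
# engine hit is usually fatal, so returning planes under-report them.
rng = random.Random(3)
returned_holes = {"engine": 0, "fuselage": 0}
for _ in range(10000):
    holes = {"engine": 0, "fuselage": 0}
    survived = True
    for _ in range(rng.randint(1, 5)):           # a few hits per sortie
        zone = rng.choice(["engine", "fuselage"])
        holes[zone] += 1
        if zone == "engine" and rng.random() < 0.8:
            survived = False                     # engine hits usually fatal
    if survived:
        for zone, n in holes.items():
            returned_holes[zone] += n            # we only see survivors
print(returned_holes)
```

Even though hits are split 50/50, the survivors come back covered in fuselage holes and nearly free of engine holes, which is exactly why the naive armoring plan was backwards.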
Cross posted from SUU Macroblog, which is required reading for my upper level undergraduates.