James Surowiecki at The New Yorker had a nice column last month on "Punditonomics," the tendency of much public discussion to focus on individuals who seem to have forecast one or two big events in the past.
The economic incentive is clear:
Experts in a wide range of fields are prone to making daring and confident forecasts, even at the risk of being wrong, because when they're right the rewards are immense. An expert who makes one great prediction can live off the success for a long time; we assume that the feat is repeatable.

But, being right once is pretty meaningless.
The most comprehensive study in this field was done by the psychologist Philip Tetlock. [JC: link here.] Over many years, he tracked some three hundred experts, asking them to estimate the probability of various geopolitical events. He found that, though a given expert may foretell one extreme event, doing so consistently was next to impossible. Experts who foresaw the breakup of Yugoslavia also thought, wrongly, that Hungary and Romania would slide into civil war. Being spectacularly right once doesn't guarantee being right in the future. In fact, the opposite may be true. In one fascinating study by the business school professors Jerker Denrell and Christina Fang, [JC: Link here] people who successfully predicted an extreme event had a worse overall forecasting record than their peers.

Most of James' examples are financial:
The history of forecasting is littered with examples of experts who were acclaimed as visionaries, only to disappoint. Two weeks before the Great Crash of 1929, Irving Fisher, one of the pioneers of economic forecasting, declared that stock prices had reached a "permanently high plateau." In the late seventies, the market-timing abilities of the investment guru Joe Granville were legendary, but he completely missed the beginning of the bull market in 1982. Elaine Garzarelli, who correctly called the crash of 1987, pronounced in October of 2007 that she was "absolutely bullish" on the stock market. That year, the banking analyst Meredith Whitney became famous for her bearish but accurate prediction that Citigroup would have to slash its dividend and take billions in writedowns. But she was woefully wrong when, just a few years later, she warned, on "60 Minutes," that cities in the U.S. were likely to default, resulting in "hundreds of billions of dollars in losses to investors." [JC: so far!] ...

Criticizing financial forecasts is, I think, a bit too easy. Blog readers will have more in mind the many debates over who saw the financial crisis coming or didn't, who called the housing "bubble" or didn't, who thought the recession would turn into another great depression or didn't, who saw the current endless slump coming or didn't, who saw the European debt crisis coming or didn't, who saw its end or didn't, who has been expecting inflation, who has been expecting deflation, who said that reduced government expenditures would lead to a new recession, and so on and so on. The whole Paul Krugman - Niall Ferguson debate over who said what when comes to mind too.
I think this little article makes clear why these are such hopeless and profoundly unscientific debates. (Which, you may have wondered, is why I have completely ignored them so far.) Mining old blog posts for successes -- or damning people and their "models" for selected failures -- proves nothing. To learn something about economic logic and the ability of economic ideas to understand cause and effect, you at least have to assemble an entire forecast record.
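To make "assemble an entire forecast record" concrete, here is a minimal sketch, in Python with made-up numbers (not anyone's actual record), of the difference between scoring one remembered call and scoring everything the forecaster said, using the standard Brier score:

```python
# A minimal sketch (illustrative data, not anyone's actual record):
# score a pundit's *entire* probability-forecast record with Brier
# scores, rather than citing the one remembered success.

def brier(forecasts, outcomes):
    """Mean squared error between stated probabilities and what happened
    (0 is perfect; always saying 0.5 scores 0.25)."""
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical pundit: one spectacular call (said 0.95, event happened),
# followed by a string of equally confident calls that did not pan out.
probs    = [0.95, 0.90, 0.80, 0.85, 0.70, 0.90]
happened = [1,    0,    0,    0,    0,    0]

print(f"cherry-picked 'great call' score: {brier(probs[:1], happened[:1]):.3f}")
print(f"full-record score:                {brier(probs, happened):.3f}")
print(f"always-say-50% benchmark:         {brier([0.5] * 6, happened):.3f}")
```

On the one famous call the pundit looks nearly perfect; over the whole record he scores worse than someone who never ventured beyond a coin flip, which is exactly the Denrell-Fang pattern above.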
More deeply, any serious forecast, reflective of the worth of an economic theory, must be written down and divorced from the judgment of the forecaster. Even if, say, Bob Shiller turned out to be a psychic who could tell when bubbles were happening, and never got a forecast wrong, that is fairly useless knowledge unless Bob can somehow write down his process so that someone else can do it too. Otherwise, this is like saying to a climate scientist, "well you thought it would rain last weekend, so surely 'your model' is wrong."
Academic economics does this. We write down models, and test them by whether the models' predictions, in anyone's hands, agree with the data. It's interesting that the policy debates, even by ex-academics, go back to such solidly pre-scientific witch-doctor evaluation.
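A toy version of that discipline, again only a sketch with simulated data: the forecast rule below is written down as code, so anyone can run it on the same data and check its predictions against outcomes, with no forecaster judgment left in the loop.

```python
# Illustration of "written down and divorced from the forecaster":
# the rule is code, the test is mechanical. Data are simulated.
import random

random.seed(0)

def forecast(history):
    """The written-down rule: predict next value as 0.8 times the last one
    (an AR(1) guess, chosen here purely for illustration)."""
    return 0.8 * history[-1]

# Simulate a persistent series, then evaluate the rule one step ahead.
x = [1.0]
for _ in range(200):
    x.append(0.8 * x[-1] + random.gauss(0, 1))

rule_err  = [(forecast(x[:t]) - x[t]) ** 2 for t in range(1, len(x))]
naive_err = [(x[t - 1] - x[t]) ** 2 for t in range(1, len(x))]  # "no change" benchmark

print(f"rule RMSE : {(sum(rule_err) / len(rule_err)) ** 0.5:.3f}")
print(f"naive RMSE: {(sum(naive_err) / len(naive_err)) ** 0.5:.3f}")
```

Whether the rule beats the naive benchmark is then a fact about the rule and the data, not about anyone's psychic powers.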
Another cool link on this subject was sent to me by a colleague. The graph on the left comes from a speech given by Masaaki Shirakawa, Governor of the Bank of Japan. We usually think of economists as fallible, but demography, well, that should be easy to forecast. Apparently not so.
I include this picture for its beautiful art. Forecasts of GDP, inflation, budget deficits, and interest rates all look about this way. Forecasts should also always include standard errors, but past histories in this graphical form might be more informative and easier to communicate.
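For anyone who wants to draw this kind of picture, here is a minimal matplotlib sketch (simulated placeholder numbers, not the BOJ's data) of overlaying successive forecast vintages on the realized series:

```python
# Sketch of the graphical form suggested above: the realized series in
# black, with the fan of past forecast vintages in gray, so the whole
# forecast record is visible at a glance. Data are simulated placeholders.
import random
import matplotlib.pyplot as plt

random.seed(1)
years = list(range(1990, 2015))
actual = [2.0]
for _ in years[1:]:
    actual.append(actual[-1] + random.gauss(-0.1, 0.4))  # slowly drifting outcome

fig, ax = plt.subplots()
ax.plot(years, actual, color="black", linewidth=2, label="actual")

# Each vintage starts at the then-current value and marches hopefully
# upward, mimicking the repeated over-prediction in the Shirakawa graph.
for i in range(0, len(years) - 6, 3):
    horizon = years[i : i + 6]
    path = [actual[i] + 0.3 * h for h in range(6)]
    ax.plot(horizon, path, color="gray", linewidth=1,
            label="forecast vintages" if i == 0 else None)

ax.set_xlabel("year")
ax.set_ylabel("forecast (%)")
ax.legend()
plt.show()
```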