Statistical analysis moons the bull
There’s an interesting article on statistical prediction in the FT this morning. The FT firewalls articles after a day, so I suspect this link will break, unless you are an FT subscriber.
I want to quote some chunks of the article, not because it’s smart, but because it’s one of the most embarrassing essays ever to appear in print. The trouble is that the author, Ian Ayres, is pimping a new book, and the FT article is adapted from that book. Since the book was not written in the last month, it says things about the financial markets that, in the light of recent events, are at best grimly hilarious. Presumably if you were a national-security pundit with a book release scheduled for October 2001, and your book observed that the threat of terrorism was overheated propaganda churned out by a cabal of neoconservative wingnuts, your publisher would have the sense to pull the book. Or at least hold it until 2005 or so.
You’d think the FT editor could at least have done a quick snip on this graf:
Since the 1950s, social scientists have been comparing the predictive accuracies of number crunchers and traditional experts—and finding that statistical models consistently outpredict experts. But now that revelation has become a revolution in which companies, investors and policymakers use analysis of huge datasets to discover empirical correlations between seemingly unrelated things. Want to hedge a large purchase of euros? Turns out you should sell a carefully balanced portfolio of 26 other stocks and commodities that might include some shares in Wal-Mart.
Um, sure. Want your hedge fund to go tits-up and explode like a rotting whale, raining putrid financial instruments all over Connecticut? Turns out you should assume financial markets are stochastic systems in which the future mirrors the past. Apparently some things that are “seemingly unrelated” turn out to be actually unrelated.
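For the curious, the “empirical correlation” hedge the FT describes amounts to a regression: fit the euro’s historical returns against a basket of other assets and hold the fitted weights as your hedge. Here is a toy sketch, with entirely made-up data (nothing below comes from Ayres’ book or any real hedging desk), showing exactly where this goes wrong when the historical relationship stops holding:

```python
import numpy as np

# Hypothetical illustration: regress "euro" returns on a basket of 26 other
# assets, as in the FT's example, and use the coefficients as hedge weights.
rng = np.random.default_rng(0)

n_days, n_assets = 500, 26
basket = rng.normal(0, 0.01, size=(n_days, n_assets))   # daily returns of 26 assets
true_w = rng.normal(0, 1, size=n_assets)                # the "true" historical relation
euro = basket @ true_w + rng.normal(0, 0.002, n_days)   # euro returns, mostly explained

# Least-squares fit over the historical window gives the hedge weights.
weights, *_ = np.linalg.lstsq(basket, euro, rcond=None)

# In-sample, the hedge looks terrific: tiny residual volatility.
residual = euro - basket @ weights
print("in-sample residual vol: ", residual.std())

# But if the regime shifts (the underlying relation changes), the same
# weights leave you badly exposed.
shifted = basket @ (true_w + rng.normal(0, 1, n_assets)) + rng.normal(0, 0.002, n_days)
print("post-shift residual vol:", (shifted - basket @ weights).std())
```

Note that nothing failed in the math. The least-squares fit is as good as fits get. What failed is the assumption that the future mirrors the past.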
As one guest on Brad Setser’s blog (the Mos Eisley of macro-finance geekdom—but see also Macro Man, Nihon Cassandra, Winter, Plant, Waldman, and Hellasious) writes:
It seems like there are two points of view running through this blog:
(1) the securitization process was fundamentally flawed;
(2) the process wasn’t flawed, but buyers weren’t prepared for predictable downgrades.
Twofish: “You have a CDO that’s divided into three tranches. Let’s call them high risk, medium risk, and low risk. Everyone knows that the high risk CDO will lose money, the question is how much. You do the calculation and it shows that the low risk tranche has a 2% chance of losing money.”
But the latter sentence is a prediction about the future. This prediction is based on assumptions (e.g., that x percent of the underlying loans will behave like some population of loans on which we have historical data). The models only address correlations between the different cash flows to the degree that their assumptions about the behavior of the underlying mortgages are correct. The problem is that times change and behavior changes. It’s hard for some of us to believe that models that process historical data can hope to give reliable predictions about future events.
If the models are only a little wrong—and the modelers would have taken this into account—you have a good CDO. If the models are a lot wrong, you have a lemon.
It seems that the people who believe (1) think that the models are likely to be wildly off the mark, while the people who believe (2) think the models are basically sound.
The problem I have with (2) is this: If in fact the process by which CDOs were created is fundamentally sound, why aren’t people in the know making a killing buying CDOs up right now?
Indeed. “Bueller? Bueller?”
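Twofish’s “2% chance of losing money” is worth poking at, because that number is not a measurement. It is the output of an assumed default correlation. The following is a toy one-factor Gaussian simulation, with every parameter invented for illustration rather than taken from any actual CDO model, showing how sensitive the “safe” senior tranche is to that single assumption:

```python
import numpy as np
from statistics import NormalDist

def senior_loss_prob(rho, n_loans=100, p_default=0.05,
                     attachment=0.15, n_sims=50_000, seed=0):
    """P(pool losses exceed the senior attachment point) under a toy
    one-factor Gaussian model with asset correlation rho.
    All parameters are hypothetical."""
    rng = np.random.default_rng(seed)
    c = NormalDist().inv_cdf(p_default)            # per-loan default threshold
    market = rng.standard_normal((n_sims, 1))      # shared economic factor
    idio = rng.standard_normal((n_sims, n_loans))  # loan-specific noise
    assets = np.sqrt(rho) * market + np.sqrt(1 - rho) * idio
    loss_frac = (assets < c).mean(axis=1)          # fraction of pool defaulting
    return (loss_frac > attachment).mean()

# With the correlation the modeler assumed, the senior tranche looks safe.
print("low assumed correlation: ", senior_loss_prob(rho=0.05))
# Raise the assumed correlation and the same tranche is in the line of fire.
print("high assumed correlation:", senior_loss_prob(rho=0.50))
```

Crank the assumed correlation from 5% to 50% and the senior tranche’s loss probability jumps dramatically, with not one underlying loan changed. The entire difference between view (1) and view (2) above lives in that one parameter.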
Fortunately, at least the financial markets are not Professor Ayres’ leading example. But instead he opens his article with this precious ort of automatist idiocy:
As the men talked, they decided to run a horse race, to create “a friendly interdisciplinary competition” to compare the accuracy of two different ways to predict the outcome of Supreme Court cases. In one corner stood the predictions of the political scientists and their flow charts, and in the other, the opinions of 83 legal experts—esteemed law professors, practitioners and pundits who would be called upon to predict the justices’ votes for cases in their areas of expertise. The assignment was to predict in advance the votes of the individual justices for every case that was argued in the Supreme Court’s 2002 term.
The experts lost. For every argued case during the 2002 term, the model predicted 75 per cent of the court’s affirm/reverse results correctly, while the legal experts collectively got only 59.1 per cent right. The computer was particularly effective at predicting the crucial swing votes of Justices O’Connor and Anthony Kennedy. The model predicted O’Connor’s vote correctly 70 per cent of the time while the experts’ success rate was only 61 per cent.
Now, why is this not surprising?
It’s not surprising because the good professor is testing a model against a survey. In other news, Professor Ayres is a world-class marathoner: in a 26.2-mile race, he can outperform my grandmother. Although considering the intellectual honesty that this exercise demonstrates, he’d probably feel the need to kick her walker as he shambled past.
If our esteemed professor were actually to establish a prediction market in which people could make actual money by predicting Supreme Court decisions, he could actually pit the best models against the best experts, as selected not by his secretary, but by an adaptive system which rewards success and punishes failure. Heck, if he has a good model, he could actually make some scratch in the game. Maybe he could even detach himself from the Official Tit.
Now, why hasn’t anyone done this? (At least, I get no useful hits when I google “Supreme Court” and “prediction market.”) Perhaps because, as I suspect, the market would have error rates well under 10%? Perhaps because, if you have a career as a crusader for “social justice,” general-purpose media whore, and permanent Fedco flack, constructing an experiment designed to demonstrate the essentially comical nature of the rotary system’s most revered agency—the Inspection Council—isn’t the idea that leaps instantly to mind? I’m just guessing, here.
If you need a pop-science book about modeling and prediction, try David Orrell’s The Future of Everything. Orrell, who actually is a mathematician, does not have a whole lot to say except that it’s a really bad idea to trust a model. As Orrell’s near-namesake once put it, “trust a snake before a Jew and a Jew before a Greek, but don’t trust an Armenian.” Hopefully, by the end of Orrell’s book, you will trust a whole clan of snake-handling Greco-Jewish Armenians before you assume that “a carefully balanced portfolio of 26 other stocks and commodities” will mirror the fluctuations of the euro.
If you’re more interested in the financial end of this, I haven’t finished Rick Bookstaber’s book, but so far I like it. A little Austrian economics would do Bookstaber a large passel of good, but his Blowing Up The Lab essay in Time is as good an introduction to the CDO disaster as I’ve seen so far.
Gary Larson’s first Far Side anthology has as its cover what I think is actually the best Far Side cartoon ever. Note, however, that the idiot photographers are only sticking their tongues out at the bull. They have not actually turned around and mooned it—as Professor Ayres just has. Hopefully he’s already consigned his royalties to Goldman’s Global Alpha fund, which, after “three 25-sigma events in three days,” could certainly use the cash.
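As a parting arithmetic exercise: if daily returns really were Gaussian, here is what a single “25-sigma event” would mean. Standard library only, no model required:

```python
import math

# If daily returns were really Gaussian, how likely is one 25-sigma down day?
def one_sided_tail(sigmas):
    """P(Z < -sigmas) for a standard normal variable."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2))

p = one_sided_tail(25)
print(f"P(one 25-sigma day): about {p:.1e}")

# p**3 underflows an ordinary float, so take logs for "three in three days":
print(f"log10 P(three such days in a row): about {3 * math.log10(p):.0f}")
```

An event whose probability is on the order of 10 to the minus 138 per day does not happen three times in one week. The honest conclusion is not that Global Alpha was unlucky. It is that the Gaussian model was wrong.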