Steven Levitt and his team of researchers made an appearance at the recent North American Bridge Championships in Chicago. He was conducting research that was described vaguely to some of my participating friends as concerning the transferability of knowledge from one card game to another. The experiment was quite interesting, and I look forward to his analysis. His next stop was to test players at the World Series of Poker.
His appearance reminded me that while I have highlighted his excellent book, Freakonomics, I have yet to comment on it. It is a very important work for those of my generation who learned and taught research methods. My guess is that only one in four of those trained in public policy research stayed in the academic world. Here is why.
The academic world prizes careful scholarship, building brick-by-brick on past work, extensive peer review, and use of the most sophisticated methods. The result is that it was difficult for the Vietnam-era scholars to achieve their "change the world" goals. Most policy makers did not share our interest in cost-benefit analysis and most academics did not prize practical policy work.
Wall Street research is the polar opposite. Those with the most visible jobs crank out some sort of analysis almost weekly. There is no peer review. There is an unwritten code against criticizing anyone else's work, and no time to get the opinion of peers before floating the latest research product. (Readers of "A Dash" know that we are free from this constraint.)
The result is that Street research is of comparatively poor quality. Even the most expensive services make many mistakes. Sometimes the researcher looks at a small number of cases without any historical context. Other times the researcher uses the power of the modern computer and the availability of large databases to look at every case -- regardless of relevance, the existence of any hypothesis, or any prior idea about a causal model.
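The danger of scanning every case without a prior hypothesis can be illustrated with a toy simulation. In this hypothetical sketch (all names and thresholds are my own assumptions, not anyone's actual method), we test many purely random "indicators" against purely random "returns" -- and some inevitably look predictive by chance alone:

```python
import random
import statistics

# Toy illustration of hypothesis-free data mining: with enough meaningless
# indicators, some will correlate with returns purely by chance.
random.seed(0)
n_obs = 100
returns = [random.gauss(0, 1) for _ in range(n_obs)]  # pure noise

def correlation(xs, ys):
    """Pearson correlation, computed from scratch with the stdlib."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Scan 200 random indicators and count how many clear a "significance" bar.
best = 0.0
hits = 0
for _ in range(200):
    indicator = [random.gauss(0, 1) for _ in range(n_obs)]  # also pure noise
    r = correlation(indicator, returns)
    best = max(best, abs(r))
    if abs(r) > 0.2:  # roughly a 5% chance level for a single test at n=100
        hits += 1

print(f"{hits} of 200 random indicators cleared the bar; best |r| = {best:.2f}")
```

With 200 tries at a 5% chance level, around ten spurious "discoveries" are expected. A researcher who started with a hypothesis would run one test, not two hundred.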
Analysts on specific stocks have a different task. They used to serve investment banking goals and hype the companies that did business with their firm. Now they are required to find a balance of "sell" and "hold" recommendations, even while the number of firms covered has shrunk dramatically. This is why we find analysts making macro calls and offering opinions on the economy instead of covering their stocks.
Steven Levitt's work strikes an inspiring balance between these poles. Many of us are delighted by the academic and popular acclaim he has received. In our own work we aspire to provide the same practical insights -- using the right data, in the right time periods, in the right way. We want to be like Steve. I suspect that many of us would have stayed in the academic world if we could have found both his approach and that acceptance twenty-five or thirty years ago.
While not all of us were as good as Steve is, some were. The world did not seem ready. For now, my generation of researchers is still trying to improve the quality of understanding in many fields. Many are working in health policy.
At "A Dash" this work means trying to point out who is doing good research. Our mission is to spot the data miners, back-fitters, and the anecdotal story-spinners.
It is difficult to do, since consumers of investment research do not know what to look for. Most are narrowly focused on what worked in the last year or so. I have watched those doing great work lose their jobs when their models "did not work" for a year or two. Many "global strategists" are constantly scrambling to do "walk-forward testing," thinking that this will keep them current. The time horizon is dictated by the individual investor and hot-money funds, despite our knowledge that this is a losing method.
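For readers unfamiliar with the term, walk-forward testing refits a model on a rolling window of past data and scores it only on the period that follows. Here is a minimal sketch, assuming a toy "model" (a trailing-mean forecast) and made-up function names purely for illustration:

```python
# Minimal walk-forward sketch: refit on a rolling training window, then
# evaluate only on the out-of-sample period that immediately follows.
def walk_forward(series, train_len, test_len):
    results = []
    start = 0
    while start + train_len + test_len <= len(series):
        train = series[start:start + train_len]
        test = series[start + train_len:start + train_len + test_len]
        forecast = sum(train) / len(train)          # "fit": trailing mean
        error = sum(abs(x - forecast) for x in test) / len(test)
        results.append(error)                       # out-of-sample score only
        start += test_len                           # roll the window forward
    return results

errors = walk_forward([1, 2, 3, 4, 5, 4, 3, 2, 1, 2], train_len=4, test_len=2)
print(errors)  # mean absolute error per out-of-sample window
```

The mechanics are sound, but the criticism in this post stands: if the windows are dictated by a short horizon, the procedure merely chases whatever worked in the last year or so.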
Think about it. The period from late 1998 to early 2000 was an extremely strange time, influenced by Y2K fears and an unprecedented surge in employment and the economy, well documented by David Malpass. From 2000 to 2002 there was an equally strange plunge from these conditions with a rather shallow recession. In 2003 there was a delayed economic reaction by businesses (more on this in a future post) because of the expectation of war.
If you were building a model, would you use that time period for your data? It makes no sense, but that is what most big firm strategists did. Those who chose more wisely are now with different firms, often with their own name as the corporate identity. The economics of the Street has punished good research and elevated back-fitting.
That (and maybe some election-year spinning in 2004) has brought us to a 30-month period where we have had repeated calls for economic collapse in the face of unprecedented economic and corporate success.
Future posts will continue to catalog the many prevalent research techniques that lack predictive power. Steve Levitt got a prize from statisticians for the correct use of data. I wonder what they would make of the body of Wall Street research.