
September 17, 2007

Comments

oldprof

Mick -

Thanks for fixing the headline and also for dropping by. As I said in the post, you guys do a great job.

When you have so many things to cover, there will occasionally be a slip. It is all about how it is handled, and you are real pros.

Jeff

Mick Weinstein, Seeking Alpha

Thanks for pointing out the problem with our headline, Jeff. It's fixed now.

Mick Weinstein
Seeking Alpha

RB

Jeff,
The font size probably needs some tweaking for Internet Explorer 7.
If Poindexter is reading this blog, I hope he sees Page 9 of this article:
http://www.leggmason.com/funds/knowledge/mauboussin/HarryPotterInevitable.pdf

Bill aka NO DooDahs!

I apologize for double-dipping, but I realized I needed to flesh out the answer a bit more.

If one says the recession odds are 40% this year, they mean a 60% chance of no recession. It follows that, while no single non-recession disproves their model, a streak of 7 consecutive independent years in which they say "40% odds" and no recession occurs gives us 97.2% confidence that their model is incorrect (1 - 0.6^7 ≈ 0.972).

If one says that "conditions are not favorable for stocks" and stocks go up, that one occurrence doesn't disprove their model. However, if they say that every quarter for four consecutive years, then one would expect, if their model were correct, that the distribution of good and bad quarters in the last 16 quarters would be statistically significantly worse than expected given the last few decades of quarterly stock market experience. If, indeed, the last 16 quarters have been significantly better than average, one has some statistical proof that the model in question is flawed.

If one manages money according to a statistical model, one is making predictions.
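A minimal sketch in Python of the two tests Bill describes. The streak calculation uses his 40% figure; the 60% quarterly base rate and the 14-of-16 up-quarter count are hypothetical stand-ins for illustration, not figures from the comment:

```python
from math import comb

# Streak test: seven straight "40% recession odds" calls, no recession occurs.
p_no_recession = 0.60                 # implied per-year chance of no recession
p_streak = p_no_recession ** 7        # chance of the streak if the model is right
print(f"Confidence the model is wrong: {1 - p_streak:.1%}")  # ~97.2%

# Quarterly test (hypothetical numbers): suppose the long-run base rate of an
# "up" quarter is 60%, and 14 of the last 16 quarters were up even though the
# model called conditions unfavorable throughout.
n, k, p = 16, 14, 0.60
tail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
print(f"P(>= {k} up quarters out of {n} at a {p:.0%} base rate) = {tail:.3f}")  # ~0.018
```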

Bill aka NO DooDahs!

"The tough question is after how many games of being wrong do you reevaluate your model."

It's not that tough a question. If the prediction is recession, what are the odds of a recession in a given year? 20%? OK, then the benchmark for accuracy is 80%, since any fool can say "no recession" every year. Do a binomial approximation test to see if the accuracy level reached by the predictor is statistically significantly different from 80%. Voila!
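A sketch of that test in Python, with a hypothetical track record (20 annual calls, 12 correct) standing in for an actual predictor; the exact binomial tail is computed here rather than the normal approximation Bill mentions:

```python
from math import comb

def binom_cdf(n, k, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

# Hypothetical record: 20 annual recession calls, 12 correct, measured against
# the naive "no recession every year" baseline, which is right 80% of the time.
n_calls, n_correct, baseline = 20, 12, 0.80
p_value = binom_cdf(n_calls, n_correct, baseline)
print(f"P(<= {n_correct}/{n_calls} correct given {baseline:.0%} skill) = {p_value:.3f}")  # ~0.032
```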

In the case of a fund manager who has outperformed his index for an 8-year cumulative period, but outperforms only 4 specific years (falling short in 4 other years), one should rightfully ask "what's the pattern?" If all four years of out(under)performance fall in streaks, streaks that correspond to changing market conditions, is it fair to say that the manager's predictive model is flawed? That distribution is approaching significance, but not quite there yet ... unless one evaluates outperformance when the benchmark return is positive or negative, and then the pattern is significant.
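One way to formalize that conditional evaluation is a Fisher exact test on a 2x2 table of benchmark direction versus out/underperformance; the clean 4/4 split below is a hypothetical illustration of the extreme case, not the actual record Bill alludes to:

```python
from math import comb

# Hypothetical extreme split of the 8-year record: the manager beats the
# benchmark in all 4 down-market years and lags in all 4 up-market years.
beat_down, lag_down = 4, 0   # down-benchmark years: outperform / underperform
beat_up, lag_up = 0, 4       # up-benchmark years:   outperform / underperform

# One-sided Fisher exact p-value via the hypergeometric probability; with the
# observed table already at the extreme, the tail is this single term.
n = beat_down + lag_down + beat_up + lag_up
down_years = beat_down + lag_down
beats_total = beat_down + beat_up
p_value = (comb(down_years, beat_down)
           * comb(n - down_years, beats_total - beat_down)
           / comb(n, beats_total))
print(f"One-sided Fisher exact p = {p_value:.4f}")  # 1/70 ~ 0.014
```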

Putting money on something implies a prediction, and further, some amount of confidence in that prediction.

Mike C

Jeff,

Thank you for removing the 2 paragraphs, and thanks for the link to your previous post on "forecasting unlikely events". I think you hit the nail on the head with this:

"Let us suppose that an expert put the chances of the big loss at 40%, but the home team actually won the game! Was the expert wrong? Not necessarily. We cannot tell from a single game. The odds might well have been 40%. It would take many games of similar circumstances for us to judge the accuracy of the prediction.

Briefly put, our experts would never predict a three-run loss in a specific game, although their probability estimates would reflect the specific circumstances."

I basically made this exact same point on another blog with respect to making predictions. The tough question is: after how many games of being wrong do you reevaluate your model?

I agree that you can't throw the baby out with the bath water, and missing just the 2001 recession isn't proof experts are idiots, but I would love to see a comprehensive study of numerous forecasts.

I'm going off memory here, but I think it was Dreman (a notable and highly successful value investor) who completed a study of sell-side analyst estimates and concluded they were dismal, and not much better than random predictions.


oldprof

Mike -

I have removed the last two paragraphs from your original comment, as you requested. I'm at the mercy of Typepad when it comes to editing comments, but there is a "preview" function.

My position: It is a mistake to disparage economists because they did not forecast some past recession, like 2001. Please read my article on forecasting unlikely events to see why. http://oldprof.typepad.com/a_dash_of_insight/2007/08/forecasting-unl.html

Maybe I need to take this up again.

Thanks for your thoughtful observations.

Jeff

Mike C

Oops. I accidentally copied, pasted, and posted some verbiage from another note. You might want to add the capability to delete or edit a post after it has been posted.

To be clear, the last 2 paragraphs are NOT my words but come from another note on recessions.

Mike C

"We had marked Paul's original article for comment because of the disparity between the predictions of economists and non-economists, in this case the bettors at Intrade, a theme we described discussed here."

I'm not sure it is even worthwhile to spend a lot of time on whether we are entering a recession soon, regardless of whether it is journalists making proclamations or "experts".

I recently read a note on recessions that indicated that Bernanke and 90% of economists missed forecasting the 2001 recession. They all forecasted slowing growth and no recession. So much for the experts.

I completely agree that it is dangerous to attribute too much credibility to journalists and bloggers with questionable knowledge or experience. However, I think it could be just as dangerous to attribute credibility to "experts" whose track records may not be much better. My own view is that it almost always makes more sense to evaluate the argument and analysis on its own merits, regardless of who is making it.
