Comparing 2010 Hitter Forecasts Part 1: Which System is the Best?

There are a number of published baseball player forecasts that are freely available online.  As Dave Allen notes in his article on the Fangraphs Fan Projections, and as I find as well, some projections are definitely better than others.  Part 1 of this article examines the overall fit of six different player forecasts: Zips, CHONE, Marcel, CBS Sportsline, ESPN, and Fangraphs Fans.  What I find is that the Marcel projections are the best based on average error, followed by the Zips and CHONE projections.  However, once we control for the over-optimism of each projection system, the forecasts are virtually indistinguishable.

This second result is important in that it requires us to dig a little deeper to see how much each of these forecasts is actually helping to predict player performance.  This is addressed in Part 2 of this article.

The tool generally used to compare the average fit of a set of forecasts is Root Mean Squared Forecasting Error (RMSFE).  This measure is imperfect in that it doesn’t consider the relative value of an over-projection versus an under-projection; for example, in the early rounds of a fantasy draft we may be drafting to limit risk, while in later rounds we may be seeking risk.  That being said, RMSFE is easy to understand and is thus the standard for comparing the average fit of a projection.
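To make the measure concrete, here is a minimal sketch of an RMSFE calculation in Python; the function name and sample numbers are illustrative, not from the original analysis:

```python
import numpy as np

def rmsfe(projected, actual):
    """Root mean squared forecasting error for one stat category.

    projected, actual: per-player projected and observed values
    (e.g., runs scored), paired by player.
    """
    errors = np.asarray(projected, dtype=float) - np.asarray(actual, dtype=float)
    return np.sqrt(np.mean(errors ** 2))

# Example: three players' projected vs. actual runs.
print(rmsfe([100, 85, 70], [92, 80, 75]))  # ~6.16
```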

Table 1 shows the RMSFE of each projection system in each of the five main fantasy categories for hitters.  Here, we see that the three “mechanical” projection systems (Marcel, Zips, and CHONE) outperform the three “human” projections.  Each value is, roughly speaking, the standard deviation of the error of a particular forecast.  In other words, about 2/3rds of the time, a player projected by Marcel to score 100 runs will score between 75 and 125 runs.

Table 1. Root Mean Squared Forecasting Error

System           Runs    HRs    RBIs    SBs    AVG
Marcel           24.43   7.14   23.54   7.37   0.0381
Zips             25.59   7.47   26.23   7.63   0.0368
CHONE            25.35   7.35   24.12   7.26   0.0369
Fangraphs Fans   29.24   7.98   32.91   7.61   0.0396
ESPN             26.58   8.20   26.32   7.28   0.0397
CBS              27.43   8.36   27.79   7.55   0.0388

Another important measure is bias, which occurs when a projection consistently over- or under-predicts.  Bias inflates the RMSFE, so a simple bias correction may improve a forecast’s fit substantially.  In Table 2, where positive values indicate over-projection, we see that the human projection systems exhibit substantially more bias than the mechanical ones.

Table 2. Average Bias

System           Runs    HRs    RBIs    SBs    AVG
Marcel            7.12   2.09    5.82   1.16   0.0155
Zips             11.24   2.55   11.62   0.73   0.0138
CHONE            10.75   2.67    9.14   0.61   0.0140
Fangraphs Fans   17.75   4.03   23.01   2.80   0.0203
ESPN             13.26   3.78   11.59   1.42   0.0173
CBS              15.09   4.08   14.17   2.05   0.0173
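The article doesn’t publish its code, but “average bias” here is presumably the mean signed error; a minimal sketch under that assumption, continuing the illustrative setup from the RMSFE example above:

```python
import numpy as np

def bias(projected, actual):
    """Mean signed forecast error; positive values mean over-projection."""
    errors = np.asarray(projected, dtype=float) - np.asarray(actual, dtype=float)
    return np.mean(errors)

# Same three illustrative players as before.
print(bias([100, 85, 70], [92, 80, 75]))  # ~2.67: this forecast runs optimistic
```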

We can get a better picture of which forecasting system is best by correcting for the bias in each individual forecast. Table 3 presents the bias-corrected RMSFEs. What we see here is a tightening of the results across the forecasting systems: each one now performs about the same.

Table 3. Bias-corrected Root Mean Squared Forecasting Error

System           Runs    HRs    RBIs    SBs    AVG
Marcel           23.36   6.83   22.81   7.28   0.0348
Zips             22.98   7.02   23.52   7.59   0.0341
CHONE            22.96   6.85   22.33   7.24   0.0341
Fangraphs Fans   23.24   6.88   23.53   7.08   0.0340
ESPN             23.03   7.27   23.62   7.14   0.0357
CBS              22.91   7.29   23.90   7.27   0.0347
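If the correction simply subtracts each system’s average error, the bias-corrected RMSFE is just the standard deviation of the forecast errors, since RMSFE² = bias² + error variance; that identity reproduces the tables to within rounding (for Marcel’s runs, sqrt(24.43² − 7.12²) ≈ 23.4, matching Table 3’s 23.36). A sketch under that assumption:

```python
import numpy as np

def bias_corrected_rmsfe(projected, actual):
    """RMSFE after removing the forecast's mean signed error.

    Equivalent to the standard deviation of the errors (np.std).
    """
    errors = np.asarray(projected, dtype=float) - np.asarray(actual, dtype=float)
    return np.sqrt(np.mean((errors - errors.mean()) ** 2))
```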

So where does this leave us if these six forecasts are basically indistinguishable?  As it turns out, evaluating the performance of individual forecasts doesn’t tell the whole story.  There may be useful information in each of the different forecasting systems, so an average or a weighted average of forecasts may prove to be a better predictor than any individual forecast. Part 2 of this article examines this in some detail. Stay tuned!





Comments
Sky
13 years ago

Love the bias adjustment. Every analysis of forecasts needs it!

Rui
13 years ago

I’d really like to see the Bill James projections added in. Those are generally perceived as the most optimistic, which should show up in the bias, yeah?

Also, dividing up hitters and pitchers may provide even more insight into the accuracy of the projection systems

Colin Wyers
13 years ago

When you say you adjusted the bias, what do you mean by that?

Colin Wyers
13 years ago

So when you figure the average bias, do you prorate based on playing time?

Jeremy
13 years ago

Are biases consistent from year to year? Otherwise, including them is useless from a predictive standpoint. However, if they are consistent, then why don’t the propagators of each system use them before the season starts?

Zubin
13 years ago

Really enjoyed this. Question: Did you compute RMSE based on the entire sample of projections that a system gave or did you limit the sample by some filter (like players w/ greater than 350 ABs)?

evo34
13 years ago

Can you look at OBP and SLG errors? Not sure that counting stats are the best metric for evaluating forecast quality, even after correcting for bias.

Craig Tomarkin
13 years ago

Interesting comparison. Was wondering what you thought of the free forecasts at baseballguru.com

Ross Gore
13 years ago

It would be interesting to see how the average compares to the weighted average stats computed by AggPro. Cameron Snapp, TJ Highley and I developed a methodology to compute weights based on prior seasons that showed the computed weighted average applied to upcoming season projections was more accurate than any of the constituent projections. Full paper that was published in the SABR journal is here: http://www.cs.virginia.edu/~rjg7v/AggPro.pdf

MDLmember
12 years ago

Will, the BPP website looks like a fantastic resource… definitely going to explore around some more this weekend!

I just wanted to point out something about the footnote you linked to on the BPP website: the Prospectus Projections Project was co-authored by one “David Cameron”. Assuming that this is FG’s own, just contact him about any potential naming issues.

MDLmember
12 years ago

Whoops, wrong article! Meant to post in the 2012 version 😛