A number of published baseball player forecasts are freely available online. As Dave Allen notes in his article on Fangraphs Fan Projections, and as I find as well, some projections are definitely better than others. Part 1 of this article examines the overall fit of six different player forecasts: Zips, CHONE, Marcel, CBS Sportsline, ESPN, and Fangraphs Fans. I find that the Marcel projections are the best based on average error, followed by the Zips and CHONE projections. However, once we control for the over-optimism of each of these projection systems, the forecasts are virtually indistinguishable.
This second result is important in that it requires us to dig a little deeper to see how much each of these forecasts is actually helping to predict player performance. This is addressed in Part 2 of this article.
The tool generally used to compare the average fit of a set of forecasts is Root Mean Squared Forecasting Error (RMSFE). This measure is imperfect in that it doesn’t consider the relative value of an over-projection versus an under-projection; for example, in the early rounds of a fantasy draft we may be drafting to limit risk, while in later rounds we may be seeking risk. That said, RMSFE is easy to understand and is thus the standard for comparing the average fit of a projection.
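For concreteness, here is a minimal sketch of how an RMSFE like the ones in Table 1 could be computed. The projected and actual run totals below are hypothetical placeholders, not the data used in this article.

```python
import numpy as np

# Hypothetical projected and actual runs for a handful of players
# (placeholder numbers, not the data behind Table 1).
projected_runs = np.array([100.0, 85.0, 72.0, 94.0, 60.0])
actual_runs = np.array([88.0, 90.0, 55.0, 101.0, 47.0])

# Forecast error for each player: projection minus outcome.
errors = projected_runs - actual_runs

# Root Mean Squared Forecasting Error: square the errors,
# average them, then take the square root.
rmsfe = np.sqrt(np.mean(errors ** 2))
print(f"RMSFE: {rmsfe:.2f}")
```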
Table 1 shows the RMSFE of each projection system in each of the five main fantasy categories for hitters. Here we see that the “mechanical” projection systems (Marcel, Zips, and CHONE) outperform the three “human” projections. Each value is roughly the standard deviation of a forecast’s errors: about two-thirds of the time, a player projected by Marcel to score 100 runs will score between roughly 75 and 125 runs.
Table 1. Root Mean Squared Forecasting Error
|  | Runs | HRs | RBIs | SBs | AVG |
| --- | --- | --- | --- | --- | --- |
| Marcel | 24.43 | 7.14 | 23.54 | 7.37 | 0.0381 |
| Zips | 25.59 | 7.47 | 26.23 | 7.63 | 0.0368 |
| CHONE | 25.35 | 7.35 | 24.12 | 7.26 | 0.0369 |
| Fangraphs Fans | 29.24 | 7.98 | 32.91 | 7.61 | 0.0396 |
| ESPN | 26.58 | 8.20 | 26.32 | 7.28 | 0.0397 |
| CBS | 27.43 | 8.36 | 27.79 | 7.55 | 0.0388 |
Another important measure is bias, which occurs when a projection consistently over- or under-predicts. Bias inflates the RMSFE, so a simple bias correction may improve a forecast’s fit substantially. In Table 2, we see that the human projection systems exhibit substantially more bias than the mechanical ones.
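Continuing the hypothetical sketch from above, bias is simply the average forecast error, and it relates to the RMSFE through the usual decomposition: mean squared error equals the variance of the errors plus the bias squared.

```python
import numpy as np

# Same hypothetical projections and outcomes as the earlier sketch.
projected_runs = np.array([100.0, 85.0, 72.0, 94.0, 60.0])
actual_runs = np.array([88.0, 90.0, 55.0, 101.0, 47.0])
errors = projected_runs - actual_runs

# Bias: the average error. Positive bias means the system
# consistently over-projects.
bias = np.mean(errors)

# Decomposition: mean squared error = error variance + bias^2,
# so any bias directly inflates the RMSFE.
mse = np.mean(errors ** 2)
variance = np.var(errors)  # population variance of the errors
print(f"bias = {bias:.2f}")
print(f"MSE = {mse:.2f}, variance + bias^2 = {variance + bias ** 2:.2f}")
```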
Table 2. Average Bias
|  | Runs | HRs | RBIs | SBs | AVG |
| --- | --- | --- | --- | --- | --- |
| Marcel | 7.12 | 2.09 | 5.82 | 1.16 | 0.0155 |
| Zips | 11.24 | 2.55 | 11.62 | 0.73 | 0.0138 |
| CHONE | 10.75 | 2.67 | 9.14 | 0.61 | 0.0140 |
| Fangraphs Fans | 17.75 | 4.03 | 23.01 | 2.80 | 0.0203 |
| ESPN | 13.26 | 3.78 | 11.59 | 1.42 | 0.0173 |
| CBS | 15.09 | 4.08 | 14.17 | 2.05 | 0.0173 |
We can get a better picture of which forecasting system is best by correcting for bias in the individual forecasts. Table 3 presents the bias-corrected RMSFEs. What we see is a tightening of the results across the forecasting systems: after the correction, each system performs about the same.
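In code, the bias correction amounts to subtracting each system’s average error from its errors before squaring, so the measure reflects only the spread of the errors. Continuing the same hypothetical sketch:

```python
import numpy as np

# Same hypothetical projections and outcomes as before.
projected_runs = np.array([100.0, 85.0, 72.0, 94.0, 60.0])
actual_runs = np.array([88.0, 90.0, 55.0, 101.0, 47.0])
errors = projected_runs - actual_runs

# Remove the average error (the bias) before squaring, so the
# result reflects only the spread of the errors.
bias = np.mean(errors)
bias_corrected_rmsfe = np.sqrt(np.mean((errors - bias) ** 2))

# Equivalent: the standard deviation of the errors.
assert np.isclose(bias_corrected_rmsfe, np.std(errors))
print(f"bias-corrected RMSFE: {bias_corrected_rmsfe:.2f}")
```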
Table 3. Bias-corrected Root Mean Squared Forecasting Error
|  | Runs | HRs | RBIs | SBs | AVG |
| --- | --- | --- | --- | --- | --- |
| Marcel | 23.36 | 6.83 | 22.81 | 7.28 | 0.0348 |
| Zips | 22.98 | 7.02 | 23.52 | 7.59 | 0.0341 |
| CHONE | 22.96 | 6.85 | 22.33 | 7.24 | 0.0341 |
| Fangraphs Fans | 23.24 | 6.88 | 23.53 | 7.08 | 0.0340 |
| ESPN | 23.03 | 7.27 | 23.62 | 7.14 | 0.0357 |
| CBS | 22.91 | 7.29 | 23.90 | 7.27 | 0.0347 |
So where does this leave us if these six forecasts are basically indistinguishable? As it turns out, evaluating the performance of individual forecasts doesn’t tell the whole story. Each forecasting system may contain useful information of its own, so an average or a weighted average of forecasts may prove to be a better predictor than any individual forecast. Part 2 of this article examines this in some detail. Stay tuned!