Archive for Research

Team Construction, OBP, and the Importance of Variance

A recent article by ncarrington brought up an interesting point, and it's one that merits further investigation. The article's premise is that even though two teams may have similar team on-base percentages, a lack of consistency within one team will cause it to underperform its collective numbers when it comes to run production. A balanced team, on the other hand, will score more runs. That's our hypothesis.

How does the scientific method work again? Er, nevermind, let’s just look at the data.

In order to gain an initial understanding, we're going to start by looking at how teams fared in 2013. We'll calculate a league-average runs/OBP number that will work as a proxy for how many runs a team should be expected to score based on its OBP. Then we'll calculate the standard deviation of each team's OBP (weighted by plate appearances) and compare that to the league-average standard deviation. If our hypothesis is true, teams with relatively low OBP deviations will outperform their expected runs-scored number.
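
To make the bookkeeping concrete, here's a minimal sketch of those two calculations in Python. The inputs are whatever player and team tables you have on hand; nothing here is an actual FanGraphs export.

```python
import numpy as np

def expected_runs(team_obp, league_obp, baseline_runs):
    """League-average runs scaled by the team's OBP relative to the league."""
    return baseline_runs * (team_obp / league_obp)

def weighted_obp_sd(obps, pas):
    """Standard deviation of a team's player OBPs, weighted by plate appearances."""
    obps = np.asarray(obps, dtype=float)
    mean = np.average(obps, weights=pas)
    return np.sqrt(np.average((obps - mean) ** 2, weights=pas))

# A team "outperforms" when its actual runs exceed expected_runs(team_obp, lg_obp, lg_runs);
# its variance bucket comes from comparing weighted_obp_sd to the league average.
```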

Of course, there’s a lot more to team production than OBP. We’re going to conquer that later. Bear with me–here’s 2013.

A few things to keep in mind while dissecting this chart: 668.5 is the baseline number for Runs/(OBP/LeagueOBP). Any team number above this means that the team is outperforming its OBP-based expectation, while any number below represents underperformance. The league average team OBP standard deviation is .162.

Team Runs/(OBP/LeagueOBP) OBP Standard Deviation
Royals 647.71 0.1
Rangers 710.22 0.17
Padres 632.53 0.14
Mariners 642.88 0.15
Angels 700.75 0.17
Twins 618.61 0.16
Tigers 723.95 0.12
Astros 642.5 0.15
Giants 620.1 0.15
Dodgers 627.18 0.21
Reds 673.82 0.19
Mets 638.45 0.18
Diamondbacks 668.02 0.16
Braves 675.02 0.16
Blue Jays 705.27 0.17
White Sox 622.92 0.15
Red Sox 768.53 0.19
Cubs 631.74 0.12
Athletics 738.61 0.15
Nationals 662.76 0.18
Brewers 650.02 0.16
Rays 669.46 0.18
Orioles 749.95 0.19
Rockies 689.93 0.18
Phillies 627.95 0.14
Indians 717.08 0.18
Pirates 637.87 0.17
Cardinals 744.3 0.2
Marlins 552.48 0.14
Yankees 666.17 0.14

That chart's kind of a bear, so I'm going to break it up into buckets. In 2013 there were 16 teams that exhibited above-average variances. Of those, 11 outperformed expectations while only 5 underperformed. Now for the flip side: of the 14 teams that exhibited below-average variances, only 2 outperformed expectations while a shocking 12(!) underperformed.

That absolutely flies in the face of our hypothesis. A startling 23 out of 30 teams suggest that high variance will actually help a team score more runs, while low variance will cause a team to score fewer.

Before we get all comfy with our conclusions, however, we’re going to acknowledge how complicated baseball is. It’s so complicated that we have to worry about this thing called sample size, since we have no idea what’s going on until we’ve seen a lot of things go on. So I’m going to open up the floodgates on this particular study, and we’re going to use every team’s season since 1920. League average OBP standard deviation and runs/OBP numbers will be calculated for each year, and we’ll use the aforementioned bucket approach to examine the results.

Team Seasons 1920-2013

Result Occurrences
High variance, outperformed expectations 504
High variance, underperformed expectations 508
Low variance, outperformed expectations 492
Low variance, underperformed expectations 538

Small sample size strikes again. Will there ever be a sabermetric article that doesn't talk about sample size? Maybe, but it probably won't be written by me. Anyways, the point is that variance in team OBP has little to no effect on actual results when you up your sample size to 2000+. As a side note of some interest, I wondered if teams with high variances would tend to have bigger power numbers than their low-variance counterparts. High-variance teams have averaged a .132 ISO since 1920. Low-variance teams? .131. So, uh, not really.

If you want to examine the ISO numbers a little more, here's this: outperforming teams had an ISO of .144 while underperforming teams had an ISO of .120. These numbers are the same for both high- and low-variance teams. It appears that overachieving/underachieving OBP expectations can be almost entirely explained by ISO.

I'm not satisfied with that answer, though. Was 2013 really just an aberration? What if we limit our sample to only teams that significantly outperformed or underperformed expectations (by 50 runs) while also having a significantly large or small team OBP standard deviation?

Team Seasons 1920-2013, significant values only

Result Occurrences
High variance, outperformed expectations 117
High variance, underperformed expectations 93
Low variance, outperformed expectations 101
Low variance, underperformed expectations 119

The numbers here do point a little bit more towards high variance leading to outperformance. High-variance teams are more likely to strongly outperform their expectations, to the tune of about 20%, and the same is true of low-variance teams and underperformance. Bear in mind, however, that that is not a huge effect, and that is not a huge sample size. If you're trying to predict whether a team should outperform or underperform its collective means then variance is something to consider, but it isn't the first place you should look.

Being balanced is nice. Being consistent is nice. It's something we have a natural inclination towards as humans; it's why we invented farming, civilization, the light bulb, etc. But when you're building a baseball team it's not something that's going to help you win games. You win games with good players.


Baseball’s Most Ridiculous Patented Equipment

Background – what does a patent get you?

Long ago, governments recognized that protecting inventors’ efforts was essential to encourage technological advancement but realized that limiting the time in which an inventor had the exclusive right to market their invention served the greater good by preventing the inventor from controlling a useful product forever.  Patents were first granted in Europe in the late 1400s and the patent system was first enacted in the United States in 1790.  To date, there have been thousands of baseball-related patents issued covering everything from game equipment to methods of compressing game broadcasts.

In the United States, a patent is an intellectual property right granted by the government to an inventor that “excludes others from making, using, offering for sale, or selling the invention throughout the United States or importing the invention into the United States” for a limited time in exchange for public disclosure of the invention when the patent is granted.  Currently, a utility patent is enforceable for 20 years from the date on which the application was submitted, assuming that periodic maintenance fees are paid as scheduled.

What can be patented?

A utility patent will be granted for a machine, process, article of manufacture, composition of matter (or any improvement to an existing machine, process, article of manufacture, composition of matter) as long as it is “new, nonobvious and useful.”  There are certain things that cannot be patented, however, such as laws of nature, abstract ideas and inventions that are morally offensive or “not useful.”

The "not useful" component is somewhat interesting in that the patent examiner is charged only with making a decision whether an invention will function as expected and otherwise has a "useful purpose."  As you will see below, "useful" does not always mean that the invention will be marketable.

So how did James Bennett hope to change baseball?

While it is not clear whether inventor James E. Bennett of Momence, Illinois is the same James Bennett who played for the Sharon Ironmongers in the 1895 Iron and Oil League, it seems clear that he gave little forethought to whether his inventions would be practical under actual game conditions.  Either that or he just really hated catching a ball with the baseball glove technology available at the turn of the 20th century.

By the early 1900s, baseball gloves had undergone constant improvement.  Starting with George Rawlings in 1885 (Pat. No. 325,968), protective gloves were becoming more acceptable as a way to protect fielders' hands.  In 1891, Harry Decker added a thick pad to the front of the glove (Pat. No. 450,355) and Bob Reach added an inflatable chamber (Pat. No. 450,717).  By 1895 Elroy Rogers had designed the classic "pillow-style" catcher's mitt (Pat. No. 528,343) that would be used with little change until Randy Hundley pioneered the one-handed catching technique in the 1960s using a hinged catcher's mitt.

Regardless of the existence of the baseball glove technology in use at the time, James Bennett tried to think outside the box by eliminating the catcher’s mitt altogether and, instead, attaching that box to the catcher’s chest.  Here is 1904’s “Base Ball Catcher” in all of its ill-conceived glory:

Front View
Side View

Bennett apparently envisioned the catcher squatting behind home plate acting as a passive target for the pitcher's offerings, and designed this contraption to accept the pitched ball into the cage such that it would strike the padding and drop through a chute into the catcher's hand so it could be returned to the mound.  As you can see, however, the device would have significant shortcomings should the catcher have to attempt to throw out a would-be base stealer, catch the ball for a play at the plate, block a wild pitch or, especially, field his position on a ball put in play in front of the plate.

But Bennett was not finished yet! In 1905, he patented a two-handed “Base Ball Glove” with an oversized pocket to trap the ball:
Front and Back View

Bennett claimed that this poorly imagined glove was easy to use because the fingers on the player's throwing hand were specially designed to "permit the easy and quick removal of that hand to grasp and throw the ball."  Just as with the "Base Ball Catcher," however, this design does not offer the player much in the way of a catching radius.

So what happened to James E. Bennett’s inventions?
As of 1918, he was still looking for investors, according to this advertisement he placed in the August and October issues of “Forest and Stream” magazine.

The Impact of Defensive Prowess on a Pitcher's Earned Run Average

EXECUTIVE SUMMARY

  • This study attempts to determine how much fielders' prowess, measured by the metric UZR (Ultimate Zone Rating), affects a pitcher's Earned Run Average.
  • The data used for the regression (collected from FanGraphs.com) includes collective ERA, BABIP, HR/9, BB/9, K/9 and UZR for every Major League Baseball team for the past three years.
  • ERA (Earned Run Average) is the number of earned runs a pitcher allows per nine innings pitched. BABIP (Batting Average on Balls In Play) is the batting average against any given pitcher, counting only the at-bats in which the hitter puts the ball in play. HR/9 is home runs allowed per nine innings pitched. BB/9 is walks allowed per nine innings pitched. K/9 is batters struck out per nine innings pitched. UZR (Ultimate Zone Rating) is a widely used metric to evaluate defense; it summarizes how many runs any given fielder saved or gave up during a season compared to the league average at his position.
  • The model passed the F-test, the adjusted R-squared came out at 91.2 percent, and every one of the independent variables passed its respective t-test.
  • The model tested negative for both multicollinearity (using Variance Inflation Factors) and heteroskedasticity (using the second version of White's test).
  • The regression equation looks like this: ERA = -2.55 - 0.187 K/9 + 0.413 BB/9 + 16.9 BABIP + 1.72 HR/9 - 0.00157 UZR. Even though the independent variable UZR has a small coefficient, it definitely affects a pitcher's ERA, and in the direction that was suspected: as UZR goes up, ERA goes down.

INTRODUCTION

Since Bill James started writing about baseball in the late 1970s and defying the traditional stats used to evaluate players, hundreds of baseball fans have tried to follow in his footsteps, creating new ways to evaluate players and challenging the existing ones. One of the stats that has been called into question lately is Earned Run Average (ERA).

According to several baseball analysts, ERA is not an efficient way to evaluate how well or poorly a pitcher performs. The rationale behind this thinking is pretty simple: ERA is the number of earned runs that any given pitcher allows per nine innings pitched, but the pitcher is not always 100 percent responsible for every earned run allowed. Sometimes a fielder's lack of defensive prowess will allow hitters to reach base safely (I am not talking about errors), and when that happens, those hits will often translate into earned runs, thus affecting the pitcher's ERA.

One of the metrics that has been used to determine any given fielder's prowess is UZR (Ultimate Zone Rating). UZR compiles data on outfielders' arms, fielders' range and errors, and summarizes the number of runs those fielders saved or gave up during a season compared to the league average at their position. Using that metric along with other metrics that affect ERA, we can answer the question: how much does defensive prowess impact a pitcher's ERA?

If defensive prowess does in fact affect ERA, we can also determine by how much. With that kind of information, cost-effective teams (such as the Tampa Bay Rays and Oakland Athletics) can improve their pitching staff without investing heavily in new pitchers.

DATA

The unit of observation for this study is one Major League Baseball team-season, and the number of observations is 90: there are currently 30 Major League Baseball teams, and data was collected for the past three Major League Baseball seasons, 2010 through 2012 inclusive.

The dependent variable used in this project was Earned Run Average, and the independent variables are as follows:

  • BABIP: Batting average on balls in play
  • HR/9: Home runs allowed per nine innings pitched
  • BB/9: Walks allowed per nine innings pitched
  • K/9: Hitters struck out per nine innings pitched
  • UZR: Runs saved or given up by any given fielder during a season

All the data for this study is cross-sectional because all the observations were collected at the same point in time.

All the data for this study was collected from the baseball website FanGraphs.com. FanGraphs is a widely known source of baseball stats and news, but the data they publish on their website is collected by another company called Baseball Info Solutions.
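
For anyone who wants to replicate the estimation, a regression like this one is a few lines of Python. This is a sketch rather than the author's actual workflow; the file name and column names are hypothetical stand-ins for a FanGraphs export.

```python
import pandas as pd
import statsmodels.api as sm

# One row per team-season, 2010-2012 (90 observations); hypothetical file name.
df = pd.read_csv("team_pitching_2010_2012.csv")

X = sm.add_constant(df[["k9", "bb9", "babip", "hr9", "uzr"]])
model = sm.OLS(df["era"], X).fit()
print(model.summary())  # coefficients, t-tests, R-squared, F-test
```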

REGRESSION ESTIMATIONS

Regression Analysis: ERA versus BABIP, HR/9, BB/9, K/9 and UZR

The regression equation is:

ERA = -2.55 - 0.187 K/9 + 0.413 BB/9 + 16.9 BABIP + 1.72 HR/9 - 0.00157 UZR

Predictor    Coef        SE Coef    T      P      VIF
Constant     -2.5474     0.5594     -4.55  0.000
K/9          -0.18718    0.02428    -7.71  0.000  1.099
BB/9         0.41261     0.04671    8.83   0.000  1.052
BABIP        16.914      1.876      9.02   0.000  1.741
HR/9         1.7222      0.1105     15.58  0.000  1.180
UZR          -0.0015743  0.0006219  -2.53  0.013  1.669

S = 0.133650   R-Sq = 91.7%   R-Sq(adj) = 91.2%

Analysis of Variance

Source          DF  SS       MS      F       P
Regression      5   16.5663  3.3133  185.49  0.000
Residual Error  84  1.5004   0.0179
Total           89  18.0668

The first step used to evaluate the model was the F-test, and since the model has a p-value less than 0.05, it is safe to say that the model passed the F-test. The adjusted R-squared for the model was 91.2 percent, which means that 91.2 percent of the variation in ERA is explained by the independent variables in the model. The method used to evaluate the relevance of the independent variables was the t-test, and each one of them, as mentioned earlier, had a p-value below 0.05, so all of them passed their t-tests. The p-value for K/9, BB/9, BABIP and HR/9 was 0.000 in each case, and the p-value for UZR was 0.013.

MODEL ESTIMATION SEQUENCE

  1. Correct functional form: To check for correct functional form, each one of the independent variables was plotted against the dependent variable. The scatter plots that resulted from this check show a linear relationship between each one of the independent variables and the dependent variable.
  2. Test for heteroskedasticity: The data for this study is cross-sectional, so it was necessary to test for heteroskedasticity; this was done with the second version of White's test. The residuals from the original regression were stored and squared, and those squared residuals were regressed against the independent variables and the independent variables squared. An F-test was applied to that auxiliary regression, and since its p-value was over 0.05, the auxiliary regression is not significant; therefore heteroskedasticity does not exist in the initial model.
  3. Multicollinearity: The model was also tested for multicollinearity, using the correlation matrix and the Variance Inflation Factors observed in the initial regression (a quick way to reproduce the VIFs is sketched after this list).
    1. Since none of the VIFs is larger than 10, it can be concluded that multicollinearity does not exist and the p-values from the t-tests can be trusted.
    2. A correlation matrix was calculated using all the independent variables, but since every one of them passed the t-test, none will be dropped from the model.
  • K/9: p-value (0.000), VIF (1.099), rho (0.252)
  • BB/9: p-value (0.000), VIF (1.052), rho (0.195)
  • BABIP: p-value (0.000), VIF (1.741), rho (0.604)
  • UZR: p-value (0.013), VIF (1.669), rho (0.604)
  4. Drop any irrelevant variable from the model: Since all the independent variables in this model are relevant, none of them was dropped from the model.
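
Here is the VIF check from step 3 in code form, reusing the `df` and the hypothetical column names from the fitting sketch above:

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(df[["k9", "bb9", "babip", "hr9", "uzr"]])
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))
# Rule of thumb used in this study: a VIF above 10 signals multicollinearity.
```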

FINAL MODEL

The final model is exactly the same as the initial model: it passed the F-test, all of the independent variables passed their t-tests, and neither heteroskedasticity nor multicollinearity is present, so it was not necessary to run another regression or drop any variable.

COEFFICIENT INTERPRETATION

  • K/9: When the team strikes out one extra batter per nine innings, the team’s ERA should go down by 0.187 runs per nine innings holding everything else constant.
  • BB/9: When the team walks one extra batter per nine innings, the team’s ERA should go up by 0.413 runs per nine innings holding everything else constant.
  • BABIP: The 16.9 coefficient corresponds to a full 1.000 jump in BABIP, which will never happen in practice; BABIP moves up or down with how many hits the team allows on balls in play. For example, a team whose opponents hit .296 on balls in play would see 16.9 × .296 ≈ 5.00 runs of its predicted ERA come from the BABIP term, holding everything else constant (the sketch after this list plugs the full equation into an example).
  • HR/9: When the team allows one more homerun per nine innings, ERA should go up by 1.72 runs per nine innings holding everything else constant.
  • UZR: When the team saves one extra run defensively, ERA should go down by 0.00157 runs per nine innings holding everything else constant.
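
To see how the coefficients combine into a prediction, here's the reported equation applied to some illustrative, made-up inputs:

```python
def predicted_era(k9, bb9, babip, hr9, uzr):
    # The study's reported regression equation.
    return -2.55 - 0.187 * k9 + 0.413 * bb9 + 16.9 * babip + 1.72 * hr9 - 0.00157 * uzr

# A team striking out 7.5 per nine, walking 3.0, allowing a .290 BABIP and
# 1.0 HR/9, with a +20 UZR (all hypothetical inputs):
print(round(predicted_era(7.5, 3.0, 0.290, 1.0, 20), 2))  # -> 3.88
```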

SUMMARY

The null hypothesis for this project stated that defensive prowess didn't affect ERA, but the results showed otherwise, so it is safe to reject the null hypothesis. Defensive prowess appears to affect ERA, although on a small scale. This might not seem like much, but cost-effective teams like the Rays and Athletics can acquire premium defensive players at a much lower cost than a premium pitcher, and although those players won't be "game changers," they will definitely improve the team's ERA.

Baseball is a game of numbers, and these numbers don't lie. A good defender will help his team save runs; a lot of good defenders will help their team save a multitude of runs. Is this enough to get to the postseason or win a World Series? Absolutely not, but it has been proven already that finding edges in the game, small as they might be, will help a team in the long run. The findings in this study are concise proof that taking advantage of defense is an edge that can be exploited for the betterment of the organization.


Weighting Past Results: Starting Pitchers

My article on weighting a hitter’s past results was supposed to be a one-off study, but after reading a recent article by Dave Cameron I decided to expand the study to cover starting pitchers. The relevant inspirational section of Dave’s article is copied below:

“The truth of nearly every pitcher’s performance lies somewhere in between his FIP-based WAR and his RA9-based WAR. The trick is that it’s not so easy to know exactly where on the spectrum that point lies, and its not the same point for every pitcher.”

Dave's work is consistently great. This, however, is a rather hand-wavy explanation of things. Is there a way we can figure out where pitchers have typically fallen on this scale in the past, so that we can make more educated guesses about a pitcher's true skill level? We have the data, so we can try.

So, how much weight should be placed on ERA and FIP respectively? Like Dave said, the answer will be different in every case, but we can establish some solid starting points. Also, since we're trying to predict pitching results and not just measure historical value, we're going to factor in the very helpful xFIP and SIERA metrics.

Now for the methodology paragraph: In order to test this I'm going to use every pitcher season since 2002 (when FanGraphs starts recording xFIP/SIERA data) in which a pitcher threw at least 100 innings, and then weight all of the relevant metrics for that season to create an ERA prediction for the following season. I'll then look at the difference between the following season's predicted and actual ERA, and calculate the average miss. The smaller the average miss, the better the weights. Simple. As an added note, I have weighted the importance of a pitcher's second (predicted vs. actual) season by innings pitched, so that a pitcher who threw 160 innings in that season carries more weight than one who threw only 40.
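
To make the procedure concrete, here's a sketch of the evaluation in Python; the table and column names (`seasons`, `next_era`, `next_ip`) are hypothetical stand-ins for the data described above:

```python
import numpy as np

def average_miss(seasons, w_era, w_fip, w_siera):
    """IP-weighted mean absolute miss of a weighted-metric ERA prediction.
    seasons: one row per qualifying pitcher-season, carrying that year's
    era/fip/siera plus the next season's actual ERA and innings."""
    pred = (w_era * seasons["era"]
            + w_fip * seasons["fip"]
            + w_siera * seasons["siera"])
    miss = (pred - seasons["next_era"]).abs()
    return np.average(miss, weights=seasons["next_ip"])

# e.g. average_miss(seasons, 0.10, 0.15, 0.75) for one of the combos tested below.
```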

How predictive are each of the relevant stats without weights? I am nothing without my tables, so here we go (There are going to be a lot of tables along the way to our answers. If you’re just interested in the final results, go ahead and skip on down towards the bottom).

Metric Average Miss
ERA .8933
FIP .7846
xFIP .7600
SIERA .7609

This doesn’t really tell us anything we don’t already know: SIERA and xFIP are similar, and FIP is a better predictor than ERA. Let’s start applying some weights to see if we can increase accuracy, starting with ERA/SIERA combos.

ERA% SIERA% Average Miss
50% 50% .7750
75% 25% .8218
25% 75% .7530
15% 85% .7527
10% 90% .7543
5% 95% .7571

We can already see that factoring in ERA just a slight amount improves our results substantially. When you're predicting a pitcher's future, therefore, you can't just fully rely on xFIP or SIERA to be your fortune teller. You can't lean on ERA too hard either, though, since once you get much above 25% your projections begin to go awry. OK, so we know how SIERA and ERA combine, but what if we use xFIP instead?

ERA% xFIP% Average Miss
25% 75% .7530
15% 85% .7530
10% 90% .7549
5% 95% .7560

Using xFIP didn’t really improve our results at all. SIERA consistently outperforms xFIP (or is at worst only marginally beaten by it) throughout pretty much all weighting combinations, and so from this point forward we’re just going to use SIERA. Just know that SIERA is basically xFIP, and that there are only slight differences between them because SIERA makes some (intelligent) assumptions about pitching. Now that we’ve established that, let’s try throwing out ERA and use FIP instead.

FIP% SIERA% Average Miss
50% 50% .7563
25% 75% .7543
15% 85% .7560
10% 90% .7570

It’s interesting that ERA/SIERA combos are more predictive than FIP/SIERA combos, even though FIP is more predictive in and of itself. This is likely due to the fact that a lot of pitchers have consistent team factors that show up in ERA but are cancelled out by FIP. We’ll explore that more later, but for now we’re going to try to see if we can use any ERA/FIP/SIERA combos that will give us better results.

ERA% FIP% SIERA% Average Miss
25% 25% 50% .7570
15% 15% 70% .7513
10% 10% 80% .7520
5% 15% 80% .7532
10% 15% 75% .7517
15% 25% 60% .7520
15% 25% 65% .7517

There are three values here that are all pretty good. The important thing to note is that ERA/FIP/SIERA combos offer more consistently good results than any two stats alone. SIERA should be your main consideration, but ERA and FIP should not be discarded, since the combo predicts ERA roughly .01 better than SIERA alone. It's a small difference, but it's there.

Now I’m going to go back to something that I mentioned previously–should a player be evaluated differently if he isn’t coming back to the same team? The answer to this is a pretty obvious yes, since a pitcher’s defense/park/source of coffee in the morning will change. Let’s narrow down our sample to only pitchers that changed teams, to see if different numbers work better. These numbers will be useful when evaluating free agents, for example.

ERA% FIP% SIERA% Average Miss (changed teams)
10% 15% 80% .7932
5% 15% 80% .7918
2.5% 17.5% 80% .7915
2.5% 20% 77.5% .7915
2.5% 22.5% 75% .7917

As suspected, ERA loses a lot of its usefulness when a player is switching teams; FIP retains its marginal usefulness while SIERA carries more weight. Another thing to note is that it's just straight-up harder to predict pitcher performance when a pitcher is changing teams, no matter what metric you use. SIERA itself drops in accuracy to .793 when only dealing with pitchers that change teams, a noticeable difference from the .760 value above for all pitchers.

For those of you who have made it this far, it's time to join back in with those who have skipped down toward the bottom. Here's a handy little chart that shows the optimal weights found above for evaluating pitchers:

Optimal Weights

Team ERA% FIP% SIERA% Average Miss
Same 10% 15% 75% .7517
Different 2.5% 17.5% 80% .7910
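
If you want to apply those weights, it's a one-liner each way; a sketch using the table above:

```python
def blended_era_projection(era, fip, siera, changed_teams=False):
    # Optimal one-year weights from the chart above.
    if changed_teams:
        return 0.025 * era + 0.175 * fip + 0.80 * siera
    return 0.10 * era + 0.15 * fip + 0.75 * siera
```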

Of course, any reasonable projection should take more than just one year of data into account. The point of this article was not to show a complete projection system, but more to explore how much weight to give to each of the different metrics we have available to us when evaluating pitchers. Regardless, I'm going to expand the study a little bit to give us a better idea of weighting years by establishing weights over a two-year period. I'm not going to show my work here, mostly out of an honest effort to spare you from having to dissect more tables, so here are the optimal two-year weights:

ERA% Year 1 FIP% Year 1 SIERA% Year 1 ERA% Year 2 FIP% Year 2 SIERA% Year 2 Average Miss
5% 5% 30% 7.5% 7.5% 45% .742

As expected, using multiple years increases our accuracy (by roughly .01 ERA per pitcher). Also note that these numbers are for evaluating all pitchers, so if you're dealing with a pitcher who is changing teams you should tweak ERA down while nudging FIP and SIERA up. And, again, as Dave stated, each pitcher is a case study; each warrants his own more specific analysis. But be careful when you're changing weights: make sure you have a really solid reason for your tweaks, and make sure you're not tweaking the numbers too much, because when you start thinking you're significantly smarter than historical tendencies you can get into trouble. So these are your starting values; carefully tweak from here. Go forth, smart readers.

As a parting gift to this article, here’s a list of the top 20 predictions for pitchers using the two-year model described above. Note that this will inherently exclude one-year pitchers such as Jose Fernandez and pitchers that failed to meet the 100IP as a starter requirement in either of the past two years. Also note that these numbers do not include any aging curves (aging curves are well outside the scope of this article), which will obviously need to be factored in to any finalized projection system.

# Pitcher Weighted ERA prediction
1 Clayton Kershaw 2.93
2 Cliff Lee 2.94
3 Felix Hernandez 2.95
4 Max Scherzer 3.01
5 Stephen Strasburg 3.03
6 Adam Wainwright 3.11
7 A.J. Burnett 3.22
8 Anibal Sanchez 3.22
9 David Price 3.24
10 Madison Bumgarner 3.33
11 Alex Cobb 3.36
12 Cole Hamels 3.36
13 Zack Greinke 3.41
14 Justin Verlander 3.41
15 Doug Fister 3.46
16 Marco Estrada 3.48
17 Gio Gonzalez 3.53
18 James Shields 3.53
19 Homer Bailey 3.57
20 Mat Latos 3.60

Thoughts on the MVP Award: Team-Based Value and Voter Bias

You are reading this right now.  That is a fact.  Since you are reading this right now, many things can be reasonably inferred:

1.  You probably read FanGraphs at least fairly often

2. Since you probably read FanGraphs at least fairly often, you probably know that there are a lot of differing opinions on the MVP award and that many articles here in the past week have been devoted to it.

3. You probably are quite familiar with sabermetrics

4. You probably are either a Tigers fan or think that Mike Trout should have won MVP, or both

5. You might know that Josh Donaldson got one first-place vote

6. You might even know that the first-place vote he got was by a voter from Oakland

7. You might know that Yadier Molina got two first-place votes, and they both came from voters from St. Louis

8. You might even know that one of the voters who put Molina first on his ballot put Matt Carpenter second

9. You might be wondering if there is any truth to the idea that Miguel Cabrera is much more important to his team than Mike Trout is

I have thought about many of those things myself.  So, in this very long 2-part article, I am going to discuss them.  Ready?  Here goes:

Part 1: How much of an impact does a player have on his team?

Lots of people wanted Miguel Cabrera to win the MVP award. Some of you reading this may be shocked, but it’s actually true. One of the biggest arguments for Miguel Cabrera over Mike Trout for MVP is that Cabrera was much more important and “valuable” than Trout.  Cabrera’s team made the playoffs.  Trout’s team did not.  Therefore anything Trout did cannot have been important.  Well, let’s say too important.  I don’t think that anybody’s claiming that Trout had zero impact on the game of baseball or the MLB standings whatsoever.

OK.  That's reasonable, as long as it's not a rationale for voting Cabrera ahead of Trout for MVP. As just a general idea, it makes sense: Cabrera had a bigger impact on baseball this year than Trout did. I, along with many other people in the sabermetric community, disagree that it's a reason to vote for Cabrera, though. But the question I'm going to ask is this: did Cabrera have a bigger impact on his own team than Trout did?

WAR tells us no.  Trout had 10.4 WAR, tops in MLB.  Cabrera had 7.6 – a fantastic number, good for 5th in baseball and 3rd in the AL, as well as his own career high – but clearly not as high as Trout.   Miggy's hitting was out of this world, at least until September, and it's pretty clear that he could have topped 8 WAR easily had he stayed healthy through the final month and been just as productive as he was April through August.  But, fact is, he did get hurt, and did not finish with a WAR as high as Trout.  So if they were both replaced with a replacement player, the Tigers would suffer more than the Angels.  Cabrera was certainly valuable – if replaced by a replacement, the 7 or 8 wins the Tigers would lose would probably not be enough to win them the AL Central.  But take Trout out, and the Angels go from a mediocre-to-poor team to a really bad one. The Angels had 78 wins this year, and that would have been around 68 (if we trust WAR) without Trout.  That would have been the 6th worst total in the league.  So, by WAR, Trout meant more to his team than Cabrera did.

But WAR is not the be all and end all of statistics (though we may like to think it is sometimes).  Let’s look at this from another angle.  Here’s a theory for you: the loss of a key player on a good team would probably not hurt that team as much because they’re already good to begin with.  If a not-so-good team loses a key player, though, the other players on the team aren’t as good so they can’t carry the team very well.

How do we test this theory?  Well, we have at our disposal a fairly accurate and useful tool to determine how many wins a team should get.  That tool is pythagorean expectation – a way of predicting wins and losses based on runs scored and allowed.  So let’s see if replacing Trout with an average player (I am using average and not replacement because all the player run values given on FanGraphs are above or below average, not replacement) is more detrimental to the Angels than replacing Cabrera with an average player is to the Tigers.

The Angels, this year, scored 733 runs and allowed 737.  Using the Pythagenpat (sorry to link to BP but I had to) formula, I calculated their expected win percentage, and it came out to .497 – roughly 80.6 wins and 81.4 losses*.  That’s actually significantly better than they did this year, which is good news for Angels fans.  But that’s not the focus right here.

Trout, this year, added 61.1 runs above average at the plate and 8.1 on the bases for a total of 69.2 runs of offense.  He also saved 4.4 runs in the field (per UZR).  So, using the Pythagenpat formula again with adjusted run values for if Trout were replaced by an average hitter and defender (663.8 runs scored and 741.4 runs allowed), I again calculated the Angels’ expected win percentage.  This came out to be .449 – roughly 72.7 wins and 89.3 losses.  7.9 fewer wins than the original one.  That’s the difference, for that specific Angels team, that Trout made.  Now, keep in mind, this is above average, not replacement, so it will be lower than WAR by a couple wins (about two WAR signifies an average player, so wins above average will be about two less than wins above replacement).  7.9 wins is a lot.  But is it more than Cabrera?
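
If you want to check these numbers yourself, the Pythagenpat arithmetic fits in a few lines. The 0.287 exponent is one commonly cited version of the formula, and it reproduces the win percentages above:

```python
def pythagenpat_wpct(rs, ra, games=162):
    # The exponent scales with the run environment: (runs per game) ** 0.287.
    x = ((rs + ra) / games) ** 0.287
    return rs ** x / (rs ** x + ra ** x)

with_trout = pythagenpat_wpct(733, 737)         # ~.497
without_trout = pythagenpat_wpct(663.8, 741.4)  # ~.449
print((with_trout - without_trout) * 162)       # ~7.9 wins
```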

Let’s see.  This year, the Tigers scored 796 runs and allowed 624.  This gives them a pythagorean expectation (again, Pythagenpat formula) of a win percentage of .612 – roughly 99.1 wins and 62.9 losses.  Again much better than what they did this year, but also not the focus of this article.  Cabrera contributed 72.1 runs above average hitting and  4.4 runs below average on the bases for a total of 67.7 runs above average on offense.  His defense was a terrible 16.8 runs below average.

Now take Cabrera out of the equation.  With those adjusted run totals (728.3 runs scored and 607.2 runs allowed) we get  a win percentage of .583 – 94.4 wins and 67.6 losses.  A difference of 4.7 wins from the original.

Talk about anticlimactic.  Trout completely blew Cabrera out of the water (I would say no pun intended, but that was intended).  This makes sense if we think about it – a team with more runs scored will be hurt less by x fewer runs because they are losing a lower percentage of their runs.  In fact, if we pretend the Angels scored 900 runs this year instead of 733, they go from a 96.5-win team with Trout to an 89.8-win team without.  Obviously, they are better in both cases, but the difference Trout makes is only 6.7 wins – pretty far from the nearly 8 he makes in real life.

The thing about this statistic is that it penalizes players on good teams. Generally,  statistics such as the “Win” for pitchers are frowned upon because they measure things that the pitcher can’t control – just like this one.  But if we want to measure how much a team really needs a player, which is pretty much the definition of value, I think this does a pretty good job. Obviously, it isn’t perfect: the numbers that go into it, especially the baserunning and fielding ones, aren’t always completely accurate, and when looking at the team level, straight linear weights aren’t always the way to go; overall, though, this stat gives a fairly accurate picture.  The numbers aren’t totally wrong.

Here’s a look at the top four vote-getters from each league by team-adjusted wins above average (I’ll call it tWAA):

Player tWAA
Mike Trout 7.9
Andrew McCutchen 6.4
Paul Goldschmidt 6.2
Chris Davis 6.1
Josh Donaldson 4.9
Miguel Cabrera 4.7
Matt Carpenter 4.0
Yadier Molina 3.1

This is interesting.  As expected, the players on better teams have a lower tWAA than the ones on worse teams, just as we discussed earlier. One notable player is Yadier Molina, who despite being considered one of the best catchers in the game, if not the best, has the lowest tWAA of anyone on that list.  This may be because he missed some time.

But let's look at it a little closer: if we add the 2 wins that an average player would provide over a replacement-level player, we get 5.1 WAR, which isn't so far off of his 5.6 total from this year. And the Cardinals' pythagorean expectation was 101 wins, so obviously under this system he won't be credited as much, because his runs aren't as valuable to his team.  Another factor is that we're not adjusting by position here (except for the fielding part), and Molina is worth more runs offensively above the average catcher than he is above the average hitter, since catchers generally aren't as good at hitting. But if Molina were replaced with an average catcher, I'm fairly certain that the Cardinals would lose more than the 3 games this number suggests. They might miss Molina's game-calling skills – if such a thing exists – and there's no way to quantify how much Molina has helped the Cardinal pitchers improve, especially since they have so many rookies.

But there's also something else, something we can quantify, even if not perfectly.  And that's pitch framing. Let's add the 19.8 runs that Molina saved (measured by StatCorner) to Molina's defensive runs saved. (For those numbers I used the Fielding Bible's DRS, since there is no UZR for catchers – that may be another reason Molina's number seems out of place, because DRS and UZR don't always agree; Trout's 2013 UZR was 4.4, and his DRS was -9. Molina did play 18 innings at first base, where he had a UZR of -0.2. We'll ignore that, though, since it is such a small sample size and won't make much of a difference.)

Here is the table with only Molina’s tWAA changed, to account for pitch framing:

Player tWAA
Mike Trout 7.9
Andrew McCutchen 6.4
Paul Goldschmidt 6.2
Chris Davis 6.1
Yadier Molina 5.4
Josh Donaldson 4.9
Miguel Cabrera 4.7
Matt Carpenter 3.9

Now we see Molina move up into 5th place out of 8 with a much better tWAA of 5.4 – more than 2 wins better than without the pitch framing, and about 7.4 WAR if we want to convert from wins above average to wins above replacement.  Interesting. I don’t want to get into a whole argument now about whether pitch framing is accurate or actually based mostly on skill instead of luck, or whether it should be included in a catcher’s defensive numbers when we talk about their total defense. I’m just putting that data out there for you to think about.

But as I mentioned before, I used DRS for Molina and not UZR. What if we try to make this list more consistent and use DRS for everyone? (We can’t use UZR for everyone.)  Let’s see:

Player tWAA DRS UZR
Mike Trout 6.5 -9 4.4
Andrew McCutchen 6.4 7 6.9
Paul Goldschmidt 7.0 13 5.4
Chris Davis 5.5 -7 -1.2
Molina w/ Framing 5.4 31.8 N/A
Josh Donaldson 5.0 11 9.9
Miguel Cabrera 4.6 -18 -16.8
Matt Carpenter 4.1 0 -0.9
Yadier Molina 3.1 12 N/A

We see Trout go down by almost a win and a half here. I don’t really trust that, though, because I really don’t think that Mike Trout is a significantly below average fielder, despite what DRS tells me. DRS actually gave Trout a rating of 21 in 2012, so I don’t think it’s as trustworthy. But for the sake of consistency, I’m showing you those numbers too, with the DRS and UZR comparison so you can see why certain people lost/gained wins.

OK. So I think we have a pretty good sense for who was most valuable to their teams. But I also think we can improve this statistic a little bit more. Like I said earlier, the hitting number I use – wRAA – is based off of league average, not off of position average. In other words, if Chris Davis is 56.3 runs better than the average hitter, but we replace him with the average first baseman, that average first baseman is already going to be a few runs better than the average player. So what if we use weighted runs above position average? wRAA is calculated by subtracting the league-average wOBA from a player’s wOBA, dividing by the wOBA scale, and multiplying by plate appearances. What I did was subtract the position average wOBA from the player’s wOBA instead. So that penalizes players at positions where the position average wOBA is high.
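
Here's what that substitution looks like in code; a minimal sketch, where the wOBA scale (roughly 1.28 for 2013) and the position-average wOBA are placeholder inputs you'd pull from FanGraphs:

```python
def wraa(woba, lg_woba, pa, woba_scale=1.28):
    # Standard wRAA: runs above the league-average hitter.
    return (woba - lg_woba) / woba_scale * pa

def pos_adj_wraa(woba, pos_avg_woba, pa, woba_scale=1.28):
    # Same formula, benchmarked against the position average instead.
    return (woba - pos_avg_woba) / woba_scale * pa
```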

Here’s your data (for the defensive numbers I used UZR because I think it was better than DRS, even though the metric wasn’t the same for everyone):

Player position-adj. tWAA Pos-adj. wRAA wRAA
Trout 7.7 59.4 61.1
McCutchen 6.2 40.1 41.7
Molina w/ Framing 5.6 23.3 20.5
Goldschmidt 5.0 39.5 50.1
Davis 5.0 46.4 56.3
Donaldson 4.9 36.6 36.7
Cabrera 4.7 72.0 72.1
Carpenter** 4.3 41.7 37.8
Molina 3.4 23.3 20.5

I included here both the regular and position-adjusted wRAA for all players for reference. Chris Davis and Paul Goldschmidt suffered pretty heavily – each lost over a win of production – because the average first baseman is a much better hitter than the average player. Molina got a little better, as did Carpenter, because they play positions where the average player isn’t as good offensively. Everyone else stayed almost the same, though.

I think this position-adjusted tWAA is probably the most accurate. And I would also use the number with pitch framing included for Molina. It’s up to you to decide which one you like best – if you like any of them at all. Maybe you have a better idea, in which case you should let me know in the comments.

 Part 2: Determining voter bias in the MVP award

As I mentioned in my introduction, Josh Donaldson got one first-place MVP vote – from an Oakland writer. Yadier Molina got 2 – both from St. Louis writers. Matt Carpenter got 1 second-place vote – also from a St. Louis writer. Obviously, voters have their bias when it comes to voting for MVP. But how much does that actually matter?

The way MVP voting works is that for each league, AL and NL, two sportswriters who are members of the BBWAA are chosen from each location that has a team in that league – 15 locations per league times 2 voters per location equals 30 voters total for each league. That way you won’t end up with a lot of voters or very few voters from one place who may be biased one way or another.

But is there really voter bias?

In order to answer this question, I took all players who received MVP votes this year (of which there were 49) and measured how many points each of them got per 2 voters***.  Then I took the number of points that each of them got from the voters in their own chapter and found the difference. Here's what I found:

AL:

Player, Club City Points Points/2 voter Points From City voters % Homer votes Homer difference
Josh Donaldson, Athletics OAK 222 14.80 22 9.91% 7.20
Mike Trout, Angels LA 282 18.80 23 8.16% 4.20
Evan Longoria, Rays TB 103 6.87 11 10.68% 4.13
David Ortiz, Red Sox BOS 47 3.13 7 14.89% 3.87
Adam Jones, Orioles BAL 9 0.60 3 33.33% 2.40
Miguel Cabrera, Tigers DET 385 25.67 28 7.27% 2.33
Coco Crisp, Athletics OAK 3 0.20 2 66.67% 1.80
Edwin Encarnacion, Blue Jays TOR 7 0.47 2 28.57% 1.53
Max Scherzer, Tigers DET 25 1.67 3 12.00% 1.33
Salvador Perez, Royals KC 1 0.07 1 100.00% 0.93
Koji Uehara, Red Sox BOS 2 0.13 1 50.00% 0.87
Chris Davis, Orioles BAL 232 15.47 16 6.90% 0.53
Adrian Beltre, Rangers TEX 99 6.60 7 7.07% 0.40
Yu Darvish, Rangers TEX 1 0.07 0 0.00% -0.07
Felix Hernandez, Mariners SEA 1 0.07 0 0.00% -0.07
Shane Victorino, Red Sox BOS 1 0.07 0 0.00% -0.07
Jason Kipnis, Indians CLE 31 2.07 2 6.45% -0.07
Torii Hunter, Tigers DET 2 0.13 0 0.00% -0.13
Hisashi Iwakuma, Mariners SEA 2 0.13 0 0.00% -0.13
Greg Holland, Royals KC 3 0.20 0 0.00% -0.20
Carlos Santana, Indians CLE 3 0.20 0 0.00% -0.20
Jacoby Ellsbury, Red Sox BOS 3 0.20 0 0.00% -0.20
Dustin Pedroia, Red Sox BOS 99 6.60 5 5.05% -1.60
Manny Machado, Orioles BAL 57 3.80 2 3.51% -1.80
Robinson Cano, Yankees NY 150 10.00 8 5.33% -2.00

NL:

Player, Club City Points Points/2 voter Points from City Voters % Homer votes Homer difference
Yadier Molina, Cardinals STL 219 14.60 28 12.79% 13.40
Hanley Ramirez, Dodgers LA 58 3.87 7 12.07% 3.13
Joey Votto, Reds CIN 149 9.93 13 8.72% 3.07
Allen Craig, Cardinals STL 4 0.27 3 75.00% 2.73
Jayson Werth, Nationals WAS 20 1.33 4 20.00% 2.67
Hunter Pence, Giants SF 7 0.47 3 42.86% 2.53
Yasiel Puig, Dodgers LA 10 0.67 3 30.00% 2.33
Matt Carpenter, Cardinals STL 194 12.93 15 7.73% 2.07
Andrelton Simmons, Braves ATL 14 0.93 2 14.29% 1.07
Paul Goldschmidt, D-backs ARI 242 16.13 17 7.02% 0.87
Michael Cuddyer, Rockies COL 3 0.20 1 33.33% 0.80
Andrew McCutchen, Pirates PIT 409 27.27 28 6.85% 0.73
Clayton Kershaw, Dodgers LA 146 9.73 10 6.85% 0.27
Craig Kimbrel, Braves ATL 27 1.80 2 7.41% 0.20
Russell Martin, Pirates PIT 1 0.07 0 0.00% -0.07
Matt Holliday, Cardinals STL 2 0.13 0 0.00% -0.13
Buster Posey, Giants SF 3 0.20 0 0.00% -0.20
Adam Wainwright, Cardinals STL 3 0.20 0 0.00% -0.20
Adrian Gonzalez, Dodgers LA 4 0.27 0 0.00% -0.27
Troy Tulowitzki, Rockies COL 5 0.33 0 0.00% -0.33
Shin Soo Choo, Reds CIN 23 1.53 1 4.35% -0.53
Jay Bruce, Reds CIN 30 2.00 1 3.33% -1.00
Carlos Gomez, Brewers MIL 43 2.87 1 2.33% -1.87
Freddie Freeman, Braves ATL 154 10.27 8 5.19% -2.27

Where points is total points received, points/2 voter is points per two voters (points/15), points from city voters is points received from the voters in the player’s city, % homer votes is the percentage of a player’s points that came from voters in his city, and homer difference is the difference between points/2 voter and points from city voters. Charts are sorted by homer difference.
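
If you want to reproduce the table arithmetic yourself, it's only a few lines; here it is with two real rows from the AL table as a sanity check:

```python
import pandas as pd

votes = pd.DataFrame({
    "player": ["Josh Donaldson", "Mike Trout"],
    "points": [222, 282],        # total points received
    "city_points": [22, 23],     # points from the player's own chapter
})
votes["pts_per_2"] = votes["points"] / 15                        # 30 voters -> points per two voters
votes["pct_homer"] = 100 * votes["city_points"] / votes["points"]
votes["homer_diff"] = votes["city_points"] - votes["pts_per_2"]  # 7.20 and 4.20
print(votes)
```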

I don’t know that there’s all that much we can draw from this. Obviously, voters are more likely to vote for players from their own city, but that’s to be expected. Voting was a little bit less biased in the AL – the average player received exactly 1 point more from voters in their city than from all voters in the AL, whereas that number in the NL was 1.21. 8.08% of all votes in the AL came from homers compared to 8.31% in the NL. If you’re wondering which cities were the most biased, here’s a look:

AL:

City Points Points/2 voter Points From City voters Difference
OAK 225 15.00 24 9.00
LA 282 18.80 23 4.20
TB 103 6.87 11 4.13
DET 412 27.47 31 3.53
BOS 152 10.13 13 2.87
TOR 7 0.47 2 1.53
BAL 298 19.87 21 1.13
KC 4 0.27 1 0.73
TEX 100 6.67 7 0.33
SEA 3 0.20 0 -0.20
CLE 34 2.27 2 -0.27
NY 150 10.00 8 -2.00

NL:

City Points Points/2 voters Points From City Voters Difference
STL 422 28.13 46 17.87
LA 218 14.53 20 5.47
WAS 20 1.33 4 2.67
SF 10 0.67 3 2.33
CIN 202 13.47 15 1.53
ARI 242 16.13 17 0.87
PIT 410 27.33 28 0.67
COL 8 0.53 1 0.47
ATL 195 13.00 12 -1.00
MIL 43 2.87 1 -1.87

Where all these numbers are just the sum of the individual numbers for all players in that city.

If you’re wondering what players have benefited the most from homers in the past 2 years, check out this article by Reuben Fischer-Baum over at Deadspin’s Regressing that I found while looking up more info. He basically used the same method I did, only for 2012 as well (the first year that individual voting data was publicized).

So that’s all for this article. Hope you enjoyed.

———————————————————————————————————————————————————–

*I’m using fractions of wins because that gives us a more accurate number for the statistic I introduce by measuring it to the tenth and not to the single digit. Obviously a team can’t win .6 games in real life but we aren’t concerned with how many games the team won in real life, only their runs scored and allowed.

**Carpenter spent time both at second base and third base, so I used the equation (Innings played at 3B*average wOBA for 3rd basemen + Innings played at 2B*average wOBA for 2nd basemen)/(Innings played at 3B + Innings played at 2B) to get Carpenter’s “custom” position-average wOBA. He did play some other positions too, but very few innings at each of them so I didn’t include those.  It came out to about .307.

***Voting is as such: Each voter puts 10 people on their ballot, with the points going 14-9-8-7-6-5-4-3-2-1.


The R.A. Dickey Effect – 2013 Edition

It is widely talked about by announcers and baseball fans alike that knuckleball pitchers can throw hitters off their game and leave them in funks for days. Some managers even sit certain players to avoid this effect. I decided to dig into the data to determine whether there really is an effect and what its value is. R.A. Dickey is the main knuckleballer in the game today, and he is a special breed with the extra velocity he has.

Most people who try to analyze this Dickey effect tend to group all the pitchers that follow him into one pool with one ERA and compare that to the total ERA of the bullpen or rotation. This is a simplistic and non-descriptive way of analyzing the effect, and it does not account for how often those pitchers pitch when not following Dickey.

Dickey’s Dancing Knuckleball (@DShep25)

I decided to determine whether there truly is an effect on the statistics (ERA, WHIP, K%, BB%, HR%, and FIP) of pitchers who follow Dickey in relief, and of the starters of the next game against the same team. I went through every game that Dickey has pitched and recorded the stats (IP, TBF, H, ER, BB, K) of each reliever individually, and the stats of the next starting pitcher if the next game was against the same team. I did this for each season. I then took each pitcher's stats for the whole year and subtracted his after-Dickey stats to get his stats when he did not follow Dickey. I summed the after-Dickey stats, weighting each pitcher by the batters he faced over the total batters faced after Dickey, and calculated the rate stats from those totals. The same weights were then applied to the not-after-Dickey stats: for example, if Janssen faced 19.11% of the batters after Dickey, his numbers were adjusted so that he also faced 19.11% of the batters not after Dickey. This gives an effective way of comparing the statistics, so an accurate relationship can be determined. The not-after-Dickey stats were then summed and their rate stats calculated as well. The two sets of rate stats were compared using this formula: (afterDickeySTAT - notafterDickeySTAT)/notafterDickeySTAT. This tells me, as a percentage, how much better or worse relievers or starters did when following Dickey.

I then added up the after-Dickey stats for starters and relievers from all four years, along with the not-after-Dickey stats, and applied the same weighting technique: if Niese'12 faced 10.9% of all starter batters faced following a Dickey start against the same team, he was adjusted to face 10.9% of the batters faced by starters not after Dickey (counting only the starters who pitched after Dickey that season). The same technique as in the year-to-year comparison was used, and a total percentage change for each stat was calculated.
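
For clarity, here's the core of that weighting scheme as code; a sketch, with `aD` and `naD` as hypothetical per-pitcher tables of after-Dickey and not-after-Dickey totals, aligned by pitcher:

```python
import pandas as pd

def dickey_effect(aD, naD, stat):
    """Percent change in a rate stat after Dickey vs. not after Dickey.
    aD and naD each carry 'tbf' (batters faced) plus counting stats like
    'k' or 'bb' (hypothetical column names)."""
    w = aD["tbf"] / aD["tbf"].sum()                 # share of after-Dickey TBF
    after = (aD[stat] / aD["tbf"] * w).sum()        # weighted after-Dickey rate
    not_after = (naD[stat] / naD["tbf"] * w).sum()  # same weights on the rest
    return (after - not_after) / not_after

# Toy example with two relievers (made-up numbers):
aD = pd.DataFrame({"tbf": [40, 60], "k": [12, 15]})
naD = pd.DataFrame({"tbf": [200, 180], "k": [50, 40]})
print(dickey_effect(aD, naD, "k"))
```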

The most important stat to look at is FIP; it gives a more accurate value of the effect. Also make note of the BABIP and ERA, and you can decide for yourself whether the BABIP is just luck or actually better/worse contact. Normally I would regress the results based on BABIP and HR/FB, but FIP does not include BABIP and I do not have the fly-ball numbers.

The size of the sample was also included; aD means after Dickey and naD means not after Dickey. Here are the results for starters following Dickey against the same team.

Dickey Starters

It can be concluded that starters after Dickey see an improvement across the board. Like I said, it is probably better to use FIP rather than ERA. Starters see an approximate 18.9% decrease in their FIP when they follow Dickey over the past 4 years. So assuming 130 IP are pitched after Dickey by a league-average set of pitchers (~4.00 FIP), this effect would decrease their FIP to around 3.25. 130 IP was selected assuming ⅔ of starter innings (200) come against the same team. Over 130 IP this would be a 10.8-run difference, or around 1.1 WAR! This is amazingly significant, and it appears to come mainly from a reduction in HR%. If we regress the HR% effect down to -10% (which seems more than fair), the FIP reduction comes down to around 7%. A 7% reduction would take a 4.00 FIP down to 3.72, saving 4.0 runs or 0.4 WAR.
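
Here's the arithmetic behind those WAR figures, assuming the usual rule of thumb of roughly 10 runs per win:

```python
def fip_effect_war(fip, pct_reduction, ip, runs_per_win=10.0):
    delta_fip = fip * pct_reduction   # improvement in runs per nine innings
    runs_saved = delta_fip * ip / 9
    return runs_saved / runs_per_win

print(fip_effect_war(4.00, 0.189, 130))  # ~1.1 WAR, un-regressed
print(fip_effect_war(4.00, 0.070, 130))  # ~0.4 WAR with the regressed HR%
```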

Here are the numbers for relievers following Dickey in the same game.

Dickey Bullpen

Relievers see a more consistent improvement across the FIP components (K, BB, HR: 11.4%, 8.1%, 4.9%). FIP was reduced 10.3%. Assuming 65 IP after Dickey (between his 2012 and 2013 numbers) from an average bullpen with a 3.75 FIP (or slightly above average, since Dickey will likely have setup men and closers after him), FIP would drop to 3.36, saving 3 runs or 0.3 WAR.

Combining the un-regressed results, by having pitchers pitch after him Dickey would contribute around 1.4 WAR over a full season. If you assume the effect is just a 10% reduction in FIP for both groups, this number comes down to around 0.9 WAR, which is not crazy to think at all based on the results. I can say with great confidence that if Dickey pitches over 200 innings again next year, he will contribute above 1.0 WAR just from baffling hitters for the next guys. If we take the un-regressed 1.4 WAR and add it to his 2013 WAR (2.0) we get 3.4 WAR; if we add in his defence (7 DRS), we get 4.1 WAR. Even though we were all disappointed with Dickey's season, with the effect he provides and his defence he is still all-star calibre.

Just for fun, let's apply this to his 2012. He had 4.5 WAR in 2012; add on the 1.4 and his 6 DRS and we get 6.5 WAR, wow! Using his RA9 WAR (6.2) instead (commonly used for knucklers instead of fWAR) we get 7.6 WAR! That's Miguel Cabrera value! We can't include his DRS when using RA9 WAR, though, as it should already be incorporated.

This effect may extend even further: relievers may (and likely do) get a boost the following day, just as the next day's starters do. Assuming it is the same boost, that's around another 2.5 runs, or 0.25 WAR. Maybe the second day after Dickey also sees a boost? (That's a much smaller sample, since Dickey would have to pitch the first game of a series.) If we assume the effect is cut in half the next day, that's still another 2 runs (90 IP of starters and relievers). So under these assumptions, Dickey could effectively have a 1.8 WAR after-effect over a full season! This WAR is not easy to place, however, and cannot just be added onto the team's WAR; it is hidden among all the other pitchers' WARs (just like catcher framing).

You may be disappointed with Dickey's 2013, but he is still well worth his money. He is projected for 2.8 WAR next year by Steamer; add on the 1.4-WAR Dickey effect and his defence, and his true underlying value projects to almost 5 WAR. That is well worth the $12.5M he will earn in 2014.

For more of my articles, head over to Breaking Blue where we give a sabermetric view on the Blue Jays, and MLB. Follow on twitter @BreakingBlueMLB and follow me directly @CCBreakingBlue.


The Effect of Devastating Blown Saves

It's a pretty well documented sabermetric notion that pitching your closer when you have a three-run lead in the ninth is probably wasting him. You're likely going to win the game anyway, since the vast majority of pretty much everyone allowed to throw baseballs in the major leagues is going to be able to keep the other team from scoring three runs.

But we still see it all the time. Teams keep holding on to their closer, waiting until they have a lead in the ninth to trot him out there. One reason for this is that blowing a lead in the ninth is devastating: it hurts team morale more to blow a lead in the ninth than to slip behind in the seventh. And that drop in morale will supposedly cause the players to play more poorly in the future, which will result in more losses.

Or will it?

We’re going to look at how teams play following games they lose devastatingly, to see if there’s any noticeable drop in performance. A “devastating blown save” is defined here as any game in which a team blows a lead in the ninth and then goes on to lose. We’ll look at team records in both the following game and the following three games to see if there’s any worsening of play. If the traditional thought is right (hey, it’s a possibility!), it will show up in the numbers. Let’s take a look.

All Games (2000-2012)

9+ Inning Games Devastating BS’s Devastating BS% Following Game W% Three Game W%
31,405 1,333 4.24% .497 .484

In the following game, team win percentage was very, very close to 50%. Over a sample of 1,333 games, that’s completely insignificant. But what about the following three games, where the win percentage drops to roughly 48.4%? That’s still a pretty small deviation from the 50% baseline, and of questionable statistical significance. And wouldn’t it make sense that if the devastating blown save effect existed at all, it would show up in the game directly following, rather than waiting to manifest itself? It seems safe to say that the “morale drop” from a devastating loss is likely nonexistent, or at most incredibly small. We’re dealing with grown men, after all. They can take it.
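For anyone who wants to replicate this, here’s a minimal sketch of the bookkeeping, assuming the Retrosheet game logs have already been reduced to one row per team-game; the file name and the columns (`led_entering_9th`, `won`) are hypothetical.

```python
import pandas as pd

# Hypothetical tidy game log: one row per team-game, in chronological order.
# Assumed columns: team, date, led_entering_9th (bool), won (bool).
games = pd.read_csv("team_games_2000_2012.csv", parse_dates=["date"])
games = games.sort_values(["team", "date"])

# Devastating blown save: led entering the ninth, then lost the game.
games["devastating"] = games["led_entering_9th"] & ~games["won"]

# W% in the game immediately following a devastating loss.
prev = games.groupby("team")["devastating"].shift(1, fill_value=False)
print("Next-game W%:", games.loc[prev, "won"].mean())

# W% over the three games following a devastating loss (a game counts
# if any of the team's previous three games was a devastating loss).
in_window = (
    games.groupby("team")["devastating"]
    .transform(lambda s: s.astype(int).shift(1, fill_value=0)
               .rolling(3, min_periods=1).max())
    .astype(bool)
)
print("Three-game W%:", games.loc[in_window, "won"].mean())
```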

You might also suspect, looking at these numbers, that teams with lots of blown saves are simply subpar. Not so fast: the win% of teams, weighted by their number of blown ninth innings over these years, is .505. This is probably because better teams are more likely to be ahead in the first place, so they are on the bubble to blow saves more often even if they blow them a smaller percentage of the time. Just for the fun of seeing how devastation-prone your team has been over the past 13 years, however, here’s a table of individual team results.

Devastating Blown Saves By Team (2000-2012)

Team Devastating Blown Saves Next Game W%
Milwaukee 63 0.460
Chicago Cubs 60 0.400
Kansas City 57 0.315
Toronto 54 0.592
Chicago White Sox 52 0.615
Houston 51 0.372
NY Mets 50 0.560
St. Louis 48 0.625
Texas 46 0.543
Cleveland 46 0.586
Florida 45 0.511
Baltimore 45 0.377
Oakland 44 0.545
Seattle 44 0.500
Boston 41 0.585
Cincinnati 41 0.585
Los Angeles 40 0.425
Detroit 39 0.384
Atlanta 39 0.743
San Diego 35 0.400
Anaheim 34 0.529
New York Yankees 33 0.666
Minnesota 33 0.515
Pittsburgh 32 0.468
Montreal 25 0.200
Washington 18 0.555
Miami (post-change) 8 0.375

Congratulations, Pittsburgh: you’ve been the least devastated full-time team over the past 13 years! Now, if there’s a more fun argument against the effects of devastating losses than that previous sentence, I want to hear it. Meanwhile, the Braves have lived up to their nickname, winning an outstanding 74.3% of games following devastating losses (it looks like we’ve finally found our algorithm for calculating grit, ladies and gentlemen), while the hapless Expos rebounded in just 20% of their games. Milwaukee leads the league in single-game heartbreak, etc. etc. Just read the table. These numbers are fun. Mostly meaningless, but fun.

Back to the point: team records following devastating losses tend to hover very, very close to .500. Managers shouldn’t worry about how their teams lose games; they should worry about whether their teams lose games. Because, in the end, that’s all that matters.


Raw data courtesy of Retrosheet.


Weighting Past Results: Hitters

We all know by now that we should look at more than one year of player data when we evaluate players. Looking at the past three years is the most common way to do this, and it makes sense why: three years is a reasonable time frame to try and increase your sample size while not reaching back so far that you’re evaluating an essentially different player.

 The advice for looking at previous years of player data, however, usually comes with a caveat. “Weigh them”, they’ll say. And then you’ll hear some semi-arbitrary numbers such as “20%, 30%, 50%”, or something in that range. Well, buckle up, because we’re about to get a little less arbitrary.

 Some limitations: The point of this study isn’t to replace projection systems—we’re not trying to project declines/improvements here. We’re simply trying to understand how past data tends to translate into future data.

The methodology is pretty simple. We’re going to take three years of player data (I’m going to use wRC+, since it’s league-adjusted and I’m only trying to measure offensive production), and then weight the years to get an expected fourth-year wRC+. We’re then going to compare our expected wRC+ against the actual wRC+*. The closer the expected is to the actual, the better the weights.

*Note: I am using four-year spans of player data from 2008-2013, and limiting the pool to players with at least 400 PA in four consecutive years. This should help throw out outliers and give more consistent results. Our initial sample size is 244, which is good enough to give meaningful results.
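Here’s a minimal sketch of the test, assuming the three past seasons plus the actual fourth-year wRC+ have been collected into a NumPy array (the data itself is hypothetical), and assuming “average inaccuracy” means the mean absolute miss:

```python
import numpy as np

def average_inaccuracy(wrc: np.ndarray, weights) -> float:
    """wrc: (n_players, 4) array of three past wRC+ seasons plus the
    actual fourth-year wRC+. Returns the mean absolute miss of the
    weighted three-year average against the actual fourth year."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()               # normalize so the weights sum to 1
    expected = wrc[:, :3] @ w     # weighted three-year wRC+
    return float(np.mean(np.abs(expected - wrc[:, 3])))

# Unweighted vs. the common "20/30/50" scheme:
# average_inaccuracy(wrc, (1, 1, 1))
# average_inaccuracy(wrc, (0.20, 0.30, 0.50))
```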

I’ll start with the “dumb” case. Let’s just weight all of the years equally, so that each year counts for 33.3% of our expected outcome.

Expected vs. Actual wRC+, unweighted

Weight1 Weight2 Weight3 Average Inaccuracy
33.3% 33.3% 33.3% 16.55

Okay, so on average we’re missing the actual wRC+ by roughly 16.5 points. Since wRC+ is indexed to a league average of 100, that means roughly 16.5% inaccuracy when extrapolating the past into the future with no weights. Now let’s try being a little smarter about it and test some different weights.

Expected vs. Actual wRC+, various weights

Weight1 Weight2 Weight3 Average Inaccuracy
20% 30% 50% 16.73
25% 30% 45% 16.64
30% 30% 40% 16.58
15% 40% 45% 16.62
0% 50% 50% 16.94
0% 0% 100% 20.15

Huh! It seems that no matter what we do, “intelligently weighting” each year never actually increases our accuracy. If you’re just trying to extrapolate several past years of wRC+ data to predict a fourth year, your best bet is a plain unweighted average. Now, the differences are small (our weights of [.3, .3, .4], for example, differed by only .03 in average inaccuracy from the unweighted total, which is statistically insignificant), but the point remains: weighting data from past years simply does not increase your accuracy. Pretty counter-intuitive.

Let’s dive a little deeper now: is there any situation in which weighting a player’s past does help? We’ll test this by limiting our ages. For example: are players younger than 30 better served by weighting their most recent years more heavily? That would make sense, since younger players are the most likely to experience a true-talent change. (Sample size: 106)

Expected vs. Actual wRC+, players younger than 30

Weight1 Weight2 Weight3 Average Inaccuracy
33.3% 33.3% 33.3% 16.17
20% 30% 50% 16.37
25% 30% 45% 16.29
30% 30% 40% 16.26
15% 40% 45% 16.20
0% 50% 50% 16.50
0% 0% 100% 20.16

OK, so that didn’t work either. Even with young players, using unweighted totals is the best way to go. What about old players? Surely with aging players the recent years would best represent a player’s decline. Let’s find out. (Sample size: 63)

Expected vs. Actual wRC+, players older than 32

Weight1 Weight2 Weight3 Average Inaccuracy
33.3% 33.3% 33.3% 16.52
20% 30% 50% 16.18
25% 30% 45% 16.27
30% 30% 40% 16.37
15% 40% 45% 16.00
0% 50% 50% 15.77
0% 55% 45% 15.84
0% 45% 55% 15.77
0% 0% 100% 18.46

Hey, we found something! With aging players, you should weight a player’s last two seasons equally and not even worry about three seasons ago! Again, notice that the difference is small (you’ll be about 0.8% more accurate doing this than using unweighted totals). And as with any stat, you should always think about why you’re reaching the conclusion you’re reaching. You might want to weight some players’ recent seasons more heavily than others’, especially if they’re older.

In the end, it just doesn’t matter that much. You should, however, generally use equal weights, since differences in wRC+ are pretty much always the result of random fluctuation and only rarely the result of actual talent change. That’s what the data shows. So next time you hear someone say “weight their past three years 3/4/5” (or similar), you can snicker a little. Because you know better.


Power and Patience (Part IV of a Study)

We saw in Part Three that the R^2 between OBP and ISO for the annual average of each from 1901-2013 is .373. To find out the correlation between OBP and ISO at the individual level, I set the leaders page to multiple seasons 1901-2013, split the seasons, and set the minimum PA to 400, then exported the 16,001 results to Open Office Calc.

(Yes, sixteen thousand and one. You can find anything with FanGraphs! Well, anything that has to do with baseball. Meanwhile, Open Office operating on Windows 7 decides it’s tired the moment you ask it to sum seven cells. At least it gets there in the end.)

The result was .201, so we’re looking at even less of a correlation in a much larger sample compared to the league-wide view. Are there periods where the correlation is higher?

Recall from Part Two that from 1994-2013 the R^2 for the league numbers was .583. Using individual player lines (400+ PA) from those seasons increases our sample size from 20 to 4,107 (again splitting seasons). This gives us an R^2 of .232. That’s a little higher than .201, but not by much.
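For reference, here’s a minimal sketch of the R^2 calculation, assuming the exported player seasons have been loaded into two parallel arrays (the variable names are mine):

```python
import numpy as np

def r_squared(x, y):
    """Square of the Pearson correlation between two samples."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

# obp, iso: parallel arrays of individual player seasons (400+ PA)
# r_squared(obp, iso)  # ~.201 for 1901-2013; ~.232 for 1994-2013
```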

All in all, it’s not the most surprising thing. On-base percentage and isolated power, mathematically, basically have nothing in common other than at-bats in the denominator. Given that, any correlation between them at all (and there is some) suggests that it either helps players hit for power to be an on-base threat, or vice versa. Not that one is necessary for the other, but there’s something to it. And throughout history, as we saw in Part One, a majority of players are either good at both aspects of hitting, or neither.

In fact, it’s the exceptions to that rule that triggered this whole series, those higher-OBP, lower-ISO players. Again from part one, there were 12 19th century, 21 pre-1961 20th century, and 3 post-1961 20th-21st century players with a career OBP over .400 and ISO below .200.

Much of this can probably be attributed to the historical consistency of OBP relative to ISO that we observed a couple of weeks ago. Continuing with the somewhat arbitrary 1961 expansion-era cutoff: from 1961 to the present, 168 players with 3000 PA have an ISO over .200 and 18 have an OBP over .400; from 1901-60, it was 43 with an ISO over .200 and 31 with an OBP over .400. The .200+ ISOs are split about 80-20% and the .400+ OBPs about 60-40%. The latter is the much smaller gap, as we’d expect. (Some players whose careers straddled 1961 are double-counted, but you get the basic idea.)

But let’s see if we can trace the dynamics that brought us to this point. What follows is basically part of part one in a part three format (part). In other words, we’re going to look at select seasons, and in those seasons, compare the number of players above and below the average OBP and ISO. Unfortunately, it’s hard to park-adjust those numbers, so a player just above the average ISO at Coors and a player just below it at Safeco are probably in each other’s proper place. But that’s a minor thing.

After the league-wide non-pitcher OBP and ISO are listed, you’re going to see what might look like the results of monkeys trying to write Hamlet. But “++” refers to the number of players with above-average OBP and ISO; “+-” means above-average OBP, below-average ISO; “-+” means below-average OBP and above-average ISO; and “- -” means, obviously, below-average OBP and ISO. The years were picked for various reasons, including an attempt at spreading them out chronologically. Notes are sparse as the percentages are the main thing to notice.
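In code terms, the classification is just two comparisons against the league non-pitcher averages (a sketch; the function and argument names are mine):

```python
def bucket(obp, iso, lg_obp, lg_iso):
    """Classify a qualified player season relative to league averages."""
    return ("+" if obp >= lg_obp else "-") + ("+" if iso >= lg_iso else "-")

# e.g. bucket(.360, .120, .330, .127) -> '+-' (gets on base, light power)
```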

1901: .330 OBP, .091 ISO. Qualified for batting title: 121. 35% ++, 25% +-, 12% -+, 28% – –

1908: .295 OBP, .069 ISO. Qualified for batting title: 127. 41% ++, 23% +-, 8% -+, 28% – –

The sum of OBP and ISO was its lowest ever in 1908.

1921: .346 OBP, .117 ISO. Qualified for batting title: 119. 42% ++, 24% +-, 8% -+, 26% – –

Baseball rises from the dead ball era. Still relatively few players are hitting for power while not getting on base as much.

1930: .356 OBP, .146 ISO. Qualified for batting title: 122. 45% ++, 23% +-, 3% -+, 29% – –

The best pre-WWII season for OBP and ISO. Almost nobody was about average at hitting for power while not as good at reaching base. Two-thirds of qualifiers had an above-average OBP vs. fewer than half with an above-average ISO.

1943: .327 OBP, .096 ISO. Qualified for batting title: 106. 41% ++, 24% +-, 10% -+, 25% – –

World War II, during which OBPs stayed near the average but ISOs tanked. That would not necessarily appear in these numbers, because the players in this segment are categorized vs. each year’s average.

1953: .342 OBP, .140 ISO. Qualified for batting title: 88. 44% ++, 22% +-, 14% -+, 20% – –

1953 was the first year in which sISO exceeded OBP, and it was easily the lowest so far in terms of players below average in both OBP and ISO. (Note: so few players qualified on account of the Korean War.)

1969: .330 OBP, .127 ISO. Qualified for batting title: 121. 45% ++, 17% +-, 14% -+, 23% – –

1983: .330 OBP, .131 ISO. Qualified for batting title: 133. 43% ++, 16% +-, 17% -+, 25% – –

1969 and 1983 were picked because of their historically average league-wide numbers for both OBP and ISO. The percentages for each of the four categories are about equal in both seasons.

2000: .351 OBP, .171 ISO. Qualified for batting title: 165. 39% ++, 16% +-, 15% -+, 29% – –

The sum of OBP and ISO was its highest ever in 2000.

2011: .325 OBP, .147 ISO. Qualified for batting title: 145. 50% ++, 17% +-, 12% -+, 21% – –

2012: .324 OBP, .154 ISO. Qualified for batting title: 144. 44% ++, 24% +-, 14% -+, 18% – –

2013: .323 OBP, .146 ISO. Qualified for batting title: 140. 45% ++, 24% +-, 17% -+, 14% – –

Originally, this part ended with just 2013, but that showed an abnormally low “- -” percentage, so now 2011-13 are all listed. From 2011 to 2012, the split groups (above average in one of the two statistics, “+-” or “-+”) increased sharply while the number of generally good and generally bad hitters decreased. From 2012 to 2013, there was almost no change in qualifiers based on OBP (the “++” and “+-” groups). Among those with below-average OBPs, the number with above-average power increased as the number with below-average power decreased. Most significantly, 2011-13 produced an overall drop in players who are below average at both.

I don’t want to draw too many conclusions from this set of 12 out of 113 seasons. But a few more things come up besides the recent decline in players below average in both OBP and ISO.

Regarding “++” Players

Unsurprisingly, limiting the samples to qualifiers consistently shows a plurality of players to be good at both the OBP and ISO things.

Regarding “- -” Players 

Essentially, until 2012, this group was always at least 1/5 of qualifiers, and usually it was 1/4 or more. The last couple years have seen a decline here. Is it a trend to keep an eye on in the future (along with the league-wide OBP slump from Part 3)?

Regarding “++” and “- -” Players

Meanwhile, the majority of players are above average at both getting on base and hitting for power, or below average at both. The sum of those two percentages is just about 60% at minimum each year. Of the twelve seasons above, the lowest sum actually comes from 2013, mostly on account of the mere 14% of players who were below average at both.

This also means that it’s a minority of players who “specialize” in one or the other.

Regarding “+-” vs. “-+” Players

The “-+” players, those with below-average OBPs and above-average ISOs, show the best-defined trends of any of the four categorizations. In general, before 1953, when OBP was always “easier” to be good at than ISO (via OBP vs. sISO as seen in Parts 2 and 3), you saw fewer ISO-only players than you see today. Either they were less valuable because power was less a part of the game and of the leagues’ offenses, or they were less common since it was harder to exceed the league average.

The number of OBP-only players is more complicated, because they too were more common in the pre-1953 days. But they have jumped in the last two years, from about 1/6 of qualifiers from ’69-’11 to about 1/4 in 2012 and 2013. Overall, the recent decline in “- -” players has been matched by a rise in “+-” players. This can also be interpreted as indicating that players are becoming better at reaching base while remaining stagnant at hitting for power (important distinction: that’s compared to the annual averages, not the historical average; as we saw last week, OBP is in a historical decline at the league level).

Conclusion

The key takeaway from all of this is that there will always be more players who are above average in both OBP and ISO, or below average in both, than players who excel at just one. Even if the correlations between OBP and ISO at the individual level aren’t overly high, clearly more players are good at both or at neither.

This isn’t just because players with enough PA to qualify for the league leaders are better hitters in general: while the number of qualifiers above average in both is always a plurality, it’s almost never a majority. It takes the players who are below average at both to form a majority in any given year.

In terms of OBP-only players and ISO-only players, the former have almost always outnumbered the latter. This is sufficiently explained by the fact that reaching base is key to being a good hitter, while hitting for power is optional. (That’s part of why OPS has lost favor: it actually favors slugging over OBP.) Even when batting average was the metric of choice throughout baseball, those who got the plate appearances have, in general, always been good at getting on base, but not necessarily at hitting for power.

Next week this series concludes by looking at the careers of some selected individual players. The most interesting ones will be the either-or players, with a significantly better OBP or ISO. We won’t look much at players like Babe Ruth or Bill Bergen, but instead players like Matt Williams or Wade Boggs. Stay tuned.


How Much Work are AL Starters Doing, and What Difference Has It Made in Team Success?

Baseball fans have been treated to incredible starting pitching performances in recent years, with several ace staffs leading their teams to regular-season and postseason success. Initially, I set out to examine the number of innings pitched by AL starting rotations because I expected that there would be a big disparity from team to team. And more specifically, I thought that the percentage of innings pitched by a team’s starting rotation would correlate positively to either its W-L record, or more likely, its Pythagorean W-L record.

I gathered five years of data (2009 – 2013 seasons) and calculated the Starting Pitcher Innings Pitched Percentage (SP IP%). This number is simply the number of innings a team’s starters pitched divided by the total innings the team pitched. If a starter was used in relief, those innings didn’t count. I only looked at AL teams, because I assumed that NL starting pitchers could be pulled from games prematurely for tactical, pinch-hitting purposes, while AL starters were likely to stay in games as long as they weren’t giving up runs, fatigued, or injured.
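The calculation itself is trivial; here’s a minimal sketch. The 2013 Twins’ total-IP figure is back-solved from the 60.06 SP IP% reported later in the piece, purely for illustration.

```python
def sp_ip_pct(starter_ip, total_ip):
    """Percentage of a team's innings thrown by pitchers while starting."""
    return 100 * starter_ip / total_ip

# 2013 Twins: 871 IP from starters out of roughly 1,450 total team innings
print(round(sp_ip_pct(871, 1450.0), 2))  # ~60.07, matching the outlier below
```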

Two things struck me about the results:

1. There was little correlation between a team’s SP IP% and its W-L record, or between its SP IP% and its Pythagorean W-L record

2. The data showed little variance and was normally distributed

I looked at 71 AL team seasons from 2009-2013 and found that, on average, AL teams used starting pitchers for 66.8% of innings, with a standard deviation of 2.83%. The data followed a fairly normal distribution, with team SP IP% breaking down as follows:

Standard Deviations # of Teams % of Total Teams
-2 or lower 2 2.82%
-1 to -2 10 14.08%
-1 to 0 22 30.99%
0 to 1 26 36.62%
1 to 2 10 14.08%
2 or higher 1 1.41%

Over two-thirds of the teams (48 of 71) fell within the range of 63.6 to 69.2 SP IP%, which is much less variance than I expected to find. Only three seasons fall outside two standard deviations of the mean: two outliers on the negative end and one on the positive end. Those teams are:

Negative Outliers:

2013 Minnesota Twins: 60.06 SP IP%

2013 Chicago White Sox: 60.25 SP IP%

Positive Outlier:

2011 Tampa Bay Rays: 73.02 SP IP%

Taken at the extremes, these numbers show a huge gap in the number of innings teams got out of their starters. Minnesota, for example, got only 871 innings out of its starters in 2013, while the 2011 Tampa Bay Rays got 1,058 innings in a season with fewer overall innings pitched. Another way of conceptualizing it: Minnesota starters averaged just over 5 1/3 innings of each nine-inning game in 2013, while the 2011 Rays starters averaged nearly 6 2/3 innings. But when the sample is viewed as a whole, the number of innings is quite consistent, as seen in this graph of SP IP% for the last five years:

[Scatter plot: SP IP% for all AL team seasons, 2009-2013]

The correlation between SP IP% and team success (measured via W-L or Pythagorean W-L) was minimal; the Pearson coefficients were .1692 and .1625, respectively. Team victories depend on too many variables to isolate a connection between team success (measured via team wins) and SP IP%; a runs scored/runs allowed formula for calculating W-L record was barely an improvement over the traditional W-L measurement. Teams like the Seattle Mariners exemplify the issue with correlating the variables: their starters have thrown above-average numbers of innings in most of the years in the study, but the team has rarely finished with a winning record.

What I did find, to my surprise, was a relatively narrow range of SP IP% over the last five years, with teams distributed normally around an average of 66% of innings. In the future, it might be helpful to expand the sample, or look at a historic era to see how the SP IP% workload has changed over time. The relative consistency of SP IP% over five seasons and across teams could make this metric useful for future studies of pitching workloads, even if these particular correlations proved unsuccessful.