What if: Prince Fielder Were an Everyday Shortstop?

I was recently involved in an online discussion of the Prince Fielder/Ian Kinsler trade and the signing of Jhonny Peralta by the St. Louis Cardinals. Someone stated that Peralta was no more than a utility infielder who could sometimes hit. I pointed out that, over the last three seasons, Peralta was actually a top-five SS. Someone else stated that Prince, were he to play SS, would also be a top-five SS. I thought that was ridiculous, but decided I’d try to look at it as objectively as possible.

Over the last three seasons, Fielder has 111 batting runs, -18 base running runs, 61 replacement runs, -10 fielding runs and -37 positional runs, for 107 total runs.

If we assume that his batting, base running and overall playing time would stay the same, which is probably an optimistic assumption given the likely additional strain of playing SS instead of 1B, then we only need to adjust his positional and defensive runs.

The positional adjustment is the easiest part. The adjustment for 1B is -12.5 runs per 1350 innings; the adjustment for SS is +7.5 runs per 1350 innings. Fielder’s -37 positional runs represent (-37 / -12.5) 3.0 defensive seasons. Three defensive seasons at SS are worth (3 * 7.5) roughly 23 runs.
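If it helps to see the arithmetic spelled out, here’s a minimal sketch in Python; the per-1350-inning values are the standard positional adjustments quoted above, and everything else is just the division and multiplication from this paragraph:

```python
# Sketch of the positional-adjustment swap described above.
POS_ADJ_1B = -12.5   # runs per 1350 defensive innings at first base
POS_ADJ_SS = +7.5    # runs per 1350 defensive innings at shortstop

fielder_pos_runs_1b = -37.0   # Fielder's actual 2011-13 positional runs

# How many full defensive seasons do his 1B positional runs represent?
defensive_seasons = fielder_pos_runs_1b / POS_ADJ_1B     # ~2.96

# The same playing time at shortstop instead:
fielder_pos_runs_ss = defensive_seasons * POS_ADJ_SS     # ~22.2 runs

print(round(defensive_seasons, 1), round(fielder_pos_runs_ss, 1))
# 3.0 22.2 -> the article rounds to 3.0 seasons first, so 3.0 * 7.5 = 22.5 ~ 23
```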

At this point Fielder at SS is worth 111 batting runs, -18 base running runs, -10 fielding runs (still his fielding relative to first basemen), 23 positional runs and 61 replacement runs. That’s 167 runs all told. That’d make him, by far, the best SS in the league. Troy Tulowitzki has 114 runs.

But we still haven’t factored in Fielder’s defense compared to the average SS. I’m not really sure that we can.

Fielder has been about six runs worse than the average 1B each season of his career. But the average SS is a much better defensive player than the average 1B.

I think it’s safe to assume that Fielder would be the worst defensive SS in baseball.

Since 2002, the UZR era, the worst season by a SS (minimum 650 innings, about half a season) is Dee Gordon’s 2012 season in which UZR says he was worth -27 runs per 1350 innings.

That’s a somewhat amusing comparison. Dee Gordon is listed at 5’11” 160 lbs. Prince is listed at 5’11” 275 lbs. Those are listed weights and I think it’s entirely possible that Prince weighs twice as much as Gordon.

I’m going to go out on a limb and say that Prince would be a worse defensive SS than Gordon. I’d go so far as to say that he would be considerably worse. But how much is considerably?

UZR can be broken down into different components:
Range runs – attempts to measure a player’s range: how many balls he does/doesn’t get to compared to average.
Error runs – attempts to measure how many runs a player saves/costs his team by avoiding/making errors.
Double play runs – attempts to measure how many runs a player saves/costs his team by turning/not turning double plays.

I’m going to assume that Fielder would be the worst at all three of the above. So, what would that look like for Fielder’s overall defensive worth at SS?

It’s worth noting here that most of Gordon’s poor UZR was due to making errors; his range and double-play numbers were bad, but not historically bad. His errors were.

The worst SS in terms of double play runs (per 1350 innings) was, go figure, 2012 Dee Gordon at -5 runs per 1350 innings. If we say that Fielder would merely be as bad as Gordon (I’ve little doubt he’d actually be worse), that’s (3 * -5) -15 runs over the three seasons.

The worst SS in terms of range runs was, not surprisingly, 2012 Derek Jeter at -17.5 runs per 1350 innings. Anyone think that Fielder has Jeter’s range? I don’t. But if we give Fielder three seasons as poor as Jeter’s 2012, that’s (3 * -17.5) about -53 runs for the three seasons.

The worst SS in terms of error runs was, bet you guessed it, 2012 Dee Gordon at -13 runs per 1350 innings. Again, I think that Dee’s footwork and hands around 2B would be much better than Fielder’s, but if we say that Fielder was only as bad as Gordon, he’d be worth (3 * -13) -39 runs over the three seasons.

If we add all of that up (remembering that this is, I believe, an optimistic look at Fielder’s possible performance at SS), we get Fielder being (-15 - 53 - 39) -107 runs worse than the average SS. Quite a bit worse than Gordon’s -27 runs.

Let’s add that to his other performance from above:
111 batting runs, -18 base running runs, -107 fielding runs, 23 positional runs, 61 replacement runs = 71 total runs.

71 total runs between 2011 and 2013 would have put Fielder 12th among major league SS, between Hanley Ramirez (84 runs) and Marco Scutaro (70 runs), and worth about 2.5 WAR per season.
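Summing those components and converting runs to wins makes the 2.5-WAR-per-season figure easy to check; the ~9.5 runs per win below is a standard approximation on my part, not a factor stated in the article:

```python
# Total value runs for Fielder-at-SS, 2011-13, from the components above,
# then converted to WAR assuming roughly 9.5 runs per win.
batting, baserunning, fielding, positional, replacement = 111, -18, -107, 23, 61

total_runs = batting + baserunning + fielding + positional + replacement
war = total_runs / 9.5        # combined 2011-13 WAR
war_per_season = war / 3

print(total_runs, round(war, 1), round(war_per_season, 1))
# 70 7.4 2.5 - the article's 71 presumably reflects unrounded components
```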

To emphasize again, I think these are the most ridiculously optimistic assumptions that I can present with a straight face. I think it much more likely that Fielder would be a -50 (per 1350 innings) or worse SS were he to play there every day. And that’s not to mention the additional strain on his body, which would decrease his hitting, his baserunning, and his ability to play every day.


Thoughts on the MVP Award: Team-Based Value and Voter Bias

You are reading this right now.  That is a fact.  Since you are reading this right now, many things can be reasonably inferred:

1. You probably read FanGraphs at least fairly often.

2. Since you probably read FanGraphs at least fairly often, you probably know that there are a lot of differing opinions on the MVP award and that many articles here in the past week have been devoted to it.

3. You probably are quite familiar with sabermetrics.

4. You probably are either a Tigers fan or think that Mike Trout should have won MVP, or both.

5. You might know that Josh Donaldson got one first-place vote.

6. You might even know that the first-place vote he got came from a voter from Oakland.

7. You might know that Yadier Molina got two first-place votes, and they both came from voters from St. Louis.

8. You might even know that one of the voters who put Molina first on his ballot put Matt Carpenter second.

9. You might be wondering if there is any truth to the idea that Miguel Cabrera is much more important to his team than Mike Trout is.

I have thought about many of those things myself.  So, in this very long 2-part article, I am going to discuss them.  Ready?  Here goes:

Part 1: How much of an impact does a player have on his team?

Lots of people wanted Miguel Cabrera to win the MVP award. Some of you reading this may be shocked, but it’s actually true. One of the biggest arguments for Miguel Cabrera over Mike Trout for MVP is that Cabrera was much more important and “valuable” than Trout. Cabrera’s team made the playoffs. Trout’s team did not. Therefore, anything Trout did can’t have been important – or, let’s say, too important. I don’t think anybody’s claiming that Trout had zero impact on the game of baseball or the MLB standings whatsoever.

OK. That’s reasonable. There’s nothing flawed about that thinking when it’s not used as a rationale for voting Cabrera ahead of Trout for MVP. As just a general idea, it makes sense: Cabrera had a bigger impact on baseball this year than Trout did. I, along with many other people in the sabermetric community, disagree that this is a reason to vote for Cabrera, though. But the question I’m going to ask is this: did Cabrera have a bigger impact on his own team than Trout did?

WAR tells us no. Trout had 10.4 WAR, tops in MLB. Cabrera had 7.6 – a fantastic number, good for 5th in baseball and 3rd in the AL, as well as his own career high – but clearly not as high as Trout’s. Miggy’s hitting was out of this world, at least until September, and it’s pretty clear that he could have topped 8 WAR easily had he stayed healthy through the final month and been just as productive as he was April through August. But, fact is, he did get hurt, and did not finish with a WAR as high as Trout’s. So if they were both replaced with a replacement player, the Angels would suffer more than the Tigers. Cabrera was certainly valuable – replace him with a replacement player and the Tigers lose 7 or 8 wins, probably enough to cost them the AL Central. But take Trout out, and the Angels go from a mediocre-to-poor team to a really bad one. The Angels had 78 wins this year, and that would have been around 68 (if we trust WAR) without Trout. That would have been the 6th-worst total in the league. So, by WAR, Trout meant more to his team than Cabrera did.

But WAR is not the be all and end all of statistics (though we may like to think it is sometimes).  Let’s look at this from another angle.  Here’s a theory for you: the loss of a key player on a good team would probably not hurt that team as much because they’re already good to begin with.  If a not-so-good team loses a key player, though, the other players on the team aren’t as good so they can’t carry the team very well.

How do we test this theory?  Well, we have at our disposal a fairly accurate and useful tool to determine how many wins a team should get.  That tool is pythagorean expectation – a way of predicting wins and losses based on runs scored and allowed.  So let’s see if replacing Trout with an average player (I am using average and not replacement because all the player run values given on FanGraphs are above or below average, not replacement) is more detrimental to the Angels than replacing Cabrera with an average player is to the Tigers.

The Angels, this year, scored 733 runs and allowed 737.  Using the Pythagenpat (sorry to link to BP but I had to) formula, I calculated their expected win percentage, and it came out to .497 – roughly 80.6 wins and 81.4 losses*.  That’s actually significantly better than they did this year, which is good news for Angels fans.  But that’s not the focus right here.

Trout, this year, added 61.1 runs above average at the plate and 8.1 on the bases for a total of 69.2 runs of offense.  He also saved 4.4 runs in the field (per UZR).  So, using the Pythagenpat formula again with adjusted run values for if Trout were replaced by an average hitter and defender (663.8 runs scored and 741.4 runs allowed), I again calculated the Angels’ expected win percentage.  This came out to be .449 – roughly 72.7 wins and 89.3 losses.  7.9 fewer wins than the original one.  That’s the difference, for that specific Angels team, that Trout made.  Now, keep in mind, this is above average, not replacement, so it will be lower than WAR by a couple wins (about two WAR signifies an average player, so wins above average will be about two less than wins above replacement).  7.9 wins is a lot.  But is it more than Cabrera?
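If you want to reproduce those win totals, here’s a sketch of the Pythagenpat step. The 0.287 exponent is the commonly cited Pythagenpat constant; it reproduces the numbers above, though I can’t confirm it’s exactly what was used here:

```python
# Pythagenpat expected win percentage: the exponent scales with the run
# environment instead of being fixed at 2.
def pythagenpat_wpct(rs, ra, games=162):
    exponent = ((rs + ra) / games) ** 0.287   # commonly cited constant
    return rs ** exponent / (rs ** exponent + ra ** exponent)

# 2013 Angels as-is:
base = pythagenpat_wpct(733, 737)                    # ~.497 -> 80.6 wins

# Angels with Trout's 69.2 offensive runs and 4.4 fielding runs
# replaced by league average:
no_trout = pythagenpat_wpct(733 - 69.2, 737 + 4.4)   # ~.449 -> 72.7 wins

print(round(base * 162, 1), round(no_trout * 162, 1),
      round((base - no_trout) * 162, 1))
# 80.6 72.7 7.9
```

The same function with the Tigers’ 796/624 totals, minus Cabrera’s 67.7 offensive runs and plus his -16.8 fielding runs, gives the 4.7-win difference described below.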

Let’s see.  This year, the Tigers scored 796 runs and allowed 624.  This gives them a pythagorean expectation (again, Pythagenpat formula) of a win percentage of .612 – roughly 99.1 wins and 62.9 losses.  Again much better than what they did this year, but also not the focus of this article.  Cabrera contributed 72.1 runs above average hitting and  4.4 runs below average on the bases for a total of 67.7 runs above average on offense.  His defense was a terrible 16.8 runs below average.

Now take Cabrera out of the equation.  With those adjusted run totals (728.3 runs scored and 607.2 runs allowed) we get  a win percentage of .583 – 94.4 wins and 67.6 losses.  A difference of 4.7 wins from the original.

Talk about anticlimactic.  Trout completely blew Cabrera out of the water (I would say no pun intended, but that was intended).  This makes sense if we think about it – a team with more runs scored will be hurt less by x fewer runs because they are losing a lower percentage of their runs.  In fact, if we pretend the Angels scored 900 runs this year instead of 733, they go from a 96.5-win team with Trout to an 89.8-win team without.  Obviously, they are better in both cases, but the difference Trout makes is only 6.7 wins – pretty far from the nearly 8 he makes in real life.

The thing about this statistic is that it penalizes players on good teams. Generally,  statistics such as the “Win” for pitchers are frowned upon because they measure things that the pitcher can’t control – just like this one.  But if we want to measure how much a team really needs a player, which is pretty much the definition of value, I think this does a pretty good job. Obviously, it isn’t perfect: the numbers that go into it, especially the baserunning and fielding ones, aren’t always completely accurate, and when looking at the team level, straight linear weights aren’t always the way to go; overall, though, this stat gives a fairly accurate picture.  The numbers aren’t totally wrong.

Here’s a look at the top four vote-getters from each league by team-adjusted wins above average (I’ll call it tWAA):

Player tWAA
Mike Trout 7.9
Andrew McCutchen 6.4
Paul Goldschmidt 6.2
Chris Davis 6.1
Josh Donaldson 4.9
Miguel Cabrera 4.7
Matt Carpenter 4.0
Yadier Molina 3.1

This is interesting. As expected, the players on better teams have a lower tWAA than the ones on worse teams, just as we discussed earlier. One notable player is Yadier Molina, who, despite being considered one of the best catchers in the game – if not the best – has the lowest tWAA of anyone on that list. This may be because he missed some time. But let’s look a little closer: if we add the 2 wins that an average player would provide over a replacement-level player, we get 5.1 WAR, which isn’t so far off of his 5.6 total from this year. And the Cardinals’ pythagorean expectation was 101 wins, so obviously under this system he won’t be credited as much, because his runs aren’t as valuable to his team. Another factor is that we’re not adjusting by position here (except for the fielding part), and Molina is worth more runs offensively above the average catcher than he is above the average hitter, since catchers generally aren’t as good at hitting. But if Molina were replaced with an average catcher, I’m fairly certain that the Cardinals would lose more than the 3 games this number suggests. They might miss Molina’s game-calling skills – if such a thing exists – and there’s no way to quantify how much Molina has helped the Cardinal pitchers improve, especially since they have so many rookies. But there’s also something else, something we can quantify, even if not perfectly. And that’s pitch framing. Let’s add the 19.8 runs that Molina saved by framing (as measured by StatCorner) to his defensive runs saved. (For Molina’s defense, by the way, I used the Fielding Bible’s DRS, since there is no UZR for catchers. That may be another reason his number seems out of place: DRS and UZR don’t always agree – Trout’s 2013 UZR was 4.4, while his DRS was -9. Molina did also play 18 innings at first base, where he had a UZR of -0.2; we’ll ignore that, though, since it’s such a small sample size and won’t make much of a difference.)

Here is the table with only Molina’s tWAA changed, to account for pitch framing:

Player tWAA
Mike Trout 7.9
Andrew McCutchen 6.4
Paul Goldschmidt 6.2
Chris Davis 6.1
Yadier Molina 5.4
Josh Donaldson 4.9
Miguel Cabrera 4.7
Matt Carpenter 3.9

Now we see Molina move up into 5th place out of 8 with a much better tWAA of 5.4 – more than 2 wins better than without the pitch framing, and about 7.4 WAR if we want to convert from wins above average to wins above replacement.  Interesting. I don’t want to get into a whole argument now about whether pitch framing is accurate or actually based mostly on skill instead of luck, or whether it should be included in a catcher’s defensive numbers when we talk about their total defense. I’m just putting that data out there for you to think about.

But as I mentioned before, I used DRS for Molina and not UZR. What if we try to make this list more consistent and use DRS for everyone? (We can’t use UZR for everyone.)  Let’s see:

Player tWAA DRS UZR
Mike Trout 6.5 -9 4.4
Andrew McCutchen 6.4 7 6.9
Paul Goldschmidt 7.0 13 5.4
Chris Davis 5.5 -7 -1.2
Molina w/ Framing 5.4 31.8 N/A
Josh Donaldson 5.0 11 9.9
Miguel Cabrera 4.6 -18 -16.8
Matt Carpenter 4.1 0 -0.9
Yadier Molina 3.1 12 N/A

We see Trout go down by almost a win and a half here. I don’t really trust that, though, because I really don’t think that Mike Trout is a significantly below average fielder, despite what DRS tells me. DRS actually gave Trout a rating of 21 in 2012, so I don’t think it’s as trustworthy. But for the sake of consistency, I’m showing you those numbers too, with the DRS and UZR comparison so you can see why certain people lost/gained wins.

OK. So I think we have a pretty good sense for who was most valuable to their teams. But I also think we can improve this statistic a little bit more. Like I said earlier, the hitting number I use – wRAA – is based off of league average, not off of position average. In other words, if Chris Davis is 56.3 runs better than the average hitter, but we replace him with the average first baseman, that average first baseman is already going to be a few runs better than the average player. So what if we use weighted runs above position average? wRAA is calculated by subtracting the league-average wOBA from a player’s wOBA, dividing by the wOBA scale, and multiplying by plate appearances. What I did was subtract the position average wOBA from the player’s wOBA instead. So that penalizes players at positions where the position average wOBA is high.
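In code, the position-average tweak looks something like the sketch below; the wOBA values and league constants are placeholders for illustration, not the actual 2013 figures:

```python
# wRAA from wOBA, with the league-average baseline swapped for a
# position-average baseline. All numeric values are illustrative.
def wraa(woba, baseline_woba, woba_scale, pa):
    """Weighted runs above average versus the given baseline."""
    return (woba - baseline_woba) / woba_scale * pa

player_woba, pa = 0.360, 650
lg_woba, woba_scale = 0.314, 1.25   # placeholder league constants
pos_woba = 0.330                    # placeholder 1B positional average

standard = wraa(player_woba, lg_woba, woba_scale, pa)   # vs. all hitters
pos_adj = wraa(player_woba, pos_woba, woba_scale, pa)   # vs. same-position hitters

print(round(standard, 1), round(pos_adj, 1))  # 23.9 15.6
```

Because the first-base baseline is higher than the league baseline, the first baseman’s run total shrinks, which is exactly the penalty described above.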

Here’s your data (for the defensive numbers I used UZR because I think it was better than DRS, even though the metric wasn’t the same for everyone):

Player position-adj. tWAA Pos-adj. wRAA wRAA
Trout 7.7 59.4 61.1
McCutchen 6.2 40.1 41.7
Molina w/ Framing 5.6 23.3 20.5
Goldschmidt 5.0 39.5 50.1
Davis 5.0 46.4 56.3
Donaldson 4.9 36.6 36.7
Cabrera 4.7 72.0 72.1
Carpenter** 4.3 41.7 37.8
Molina 3.4 23.3 20.5

I included here both the regular and position-adjusted wRAA for all players for reference. Chris Davis and Paul Goldschmidt suffered pretty heavily – each lost over a win of production – because the average first baseman is a much better hitter than the average player. Molina got a little better, as did Carpenter, because they play positions where the average player isn’t as good offensively. Everyone else stayed almost the same, though.

I think this position-adjusted tWAA is probably the most accurate. And I would also use the number with pitch framing included for Molina. It’s up to you to decide which one you like best – if you like any of them at all. Maybe you have a better idea, in which case you should let me know in the comments.

 Part 2: Determining voter bias in the MVP award

As I mentioned in my introduction, Josh Donaldson got one first-place MVP vote – from an Oakland writer. Yadier Molina got 2 – both from St. Louis writers. Matt Carpenter got 1 second-place vote – also from a St. Louis writer. Obviously, voters have their bias when it comes to voting for MVP. But how much does that actually matter?

The way MVP voting works is that for each league, AL and NL, two sportswriters who are members of the BBWAA are chosen from each location that has a team in that league – 15 locations per league times 2 voters per location equals 30 voters total for each league. That way you won’t end up with too many (or too few) voters from any one place who might be biased one way or another.

But is there really voter bias?

In order to answer this question, I took all the players who received MVP votes this year (of which there were 49) and measured how many points each of them got per 2 voters***. Then I took the number of points each of them got from the voters from their own chapter and found the difference. Here’s what I found:

AL:

Player, Club City Points Points/2 voter Points From City voters % Homer votes Homer difference
Josh Donaldson, Athletics OAK 222 14.80 22 9.91% 7.20
Mike Trout, Angels LA 282 18.80 23 8.16% 4.20
Evan Longoria, Rays TB 103 6.87 11 10.68% 4.13
David Ortiz, Red Sox BOS 47 3.13 7 14.89% 3.87
Adam Jones, Orioles BAL 9 0.60 3 33.33% 2.40
Miguel Cabrera, Tigers DET 385 25.67 28 7.27% 2.33
Coco Crisp, Athletics OAK 3 0.20 2 66.67% 1.80
Edwin Encarnacion, Blue Jays TOR 7 0.47 2 28.57% 1.53
Max Scherzer, Tigers DET 25 1.67 3 12.00% 1.33
Salvador Perez, Royals KC 1 0.07 1 100.00% 0.93
Koji Uehara, Red Sox BOS 2 0.13 1 50.00% 0.87
Chris Davis, Orioles BAL 232 15.47 16 6.90% 0.53
Adrian Beltre, Rangers TEX 99 6.60 7 7.07% 0.40
Yu Darvish, Rangers TEX 1 0.07 0 0.00% -0.07
Felix Hernandez, Mariners SEA 1 0.07 0 0.00% -0.07
Shane Victorino, Red Sox BOS 1 0.07 0 0.00% -0.07
Jason Kipnis, Indians CLE 31 2.07 2 6.45% -0.07
Torii Hunter, Tigers DET 2 0.13 0 0.00% -0.13
Hisashi Iwakuma, Mariners SEA 2 0.13 0 0.00% -0.13
Greg Holland, Royals KC 3 0.20 0 0.00% -0.20
Carlos Santana, Indians CLE 3 0.20 0 0.00% -0.20
Jacoby Ellsbury, Red Sox BOS 3 0.20 0 0.00% -0.20
Dustin Pedroia, Red Sox BOS 99 6.60 5 5.05% -1.60
Manny Machado, Orioles BAL 57 3.80 2 3.51% -1.80
Robinson Cano, Yankees NY 150 10.00 8 5.33% -2.00

NL:

Player, Club City Points Points/2 voter Points from City Voters % Homer votes Homer difference
Yadier Molina, Cardinals STL 219 14.60 28 12.79% 13.40
Hanley Ramirez, Dodgers LA 58 3.87 7 12.07% 3.13
Joey Votto, Reds CIN 149 9.93 13 8.72% 3.07
Allen Craig, Cardinals STL 4 0.27 3 75.00% 2.73
Jayson Werth, Nationals WAS 20 1.33 4 20.00% 2.67
Hunter Pence, Giants SF 7 0.47 3 42.86% 2.53
Yasiel Puig, Dodgers LA 10 0.67 3 30.00% 2.33
Matt Carpenter, Cardinals STL 194 12.93 15 7.73% 2.07
Andrelton Simmons, Braves ATL 14 0.93 2 14.29% 1.07
Paul Goldschmidt, D-backs ARI 242 16.13 17 7.02% 0.87
Michael Cuddyer, Rockies COL 3 0.20 1 33.33% 0.80
Andrew McCutchen, Pirates PIT 409 27.27 28 6.85% 0.73
Clayton Kershaw, Dodgers LA 146 9.73 10 6.85% 0.27
Craig Kimbrel, Braves ATL 27 1.80 2 7.41% 0.20
Russell Martin, Pirates PIT 1 0.07 0 0.00% -0.07
Matt Holliday, Cardinals STL 2 0.13 0 0.00% -0.13
Buster Posey, Giants SF 3 0.20 0 0.00% -0.20
Adam Wainwright, Cardinals STL 3 0.20 0 0.00% -0.20
Adrian Gonzalez, Dodgers LA 4 0.27 0 0.00% -0.27
Troy Tulowitzki, Rockies COL 5 0.33 0 0.00% -0.33
Shin Soo Choo, Reds CIN 23 1.53 1 4.35% -0.53
Jay Bruce, Reds CIN 30 2.00 1 3.33% -1.00
Carlos Gomez, Brewers MIL 43 2.87 1 2.33% -1.87
Freddie Freeman, Braves ATL 154 10.27 8 5.19% -2.27

Where points is total points received, points/2 voter is points per two voters (points/15), points from city voters is points received from the voters in the player’s city, % homer votes is the percentage of a player’s points that came from voters in his city, and homer difference is the difference between points/2 voter and points from city voters. Charts are sorted by homer difference.
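To make those definitions concrete, here’s the arithmetic behind Donaldson’s AL line as a quick sketch:

```python
# Reproducing the table columns for one player. Each league has 30
# voters (2 per city, 15 cities), so "points per 2 voters" is the
# total divided by 15. Donaldson's AL numbers from the table above:
total_points, city_points, voter_pairs = 222, 22, 15

points_per_2_voters = total_points / voter_pairs      # 14.80
pct_homer_votes = city_points / total_points          # 9.91%
homer_difference = city_points - points_per_2_voters  # 7.20

print(round(points_per_2_voters, 2),
      f"{pct_homer_votes:.2%}",
      round(homer_difference, 2))
# 14.8 9.91% 7.2
```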

I don’t know that there’s all that much we can draw from this. Obviously, voters are more likely to vote for players from their own city, but that’s to be expected. Voting was a little bit less biased in the AL – the average player received exactly 1 point more from his own city’s voters than a typical pair of voters gave him, whereas that number in the NL was 1.21. 8.08% of all votes in the AL came from homers, compared to 8.31% in the NL. If you’re wondering which cities were the most biased, here’s a look:

AL:

City Points Points/2 voter Points From City voters Difference
OAK 225 15.00 24 9.00
LA 282 18.80 23 4.20
TB 103 6.87 11 4.13
DET 412 27.47 31 3.53
BOS 152 10.13 13 2.87
TOR 7 0.47 2 1.53
BAL 298 19.87 21 1.13
KC 4 0.27 1 0.73
TEX 100 6.67 7 0.33
SEA 3 0.20 0 -0.20
CLE 34 2.27 2 -0.27
NY 150 10.00 8 -2.00

NL:

City Points Points/2 voters Points From City Voters Difference
STL 422 28.13 46 17.87
LA 218 14.53 20 5.47
WAS 20 1.33 4 2.67
SF 10 0.67 3 2.33
CIN 202 13.47 15 1.53
ARI 242 16.13 17 0.87
PIT 410 27.33 28 0.67
COL 8 0.53 1 0.47
ATL 195 13.00 12 -1.00
MIL 43 2.87 1 -1.87

Where all these numbers are just the sum of the individual numbers for all players in that city.

If you’re wondering what players have benefited the most from homers in the past 2 years, check out this article by Reuben Fischer-Baum over at Deadspin’s Regressing that I found while looking up more info. He basically used the same method I did, only for 2012 as well (the first year that individual voting data was publicized).

So that’s all for this article. Hope you enjoyed.

———————————————————————————————————————————————————–

*I’m using fractions of wins because that gives us a more accurate number for the statistic I introduce by measuring it to the tenth and not to the single digit. Obviously a team can’t win .6 games in real life but we aren’t concerned with how many games the team won in real life, only their runs scored and allowed.

**Carpenter spent time both at second base and third base, so I used the equation (Innings played at 3B*average wOBA for 3rd basemen + Innings played at 2B*average wOBA for 2nd basemen)/(Innings played at 3B + Innings played at 2B) to get Carpenter’s “custom” position-average wOBA. He did play some other positions too, but very few innings at each of them so I didn’t include those.  It came out to about .307.
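As a sketch, that blend is just an innings-weighted average; the innings and wOBA values below are hypothetical, chosen only to land near the ~.307 reported above:

```python
# Innings-weighted position-average wOBA, as described in the footnote.
# The sample innings and wOBA figures are made up for illustration.
def blended_position_woba(parts):
    """parts: list of (innings, position_average_woba) pairs."""
    total_innings = sum(inn for inn, _ in parts)
    return sum(inn * woba for inn, woba in parts) / total_innings

# e.g. mostly third base, some second base (hypothetical numbers):
print(round(blended_position_woba([(900, 0.312), (350, 0.302)]), 3))
# 0.309
```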

***Voting is as such: Each voter puts 10 people on their ballot, with the points going 14-9-8-7-6-5-4-3-2-1.


Power and Patience (Part V of a Study)

One, two, one, two, three, four.

Sorry. Those were links to the first four parts. Anyway, now it’s time for this series to come full circle. This final piece isn’t really much of an analysis, but more of a potpourri of interesting trivia. Trivia’s where these five weeks started, after all. Hopefully there was sufficient analytical substance to the first four parts. (Or any.)

Here is an interesting tidbit to start: only two batting title qualifiers have ever had a higher ISO than OBP in a season. One was Barry Bonds in his insane, 73-HR 2001 season (.536 ISO, .515 OBP–I told you it was insane). The other was Matt Williams in 1994. Take a look at the 1994 OBP and ISO scatter chart among qualifiers, with a line of y=x for reference:

I trust you to figure out which one belongs to the current manager of the Washington Nationals. He had a .319 OBP and a .339 ISO that season. (And, FYI, that lonely dot in the lower left belongs to 24-year-old Twins catcher Matt Walbeck and his .204/.246/.284 in 359 PA. And that one insanely close to the .500 OBP? Frank Thomas.)

And Barry Bonds’s 2001? Well, just take a look:

Yeah.

(I kind of wanted just to show that chart.)

Given that only two players ever had a single season, let alone a career, with a higher ISO than OBP, a good way to measure a player’s relative prowess at each facet of hitting is to look at the gap between those statistics.

Care to guess the player with a career OBP below the historical average of .333 who has the smallest gap between his career OBP and ISO? To the surprise of nobody, it’s:

Dave Kingman

Kingman posted a career .302 OBP and .242 ISO, making him the ultimate in empty power. By Kingman’s last year, 1986 with Oakland, all he could do was hit home runs. He had 35, while hitting .210/.255/.431, which even in 1986 was only good for a wRC+ of 86. Kingman also has the 2nd-highest ISO, period, among those with a sub-.333 OBP, behind Russell Branyan (.253 ISO, .329 OBP).

Expand this list, by the way, and it feels like a pretty accurate indicator of players who provided solid and at times even great power, but weren’t great offensive players. The top 10: Kingman, Steve Balboni, Ron Kittle, Branyan, Tony Armas, Alfonso Soriano, Dick Stuart, Matt Williams, Tony Batista and Mark Reynolds. The debuts of those players range from 1958 (Stuart) to 2007 (Reynolds), so this phenomenon is not exactly a 21st century one. It does, however, divide pretty well along pre- and post-expansion lines.

Among players who debuted before Stuart, the next smallest gap here belongs to a Hall of Famer: Ernie Banks, with a .330 OBP and .226 ISO. He’s 18th on the list, so that’s about where the last paragraph’s thesis breaks down. During his career, 1953-71, the league-wide non-pitcher OBP was .329, so Banks was about average reaching base, but provided a ton of value from his years at shortstop and his power (1953-71 ISO: .135).

Wally Post is 19th, and he debuted in 1949, making him the top pre-1950 debut on the OBP-minus-ISO list. The smallest gap belonging to someone who debuted before 1940 is DiMaggio’s – he debuted in 1937 and ended up with a .324 OBP and .164 ISO in his 10 seasons with the Bees, Reds, Pirates and Giants. We’re talking, of course, about Vince DiMaggio, not Dom.

Go back all the way to 1901 and you find the career of:

Albert Samuel “Hobe” Ferris

Hobe Ferris played from 1901-09 and never led the league in home runs, but was in the top 7 five times in a nine-year career on his way to 40 career home runs. His .102 career ISO came in a time frame when league-wide non-pitcher ISO was .077, but he only produced a career .265 OBP (vs. the league’s .310). A second- and third-baseman with a good defensive reputation (backed up today by his +70 career fielding runs on Baseball Reference), he also may have been the first power threat in MLB history who didn’t reach base effectively. His best season was actually during the nadir of the dead ball era, his penultimate year in 1908 when he hit .270/.291/.353 for a 109 wRC+. This was mostly due to an unusually efficient year reaching base, but even his .083 ISO was better than the league’s .069.

All-time, however, Ferris’s OBP-ISO gap ranks as just the 166th smallest out of 692 who meet the 3000 PA, sub-.333 thresholds. The 167th smallest belongs to another turn-of-the-century player, the infamous Bill Bergen, who was just bad at everything. In general, you’re just not going to find turn of the century players whose ISO’s are particularly close to their OBP’s, because ISO’s were so low 100 years ago.

To start getting into the other types of players–good OBP, not so good power–let’s remove any cap on the OBP and see what happens at both ends of the list of OBP and ISO gaps. Again, 3000 PA is the cutoff.

10 Lowest Gaps: Kingman, Mark McGwire, Balboni, Kittle, Branyan, Juan Gonzalez, Sammy Sosa, Ryan Howard, Armas, Soriano

10 Highest: Roy Thomas, Miller Huggins, Eddie Stanky, Eddie Collins, Max Bishop, Richie Ashburn, Ferris Fain, Johnny Pesky, Luke Appling, Muddy Ruel

So, apparently Mark McGwire’s .263 career batting average is a little misleading…as in, perhaps the most misleading batting average of all time. He posted a .394 OBP and .325 ISO. The other three players who joined the list once the sub-.333 OBP cap was lifted are Gonzalez, Sosa, and Howard. None of them have spotless resumes, but they are bound to be the 2nd to 4th best hitters on that list in most any ranking of these players, subjective or objective. After Howard, the next few players on this list who had an OBP above .333: Richie Sexson (15th), Albert Belle (20th), Jose Canseco (25th), Andruw Jones (28th) and Greg Vaughn (30th). All probably better hitters than Kingman and certainly better hitters than Balboni.

Meanwhile, Roy Thomas has the highest such difference, with a line from 1901-11 of .282/.403/.329. (He debuted in 1899.) From 1900-06, Thomas led the majors in walks every year except 1905. He hit a fascinating .327/.453/.365 in 1903, for a 138 wRC+.

We might think that everybody with a large gap is from the dead ball era, but such is not the case. Richie Ashburn (1948-62) and Luke Appling (1930-50) carved out Hall of Fame careers. They got away with a lack of power by hitting .300 in their careers. These next two players weren’t career .300 hitters, instead providing value with high walk rates – and how can we talk about players who got on base but didn’t hit for power without them:

Eddie Stanky and Ferris Fain

Stanky (.410 OBP, .080 ISO) played from 1943-53 and Fain (.424 OBP, .106 ISO) from 1947-55, and they might be the two most famous players in MLB history in terms of reaching base without being much of a power threat. They were pioneers of the you’re-never-pitching-around-me-but-I-will-foul-off-pitches-and-work-a-walk-anyway school of hitting, especially Stanky, who only hit .268 and slugged .348 in his career. (Roy Thomas could have been the “pioneer” of this if power were more of a thing when he played.) Stanky’s most striking season in this regard was probably 1946, when he hit .273/.436/.352. Fain, meanwhile, had a .455 OBP and .066 ISO in his last season in 1955.

Just as the first list in this piece lacked many dead-ball era players, this list of large OBP-ISO gaps seems to lack 21st (and late 20th) century players. The first player to debut after 1980 that we meet on the list, in 13th place?

Luis Castillo

Castillo’s offensive production was almost entirely in his .290 batting average. If batting average says little about McGwire, it says almost as little about Castillo, who posted a career .368 OBP and .061 ISO.

The first good hitter on the list (with his career 97 wRC+, Castillo was decidedly average) is Dave Magadan, 23rd, with a .390 OBP and just a .089 ISO. He had a 117 career wRC+. Magadan’s 1995 season with Houston was his wildest as he managed an OBP of .428 with an ISO of just .086.

Two spots below Magadan is one of the three who started us down this month-plus-long path:

Wade Boggs

Boggs had a .328/.415/.443 career line for a 132 wRC+. In his rookie season in 1982 (381 PA), he was already producing a .406 OBP…with an ISO of just .092.

We might as well wrap up with our other two above-.400 OBP, under-.200 ISO players since 1961. Joe Mauer (.405 OBP, .146 ISO) and Rickey Henderson (.401 OBP, .140 ISO) have wRC+’s of 134 and 132 respectively. Their OBP-ISO gaps of .259 and .261 rank among the 200 largest gaps, or roughly the 90th percentile.

There are plenty more angles, more than I can cover, that one could take with this. At this link you can find the list of players with 3000 PA since 1901, ordered from the largest OBP-ISO to the smallest, with extra stats (as I didn’t change or remove the default dashboard stats).


The R.A. Dickey Effect – 2013 Edition

It is widely said, by announcers and baseball fans alike, that knuckleball pitchers can throw hitters off their game and leave them in funks for days. Some managers even sit certain players to avoid this effect. I decided to dig into the data to determine whether there really is an effect and what its value is. R.A. Dickey is the main knuckleballer in the game today, and he is a special breed, given the extra velocity on his knuckleball.

Most people who try to analyze this Dickey effect tend to lump all the pitchers who follow him into one group with one ERA, and compare that to the total ERA of the bullpen or rotation. This is a simplistic and non-descriptive way of analyzing the effect, and it does not account for how those pitchers perform when they are not pitching after Dickey.

Dickey’s Dancing Knuckleball (@DShep25)

I decided to determine whether there truly is an effect on the statistics (ERA, WHIP, K%, BB%, HR%, and FIP) of pitchers who follow Dickey in relief, and of the starters of the next game against the same team. I went through every game Dickey has pitched and recorded the stats (IP, TBF, H, ER, BB, K) of each reliever individually, along with the stats of the next game’s starting pitcher if that game was against the same team. I did this for each season. I then took each pitcher’s stats for the whole year and subtracted his following-Dickey stats, leaving his stats when he did not follow Dickey. I summed the following-Dickey stats, weighting each pitcher by the batters he faced after Dickey over the total batters faced after Dickey, and calculated the rate stats from that total. The same weights were then applied to the not-after-Dickey stats. So, for example, if Janssen faced 19.11% of the batters after Dickey, his line was adjusted so that he also faced 19.11% of the batters not after Dickey. This gives an effective way of comparing the statistics, so an accurate relationship can be determined. The not-after-Dickey stats were then summed, and their rate stats calculated as well. The two sets of rate stats were compared using the formula (afterDickeySTAT - notafterDickeySTAT) / notafterDickeySTAT, which tells me, as a percentage, how much better or worse relievers or starters did when following Dickey.

I then added the after-Dickey stats for starters and relievers from all four years, along with the not-after-Dickey stats, and applied the same weighting technique: if Niese ’12 faced 10.9% of all starter batters faced following a Dickey start against the same team, his line was adjusted so that he faced 10.9% of the batters faced by starters not after Dickey (counting only the starters who pitched after Dickey that season). The same technique was used as in the year-to-year comparison, and a total percentage change for each stat was calculated.
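Here’s a minimal sketch of that weighting scheme, with two made-up relievers standing in for the real sample; the field names and numbers are illustrative only:

```python
# Compare a pooled after-Dickey K rate against a not-after-Dickey K rate
# in which each pitcher is re-weighted to his after-Dickey share of
# batters faced. Both relievers below are invented for illustration.
def weighted_rate(pitchers, stat_key, bf_key, weight_key):
    """Pool stat/BF rates, weighting each pitcher by weight_key
    (his after-Dickey batters faced)."""
    total_weight = sum(p[weight_key] for p in pitchers)
    weighted = sum(p[stat_key] / p[bf_key] * p[weight_key] for p in pitchers)
    return weighted / total_weight

pitchers = [
    {"aD_bf": 120, "aD_k": 30, "naD_bf": 300, "naD_k": 66},
    {"aD_bf": 80,  "aD_k": 14, "naD_bf": 250, "naD_k": 50},
]

# After Dickey: a simple pooled rate (the weights are already implicit).
aD_rate = sum(p["aD_k"] for p in pitchers) / sum(p["aD_bf"] for p in pitchers)

# Not after Dickey: each pitcher re-weighted by his after-Dickey BF share.
naD_rate = weighted_rate(pitchers, "naD_k", "naD_bf", "aD_bf")

print(f"{(aD_rate - naD_rate) / naD_rate:+.1%}")   # +3.8% in this toy case
```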

The most important stat to look at is FIP, which gives a more accurate value of the effect. Also make note of the BABIP and ERA; you can decide for yourself whether the BABIP is just luck or actually better/worse contact. Normally I would regress the results based on BABIP and HR/FB, but FIP does not include BABIP, and I do not have the fly-ball numbers.

The size of the sample is also included; aD means after Dickey, and naD means not after Dickey. Here are the results for starters following Dickey against the same team.

Dickey Starters

It can be concluded that starters after Dickey see an improvement across the board. Like I said, it is probably better to use FIP rather than ERA. Starters see an approximate 18.9% decrease in their FIP when they follow Dickey over the past four years. So, assuming 130 IP are pitched after Dickey by a league-average set of pitchers (~4.00 FIP), the effect would decrease their FIP to around 3.25. (130 IP was selected assuming ⅔ of starter innings (200) come against the same team.) Over 130 IP this is a 10.8-run difference, or around 1.1 WAR! That is amazingly significant, and it appears to come mainly from a reduction in HR%. If we regress the HR% down to -10% (which seems more than fair), the FIP reduction comes down to around 7%. A 7% reduction would take a 4.00 FIP down to 3.72, saving 4.0 runs, or 0.4 WAR.
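If you want to play with those assumptions yourself, the conversion from a FIP reduction to runs and WAR is easy to sketch; the ~9.5 runs per win here is my approximation, not a figure from the article:

```python
# Convert a percentage FIP reduction over some innings into runs saved
# and approximate WAR, assuming roughly 9.5 runs per win.
def fip_effect(base_fip, pct_reduction, innings, runs_per_win=9.5):
    new_fip = base_fip * (1 - pct_reduction)
    runs_saved = (base_fip - new_fip) * innings / 9
    return round(new_fip, 2), round(runs_saved, 1), round(runs_saved / runs_per_win, 1)

print(fip_effect(4.00, 0.189, 130))  # (3.24, 10.9, 1.1) - un-regressed case
print(fip_effect(4.00, 0.07, 130))   # (3.72, 4.0, 0.4)  - regressed case
```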

Here are the numbers for relievers following Dickey in the same game.

Dickey Bullpen

Relievers see a more consistent improvement across the FIP components (K, BB, HR): 11.4%, 8.1%, and 4.9%. FIP was reduced by 10.3%. Assuming 65 IP after Dickey (in between the 2012 and 2013 totals) from an average bullpen (or a slightly above-average one, since Dickey will likely have setup men and closers after him) with a 3.75 FIP, FIP would be reduced to 3.36, saving 3 runs, or 0.3 WAR.

Combining the un-regressed results, Dickey would contribute around 1.4 WAR over a full season just by having pitchers pitch after him. If you assume the effect is just a 10% reduction in FIP for both groups, this number comes down to around 0.9 WAR, which is not crazy to think at all based off the results. I can say with great confidence that if Dickey pitches over 200 innings again next year, he will contribute above 1.0 WAR just from baffling hitters for the next guys. If we take the un-regressed 1.4 WAR and add it to his 2013 WAR (2.0), we get 3.4 WAR; if we add in his defence (7 DRS), we get 4.1 WAR. Even though we all were disappointed with Dickey’s season, with the effect he provides and his defence, he is still all-star calibre.

Just for fun, let’s apply this to his 2012. He had 4.5 WAR in 2012; add on the 1.4 and his 6 DRS, and we get 6.5 WAR. Wow! Using his RA9 WAR (6.2) instead (commonly used for knucklers instead of fWAR), we get 7.6 WAR! That’s Miguel Cabrera value! We can’t include his DRS when using RA9 WAR, though, as it should already be incorporated.

This effect may extend even further: relievers may (and likely do) get a boost the following day, just as the next game’s starters do. Assuming it is the same boost, that’s around another 2.5 runs, or 0.25 WAR. Maybe the second day after Dickey also sees a boost? (That’s a much smaller sample, since Dickey would have to pitch the first game of a series.) If we assume the effect is cut in half the next day, that’d still be another 2 runs (over 90 IP of starters and relievers). So under these assumptions, Dickey could effectively have a 1.8 WAR after-effect over a full season! This WAR is not easy to place, however, and cannot just be added onto the team’s WAR; it is hidden among all the other pitchers’ WARs (just like catcher framing).

You may be disappointed with Dickey’s 2013, but he is still well worth his money. He is projected for 2.8 WAR next year by Steamer; adding on the 1.4 WAR Dickey Effect and his defence, he could be projected to have a true underlying value of almost 5 WAR. That is well worth the $12.5M he will earn in 2014.

For more of my articles, head over to Breaking Blue where we give a sabermetric view on the Blue Jays, and MLB. Follow on twitter @BreakingBlueMLB and follow me directly @CCBreakingBlue.


The Effect of Devastating Blown Saves

It’s a pretty well documented sabermetric notion that pitching your closer when you have a three-run lead in the ninth is probably wasting him. You’re likely going to win the game anyways, since the vast majority of pretty much everyone allowed to throw baseballs in the major leagues is going to be able to keep the other team from scoring three runs in an inning.

But we still see it all the time. Teams keep holding on to their closer, waiting until they have a lead in the ninth to trot him out there. One of the reasons for this is that blowing a lead in the ninth is devastating – it’ll hurt team morale more to blow a lead in the ninth than to slip behind in the seventh. And then this decrease in morale will cause the players to play more poorly in the future, which will result in more losses.

Or will it?

We’re going to look at how teams play following games that they devastatingly lose, to see if there’s any noticeable drop in performance. A “devastating blown save” can be defined as any game in which a team blows a lead in the ninth and then goes on to lose. Our methodology will look at team records in both the following game and the following three games to see if there’s any worsening of play. If the traditional thought is right (hey, it’s a possibility!), it will show up in the numbers. Let’s take a look.
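Here’s a rough sketch of that bookkeeping; the game-log structure is a hypothetical stand-in for the Retrosheet data, not the actual pipeline used:

```python
# Flag games where a team led entering the ninth and lost, then check
# the team's record in the following game(s). Fields are hypothetical.
def devastating_blown_saves(games):
    """games: chronological list (one team) of dicts with keys
    'led_entering_9th' (bool) and 'won' (bool)."""
    flagged = [i for i, g in enumerate(games)
               if g["led_entering_9th"] and not g["won"]]
    next_game = [games[i + 1]["won"] for i in flagged if i + 1 < len(games)]
    next_three = [g["won"] for i in flagged for g in games[i + 1:i + 4]]
    return (sum(next_game) / len(next_game) if next_game else None,
            sum(next_three) / len(next_three) if next_three else None)

# Toy season: one devastating loss, then a win, a loss, and a win.
season = [
    {"led_entering_9th": True,  "won": False},
    {"led_entering_9th": False, "won": True},
    {"led_entering_9th": False, "won": False},
    {"led_entering_9th": True,  "won": True},
]
print(devastating_blown_saves(season))  # (1.0, 0.6666666666666666)
```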

All Games (2000-2012)

9+ Inning Games: 31,405
Devastating BS’s: 1,333
Devastating BS%: 4.24%
Following Game W%: .497
Three Game W%: .484

In the following game, the team win percentage was very, very close to 50%. Over a sample size of 1,333, that’s completely insignificant. But what about the following three games, where the win percentage drops down to roughly 48.4%? Well, that’s a pretty small deviation from the 50% baseline, and it’s of questionable statistical significance. And wouldn’t it make sense that if the devastating blown save effect existed at all, it would occur in the directly following game, and not wait until later to manifest itself? It seems safe to say that the “morale drop” of devastatingly losing is likely nonexistent – or at most incredibly small. We’re dealing with grown men, after all. They can take it.

Another thing you might want to consider when looking at these numbers is that teams with lots of blown saves are probably more likely to be subpar. Not so fast. The win% of these teams, weighted by their number of blown ninth innings over the years, is .505. This is probably because better teams are more likely to be ahead in the first place, so they are going to be on the bubble to blow saves more often even if they blow them a smaller percentage of the time. Just for the fun of seeing how devastation-prone your team has been over the past 13 years, however, here’s a table of individual team results.

Devastating Blown Saves By Team (2000-2012)

Team Devastating Blown Saves Next Game W%
Milwaukee 63 .460
Chicago Cubs 60 .400
Kansas City 57 .315
Toronto 54 .592
Chicago White Sox 52 .615
Houston 51 .372
NY Mets 50 .560
St. Louis 48 .625
Texas 46 .543
Cleveland 46 .586
Florida 45 .511
Baltimore 45 .377
Oakland 44 .545
Seattle 44 .500
Boston 41 .585
Cincinnati 41 .585
Los Angeles 40 .425
Detroit 39 .384
Atlanta 39 .743
San Diego 35 .400
Anaheim 34 .529
New York Yankees 33 .666
Minnesota 33 .515
Pittsburgh 32 .468
Montreal 25 .200
Washington 18 .555
Miami (post-change) 8 .375

Congratulations Pittsburgh, you’ve been the least devastated full-time team over the past 13 years! Now if there’s a more fun argument against the effects of devastating losses than that previous sentence, I want to hear it. Meanwhile the Braves have lived up to their nickname, winning an outstanding 74.3% of games following devastating losses (it looks like we’ve finally found our algorithm for calculating grit, ladies and gentlemen), while the hapless Expos rebounded in just 20% of their games. Milwaukee leads the league in single-game heartbreak, etc. etc. Just read the table. These numbers are fun. Mostly meaningless, but fun.

Back to the point: team records following devastating losses tend to hover very, very close to .500. Managers shouldn’t worry about how their teams lose games—they should worry about if their teams lose games. Because, in the end, that’s all that matters.


Raw data courtesy of Retrosheet.


Weighting Past Results: Hitters

We all know by now that we should look at more than one year of player data when we evaluate players. Looking at the past three years is the most common way to do this, and it makes sense: three years is a reasonable time frame for increasing your sample size while not reaching back so far that you’re evaluating an essentially different player.

 The advice for looking at previous years of player data, however, usually comes with a caveat. “Weigh them”, they’ll say. And then you’ll hear some semi-arbitrary numbers such as “20%, 30%, 50%”, or something in that range. Well, buckle up, because we’re about to get a little less arbitrary.

 Some limitations: The point of this study isn’t to replace projection systems—we’re not trying to project declines/improvements here. We’re simply trying to understand how past data tends to translate into future data.

The methodology is pretty simple. We’re going to take three years of player data (I’m going to use wRC+, since it’s league-adjusted etc., and I’m only trying to measure offensive production), and then weight the years so that we can get an expected 4th-year wRC+. We’re then going to compare our expected wRC+ against the actual wRC+*. The closer the expected is to the actual, the better the weights.

*Note: I am using four-year spans of player data from 2008-2013, and limiting to players that had at least 400 PA in four consecutive years. This should help throw out outliers and give more consistent results. Our initial sample size is 244, which is good enough to give meaningful results.
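Here’s a sketch of the comparison itself; the two player rows are invented to show the mechanics, not real data:

```python
# Apply a weight triple to three years of wRC+ and measure the average
# absolute miss against the actual fourth year. The rows below are
# made-up (year1, year2, year3, year4) wRC+ values for illustration.
def average_inaccuracy(players, weights):
    w1, w2, w3 = weights
    misses = [abs(w1 * y1 + w2 * y2 + w3 * y3 - y4)
              for y1, y2, y3, y4 in players]
    return sum(misses) / len(misses)

players = [(95, 110, 120, 105), (130, 118, 122, 140)]

print(round(average_inaccuracy(players, (1/3, 1/3, 1/3)), 2))  # 10.0, unweighted
print(round(average_inaccuracy(players, (0.2, 0.3, 0.5)), 2))  # 12.3, "20/30/50"
```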

 I’ll start with the “dumb” case. Let’s just weigh all of the years equally, so that each year counts for 33.3% of our expected outcome.

Expected vs. Actual wRC+, unweighted

Weight1 Weight2 Weight3 Average Inaccuracy
33.3% 33.3% 33.3% 16.55

Okay, so on average we’re missing the actual wRC+ by roughly 16.5 points. That means we’re averaging about 16.5% inaccuracy when extrapolating the past into the future with no weights. Now let’s try being a little smarter about it and try some different weights out.

Expected vs. Actual wRC+, various weights

Weight1 Weight2 Weight3 Average Inaccuracy
20% 30% 50% 16.73
25% 30% 45% 16.64
30% 30% 40% 16.58
15% 40% 45% 16.62
0% 50% 50% 16.94
0% 0% 100% 20.15

Huh! It seems that no matter what we do, “intelligently weighting” each year never actually increases our accuracy. If you’re just generally trying to extrapolate several past years of wRC+ data to predict a fourth year of wRC+, your best bet is to take an unweighted average of the past wRC+ data. Now, the differences are small (for example, our weights of [.3, .3, .4] were only .03 different in accuracy from the unweighted total, which is statistically insignificant), but the point remains: weighting data from past years simply does not increase your accuracy. Pretty counter-intuitive.

Let’s dive a little deeper now – is there any situation in which weighting a player’s past does help? We’ll test this by limiting the sample by age. For example: are players younger than 30 better served by weighting their most recent seasons more heavily? This would make sense, since younger players are the most likely to experience a true-talent change. (Sample size: 106)

Expected vs. Actual wRC+, players younger than 30

Weight1 Weight2 Weight3 Average Inaccuracy
33.3% 33.3% 33.3% 16.17
20% 30% 50% 16.37
25% 30% 45% 16.29
30% 30% 40% 16.26
15% 40% 45% 16.20
0% 50% 50% 16.50
0% 0% 100% 20.16

Ok, so that didn’t work either. Even with young players, using unweighted totals is the best way to go. What about old players? Surely with aging players the most recent years would best capture a player’s decline. Let’s find out. (Sample size: 63)

Expected vs. Actual wRC+, players older than 32

Weight1 Weight2 Weight3 Average Inaccuracy
33.3% 33.3% 33.3% 16.52
20% 30% 50% 16.18
25% 30% 45% 16.27
30% 30% 40% 16.37
15% 40% 45% 16.00
0% 50% 50% 15.77
0% 55% 45% 15.84
0% 45% 55% 15.77
0% 0% 100% 18.46

Hey, we found something! With aging players, you should weight a player’s last two seasons equally, and you should not even worry about three seasons ago! Again, notice that the difference is small (you’ll be about 0.8% more accurate doing this than using unweighted totals). And as with any stat, you should always think about why you’re coming to the conclusion that you’re coming to. You might want to weight some players more aggressively than others, especially if they’re older.

In the end, it just really doesn’t matter that much. You should, however, generally use unweighted averages, since differences in wRC+ are pretty much always the result of random fluctuation and very rarely the result of an actual talent change. That’s what the data shows. So next time you hear someone say “weigh their past three years 3/4/5” (or similar), you can snicker a little. Because you know better.


Two Different Scenarios of a Mike Trout Extension

There has been plenty of conjecture on the timing and amount of Mike Trout’s next contract.  People gravitate towards round numbers and that’s why you often hear talk about ten years and $300 million.  I heard one pundit refer to 10/300 after his first season, and have heard several refer to these figures during this off season.  But is 10/300 even realistic?

The first step of this analysis is to look at the early years of a contract extension. For players who haven’t even hit their arbitration years, we’ve seen discounting of the pre-arbitration and arbitration years on the way to seven- or eight-year contracts. So while the disbursement of money in a player’s early years might not be a one-for-one match with what it would be from the arbitration process, it’s generally close, if not a little smaller for some players. The theory seems to be that the player trades the potentially bigger payoff of arbitration awards for a secure, guaranteed and somewhat smaller annual contract value on a multi-year deal.

Mike Trout will break records, but not only on the playing field. If he goes to arbitration, we’ll see amounts not seen before for 1st-, 2nd- and 3rd-year arbitration-eligible players. We can quibble about what those amounts will be, but I’m guessing on the low end they might be $10M/$15M/$20M, and on the high end $15M/$20M/$25M. Mike Trout has achieved so much in so little time that he might have quite a bit of leverage to earn a full payout of potential arbitration amounts in the early years of a multi-year contract extension.

So the value of the early years of Mike’s next contract might look like this:

Year signed 1 2 3 4
2014 0.5 15 20 25
2015 15 20 25

Note: the table shows possible values of the early years of his contract.  Actual payments will probably be much different.  If he signs in 2014, then he will likely get much more than $500,000 in year 1.  Or there might be a bonus that gets spread across these early seasons.  I’m stipulating values here because I believe they’re easier to predict.

The rest gets easier, in one sense. What is Mike Trout worth during his free-agent years, from age 26 to approximately 32? Is he worth $30 million, $35 million, or even $40 million per year? Remember, the Angels are buying out his peak seasons. This is the crème de la crème. It’s similar to A-Rod from age 26-32, when he earned $25 million per year in 2001 dollars and was worth every penny.

Angels management might be a little worried about not signing Mike this year, because those free-agent years could get really expensive if he puts up even more stupendous numbers next season. But my question is, should they be worried? That’s why I look at two different scenarios: one, sign him this offseason; two, pay him the minimum again this year and give him the big contract next offseason.

Year signed 1 2 3 4 5 6 7 8 9 10 Total
2014 0.5 15 20 25 35 35 35 35 35 35 270.5
2015 15 20 25 40 40 40 40 40 40 40 340

What you notice about scenario one, right off, is that $35 million per year seems like a lot of money. But when you total it up over the seemingly magic number of big baseball contracts, ten years, it only comes to about $270 million. For Trout to be paid 10/300, the Angels would have to value his free-agent years at $40 million per year. Dave Cameron’s crowdsourcing project on the cost of signing Trout for a single season came out to around $40 million. But to guarantee $40 million for six consecutive seasons that are still four years away seems like one helluva lot of risk for the Angels to assume at this point.

Especially because the Angels don’t necessarily need to be in a rush to assume that much risk. So I’m making a prediction here: if Mike Trout gets a ten-year contract extension this year, it will be for less than $300 million. I think of $270 million as a sort of ceiling for him this year; $220 to $250 million might be much more realistic.

That leads us to scenario 2: sign him in 2015. And let’s assume Trout puts up another monstrous season, one where the Angels will supposedly rue not having secured the big fish to a long-term contract the year before. What are his free-agent seasons valued at then? $40 million is still probably absurd, but let’s follow this along and see where it goes. The contract now is 10/$340. But when you look at the average cost of Mike Trout across the years he remains an Angel, you get $27 million across ten seasons in the first scenario and $30.9 million across 11 seasons in the second. So you’re paying a premium of $3.9 million per year for waiting one extra season before signing him. But don’t forget: in return for waiting that extra year, you also tack on another year of Mike Trout goodness at the end of his contract.
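Those averages are easy to sanity-check; in the sketch below I assume a ~$0.5M league-minimum salary for 2014 in the wait-a-year scenario, which is how the 11-season average works out:

```python
# Average annual cost of Trout under each scenario, in $M, using the
# table's year-by-year figures. The 0.5 in the wait-a-year scenario is
# an assumed league-minimum 2014 salary before the extension kicks in.
sign_now = [0.5, 15, 20, 25] + [35] * 6        # 2014-23: $270.5M total
wait_a_year = [0.5, 15, 20, 25] + [40] * 7     # 2014-24: $340.5M total

aav_now = sum(sign_now) / len(sign_now)          # over 10 Angels seasons
aav_wait = sum(wait_a_year) / len(wait_a_year)   # over 11 Angels seasons

print(round(aav_now, 2), round(aav_wait, 2), round(aav_wait - aav_now, 2))
# 27.05 30.95 3.9
```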

When you consider the extra year, the real difference between the two scenarios is $30 to $35 million. That’s not pocket change. But consider this: the Angels have paid the Yankees $30+ million to take Vernon Wells off their hands for two years.

The other thing to consider here is whether there is some natural market ceiling on the annual salary for any player. If so, Mike Trout might approach it. Dave Cameron mentioned this possibility in the crowdsourcing piece. If $40 million is just too high a number for any player to be valued at annually, then waiting until next offseason could be the much better scenario, with his free-agent seasons topping out at $36 or $37 million.

If the Angels can get Mike Trout at, say, 10/240 this season, they should probably jump on it. But if he and his agent aren’t budging off 10/270 or higher, it’s probably best to wait one more season.


Power and Patience (Part IV of a Study)

We saw in Part Three that the R^2 between OBP and ISO for the annual average of each from 1901-2013 is .373. To find out the correlation between OBP and ISO at the individual level, I set the leaders page to multiple seasons 1901-2013, split the seasons, and set the minimum PA to 400, then exported the 16,001 results to Open Office Calc.

(Yes, sixteen thousand and one. You can find anything with FanGraphs! Well, anything that has to do with baseball. Meanwhile, Open Office operating on Windows 7 decides it’s tired the moment you ask it to sum seven cells. At least it gets there in the end.)

The result was .201, so we’re looking at even less of a correlation in a much larger sample compared to the league-wide view. Are there periods where the correlation is higher?

Recall from Part Two that from 1994-2013 the R^2 for the league numbers was .583. Using individual player lines (400+ PA) from those seasons increases our sample size from 20 to 4107 (again splitting seasons). This gives us an R^2 of .232. That’s a little higher than .201, but not very much so.
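For anyone who’d rather skip the spreadsheet, here is a minimal Python sketch of the same calculation; the file name and the “OBP”, “ISO” and “Season” column labels are my assumptions about the export, not a documented format:

import pandas as pd

# One row per player-season: 400+ PA, 1901-2013, seasons split.
df = pd.read_csv("fangraphs_obp_iso_1901_2013.csv")

# R^2 is the squared Pearson correlation between the two columns.
r_all = df["OBP"].corr(df["ISO"])
print(f"1901-2013 individual R^2: {r_all ** 2:.3f}")  # article: .201

# Restricting to 1994-2013 player-seasons gives the .232 figure.
recent = df[df["Season"] >= 1994]
r_recent = recent["OBP"].corr(recent["ISO"])
print(f"1994-2013 individual R^2: {r_recent ** 2:.3f}")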

All in all, it’s not the most surprising thing. On-base percentage and isolated power have almost nothing in common mathematically; OBP counts times on base per plate appearance, while ISO counts extra bases per at-bat. Given that, any correlation between them at all (and there is some) suggests either that being an on-base threat helps players hit for power, or vice versa. Not that one is necessary for the other, but there’s something to it. And throughout history, as we saw in Part One, a majority of players have been either good at both aspects of hitting, or at neither.

In fact, it’s the exceptions to that rule that triggered this whole series: the higher-OBP, lower-ISO players. Again from Part One, there were 12 19th-century players, 21 pre-1961 20th-century players, and 3 post-1961 players with a career OBP over .400 and an ISO below .200.

Much of this can probably be attributed to the consistency OBP has shown historically relative to ISO, which we observed a couple weeks ago. Continuing with the somewhat arbitrary 1961 expansion-era cutoff: from 1961 to the present, 168 players with 3,000 PA have an ISO over .200 and 18 have an OBP over .400; from 1901-60, it was 43 with an ISO over .200 and 31 with an OBP over .400. So the .200+ ISOs split about 80-20 in favor of the modern era (168 of 211), while the .400+ OBPs split about 60-40 the other way (31 of 49). The latter is the much smaller gap, as we’d expect. (Some players whose careers straddled 1961 are double-counted, but you get the basic idea.)

But let’s see if we can trace the dynamics that brought us to this point. What follows is basically part of Part One in a Part Three format (part). In other words, we’re going to look at selected seasons and, in those seasons, compare the number of players above and below the average OBP and ISO. Unfortunately, it’s hard to park-adjust those numbers, so a player just above the average ISO at Coors and a player just below it at Safeco are probably in each other’s proper place. But that’s a minor thing.

After the league-wide non-pitcher OBP and ISO are listed, you’re going to see what might look like the results of monkeys trying to write Hamlet. But “++” refers to the number of players with above-average OBP and ISO; “+-” means above-average OBP, below-average ISO; “-+” means below-average OBP and above-average ISO; and “- -” means, obviously, below-average OBP and ISO. The years were picked for various reasons, including an attempt at spreading them out chronologically. Notes are sparse as the percentages are the main thing to notice.
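Bucketing players this way is mechanical. Here is a short sketch of how each season’s line below could be generated, assuming a DataFrame of that year’s qualified hitters with “OBP” and “ISO” columns (the league-wide non-pitcher averages are passed in separately, since they aren’t simply the mean of the qualifiers):

import pandas as pd

def quadrant_split(qualifiers: pd.DataFrame, lg_obp: float, lg_iso: float) -> pd.Series:
    """Percentage of qualified hitters in each OBP/ISO quadrant,
    measured against the league-wide non-pitcher averages."""
    labels = qualifiers.apply(
        lambda p: ("+" if p["OBP"] > lg_obp else "-")
                  + ("+" if p["ISO"] > lg_iso else "-"),
        axis=1,
    )
    return labels.value_counts(normalize=True).mul(100).round()

# e.g. quadrant_split(hitters_1901, lg_obp=.330, lg_iso=.091)
# should yield roughly: ++ 35, +- 25, -+ 12, -- 28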

1901: .330 OBP, .091 ISO. Qualified for batting title: 121. 35% ++, 25% +-, 12% -+, 28% – –

1908: .295 OBP, .069 ISO. Qualified for batting title: 127. 41% ++, 23% +-, 8% -+, 28% – –

The sum of OBP and ISO was its lowest ever in 1908.

1921: .346 OBP, .117 ISO. Qualified for batting title: 119. 42% ++, 24% +-, 8% -+, 26% – –

Baseball rises from the dead ball era. Still relatively few players are hitting for power while not getting on base as much.

1930: .356 OBP, .146 ISO. Qualified for batting title: 122. 45% ++, 23% +-, 3% -+, 29% – –

The best pre-WWII season for OBP and ISO. Almost nobody was above average at hitting for power while below average at reaching base. Two-thirds of qualifiers had an above-average OBP vs. fewer than half with an above-average ISO.

1943: .327 OBP, .096 ISO. Qualified for batting title: 106. 41% ++, 24% +-, 10% -+, 25% – –

World War II, during which OBPs stayed near their historical norms but ISOs tanked. That doesn’t show up directly in these numbers, because the players in this segment are categorized against each year’s own average.

1953: .342 OBP, .140 ISO. Qualified for batting title: 88. 44% ++, 22% +-, 14% -+, 20% – –

1953 was the first year in which sISO exceeded OBP, and it was easily the lowest season so far in terms of players below average in both OBP and ISO. (Note: so few players qualified on account of the Korean War.)

1969: .330 OBP, .127 ISO. Qualified for batting title: 121. 45% ++, 17% +-, 14% -+, 23% – –

1983: .330 OBP, .131 ISO. Qualified for batting title: 133. 43% ++, 16% +-, 17% -+, 25% – –

1969 and 1983 were picked because of their historically average league-wide numbers for both OBP and ISO. The percentages for each of the four categories are about equal in both seasons.

2000: .351 OBP, .171 ISO. Qualified for batting title: 165. 39% ++, 16% +-, 15% -+, 29% – –

The sum of OBP and ISO was its highest ever in 2000.

2011: .325 OBP, .147 ISO. Qualified for batting title: 145. 50% ++, 17% +-, 12% -+, 21% – –

2012: .324 OBP, .154 ISO. Qualified for batting title: 144. 44% ++, 24% +-, 14% -+, 18% – –

2013: .323 OBP, .146 ISO. Qualified for batting title: 140. 45% ++, 24% +-, 17% -+, 14% – –

Originally, this part ended with just 2013, but that showed an abnormally low “- -” percentage, so now 2011-13 are all listed. From 2011 to 2012, the split groups (above-average at 1 of the 2 statistics, “+-” or “-+”) increased sharply while the number of generally good and generally bad hitters decreased. From 2012 to 2013, there was almost no change in qualifiers based on OBP (the “++” and “+-” groups). Among those with below-average OBPs, the number with above-average power increased as the number with below-average power decreased. Most significantly, 2011-13 has produced an overall drop in players who are below average at both.

I don’t want to draw too many conclusions from this set of 12 out of 113 seasons. But a few more things come up besides the recent decline in players below average in both OBP and ISO.

Regarding “++” Players

Unsurprisingly, limiting the samples to qualifiers consistently shows a plurality of players to be good at both the OBP and ISO things.

Regarding “- -” Players 

Essentially, until 2012, this group was always at least 1/5 of qualifiers, and usually 1/4 or more. The last couple of years have seen a decline here. Is it a trend to keep an eye on (along with the league-wide OBP slump from Part Three)?

Regarding “++” and “- -” Players

Meanwhile, the majority of players are above average at both getting on base and hitting for power, or below average at both. The sum of those two percentages is right around 60% at minimum each year. Of the twelve seasons above, the lowest sum actually comes from 2013, mostly on account of only 14% of players being below average at both.

This also means that it’s a minority of players who “specialize” in one or the other.

Regarding “+-” vs. “-+” Players

The “-+” players, those with below-average OBPs and above-average ISOs, show the best-defined trend of any of the four categories. In general, before 1953, when OBP was always “easier” to be good at than ISO (via the OBP vs. sISO comparison seen in Parts Two and Three), you saw fewer ISO-only players than you see today. Either they were less valuable, because power was less a part of the game and of the leagues’ offenses, or they were less common, because it was harder to exceed the league-average ISO.

The number of OBP-only players is more complicated, because they too were more common in the pre-1953 days. But they have jumped in the last two years, from about 1/6 of qualifiers from ’69-’11 to 1/4 of qualifiers in 2012 and 2013. Overall, the recent decline in “- -” players has been absorbed by the “+-” group. This can also be interpreted as players becoming better at reaching base while remaining stagnant at hitting for power (important distinction: that’s compared to the annual averages, not the historical average; as we saw last week, OBP is in a historical decline at the league level).

Conclusion

The key takeaway from all of this is that there will always be more players who are above average in both OBP and ISO, or below average in both, than players who split the difference. Even if the correlation between OBP and ISO at the individual level isn’t overly high, clearly more players are good at both or at neither.

This isn’t just because players with enough PA to qualify for the league leaders are better hitters in general: while the “++” group is always a plurality of qualifiers, it’s almost never a majority. You have to add the players who are below average at both to reach a majority in any given year.

In terms of OBP-only players and ISO-only players, the former have almost always outnumbered the latter. The explanation is simple enough: reaching base is often the key to being a good hitter, while hitting for power is optional. (That’s part of why OPS has lost favor; it actually weights slugging ahead of OBP.) Even when batting average was the metric of choice throughout baseball, the players who got the plate appearances were, in general, always good at getting on base, but not necessarily at hitting for power.

Next week this series concludes by looking at the careers of some selected individual players. The most interesting ones will be the either-or players, with a significantly better OBP or ISO. We won’t look much at players like Babe Ruth or Bill Bergen, but instead players like Matt Williams or Wade Boggs. Stay tuned.


How Much Work are AL Starters Doing, and What Difference Has It Made in Team Success?

Baseball fans have been treated to incredible starting pitching performances in recent years, with several ace staffs leading their teams to regular-season and postseason success. Initially, I set out to examine the number of innings pitched by AL starting rotations because I expected that there would be a big disparity from team to team. And more specifically, I thought that the percentage of innings pitched by a team’s starting rotation would correlate positively to either its W-L record, or more likely, its Pythagorean W-L record.

I gathered five years of data (2009 – 2013 seasons) and calculated the Starting Pitcher Innings Pitched Percentage (SP IP%). This number is simply the number of innings a team’s starters pitched divided by the total innings the team pitched. If a starter was used in relief, those innings didn’t count. I only looked at AL teams, because I assumed that NL starting pitchers could be pulled from games prematurely for tactical, pinch-hitting purposes, while AL starters were likely to stay in games as long as they weren’t giving up runs, fatigued, or injured.
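In code, the metric is a single division; a minimal sketch follows (the Twins example backs the team total out of the article’s own figures, so treat it as illustrative):

def sp_ip_pct(starter_ip: float, team_ip: float) -> float:
    """Share of a team's total innings thrown by its starting pitchers,
    excluding any innings starters threw in relief."""
    return 100.0 * starter_ip / team_ip

# 2013 Twins: 871 starter innings; ~1450 team innings implied by
# the 60.06% figure reported below.
print(round(sp_ip_pct(871, 1450.2), 2))  # 60.06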

Two things struck me about the results:

1. There was little correlation between a team’s SP IP% and its W-L record, or between its SP IP% and its Pythagorean W-L record.

2. The data showed little variance and was normally distributed.

I looked at 71 AL team seasons from 2009-2013 and found that, on average, AL teams used starting pitchers for 66.8% of innings, with a standard deviation of 2.83%. The data followed a rather normal distribution, with teams’ SP IP% breaking down as follows:

Standard Deviations   # of Teams   % of Total Teams
-2 or lower                2            2.82%
-2 to -1                  10           14.08%
-1 to 0                   22           30.99%
0 to 1                    26           36.62%
1 to 2                    10           14.08%
2 or higher                1            1.41%
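That breakdown is straightforward to reproduce by z-scoring each team season; a sketch, assuming sp_ip_pcts is the list of all 71 SP IP% values (and assuming the article’s 2.83% is the population standard deviation, which numpy uses by default):

import numpy as np

def sd_bands(values):
    """Count team seasons falling in each standard-deviation band."""
    v = np.asarray(values, dtype=float)
    z = (v - v.mean()) / v.std()
    edges = [-np.inf, -2, -1, 0, 1, 2, np.inf]
    return {
        f"{lo} to {hi}": int(((z >= lo) & (z < hi)).sum())
        for lo, hi in zip(edges, edges[1:])
    }

# e.g. sd_bands(sp_ip_pcts) -> {'-inf to -2': 2, '-2 to -1': 10, ...}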

Over two-thirds of the teams (48 of 71) fell within the range of 63.6 to 69.2 SP IP%, which is much less variance than I expected to find.  And only three team seasons fell outside two standard deviations of the mean: two outliers on the negative end and one on the positive. Those teams are:

Negative Outliers:

2013 Minnesota Twins: 60.06 SP IP%

2013 Chicago White Sox: 60.25 SP IP%

Positive Outlier:

2011 Tampa Bay Rays: 73.02 SP IP%

Taken at the extremes, these numbers show a huge gap in the number of innings teams got out of their starters. Minnesota got only 871 innings out of its starters in 2013, while the 2011 Tampa Bay Rays got 1,058 innings in a season with fewer overall innings pitched. Another way of conceptualizing it: Minnesota starters averaged just over 5 1/3 innings of each nine-inning game in 2013, while the 2011 Rays starters averaged nearly 6 2/3 innings. But when the sample is viewed as a whole, the number of innings is quite close, as seen on this graph of SP IP% for the last five years:

[Scatter plot: SP IP% for all AL team seasons, 2009-2013]

The correlation between SP IP% and team success (measured via W-L or Pythagorean W-L) was minimal; the Pearson coefficients were .1692 and .1625, respectively. Team victories depend on too many variables to isolate a connection between team success and SP IP%, and a runs scored/runs allowed formula for calculating W-L record was barely an improvement over the traditional W-L measurement. Teams like the Seattle Mariners exemplify the issue: their starters threw an above-average number of innings in most of the years in the study, but rarely finished with a winning record.
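For reference, the Pythagorean record used here is the standard runs-squared formulation, and the correlation is plain Pearson’s r; a sketch of both, with variable names of my own invention:

import numpy as np

def pythag_wins(runs_scored: float, runs_allowed: float, games: int = 162) -> float:
    """Pythagorean expectation: W% = RS^2 / (RS^2 + RA^2)."""
    w_pct = runs_scored ** 2 / (runs_scored ** 2 + runs_allowed ** 2)
    return w_pct * games

def pearson_r(x, y) -> float:
    """Pearson correlation between two equal-length sequences."""
    return float(np.corrcoef(x, y)[0, 1])

# e.g. pearson_r(sp_ip_pcts, actual_wins)      -> ~.17 per the article
#      pearson_r(sp_ip_pcts, pythag_win_list)  -> ~.16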

What I did find, to my surprise, was a relatively narrow range of SP IP% over the last five years, with teams distributed normally around the 66.8% average. In the future, it might be helpful to expand the sample, or to look at an earlier era to see how the SP IP% workload has changed over time. The relative consistency of SP IP% over five seasons and across teams could make this metric useful for future studies of pitching workloads, even if these particular correlations came up empty.


Revenue Sharing Deal Cubs Struck with Rooftop Owners Holding Up Wrigley Field Renovations

During the 2013 baseball season, the City of Chicago approved a $500 million plan to renovate Wrigley Field and build an adjacent office building and hotel.  Included in the renovation plan is the proposed construction of a large video board behind the left-field bleachers and signs advertising Budweiser behind the right-field bleachers.  The Cubs have delayed the start of the project, however, because the owners of the rooftop businesses across from the ballpark have threatened to sue, on the grounds that the proposed signage will obstruct the views of the field from their respective rooftops.

Rooftop Litigation History

Detroit Base-Ball Club v. Deppert, 61 Mich. 63, 27 N.W. 856 (Mich., 1886)

Disputes over neighbors viewing ballgames are nothing new.  In 1885, John Deppert, Jr. constructed a rooftop stand on his barn overlooking Recreation Park, home to the National League’s Detroit Wolverines, future Hall of Famer Sam Thompson and a rotation featuring the likes of men named Stump Wiedman, Pretzels Getzien and Lady Baldwin.  The Wolverines claimed that they had to pay $3,000 per month in rent and that the 50-cent admission fees helped to offset this cost.  They were thereby “annoyed” by Deppert charging people, between 25 and 100 per game, to watch the games from his property, and asked the court to forever ban Deppert from using his property in this manner.

Deppert countered that the ballgames had ruined the quiet enjoyment of his premises, that ballplayers often trespassed on his land in pursuit of the ball and that he often had to call the police to “quell fights and brawls of the roughs who assemble there to witness the games.”  He further claimed that his viewing stand had passed the city’s building inspection and that he had the legal right to charge admission and sell refreshments.

The trial court dismissed the Wolverines’ case and the ball club appealed.  The Supreme Court of Michigan agreed that the Wolverines had no right to control the use of the adjoining property; therefore, Deppert was within his rights to erect a stand on his barn roof and sell refreshments to fans who wanted to watch the game.  Furthermore, there was no evidence that Deppert’s rooftop customers would otherwise have paid to enter Recreation Park.

Similarly, the rooftops of the buildings across the street from Shibe Park were frequently filled with fans wanting a view of the Philadelphia Athletics’ games.  While never happy about the situation, Connie Mack was pushed too far in the early 1930s when the rooftop operators started actively poaching fans from the ticket-office lines.  Mack responded by building the “Spite Fence,” a solid wall that effectively blocked the view of the field from the buildings across 20th Street.

Lawsuits were filed but the “Spite Fence” remained in place throughout the remainder of the use of Shibe Park, later renamed Connie Mack Stadium.

The Current Dispute

Chicago National League Ball Club, Inc. v. Skybox on Waveland, LLC, 1:02-cv-09105 (N.D.IL.)

In this case, the Cubs sued the rooftop owners on December 16, 2002 seeking compensatory damages, disgorgement to the Cubs of the defendants’ profits and a permanent injunction prohibiting the rooftop owners from selling admissions to view live baseball games at Wrigley Field, among other remedies and under several causes of action.  According to the complaint, the Cubs alleged that the defendant rooftop operators “…have unlawfully misappropriated the Cubs’ property, infringed its copyrights and misleadingly associated themselves with the Cubs and Wrigley Field.  By doing so, Defendants have been able to operate multi-million dollar businesses in and atop buildings immediately outside Wrigley Field and unjustly enrich themselves to the tune of millions of dollars each year, while paying the Cubs absolutely nothing.”

In their statement of undisputed facts, the defendants countered that the rooftops had been used to view games since the park opened on April 23, 1914, as the home of the Chicago Federal League team, and that the Cubs conceded their present management had known the rooftop businesses were selling admissions since at least the late 1980s.

In May 1998, the City of Chicago enacted an ordinance authorizing the rooftops to operate as “special clubs,” allowing them to sell admissions to view Cubs games under city license.  The City wanted its piece of the action, and, interestingly, the Cubs made no formal objection to the ordinance.  Based on the licensure and the lack of any opposition from the Cubs, the rooftop owners made substantial improvements to enhance the experience and to meet the new City specifications.

By January 27, 2004, the Cubs had reached a written settlement with the owners of 10 of the defendant rooftop businesses, which assured that the Cubs “would not erect windscreens or other barriers to obstruct the views of the [settling rooftops]” for a period of 20 years.  The remaining rooftop owners later settled, and the case was dismissed on April 8, 2004, just days ahead of the Cubs’ home opener set for April 12th.

After the 2004 agreement legitimized their businesses, the rooftop owners made further improvements to the properties.  Long gone are the days when a rooftop experience meant an ice-filled trough of beer and hot dogs made on a single Weber.  The rooftop operations are now sophisticated businesses with luxurious accommodations, enhanced food and beverage service and even electronic ticketing.

As a result of the settlement of the Cubs’ 2002 lawsuit, the team now has legitimate concerns that a subsequent lawsuit by the rooftop owners to enforce the terms of the contract could ultimately result in an award of monetary damages to the rooftop owners; cause further delays in the commencement of the construction project due to a temporary restraining order; or serve as the basis of an injunction preventing the Cubs from erecting the revenue-producing advertising platforms for the remainder of the rooftop revenue-sharing agreement.

It is obvious that the rooftop owners need the Cubs more than the Cubs need them.  But the Cubs wanted their piece of the rooftop owners’ profits (estimated at around $2 million in annual payments to the team), and now they have to deal with the potential that their massive renovation project will be held up by the threat of litigation over the blocking of rooftop views.