What If: The St. Louis Cardinals Were Two Teams

Much has been made of the Cardinals’ amazing depth and seeming ability to pull All-Star-caliber players from their minor leagues at will.

In today’s FanGraphs After Dark chat with Paul Swydan I asked what place in the NL Central the Cardinals would finish in were they to be forced to field two separate (but equal) teams in 2014.

Swydan’s answer:

Probably third and fourth. They’re not THAT good.
Maybe even lower than that. It’s an interesting question.

Well, I too thought it was interesting and decided to try to find out.

I looked at the Oliver projections for the Cardinals and tried to divide the players into two equal teams. Then I did my best (well, my most efficient; it is 9 at night) to split playing time evenly between the two squads. Oliver projections assume 600 PAs for every position player, so I prorated each player's WAR projection to the number of PAs I estimated for him (I tried to stick to 600 PAs at each position; doing otherwise was too much work).

For pitchers I used Oliver’s projected number of starts for starters and innings pitched for relievers to make sure that both teams were equal. I didn’t do any prorating for pitchers. I wanted to, but that started to look like more work than I was willing to put in right now — and I was sort of worried that Paul would do his own post on this, so I wanted to beat him to the punch.

There weren’t quite enough players projected for the Cardinals so for the missing positions I just assumed a replacement-level player.
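For anyone who wants to replicate the split, the proration itself is simple arithmetic. Here is a minimal sketch; the player names and numbers are placeholders, not the actual Oliver figures:

```python
# Prorate a 600-PA WAR projection to an estimated share of playing time.
# Placeholder names and numbers only -- not the actual Oliver projections.
def prorated_war(war_per_600_pa, estimated_pa):
    return war_per_600_pa * (estimated_pa / 600.0)

team_a = {"Hitter A": (3.5, 600), "Hitter B": (2.0, 450)}
team_b = {"Hitter C": (3.0, 600), "Hitter D": (2.5, 450)}

for label, roster in (("Team A", team_a), ("Team B", team_b)):
    total = sum(prorated_war(war, pa) for war, pa in roster.values())
    print(f"{label}: {total:.1f} projected WAR")
```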

These were the teams and their projected WAR totals that I came up with.

[Tables: the two split-squad rosters with each player's projected WAR]

So each team came out at about 25.5 WAR.

How about the rest of the NL Central?

For this I just looked at the STEAMER projections since they already adjust playing time and I didn’t want to have to do it for each team. This is what STEAMER had for the other NL Central teams:

Pirates 34.5 WAR
Reds 30.5 WAR
Brewers 27.6 WAR
Cubs 26.9 WAR

So our two Cardinals squads look like they'd finish just behind the rest of the NL Central, but it's close enough that we can say the undivided Cardinals might be very nearly twice as good as the Cubs and Brewers.


Team On-Base Percentage and a Balanced Lineup

Teams that get on base often score more runs than those that don’t. We know this, and it comes as no surprise. In 2013, the Red Sox had the highest team OBP (.349) and also scored the most runs in MLB. The Tigers had the second-highest team OBP (.346), and they scored the second-most runs. Team OBPs can tell us a lot about the effectiveness of an offense (obviously not everything), but they can also be misleading if proper context isn’t applied.

The Cardinals scored 783 runs in 2013, good enough for third in MLB. The rival Reds scored 698 runs, 85 fewer than the Cardinals. There are many reasons for this gap in runs scored, but I would like to examine just one of them. The Cardinals had a team OBP of .332 while the Reds had a team OBP of .327. At first glance, it appears that the Cardinals and Reds got on base at a similar rate. But a major difference exists below the surface. Take a look at the table below of the top eight hitters by plate appearances for both teams (Chris Heisey gets the nod over Ryan Hanigan so as not to have two Reds catchers on the list).

Reds OBP / Cardinals OBP
Joey Votto .435 / Matt Carpenter .392
Shin-Soo Choo .423 / Matt Holliday .389
Jay Bruce .329 / Allen Craig .373
Todd Frazier .314 / Yadier Molina .359
Brandon Phillips .310 / Jon Jay .351
Devin Mesoraco .287 / David Freese .340
Zack Cozart .284 / Carlos Beltran .339
Chris Heisey .279 / Pete Kozma .275

The difference is quite evident. The average OBP in 2013 was .318. Seven of the top eight Cardinals hitters got on base at an above-average clip; besides the pitcher, there is one easy out in that lineup. The Cardinals maintained a ridiculous batting average with runners in scoring position, and that mattered all the more because they always had people on base.

On the other hand, the Reds had two on-base Goliaths. Joey Votto and Shin-Soo Choo camped out on the bases. They became one with the bases. The problem was that the Reds had only one more player with an above-average OBP, Jay Bruce at .329. The other five players struggled to get on base consistently. Three of them had OBPs under .300.

So while the Cardinals achieved a high team OBP through balance, the Reds had two hitters who significantly raised the team OBP. Take Votto and Choo away, and the other six Reds on this list have a combined OBP of .305. That is a staggeringly low number for six of the top hitters on a playoff team.
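As a side note, a group OBP like that .305 isn't the average of the individual OBPs; it's total times on base divided by total plate appearances. Here's a quick sketch using the OBPs from the table above and round, illustrative plate-appearance totals (with these placeholders the result lands right around .305):

```python
# Combined OBP = sum(times on base) / sum(PA), weighted by playing time.
# OBPs are from the table above; the PA totals are round placeholders.
hitters = [
    ("Jay Bruce",        0.329, 650),
    ("Todd Frazier",     0.314, 600),
    ("Brandon Phillips", 0.310, 650),
    ("Devin Mesoraco",   0.287, 350),
    ("Zack Cozart",      0.284, 600),
    ("Chris Heisey",     0.279, 250),
]

times_on_base = sum(obp * pa for _, obp, pa in hitters)
total_pa = sum(pa for _, _, pa in hitters)
print(f"Combined OBP: {times_on_base / total_pa:.3f}")
```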

What does this teach us? Well, a team OBP by itself tells us nothing about how balanced the lineup is. The Reds would be foolish to think they have a lineup that gets on base enough to be an elite offense. With the loss of Choo, the Reds offense may struggle to produce runs at a league-average clip, as Votto and Bruce could be stranded on base countless times.

A balanced lineup was a major factor in the Cardinals scoring the most runs in the National League. Their team may have had an excellent .332 OBP, but their top eight hitters by plate appearance had a .355 OBP. As a group they were excellent. The Red Sox were similar in that their top eight hitters by plate appearances all had above-average OBPs with Stephen Drew coming in eighth at .333. Think about that! The Red Sox eighth-best hitter at getting on base was 15 points above league average.

Even though the Reds finished sixth in team OBP in 2013, their on-base skills were lacking. While the Cardinals had only a five-point advantage in team OBP over their rival, they were much more adept at clogging the bases. Team OBPs are great; they just don't always tell the whole story.


A New Metric of High Unimportance: SCRAP

It’s something we hear all the time: “He’s a scrappy player” or “He’s always trying hard out there, I love his scrappiness.” Maybe chicks don’t dig the long ball anymore; maybe they’re into scrappiness. I’m not really in a position to accurately comment on what chicks dig though, so I don’t know.

Even from a guy’s perspective, scrappiness is great. It’s hard to hate guys that overcome their slim frames by just out-efforting everyone else and getting to the big leagues. It’s not easy to quantify scrappiness, though. Through the years it’s always been a quality that you know when you see, but there’s never been a number to back it up. Until now.

Scrap is scaled like Spd: 5 is average, anything above 5 is above average, and anything below 5 is below average. Here are the components that make it up (each component is put onto a Spd-like scale, assigned a weight, and then combined with the others to give a final number; a rough sketch of that combination follows the list).

  • Infield hit% — Higher is better.
  • ISO — Lower is better; less power means more scrappiness.
  • Spd — The ability to change a game with one's legs.
  • Ball-in-play%, i.e. (PA – BB – K)/PA — Go up there looking to fight.
  • Z-Swing% — Higher is better; measures willingness to defend the zone.
  • O-Swing% — Lower is better; these guys can't hit the low-and-away pitch to deep center.
  • Z-Contact% — Higher is better; these guys swing for contact.
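
To make the construction concrete, here is a rough sketch of how components like these might be combined. The league means, spreads, and weights below are illustrative guesses, not the values actually used for the rankings that follow:

```python
# Rough sketch of a Scrap-style composite: put each component on a
# Spd-like 0-10 scale (5 = league average), flip the ones where lower
# is scrappier, weight them, and sum. All constants are placeholders.
COMPONENTS = {
    # name: (league_mean, league_spread, higher_is_scrappier, weight)
    "infield_hit_pct":  (0.060, 0.030, True,  0.15),
    "iso":              (0.145, 0.060, False, 0.20),
    "spd":              (5.0,   2.0,   True,  0.20),
    "ball_in_play_pct": (0.700, 0.060, True,  0.15),
    "z_swing_pct":      (0.650, 0.060, True,  0.10),
    "o_swing_pct":      (0.310, 0.060, False, 0.10),
    "z_contact_pct":    (0.870, 0.050, True,  0.10),
}

def spd_like_scale(value, mean, spread, higher_is_scrappier):
    """Map a raw stat to a 0-10 scale centered on 5."""
    z = (value - mean) / spread
    if not higher_is_scrappier:
        z = -z
    return min(10.0, max(0.0, 5.0 + 2.0 * z))

def scrap(stats):
    return sum(
        weight * spd_like_scale(stats[name], mean, spread, hi)
        for name, (mean, spread, hi, weight) in COMPONENTS.items()
    )

# Invented, roughly Altuve-shaped inputs, purely for illustration.
example = {"infield_hit_pct": 0.10, "iso": 0.08, "spd": 6.5,
           "ball_in_play_pct": 0.80, "z_swing_pct": 0.70,
           "o_swing_pct": 0.40, "z_contact_pct": 0.92}
print(f"Scrap: {scrap(example):.2f}")
```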

Without further ado, here are the Scrap rankings of all qualified batters in 2013.

# Name Scrap
1 Alcides Escobar 6.31
2 Eric Young 6.27
3 Leonys Martin 6.25
4 Jacoby Ellsbury 6.24
5 Starling Marte 6.23
6 Jean Segura 6.19
7 Ichiro Suzuki 6.13
8 Alexei Ramirez 6.13
9 Elvis Andrus 6.08
10 Denard Span 6.08
11 Jose Altuve 6.08
12 Erick Aybar 5.93
13 Adeiny Hechavarria 5.9
14 Daniel Murphy 5.9
15 Brett Gardner 5.89
16 Carlos Gomez 5.89
17 Gregor Blanco 5.87
18 Michael Bourn 5.8
19 Alex Rios 5.76
20 Will Venable 5.72
21 Norichika Aoki 5.7
22 Jimmy Rollins 5.64
23 Shane Victorino 5.63
24 Michael Brantley 5.63
25 Howie Kendrick 5.63
26 Gerardo Parra 5.61
27 Nate McLouth 5.58
28 Nolan Arenado 5.54
29 Torii Hunter 5.53
30 Austin Jackson 5.53
31 Chris Denorfia 5.52
32 Jon Jay 5.52
33 Brandon Phillips 5.5
34 Alejandro De Aza 5.48
35 Dustin Pedroia 5.45
36 Darwin Barney 5.45
37 Ian Desmond 5.42
38 Starlin Castro 5.42
39 A.J. Pierzynski 5.4
40 Eric Hosmer 5.39
41 Asdrubal Cabrera 5.39
42 Josh Hamilton 5.39
43 Alex Gordon 5.39
44 Adam Jones 5.38
45 Coco Crisp 5.35
46 Andrew McCutchen 5.34
47 Marco Scutaro 5.34
48 Ian Kinsler 5.33
49 Andrelton Simmons 5.33
50 Desmond Jennings 5.32
51 Jonathan Lucroy 5.32
52 Chase Utley 5.3
53 Brandon Belt 5.3
54 Hunter Pence 5.26
55 Jason Kipnis 5.22
56 Ben Zobrist 5.21
57 Alfonso Soriano 5.2
58 Pablo Sandoval 5.19
59 Manny Machado 5.18
60 Brian Dozier 5.18
61 Matt Holliday 5.17
62 Brandon Crawford 5.17
63 Allen Craig 5.15
64 Matt Carpenter 5.14
65 Michael Young 5.13
66 Yunel Escobar 5.12
67 Yoenis Cespedes 5.11
68 Yadier Molina 5.11
69 Nick Markakis 5.11
70 Zack Cozart 5.1
71 Mike Trout 5.1
72 Nate Schierholtz 5.08
73 Todd Frazier 5.07
74 Michael Cuddyer 5.07
75 Domonic Brown 5.06
76 Chase Headley 5.03
77 Salvador Perez 5.03
78 Marlon Byrd 5.02
79 James Loney 5.0
80 Neil Walker 5.0
81 Kyle Seager 4.97
82 Andre Ethier 4.97
83 Freddie Freeman 4.96
84 Mike Moustakas 4.95
85 Robinson Cano 4.95
86 Jed Lowrie 4.95
87 David Freese 4.92
88 Shin-Soo Choo 4.91
89 Adam LaRoche 4.91
90 Chris Johnson 4.88
91 Martin Prado 4.87
92 Carlos Beltran 4.86
93 Ryan Zimmerman 4.85
94 Victor Martinez 4.83
95 Justin Morneau 4.81
96 Adrian Gonzalez 4.8
97 Anthony Rizzo 4.79
98 Alberto Callaspo 4.79
99 Trevor Plouffe 4.79
100 Ryan Doumit 4.77
101 Brandon Moss 4.74
102 Mark Trumbo 4.74
103 Matt Wieters 4.7
104 Josh Donaldson 4.69
105 Adrian Beltre 4.69
106 Justin Upton 4.68
107 Daniel Nava 4.67
108 Paul Konerko 4.65
109 Billy Butler 4.65
110 Matt Dominguez 4.64
111 Jayson Werth 4.62
112 Russell Martin 4.62
113 Jay Bruce 4.62
114 J.J. Hardy 4.6
115 Joey Votto 4.59
116 Buster Posey 4.59
117 Dan Uggla 4.57
118 Nick Swisher 4.55
119 Kendrys Morales 4.52
120 Carlos Santana 4.51
121 Pedro Alvarez 4.49
122 Mark Reynolds 4.48
123 Jedd Gyorko 4.48
124 Paul Goldschmidt 4.47
125 Prince Fielder 4.47
126 Edwin Encarnacion 4.45
127 David Ortiz 4.45
128 Adam Lind 4.4
129 Jose Bautista 4.38
130 Justin Smoak 4.37
131 Miguel Cabrera 4.37
132 Mitch Moreland 4.36
133 Joe Mauer 4.34
134 Evan Longoria 4.24
135 Chris Carter 4.23
136 Giancarlo Stanton 4.1
137 Mike Napoli 4.09
138 Troy Tulowitzki 4.07
139 Chris Davis 3.94
140 Adam Dunn 3.81

That’s quite a bit to look at. Here are a few of my takeaways:

  • The general perception of a player’s scrappiness is pretty close to what this metric spits out.
  • There are some surprises, such as Tulo being near the bottom. In his case it's driven by an extremely low Spd rating and a low Z-Swing%.
  • Little dudes that run hard tend to be scrappy (duh).
  • Big oafy power guys tend not to be scrappy (duh).
  • Upon removing the qualified batter restriction the ‘Scrap’ leader is Hernan Perez. Tony Campana is a close second. I think we can all agree that Campana is more or less the definition of scrappiness.

This isn't a stat that's going to forever change how we view baseball. But it does give us a way of quantifying, however imperfectly, a skill set that we haven't been able to measure before. Now we not only know that Jose Altuve is scrappy, we know just how scrappy he is. I'll let you decide how important that is.

If you have any suggestions for different ways to calculate Scrap, let me know in the comments. It's a metric that requires a good number of arbitrary choices since, well, what does it even mean to be scrappy? We've always had an idea, and now we have a number.


The idea for this metric was spurred on by Dan Szymborski on this episode of the CACast podcast, somewhere around the 75-minute mark.


Baseball’s Most Ridiculous Patented Equipment

Background – what does a patent get you?

Long ago, governments recognized that protecting inventors' efforts was essential to encouraging technological advancement, but they also realized that limiting the time during which an inventor holds the exclusive right to market an invention serves the greater good by preventing any one inventor from controlling a useful product forever. Patents were first granted in Europe in the late 1400s, and the United States enacted its first patent system in 1790. To date, thousands of baseball-related patents have been issued, covering everything from game equipment to methods of compressing game broadcasts.

In the United States, a patent is an intellectual property right granted by the government to an inventor that “excludes others from making, using, offering for sale, or selling the invention throughout the United States or importing the invention into the United States” for a limited time in exchange for public disclosure of the invention when the patent is granted.  Currently, a utility patent is enforceable for 20 years from the date on which the application was submitted, assuming that periodic maintenance fees are paid as scheduled.

What can be patented?

A utility patent will be granted for a machine, process, article of manufacture, or composition of matter (or any improvement to one of those) as long as it is "new, nonobvious and useful."  There are certain things that cannot be patented, however, such as laws of nature, abstract ideas and inventions that are morally offensive or "not useful."

The "not useful" component is somewhat interesting in that the patent examiner is charged only with deciding whether an invention will function as expected and otherwise has a "useful purpose."  As you will see below, "useful" does not always mean that the invention will be marketable.

So how did James Bennett hope to change baseball?

While it is not clear whether inventor James E. Bennett of Momence, Illinois is the same James Bennett who played for the Sharon Ironmongers in the 1895 Iron and Oil League, it seems clear that he gave little forethought to whether his inventions would be practical under game conditions.  Either that, or he just really hated catching a ball with the glove technology available at the turn of the 20th century.

By the early 1900s, baseball gloves had undergone constant improvement.  Starting with George Rawlings in 1885 (Pat. No. 325,968), protective gloves were becoming more accepted as a way to protect fielders' hands.  In 1891, Harry Decker added a thick pad to the front of the glove (Pat. No. 450,355) and Bob Reach added an inflatable chamber (Pat. No. 450,717).  By 1895, Elroy Rogers had designed the classic "pillow-style" catcher's mitt (Pat. No. 528,343) that would be used with little change until Randy Hundley pioneered the one-handed catching technique in the 1960s using a hinged catcher's mitt.

Regardless of the existence of the baseball glove technology in use at the time, James Bennett tried to think outside the box by eliminating the catcher’s mitt altogether and, instead, attaching that box to the catcher’s chest.  Here is 1904’s “Base Ball Catcher” in all of its ill-conceived glory:

[Patent drawing: front view]
[Patent drawing: side view]

Bennett apparently envisioned the catcher squatting behind home plate acting as a passive target for the pitcher’s offerings and designed this contraption to accept the pitched ball into the cage such that it would strike the padding and drop through a chute into the catcher’s hand so it could be returned to the mound.  As you can see, however, the device would have significant shortcomings should the catcher have to attempt to throw out a would-be base stealer, be required to catch the ball for a play at the plate, attempt to block a wild pitch or especially to field his position on a ball put in play in front of the plate.

But Bennett was not finished yet! In 1905, he patented a two-handed “Base Ball Glove” with an oversized pocket to trap the ball:
[Patent drawing: front and back views]

Bennett claimed that this poorly imagined glove was easy to use because the fingers on the player's throwing hand were specially designed to "permit the easy and quick removal of that hand to grasp and throw the ball."  Just as with the "Base Ball Catcher," however, this design does not offer the player much in the way of catching radius.

So what happened to James E. Bennett's inventions?

As of 1918, he was still looking for investors, according to this advertisement he placed in the August and October issues of "Forest and Stream" magazine.

The Rockies’ One Through Eight: the Small Successes and Failures of Lineup Construction

Given the speedy obsolescence of my last blog post, I am left to conclude that Dan O’Dowd and Bill Geivett either don’t read my blog, or they don’t give a shit what an immodest blogger has to say about the Rockies. It’s likely both. Indeed, after the Rockies traded Dexter Fowler and signed Justin Morneau last week, there’s no use rehashing alternatives and possible failures. The task now is to think about what the Rockies can do with the roster that they do have. Last week, I wrote about the construction of the Rockies’ roster in the long-term and on a macro scale. This week, I want to think about what the lineup might—and, yes, should—look like on a micro level. What did the daily lineup look like in 2013? What will the daily lineup look like in 2014? Can it be a recipe for immediate success? What does the structure of the lineup tell us about the organization? Because the pitching staff is the area most likely to go through changes between now and opening day, I’m limiting myself to the position players and their offensive production.

The consensus among those who think about these things is that most managers follow orthodoxies that determine what types of hitters can hit where—speedy guys are lead-off hitters, and power hitters hit in the four or five hole. However, there is evidence that these managerial codes are non-optimal. The big caveat, however, is that research indicates optimizing lineups might only account for a handful of runs a year, and maybe one or two wins. But sometimes one or two wins can be the difference between postseason play and spending October noting the changing leaves. My goal here is not to compare the probable 2014 lineup with a more optimal one and argue that it constitutes the difference between success and failure. Rather, I suggest that a daily glance at the Rockies one through eight in 2014 can illuminate broader directions regarding where the team is going. Or not going, as the case may be.

Here is what I think the Rockies daily lineup will look like come April (for the sake of simplicity, I’ll only consider lineups against right-handed starting pitchers):

1) Charlie Blackmon, LF
2) DJ LeMahieu, 2B
3) Carlos Gonzalez, CF
4) Troy Tulowitzki, SS
5) Michael Cuddyer, RF
6) Wilin Rosario, C
7) Justin Morneau, 1B
8) Nolan Arenado, 3B
9) Pitcher

The immediate result of the Fowler trade is that the Rockies have lost their leadoff hitter. Fowler fit the conventional profile of a leadoff hitter: namely, he is fast. But Fowler was a good fit to hit leadoff not because of his speed; it was because he was among the best on the team at getting on base. That should be the primary criterion for a leadoff hitter, because guys need to get on base in order to score runs. Despite hitting just .263, Fowler's 13% walk rate elevated his OBP to .368. For comparison, Rosario hit .292, but his free-swinging style and 3% walk rate put his OBP at just .315. Even without the threat to steal (Fowler stole 19 bases in 28 attempts), his ability to get on base made him the best candidate on the team to hit in the one hole. Without Fowler, I think Walt Weiss (or Bill Geivett, or whoever the hell makes these clubhouse decisions) is going to go with Blackmon (and sometimes Corey Dickerson) in the leadoff spot, only because Blackmon fits the profile that values speed first. If we assume that Blackmon splits time with Dickerson in left field as well as leading off games, they collectively project (per Steamer) to get on base at a .325 clip in about 700 plate appearances, hardly enough to justify hitting first.

Whereas the decision to bat Fowler first made sense both by conventional and unconventional thinking, the number-two hitter is where the Rockies really made a mistake. I expect it to be repeated in 2014. Over the course of the year, a mélange of as-of-now below-average hitters was placed in the two spot—mostly whoever happened to be playing second base, meaning either Josh Rutledge or LeMahieu. The total slash line of all two-hitters for the 2013 Rockies? .256/.290/.341. Aside from the pitcher's spot, the collective average and OBP of the two-hitter were better than only the seven spot, and the slugging percentage was the worst among position players. The Rockies essentially placed their worst hitter between the one and three spots. If the Rockies, as I suspect, go with LeMahieu to hit second, they're going to repeat the error. The other player I can envision Weiss placing in the two hole is Arenado—who projects to be the only position player with worse offensive numbers than LeMahieu.

What throws this mistaken lineup construction into such stark relief is that research suggests the two spot is precisely where the team's best hitter should be placed. Sky Kalkman argues that a team's three best hitters should fill the one, two, and four holes, with high OBP leaning toward the one and two spots and power toward the four spot. The next-best two should hit in the three and five spots, and the worst hitters should be placed in spots six through eight (in the National League). If the Rockies' daily lineup looks like what I think it will, then two of the team's three worst hitters will regularly hit first and second.

Then what should the lineup look like? Baseball Musings' lineup analysis allows the interested fan to input a name, OBP, and slugging percentage for each hitter, and it purports to output the optimal lineup based on runs per game. The calculus is based on past performance, drawn either from 1959-2004 data or from the steroid-inflated statistics of 1989-2002. As Jack Moore observes, both models are flawed because neither is fully applicable to the game today and the simulations take place in a vacuum without context. Additionally, the runs-per-game outputs are inflated beyond reason. But regardless of whether the RPG outputs can be taken at face value, the tool has some use because it lets you see RPG differentials among different lineup constructions (a toy sketch of this kind of brute-force search appears at the end of this section). Using the more inclusive 1959-2004 model and 2014 Steamer projections, the supposed optimal lineup—the one that ostensibly would produce just over five runs per game—looks like this:

1) Tulowitzki
2) Gonzalez
3) Blackmon
4) Morneau
5) Cuddyer
6) Arenado
7) LeMahieu
8) Rosario
9) Pitcher

This lineup is enticingly unconventional. It provides for the Rockies' best hitters to have the most opportunities to get on base and score runs. Still, I wouldn't follow it. For one, the team's best hitters at getting on base also happen to be the ones with the most pop, so there is no easy way to favor OBP at the one and two spots and power at the four and five spots. I would love to have an on-base Carlos Gonzalez and a home-run-hitting one, but we have to make do with the fortunate curse that they are the same person—at least we do now, as Fowler reached base about as often as Gonzalez in 2013. This lineup would also be risky because the two through four hitters are all left-handed, which would make it easy for the opposition to marshal its lefty specialist late in a close game. Instead, I would construct the Rockies' daily lineup as follows, this time with projected slash lines (again, per Steamer):

1) Gonzalez – .297/.376/.547
2) Cuddyer – .281/.343/.474
3) Rosario – .278/.316/.515
4) Tulowitzki – .300/.376/.534
5) Morneau – .276/.345/.461
6) LeMahieu – .289/.328/.392
7) Arenado – .277/.318/.446
8) Blackmon/Dickerson – .276/.326/.455
9) Pitcher (based on 2013 production) – .140/.176/.165

In my mind, this lineup is the one most likely to produce the most runs for the Rockies. Ideally, I would rather have Gonzalez hitting second rather than first, but the rest of the roster limits this flexibility. The possibility of Gonzalez leading off has been raised, but I don’t think there is much to the talk. Other than Gonzalez’s first half season with the Rockies in 2009, he’s only led off when Jim Tracy thought it could pull him out of a horrid slump. Tulowitzki is certainly a better hitter than Cuddyer, but Tulo’s power coupled with Cuddyer’s ability to get on base (even if he’s in for some serious regression in 2014) make hitting Cuddyer second and Tulo fourth the best play. The three and five spots will produce more outs than the one, two, and four spots, but the upside of Rosario’s power mitigates the risk of those outs, as would Morneau’s relatively higher OBP and ability to hit about one fifth of his balls in play as line drives.
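For readers who want to experiment with orderings themselves, the brute-force idea behind calculators like the one above is easy to sketch: score every possible batting order with a run estimator and keep the best one. The sketch below uses the Steamer OBP/SLG figures from my lineup, but the run estimator is a deliberately naive stand-in (each slot gets a declining share of plate appearances, and a hitter's value is a crude blend of OBP and SLG), so treat the output as an illustration of the search, not a reproduction of the Baseball Musings model:

```python
from itertools import permutations

# Steamer-projected OBP and SLG from the lineup above. Everything else
# (PA shares, the OBP/SLG blend) is a crude, invented stand-in for a
# real run-expectancy model; the pitcher's ninth slot is ignored.
hitters = {
    "Gonzalez":   (0.376, 0.547), "Cuddyer":    (0.343, 0.474),
    "Rosario":    (0.316, 0.515), "Tulowitzki": (0.376, 0.534),
    "Morneau":    (0.345, 0.461), "LeMahieu":   (0.328, 0.392),
    "Arenado":    (0.318, 0.446), "Blackmon":   (0.326, 0.455),
}

# Rough share of plate appearances per game for lineup slots 1 through 8.
PA_PER_GAME = [4.65, 4.55, 4.44, 4.33, 4.22, 4.11, 4.00, 3.90]

def crude_runs(order):
    """Very naive run estimator: PA-weighted blend of OBP and SLG."""
    return sum(
        pa * (1.7 * obp + 0.9 * slg) / 3.0
        for pa, (obp, slg) in zip(PA_PER_GAME, (hitters[name] for name in order))
    )

best = max(permutations(hitters), key=crude_runs)
print("Best order by the toy estimator:", " / ".join(best))
```

Because the toy estimator is just a weighted sum, it will always stack the best hitters at the top; a real simulator captures the interactions between adjacent hitters that make the problem interesting.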

Again, this exercise does not identify the path to success and the path to failure for the Rockies in 2014. The team is unlikely to make the playoffs regardless of how the lineup is structured. But what it should do is serve as a reminder to pay attention to the daily details and to think beyond inherited baseball wisdom. If the daily lineup turns out to replicate past mistakes, then I think it points to a much larger organizational problem of resisting even the simplest and most easily integrated baseball analytics. But if Weiss runs out lineups that defy convention, then it might suggest that the franchise has a baseball plan in addition to a business plan.


The Impact of Defensive Prowess on a Pitcher's Earned Run Average

EXECUTIVE SUMMARY

  • This study attempts to determine how much fielders' prowess, as measured by UZR (Ultimate Zone Rating), affects a pitcher's Earned Run Average.
  • The data used for the regression (collected from FanGraphs.com) includes collective ERA, BABIP, HR/9, BB/9, K/9 and UZR for every Major League Baseball team over the past three seasons.
  • ERA (Earned Run Average) is the number of earned runs a pitcher allows per nine innings pitched. BABIP (Batting Average on Balls in Play) is the batting average against a given pitcher, counting only plate appearances in which the hitter puts the ball in play. HR/9 is home runs allowed per nine innings pitched. BB/9 is walks allowed per nine innings pitched. K/9 is batters struck out per nine innings pitched. UZR (Ultimate Zone Rating) is a widely used defensive metric; it summarizes how many runs a given fielder saved or gave up during a season compared to the league average at his position.
  • The model passed the F-test, the adjusted R-squared came out at 91.2 percent, and every one of the independent variables passed its respective t-test.
  • The model tested negative for both multicollinearity (using Variance Inflation Factors) and heteroskedasticity (using the second version of White's test).
  • The regression equation is: ERA = -2.55 – 0.187 K/9 + 0.413 BB/9 + 16.9 BABIP + 1.72 HR/9 – 0.00157 UZR. Even though UZR has a small coefficient, it clearly affects a pitcher's ERA, and in the direction we suspected: as UZR goes up, ERA goes down.

INTRODUCTION

Since Bill James started writing about baseball in the late 1970s and began challenging the traditional stats used to evaluate players, hundreds of baseball fans have tried to follow in his footsteps, creating new ways to evaluate players and questioning the existing ones. One of the stats that has come under scrutiny lately is Earned Run Average (ERA).

According to several baseball analysts, ERA is not an efficient way to evaluate how well or poorly a pitcher performs. The rationale behind this thinking is pretty simple: ERA is the number of earned runs a given pitcher allows per nine innings pitched, but the pitcher is not always 100 percent responsible for every earned run allowed. Sometimes a fielder's lack of defensive prowess will allow hitters to reach base safely (and I am not talking about errors), and when that happens often enough, those hits translate into earned runs, thus affecting the pitcher's ERA.

One of the metrics used to measure a given fielder's prowess is UZR (Ultimate Zone Rating). UZR compiles data on outfielders' arms, fielders' range, double plays and errors, and summarizes the number of runs those fielders saved or gave up during a season compared to the league average at their positions. Using that metric along with other metrics that affect ERA, we can answer the question: how much does defensive prowess impact a pitcher's ERA?

If defensive prowess does in fact affect ERA, we can also determine by how much. With that kind of information, cost-conscious teams (such as the Tampa Bay Rays and Oakland Athletics) could improve their pitching staffs without investing heavily in new pitchers.

DATA

The unit of observation for this study is a single Major League Baseball team-season, and the number of observations is 90: there are currently 30 Major League Baseball teams, and data was collected for each of the past three seasons. The period covered runs from 2010 through 2012, inclusive.

The dependent variable used in this project was Earned Runs Average, and the independent variables are as follow:

  • BABIP: Batting average per balls in play
  • HR/9: Homeruns allowed per nine innings pitched
  • BB/9: Walks allowed per nine innings pitched
  • K/9: Hitters struck out per nine innings pitched
  • UZR: Runs saved or given up by any given fielder during a season

All the data for this study is cross-sectional because all the observations were collected at the same point in time.

All the data for this study was collected from the baseball website FanGraphs.com. FanGraphs is a widely known source of baseball stats and news, but the data they publish on their website is collected by another company called Baseball Info Solutions.

REGRESSION ESTIMATIONS

Regression Analysis: ERA versus BABIP, HR/9, BB/9, K/9 and UZR

The regression equation is:

ERA = -2.55 - 0.187 K/9 + 0.413 BB/9 + 16.9 BABIP + 1.72 HR/9 - 0.00157 UZR

Predictor    Coef        SE Coef     T       P       VIF
Constant     -2.5474     0.5594      -4.55   0.000
K/9          -0.18718    0.02428     -7.71   0.000   1.099
BB/9         0.41261     0.04671     8.83    0.000   1.052
BABIP        16.914      1.876       9.02    0.000   1.741
HR/9         1.7222      0.1105      15.58   0.000   1.180
UZR          -0.0015743  0.0006219   -2.53   0.013   1.669

S = 0.133650   R-Sq = 91.7%   R-Sq(adj) = 91.2%

Analysis of Variance

Source           DF   SS        MS       F        P
Regression       5    16.5663   3.3133   185.49   0.000
Residual Error   84   1.5004    0.0179
Total            89   18.0668

The first step used to evaluate the model was the F-test, and since the model has a p-value less than 0.05, it is safe to say the model passed. The adjusted R-squared for the model was 91.2 percent, which means that 91.2 percent of the variation in ERA is explained by the model's independent variables. The relevance of each independent variable was evaluated with a t-test, and each one, as mentioned earlier, had a p-value below 0.05, so all of them passed. The p-value for K/9, BB/9, BABIP and HR/9 was 0.000 in each case, and the p-value for UZR was 0.013.
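For readers who want to reproduce the estimation, the regression, the VIF check, and the White test are only a few lines in any modern stats package. Here is a sketch in Python; the CSV file name and column names are assumptions, and the team-season table would have to be assembled from FanGraphs first:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white
from statsmodels.stats.outliers_influence import variance_inflation_factor

# One row per team-season, 2010-2012 (90 rows). File and column names
# are assumed, not taken from the original study.
df = pd.read_csv("team_pitching_2010_2012.csv")

X = sm.add_constant(df[["K9", "BB9", "BABIP", "HR9", "UZR"]])
model = sm.OLS(df["ERA"], X).fit()
print(model.summary())  # coefficients, t-tests, F-test, R-squared

# Variance Inflation Factors for the multicollinearity check.
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(X.values, i), 3))

# White's test for heteroskedasticity on the fitted residuals.
lm_stat, lm_p, f_stat, f_p = het_white(model.resid, X)
print("White test F p-value:", round(f_p, 3))
```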

MODEL ESTIMATION SEQUENCE

  1. Correct functional form: To check for correct functional form, each independent variable was plotted against the dependent variable. The resulting scatter plots show a linear relationship between each independent variable and the dependent variable.
  2. Test for heteroskedasticity: Because the data is cross-sectional, it was necessary to test for heteroskedasticity, which was done with the second version of White's test. The residuals from the original regression were stored and squared, and the squared residuals were regressed against the independent variables and their squares. An F-test was then applied to that auxiliary regression, and since its p-value was over 0.05, the auxiliary regression fails the F-test; therefore heteroskedasticity is not present in the initial model.
  3. Test for multicollinearity: The model was also tested for multicollinearity, using the correlation matrix and the Variance Inflation Factors observed in the initial regression.
    1. Since none of the VIFs is larger than 10, it can be concluded that multicollinearity does not exist and the p-values from the t-tests can be trusted.
    2. A correlation matrix was calculated using all the independent variables, but since every one of them passed its t-test, none will be dropped from the model.
  • K/9: p-value (0.000), VIF (1.099), rho (0.252)
  • BB/9: p-value (0.000), VIF (1.052), rho (0.195)
  • BABIP: p-value (0.000), VIF (1.741), rho (0.604)
  • UZR: p-value (0.013), VIF (1.669), rho (0.604)
  4. Drop any irrelevant variables: Since all the independent variables in this model are relevant, none will be dropped.

FINAL MODEL

The final model is exactly the same as the initial model: it passed the F-test, all of the independent variables passed their t-tests, and neither heteroskedasticity nor multicollinearity is present, so it was not necessary to run another regression or drop any variable.

COEFFICIENT INTERPRETATION

  • K/9: When the team strikes out one extra batter per nine innings, the team’s ERA should go down by 0.187 runs per nine innings holding everything else constant.
  • BB/9: When the team walks one extra batter per nine innings, the team’s ERA should go up by 0.413 runs per nine innings holding everything else constant.
  • BABIP: If every ball put in play fell for a hit (a BABIP of 1.000), ERA would rise by 16.9 runs per nine innings, holding everything else constant. This variable is easier to think about in realistic increments, since BABIP never rises by a full point; it moves with how many hits the team allows on balls in play. For example, a team that allows eight hits for every 27 balls in play has a BABIP of .296. Holding everything else constant, the expected contribution of a .296 BABIP to ERA over a season would be about 5.00 runs per nine innings (16.9 x 0.296).
  • HR/9: When the team allows one more homerun per nine innings, ERA should go up by 1.72 runs per nine innings holding everything else constant.
  • UZR: When the team saves one extra run defensively, ERA should go down by 0.00157 runs per nine innings holding everything else constant.
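Plugging league-typical inputs into the fitted equation gives a quick sanity check on these coefficients. The input values below are round, league-average-ish guesses rather than any particular team's numbers:

```python
# Fitted equation from the regression above; inputs are round guesses.
def predicted_era(k9, bb9, babip, hr9, uzr):
    return (-2.55 - 0.187 * k9 + 0.413 * bb9
            + 16.9 * babip + 1.72 * hr9 - 0.00157 * uzr)

print(round(predicted_era(7.5, 3.0, 0.295, 1.0, 0.0), 2))   # ~3.99, roughly league average
print(round(predicted_era(7.5, 3.0, 0.295, 1.0, 50.0), 2))  # ~3.91, same team but +50 UZR
```

Fifty runs saved in the field shaves roughly 0.08 off the team's ERA in this model, which is the small but real effect the summary describes.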

SUMMARY

The null hypothesis for this project stated that defensive prowess doesn't affect ERA, but the results showed otherwise, so it is safe to reject the null hypothesis. Defensive prowess appears to affect ERA, albeit on a small scale. This might not seem like much, but cost-conscious teams like the Rays and Athletics can acquire premium defensive players at a much lower cost than a premium pitcher, and although those defenders won't be "game changers," they will improve the team's ERA.

Baseball is a game of numbers, and these numbers don't lie. A good defender will help his team save runs; a lot of good defenders will help their team save a multitude of runs. Is this enough to get to the postseason or win a World Series? Absolutely not, but it has been shown repeatedly that finding edges in the game, however small, will help a team in the long run. The findings in this study are evidence that taking advantage of defense is one such edge, and one that can be exploited for the betterment of the organization.


Confounding: Are the Rockies Rebuilding?

In the 2014 Hardball Times Baseball Annual, Jeff Moore analyzes six teams undergoing some form of “rebuilding.” He correctly notes that the concept has become a platitude in sports media, but that it still has explanatory value. In order to highlight the utility of “rebuilding,” he parses the concept to represent different forms of practice implemented by a variety of organizations. Moore covers the “ignorance” of the Philadelphia Phillies who continue on as if their core of players wasn’t aging and Ryan Howard was ever a reliable contributor; the “recognition” of the New York Mets that they have to be patient for one or two more years before the pieces come together and, they hope, work as well as Matt Harvey’s new elbow should; the “overhauling” of the Houston Astros evident in their fecund farm system and arid big league squad; the “perpetual” rebuilding of the Miami Marlins in a different key from anyone else, most recently using the public extortion and fire sale method; the Kansas City Royals’ “deviation” by trading long-term potential for a short-term possibility; and the “competition” exemplified by the 2013 Pittsburgh Pirates as they seemingly put everything together in 2013, though it remains to be seen whether or not they will need to rebuild again sooner rather than later.

Although the Colorado Rockies are not on Moore’s radar, I think they fall into an altogether different category. They appear to be in a confoundingly stagnant state of non-rebuilding. The mode of rebuilding can be as stigmatizing as it is clichéd, and it is as if the Rockies are avoiding the appellation at the cost of the foresight it might bring. Or, I don’t know what the hell is going on, and I’m not convinced there is a clear plan.

That might sound unfair. But if we, like Moore, take the definition of rebuilding to essentially mean identifying a future window of opportunity and working towards fielding a competitive team to maximize that opportunity, but with the acceptance of present limitations, then I don’t think I’m far off. General Manager Dan O’Dowd is, inexplicably, the fourth-longest tenured general manager in all of baseball, despite overseeing just four winning clubs in 14 full seasons. The only GMs who have held their current job longer are the dissimilarly successful Brian Sabean of the San Francisco Giants, Brian Cashman of the New York Yankees, and Billy Beane of the Oakland Athletics. The possible moves that have been rumored suggest that Dan O’Dowd and de facto co-GM Bill Geivett are frozen by anything more than a one-year plan.

Let’s look at some of the possible moves that are garnering notice. Beat writer Troy Renck reports that the Rockies are eying first baseman Justin Morneau to replace the retired Todd Helton. Of all of the speculative deals, this one is most likely to happen. But what would this accomplish in the short and long-term? In the short term, it would provide a replacement for Todd Helton and possibly provide a bridge for either Wilin Rosario or prospect Kyle Parker to take over full-time at first. The long-term effects are not as easy to identify, as his contract probably wouldn’t exceed two years.

It might sound just fine, until you realize that Morneau would be a "replacement" in more than one sense. Per FanGraphs' Wins Above Replacement (WAR), Morneau hasn't accrued an average major-league season since the half-season he played in 2010. Hayden Kane over at Rox Pile notes that he slashed .345/.437/.618 before a concussion ended his 2010 season and most of the next, but those numbers were inflated by a .385 Batting Average on Balls in Play (BABIP), roughly 100 points higher than his career average. He was still well on his way to a successful season, but the effects the concussion had on his productivity cannot be overstated. Morneau accrued 4.9 WAR in the 81 games he played in 2010, and 0.4 since. Optimistically, if Morneau out-produces his projected line next year (.258/.330/.426, per Steamer projections), which he likely would do playing half of his games in Coors Field (except against lefties, whom he can't hit), he would at best be a league-average hitter to go along with his average defense. Sure, it would be an improvement over the lackluster production from first base in 2013, but not enough to build beyond current listlessness.

Fundamentally, I believe that the Rockies do need a bridge before easing Rosario into a defensive position where he is less of a liability or seeing what the team has in Parker. But they already have the link in Michael Cuddyer. While he’s unlikely to reproduce the career year he had in his age 34 season in 2013, having Cuddyer play out his contract sharing time at first seems to be the better allocation of resources in the short-term. In January of 2013, Paul Swydan characterized the Rockies as an organization on a “quest for mediocrity.” Signing Morneau would go a long way toward realizing that goal.

In addition to possible additions via free agency, trade rumors aren't helping to clarify where the team is. It has been rumored that the Rockies are interested in trading for Anaheim's Mark Trumbo, which would also fill the hole at first base that I don't think actually exists yet. Trumbo, a power hitter, is misleadingly tantalizing. As opposed to Morneau, Trumbo is at least on the right side of 30; similarly, though, Trumbo doesn't get on base enough to provide the offense the boost it needs, especially on the road. He'd be a virtual lock to hit 30+ home runs, but he would also be sure to have an OBP hovering around .300. It's unclear who would be involved in such a deal, as the Angels wouldn't be interested in the Rockies' primary trading piece, Dexter Fowler.

Speaking of Fowler, he’s going to be traded. In an interview with Dave Krieger, O’Dowd said that the organization has given up on him. Not in those words of course—rather, he noted that Fowler lacks “edge,” which is a bullshit baseball “intangible” that doesn’t tell us anything about the player in question, but rather that the front office seeks amorphous traits that can only be identified retrospectively. Reports have the Rockies in talks with Kansas City that would result in the teams swapping Fowler for a couple of relievers, likely two of Aaron Crow, Tim Collins, and Wade Davis. This, too, would maintain organizational stagnation.

The Rockies are practicing a confounding type of non-rebuilding, wherein veterans are brought in not with the idea that they can be valuable role players (like Shane Victorino, Mike Napoli, and Stephen Drew were for the Boston Red Sox last off-season), but as immediate solutions to problems that should be viewed in the long term. I'm not as pessimistic as I might sound. The Rockies finished in last place for the second straight season in 2013, but with just two fewer wins than the Padres and Giants, and a true-talent level of about a .500 team. The thing about teams with a win projection of about 80 is that they can reasonably be expected to finish with as many as 90 wins, or as few as 70. If the Rockies are competitive in 2014, it will likely be due to health and a lot of wins in close games. I do, however, think they can be competitive starting in 2015. That's the rebuilding window of opportunity the team should be looking at. If they are, it won't be because of who is playing first base or right field, or even an improvement in hitting on the road, but progress in the true source of their problems: run prevention.

Last year, only the Twins and the lowly Astros allowed more runs per game. Despite this, for the first time in a while Rockies fans can be optimistic about the engine of run prevention: quality starting pitching. This is an area where the team can build a clear agenda for the future. Tyler Chatwood and Jhoulys Chacin should be reliable starters for the next few years. It's unclear how many good years Jorge de la Rosa has left in him, and it's also unclear whether Juan Nicasio can be a legitimate starter. But the Rockies have two polished, nearly big-league-ready pitching prospects in Jonathan Gray and Eddie Butler (Rockies fans should be really excited about these two), so long as one of them is not one of the "young arms" rumored to be in play for Trumbo. If Gray and Butler can be shepherded to the big leagues in a timely manner and learn to pitch to major leaguers quickly, they could join Chatwood and Chacin to form possibly the best rotation in Rockies history. And if the front office really wants to make a big free-agent splash, the answers aren't in the Brian McCanns or Jose Abreus of the world, but in splitter-throwing, ground-ball-inducing, 25-year-old starting pitcher Masahiro Tanaka. His presence would likely push a rotation in 2015-2016 and possibly beyond from dependable to exceptional. Of course, it won't happen. The Rockies, if they bid, will be outbid, and it's precisely the starting pitchers in demand that tend to stay away from Colorado.

In a sense, every major-league team is always in some stage of rebuilding, whether they admit it or not. My point is that I think there can be power in the admission of it. De-stigmatizing the “rebuilding process” might contribute to the recognition that it’s not necessarily a multiyear process, and that being in the process is not an acknowledgement of failure. Recognition of this, which by itself should provide more foresight, should lead the organization and armchair observers like myself from a state of confusion due to the team’s pursuit of stagnation, to one of encouragement where progress can be visualized.


Weighting Past Results: Starting Pitchers

My article on weighting a hitter’s past results was supposed to be a one-off study, but after reading a recent article by Dave Cameron I decided to expand the study to cover starting pitchers. The relevant inspirational section of Dave’s article is copied below:

“The truth of nearly every pitcher’s performance lies somewhere in between his FIP-based WAR and his RA9-based WAR. The trick is that it’s not so easy to know exactly where on the spectrum that point lies, and it’s not the same point for every pitcher.”

Dave's work is consistently great. This, however, is a rather hand-wavy explanation of things. Is there a way we can figure out where pitchers have typically fallen on this spectrum in the past, so that we can make more educated guesses about a pitcher's true skill level? We have the data, so we can try.

So, how much weight should be placed on ERA and FIP respectively? Like Dave said, the answer will be different in every case, but we can establish some solid starting points. Also, since we're trying to predict future pitching results and not just measure historical value, we're going to factor in the very helpful xFIP and SIERA metrics.

Now for the methodology paragraph: to test this, I'm going to use every pitcher season since 2002 (when FanGraphs starts recording xFIP/SIERA data) in which a pitcher threw at least 100 innings, and then weight the relevant metrics for that season to create an ERA prediction for the following season. I'll then look at the difference between the following season's predicted and actual ERA, and calculate the average miss. The smaller the average miss, the better the weights. Simple. As an added note, I have weighted each pitcher's second (predicted vs. actual) season by innings pitched, so a pitcher who threw 160 innings in his second season carries more weight than one who threw only 40 innings.
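In code, evaluating a single weighting scheme looks something like the sketch below. The data frame and its column names are assumptions; the season pairs would be built from the FanGraphs leaderboards described above:

```python
import pandas as pd

# One row per pitcher season-pair: year-1 ERA/FIP/SIERA (>= 100 IP) plus
# the following season's actual ERA and innings. Column names are assumed.
pairs = pd.read_csv("starter_season_pairs.csv")

def average_miss(w_era, w_fip, w_siera, df):
    """Innings-weighted mean absolute error of a weighted ERA prediction."""
    predicted = (w_era * df["era_y1"] + w_fip * df["fip_y1"]
                 + w_siera * df["siera_y1"])
    miss = (predicted - df["era_y2"]).abs()
    return (miss * df["ip_y2"]).sum() / df["ip_y2"].sum()

# Example: the 15/15/70 blend that shows up in the tables below.
print(round(average_miss(0.15, 0.15, 0.70, pairs), 4))
```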

How predictive are each of the relevant stats without weights? I am nothing without my tables, so here we go (There are going to be a lot of tables along the way to our answers. If you’re just interested in the final results, go ahead and skip on down towards the bottom).

Metric Average Miss
ERA .8933
FIP .7846
xFIP .7600
SIERA .7609

This doesn’t really tell us anything we don’t already know: SIERA and xFIP are similar, and FIP is a better predictor than ERA. Let’s start applying some weights to see if we can increase accuracy, starting with ERA/SIERA combos.

ERA% SIERA% Average Miss
50% 50% .7750
75% 25% .8218
25% 75% .7530
15% 85% .7527
10% 90% .7543
5% 95% .7571

We can already see that factoring in ERA just a slight amount improves our results noticeably. When you're predicting a pitcher's future, then, you can't rely fully on xFIP or SIERA to be your fortune teller. You can't lean on ERA too hard either, though, since once its weight gets much above 25% your projections begin to go awry. OK, so we know how SIERA and ERA combine, but what if we use xFIP instead?

ERA% xFIP% Average Miss
25% 75% .7530
15% 85% .7530
10% 90% .7549
5% 95% .7560

Using xFIP didn’t really improve our results at all. SIERA consistently outperforms xFIP (or is at worst only marginally beaten by it) throughout pretty much all weighting combinations, and so from this point forward we’re just going to use SIERA. Just know that SIERA is basically xFIP, and that there are only slight differences between them because SIERA makes some (intelligent) assumptions about pitching. Now that we’ve established that, let’s try throwing out ERA and use FIP instead.

FIP% SIERA% Average Miss
50% 50% .7563
25% 75% .7543
15% 85% .7560
10% 90% .7570

It's interesting that ERA/SIERA combos are more predictive than FIP/SIERA combos, even though FIP by itself is more predictive than ERA. This is likely because many pitchers carry consistent team factors (park, defense) that show up in ERA but are stripped out of FIP. We'll explore that more later, but for now let's see whether any ERA/FIP/SIERA combination gives us better results.

ERA% FIP% SIERA% Average Miss
25% 25% 50% .7570
15% 15% 70% .7513
10% 10% 80% .7520
5% 15% 80% .7532
10% 15% 75% .7517
15% 25% 60% .7520
15% 25% 65% .7517

There are three values here that are all pretty good. The important thing to note is that ERA/FIP/SIERA combos offer more consistently good results than any two stats alone. SIERA should be your main consideration, but ERA and FIP should not be discarded, since the three-stat combo predicts next-season ERA roughly .01 better than SIERA alone. It's a small difference, but it's there.

Now I'm going to go back to something I mentioned previously: should a pitcher be evaluated differently if he isn't coming back to the same team? The answer is a pretty obvious yes, since a pitcher's defense, park, and source of morning coffee will all change. Let's narrow the sample down to only pitchers who changed teams, to see if different numbers work better. These numbers will be useful when evaluating free agents, for example.

ERA% FIP% SIERA% Average Miss (changed teams)
10% 15% 80% .7932
5% 15% 80% .7918
2.5% 17.5% 80% .7915
2.5% 20% 77.5% .7915
2.5% 22.5% 75% .7917

As suspected, ERA loses a lot of its usefulness when a pitcher switches teams, FIP retains its marginal usefulness, and SIERA carries more weight. Another thing to note is that it's just straight-up harder to predict pitcher performance when a pitcher is changing teams, no matter what metric you use: SIERA by itself drops in accuracy to .793 when dealing only with pitchers who changed teams, a noticeable difference from the .760 value above for all pitchers.

For those of you who have made it this far, it's time to rejoin the readers who skipped down to the bottom. Here's a handy little chart summarizing the optimal weights found above for evaluating pitchers:

Optimal Weights

Team ERA% FIP% SIERA% Average Miss
Same 10% 15% 75% .7517
Different 2.5% 17.5% 80% .7910

Of course, any reasonable projection should take more than one year of data into account. The point of this article was not to present a complete projection system, but to explore how much weight to give each of the different metrics available to us when evaluating pitchers. Regardless, I'm going to expand the study a bit to get a better idea of how to weight years, by establishing weights over a two-year period. I'm not going to show my work here, mostly out of an honest effort to spare you from dissecting more tables, so here are the optimal two-year weights:

ERA% Year 1 FIP% Year 1 SIERA% Year 1 ERA% Year 2 FIP% Year 2 SIERA% Year 2 Average Miss
5% 5% 30% 7.5% 7.5% 45% .742

As expected, using multiple years increases our accuracy (by roughly .01 ERA per pitcher). Also note that these numbers are for evaluating all pitchers; if you're dealing with a pitcher who is changing teams, you should tweak ERA down while nudging FIP and SIERA up. And, again, as Dave stated, each pitcher is a case study; each warrants his own more specific analysis. But be careful when you're changing weights. Make sure you have a really solid reason for your tweaks, and make sure you're not tweaking the numbers too much, because once you start thinking you're significantly smarter than historical tendencies you can get into trouble. So these are your starting values; carefully tweak from here. Go forth, smart readers.
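Applied to a single pitcher, the two-year blend is just a weighted average. Here's a toy example with invented inputs (not a real projection for any actual pitcher, with no aging adjustment, and assuming "Year 2" in the table above is the more recent season):

```python
# Two-year blend from the table above; the pitcher's numbers are invented.
weights = {"era_y1": 0.05, "fip_y1": 0.05, "siera_y1": 0.30,
           "era_y2": 0.075, "fip_y2": 0.075, "siera_y2": 0.45}
pitcher = {"era_y1": 3.80, "fip_y1": 3.60, "siera_y1": 3.50,   # two seasons ago
           "era_y2": 3.40, "fip_y2": 3.30, "siera_y2": 3.25}   # most recent season

projection = sum(weights[k] * pitcher[k] for k in weights)
print(f"Blended ERA projection: {projection:.2f}")
```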

As a parting gift to this article, here's a list of the top 20 predictions for pitchers using the two-year model described above. Note that this inherently excludes one-year pitchers such as Jose Fernandez, as well as pitchers who failed to meet the 100-IP-as-a-starter requirement in either of the past two seasons. Also note that these numbers do not include any aging adjustments (aging curves are well outside the scope of this article), which would obviously need to be factored into any finalized projection system.

# Pitcher Weighted ERA prediction
1 Clayton Kershaw 2.93
2 Cliff Lee 2.94
3 Felix Hernandez 2.95
4 Max Scherzer 3.01
5 Stephen Strasburg 3.03
6 Adam Wainwright 3.11
7 A.J. Burnett 3.22
8 Anibal Sanchez 3.22
9 David Price 3.24
10 Madison Bumgarner 3.33
11 Alex Cobb 3.36
12 Cole Hamels 3.36
13 Zack Greinke 3.41
14 Justin Verlander 3.41
15 Doug Fister 3.46
16 Marco Estrada 3.48
17 Gio Gonzalez 3.53
18 James Shields 3.53
19 Homer Bailey 3.57
20 Mat Latos 3.60

What if: Prince Fielder Were an Everyday Shortstop?

I was recently involved in an online discussion of the Prince Fielder/Ian Kinsler trade and the signing of Jhonny Peralta by the St. Louis Cardinals. Someone stated that Peralta was no more than a utility infielder who could sometimes hit. I pointed out that, over the last three seasons, Peralta was actually a top-five SS. Someone else stated that Prince, were he to play SS, would also be a top-five SS. I thought that was ridiculous, but decided I’d try to look at it as objectively as possible.

Over the last three seasons, Fielder has 111 batting runs, -18 base running runs, 61 replacement runs and -10 fielding and -37 positional runs for 107 total runs.

If we assume that his batting, base running and overall playing time would stay the same, which is probably an optimistic assumption given the likely additional strain of playing SS instead of 1B, then we only need to adjust his positional and defensive runs.

The positional adjustment is the easiest piece to handle. The adjustment for 1B is -12.5 runs per 1,350 innings; the adjustment for SS is +7.5 runs per 1,350 innings. Fielder's -37 positional runs represent (-37 / -12.5) about 3.0 defensive seasons, and three defensive seasons at SS are worth (3 * 7.5) about 23 runs.

At this point, Fielder at SS is worth 111 batting runs + -18 baserunning runs + 23 positional runs + 61 replacement runs. That's 177 runs before we account for his defense, which would make him, by far, the best SS in the league; Troy Tulowitzki has 114 runs.

But we still haven’t factored in Fielder’s defense compared to the average SS. I’m not really sure that we can.

Fielder has been about six runs worse than the average 1B each season of his career. But the average SS is a much better defensive player than the average 1B.

I think it’s safe to assume that Fielder would be the worst defensive SS in baseball.

Since 2002, the UZR era, the worst season by a SS (minimum 650 innings, about half a season) is Dee Gordon’s 2012 season in which UZR says he was worth -27 runs per 1350 innings.

That’s a somewhat amusing comparison. Dee Gordon is listed at 5’11” 160 lbs. Prince is listed at 5’11” 275 lbs. Those are listed weights and I think it’s entirely possible that Prince weighs twice as much as Gordon.

I'm going to go out on a limb and say that Prince would be a worse defensive SS than Gordon. I'd go so far as to say that he would be considerably worse. But how much is considerably?

UZR can be broken down into different components:
Range runs – attempts to measure a player’s range: how many balls he does/doesn’t get to compared to average.
Error runs – attempts to measure how many runs a player saves/costs his team by avoiding/making errors.
Double play runs – attempts to measure how many runs a player saves/costs his team by turning/not turning double plays.

I’m going to assume that Fielder would be the worst at all three of the above. So, what would that look like for Fielder’s overall defensive worth at SS?

It’s worth noting here that most of Gordon’s poor UZR was due to errors; his range and double-play numbers were bad, but not historically bad. His errors were.

The worst SS in terms of double play runs (per 1350 innings) was, go figure, 2012 Dee Gordon at -5 runs per 1350 innings. If we say that Fielder would be only as bad as Gordon (I’ve little doubt he’d be much worse), that’d be (3 * -5) -15 runs over the three seasons.

The worst SS in terms of range runs was, not surprisingly, 2012 Derek Jeter at -17.5 runs per 1350 innings. Anyone think that Fielder has Jeter’s range? I don’t. But if we give Fielder three seasons as poor as Jeter’s 2012, that’s (3 * -17.5) about -53 runs over three seasons.

The worst SS in terms of error runs, bet you guessed it, was 2012 Dee Gordon at -13 runs per 1350 innings. Again, I think that Dee’s footwork and hands around 2B would be much better than Fielder’s, but if we say that Fielder would be only as bad as Gordon, then he’d be worth (3 * -13) -39 runs over the three seasons.

If we add all of that up (remembering that this is, I believe, an optimistic look at Fielder’s possible performance at SS), we get Fielder being (-15 - 53 - 39) -107 runs worse than the average SS over the three seasons, or roughly -36 runs per 1350 innings. That’s quite a bit worse than Gordon’s -27 runs.
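For anyone who wants the bookkeeping spelled out, here’s a small Python sketch of that worst-case sum:

```python
# Sum the worst-case defensive components quoted above, each projected over
# three full (1350-inning) seasons at shortstop.

WORST_DP_RUNS = -5.0      # double-play runs per 1350 innings (2012 Dee Gordon)
WORST_RANGE_RUNS = -17.5  # range runs per 1350 innings (2012 Derek Jeter)
WORST_ERROR_RUNS = -13.0  # error runs per 1350 innings (2012 Dee Gordon)

seasons = 3
components = [seasons * WORST_DP_RUNS, seasons * WORST_RANGE_RUNS, seasons * WORST_ERROR_RUNS]
print(components, sum(components))  # [-15.0, -52.5, -39.0] -> -106.5, about -107
```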

Let’s add that to his other performance from above:
111 batting runs, -18 base running runs, -107 fielding runs, 23 positional runs, 61 replacement runs = 71 total runs.

71 total runs between 2011 and 2013 would have put Fielder 12th among major league SS, between Hanley Ramirez (84 runs) and Marco Scutaro (70 runs), and worth about 2.5 WAR per season.
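The runs-to-WAR conversion here is simple enough to show in a couple of lines. The exact runs-per-win factor is my assumption (the usual rule of thumb is roughly 9-10 runs per win), not something specified above:

```python
# Convert the three-year run total into WAR per season, assuming ~9.5 runs
# per win (an assumed conversion factor, not one given in the article).

total_runs = 71.0    # Fielder-at-SS total, 2011-2013, from the sum above
runs_per_win = 9.5   # assumption
seasons = 3

print(round(total_runs / runs_per_win / seasons, 1))  # about 2.5 WAR per season
```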

To emphasize again, I think these are the most ridiculously optimistic assumptions that I can present with a straight face. I think it much more likely that Fielder would be a -50 (per 1350 innings) or worse SS were he to play there every day. Not to mention the additional strain on his body that would decrease his hitting, baserunning, and ability to play every day.


Thoughts on the MVP Award: Team-Based Value and Voter Bias

You are reading this right now.  That is a fact.  Since you are reading this right now, many things can be reasonably inferred:

1.  You probably read FanGraphs at least fairly often

2. Since you probably read FanGraphs at least fairly often, you probably know that there are a lot of differing opinions on the MVP award and that many articles here in the past week have been devoted to it.

3. You probably are quite familiar with sabermetrics

4. You probably are either a Tigers fan or think that Mike Trout should have won MVP, or both

5. You might know that Josh Donaldson got one first-place vote

6. You might even know that the first-place vote he got was by a voter from Oakland

7. You might know that Yadier Molina got two first-place votes, and they both came from voters from St. Louis

8. You might even know that one of the voters who put Molina first on his ballot put Matt Carpenter second

9. You might be wondering if there is any truth to the idea that Miguel Cabrera is much more important to his team than Mike Trout is

I have thought about many of those things myself.  So, in this very long 2-part article, I am going to discuss them.  Ready?  Here goes:

Part 1: How much of an impact does a player have on his team?

Lots of people wanted Miguel Cabrera to win the MVP award. Some of you reading this may be shocked, but it’s actually true. One of the biggest arguments for Miguel Cabrera over Mike Trout for MVP is that Cabrera was much more important and “valuable” than Trout.  Cabrera’s team made the playoffs.  Trout’s team did not.  Therefore anything Trout did cannot have been important.  Well, let’s say too important.  I don’t think that anybody’s claiming that Trout had zero impact on the game of baseball or the MLB standings whatsoever.

OK. That’s reasonable. There’s nothing flawed about that thinking, as long as it’s not a rationale for voting Cabrera ahead of Trout for MVP. As a general idea, it makes sense: Cabrera had a bigger impact on baseball this year than Trout did. I, along with many other people in the sabermetric community, disagree that that’s a reason to vote for Cabrera, though. But the question I’m going to ask is this: did Cabrera have a bigger impact on his own team than Trout did?

WAR tells us no. Trout had 10.4 WAR, tops in MLB. Cabrera had 7.6 – a fantastic number, good for 5th in baseball and 3rd in the AL, as well as his own career high – but clearly not as high as Trout’s. Miggy’s hitting was out of this world, at least until September, and it’s pretty clear that he could have topped 8 WAR easily had he stayed healthy through the final month and been just as productive as he was from April through August. But the fact is, he did get hurt, and he did not finish with a WAR as high as Trout’s. So if both were replaced with a replacement player, the Angels would suffer more than the Tigers. Cabrera was certainly valuable: take away the 7 or 8 wins he provided over a replacement player and the Tigers probably don’t win the AL Central. But take Trout out, and the Angels go from a mediocre-to-poor team to a really bad one. The Angels had 78 wins this year, and that would have been around 68 (if we trust WAR) without Trout – the 6th-worst total in the league. So, by WAR, Trout meant more to his team than Cabrera did.

But WAR is not the be-all and end-all of statistics (though we may like to think it is sometimes). Let’s look at this from another angle. Here’s a theory for you: the loss of a key player on a good team would probably not hurt that team as much, because the team is already good to begin with. If a not-so-good team loses a key player, though, the other players on the team aren’t as good, so they can’t carry the team very well.

How do we test this theory?  Well, we have at our disposal a fairly accurate and useful tool to determine how many wins a team should get.  That tool is pythagorean expectation – a way of predicting wins and losses based on runs scored and allowed.  So let’s see if replacing Trout with an average player (I am using average and not replacement because all the player run values given on FanGraphs are above or below average, not replacement) is more detrimental to the Angels than replacing Cabrera with an average player is to the Tigers.

The Angels, this year, scored 733 runs and allowed 737.  Using the Pythagenpat (sorry to link to BP but I had to) formula, I calculated their expected win percentage, and it came out to .497 – roughly 80.6 wins and 81.4 losses*.  That’s actually significantly better than they did this year, which is good news for Angels fans.  But that’s not the focus right here.

Trout, this year, added 61.1 runs above average at the plate and 8.1 on the bases for a total of 69.2 runs of offense. He also saved 4.4 runs in the field (per UZR). So, using the Pythagenpat formula again with adjusted run values for if Trout were replaced by an average hitter and defender (663.8 runs scored and 741.4 runs allowed), I again calculated the Angels’ expected win percentage. This came out to .449 – roughly 72.7 wins and 89.3 losses, or 7.9 fewer wins than the original figure. That’s the difference, for that specific Angels team, that Trout made. Now, keep in mind, this is above average, not replacement, so it will be lower than WAR by a couple wins (about two WAR signifies an average player, so wins above average will be about two less than wins above replacement). 7.9 wins is a lot. But is it more than Cabrera’s?
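If you want to replicate these numbers, here’s a short Python sketch of the Pythagenpat calculation. I’m using the commonly cited 0.287 exponent, which may differ very slightly from the source linked above:

```python
# Expected wins from runs scored and allowed, per Pythagenpat.

def pythagenpat_wins(runs_scored: float, runs_allowed: float, games: int = 162) -> float:
    exponent = ((runs_scored + runs_allowed) / games) ** 0.287
    win_pct = runs_scored ** exponent / (runs_scored ** exponent + runs_allowed ** exponent)
    return win_pct * games

# 2013 Angels, with and without Trout's offense (69.2 runs) and defense (4.4 runs saved).
with_trout = pythagenpat_wins(733, 737)                  # about 80.6 wins
without_trout = pythagenpat_wins(733 - 69.2, 737 + 4.4)  # about 72.7 wins
print(round(with_trout - without_trout, 1))              # about 7.9 wins
```

The Tigers/Cabrera calculation below, and the hypothetical 900-run Angels team later on, work exactly the same way.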

Let’s see.  This year, the Tigers scored 796 runs and allowed 624.  This gives them a pythagorean expectation (again, Pythagenpat formula) of a win percentage of .612 – roughly 99.1 wins and 62.9 losses.  Again much better than what they did this year, but also not the focus of this article.  Cabrera contributed 72.1 runs above average hitting and  4.4 runs below average on the bases for a total of 67.7 runs above average on offense.  His defense was a terrible 16.8 runs below average.

Now take Cabrera out of the equation.  With those adjusted run totals (728.3 runs scored and 607.2 runs allowed) we get  a win percentage of .583 – 94.4 wins and 67.6 losses.  A difference of 4.7 wins from the original.

Talk about anticlimactic.  Trout completely blew Cabrera out of the water (I would say no pun intended, but that was intended).  This makes sense if we think about it – a team with more runs scored will be hurt less by x fewer runs because they are losing a lower percentage of their runs.  In fact, if we pretend the Angels scored 900 runs this year instead of 733, they go from a 96.5-win team with Trout to an 89.8-win team without.  Obviously, they are better in both cases, but the difference Trout makes is only 6.7 wins – pretty far from the nearly 8 he makes in real life.

The thing about this statistic is that it penalizes players on good teams. Generally,  statistics such as the “Win” for pitchers are frowned upon because they measure things that the pitcher can’t control – just like this one.  But if we want to measure how much a team really needs a player, which is pretty much the definition of value, I think this does a pretty good job. Obviously, it isn’t perfect: the numbers that go into it, especially the baserunning and fielding ones, aren’t always completely accurate, and when looking at the team level, straight linear weights aren’t always the way to go; overall, though, this stat gives a fairly accurate picture.  The numbers aren’t totally wrong.
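Here’s the whole statistic wrapped into one small, self-contained Python helper (the Pythagenpat function is repeated from the sketch above). The defensive sign convention is the only subtle part: removing a player’s runs saved means adding them back to the team’s runs allowed.

```python
def pythagenpat_wins(rs: float, ra: float, games: int = 162) -> float:
    """Expected wins from runs scored and allowed (same helper as above)."""
    exp = ((rs + ra) / games) ** 0.287
    return rs ** exp / (rs ** exp + ra ** exp) * games

def twaa(team_rs: float, team_ra: float, off_runs: float, def_runs: float,
         games: int = 162) -> float:
    """Team-adjusted wins above average: strip one player's offensive runs above
    average from runs scored and his fielding runs saved from runs allowed."""
    baseline = pythagenpat_wins(team_rs, team_ra, games)
    without_player = pythagenpat_wins(team_rs - off_runs, team_ra + def_runs, games)
    return baseline - without_player

print(round(twaa(733, 737, 69.2, 4.4), 1))    # Trout:   about 7.9
print(round(twaa(796, 624, 67.7, -16.8), 1))  # Cabrera: about 4.7
```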

Here’s a look at the top four vote-getters from each league by team-adjusted wins above average (I’ll call it tWAA):

Player tWAA
Mike Trout 7.9
Andrew McCutchen 6.4
Paul Goldschmidt 6.2
Chris Davis 6.1
Josh Donaldson 4.9
Miguel Cabrera 4.7
Matt Carpenter 4.0
Yadier Molina 3.1

This is interesting. As expected, the players on better teams have a lower tWAA than the ones on worse teams, just as we discussed earlier. One notable player is Yadier Molina, who, despite being considered one of the best catchers in the game, if not the best, has the lowest tWAA of anyone on that list. This may be partly because he missed some time. But let’s look at it a little closer: if we add the 2 wins that an average player would provide over a replacement-level player, we get 5.1 WAR, which isn’t so far off of his 5.6 total from this year. And the Cardinals’ pythagorean expectation was 101 wins, so obviously under this system he won’t be credited as much, because his runs aren’t as valuable to his team. Another factor is that we’re not adjusting by position here (except for the fielding part), and Molina is worth more runs offensively above the average catcher than he is above the average hitter, since catchers generally aren’t as good at hitting. But if Molina were replaced with an average catcher, I’m fairly certain that the Cardinals would lose more than the 3 games this number suggests. They might miss Molina’s game-calling skills – if such a thing exists – and there’s no way to quantify how much Molina has helped the Cardinal pitchers improve, especially since they have so many rookies. But there’s also something else, something we can quantify, even if not perfectly, and that’s pitch framing. Let’s add the 19.8 runs that Molina saved by framing (as measured by StatCorner) to his defensive runs saved. For Molina’s defense, by the way, I used the Fielding Bible’s DRS, since there is no UZR for catchers; that may be another reason Molina’s number seems out of place, because DRS and UZR don’t always agree (Trout’s 2013 UZR was 4.4, and his DRS was -9). Molina did also play 18 innings at first base, where he had a UZR of -0.2, but we’ll ignore that, since it is such a small sample size and won’t make much of a difference.

Here is the table with only Molina’s tWAA changed, to account for pitch framing:

Player tWAA
Mike Trout 7.9
Andrew McCutchen 6.4
Paul Goldschmidt 6.2
Chris Davis 6.1
Yadier Molina 5.4
Josh Donaldson 4.9
Miguel Cabrera 4.7
Matt Carpenter 3.9

Now we see Molina move up into 5th place out of 8 with a much better tWAA of 5.4 – more than 2 wins better than without the pitch framing, and about 7.4 WAR if we want to convert from wins above average to wins above replacement.  Interesting. I don’t want to get into a whole argument now about whether pitch framing is accurate or actually based mostly on skill instead of luck, or whether it should be included in a catcher’s defensive numbers when we talk about their total defense. I’m just putting that data out there for you to think about.

But as I mentioned before, I used DRS for Molina and not UZR. What if we try to make this list more consistent and use DRS for everyone? (We can’t use UZR for everyone.)  Let’s see:

Player tWAA DRS UZR
Mike Trout 6.5 -9 4.4
Andrew McCutchen 6.4 7 6.9
Paul Goldschmidt 7.0 13 5.4
Chris Davis 5.5 -7 -1.2
Molina w/ Framing 5.4 31.8 N/A
Josh Donaldson 5.0 11 9.9
Miguel Cabrera 4.6 -18 -16.8
Matt Carpenter 4.1 0 -0.9
Yadier Molina 3.1 12 N/A

We see Trout go down by almost a win and a half here. I don’t really trust that, though, because I really don’t think that Mike Trout is a significantly below-average fielder, despite what DRS tells me. DRS gave Trout a rating of 21 in 2012, and a swing from +21 to -9 in one year makes me trust it less here. But for the sake of consistency, I’m showing you those numbers too, with the DRS and UZR comparison so you can see why certain players lost or gained wins.

OK. So I think we have a pretty good sense for who was most valuable to his team. But I also think we can improve this statistic a little bit more. Like I said earlier, the hitting number I use – wRAA – is based on league average, not on position average. In other words, if Chris Davis is 56.3 runs better than the average hitter, but we replace him with the average first baseman, that average first baseman is already going to be a few runs better than the average player. So what if we use weighted runs above position average? wRAA is calculated by subtracting the league-average wOBA from a player’s wOBA, dividing by the wOBA scale, and multiplying by plate appearances. What I did instead was subtract the position-average wOBA from the player’s wOBA. That penalizes players at positions where the position-average wOBA is high.
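In code, the only change is which baseline wOBA you subtract. A minimal sketch, assuming approximate 2013 constants (a league wOBA around .314 and a wOBA scale around 1.28) and a purely hypothetical hitter:

```python
# Weighted runs above (some) average: (wOBA - baseline wOBA) / wOBA scale * PA.
# The constants and the example player below are approximations/hypotheticals.

WOBA_SCALE = 1.28    # approximate 2013 wOBA scale
LEAGUE_WOBA = 0.314  # approximate 2013 league-average wOBA

def wraa(player_woba: float, baseline_woba: float, pa: int) -> float:
    return (player_woba - baseline_woba) / WOBA_SCALE * pa

# Hypothetical .360 wOBA hitter over 600 PA at a position averaging .320 wOBA.
print(round(wraa(0.360, LEAGUE_WOBA, 600), 1))  # standard wRAA
print(round(wraa(0.360, 0.320, 600), 1))        # position-adjusted wRAA
```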

Here’s your data (for the defensive numbers I used UZR because I think it’s better than DRS, even though that means the metric isn’t the same for everyone):

Player position-adj. tWAA Pos-adj. wRAA wRAA
Trout 7.7 59.4 61.1
McCutchen 6.2 40.1 41.7
Molina w/ Framing 5.6 23.3 20.5
Goldschmidt 5.0 39.5 50.1
Davis 5.0 46.4 56.3
Donaldson 4.9 36.6 36.7
Cabrera 4.7 72.0 72.1
Carpenter** 4.3 41.7 37.8
Molina 3.4 23.3 20.5

I included here both the regular and position-adjusted wRAA for all players for reference. Chris Davis and Paul Goldschmidt suffered pretty heavily – each lost over a win of production – because the average first baseman is a much better hitter than the average player. Molina got a little better, as did Carpenter, because they play positions where the average player isn’t as good offensively. Everyone else stayed almost the same, though.

I think this position-adjusted tWAA is probably the most accurate. And I would also use the number with pitch framing included for Molina. It’s up to you to decide which one you like best – if you like any of them at all. Maybe you have a better idea, in which case you should let me know in the comments.

 Part 2: Determining voter bias in the MVP award

As I mentioned in my introduction, Josh Donaldson got one first-place MVP vote – from an Oakland writer. Yadier Molina got 2 – both from St. Louis writers. Matt Carpenter got 1 second-place vote – also from a St. Louis writer. Obviously, voters have their bias when it comes to voting for MVP. But how much does that actually matter?

The way MVP voting works is that for each league, AL and NL, two sportswriters who are members of the BBWAA are chosen from each location that has a team in that league – 15 locations per league times 2 voters per location equals 30 voters total for each league. That way you won’t end up with too many or too few voters from any one place who might be biased one way or another.

But is there really voter bias?

In order to answer this question, I took all players who received MVP votes this year (of which there were 49) and measured how many points each of them received per two voters***. Then I took the number of points each of them received from the voters in his own chapter and found the difference. Here’s what I found:

AL:

Player, Club City Points Points/2 voter Points From City voters % Homer votes Homer difference
Josh Donaldson, Athletics OAK 222 14.80 22 9.91% 7.20
Mike Trout, Angels LA 282 18.80 23 8.16% 4.20
Evan Longoria, Rays TB 103 6.87 11 10.68% 4.13
David Ortiz, Red Sox BOS 47 3.13 7 14.89% 3.87
Adam Jones, Orioles BAL 9 0.60 3 33.33% 2.40
Miguel Cabrera, Tigers DET 385 25.67 28 7.27% 2.33
Coco Crisp, Athletics OAK 3 0.20 2 66.67% 1.80
Edwin Encarnacion, Blue Jays TOR 7 0.47 2 28.57% 1.53
Max Scherzer, Tigers DET 25 1.67 3 12.00% 1.33
Salvador Perez, Royals KC 1 0.07 1 100.00% 0.93
Koji Uehara, Red Sox BOS 2 0.13 1 50.00% 0.87
Chris Davis, Orioles BAL 232 15.47 16 6.90% 0.53
Adrian Beltre, Rangers TEX 99 6.60 7 7.07% 0.40
Yu Darvish, Rangers TEX 1 0.07 0 0.00% -0.07
Felix Hernandez, Mariners SEA 1 0.07 0 0.00% -0.07
Shane Victorino, Red Sox BOS 1 0.07 0 0.00% -0.07
Jason Kipnis, Indians CLE 31 2.07 2 6.45% -0.07
Torii Hunter, Tigers DET 2 0.13 0 0.00% -0.13
Hisashi Iwakuma, Mariners SEA 2 0.13 0 0.00% -0.13
Greg Holland, Royals KC 3 0.20 0 0.00% -0.20
Carlos Santana, Indians CLE 3 0.20 0 0.00% -0.20
Jacoby Ellsbury, Red Sox BOS 3 0.20 0 0.00% -0.20
Dustin Pedroia, Red Sox BOS 99 6.60 5 5.05% -1.60
Manny Machado, Orioles BAL 57 3.80 2 3.51% -1.80
Robinson Cano, Yankees NY 150 10.00 8 5.33% -2.00

NL:

Player, Club City Points Points/2 voter Points from City Voters % Homer votes Homer difference
Yadier Molina, Cardinals STL 219 14.60 28 12.79% 13.40
Hanley Ramirez, Dodgers LA 58 3.87 7 12.07% 3.13
Joey Votto, Reds CIN 149 9.93 13 8.72% 3.07
Allen Craig, Cardinals STL 4 0.27 3 75.00% 2.73
Jayson Werth, Nationals WAS 20 1.33 4 20.00% 2.67
Hunter Pence, Giants SF 7 0.47 3 42.86% 2.53
Yasiel Puig, Dodgers LA 10 0.67 3 30.00% 2.33
Matt Carpenter, Cardinals STL 194 12.93 15 7.73% 2.07
Andrelton Simmons, Braves ATL 14 0.93 2 14.29% 1.07
Paul Goldschmidt, D-backs ARI 242 16.13 17 7.02% 0.87
Michael Cuddyer, Rockies COL 3 0.20 1 33.33% 0.80
Andrew McCutchen, Pirates PIT 409 27.27 28 6.85% 0.73
Clayton Kershaw, Dodgers LA 146 9.73 10 6.85% 0.27
Craig Kimbrel, Braves ATL 27 1.80 2 7.41% 0.20
Russell Martin, Pirates PIT 1 0.07 0 0.00% -0.07
Matt Holliday, Cardinals STL 2 0.13 0 0.00% -0.13
Buster Posey, Giants SF 3 0.20 0 0.00% -0.20
Adam Wainwright, Cardinals STL 3 0.20 0 0.00% -0.20
Adrian Gonzalez, Dodgers LA 4 0.27 0 0.00% -0.27
Troy Tulowitzki, Rockies COL 5 0.33 0 0.00% -0.33
Shin Soo Choo, Reds CIN 23 1.53 1 4.35% -0.53
Jay Bruce, Reds CIN 30 2.00 1 3.33% -1.00
Carlos Gomez, Brewers MIL 43 2.87 1 2.33% -1.87
Freddie Freeman, Braves ATL 154 10.27 8 5.19% -2.27

Where points is total points received, points/2 voter is points per two voters (points/15), points from city voters is points received from the voters in the player’s city, % homer votes is the percentage of a player’s points that came from voters in his city, and homer difference is the difference between points/2 voter and points from city voters. Charts are sorted by homer difference.
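The bookkeeping behind those columns is straightforward; here’s a quick Python sketch using Donaldson’s line as a check:

```python
# Points per two voters is total points / 15 (two voters per chapter, 15
# chapters per league); homer difference is city points minus that baseline.

def homer_stats(total_points: int, city_points: int, chapters: int = 15):
    points_per_two_voters = total_points / chapters
    pct_homer = city_points / total_points if total_points else 0.0
    homer_difference = city_points - points_per_two_voters
    return points_per_two_voters, pct_homer, homer_difference

# Josh Donaldson: 222 total points, 22 of them from the Oakland chapter.
per_two, pct, diff = homer_stats(222, 22)
print(round(per_two, 2), f"{pct:.2%}", round(diff, 2))  # 14.8, 9.91%, 7.2
```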

I don’t know that there’s all that much we can draw from this. Obviously, voters are more likely to vote for players from their own city, but that’s to be expected. Voting was a little bit less biased in the AL – the average vote-getter received exactly 1 point more from the two voters in his own city than from an average pair of voters, whereas that number in the NL was 1.21. 8.08% of all votes in the AL came from homers, compared to 8.31% in the NL. If you’re wondering which cities were the most biased, here’s a look:

AL:

City Points Points/2 voter Points From City voters Difference
OAK 225 15.00 24 9.00
LA 282 18.80 23 4.20
TB 103 6.87 11 4.13
DET 412 27.47 31 3.53
BOS 152 10.13 13 2.87
TOR 7 0.47 2 1.53
BAL 298 19.87 21 1.13
KC 4 0.27 1 0.73
TEX 100 6.67 7 0.33
SEA 3 0.20 0 -0.20
CLE 34 2.27 2 -0.27
NY 150 10.00 8 -2.00

NL:

City Points Points/2 voters Points From City Voters Difference
STL 422 28.13 46 17.87
LA 218 14.53 20 5.47
WAS 20 1.33 4 2.67
SF 10 0.67 3 2.33
CIN 202 13.47 15 1.53
ARI 242 16.13 17 0.87
PIT 410 27.33 28 0.67
COL 8 0.53 1 0.47
ATL 195 13.00 12 -1.00
MIL 43 2.87 1 -1.87

Where all these numbers are just the sum of the individual numbers for all players in that city.

If you’re wondering which players have benefited the most from homers over the past two years, check out this article by Reuben Fischer-Baum over at Deadspin’s Regressing, which I found while looking up more info. He basically used the same method I did, only for 2012 as well (the first year that individual voting data was made public).

So that’s all for this article. Hope you enjoyed.

———————————————————————————————————————————————————–

*I’m using fractions of wins because measuring to the tenth rather than to the whole win gives a more precise number for the statistic I introduce. Obviously a team can’t win .6 games in real life, but we aren’t concerned with how many games the team actually won, only with its runs scored and allowed.

**Carpenter spent time both at second base and third base, so I used the equation (Innings played at 3B*average wOBA for 3rd basemen + Innings played at 2B*average wOBA for 2nd basemen)/(Innings played at 3B + Innings played at 2B) to get Carpenter’s “custom” position-average wOBA. He did play some other positions too, but very few innings at each of them so I didn’t include those.  It came out to about .307.
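Here’s that innings-weighted blend as a small Python sketch. The innings splits and positional wOBA values are placeholders for illustration; the footnote only tells us the blended result came out to about .307:

```python
# Innings-weighted positional-average wOBA for a player who split time
# between positions. All numbers below are hypothetical placeholders.

def blended_positional_woba(innings_by_pos: dict, avg_woba_by_pos: dict) -> float:
    total_innings = sum(innings_by_pos.values())
    weighted = sum(innings_by_pos[pos] * avg_woba_by_pos[pos] for pos in innings_by_pos)
    return weighted / total_innings

innings = {"2B": 1000.0, "3B": 250.0}  # hypothetical split
avg_woba = {"2B": 0.305, "3B": 0.315}  # hypothetical positional averages
print(round(blended_positional_woba(innings, avg_woba), 3))  # 0.307 with these inputs
```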

***Voting works as follows: each voter puts 10 players on his ballot, with points awarded 14-9-8-7-6-5-4-3-2-1 from first place through tenth.