Archive for Research

Revenue Sharing Deal Cubs Struck with Rooftop Owners Holding Up Wrigley Field Renovations

During the 2013 baseball season, the City of Chicago approved a $500 million plan to renovate Wrigley Field and build an adjacent office building and hotel. Included in the renovation plan is the proposed construction of a large video board behind the left field bleachers and signs advertising Budweiser behind the right field bleachers. The Cubs have delayed the start of this project, however, because the owners of the rooftop businesses across from the ballpark have threatened to file a lawsuit against the Cubs, as the proposed signage will obstruct the views of the field from their respective rooftop businesses.

Rooftop Litigation History

Detroit Base-Ball Club v. Deppert, 61 Mich. 63, 27 N.W. 856 (Mich., 1886)

Disputes over neighbors viewing ballgames are nothing new. In 1885, John Deppert, Jr. constructed a rooftop stand on his barn that overlooked Recreation Park, home to the National League’s Detroit Wolverines, future Hall of Famer Sam Thompson and a rotation featuring the likes of men named Stump Wiedman, Pretzels Getzien and Lady Baldwin. The Wolverines claimed that they had to pay $3000 per month for rent and that the 50 cent admission fees helped to offset this cost. They were thereby “annoyed” by Deppert charging people, between 25 and 100 per game, to watch the games from his property and asked the court to forever ban Deppert from using his property in this manner.

Deppert countered that the ballgames had ruined the quiet enjoyment of his premises, that ballplayers often trespassed on his land in pursuit of the ball and that he often had to call the police to “quell fights and brawls of the roughs who assemble there to witness the games.”  He further claimed that his viewing stand had passed the city’s building inspection and that he had the legal right to charge admission and sell refreshments.

The trial court dismissed the Wolverines’ case and the ball club appealed. The Supreme Court of Michigan agreed that the Wolverines had no right to control the use of the adjoining property; therefore, Deppert was within his rights to erect a stand on his barn roof and sell refreshments to fans who wanted to watch the game. Furthermore, there was no evidence that Deppert’s rooftop customers would otherwise have paid the fees to enter Recreation Park.

Similarly, the rooftops of the buildings across the street from Shibe Park were frequently filled with fans wanting a view of the Philadelphia Athletics game action.  While never happy about the situation, Connie Mack was pushed too far in the early 1930s when the rooftop operators started actively poaching fans from the ticket office lines.  Mack responded by building the “Spite Fence,” a solid wall that effectively blocked the view of the field from the buildings across 20th Street.

Lawsuits were filed but the “Spite Fence” remained in place throughout the remainder of the use of Shibe Park, later renamed Connie Mack Stadium.

The Current Dispute

Chicago National League Ball Club, Inc. v. Skybox on Waveland, LLC, 1:02-cv-09105 (N.D.IL.)

In this case, the Cubs sued the rooftop owners on December 16, 2002 seeking compensatory damages, disgorgement to the Cubs of the defendants’ profits and a permanent injunction prohibiting the rooftop owners from selling admissions to view live baseball games at Wrigley Field, among other remedies and under several causes of action.  According to the complaint, the Cubs alleged that the defendant rooftop operators “…have unlawfully misappropriated the Cubs’ property, infringed its copyrights and misleadingly associated themselves with the Cubs and Wrigley Field.  By doing so, Defendants have been able to operate multi-million dollar businesses in and atop buildings immediately outside Wrigley Field and unjustly enrich themselves to the tune of millions of dollars each year, while paying the Cubs absolutely nothing.”

In their statement of undisputed facts, the defendants countered that the rooftops had been used to view games since the park opened on April 23, 1914 as home of the Chicago Federal League team and that the Cubs conceded that their present management knew the rooftop businesses were selling admissions since at least the late 1980s.

In May 1998, the City of Chicago enacted an ordinance authorizing the rooftops to operate as “special clubs,” which allowed them to sell admissions to view Cubs games under city license. The City wanted its piece of the action and, interestingly, the Cubs made no formal objection to the ordinance. Based on the licensure and the lack of any opposition from the Cubs, the rooftop owners made substantial improvements to enhance the experience and to meet new City specifications.

By January 27, 2004, the Cubs had reached a written settlement with owners of 10 of the defendant rooftop businesses which assured that the Cubs “would not erect windscreens or other barriers to obstruct the views of the [settling rooftops]” for a period of 20 years.  The remaining rooftop owners later settled and the case was dismissed on April 8, 2004, just days ahead of the Cubs home opener set for April 12th.

After the 2004 agreement legitimized their businesses, the rooftop owners made further improvements to the properties.  Long gone are the days that a rooftop experience meant an ice-filled trough of beer and hot dogs made on a single Weber.  The rooftop operations are now sophisticated businesses with luxurious accommodations, enhanced food and beverage service and even electronic ticketing.

As a result of the settlement agreement in the Cubs’ 2002 lawsuit, the team now has legitimate concerns that a subsequent lawsuit by the rooftop owners to enforce the terms of the contract could ultimately result in the award of monetary damages to the rooftop owners; cause further delays in the commencement of the construction project due to a temporary restraining order; or be the basis of an injunction preventing the Cubs from erecting the revenue-producing advertising platforms for the remainder of the rooftop revenue sharing agreement.

It is obvious that the rooftop owners need the Cubs more than the Cubs need them. However, the Cubs wanted their piece of the rooftop owners’ profits (estimated to be a payment to the Cubs in the range of $2 million annually), and now they must deal with the potential that their massive renovation project will be held up by the threat of litigation over the blocking of the rooftop views.


Power and Patience (Part III of a Study)

So, last week we hopefully learned a few things. Let’s continue looking at league-wide trends.

In terms of getting on base, not getting on base, hitting for power, and not hitting for power, there are actually four mostly-distinct periods in baseball history, one for each combination. Define these terms against the historical average and you get:

  • 1901-18 – Players aren’t getting on base or hitting for power

  • 1919-52 – Players are getting on base but not hitting for power

  • 1953-92 – Players aren’t getting on base but are hitting for power

  • 1993-present – Players are getting on base and hitting for power

There are some exceptions, but this paradigm mostly holds true. Here’s another depiction of the “eras” involved:

YEAR OBP (avg .333) ISO (avg .130)
1901-18 .316 .081
1919-52 .343 .120
1953-92 .329 .131
1993-present .338 .158

The periods from 1901-52 and since 1993 really are quite distinct, but the 1953-92 period is the hardest to truly peg and kind of has to be squeezed in there. In fact, its figures are quite close to the historical average, though the OBP before 1993 is just as much below the average as the OBP after 1993 is above it. When the same era, categorized by offense, includes both 1968 and 1987, there is going to be some finagling.

So, really, there hasn’t been a clear period in MLB history with above-average power and below-average on-base percentages, while the “Ruth-Williams Era” (1919-52) had below-average power (again, vs. the historical average) but above-average on-base percentages.

Still, breaking things down into four eras is too simplistic. What follows is a walk-through, not of every season in MLB history, but of key seasons, using some of the “metrics” from the first two parts of this series.

1918: .207 XB/TOB, -.038 sISO-OBP, 95 OBP+, 57 ISO+

In 1918, MLB hitters averaged .207 extra bases per time on base. By 1921, they were earning .300 extra bases after year-to-year gains of 19%, 8%, and 12%. How much of this was on account of the Sultan of Swat? In 1918, Babe Ruth was already earning .523 extra bases, but had only 382 plate appearances. In 1921, however, he had 693 plate appearances and averaged .717 extra bases. Without him, the 1918 and 1921 ratios change to .205 and .295, respectively. So he’s only responsible for .003 of the increase. (My guess from a couple weeks ago was way off. He’s still just one player.) Perhaps the effect of his individual efforts on the power boom is overstated. However, his success was clear by 1921, so his influence on how other hitters hit seems properly stated. While Ruth’s 11 HR in 1918 tied Tillie Walker for the MLB lead, five other players had 20+ home runs in 1921.
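To make the remove-one-player arithmetic concrete, here’s a minimal sketch; the totals below are loudly hypothetical placeholders, not the real 1921 figures:

```python
# Minimal sketch of the remove-one-player XB/TOB calculation.
# These league totals are hypothetical, not the actual 1921 numbers.
def xb_per_tob(extra_bases, times_on_base):
    return extra_bases / times_on_base

league_xb, league_tob = 9000, 30000   # league XB/TOB = .300
ruth_xb, ruth_tob = 430, 600          # one big slugger at ~.717

with_player = xb_per_tob(league_xb, league_tob)
without_player = xb_per_tob(league_xb - ruth_xb, league_tob - ruth_tob)
print(round(with_player, 3), round(without_player, 3))  # 0.3 vs. 0.291
```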

OBP was low in 1918, as it was in most seasons up to that point, but the dead ball era really was mostly a power vacuum. OBP already had two seasons (1911-12) around the current average, even though it would not get back there until 1920.

1921: .300 XB/TOB, -.027 sISO-OBP, 104 OBP+, 90 ISO+

So we touched on the 1918-21 period moments ago. Power skyrocketed, but still to about 10% below its current norm. Meanwhile, OBP was well on its way to a long above-average stretch: OBP+ was 100 or higher every single year from 1920 through 1941.

1930: .364 XB/TOB, -.007 sISO-OBP, 107 OBP+, 112 ISO+

1930 was the most power-heavy MLB season until 1956 and is even today the second-highest OBP season in MLB history at .35557, just behind the .35561 mark set in 1936. Non-pitchers hit .303/.356/.449 in 1930. Ten players hit 35 or more home runs, including 40+ for Wilson, Ruth, Gehrig and Klein.

Like we’ll see in 1987, however, 1930 was really the peak of a larger trend: XB/TOB grew 6+% for the third straight year before dropping 14% in 1931 and another 12% in 1933 (with a 9% spike in 1932).

1943: .261 XB/TOB, -.028 sISO-OBP, 98 OBP+, 74 ISO+

World War II in general was a bad time for hitters, at least from a power standpoint, with 1943 the worst season among them and 1945 almost as bad. From 1940-45, the XB/TOB ratio fell 23%. It remained low until 1947. (But even at its lowest point in this time frame, in 1942, it was still a better year for power than 1918.) OBP, however, was actually at about its current historical average during the war (within one standard deviation of the mean throughout), so there wasn’t a total offensive collapse. However, it was the first time since the dead ball era that OBP+ was below 100. Either way, perhaps the coming look at individual players will tell us what happened.

1953: .365 XB/TOB, .001 sISO-OBP, 103 OBP+, 108 ISO+

Thanks to an 11% increase in XB/TOB, it was finally “easier,” relatively, to hit a double or homer than it was to make it to base in the first place. Also playing a role, however, was OBP: in 1950, it was harder to hit for power only because players were reaching base at a pretty good clip; the OBP+ and ISO+ that year were 106 and 110.

1968: .320 XB/TOB, .003 sISO-OBP, 93 OBP+, 84 ISO+

1968 is often considered perhaps the all-time nadir for Major League hitters outside of the dead ball era, and non-pitchers earned an average of only .320 extra bases per time on base that year. It wasn’t just power that suffered, however: it was also the worst league-wide OBP in 51 years. In fact, OBP was so low that it was actually ever so slightly easier to hit for power in 1968 than it was to reach base.

The thing about 1968 is that, while 1969 featured a lower mound, no 1.12 ERAs, and a solid recovery for both OBP and ISO, it didn’t automatically revert baseball hitters to their pre-mid-60s form. Power fluctuated wildly in the roughly 25-year period between 1968 and 1993.

1977: .378 XB/TOB, .010 sISO-OBP, 100 OBP+, 108 ISO+

1977, rather than 1930 or 1987, may really be the flukiest offensive season in MLB history. ISO+ shot up from 83 to 108, after having not been above 96 since 1970. MLB hitters earned 26% more extra bases per times on base than in 1976, easily the biggest one-year increase in MLB history. XB/TOB then promptly decreased 10% in 1978; it’s the only time that figure has gone up 10% in one year and declined 10% the next. It was the only season from 1967-84 where sISO was .010 above OBP. 35 players homered 25 times or more, the most in MLB history until 1987. 1977 was a banner year for getting on base as well, although, as usual, not as much as for ISO. It was the highest OBP season from 1970-78 and one of four seasons from 1963-92 with an OBP at or above the historical average.

1987: .416 XB/TOB, .023 sISO-OBP, 101 OBP+, 120 ISO+

1987 has a big reputation as a fluky power season, and players earned .416 extra bases per time on base that year, but that was “only” a 9% spike from the prior season. Additionally, XB/TOB had actually increased every year from 1982-87, except for a 2% drop in 1984. The 1987 season was mostly the peak of a larger trend, which came crashing down in 1988, when the ratio dropped more than 15% to .353 extra bases. The .400 mark would not be topped again until 1994’s .412, and from then on the ratio would never fall below the .400 it sat at in 1995.

This season was, however, the only one in the Eighties with an OBP+ over 100. From 1963-92, in fact, OBP was at or above the historical norm in just four seasons (1970, 1977, 1979, 1987). As with power, however, OBP collapsed in 1988 more so than it had gained in 1987, falling to 1981 levels (97 OBP+).

1994: .412 XB/TOB, .017 sISO-OBP, 103 OBP+, 122 ISO+

XB/TOB leapt over 10% from 1992-93, and another 9.5% in 1994, ushering in a power era that hasn’t quite yet flamed out. 1994 was the year power really took off relative to OBP: in 1992, sISO and OBP were even; in 1993, the gap was still about half of what it would be in favor of sISO in 1994. 1994 also featured the highest ISO to that point, higher than even the culmination of the mid-’80s power trend in 1987. While there would be some years between 1993 and 2009 with modest decreases in power, even in 2013, ISO+ was 112, its lowest mark since 1993. More on the current power and OBP environment momentarily.

1901-2013: Changes in XB/TOB

Extra bases per time on base was our first choice of metric. How has this particular one changed in certain years?

Overall, nine times has this ratio spiked at least 10% in one season: 1902-03 (+12%), 1918-19 (+19%), 1920-21 (+12%), 1945-46 (+11%), 1949-50 (+10%), 1952-53 (+11%), 1976-77 (+26%), 1981-82 (+12%), and 1992-93 (+10%).

Meanwhile, it decreased by 10 or more percent on six occasions: 1901-02 (-11%), 1930-31 (-14%), 1932-33 (-12%), 1941-42 (-11%), 1977-78 (-10%), 1987-88 (-15%).
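Flagging these seasons is simple once the yearly ratios are in hand; here’s a minimal sketch, assuming a dict of league XB/TOB by season (the values shown are placeholders):

```python
# Flag seasons where league XB/TOB moved 10% or more year over year.
# xb_tob is assumed to hold {season: league XB/TOB}; values are placeholders.
xb_tob = {1976: 0.300, 1977: 0.378, 1978: 0.340}

years = sorted(xb_tob)
for prev, curr in zip(years, years[1:]):
    change = xb_tob[curr] / xb_tob[prev] - 1
    if abs(change) >= 0.10:
        print(f"{prev}-{curr}: {change:+.0%}")  # e.g. 1976-1977: +26%
```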

2014-???

We’ll try to make this a little more interesting: where is baseball going from here? Can we look at these trends throughout history and determine what the next few years might look like?

XB/TOB dropped 4.8% in 2013. It was the sharpest one-year drop since a 5.6% fall in 1992, but that season only preceded a power boom. Both were modest declines historically, and this one is unlikely to portend much. However, this year’s 112 ISO+ was a new low for the post-strike era.

Yet the bigger issue in 2013 was a stagnant OBP, which has been below the current average since 2009 after being above it every year since 1992. OBP never deviates very much from its norm, but 26/30 seasons from 1963-92 featured a below average OBP.

Will OBP continue to stay low? It has fallen every year since 2006, from .342 to .323, which represents the longest continuous decline in MLB history. It may be unlikely that it decreases further, but the below-average-since-2009 fact is worrisome if you enjoy offense. Stagnation for such a length of time has nearly always been part of a larger trend, mostly in the dead ball era and that 30 year period from 1963-92.

One thing we can probably say is that the “Steroid Era” is over. From 1993-2009, OBP+ was never below 101 and ISO+ never below 109. Take 1993 out of the sample, and ISO+ is never below 118, and from 1996-2009, 14 years, ISO was 20% or more above the historical norm every time.

But since 2009, that 20% threshold has never been reached, although 2012’s ISO+ of 119 comes close. Nonetheless, power from 2010-present has yet to reach mid-90s, early-2000s levels. Power could still increase in the future, but likely for reasons other than PEDs (although the Melky Cabreras and Ryan Brauns of the world always leave a doubt).

If I had to guess, power and home runs are here to stay, even if 2000’s .171 stands as the highest non-pitcher ISO for years to come. (That really is a crazy figure if you think about it: non-pitchers that year hit for power at roughly the career rates of Cal Ripken or Ken Caminiti. In 2013, they were down to more “reasonable” levels similar to Johnny Damon or Barry Larkin.)

The on-base drought is the bigger concern for offenses; because OBP is so consistent, however, that drought could be persistent but minor.

This concludes the league-wide observations of power and patience. Part IV next week will look at things like “X players with an OBP of Y and ISO of Z in year 19-something.” Part V will then look at individual players. Maybe we can even wrap up with the ones who started this whole series: Joe Mauer, Rickey Henderson, and Wade Boggs. I guess we’ll have to find out.


Current Edwin Encarnacion vs. Vintage Albert Pujols

Toronto Blue Jays 1B/DH Edwin Encarnacion had another great year with the bat in 2013. He posted a .272/.370/.534 line with a 148 wRC+ that was 6th in the AL. This was on the heels of a 2012 season where Encarnacion managed a .280/.384/.557 line with a 151 wRC+.

In his late-career resurgence, Encarnacion has become the rarest of players, a power hitter who rarely strikes out. Only Chris Davis and Miguel Cabrera had more home runs than Encarnacion’s 36. The previous year, Encarnacion slammed 42 home runs.

Meanwhile, Encarnacion struck out in only 10% of his plate appearances. Only seven qualified hitters struck out at a lower rate than Encarnacion. None of them had more than 17 home runs.

In fact, you’ll have to go back to the glory days of Albert Pujols (2001-11) to find someone who matched Encarnacion’s home run total with a similarly low strikeout rate.

Here’s a look at their numbers side by side.

HR (avg/season) BB% K%
Vintage Pujols 40 13.1 9.5
Encarnacion ’12-13 39 13.1 12.3

Pretty impressive, huh? Well, let’s dig even further. From 2001-11, the MLB average walk and strikeout rates were 8.5% and 17.3%, respectively. In 2012-13, they were 7.9% and 19.9%, respectively. So, here are Pujols’ and Encarnacion’s numbers expressed as a percentage of the MLB average.

HR/PA BB% K%
Vintage Pujols 222% 154% 55%
Encarnacion ’12-13 238% 165% 62%
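Here’s a minimal sketch of that normalization, using the league averages quoted above (the function is mine, not a published stat):

```python
# Express a player's rate as a percentage of the league-average rate.
def pct_of_league(player_rate, league_rate):
    return 100 * player_rate / league_rate

lg_bb, lg_k = 7.9, 19.9              # 2012-13 MLB averages cited above
enc_bb, enc_k = 13.1, 12.3           # Encarnacion, 2012-13
print(pct_of_league(enc_bb, lg_bb))  # ~166; the table's 165 reflects unrounded inputs
print(pct_of_league(enc_k, lg_k))    # ~62: strikes out 38% less than average
```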

So if we adjust for the MLB average, Edwin Encarnacion’s home run and walk rates from 2012-13 were better than those of vintage Albert Pujols. His strikeout rate was a shade worse. If I restricted the comparison to 2013, Encarnacion would be better in all three categories.

Does this mean that Encarnacion from 2012-13 has been the offensive equivalent of vintage Pujols? Well, not quite. Let’s revisit wRC+. Pujols’ average from 2001-11 was a robust 167. Encarnacion’s wRC+ from 2012-13 is 148. Where does this big difference come from?

Pujols’ in-play batting average in his prime years was .311. On the other hand, Encarnacion has just a .256 in-play average from 2012-13. That’s a very big difference. Only Darwin Barney had a worse in-play batting average than Encarnacion in that time frame.

What’s the reason for this big split? Does Pujols hit more line drives? Here are their batted-ball ratios.

LD% GB% FB% IFFB%
Vintage Pujols 19.0 40.9 40.0 13.0
Encarnacion ’12-13 19.6 34.1 46.3 10.7

Pretty similar. Pujols hits more ground balls, Encarnacion does a better job of avoiding the infield fly. In fact, based on these ratios, you would expect Encarnacion to have a higher in-play average than Pujols.

Recently teams have been using a unique shift against Encarnacion, where they put three infielders on the left side of second base. Here’s a picture below.

This shift has been successful in taking away hits from Encarnacion. Since 2012, he’s hit just .222 on ground balls, compared to .262 for vintage Pujols. In 2013, just 25 of the 170 groundballs Encarnacion hit found a hole. Here’s a link to his spray chart.

On balls he pulls, Encarnacion has a .376 batting average. That might sound very good, but compare it to Pujols, who hit .477 on balls he pulled in his vintage years.

Edwin Encarnacion is an elite hitter. In terms of walks, strikeouts, and home runs, he’s every bit the hitter that Albert Pujols was during his prime years. Sure, his pull-heavy approach might allow the shift to take away some hits, but the shift can’t do anything about the balls he puts over the fence.


SkaP: A New Metric to Measure Hitting Prowess

Before I explain to you what this new metric – SkaP – does, I am first going to warn you that I can’t provide you with a formula or individual statistics for it.  It’s a theory right now, and something for which I need access to data I don’t have in order to find a formula.

This statistic was inspired in part by Colin Dew-Becker’s article the other day here on FanGraphs Community Research. In his article, he argued that the way a hit or out is made matters, not just the result of the hit or out. A single to the outfield, for example, is more likely to send a runner from first to third or from second to home than an infield single. Likewise, a flyout is more likely to advance runners than a strikeout is.

This statistic was also inspired in part by UZR.  UZR attempts to quantify runs saved defensively by a player partially by measuring if they make a play that the average fielder would not.  In the FanGraphs UZR Primer, Mitchel Lichtman explains that

“With offensive linear weights, if a batted ball is a hit or an out, the credit that the batter receives is not dependent on where or how hard the ball was hit, or any other parameters.”

This means that a line drive into the gap in right-center that is a sure double but is caught by Andrelton Simmons ranging all the way from shortstop (OK, maybe that was an exaggeration) will only count for an out, even though in almost any other situation it would be a double. The nature of linear-weight-based hitting statistics (and most other hitting statistics as well) is that they are defense-dependent. Hitters have been shown to have much more control over their batted balls than pitchers do, which is why defense-independent statistics have so far been commonly used only for pitchers, but such a statistic would probably be useful for hitting too, no?

Now, if we want a defense-independent, linear-weights-based hitting statistic, it would not be possible to formulate a hitting equivalent of the current model of tERA (or tRA), because that generalizes all batted balls into categories such as grounders, line drives, or fly balls, while hitters can control where, how hard, and at what angle their batted balls are hit, at least to some extent. Instead, what I would use is something more similar to a hitting equivalent of this version of tERA I found on a baseball blog. What that article proposes is something much more detailed than what we have now (by the way, tERA has been supplanted by SIERA, but is still an interesting theory). Their idea is that instead of finding expected run and out values for grounders, line drives, and fly balls, find the expected run value for a ball, to use their words, “with x velocity and y trajectory [that] lands at location z.” This is similar to UZR in that exact (or as close to exact as possible) batted-ball data is processed and the expected run/out values are calculated.

So now for the statistic: SkaP, or Skill at (the) Plate, is a number that uses all that batted-ball data to find the expected run and out values of each plate appearance. It would weight the following things: home runs (although maybe a regressed version could use lgHR/FB%*FB instead), walks, strikeouts, HBP, and each ball put in play by the player. This makes it so that it is not defense-dependent, and so that Andrelton Simmons catching that sure double does not penalize the hitter. I haven’t calculated this statistic, though, so I don’t know if this would be best as a rate, counting, or plus-minus statistic (maybe all three?).
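Since the formula can’t be computed yet, here is only a minimal sketch of SkaP’s shape: a lookup from batted-ball buckets to expected run values, plus weights for the non-batted-ball events. Every number and bucket name below is hypothetical.

```python
# Hypothetical expected run values keyed by (hardness, trajectory, location).
# A real version would be built from full batted-ball data (velocity,
# trajectory, landing spot), which is not publicly available.
RUN_VALUE = {
    ("hard", "line_drive", "rc_gap"): 0.75,
    ("soft", "ground_ball", "ss_hole"): 0.10,
}
EVENT_VALUE = {"HR": 1.40, "BB": 0.30, "HBP": 0.31, "K": -0.30}  # hypothetical

def skap_credit(event, batted_ball=None):
    """Credit the batter for the expected value of what he did at the plate,
    independent of what the defense then did with it."""
    if event == "in_play":
        return RUN_VALUE.get(batted_ball, 0.0)
    return EVENT_VALUE[event]

# Simmons catching that "sure double" no longer penalizes the hitter:
print(skap_credit("in_play", ("hard", "line_drive", "rc_gap")))  # 0.75
```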

There’s one catch to this, however: Skill at the Plate is really only a measure of skill at the plate. It doesn’t account for some batters’ ability to stretch hits or beat out infield singles. Billy Hamilton is going to be more likely to reach on an infield single than Prince Fielder. However, this stat would treat them both the same, and not reward Hamilton’s speed for allowing him to reach base on what would most likely otherwise have been an out. It would be very hard to separate defense independence and batter-speed independence for hitting statistics, though, and I’m not sure it’s possible without an extreme amount of effort. Maybe a crude solution would be to quantify a player’s speed using Spd, UBR or BsR and add it somehow to this statistic.

I can’t calculate this myself, as I don’t have access to data from Baseball Info Solutions (or some other database that tracks batted balls). FanGraphs does, however, and I would love to see this looked into further.


Power and Patience (Part II of a Study)

Last week’s post ended with a chart comparing power and patience, or, more accurately, league-wide extra bases and times on base (excluding pitchers), year-by-year. Here it is again:

Fig. 1 – No, Not a Fig Leaf

One question this chart does raise, at least to me: does it merely indicate the general effectiveness of offenses, or are there actually times where power goes up relative to getting on base, but offense stagnates or declines? After all, it dipped in 1968 when offense dipped; it increased from 1918-21 as the dead ball era ended; it rose in 1987.

There have been 113 seasons since 1901. Running some R^2 numbers comparing XB/TOB to various statistics over these 113 seasons gets us some interesting results (a sketch of the computation follows the list). I suppose it’s possible that in the year 2514, these stats will correlate better or worse, and that a sample size of 113 seasons is too small. I don’t really have the time to wait and see, though, and I’m fairly sure you don’t either, so:

  • wOBA .0014 (.016 w/pitchers–and for only pitchers, .004)
  • OBP .217 (.083 w/pitchers–and for only pitchers, .006)
  • R/G .246 (.238 w/pitchers)
  • HR/PA .958 (.960 w/pitchers)
  • ISO .968 (.971 w/pitchers)
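A minimal sketch of how any one of these R^2 values can be computed, assuming the two yearly league series are already aligned by season (the values below are placeholders):

```python
# R^2 between two league-wide yearly series (placeholder values).
from scipy.stats import pearsonr

xb_tob = [0.207, 0.300, 0.364, 0.416]  # league XB/TOB by season
iso    = [0.064, 0.103, 0.140, 0.171]  # league ISO by season

r, _ = pearsonr(xb_tob, iso)
print(r ** 2)  # the article reports .968 for ISO over the full 1901-2013 span
```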

So, no, we’re not looking at a proxy for overall offense here. But we are looking at a proxy for power itself. The plan here was to investigate the relationship between hitting for power and getting on base through the years. And instead, all we have done with this chart is look at league-wide power proficiency, not even really compared to league-wide getting-on-base proficiency.

Well, there is an alternative explanation, which we will get to.

The good news is, we don’t have to throw away these numbers. We just have to bring OBP and ISO back into the picture, re-separating the two elements of that chart. You can’t really guess a league’s OBP in any given season from ISO, or vice versa, as the R^2 for OBP and ISO is .373:

Fig. 2 – No, Not a Fig Newton

To some, this may indicate a problem with the premise of this series: there’s a solid but not overwhelming correlation between power and patience, it turns out. Well, first, it’s still worth looking into. Part of the reason is that, in smaller sample sizes, there often is more of a correlation: the R^2 between OBP and ISO from 1901-20 is .792; in the last 20 years, it’s .583. Granted, you can mess with the numbers all you want here; for instance, go back 21 years, and suddenly the R^2 between OBP and ISO is .461. Nevertheless, there are brief stretches in baseball where OBP and ISO correlate quite well, and each season is a set of tens of thousands of plate appearances, for what that’s worth. (Little, I know; it just means that the figures for each season were unlikely to change much if the season were longer.)

Also, while they don’t correlate well, or at least well enough that you can predict one from the other, OBP and ISO do correlate pretty well for two independent rate statistics. For example, the R^2 for BB% and K% is .007. There seems to be something to the idea that power threats can get on base more effectively, or that it’s easier to get on base as a power threat. How much so is part of the point of this series.

Now for some graphical representations of annual changes in OBP and ISO.

First, here they are on one chart, with the all time figures represented for comparative purposes.

Fig. 3 – Yum, Fig Newtons

Next, we remove the lines representing the all-time marks and then scale ISO to OBP. FIP is scaled to ERA by adding a constant, so we’ll try a similar technique. The all-time OBP, remember from last week, is .333, and the all-time ISO is .130. So, we’re now going to add .203 to each year’s ISO. I call it scaled ISO, or sISO. (I don’t expect this to catch on as anything as it really just has a purpose limited to this series.) Since we’re just adding a constant to ISO, “sISO” and ISO have a perfect correlation, so we’re cool in that regard. Regard:

Fig. 4

The line for “sISO” is the same shape as the line for ISO. (I’m sure this point is patently obvious to some, but perhaps not everyone.) Now we can really see the seasons ISO was above its all-time norm relative to OBP, so let’s graph those gaps between each line above. Scaled ISO vs. OBP:

Fig. 5 – I Thought It Would Be More Fun For You to Guess the “Horizontal axis title” and That’s My Story and I’m Sticking to It

ISO peeked above OBP in 1953, dipped back below in 1954, and then sharply increased in 1955 and 1956. Before that, however, getting on base was always “easier” vs. the historical norms than hitting for power was. This was true even in the post-Ruth era, with players such as Ruth, Gehrig, Foxx, Ott, and even the beginning of Ted Williams’ career, right up until the end of the Korean War. Actually, league OBP through 1952 was slightly higher, .334, than the current average, while ISO was at .107, still well below the current average.

If baseball ended in 1952 (perish the thought!), the dead ball era would still be a distinct period in baseball history. From 1901-18, league OBP was .316 and ISO .081. From 1919 to 1952, the figures were a .343 OBP and .120 ISO.

Since 1956, power has mostly been above its historical norms relative to OBP, with some exception. Part III will look further into all of this.

Astute observers might have noticed something, though:
   

The R^2 of the figures comprising each chart (sISO-OBP and XB/TOB) is .885.

So, what do we have here, then?

One possible conclusion is that we’re still only looking at power. But having now observed changes in OBP over time as part of this exercise, perhaps something else is at play. I think there is.

It’s not particularly obvious in the chart that shows OBP vs. its historical average, but OBP, despite what we know about the dead ball era and other seasons such as 1968, has actually been relatively consistent historically. Even at the hardest time in history for players to reach base, during the dead ball era, it was still much harder to hit for power. When I looked at a sort of OBP+ and ISO+ vs. their historical averages (just using 100*OBP/historical OBP; see the sketch after the list), here is what I found:

  • Range: OBP+ 18 (89-107), ISO+ 80 (51-131)
  • Standard Deviation: OBP+ 3.79, ISO+ 19.6
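A minimal sketch of that indexing and the spread figures above, with placeholder league rates:

```python
# OBP+ / ISO+ indexing (100 * rate / historical rate) and its spread.
from statistics import pstdev

HIST_OBP, HIST_ISO = 0.333, 0.130
obp_by_year = [0.316, 0.343, 0.329, 0.338]   # placeholder league OBPs

obp_plus = [100 * obp / HIST_OBP for obp in obp_by_year]
print(min(obp_plus), max(obp_plus))   # the range of the index
print(pstdev(obp_plus))               # its standard deviation
```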

It’s not necessarily that looking at extra bases per times on base, or the arithmetical difference between OBP and ISO, is the same as looking at power. Rather, OBP has been so consistent historically relative to ISO that the observations in this article are effectively only an observation of ISO, regardless of the specific numbers that go into them. This is a not uninteresting takeaway to me.

Next week, we’ll use four factors–XB/TOB, sISO-OBP, OBP+, and ISO+–to run through the relationship between power and patience throughout baseball history, and maybe even try to look into the future a little bit. Parts IV and V will then bring us back to the beginning of Part I as we return to observing OBP and ISO through the lens of the efforts of individual players. That’s the tentative plan at least.


Estimating the Advantage of Switch Hitting on BB/K Splits

It is generally a marked advantage for a batter to face an opposite-handed pitcher. Platoon splits across the league are evidence of this well documented phenomenon, and managers are quick to take advantage of matchups.

One of the chief advantages of switch-hitting is that the opposite-handed pitcher’s release point is closer to the center of the hitter’s field of vision. This allows him to get a better look at the ball, and judge whether the pitch is worth swinging at. If a switch-hitter generally gets a better look at the incoming pitch he should, in theory, be better at commanding the strike zone than his one-sided counterparts, walking more and striking out less. Do switch hitters have a better BB/K split than other hitters?

While we are limited by a small sample size of switch-hitters who accrue enough at-bats against lefties for the numbers to possibly stabilize (according to work done by Russell Carleton), we can calculate their splits and compare them to the average split for batters who always hit from one side.

If we assume that switch-hitters would ‘naturally’ hit from the side in which they throw, we can roughly estimate what their split might be if they were not switch-hitters by calculating BB/K split for righties when facing left-handed pitchers (LHP) and right-handed pitchers (RHP).

Right-handed batters (RHB), on average, post a healthy BB/K ratio of .63 against LHP and a more dismal .38 against RHP. The table below shows how splits for switch-hitters who throw right-handed compare to those of righties who do not swing from both sides of the plate.
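Here’s a minimal sketch of the split computation behind the tables below, assuming per-split walk and strikeout counts (the inputs are placeholders, not any real player’s line):

```python
# BB/K split: ratio with the platoon advantage minus ratio without it.
def bb_k(bb, k):
    return bb / k

def split_difference(bb_adv, k_adv, bb_no_adv, k_no_adv):
    """Difference column: advantage-side BB/K minus other-side BB/K."""
    return bb_k(bb_adv, k_adv) - bb_k(bb_no_adv, k_no_adv)

# A right-handed batter: vs. LHP (advantage) and vs. RHP (no advantage).
print(round(split_difference(30, 50, 30, 100), 2))  # 0.3
```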

Right-Handed Players

BB/K vs. LHP BB/K vs. RHP Difference
Alberto Callaspo 1.5 1.03 0.47
Andres Torres 0.52 0.26 0.26
Dexter Fowler 0.82 0.57 0.25
Kendrys Morales 0.55 0.37 0.18
Jarrod Saltalamacchia 0.44 0.26 0.18
Jed Lowrie 0.63 0.52 0.11
Shane Victorino 0.38 0.31 0.07
Nick Franklin 0.42 0.35 0.07
Everth Cabrera 0.63 0.58 0.05
Emilio Bonifacio 0.32 0.28 0.04
Ryan Doumit 0.5 0.48 0.02
Pablo Sandoval 0.6 0.59 0.01
Eric Young Jr. 0.45 0.46 -0.01
Asdrubal Cabrera 0.3 0.31 -0.01
Chase Headley 0.45 0.48 -0.03
Carlos Santana 0.77 0.88 -0.11
Jimmy Rollins 0.53 0.68 -0.15
Matt Wieters 0.31 0.48 -0.17
Erick Aybar 0.21 0.48 -0.27
Ben Zobrist 0.57 0.9 -0.33
Victor Martinez 0.59 1.09 -0.5
Coco Crisp 0.68 1.18 -0.5

Left-Handed Players

BB/K vs RHP BB/K vs. LHP Difference
Daniel Nava 0.64 0.38 0.26
Carlos Beltran 0.48 0.27 0.21
Justin Smoak 0.57 0.46 0.11
Nick Swisher 0.42 1.07 -0.65

 

Or if you prefer to see the splits visually, and compared to the mean for all non-switch hitters:
Difference vs RHP

 

We can see the results are relatively mixed. If switch-hitters really did display a better ability to draw walks and avoid strikeouts, we would expect to see smaller-than-league-average (below the red line) splits in the positive direction. Among righties, hitters from Kendrys Morales to Chase Headley in the chart above do not display as severe a split as the average right-handed batter, and may derive a significant benefit from never seeing a same-handed pitcher. However, a surprising number of hitters display reverse splits, improving their ratio considerably when batting from their own weak side.

The extreme negative splits of Coco Crisp, Victor Martinez, and Nick Swisher are all consistent with their recent career numbers. Indeed, these negative splits are even evident when examining their wOBA splits for the last several years.

Alberto Callaspo’s outlier split reflects an impressive ability to avoid strikeouts while taking walks at an accelerated pace. Against lefties he posts an outstanding BB/K of 1.5, and his ratio of 1.03 vs. RHP is still impressive. The dropoff from facing LHP to RHP is steep in absolute terms, but his knowledge of the strike zone is still elite.

The BB/K ratios for Jarrod Saltalamacchia and Justin Smoak both show a slight benefit from switch-hitting, featuring splits a bit lower than the league average. Justin Smoak, however, suffers from a serious power outage, posting a .218 ISO when hitting from his left side and a miserable .082 ISO from his right. Salty’s power split is not as egregious, but the .128-point drop in ISO is troubling for a player whose contact% is only slightly above Dan Uggla’s and Pedro Alvarez’s. Andres Torres, a natural right-hander, sees a similar decline in his wOBA splits: .318 against LHP but a paltry .249 against RHP. These players enjoy a nonexistent or marginal advantage in BB/K ratio as switch-hitters, and hitting primarily from their strong side might be an experiment worth performing.

The Shane Victorino Experiment

Shane Victorino’s ratio of walks to strikeouts drops by .07 when facing RHP as opposed to LHP. After tweaking his hamstring in the second half of 2013, he decided to at least temporarily abandon switch-hitting for the remainder of the season. Since mid-August, he has had almost 50 plate appearances as a RHB vs. RHP, offering a real-life counterfactual case. How does not switch-hitting affect a productive hitter’s BB/K ratio?

From September and into the postseason, Victorino has managed to walk just twice and strike out over 20 times, giving him a minuscule BB/K ratio of just .09, much smaller than his .33 season average. Still, with a wOBA of .356 right in line with his season-long average, his overall production at the plate has not suffered despite the more aggressive and less patient approach.

Victorino’s small sample size of hitting exclusively right-handed fails to reliably estimate the counterfactual scenario. However, his case is interesting because, while switch-hitters like J.T. Snow did abandon their dual approach, most did so because of a decline in production from their weak side. Players who eventually decided the advantages of switch-hitting did not offset the challenges of being ambidextrous were already in decline mode; Victorino, on the other hand, is coming off a great season. While he has officially achieved veteran status, the 32-year-old proved this season that reports of his bat’s death have been greatly exaggerated. If he and his coaches are encouraged by his recent wOBA spike, and he abandons hitting from the left side entirely, his BB/K may continue to steadily decline even if his power improves.

Conclusions

The results seen here do not strongly support the hypothesis that switch-hitters have an inherent advantage over others when considering the ratio of bases on balls to strikeouts. While there is some evidence that switch-hitters do enjoy better splits, it is not overwhelming and may provide only marginal benefit to players like Andres Torres, Dexter Fowler and Justin Smoak. Overall, lefties like Carlos Beltran and Daniel Nava joined Alberto Callaspo as possible examples of the reverse, a larger than average split when going from the strong side to weak side.

There are obvious limitations to this study, starting with a small sample size. We only examined 2013 splits, and the number of switch-hitters who throw left-handed is very low. It may be possible moving forward to use career splits for lefties going back decades to determine if left-handed switch-hitters generally have worse BB/K splits than their counterparts.

Currently, switch-hitters account for slightly less than 15% of major league hitters. To say that having the platoon advantage is always an advantage for the hitter may not be accurate: players whose weak-side bat is significantly less powerful, like Justin Smoak or Jarrod Saltalamacchia, may inadvertently harm their value as hitters by sticking to switch-hitting in all cases. Baseball is a game of adjustments and gaining incremental advantages, and switch-hitting is no different. Some players use it to gain an upper hand, and others may be wasting their potential.


Seeing the Complete Picture: Building New Statistics to Find Value in the Details

Attempting to accurately estimate the number of runs produced by players is one of the most important tasks in sabermetrics. While there is value in knowing that a player averages four hits every ten at-bats, that value comes from knowing that more hits tend to lead to more runs. On-base percentage became popularized through Moneyball in the early 2000s because the Oakland Athletics, among other teams, realized that getting more runners on base would lead to more opportunities to score runs.

Knowing a player’s batting average or on-base percentage can be informative, but that information does nothing to quantify how the player contributed to a team’s ability to score runs. The classic method for determining how many runs a player contributes to his team is to look at his RBI and runs scored totals. However, both of those statistics are extremely dependent on timely hitting and the quality of the rest of the team. A player will not score many runs nor have many RBI opportunities if the rest of the players on his team, particularly the players around him in the lineup, are not productive.

One of the more popular sabermetric methods to estimate a player’s run production is to find the average number of runs that certain offensive events are worth across all situations and then apply those weights to a player’s stat line. In this way, it doesn’t matter if a player comes to the plate with the bases loaded every time or the bases empty every time, just that he produced the specific type of event.

Here is a chart that shows the average number of runs that scored in an inning following each combination of base and out states in 2013^^.

Base State 0 OUT 1 OUT 2 OUT
0** 0.47 0.24 0.09
1 0.82 0.50 0.21
2 1.09 0.62 0.30
3 1.30 0.92 0.34
1-2 1.39 0.84 0.41
1-3 1.80 1.11 0.46
2-3 2.00 1.39 0.56
1-2-3 2.21 1.57 0.71

We can see in the chart that in 2013, with no men on base and zero outs, teams scored an average of 0.47 runs through the end of the inning.  If a batter came to the plate in that situation and hit a single, the new base/out state is a man on first with zero outs, a state in which teams scored an average of 0.82 runs through the end of the inning. If the batter had instead caused an out, the new base/out state would have become bases empty with one out, a state in which teams only averaged 0.24 runs through the remainder of the inning. Consequently, we can say that a single in that situation was worth 0.58 runs in relation to the value of an out in the same situation. If we repeat this process for every single hit in 2013, and apply the averages from the chart to each single depending on when they occur, we find that an average single in 2013 was worth approximately 0.70 runs in relation to the average value of an out.
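Here’s a minimal sketch of that bookkeeping, holding just the three table values used in the example (the state keys are mine):

```python
# Run expectancy (runs to end of inning) keyed by (base_state, outs), using
# the 2013 values quoted above; a full table would hold all 24 states.
RE = {("0", 0): 0.47, ("0", 1): 0.24, ("1", 0): 0.82}

def event_run_value(before, after, runs_scored=0):
    """Change in run expectancy, plus any runs that scored on the play."""
    return RE[after] - RE[before] + runs_scored

single = event_run_value(("0", 0), ("1", 0))  # leadoff single: +0.35
out = event_run_value(("0", 0), ("0", 1))     # leadoff out: -0.23
print(round(single - out, 2))                 # 0.58, as in the text
```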

This is known as the linear weights method for calculating the context-neutral value of certain events. Check this article from the FanGraphs Library, and the links within, for more information on linear weights estimation methods.

There have been a variety of statistics created to estimate a player’s performance in a context-neutral environment using the linear weights method over the last few decades. Recently, one of the more popular linear weight run estimators, particularly here at FanGraphs, has been weighted On-Base Average (wOBA), introduced in The Book: Playing the Percentages in Baseball. wOBA is arguably the best publicly-available run estimator, but I think it has potential for improvement by incorporating more specific and different kinds of events into its estimate.

wOBA is traditionally built with seven statistics: singles, doubles, triples, home runs, reaches on error, unintentional walks, and hit by pitches. While some versions may exclude reaches on error and others may include components like stolen bases and caught stealing, I will focus exclusively on the version presented in The Book that uses those seven statistics. By limiting the focus to just those seven components, wOBA can be calculated perfectly in every season since at least 1974 (as far back as most play-by-play data goes), and can be calculated reasonably well for the entire history of the game.

While it can be informative to see what Babe Ruth’s wOBA was in 1927, when analyzing players in recent history, particularly those currently playing, accuracy in the estimation should be the most important consideration. Narrowing the focus to just seven statistics, some broadly defined, will limit how accurately we can estimate the number of runs a player produced in a context-neutral environment. The statistics I refer to as “broadly defined” are singles and doubles. I say that because it is a relatively easy task to convince even a casual baseball fan that not all singles are created equally.

If we compare singles hit to the infield with singles hit to the outfield, we’ll notice that outfield singles will cause runners on base to move further ahead on the basepaths on average than infield singles. For example, in 2013, with a man on first, only 3.2% of infield singles ended with men on first and third base compared to 29.9% of outfield singles. If outfield singles create more “1-3” base states than infield singles, and we know from the chart above that “1-3” base states have a higher run expectancy than “1-2” base states in the same out state, then we know that outfield singles are producing more runs on average than infield singles. If outfield and infield singles are producing different amounts of runs on average, then we should differentiate between the two events.

Beyond just breaking down hits by fielding location, we can refine hit types even further. If we differentiate singles and doubles by direction (left, center, right) and by batted ball type (bunt, groundball, line drive, fly ball, pop up) we can more accurately reflect the value of each of those offensive events. While the difference in value between a groundball single to right field compared to a line drive single to center field is minimal, about 0.04 runs, those minimal differences add up over a season or career of plate appearances. Reach on error events should also be broken down like singles and doubles, as balls hit to the third baseman that cause errors are going to have a different effect on the base state than balls hit to the right fielder that cause errors.

The two other ways that wOBA accounts for run production by a batter are through unintentional walks and hit by pitches, notably excluding intentional walks. If a statistic is attempting to estimate the number of runs produced by a player at the plate, I believe the value created by unskilled events should also be counted. While it takes no skill to stand next to home plate and watch four balls go three feet wide of the strike zone, the batter is still given first base and affects his team’s run expectancy for the remainder of the inning. Distinguishing between runs produced from skilled and unskilled events is something that should be considered when forecasting future performance as unskilled events may be harder to repeat. However, when analyzing past performance, all run production should be accounted for, no matter the skill level it required to produce those runs.

There is an argument that the value from an intentional walk should just be assigned to the batting team as a whole, as the batter himself is doing nothing to cause the event to occur; that is, the batter is not swinging the bat, getting hit by a pitch, or astutely taking balls that could potentially be strikes. However, as the players on the field are the only ones who directly affect run production — it isn’t an abstract “ghost runner” on first base after an intentional walk, it’s the batter — the value from the change in run expectancy must be awarded to players on the field. While it can be difficult to determine how to award that value for the pitching team with multiple fielders involved in every event (pitcher and catcher most notably and the rest of the fielders for balls put into play), the only player on the batting team who can receive credit for the event is the batter.

If we accept that the intentional walk requires no skill from the batter, but agree that he should still receive credit for the event, then we can extend that logic to all unskilled events in which the batter could be involved. Along with intentional walks, that would include “reaching on catcher’s interference” and “striking out but reaching on an error, passed ball, or wild pitch.” In those cases, it is the catcher rather than the pitcher causing the batter to reach base but it doesn’t matter to the batting team. If the team’s run expectancy changed due to the batter reaching base, it makes no difference if it was the pitcher, catcher, or any other fielder causing the event to occur.

When building wOBA, the value of the weight for each component is calculated with respect to the value of an average out, like in the example above. Using the average value of all outs is very similar to using the broad definition of “single,” as discussed earlier. Very often we hear about productive outs, and yet we rarely see statistics quantify the value of different types of outs in a context-neutral manner. If a batter were to exclusively make all of his outs as groundballs to the right side of the infield, he would hurt his team less than if he were to make all of his outs as groundballs to the center of the infield. Groundouts to the right side of the infield allow runners on second and third base to advance more easily than groundouts to the center of the infield. Additionally, groundouts to the center of the infield have more potential to turn into double plays than groundouts to the right side of the infield. As above, the differences in value are minimal — around 0.04 runs in this case — but they add up over a large enough sample.

To deal with the difference in the value of outs, all specific types of outs should also be included in any run estimation, weighted in relation to the average value of an out. For instance, in 2013 the average value of all outs in relation to the average value of a plate appearance was -0.258 runs while the average value of a fly out to center field in relation to the average value of a plate appearance was -0.230 runs. Consequently, we can say that a fly out to center field is worth +0.028 runs in relation to the average value of an out. We can do the same for groundouts to the left side of the infield (-0.015) or lineouts to center field (+0.021), as well as every other type of out broken down by direction, batted ball type, and fielding location. Interestingly, and perhaps not surprisingly, all fly outs and lineouts to the outfield are less damaging than an average out while all types of outs in the infield are more damaging than an average out, except for groundouts to the right side of the infield and sacrifice bunts.
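A minimal sketch of that adjustment, using the 2013 figures quoted above; the raw value for the left-side groundout is back-solved from the -0.015 figure in the text:

```python
# Value of a specific out type relative to the average out. Raw values are
# measured against the average plate appearance, per the 2013 figures above.
AVG_OUT = -0.258
SPECIFIC_OUT = {
    "fly_out_cf": -0.230,         # implies +0.028 vs. the average out
    "ground_out_left_if": -0.273, # implies -0.015 vs. the average out
}

for out_type, raw_value in SPECIFIC_OUT.items():
    print(out_type, round(raw_value - AVG_OUT, 3))
```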

Taking the weights for each of these 104 components, applying them to the equivalent statistics for a league average hitter, and dividing by plate appearances, generates values that tend to fall between .280 and .300 based on the scoring environment, somewhat similar to the batting average for a league average player. In 2013, a league average player would have a score of .256 from this statistic compared to a batting average of .253. To make the statistic easily relatable in the baseball universe, I’ve chosen to scale the values in each season to batting average. The end result is a statistic called Offensive Value Added rate (OVAr) which has an average value equal to that of the batting average of a league average player in each season. So, if a .400 batting average is an historic threshold for batters, the same threshold can be applied to OVAr. Since 1993, as far back as this statistic can be calculated with current data, Barry Bonds is the only qualified player to post an OVAr above .400 in a single season, and he did it in four straight seasons (2001-2004).

Where OVAr mirrors the construction of the rate statistic wOBA, another statistic, Offensive Value Added (OVA), mirrors the construction of the counting statistic weighted Runs Above Average (wRAA). Here is the equation for OVA followed by the equation for wRAA.

OVA = ((OVAr – league OVAr) / OVAr Scale) x PA

wRAA = ((wOBA – league wOBA) / wOBA Scale) x PA
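The equation translates directly to code; here’s a minimal sketch, where the scale and league constants are placeholders for whatever a given season’s weights produce:

```python
# Direct transcription of the OVA equation above (wRAA is analogous).
def ova(ovar, league_ovar, ovar_scale, pa):
    return (ovar - league_ovar) / ovar_scale * pa

# Placeholder inputs: a .290 OVAr hitter in a .256 league over 600 PA,
# with a hypothetical scale of 1.15.
print(round(ova(0.290, 0.256, 1.15, 600), 1))  # ~17.7 runs above average
```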

OVA values tend to be very similar to their wRAA counterparts, though they can potentially vary by over 10 runs at the extremes. In 2013, David Ortiz produced 48.1 runs above average according to OVA and “just” 40.3 runs above average according to wRAA, a 19.4% increase from his wRAA value. Of Ortiz’s extra 7.8 runs estimated by OVA, 4.3 of those runs came from the inclusion of intentional walks, and 2.5 of those runs came from Ortiz’s ability to produce slightly less damaging outs through his tendency to pull the ball to the right side of the field.

You won’t find many box scores or player pages that list direction, batted ball type, or whether the ball was fielded in the infield or outfield, but the data is publicly available for all seasons since 1993. While wOBA gives non-programmers the ability to calculate an advanced run estimator relatively easily, if we have data that makes the estimation more precise, then programmers should take advantage. Due to the relative difficulty in calculating these values, I’m providing links to spreadsheets with yearly OVAr and OVA values for hitters, Opponent OVAr and OVA values for pitchers, splits for hitters and pitchers based on handedness of the opposing player, and team OVA and OVAr values for offense and defense, with similar splits. Additionally, I’ve included wRAA values for comparison. Those values may not exactly match those you would find on FanGraphs due to rounding differences at various steps in the process, but they should give a general feel for the difference between OVA and wRAA.

I’ve obviously omitted the meat of the programming work, as I felt it was too technical to include every detail in an article like this. For more information on run estimators built with linear weights methodology I’d highly recommend reading The Book, The Hidden Game of Baseball by John Thorn and Pete Palmer, or any of a variety of articles by Colin Wyers over at Baseball Prospectus, like this one. I used ten years of play-by-play data to get a substantive sample++ of when each type of event happened on average, and I used a single season of data to create the run environments. Otherwise, the general construction of OVAr mirrors the work done by Tom Tango, Mitchel Lichtman, and Andrew Dolphin in The Book.

The next step for this statistic is to make it league and park neutral (nOVAr and nOVA). I chose to omit this step for the initial construction of these statistics as it was also omitted in the initial construction of wOBA and wRAA. Also, the current methods to determine park factors used by FanGraphs and ESPN, among other sites, are somewhat flawed and not something I want to implement. Until that next step, enjoy a pair of new statistics.

OVAr and OVA, Ordered Batters

OVAr and OVA, Alphabetical Batters

OVAr and OVA, Ordered Batter Splits

OVAr and OVA, Alphabetical Batter Splits

OVAr, Ordered Qualified Batters

OVAr, Ordered Qualified Batter Splits

Opponent OVAr and OVA, Ordered Pitchers

Opponent OVAr and OVA, Alphabetical Pitchers

Opponent OVAr and OVA, Ordered Pitcher Splits

Opponent OVAr and OVA, Alphabetical Pitcher Splits

Opponent OVAr, Ordered Qualified Pitchers

Opponent OVAr, Ordered Qualified Pitcher Splits

OVAr and OVA, Teams

OVAr and OVA, Team Splits

OVAr and OVA, Ordered Weights

OVAr and OVA, Alphabetical Weights

 

^^ These averages exclude all events in home halves of the 9th inning or later to avoid biases created by walk-off hits and the inability of the home team to score an unlimited number of runs in 9th inning or later like they can in any other inning.

** A number in the Base State column represents a runner on that base, with 0 representing bases empty.

++ I have one note on sample size that I didn’t think fit anywhere comfortably in the main body of the article. The biggest issue with a statistic built from very specific events is that some of those events are extremely rare. For instance, groundouts to the outfield have happened just 111 times since 1993, compared to 891,175 groundouts to the infield over the same span. Consequently, the average value of outfield groundouts, split up by direction, can vary substantially from year to year as different events are added to or taken away from the sample. I chose to use a ten-year sample to limit those effects as much as possible, but they will still be evident upon close examination. With that sample size, in 2013 a groundout to left field was worth -0.447 runs in relation to the average value of an out. In 2006 the same event was worth -0.089 runs, while in 2000 it was worth +0.154 runs.

As long as the statistic is built in a logically consistent manner, I don’t mind that low frequency events like outfield groundouts and infield doubles vary somewhat from year to year in estimated value, as the cumulative effect will be quite minimal. That being said, as I am trying to estimate the value of events as accurately as possible, the variation in value is a bit off-putting. It may be that a sample of 20 or more years would be necessary for those rare events, with a smaller sample size for the more common events. That adjustment will be considered for the nOVAr and nOVA implementations, but for OVAr and OVA I wanted the construction to be completely consistent.


TIPS, A New ERA Estimator

FIP, xFIP, and SIERA are all very good ERA estimators, and their predictive power is well documented. It is well known that SIERA is the best ERA estimator over season-to-season samples, followed very closely by xFIP, with FIP lagging behind. FIP is best at showing actual performance, though, because it uses all real events (K, BB, HR). Skill is commonly best attributed to either xFIP or SIERA. ERA is also well known to be the worst metric at predicting future performance, unless the sample size is very large (500+ IP) and the pitcher remains in the same or a very similar pitching environment.

FIP, xFIP, and SIERA are supposed to be defense-independent metrics, and they are. Well, they are independent of field defense, but there is one small error in the claim of defense independence: K’s and BB’s are not completely independent of defense. Catcher pitch framing plays a role in K’s and BB’s. Catchers can be good or bad at turning balls into strikes, and this affects K’s and BB’s. Umpire randomness and umpire bias also play a role. It is unknown to what extent getting umpires to call more strikes is a repeatable pitcher skill. Some pitchers consistently get more strike calls (Buehrle, Janssen) or fewer strike calls (Dickey, Delabar), but for most pitchers it is very random (especially in small sample sizes). For example, Jason Grilli was in the top 5% in 2013 but in the bottom 10% in 2012.

I wanted to come up with another ERA estimator that eliminates catcher framing, umpire randomness and bias, and defense. I took the sample of pitchers who have pitched at least 200 IP since 2008 (N=410) and analyzed how different statistics affect ERA- within that sample. I used ERA- since it takes out park factors and adjusts for changes in the league from year to year. I looked at the plate discipline PitchF/x numbers (O-Swing, Z-Swing, O-Contact, Z-Contact, Swing, Contact, Zone, SwStr), the six different results based off plate discipline (zone or o-zone, swing or looking, contact or miss, for ZSC%, ZSM%, ZL%, OSC%, OSM%, OL%), and batted ball profiles (GB%, LD%, FB%, IFFB%). *Please note that all plate discipline data is PitchF/x data, not the other plate discipline data on FanGraphs; this is important, as the values differ.*
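As an illustration of that screening step, the whole R^2 scan can be done in a few lines; this is a rough sketch assuming pandas and hypothetical column names standing in for the FanGraphs PitchF/x export, not my exact code.

    import pandas as pd

    CANDIDATES = ["O-Swing%", "Z-Swing%", "O-Contact%", "Z-Contact%", "Swing%",
                  "Contact%", "Zone%", "SwStr%", "GB%", "LD%", "FB%", "IFFB%"]

    def r2_scan(df: pd.DataFrame) -> pd.Series:
        # Squared Pearson correlation of each candidate stat with ERA-
        return (df[CANDIDATES].corrwith(df["ERA-"]) ** 2).sort_values(ascending=False)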

The stats with very little to absolutely no correlation (R^2<0.01) were: Z-Swing%, Zone%, OSC%, ZSC%, ZL% (a bit surprising, as this is essentially looking-strike%), GB%, and FB%. These are obviously no-nos to include in my estimator.

The stats with little correlation (R^2<0.1) were: Swing%, LD%, and IFFB%. I shouldn’t use these either.

O-Contact% (.170), Z-Contact% (.302), Contact% (.319), OSM% (.206), and ZSM% (.248) are all obviously directly related to SwStr%. SwStr% had the highest correlation (.345) of any of these stats, and there is no need to include all of the sub-stats when I can just use SwStr%. SwStr% will be used in my metric.

OL% (.105) is an obvious component of O-Swing% (.192). O-Swing% had the second-highest correlation of the metrics (other than the components of SwStr%), so I will use it as well. The theory behind using O-Swing% is that when the batter doesn’t swing at a pitch outside the zone, it should almost always be a ball (which is bad), but when the batter swings, there are two outcomes: a swing and miss (a sure strike) or contact. Intuitively, you could say that contact on pitches outside the zone is not as harmful to pitchers as contact on pitches inside the zone, as the batter should make worse contact. This is partially supported by the lower R^2 for O-Contact% compared to Z-Contact%: it is more harmful for a pitcher to have a batter make contact on a pitch in the zone than on a pitch out of the zone. This is why O-Swing% is important, and why I will use it.

Using just SwStr% and O-Swing%, I came up with a formula (with the help of Excel) to estimate ERA-. I ran this formula through different samples and different tests, but it just didn’t produce the results I was looking for. The standard deviation was way too small compared to the other estimators, and the root mean square error was just not good enough for predicting future ERA-.

I did not expect (or want) this estimator to be more predictive than xFIP or SIERA. This is because xFIP and SIERA have more environmental impacts in them that remain fairly constant. K% is always a better predictor of future K% than any xK% you can come up with; same with BB%. Why? Probably because the environment of catcher framing and umpire bias remains somewhat constant. Also (just speculation), pitchers who have good control can throw a pitch well out of the zone when they are ahead in the count, just to try to get the batter to swing or to “set up” a pitch. They would get minus points for this from O-Swing%, depending on how far the pitch is off the plate, but it may not affect their K% or BB% if they come back and still strike out the batter.

So I didn’t expect my statistic to be more predictive, but the small standard deviation, coupled with a not-that-great RMSE (still better than ERA and FIP with a minimum of 40 IP), left me unhappy with my stat.

I then started to think: are there any stats, dependent only on the interaction between batter and pitcher and rooted in skill, that FanGraphs does not have readily available? I started thinking about foul balls and wondered if foul ball rates were skill based and related to ERA-. I then calculated the number of foul balls each pitcher had induced. To find this I subtracted BIP (balls in play, or FB+GB+LD+BU+IFFB) from contacts (Contact%*Swing%*Pitches). This gave me the number of fouls. I then calculated the rates of fouls/pitch and fouls/contact and compared these to ERA-. Fouls/contact, or what I’m calling Foul%, had an R^2 of .239. That’s second only to SwStr%. This got me excited, but I needed to know whether Foul% is skill based and see what else it correlates with.
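In code, the Foul% arithmetic looks something like the sketch below: a hedged reconstruction with illustrative column names, where Contact% and Swing% are stored as fractions and the batted-ball columns are counts.

    import pandas as pd

    def add_foul_pct(df: pd.DataFrame) -> pd.DataFrame:
        contacts = df["Contact%"] * df["Swing%"] * df["Pitches"]  # balls the bat touched
        bip = df[["FB", "GB", "LD", "BU", "IFFB"]].sum(axis=1)    # balls in play
        fouls = contacts - bip                                    # contacts that stayed foul
        return df.assign(**{"Foul%": fouls / contacts})           # fouls per contact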

This article from 2008 gave me some insight into Foul%. Foul% correlates well to K% (obviously) and to BB% (a negative relationship), since a foul is a strike. Foul% had some correlation to SwStr%; this is good, as it means pitchers who are good at getting whiffs are usually also good at getting fouls. Foul% also had some correlation to FB% and GB%: the more fouls you give up, the more fly balls you give up (and the fewer ground balls). This doesn’t matter, however, as GB% and FB% had no correlation to ERA-. Foul% is also fairly repeatable year to year, as evidenced in the article, so it is a skill. I will come up with a new estimator that includes Foul% as well.

I decided to use O-Looking% instead of O-Swing%, just to get a value that has a positive relationship to ERA (more O-Looking means a higher ERA), since SwStr% and O-Swing% are negatively related to it. O-Looking% is just the complement of O-Swing% and is calculated as (1 – O-Swing%).

The formula that Excel and I came up with is this (I am calling the metric TIPS, for True Independent Pitching Skill):

TIPS = 6.5*O-Looking(PitchF/x)% – 9.5*SwStr% – 5.25*Foul% + C

C is a constant that changes from year to year to adjust to the ERA scale (to make an average TIPS = average ERA). For 2013 this constant was 2.68.
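As a quick sanity check, the formula is trivial to compute; the sketch below expresses the rates as fractions and uses the 2013 constant.

    def tips(o_looking, swstr, foul, constant=2.68):
        # True Independent Pitching Skill, scaled to league-average ERA
        return 6.5 * o_looking - 9.5 * swstr - 5.25 * foul + constant

    # e.g. a pitcher with 30% O-Swing (70% O-Looking), 10% SwStr%, 45% Foul%
    print(round(tips(1 - 0.30, 0.10, 0.45), 2))  # -> 3.92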

I converted this to TIPS- to better analyze the statistic. FIP, xFIP, and SIERA were also converted to FIP-, xFIP-, and SIERA-. I took all pitchers’ seasons from 2008-2013 to analyze; the sample varied from 0.1 IP to 253 IP. I found the following season’s ERA- for each pitcher who pitched more than 20 IP the next year and eliminated any huge outliers. Here are the results with no minimum IP. RMSE is root mean square error (smaller is better), AVG is the average difference (smaller is better), R^2 is self-explanatory (larger is better), and SD is the standard deviation.

N=2316   ERA-     FIP-     xFIP-    SIERA-   TIPS-
RMSE     77.005   51.647   43.650   43.453   40.767
AVG      43.941   34.444   30.956   30.835   30.153
R^2      0.021    0.045    0.068    0.147    0.169
SD       69.581   38.654   24.689   24.669   15.751
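For anyone replicating these tables, the four diagnostics can be computed along these lines; this sketch reads AVG as the mean absolute difference and SD as the spread of the estimator itself, which is how I am using the terms.

    import numpy as np

    def diagnostics(est, next_era_minus):
        est, nxt = np.asarray(est), np.asarray(next_era_minus)
        err = est - nxt
        r = np.corrcoef(est, nxt)[0, 1]
        return {"RMSE": np.sqrt(np.mean(err ** 2)),  # smaller is better
                "AVG": np.mean(np.abs(err)),         # smaller is better
                "R^2": r ** 2,                       # larger is better
                "SD": np.std(est)}                   # spread of the estimator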

Wow, TIPS- beats everyone! But why? Most likely because I have included small samples and TIPS- is built on a per-pitch basis, as opposed to per batter (SIERA) or per inning (xFIP and FIP). There are far more pitches than ABs or IPs, so TIPS will stabilize very fast. Let’s eliminate small sample sizes and look again.

Min 40 IP
N=1619   ERA-     FIP-     xFIP-    SIERA-   TIPS-
RMSE     40.641   36.214   34.962   35.634   35.287
AVG      29.998   26.770   25.660   25.835   26.115
R^2      0.063    0.105    0.120    0.131    0.101
SD       26.980   19.811   15.075   17.316   13.843


Min 100 IP
N=654    ERA-     FIP-     xFIP-    SIERA-   TIPS-
RMSE     32.270   29.949   29.082   28.848   29.298
AVG      24.294   22.283   21.482   21.351   22.038
R^2      0.080    0.118    0.143    0.145    0.095
SD       20.580   16.025   12.286   12.630   10.985

Now, TIPS is beaten out by xFIP and SIERA, but it beats ERA and is close to FIP (it wins in RMSE, loses in R^2). This is what I expected; as I explained earlier, K% and BB% are always better at predicting future K% and BB%, and they are included in SIERA and xFIP. SIERA and xFIP use more concrete events (K, BB, GB) than TIPS. I didn’t want to beat these estimators; I wanted an estimator that is independent of everything except the pitcher-batter interaction.

TIPS won when there was no IP limit, so it is clearly the best to use in small sample sizes. But when is it better than xFIP and SIERA, and where does it start falling behind? I plotted the RMSE for my entire sample at each IP; theoretically, RMSE and IP should have an inverse relationship. After 150 IP it gets a bit iffy, as most of my sample is under 100 IP. I’m more interested in IP under 100 anyhow.

Orange is TIPS, Blue is ERA, Red is FIP, Green is xFIP, and Purple is SIERA. If you can’t see xFIP, it’s because it is directly underneath SIERA (they are almost identical). This is roughly what the graph should look like to 100 IP:

Looking at the graph, at what IP totals is TIPS better at predicting future ERA than xFIP and SIERA? It appears to be from 0 IP to around 70 IP.

Here is the graph for 1/RMSE; a higher number is better. This is the most accurate graph, as the relationship should be inverse.

The 70-80 IP mark is clear here as well.

I’m not suggesting my estimator is better than xFIP or SIERA (it isn’t in samples over 75 IP), but I think it is, and can be, a very powerful tool. Most bullpen pitchers stay under 75 IP in a season, which means TIPS would be very useful for predicting the future ERA of bullpen arms. I also believe my estimator is a very good indicator of the raw skill of a pitcher. It would probably be even more predictive if we had robo-umps that eliminated umpire bias, umpire randomness, and pitch framing.

2013 TIPS Leaders with 100+ IP

Name               ERA    FIP    xFIP   SIERA  TIPS
Cole Hamels        3.60   3.26   3.44   3.48   3.02
Matt Harvey        2.27   2.00   2.63   2.71   3.09
Anibal Sanchez     2.57   2.39   2.91   3.10   3.23
Yu Darvish         2.83   3.28   2.84   2.83   3.23
Homer Bailey       3.49   3.31   3.34   3.39   3.26
Clayton Kershaw    1.83   2.39   2.88   3.06   3.32
Francisco Liriano  3.02   2.92   3.12   3.50   3.34
Max Scherzer       2.90   2.74   3.16   2.98   3.36
Felix Hernandez    3.04   2.61   2.66   2.84   3.37
Jose Fernandez     2.19   2.73   3.08   3.22   3.42


And Leaders from 40 IP to 100 IP

Name               ERA    FIP    xFIP   SIERA  TIPS
Koji Uehara        1.09   1.61   2.08   1.36   1.87
Aroldis Chapman    2.54   2.47   2.07   1.73   2.03
Greg Holland       1.21   1.36   1.68   1.50   2.29
Jason Grilli       2.70   1.97   2.21   1.79   2.36
Trevor Rosenthal   2.63   1.91   2.34   1.93   2.42
Ernesto Frieri     3.80   3.72   3.49   2.70   2.45
Paco Rodriguez     2.32   3.08   2.92   2.65   2.50
Kenley Jansen      1.88   1.99   2.06   1.62   2.50
Glen Perkins       2.30   2.49   2.61   2.19   2.54
Edward Mujica      2.78   3.71   3.53   3.25   2.54



xHitting: Going beyond xBABIP (part I)

For a few years, it’s struck me as unusual that pitching and hitting metrics are asymmetric.  If the metrics we use to evaluate one group (FIP or wRC+) are so good, why don’t we use them for the other?

One issue is that we’re not used to evaluating pitchers on an OPS-type basis, and similarly we’re not used to evaluating hitters on an ERA basis.  Fine.  But there’s a bigger issue: Why do pitching metrics put so much more emphasis on the removal of luck?

While most sabermetricians are aware of BABIP, and recognize the pervasive impacts it can have on a batting line, attempts to (precisely) adjust hitter stats for BABIP are surprisingly uncommon.  While there do exist a few xBABIP calculators, these haven’t yet caught on en masse like FIP.  And xBABIP doesn’t appear on player pages in either FanGraphs or Baseball Prospectus.

xBABIP itself isn’t even the end goal.  What you probably really want is xAVG/xOBP/xSLG, etc.  Obtaining these is a bit cumbersome when you need to do the conversions yourself.

Moreover, it strikes me that xBABIP cannot be converted to xSLG without some ad hoc assumptions.  Let’s say you conclude a player would have gained or lost 4 hits under neutral BABIP luck.  What type of hits are those?  All singles?  2 singles and 2 doubles?  1 single, 2 doubles, 1 triple?  The exact composition of hits gained/lost affects SLG.  Or maybe you assume ISO is unaffected by BABIP, but this too is ad hoc.

At least to me, whenever a hitter performs better/worse than expected, we really care to know two things:

  1. Is it driven by BABIP?
  2. If so, what is the luck-neutral level of performance?

As I’ve attempted to illustrate, answering #2 is not so easy under existing methods.  (Nor do people always even attempt to answer it, really.)  Even answering #1 correctly takes a little bit of effort.  (“True talent” BABIP changes with hitting style, so it isn’t always enough just to compare current vs. career BABIP.  And then there are players with insufficient track record for career BABIP to be taken at face value.)

Compare this to pitchers.  When a pitcher posts a surprisingly good/bad ERA, we readily consult FIP/xFIP/SIERA.  Specific values, readily provided on the site.  So why not for hitters?

Here I attempt to help fill this gap.  The approach is to map a hitter’s peripheral performance to an entire distribution of hit outcomes.  These “expected” values of singles, doubles, triples, home runs, and outs can then be used to compute “expected” versions of AVG, OBP, SLG, OPS, wOBA, etc.

Recovering xAVG and xOBP isn’t that different from current xBABIP-based approaches.  The main extension is that, unlike xBABIP, this provides an empirical basis to recover xSLG, and also xWOBA.

Steps (a code sketch follows the list):

  1. Calculate players’ rates of singles, doubles, triples, home runs, and outs among balls in play.  (Unlike some other BABIP settings, I count home runs as “balls in play” to estimate an expected number.)
  2. Regress each rate separately on a common set of peripherals.  You’ll now have predicted rates of each for each player.   (Keeping the explanatory variables common throughout ensures the rates sum to 100%.)
  3. Multiply by the number of balls in play (again counting home runs) to get expected counts of singles, doubles, triples, home runs, and outs.
  4. Use these to compute expected versions of your preferred statistics.
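A minimal sketch of those four steps, assuming scikit-learn, one row per player-year, and hypothetical column names (rate_1B through rate_Out are per-BIP rates, with home runs counted as in play):

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    RATES = ["rate_1B", "rate_2B", "rate_3B", "rate_HR", "rate_Out"]

    def expected_counts(df: pd.DataFrame, peripherals: list) -> pd.DataFrame:
        X = df[peripherals]  # step 2: common regressors for every outcome
        pred = pd.DataFrame({r: LinearRegression().fit(X, df[r]).predict(X)
                             for r in RATES}, index=df.index)
        # A shared design matrix plus intercept means the fitted rates sum to
        # 1 for every row, because the observed rates do
        return pred.mul(df["BIP"], axis=0)  # step 3: rates -> expected counts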

What explanatory peripherals are appropriate?  Initially I’ve used:

  • Line drive rate, ground ball rate, flyball rate, popup rate
  • Speed score
  • Flyball distance (from BaseballHeatMaps.com), to approximate power
  • Speed * ground ball rate
  • Flyball distance * flyball rate

These explanatory variables differ somewhat from those in the xBABIP formula linked earlier.  The main distinctions are adding flyball distance (think Miguel Cabrera vs. Ben Revere) and using Speed score instead of IFH%.  (IFH% already embeds whether the ball went for a hit.  Certainly in-sample this will improve model fit, but it might not be good for out-of-sample use.)

Regression results:

               Spd      FB Dist/1000  FB Dist missing  (Spd*GB%)/1000  (FB Dist*FB%)/10000  LD%      GB%      FB%      IFFB%/100  Pitcher dummy  Constant
Singles rate   -0.0177   0.0608        0.0111           0.4882          0.0090              -0.0019  -0.0063  -0.0066  -0.0417    -0.6833         0.7296
Doubles rate    0.0076   0.6044        0.1457          -0.1059         -0.0152              -0.0058  -0.0066  -0.0061  -0.0070    -0.6700         0.5235
Triples rate    0.0040   0.0193        0.0057          -0.0279         -0.0019              -0.0077  -0.0077  -0.0077  -0.0010    -0.7695         0.7634
HR rate         0.0018   0.9392        0.2764          -0.0295          0.0283               0.0081   0.0080   0.0085  -0.0127     0.8020        -1.0790
Outs rate       0.0043  -1.6238       -0.4389          -0.3249         -0.0202               0.0073   0.0125   0.0118   0.0624     1.3205         0.0625

Technical notes:

  • These are rates among balls in play (including home runs)
  • Each observation is a player-year (e.g. 2012 Mike Trout)
  • I’ve used 2010-2012 data for these regressions
  • Currently I’ve only grabbed flyball distance for players on the leaderboard at BaseballHeatMaps.  This is usually about 300 players per year, or most of the “everyday regulars.”  (Fear not, Ben Revere/Juan Pierre/etc. are included.)  The remaining cases get an indicator for ‘FB Dist missing.’
  • LD%, GB%, FB%, and IFFB% are coded so that 50% = 50, not 0.50.
  • Pitcher dummy = 1 if LD% + GB% + FB% = 0.  Initially I haven’t thrown out cases of pitcher hitting, nor other instances of limited PA.
  • Notice the interaction terms.  The full impact of GB% depends both on GB% and Speed; the full impact of FB% depends on both FB% and FB distance; etc.  So don’t just look at Speed, GB%, FB%, or FB Distance in isolation.  (A feature-construction sketch follows these notes.)
  • Don’t worry that the coefficients on pitcher proxy “look” a bit funny for HR rate and Outs rate.  (Remember that these cases also have LD%=0, GB%=0, and FB%=0.)  In total the average predicted HR rate for pitchers is 0.01% and their predicted outs rate is 94%.
  • Strictly speaking, these are backwards-looking estimators (as are FIP and its variants), but they might well prove useful in forecasting.
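To make the notes concrete, the design matrix implied by the regression table’s column headers can be assembled like this; the input column names are illustrative, with LD%/GB%/FB%/IFFB% on the 0-100 scale as noted above.

    import pandas as pd

    def build_features(df: pd.DataFrame) -> pd.DataFrame:
        out = pd.DataFrame(index=df.index)
        out["Spd"] = df["Spd"]
        out["FB Dist/1000"] = df["FBDist"].fillna(0) / 1000
        out["FB Dist missing"] = df["FBDist"].isna().astype(int)
        out["(Spd*GB%)/1000"] = df["Spd"] * df["GB%"] / 1000
        out["(FB Dist*FB%)/10000"] = df["FBDist"].fillna(0) * df["FB%"] / 10000
        for col in ["LD%", "GB%", "FB%"]:
            out[col] = df[col]
        out["IFFB%/100"] = df["IFFB%"] / 100
        # Pitcher dummy: flagged when no batted-ball data exists at all
        out["Pitcher dummy"] = ((df["LD%"] + df["GB%"] + df["FB%"]) == 0).astype(int)
        return out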

I next calculate xAVG, xOBP, xSLG, xOPS, and xWOBA.  For now, I’ve simply taken BB and K rates as given.  (xBABIP-based approaches seem to do the same, often.)
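Concretely, once the expected counts are in hand, the expected slash stats follow from the standard definitions; this sketch uses hypothetical variable names and glosses over the finer points of sac-fly bookkeeping.

    def expected_slash(x1b, x2b, x3b, xhr, xout, bb, k, hbp=0.0, sf=0.0):
        ab = x1b + x2b + x3b + xhr + xout + k        # expected at-bats
        xh = x1b + x2b + x3b + xhr                   # expected hits
        xavg = xh / ab
        xobp = (xh + bb + hbp) / (ab + bb + hbp + sf)
        xslg = (x1b + 2 * x2b + 3 * x3b + 4 * xhr) / ab
        return xavg, xobp, xslg, xobp + xslg         # xAVG, xOBP, xSLG, xOPS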

Early results are promising, as “expected” versions of AVG, OBP, SLG, OPS, and wOBA all outperform their unadjusted versions in predicting next-year performance.  (At least for the years currently covered.)

Which players deviated most from their xWOBA?  Here are the leaders/laggards for 2012, along with their 2013 performance:

Leaders

Name                2012 wOBA  2012 xWOBA  Difference  2013 wOBA
Brandon Moss        0.402      0.311        0.091      0.369
Giancarlo Stanton   0.405      0.332        0.073      0.368
Will Middlebrooks   0.357      0.285        0.072      0.300
Chris Carter        0.369      0.298        0.071      0.337
John Mayberry       0.303      0.238        0.065      0.298
Torii Hunter        0.356      0.293        0.063      0.346
Jamey Carroll       0.299      0.244        0.055      0.237
Cody Ross           0.345      0.291        0.054      0.326
Melky Cabrera       0.387      0.333        0.054      0.303
Kendrys Morales     0.339      0.286        0.053      0.342

Laggards

Name                2012 wOBA  2012 xWOBA  Difference  2013 wOBA
Josh Harrison       0.274      0.355       -0.081      0.307
Ryan Raburn         0.216      0.290       -0.074      0.389
Nick Hundley        0.205      0.265       -0.060      0.295
Jason Bay           0.240      0.299       -0.059      0.306
Eric Hosmer         0.291      0.349       -0.058      0.350
Gerardo Parra       0.317      0.369       -0.052      0.326
Daniel Descalso     0.278      0.328       -0.050      0.284
Jason Kipnis        0.315      0.365       -0.050      0.357
Rod Barajas         0.272      0.322       -0.050
Cameron Maybin      0.290      0.339       -0.049      0.209

Is performance perfect?  Obviously not.  The model does quite well for some, medium-well for others, and not-so-well for some.  Obviously this is not the end-all solution for xHitting.

Some future work that I have in mind:

  • A still more complete set of hitting peripherals.  I’m thinking of park factors, batted ball direction, and possibly others.
  • Testing partial-season performance
  • Comparing results against projection systems like ZiPS and Steamer

Otherwise, my main hope from this piece is to stimulate greater discussion of evaluating hitters on a luck-neutral basis.  Simply identifying certain players’ stats as being driven by BABIP is not enough; we really should give precise estimates of the underlying level of performance based on peripherals.  We do this for pitchers, after all, with good success.

Above I’ve contributed my two cents for a concrete method to do this.  A major extension to xBABIP-based approaches is that this offers an empirical basis to recover xSLG and xWOBA.  While the model is far from perfect, even in its current form it generates “expected” versions of AVG, OBP, SLG, OPS, and wOBA that outperform their unadjusted versions in predicting subsequent-year performance.  (Not just for leaders/laggards.)

Comments and suggestions are obviously welcome!


The Best of the Worst, Or, What Do Roberto Clemente, Pete Rose, and Ted Simmons Have In Common?

Note: I have no idea if I’m the first to do this, but quite frankly I don’t care.

There’s always been something strangely romantic to me about being the absolute worst at something. And when I say worst, I don’t just mean one of the worst–I’m talking about being the absolute worst period whatever period ever period. I can’t explain why it is–perhaps it’s because I’ve been last at virtually everything throughout my life, or perhaps it’s because I’m a fan of the Orioles–but for some reason, I’m transfixed by the idea of being the floor, the ultimate, the person or entity that everyone else looks down upon.

Now, what do my strange, borderline masochistic feelings towards the awful have to do with three of the better all-time players, two of whom are enshrined in Cooperstown and one of whom probably should be? Well, they all have one thing in common, which no one seems to have realized: At one point or another, they were all the worst qualifying position player in the majors.

How, you ask? When? Why? I’ll answer your questions, but first I’d like to share with you some of the other big names that fit this criteria. Since the season ended on the 29th of September¹, there are now 143 seasons, meaning 143 worst players (or LVPs, for the sake of this exercise). I gathered up each of them, and saw that of the 143 atrocious seasons, many of them involved players that had good–or in some cases, great–careers. I then proceeded to order each player season by career WAR, to present you, the unedumacated reader, with…The Best Of The Worst.

By that, I mean: the following post consists of the top-10 careers (as measured by career WAR) for position players that were the worst in the major leagues for a particular season. I classified the general area of their career that the LVP season happened in: start-of-career bump, middle-of-career fluke, or end-of-career decline; I also put in my attempt at an explanation as to why the bad season happened. Oh, and I also divided them into groups (as they were somewhat similar), à la Bill Simmons’ NBA Trade Value Rankings.

Off we go!

GROUP I: AGING IN THE OUTFIELD²

10. Marquis Grissom

Career WAR: 26.9

LVP year/WAR: 2000/-1.8

Classification: End-of-career decline

Grissom had a solid career for the most part–he won a World Series, with a club that doesn’t tend to do well in the postseason; he won four consecutive Gold Gloves (from 1993 to 1996); at the time of his retirement, he was one of only seven players all-time with 200 home runs, 400 stolen bases, and 2000 hits (a club later joined by Johnny Damon); and he is now, to paraphrase Drake, 46 sittin’ on 51 mil ($51,946,500, to be exact). Also, as you can see above, he compiled a decent career WAR total, including two 5-win seasons in 1992 and 1993 for the Expos. However, you sure as hell wouldn’t have known that from watching his horrid 2000 season.

Traded to the Brewers in 1998 after the Indians re-signed Kenny Lofton, Grissom was never able to recapture the magic from his early days north of the border, or at the very least, his sufficing days in Atlanta or Cleveland.  At this point, his fielding in center (which peaked at 20.7 Def³ in 1994) was in decline, but his glove work in 2000, while unsatisfactory (-3.8 Def), wasn’t particularly bad for him, as he’d proceed to have Defs lower than that in four of his next (and last) five seasons.  His baserunning was also trending downward; his BsR (which peaked at 10.5 in 1992, when he stole 78 bases) had plummeted all the way to -0.5, as he swiped a mere 20 bags.  Again, though, this is a relatively minute contribution.

His batting was the major reason for his hideousness in 2000: He put up a Starlinesque triple-slash of .244/.288/.351, in a season when 16 players hit 40 home runs, 15 batted at least .330, and the major league-average triple-slash was .270/.345/.437. This all added up to an Ichiroesque wOBA of .282 and an Hechavarriaesque wRC+ of 59, which, combined with the aforementioned poor baserunning and defense, was enough to give him -1.8 WAR and edge him past Mike Lansing (-1.7) for the LVP.

This disappointing offensive season, while not all that unusual, was certainly a fluke to some degree.  In terms of plate discipline, he was basically the same in 2000 (6.1% BB, 15.5% K) as he was for his career (6.2% BB, 13.8% K); it was when he put the ball in play that he got in trouble.  His ISO of .108 was his lowest since 1991 (his first full season), and it convalesced to a much healthier .183 in 2001 (albeit in 172 fewer plate appearances); in addition, his BABIP was at .270, the second-lowest of his career to that point, although it dropped even further, to .242, in 2001.  Not that he had a huge rebound that year, either; still, the increase in ISO (and not qualifying for the batting title) was enough for a lofty -0.9 WAR in 468 trips to the plate.

9. Carlos Lee

 Career WAR: 28.2

LVP year/WAR: 2010/-1.5

Classification: End-of-career decline

El Caballo clearly had some late-career struggles (as Richard Justice sure liked to point out); the huge (for the time) $100 million deal that Houston signed him to certainly didn’t help the fans’ image of him. The contract notwithstanding, Lee had a decent career–two Silver Sluggers (2005, 2007), a five-win season in 2004 with the White Sox, and the career WAR that precedes this section. He also had the rare (for this era) distinction of never striking out 100 times in a season, and to top it all off, he was a player who acknowledged, and embraced, his critics. Plus, you can’t blame him for signing the contract–blame that dipshit of a GM, Ed Wa-wait, what’s that you say? Lee wasn’t signed to the ridiculous contract by Wade, but by…Tim Purpura? The guy with one reference on his Wikipedia page? Who the hell is he? Oh, whatever. Where was I?

Ah, yes. Lee’s final years. It should be noted that Mr. Lee’s abominable antepenultimate⁴ season was bookended by respectable seasons in 2009 and 2011, when he put up a combined 3.9 WAR–not exactly Mike Trout, but also not Delmon Young. With aaaaaalll of that said, though, the fact of the matter is: Carlos Lee was really goddamn awful in 2010. He provided typical Carlos Lee defense (-23.2 Def, edging his -22.3 Def from 2006 for the worst of his career) and baserunning (-3.6 BsR…pretty much in line with his career numbers), which conspired with a .246/.291/.417 triple-slash (.310 wOBA, 89 wRC+) to give him a WAR of -1.5.

I brought up earlier that his 2010 season was, to some extent, a fluke. His poor offensive showing in 2010 had two main causes: a low BABIP (.238, 21 points lower than his previous career low of .259), driven by a 15.6% line-drive rate well below his career average of 20%; and a decrease in free passes (5.7% BB, much lower than his career rate of 7.5%). Both of these measures bounced back the following season, to .279 and 9%, respectively.

And lest we forget, 2010 was a year with a looooot of bad players–Melky Cabrera (-1.4 WAR) getting run out of Atlanta, Adam Lind (-1.0) and Jason Kubel (-0.4) crashing down to Earth after breakout seasons, Jonny Gomes (-0.4) and Carlos Quentin (-0.3) showing off their fielding prowess, Cesar Izturis (-0.4) being Cesar Izturis…But I digress. The main takeaway: 2010 Carlos Lee = horrible. Basically all other years Carlos Lee = not (to varying degrees).

GROUP II: THEY DON’T RACKA THE DISCIPRINE

8. Ray Durham

Career WAR: 30.3

LVP year/WAR: 1995/-1.4

Classification: Start-of-career bump

The Sugarman’s career met a premature end, but he was a trusty player for most of it. He was never a star player–his career-high single-season WAR was 3.9 in 2001, in his last full year with the White Sox–but he was always a contributor, putting up at least a two-win season for nine of the eleven seasons from 1996 to 2006. He made two All-Star teams (in 1998 and 2000), scored 100 runs in every year from 1997 to 2002 (if you care about such things), and stole at least 23 bases a season over that span (plus 30 in 1996). When his career started, though, he was just a fifth-round draft pick by the White Sox in 1990, who worked his way up through the farm system and won the starting second baseman job in 1995 spring training. How did that first season go?

Well, let’s start by saying he was never a very good defender. His career high Def was 8.1 in 2001, and he followed that up with -11.9 in 2002; for his career, his Def was -62.5, 2nd-worst among second basemen over that time. In the year in question, though, he took his defense to a new level. He put up…a -20.7 Def. Now, that doesn’t sound particularly (or at least historically) shitty, at least at first glance; after all, three players had worse figures this year alone⁵. However, one must consider that the position Mr. Durham manned was (and is) one of the premier defensive positions on the field, such that there have only been two–count ’em, two–players ever with a Def lower than that at second: Todd Walker (-21.5) in 1998, and Tony Womack (-24.2) in 1997. What’s more, Durham achieved that in only 1049.2 innings (in 122 games played); supposing he played 20% more innings (~1250, a standard that 7 of the 19 qualified second basemen reached in 2013), he could have easily had a -25 Def, a depth the likes of which no second baseman has sunk to.

As dreadful as he was with the glove, he was still pretty reprehensible with the bat. In 517 plate appearances, he posted a .257/.309/.384 triple-slash and a .306 wOBA. In this day and age, those numbers are all right; in 2013, Brandon Phillips was able to put up a 91 wRC+ with similar lines. During the height of the PAWSTMOMNEP⁶ Era, in the Cell? Those numbers are unacceptable, and this was reflected in Durham’s 82 wRC+ and -10.9 Off. His awfulness in these two areas was not offset by his relatively solid baserunning (1.3 BsR), and he finished the season with a WAR of -1.4, which tied him with Kevin Stocker for the honor of ultimate player.

Durham’s defense never became great, but his offense rebounded after this fluke rookie season. His 6% walk rate as a rookie was much lower than his career average of 9.7%, and was by far the lowest of his career; his .127 ISO as a rookie was also considerably lower than his career ISO of .158, and was the third-lowest of his career. He was inconsistent for a few years after 1995, alternating between solid (2.0 WAR in 1996, 3.2 in 1998) and not so solid (0.3 in 1997, 1.2 in 1999), before accruing 2.7 WAR in 2000, the start of a seven-year run of at least two wins. But, despite what this article might lead you to believe, he was not, in fact, a good rookie.

7. Ron Fairly

Career WAR: 35.3

LVP year/WAR: 1967/-0.8

Classification: Middle-of-career fluke

In terms of hitting, Fairly is akin to the players that precede him (and the player that he precedes): They all have so-so averages and power numbers, in addition to undesirable defense, but have fairly good plate discipline, which allowed them to enjoy fruitful careers.  Hence, the cheesy and racist title for this section⁷.

Anyway…Fairly was another player like Durham–never elite, but always productive. He had two 4-win seasons (1970 and 1973), made the All-Star team in 1973 and 1977, and owned a career .360 OBP in an era where there was a dearth of offense. He also won three World Series, in 1959, 1963, and 1965–all with the Dodgers, with whom he spent the first 11.5 years of his 21 years in the majors. He was adequate throughout most of his tenure in Los Angeles; after his first three seasons (1958-60) when he didn’t start, he put up at least 1.9 WAR in each of the next six seasons (1961-66).

And then along came his putrid 1967.  “How putrid?” you inquire.  Very putrid, I reply.  His defense (-15.7 Def in 1167.1 innings) and baserunning (-1.5 BsR) were as weak as they’d ever been, but this year was all about the offense.  A disciplined player at the plate, Fairly took the base on balls quite a bit in his career–about once in every eight trips to the plate (12.5%).  While his BB% of 9.7% was down from that average, and the lowest of his career to that point, it was basically in line with the MLB average of 9.8%; in addition, his strikeout rate of 9.2% (compared to 10.4% for his career) was well below the MLB average of 15.8%.

Like with Grissom, Fairly’s issues were chiefly with balls in play. He didn’t have a whole lot in the way of power, as his .101 ISO was a good deal below the major league-average of .148, and below his respectable career ISO of .142. The luck dragons were also not particularly fond of him that year; his BABIP freefell to a hapless .224. Even in the year prior to The Year Of The Pitcher, the major leagues still managed to hit .255/.302/.404, with a .280 BABIP and a .148 ISO; Mr. Fairly “hit” .220/.295/.321, which gave him a .277 wOBA and an 82 wRC+ (as opposed to .329 and 113 for MLB). In the end, he had -0.8 WAR for the year, which tied him with Zoilo Versalles for the LVP title⁸.

In summary, ISO and BABIP were the main reasons for Fairly’s nauseating 1967; each was down from .177 and .288 figures the year before, and after another down year in 1968 (.066 and .259)⁹, they would rebound to .202 and .270 in 1969, and would remain high as Fairly enjoyed the best years of his career in Montreal¹º.

6. Eddie Yost

 Career WAR: 37

LVP year/WAR: 1947/-0.8

Classification: Start-of-career bump

As Matt Klaassen wrote last year, one can only express sorrow when looking back at the career of Mr. Yost, who played 60 years too early. For his career, the third baseman had a modest .254 batting average and .371 slugging percentage, but a phenomenal .394 on-base percentage, due to a 17.6% career walk rate that, as the article points out, is second only to Barry Bonds since 1940. However, this was in the pre-pre-pre-Moneyball days, before many people knew about any stats, much less “advanced” stats like OBP. So, sadly, Yost is doomed to walk the earth as a forgotten man. Well, not really–he died last year–but you get what I’m saying.

In 1947, however? Eddie Yost wasn’t underappreciated, as anyone who knew anything about baseball could see that he was awful. After logging 47 combined plate appearances in 1944 and 1946 (and joining the navy in 1945, just in time for the end of the war), Yost finally got a chance to start in 1947, getting 485 plate appearances for the Washington Senators. How did he do with those plate appearances? Well…

He accrued free passes, or so it would appear at first glance; his walk rate of 9.3% would be sterling in this era, what with our major league walk rate of 7.9%. Back then, however, the major league walk rate was 9.7%, so he was actually below average; moreover, the major league strikeout rate was 9.6%, meaning his 11.8% strikeout rate was worse than average. His defense–which would never be that good, as his -91.7 career Def shows–was also rather lousy, to the tune of -6 Def, as was his baserunning (-1.5 BsR). However, it’s possible to play at an elite level with poor plate discipline (as Carlos Gomez sure has shown) and with poor fielding (as Miguel Cabrera sure has shown); what, then, made him so dissatisfactory?

Well, as that annoying bundle of sticks that the women love might say, it was all about (hashtag) that power. He had a .054 ISO; even in a year where the average ISO was .117, that’s not exactly Miggy levels. His BABIP was right around average (.275, to .277 for the majors), but this Pierre-like power, coupled with the aforementioned above-average strikeout rate, gave him a batting average of .238 and a slugging percentage of .292. His on-base percentage was a solid .312; however, the major league-average OBP was .336, along with a .261 AVG and .378 SLG. All of this conspired to give him a .277 wOBA, 84 wRC+, and -17.1 Off; when Green Day was awakened, Yost had -0.8 WAR, which beat out Jerry Priddy (-0.6) for the LVP title.

From there, Yost only got better. Yeah, there wasn’t much worse he could get, but his WAR still improved in each of the next four seasons, and he was a two-win player for seven straight years (1950-56). He pinnacled with 6.2 WAR in 1959, when he even hit for decent power, socking 21 dingers (with a .157 ISO) in his first year after leaving The Black Hole Known As Griffith Stadium¹¹. It’s a shame he had to start out as shittily as he did.

GROUP III: A ROCK AND GUITAR PLACE¹² (OR, AGING IN THE OUTFIELD, PART II)

5. Dave Parker

Career WAR: 41.1

LVP year/WAR: 1987/-0.6

Classification: End-of-career decline

Paul Swydan reminisced on the career of Parker earlier this year after it was announced that he (i.e. Parker) had Parkinson’s disease. I’ll briefly rehash Swydan’s reporting here.

Parker’s job when called up was to replace the player who is #1 on this list, and obviously, those were some big shoes to fill. Parker did not shrink from the spotlight, however, as he produced 30.3 WAR over his first five full seasons (1975-79), which tied him with The Despised One for fourth-most in the majors over that span. He attained quite a bit of hardware over this time as well–the NL MVP trophy in 1978 and Gilded Gloves¹³ in 1977-79–and won batting titles in 1977 and 1978.

These five years were the best of his career, and after the fourth year (1978) he was rewarded with a five-year, $5 million contract–the largest in MLB history at the time. However, much like a certain power-hitting Pennsylvanian today, he would struggle to live up to this contract; after the first year (1979), when he put up 5.7 WAR and was instrumental in helping the Pirates win the Fall Classic, he would only log 1660 plate appearances over the final four years of the deal (and only contribute 1.6 WAR in those years). This was due to several factors, namely his affinity for a certain white substance that is, generally speaking, frowned upon in our society; his continued usage of said substance earned him a full-year suspension for the 1986 season, which he was able to circumvent via community service, donations of his salary, and submission to random drug tests.

After his contract expired in 1983, he signed with the Reds, and his production was essentially the same as in the last few years in Shitts¹⁴ Pittsburgh (save for his fluky, 5.4-WAR 1985 season) and as it would be for the rest of his career. There was one year in particular, though, where he really hit rock bottom: the year that followed the suspension year.

In the year in question–his last in Cincy–Parker was, shall we say, no muy bueno. A career .290/.339/.471 hitter, Parker hit .253/.311/.433 in 1987. While one might attribute this to his 16.1% strikeout rate and 6.8% walk rate (compared to 15.5% and 8.9%, respectively, for the majors), these numbers are actually pretty analogous to his career numbers (15.1% and 6.1%, respectively). His power was also comparable to his career figures (.180 ISO in 1987, .181 for his career).

For Mr. Parker (like for so many of the others on this list) the issues arose from two areas: when the other team fielded the balls he hit, and when he fielded the balls the other team hit. Parker’s .265 BABIP was the lowest of his career for a full season, not to mention being 49 points below his career BABIP of .314 and 24 points below the major league-average BABIP of .289. His glovework also left something to be desired¹⁵–he posted a -17.8 Def, which was second-worst in the majors.

Overall, his hitting wasn’t too dreadful–his triple-slash was good enough for a .316 wOBA and 87 wRC+. He wasn’t even that bad, period–his WAR was -0.6, a relatively good figure. Out of the 143 seasons, only 5 players were LVPs with a higher WAR; Parker was just unfortunate enough to have a down season in a year where the only serious competition for the LVP was Cory Snyder (-0.5).

One last thing on Parker: As painful as his 1987 season was, his last season (1991) was even worse–he had -1.2 WAR in only 541 plate appearances for the Angels and Blue Jays. Unfortunately, he was robbed of the LVP crown by some scrub named Alvin Davis. Hah! What a loser! It’s not like any site editor has ever vehemently defended Davis and would cancel my account if I were to insult Mr. Mariner!

(Your move, Cameron.)

4. Bernie Williams

Career WAR: 44.3

LVP year/WAR: 2005/-2.3

Classification: End-of-career decline

Williams’ case for Cooperstown has generated quite a bit of controversy, over everything from the impact and weighting of postseason play to the cost of defense to True Yankeedom. That point, however, is now moot, as Williams is off the ballot; whether or not he deserves to be is a can of shit I’d rather not open right now.

With Williams, the superlatives are certainly present–five All-Star appearances (1997-2001), four Gilded Gloves¹³ (1997-2000), four World Series rings (1996, 1998-2000), a Silver Slugger (2002), and the ALCS MVP in 2002. He also had seven 4-WAR seasons over an eight-year period (1995-2002), and was the 12th-most valuable position player in baseball over that time. Plus, he was, y’know, a True Yankee.

It’s the years that followed this period that I’m focusing on. After that eight-year run of dominance, Williams fell off a cliff (as Paul Swydan explained earlier this year), putting up…wait for it…-3.3 WAR over the next four seasons, dead last in the majors in that span. Most of that negative WAR came from one year: 2005, the second-to-last of Mr. Williams’ career.

In said year, Williams (he of the career .297/.381/.477 triple-slash) batted a mere .249/.321/.367. The PAWSTMOMNEP Era was beginning to transition into the Era of the Pitcher, but offenses were still favored–the major league triple-slash was .264/.330/.419–and Williams’ batting (and -1.4-run baserunning) was enough to give him a .305 wOBA, 85 wRC+, and -11.2 Off. His strikeout rate in 2005 (13.7%) mirrored his career rate (13.4%) and was much better than the MLB rate (16.4%), and while his walk rate of 9.7% was low by his standards (career walk rate of 11.8%), it was still considerably better than the MLB average of 8.2%; his batted-ball rates were also all right (19% LD/43.5% GB/37.5% FB). It was a poor BABIP (.270, as opposed to .318 for his career and .295 for MLB) and a poorer ISO (.118, as opposed to .180 for his career and .154 for MLB) that did him in.

But Williams’ offensive struggles pale in comparison to his defensive ineptitude. Never the greatest with the glove, Williams hit a new low in 2005¹⁶, as he had a -30.2 Def in 862.2 innings. In terms of UZR, he was 29.2 runs below average, and because he didn’t play that much, his performance extrapolates to 42.5 runs below average per 150 games. As a point of reference, Adam Dunn’s worst career UZR/150 as an outfielder was -39.2 in 2009. So, yeah.

The disconcerting work in the field by Williams, coupled with ineffectiveness at the plate, gave him a -2.3 WAR for the year; this put him in a class of his own, as the next-worst player was Scott Hatteberg, whose -0.7 WAR was a full 1.6 behind him. Maybe, if his production hadn’t completely deteriorated at the tail end of his career, Williams would be in the Hall right now, and the point would still be moot, but for a better reason.

GROUP IV: CENTRAL COMPETITORS

3. Ted Simmons

Career WAR: 54.2

LVP year/WAR: 1984/-2.4

Classification: End-of-career decline

Look, I’m not saying Ted Simmons should go to the Hall of Fame. What I will say is this:

           Career PA  Career WAR
Player A   9685       54.2
Player B   9278       54.9

Simmons is player A. Ichiro Suzuki is player B. Just sayin’.

Anyway, regardless of one’s opinion on Mr. Simmons, one cannot deny that he was an excellent all-around player for most of his career. People of his day certainly didn’t, as he was named to eight All-Star teams (1972-74, 1977-79, 1981, 1983) and won a Silver Slugger (1980). He was generally regarded as the second-best hitting catcher of his era, behind Johnny Bench, and while catchers are held to a lower offensive standard than most other players, he was no slouch with the bat–his career wRC+ was 116, better than Adrian Beltre and Andruw Jones. His defense was mediocre (53rd out of 113 catchers in Def over the course of his career), but he was still outstanding, as the above comparison should show.

Like all mortal men, though, Simmons’ production decayed as he aged; after contributing at least three wins in 12 of 13 seasons from 1971 to 1983, he was at or below replacement level in four of his last five years. In the first of those five years, he was a special kind of awful.

In 1983 (the last year of the 13-year period), Simmons was actually quite valuable, to the tune of 3.7 WAR for the Brewers. The next year…well, everything fell apart. His triple-slash collapsed from a healthy .308/.351/.448 to a sickly .221/.269/.300, which took his wOBA from .352 to .259, his wRC+ from 122 to 60, and his Off from 16.6 to -24.6. His plate discipline was fairly similar in both years (6.3% BB%, 7.8% K% in 1983; 5.6% and 7.5% in 1984), although his walk rate in both years was a good deal below his career average of 8.8%. This meltdown was primarily caused by a David Murphy-like¹⁷ dropoff in BABIP, from .317 to .233, and a near-halving of ISO, from .140 to .078; both of these numbers were a good deal below Simmons’ career numbers (.284 and .152, respectively). What’s more, he spent most of his time at DH, meaning he was held to a higher offensive standard; thus, these already bad numbers were reduced even further.

When Simmons did play in the field (at first and third, not catcher), his defense was somewhat rotten, as he posted -4 TZ in 457.1 combined innings. Add in the positional penalty for spending most of his time not fielding at all, and his Def dropped from -3 in 1983 to -14.5 (12th-worst in the majors) in 1984.

When the dust had settled, Simmons was left with -2.4 WAR, which gave him a comfortable lead over Curtis Wilkerson (-1.1) for the LVP honor. Simmons never really got much better than this–he gave the Brew crew 1 WAR in 1985, before costing the Braves a combined 1 win as a utility man over the next three years. Retiring at age 39, Simmons might’ve been enshrined if he had kept up his consistency into his late 30s.

While Simmons was arguably Hall-worthy, there’s no arguing over these next two. Well, there actually is a fair amount of arguing over this next guy, but…never mind.

2. Pete Rose

Career WAR: 80.3

LVP year/WAR: 1983/-1.9

Classification: End-of-career decline

Yes, the LVP in back-to-back years was a very valuable player overall. Surprised? Well, that’s allowable–when you started reading this, you probably had no idea what you were reading in the first place, much less if it would involve two exceptional players performing uncharacteristically poorly in two consecutive years.

Moving on…Rose hardly needs an introduction, but I’ll give him one anyway. Seventeen All-Star nods (1965, 1967-71, 1973-82, 1985); a three-time batting champion (1968-69, 1973) and three-time World Series champion (1975-76, 1980); two Gilded Gloves (1969-70); a Silver Slugger in 1981; the NL ROTY¹⁸ and MVP in 1963 and 1973, respectively; and one of the better (and most deserved)¹⁹ nicknames in baseball. Plus, there was that whole 4,256 hits thing, but nobody cares about that.

Rose spent most of his career with the Reds, winning two of his three championships with them. The third? That was won with the Phillies, with whom he spent five mostly forgettable years at the end of his career. After averaging 4.7 WAR over his first sixteen seasons with the Reds (and being the fourth-most valuable player in baseball over that span), Rose only earned 3.8 WAR in the next five years combined. To be fair, he was at least middling for the first four years, finishing above replacement level in each. In the last year (1983), however, he was anything but middling–and not in a good way.

1983 wasn’t a particularly great year for hitters–the aggregate MLB triple-slash was .261/.325/.389; nonetheless, Rose was still considerably below those numbers. Never one to hit for power (his best ISO was .164 in 1969), he sank to new lows in 1983, pulling a Ben Revere;²º his ISO at season’s end was a repulsive .041, considerably lower than the major league ISO of .128. BABIP also wasn’t too kind to him, as he posted a .256 mark in that department, a good deal below the major league BABIP of .285. His plate discipline was superb (9.4% and 5% walk and strikeout rates, compared to 8.4% and 13.5%, respectively, for the majors), but this could not compensate for his failings in the other areas, and he ended up with a .245/.316/.286 triple-slash, with a .277 wOBA, 68 wRC+, and -21.6 Off.

He didn’t do himself any favors on the basepaths (-1.5 BsR) or in the field (-8 TZ, -14.7 Def in 1034 innings between first and the outfield), but he was never the greatest in those categories. In this one year, he was an all-bat guy with no bat²¹, and that usually doesn’t have good results.

When all was said and done, Rose’s WAR was -1.9, a figure that easily bested Dave Stapleton (-0.7) for the LVP title. One last note on Rose: Unlike the #1 player on this list, people seem to be cognizant of Rose’s awfulness in his LVP year–probably because Rose’s year occurred near the end of his career, and he never received a chance to outshine it. This next guy, though? Well…

1. Roberto Clemente 

Career WAR: 80.5

LVP year/WAR: 1955/-0.9

Classification: Start-of-career bump

Yeah, I’d say he made up for one bad season. Either that, or fifteen All-Star games (1960-67, 1969-72)²², twelve consecutive Gold Gloves²³ (1961-72), four batting crowns (1961, 1964-65, 1967), the NL MVP (1966), the WS MVP (1971), and 80.5 career WAR were all for naught.

Despite all of the undeniable awesomeness of Clemente’s career as a whole, there is still that one blemish on his record: his less-than-perfect rookie year. After being drafted by the Pirates in 1954, Clemente was given the opportunity to start immediately, and the outcome wasn’t very good.

Clemente was never a particularly patient hitter, with a career BB% of 6.1%; in 1955, however, he was especially anxious, with a 3.6% BB% that was a good deal below the major league-average of 9.5%, and was the second-worst individual figure in the majors²⁴. He didn’t strike out that much, as his 12% K% wasn’t too much worse than the major league-average of 11.4%, and was identical to his career rate.

Always a high-BABIP guy (with a career number of .343), Clemente’s balls were unable to find holes in 1955, as he had a BABIP of .282 that, while being higher than the major league BABIP of .272, was still the second-lowest of Clemente’s career. His power also hadn’t developed yet²⁵, as his .127 ISO (which was also similar to the major league ISO of .136) was much lower than his career ISO of .158. Overall, Clemente batted .255/.284/.382, with a .294 wOBA, 73 wRC+, and -18.5 Off. His baserunning wasn’t much of a factor (-1.3 runs, compared to 2.1 total for his career), but on defense, he was substandard, putting up a -4.6 Def that was 21st out of 31 outfielders²⁶.

This all coalesced into a WAR of -0.9, which allowed Clemente to beat Don Mueller (-0.7) for the LVP title by a narrow margin. It would take Clemente a few years to really get going; from 1956 to 1959, he was worth a modest 8.9 WAR²⁷ (44th in the majors) as he struggled through various injuries. By 1960, though, he was healthy, and would contribute 72.7 WAR (fourth in the majors) from then until…well, you know.

***

What does all of this mean? Well, the average major league player is worth 2.97 wins over the course of their career; the average player that won an LVP is worth 6.04 wins over the course of their career. Impacted by outliers, you say? Even when the ten players listed here are taken out, the average career WAR of the remaining 133 is 3.05–slightly better than the average for all players. Looking a little deeper, we can see that of the 16,292 players with a plate appearance, 852 (or 5.2%) are worth 20 or more wins over the course of their careers. For LVPs? 15 out of 143, or 10.5%²⁸.

Is this good news for Eric Hosmer and Adeiny Hechavarria? Possibly. Would assuming this is good news for Hosmer and Hechavarria be drawing a causation from a correlation? Probably. I don’t know. What do I know? I know that if you read this all of the way through, I just wasted a sizable chunk of your time. And in my book, that is a job well done.

——————————————————————————————————-

¹Or the 30th, technically. Whatever.

²As far as I can tell, that’s an original joke. Feel free to chastise me if I’m wrong.

³How exactly do I cite the Def stat? Is it a plural type of thing, like “Player X had 5 Defs last year”, or a singular, like…well, like I wrote in the post?

⁴Spellcheck, you have crossed a line.

⁵The worst fielder in the majors, according to Def? Carlos Beltran and his -21.4. Yes, the three-time Gold Glove-winning, two-time Fielding Bible-winning Carlos Beltran. Yes, I’m also unsure how to react.

Why has no one else realized this, though? Cameron? Sullivan? God forbid, Cistulli? I’m looking at you all! Write something about this, or else I’ll be forced to!

⁶You know, the Possibly Affiliated With Substances That May Or May Not Enhance Performance Era.

⁷Hey, it’s South Park’s words, not mine.

⁸Versalles’ career was also rather notable–and not just for his abnormal appellation.

⁹To be fair, every hitter had a down year in 1968.

¹ºOne doesn’t receive too many opportunities to type this (much to the chagrin of Mr. Keri).

¹¹According to the B-R bullpen, there were only two–count ’em, two–fair balls ever hit out of Griffith. TWO! In 51 motherfucking seasons! So Safeco ain’t so bad, I guess.

¹²Get it? ‘Cuz Parker did…and Williams did…oh, never mind.

¹³Not a typo (read on).

¹⁴Sorry, force of habit (I’m a Ravens fan).

¹⁵In a manner not dissimilar to a controversial shortstop of our era (or to the next player on the list), Parker was always one whose reputation overshadowed his production in the field, as neither basic statistics (.965 career fielding percentage, 137th out of 167 players over the course of his career) nor advanced statistics (-127.5 career Def, 688th out of 704 players over the course of his career) were particularly fond of his work with the glove.

¹⁶Actually, every Yankees defender hit a new low in 2005, as their team defense was the lowest of any team in the UZR era (-141.7 runs); this was due in no small part to the horrifying outfield of Williams, Gary Sheffield (-26 UZR), and Hideki Matsui (-15.2 UZR).

¹⁷I’ll have more on that in the coming weeks.

¹⁸How does one abbreviate Rookie of the Year? ROTY, RotY, or some combination of the two?

¹⁹Deserved on more than one level.

²ºI shouldn’t need to explain what I mean by that. Also, Rose pulled a Ben Revere in 1984 and 1986, albeit in non-qualifying seasons.

²¹I know I’ve heard that expression before–I think it was used to describe Jesus Montero–but I can’t seem to find where it was used.

²²No, I didn’t do my math wrong–from 1959 to 1962, MLB had two All-Star games.

²³I was going back and forth about whether to call this one gilded; over the indicated time span, Clemente was eighth among outfielders in Def–quite good, but not the best (as winning a Gold Glove in each year would imply).

²⁴It was not, however, the worst mark of his career, as he would post walk rates of 2.3% and 3.3% in 1956 and 1959, respectively.

²⁵In terms of power, Clemente was a late peaker, with five of his six highest ISOs coming during or after his age-31 season (1966). I’d be interested in knowing how common that is.

²⁶For whatever reason, defensive records for each individual outfield position only go back to 1956–any time before that, it’s all just lumped into “Outfield”. Also, for that matter, innings played on defense only go back to that date as well, which doesn’t seem to make sense, given that play-by-play data is available back then.

²⁷Interestingly enough, Clemente’s greatest defensive seasons were during this period. He had 20.7 Def in 1958, and 19.3 in 1957; his next-best season with the glove was in 1968 (16.8).

²⁸The other five (besides the ten listed here): Milt Stock (22.3 career WAR, -2.7 in 1924); Alvin Davis (21.1 career WAR, -1.6 in 1991); Raul Ibanez (20.5 career WAR, -1.7 in 2011); Jason Bay (20.3 career WAR, -1.1 in 2007); and Buck Weaver (20.3 career WAR, -1.1 in 1912).