Power and Patience (Part V of a Study)

One, two, one, two, three, four.

Sorry. Those were links to the first four parts. Anyway, now it’s time to fill the circle of this series. This final piece isn’t really much of an analysis, but sort of a potpourri of interesting trivia. Trivia’s where these five weeks started, after all. Hopefully there was sufficient analytical substance to the first four parts. (Or any.)

Here is an interesting tidbit to start: only two batting title qualifiers have ever had a higher ISO than OBP in a season. One was Barry Bonds in his insane, 73-HR 2001 season (.536 ISO, .515 OBP–I told you it was insane). The other was Matt Williams in 1994. Take a look at the 1994 OBP and ISO scatter chart among qualifiers, with a line of y=x for reference:

I trust you to figure out which one belongs to the current manager of the Washington Nationals. He had a .319 OBP and a .339 ISO that season. (And, FYI, that lonely dot in the lower left belongs to 24-year-old Twins catcher Matt Walbeck and his .204/.246/.284 in 359 PA. And that one insanely close to the .500 OBP? Frank Thomas.)

And Barry Bonds’s 2001? Well, just take a look:

Yeah.

(I kind of wanted just to show that chart.)

Given that only two players ever had even a single season, let alone a career, with a higher ISO than OBP, a good way to measure a player's relative prowess at each facet of hitting is to look at the gap between those statistics.
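(For the curious, a list like the one that follows can be pulled from a FanGraphs career-leaderboard export with a few lines of Python. This is only a sketch; the file name and column names below are placeholders for whatever the export actually uses.)

    import csv

    # Load a career leaderboard export (hypothetical file and column names).
    with open("players.csv", newline="") as f:
        players = list(csv.DictReader(f))

    # Keep the qualifiers used here: 3000+ PA and a career OBP below the historical .333 average.
    qualifiers = [p for p in players
                  if int(p["PA"]) >= 3000 and float(p["OBP"]) < 0.333]

    # Rank by the OBP-ISO gap, smallest first (Kingman territory).
    qualifiers.sort(key=lambda p: float(p["OBP"]) - float(p["ISO"]))

    for p in qualifiers[:10]:
        gap = float(p["OBP"]) - float(p["ISO"])
        print(f'{p["Name"]}: OBP {float(p["OBP"]):.3f}, ISO {float(p["ISO"]):.3f}, gap {gap:.3f}')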

Care to guess the player with a career OBP below the historical average of .333 who has the smallest gap between his career OBP and ISO? To the surprise of nobody, it’s:

Dave Kingman

Kingman posted a career .302 OBP and .242 ISO, making him the ultimate in empty power. By Kingman's last year, 1986 with Oakland, all he could do was hit home runs. He had 35, while hitting .210/.255/.431, which even in 1986 was only good for a wRC+ of 86. Kingman also has the second-highest ISO, period, among those with a sub-.333 OBP, behind Russell Branyan (.253 ISO, .329 OBP).

Expand this list, by the way, and it feels like a pretty accurate indicator of players who provided solid and at times even great power, but weren’t great offensive players. The top 10: Kingman, Steve Balboni, Ron Kittle, Branyan, Tony Armas, Alfonso Soriano, Dick Stuart, Matt Williams, Tony Batista and Mark Reynolds. The debuts of those players range from 1958 (Stuart) to 2007 (Reynolds), so this phenomenon is not exactly a 21st century one. It does, however, divide pretty well along pre- and post-expansion lines.

Among players who debuted before Stuart, the next smallest gap here belongs to a Hall of Famer: Ernie Banks, with a .330 OBP and .226 ISO. He’s 18th on the list, so that’s about where the last paragraph’s thesis breaks down. During his career, 1953-71, the league-wide non-pitcher OBP was .329, so Banks was about average reaching base, but provided a ton of value from his years at shortstop and his power (1953-71 ISO: .135).

Wally Post is 19th, and he debuted in 1949, making him the top pre-1950 debut player on the OBP minus ISO list; the smallest gap for someone who debuted before 1940 belongs to DiMaggio, who debuted in 1937. He ended up with a .324 OBP and .164 ISO in his 10 seasons with the Bees, Reds, Pirates and Giants. We're talking, of course, about Vince DiMaggio, not Dom.

Go back all the way to 1901 and you find the career of:

Albert Samuel “Hobe” Ferris

Hobe Ferris played from 1901-09 and never led the league in home runs, but was in the top 7 five times in a nine-year career on his way to 40 career home runs. His .102 career ISO came in a time frame when league-wide non-pitcher ISO was .077, but he only produced a career .265 OBP (vs. the league’s .310). A second- and third-baseman with a good defensive reputation (backed up today by his +70 career fielding runs on Baseball Reference), he also may have been the first power threat in MLB history who didn’t reach base effectively. His best season was actually during the nadir of the dead ball era, his penultimate year in 1908 when he hit .270/.291/.353 for a 109 wRC+. This was mostly due to an unusually efficient year reaching base, but even his .083 ISO was better than the league’s .069.

All-time, however, Ferris’s OBP-ISO gap ranks as just the 166th smallest out of 692 who meet the 3000 PA, sub-.333 thresholds. The 167th smallest belongs to another turn-of-the-century player, the infamous Bill Bergen, who was just bad at everything. In general, you’re just not going to find turn of the century players whose ISO’s are particularly close to their OBP’s, because ISO’s were so low 100 years ago.

To start getting into the other types of players–good OBP, not so good power–let’s remove any cap on the OBP and see what happens at both ends of the list of OBP and ISO gaps. Again, 3000 PA is the cutoff.

10 Lowest Gaps: Kingman, Mark McGwire, Balboni, Kittle, Branyan, Juan Gonzalez, Sammy Sosa, Ryan Howard, Armas, Soriano

10 Highest: Roy Thomas, Miller Huggins, Eddie Stanky, Eddie Collins, Max Bishop, Richie Ashburn, Ferris Fain, Johnny Pesky, Luke Appling, Muddy Ruel

So, apparently Mark McGwire's .263 career batting average is a little misleading…as in, perhaps the most misleading batting average of all time. He posted a .394 OBP and .325 ISO. The other three players who weren't on the earlier list, when OBPs of .333 and above were excluded, are Gonzalez, Sosa, and Howard. None of them have spotless resumes, but they are bound to be the 2nd to 4th best hitters on that list in most any ranking of these players, subjective or objective. After Howard, the next few players on this list who had an OBP above .333: Richie Sexson (15th), Albert Belle (20th), Jose Canseco (25th), Andruw Jones (28th) and Greg Vaughn (30th). All probably better hitters than Kingman and certainly better hitters than Balboni.

Meanwhile, Roy Thomas has the highest such difference, with a line from 1901-11 of .282/.403/.329. (He debuted in 1899.) From 1900-06, Thomas led the majors in walks every year except 1905. He hit a fascinating .327/.453/.365 in 1903, for a 138 wRC+.

We might think that everybody with a large gap is from the dead ball era, but such is not the case. Richie Ashburn (1948-62) and Luke Appling (1930-50) carved out Hall of Fame careers. They got away with a lack of power by hitting .300 in their careers. These next two players weren't career .300 hitters, instead providing value with high walk rates, and no discussion of players who got on base but didn't hit for power would be complete without them:

Eddie Stanky and Ferris Fain

Stanky (.410 OBP, .080 ISO) played from 1943-53 and Fain (.424 OBP, .106 ISO) from 1947-55, and they might be the two most famous players in MLB history in terms of reaching base without being much of a power threat. They were pioneers of the you're-never-pitching-around-me-but-I-will-foul-off-pitches-and-work-a-walk-anyway school of hitting, especially Stanky, who only hit .268 and slugged .348 in his career. (Roy Thomas could have been the "pioneer" of this if power were more of a thing when he played.) Stanky's most striking season in this regard was probably 1946, when he hit .273/.436/.352. Fain, meanwhile, had a .455 OBP and .066 ISO in his last season in 1955.

Just as the first list in this piece lacked many dead-ball era players, this list of large OBP-ISO gaps seems to lack 21st (and late 20th) century players. The first player to debut after 1980 that we meet on the list, in 13th place?

Luis Castillo

Castillo’s offensive production was almost entirely in his .290 batting average. If batting average says little about McGwire, it says almost as little about Castillo, who posted a career .368 OBP and .061 ISO.

The first good hitter on the list (with his career 97 wRC+, Castillo was decidedly average) is Dave Magadan, 23rd, with a .390 OBP and just a .089 ISO. He had a 117 career wRC+. Magadan’s 1995 season with Houston was his wildest as he managed an OBP of .428 with an ISO of just .086.

Two spots below Magadan is one of the three who started us down this month-plus-long path:

Wade Boggs

Boggs had a .328/.415/.443 career line for a 132 wRC+. In his rookie season in 1982 (381 PA), he was already producing a .406 OBP…with an ISO of just .092.

We might as well wrap up with our other two above-.400 OBP, under-.200 ISO players since 1961. Joe Mauer (.405 OBP, .146 ISO) and Rickey Henderson (.401 OBP, .140 ISO) have wRC+'s of 134 and 132 respectively. Their OBP-ISO gaps of .259 and .261 rank among the 200 largest gaps, or roughly the 90th percentile.

There are plenty more angles, more than I can cover, that one could take with this. At this link you can find the list of players with 3000 PA since 1901, ordered from the largest OBP-ISO to the smallest, with extra stats (as I didn’t change or remove the default dashboard stats).


The R.A. Dickey Effect – 2013 Edition

It is widely said by announcers and baseball fans alike that knuckleball pitchers can throw hitters off their game and leave them in funks for days. Some managers even sit certain players to avoid this effect. I decided to analyze the data to determine whether there really is an effect and what its value is. R.A. Dickey is the main knuckleballer in the game today, and with the extra velocity he throws, he is a special breed.

Most people who try to analyze this Dickey effect group all the pitchers who follow him into one bucket with one ERA and compare it to the total ERA of the bullpen or rotation. This is a simplistic and non-descriptive way of analyzing the effect, and it ignores how those pitchers perform when they are not pitching after Dickey.

Dickey's Dancing Knuckleball (@DShep25)

I decided to determine whether there truly is an effect on the statistics (ERA, WHIP, K%, BB%, HR%, and FIP) of pitchers who follow Dickey in relief, and of the starters of the next game against the same team. I went through every game Dickey has pitched and recorded the stats (IP, TBF, H, ER, BB, K) of each reliever individually, and of the next game's starting pitcher if that game was against the same team. I did this for each season. I then took each pitcher's stats for the whole year and subtracted his following-Dickey stats from them, giving his stats when he did not follow Dickey. I summed the stats for following Dickey, weighting each pitcher by the batters he faced over the total batters faced after Dickey, and calculated the rate stats from that total. The same weight was then applied to the not-after-Dickey stats. So, for example, if Janssen faced 19.11% of the batters after Dickey, his numbers were adjusted so that he also faced 19.11% of the batters not after Dickey. This makes the two samples directly comparable, so an accurate relationship can be determined. The not-after-Dickey stats were then summed and their rate stats calculated as well. The two sets of rate stats were compared using the formula (afterDickeySTAT - notafterDickeySTAT)/notafterDickeySTAT, which expresses how much better or worse relievers or starters did when following Dickey, as a percentage.
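(A minimal sketch of that weighting, for anyone who wants to reproduce it; the dictionaries and numbers below are made up for illustration and are not the actual worksheet.)

    def weighted_rate(pitchers, stat, denom="TBF"):
        """Weight each pitcher's rate stat by his share of batters faced after Dickey."""
        total_after = sum(p["after"][denom] for p in pitchers)
        rate_after = rate_not = 0.0
        for p in pitchers:
            share = p["after"][denom] / total_after            # e.g. Janssen at 19.11%
            rate_after += share * (p["after"][stat] / p["after"][denom])
            rate_not += share * (p["not_after"][stat] / p["not_after"][denom])
        return rate_after, rate_not

    def effect(after, not_after):
        """(afterDickeySTAT - notafterDickeySTAT) / notafterDickeySTAT."""
        return (after - not_after) / not_after

    # Toy example with two relievers and made-up strikeout totals.
    relievers = [
        {"after": {"TBF": 80, "K": 22}, "not_after": {"TBF": 240, "K": 60}},
        {"after": {"TBF": 40, "K": 8},  "not_after": {"TBF": 200, "K": 44}},
    ]
    k_after, k_not = weighted_rate(relievers, "K")
    print(f"K% after: {k_after:.3f}, not after: {k_not:.3f}, change: {effect(k_after, k_not):+.1%}")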

I then added up the after-Dickey stats for starters and relievers from all four years, along with the not-after-Dickey stats, and applied the same weighting technique: if 2012 Niese faced 10.9% of all batters faced by starters following a Dickey start against the same team, his numbers were adjusted so that he also faced 10.9% of the batters faced by those starters when not following Dickey (counting only the starters who pitched after Dickey that season). The same approach as the year-by-year calculation was used, and a total percentage for each stat was calculated.

The most important stat to look at is FIP. This gives a more accurate value of the effect. Also make note of the BABIP and ERA, and you can decide for yourself if the BABIP is just luck, or actually better/worse contact. Normally I would regress the results based on BABIP and HR/FB, but FIP does not include BABIP and I do not have the fly ball numbers.

The size of each sample is also included; aD means after Dickey and naD means not after Dickey. Here are the results for starters following Dickey against the same team.

Dickey Starters

It can be concluded that starters after Dickey see an improvement across the board. Like I said, it is probably better to use FIP rather than ERA. Starters see an approximate 18.9% decrease in their FIP when they follow Dickey over the past 4 years. So assuming 130 IP are pitched after Dickey by a league average set of pitchers (~4.00 FIP), this would decrease their FIP to around 3.25. 130 IP was selected assuming ⅔ of starter innings (200) against the same team. Over 130 IP this would be a 10.8 run difference or around 1.1 WAR! This is amazingly significant and appears to be coming mainly from a reduction in HR%. If we regress the HR% down to -10% (seems more than fair), this would reduce the FIP reduction down to around 7%. A 7% reduction would reduce a 4.00 FIP down to 3.72, and save 4.0 runs or 0.4 WAR.
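(The runs-and-WAR arithmetic above is just the standard FIP-to-runs conversion, roughly sketched below; the ~10 runs per win figure is an assumption, and the small difference from the 10.8 cited above comes from rounding the FIP to 3.25.)

    def runs_saved(base_fip, pct_reduction, innings):
        """Runs saved by lowering a pitching pool's FIP by pct_reduction over the given innings."""
        new_fip = base_fip * (1 - pct_reduction)
        return (base_fip - new_fip) * innings / 9

    # Starters after Dickey: ~4.00 FIP pool, 18.9% FIP reduction, 130 IP.
    runs = runs_saved(4.00, 0.189, 130)
    print(f"{runs:.1f} runs, ~{runs / 10:.1f} WAR")   # about 10.9 runs, ~1.1 WAR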

Here are the numbers for relievers following Dickey in the same game.

Dickey Bullpen

Relievers see a more consistent improvement across the FIP components (K%, BB%, HR%): 11.4%, 8.1%, and 4.9%, respectively. FIP was reduced 10.3%. Assuming 65 innings after Dickey (between the 2012 and 2013 totals) from an average bullpen (or a slightly above-average one, since Dickey will likely have setup men and closers after him) with a 3.75 FIP, FIP would be reduced to 3.36, saving 3 runs or 0.3 WAR.

Combining the un-regressed results, by having pitchers pitch after him, Dickey would contribute around 1.4 WAR over a full season. If you assume the effect is just a 10% reduction in FIP for both groups, this number comes down to around 0.9 WAR, which is not crazy to think at all based on the results. I can say with great confidence that if Dickey pitches over 200 innings again next year, he will contribute above 1.0 WAR just from baffling hitters for the next guys. If we take the un-regressed 1.4 WAR and add it to his 2013 WAR (2.0), we get 3.4 WAR; add in his defence (7 DRS) and we get 4.1 WAR. Even though we all were disappointed with Dickey's season, with the effect he provides and his defence, he is still all-star calibre.

Just for fun, let's apply this to his 2012. He had 4.5 WAR in 2012; add on the 1.4 and his 6 DRS and we get 6.5 WAR, wow! Using his RA9 WAR (6.2) instead (commonly used for knucklers instead of fWAR), we get 7.6 WAR! That's Miguel Cabrera value! We can't include his DRS when using RA9 WAR, though, as it should already be incorporated.

This effect may extend even further: relievers, as well as starters, may (and likely do) get a boost the following day. Assuming it is the same boost, that's around another 2.5 runs or 0.25 WAR. Maybe the second day after Dickey also sees a boost? (That is a much smaller sample, since Dickey would have to pitch the first game of a series.) We could assume the effect is cut in half the next day, and that would still be another 2 runs (90 IP of starters and relievers). So under these assumptions, Dickey could effectively have a 1.8 WAR after-effect over a full season. This WAR is not easy to place, however, and cannot just be added onto the team's WAR; it is hidden among all the other pitchers' WARs (just like catcher framing).

You may be disappointed with Dickey's 2013, but he is still well worth his money. He is projected for 2.8 WAR next year by Steamer, and adding on the 1.4 WAR Dickey Effect and his defence, his true underlying value could be projected at almost 5 WAR. That is well worth the $12.5M he will earn in 2014.

For more of my articles, head over to Breaking Blue where we give a sabermetric view on the Blue Jays, and MLB. Follow on twitter @BreakingBlueMLB and follow me directly @CCBreakingBlue.


The Effect of Devastating Blown Saves

It's a pretty well documented sabermetric notion that pitching your closer when you have a three-run lead in the ninth is probably wasting him. You're likely going to win the game anyway, since the vast majority of pretty much everyone allowed to throw baseballs in the major leagues is going to be able to keep the other team from scoring three runs.

But we still see it all the time. Teams keep holding on to their closer and waiting until they have a lead in the ninth to trot him out there. One of the reasons for this is that blowing a lead in the ninth is devastating—it'll hurt team morale more to blow a lead in the ninth than to slip behind in the seventh. And then this decrease in morale will cause the players to play more poorly in the future, which will result in more losses.

Or will it?

We’re going to look at how teams play following games that they devastatingly lose to see if there’s any noticeable drop in performance. The “devastating blown save” stat can be defined as any game in which a team blows the lead in the ninth and then goes on to lose. Our methodology is going to look at team records in both the following game as well as the following three games to see if there’s any worsening of play. If the traditional thought is right (hey, it’s a possibility!), it will show up in the numbers. Let’s take a look.

All Games (2000-2012)

9+ Inning Games   Devastating BS's   Devastating BS%   Following Game W%   Three Game W%
31,405            1,333              4.24%             .497                .484

In the following game, the team win percentage was very, very close to 50%. Over a sample size of 1,333, that's completely insignificant. But what about the following three games, where the win percentage drops down to roughly 48.4%? Well, that's a pretty small deviation from the 50% baseline, and is of questionable statistical significance. And wouldn't it make sense that if the devastating blown save effect existed at all, it would occur in the directly following game, and not wait until later to manifest itself? It seems safe to say that the "morale drop" of devastatingly losing is likely nonexistent—or at most incredibly small. We're dealing with grown men after all. They can take it.
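(For anyone who wants to replay this, the bookkeeping behind those two win-percentage columns can be sketched like so; the game records below are simplified placeholders, not Retrosheet's actual schema, and "led entering the ninth, then lost" stands in for blowing the lead in the ninth.)

    def is_devastating_blown_save(game):
        """A game the team led entering the ninth but went on to lose."""
        return game["lead_after_8"] > 0 and not game["won"]

    def record_after(games, flagged, span=1):
        """W% in the `span` games following each flagged game (one team's season, in order)."""
        wins = total = 0
        for i in flagged:
            for g in games[i + 1 : i + 1 + span]:
                wins += g["won"]
                total += 1
        return wins / total if total else float("nan")

    # Toy example: three games of one team's season.
    season = [
        {"lead_after_8": 2, "won": False},   # devastating blown save
        {"lead_after_8": -1, "won": True},
        {"lead_after_8": 0, "won": False},
    ]
    flagged = [i for i, g in enumerate(season) if is_devastating_blown_save(g)]
    print(record_after(season, flagged, span=1))   # 1.000 for this toy season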

Another thing you might want to consider when looking at these numbers is that teams with lots of blown saves are probably more likely to be subpar. Not so fast. The win% of teams, weighted by their number of blown ninth innings over the years, is .505. This is probably because better teams are more likely to be ahead in the first place, and so they are going to be on the bubble to blow saves more often even if they blow them a smaller percentage of the time. Just for the fun of seeing how devastation-prone your team has been over the past 13 years, however, here's a table of individual team results.

Devastating Blown Saves By Team (2000-2012)

Team                    Devastating Blown Saves   Next Game W%
Milwaukee               63                        0.460
Chicago Cubs            60                        0.400
Kansas City             57                        0.315
Toronto                 54                        0.592
Chicago White Sox       52                        0.615
Houston                 51                        0.372
NY Mets                 50                        0.560
St. Louis               48                        0.625
Texas                   46                        0.543
Cleveland               46                        0.586
Florida                 45                        0.511
Baltimore               45                        0.377
Oakland                 44                        0.545
Seattle                 44                        0.500
Boston                  41                        0.585
Cincinnati              41                        0.585
Los Angeles             40                        0.425
Detroit                 39                        0.384
Atlanta                 39                        0.743
San Diego               35                        0.400
Anaheim                 34                        0.529
New York Yankees        33                        0.666
Minnesota               33                        0.515
Pittsburgh              32                        0.468
Montreal                25                        0.200
Washington              18                        0.555
Miami (post-change)     8                         0.375

Congratulations, Pittsburgh, you've been the least devastated full-time team over the past 13 years! Now if there's a more fun argument against the effects of devastating losses than that previous sentence, I want to hear it. Meanwhile the Braves have lived up to their nickname, winning an outstanding 74.3% of games following devastating losses (it looks like we've finally found our algorithm for calculating grit, ladies and gentlemen), while the hapless Expos rebounded in just 20% of their games. Milwaukee leads the league in single-game heartbreak, etc. etc. Just read the table. These numbers are fun. Mostly meaningless, but fun.

Back to the point: team records following devastating losses tend to hover very, very close to .500. Managers shouldn’t worry about how their teams lose games—they should worry about if their teams lose games. Because, in the end, that’s all that matters.


Raw data courtesy of Retrosheet.


Weighting Past Results: Hitters

We all know by now that we should look at more than one year of player data when we evaluate players. Looking at the past three years is the most common way to do this, and it makes sense why: three years is a reasonable time frame to try and increase your sample size while not reaching back so far that you’re evaluating an essentially different player.

 The advice for looking at previous years of player data, however, usually comes with a caveat. “Weigh them”, they’ll say. And then you’ll hear some semi-arbitrary numbers such as “20%, 30%, 50%”, or something in that range. Well, buckle up, because we’re about to get a little less arbitrary.

 Some limitations: The point of this study isn’t to replace projection systems—we’re not trying to project declines/improvements here. We’re simply trying to understand how past data tends to translate into future data.

 The methodology is pretty simple. We’re going to take three years of player data (I’m going to use wRC+ since it’s league-adjusted etc., and I’m only trying to measure offensive production), and then weight the years so that we can get an expected 4th year wRC+. We’re then going to compare our expected wRC+ against the actual wRC+*. The closer the expected to our actual, the better the weights.

 *Note: I am using four-year spans of player data from 2008-2013, and limiting to players that had at least 400 PA in four consecutive years. This should help throw out outliers and give more consistent results. Our initial sample size is 244, which is good enough to give meaningful results.
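(Here is a minimal sketch of the procedure, assuming the four-year spans have already been collected; the toy wRC+ lines below are invented purely to show the mechanics.)

    def average_inaccuracy(spans, weights):
        """Mean absolute miss between the weighted three-year wRC+ and the actual fourth-year wRC+.

        Each span is (wrc_year1, wrc_year2, wrc_year3, wrc_year4), oldest season first.
        """
        total = 0.0
        for y1, y2, y3, actual in spans:
            expected = weights[0] * y1 + weights[1] * y2 + weights[2] * y3
            total += abs(expected - actual)
        return total / len(spans)

    # Toy data: two hypothetical hitters' four-year wRC+ lines.
    spans = [(110, 120, 130, 118), (95, 100, 90, 101)]
    print(average_inaccuracy(spans, (1/3, 1/3, 1/3)))     # unweighted
    print(average_inaccuracy(spans, (0.20, 0.30, 0.50)))  # the common "20/30/50"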

 I’ll start with the “dumb” case. Let’s just weigh all of the years equally, so that each year counts for 33.3% of our expected outcome.

 Expected vs. Actual wRC+, unweighted

Weight1   Weight2   Weight3   Average Inaccuracy
33.3%     33.3%     33.3%     16.55

 Okay, so on average we're missing the actual wRC+ by roughly 16.5 points. That means we're averaging about 16.5% inaccuracy when extrapolating the past into the future with no weights. Now let's try being a little smarter about it and try some different weights out.

 Expected vs. Actual wRC+, various weights

Weight1   Weight2   Weight3   Average Inaccuracy
20%       30%       50%       16.73
25%       30%       45%       16.64
30%       30%       40%       16.58
15%       40%       45%       16.62
0%        50%       50%       16.94
0%        0%        100%      20.15

Huh! It seems that no matter what we do, "intelligently weighting" each year never actually increases our accuracy. If you're just generally trying to extrapolate several past years of wRC+ data to predict a fourth year of wRC+, your best bet is simply to average the past wRC+ data without weights. Now, the differences are small (for example, our weights of [.3, .3, .4] were only .03 different in accuracy from the unweighted total, which is statistically insignificant), but the point remains: weighting data from past years simply does not increase your accuracy. Pretty counter-intuitive.

Let's dive a little deeper now—is there any situation in which weighting a player's past does help? We'll test this by limiting our ages. For example: are players that are younger than 30 better served by weighting their most recent years more heavily? This would make sense, since younger players are most likely to experience a true-talent change. (Sample size: 106)

 Expected vs. Actual wRC+, players younger than 30

Weight1   Weight2   Weight3   Average Inaccuracy
33.3%     33.3%     33.3%     16.17
20%       30%       50%       16.37
25%       30%       45%       16.29
30%       30%       40%       16.26
15%       40%       45%       16.20
0%        50%       50%       16.50
0%        0%        100%      20.16

Ok, so that didn’t work either. Even with young players, using unweighted totals is the best way to go. What about old players? Surely with aging players the recent years would most represent a player’s decline. Let’s find out (Sample size: 63).

 Expected vs. Actual wRC+, players older than 32

Weight1   Weight2   Weight3   Average Inaccuracy
33.3%     33.3%     33.3%     16.52
16%       30%       50%       16.18
25%       30%       45%       16.27
30%       30%       40%       16.37
15%       40%       45%       16.00
0%        50%       50%       15.77
0%        55%       45%       15.84
0%        45%       55%       15.77
0%        0%        100%      18.46

Hey, we found something! With aging players you should weight a player’s last two seasons equally, and you should not even worry about three seasons ago! Again, notice that the difference is small (you’ll be about 0.8% more correct by doing this than using unweighted totals). And as with any stat, you should always think about why you’re coming to the conclusion that you’re coming to. You might want to weight some players more aggressively than others, especially if they’re older.

In the end, it just really doesn't matter that much. You should, however, generally use unweighted averages, since differences in wRC+ are pretty much always the result of random fluctuation and very rarely the result of actual talent change. That's what the data shows. So next time you hear someone say "weigh their past three years 3/4/5" (or similar), you can snicker a little. Because you know better.


Two Different Scenarios of a Mike Trout Extension

There has been plenty of conjecture on the timing and amount of Mike Trout’s next contract.  People gravitate towards round numbers and that’s why you often hear talk about ten years and $300 million.  I heard one pundit refer to 10/300 after his first season, and have heard several refer to these figures during this off season.  But is 10/300 even realistic?

The first step of this analysis is to look at the early years of a contract extension. For players who haven't even hit their arbitration years, we've seen discounting of the pre-arbitration and arbitration years on their way to seven- or eight-year contracts. So while the disbursement of money in a player's early years might not be a one-for-one match with what he would get from the arbitration process, it's generally close, if not a little smaller for some players. The theory seems to go that the player trades the potentially bigger payoff of arbitration awards for a secure, guaranteed and somewhat smaller annual contract value on a multi-year deal.

Mike Trout will break records, but not only on the playing field.  If he goes to arbitration, we’ll see amounts not seen for 1st, 2nd and 3rd-year arbitration-eligible players.  We can quibble about what those amounts will be but I’m guessing on the low end they might be $10 M/$15 M/$20 M, and on the high end $15/$20/$25.  Mike Trout has achieved so much in so little time that he might have quite a bit of leverage to earn a full payout of potential arbitration amounts, in the early years of a multi-year contract extension.

So the value of the early years of Mike’s next contract might look like this:

Year signed 1 2 3 4
2014 0.5 15 20 25
2015 15 20 25

Note: the table shows possible values of the early years of his contract.  Actual payments will probably be much different.  If he signs in 2014, then he will likely get much more than $500,000 in year 1.  Or there might be a bonus that gets spread across these early seasons.  I’m stipulating values here because I believe they’re easier to predict.

The rest gets easier, in one sense. What is Mike Trout worth during his free-agent years, from age 26 to approximately 32? Is he worth $30 million, $35 million, or even $40 million per year? Remember, the Angels are buying out his peak seasons. This is the crème de la crème. It's similar to A-Rod from ages 26-32, when he earned $25 million per year in 2001 dollars and was worth every penny.

Angels management might be a little worried about not signing Mike this year, because those free-agent years could get really expensive if he puts up even more stupendous numbers next season. But my question is, should they be worried? That's why I look at two different scenarios. One, sign him this offseason. The second, pay him the minimum again this year and give him the big contract next offseason.

Year signed 1 2 3 4 5 6 7 8 9 10 11 Total
2014 0.5 15 20 25 35 35 35 35 35 35 270.5
2015 15 20 25 40 40 40 40 40 40 40 340

What you notice about scenario one, right off, is that $35 million per year seems like a lot of money. But when you total it up over ten years, the seemingly magic number for big baseball contracts, it only comes to $270 million. For Trout to be paid 10/300, the Angels would have to value his free-agent years at $40 million per year. Dave Cameron's crowdsourcing project on the salary Trout would command for a single season came out to around $40 million. But to guarantee $40 million for six consecutive seasons that are still four years away seems like one helluva lot of risk for the Angels to assume at this point.

Especially because the Angels don't necessarily need to be in a rush to assume that much risk. So I'm making a prediction here. If Mike Trout gets a ten-year contract extension this year, it will be for less than $300 million. I think of $270 million as being a sort of ceiling for him this year. $220 to $250 million might be much more realistic.

That leads us to scenario 2. Sign him in 2015. And let's assume Trout puts up another monstrous season, one that will supposedly make the Angels rue not having secured the big fish to a long-term contract the year before. What are his free-agent seasons valued at, at this point? $40 million is still probably absurd, but let's follow this along and see where it goes. The contract now is 10/$340. But when you look at the average cost of Mike Trout across the years he remains an Angel, you get $27 million across ten seasons in the first scenario, and $30.9 million across 11 seasons in the second scenario. So you're paying a premium of $3.9 million per year for waiting one extra season before signing him. But don't forget, in return for waiting that extra year, you also tack on another year of Mike Trout goodness at the end of his contract.
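(The per-year arithmetic, for reference; the figures are just the ones from the tables above.)

    # Average annual value of each scenario, in $M, using the tables above.
    scenario_2014 = [0.5, 15, 20, 25] + [35] * 6     # ten years starting in 2014
    scenario_2015 = [15, 20, 25] + [40] * 7          # ten years starting in 2015
    print(sum(scenario_2014), sum(scenario_2014) / 10)   # 270.5 total, ~27 per year
    # In scenario two he also plays 2014 at roughly the league minimum, so the
    # cost is spread over 11 seasons as an Angel.
    print(sum(scenario_2015), sum(scenario_2015) / 11)   # 340 total, ~30.9 per year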

When you consider the extra year, the real difference between the two scenarios is $30 to $35 million. That's not pocket change. But consider this: the Angels have paid the Yankees $30+ million to take Vernon Wells off their hands for two years.

The other thing to consider here is whether there is some natural market ceiling on the annual salary for any player. If so, Mike Trout might approach it. Dave Cameron mentioned this possibility in the crowdsourcing piece. If $40 million is just too high a number for any player to be valued at annually, then waiting until next offseason could be the much better scenario if his free-agent seasons top out at $36 or $37 million.

If the Angels can get Mike Trout at, say, 10/240 this season, they should probably jump on it. But if he and his agent aren't budging off 10/270 or higher, it's probably best to wait one more season.


Power and Patience (Part IV of a Study)

We saw in Part Three that the R^2 between OBP and ISO for the annual average of each from 1901-2013 is .373. To find out the correlation between OBP and ISO at the individual level, I set the leaders page to multiple seasons 1901-2013, split the seasons, and set the minimum PA to 400, then exported the 16,001 results to Open Office Calc.

(Yes, sixteen thousand and one. You can find anything with FanGraphs! Well, anything that has to do with baseball. Meanwhile, Open Office operating on Windows 7 decides it’s tired the moment you ask it to sum seven cells. At least it gets there in the end.)
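(For anyone who would rather not fight a spreadsheet, the same computation can be sketched in a few lines of Python, assuming the export is saved as a CSV; the file and column names are placeholders.)

    import csv

    def r_squared(xs, ys):
        """Square of the Pearson correlation between two equal-length lists."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return (cov * cov) / (var_x * var_y)

    # Hypothetical export of the 400+ PA player seasons, 1901-2013.
    with open("player_seasons.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    obp = [float(r["OBP"]) for r in rows]
    iso = [float(r["ISO"]) for r in rows]
    print(round(r_squared(obp, iso), 3))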

The result was .201, so we’re looking at even less of a correlation in a much larger sample compared to the league-wide view. Are there periods where the correlation is higher?

Recall from Part Two that from 1994-2013 the R^2 for the league numbers was .583. Using individual player lines (400+ PA) from those seasons increases our sample size from 20 to 4107 (again splitting seasons). This gives us an R^2 of .232. That’s a little higher than .201, but not very much so.

All in all, it’s not the most surprising thing. On-base percentage and isolated power, mathematically, basically have nothing in common other than at-bats in the denominator. Given that, any correlation between them at all (and there is some) suggests that it either helps players hit for power to be an on-base threat, or vice versa. Not that one is necessary for the other, but there’s something to it. And throughout history, as we saw in Part One, a majority of players are either good at both aspects of hitting, or neither.

In fact, it’s the exceptions to that rule that triggered this whole series, those higher-OBP, lower-ISO players. Again from part one, there were 12 19th century, 21 pre-1961 20th century, and 3 post-1961 20th-21st century players with a career OBP over .400 and ISO below .200.

Much of this can probably be attributed to that consistency OBP has had historically relative to ISO that we observed a couple weeks ago. Continuing with the somewhat arbitrary 1961 expansion era cutoff, from 1961-present, 168 players with 3000 PA have an ISO over .200 and 18 have an OBP over .400; from 1901-60, it was 43 with the ISO over .200 and 31 with the OBP over .400. The .200+ ISO’s are split 80-20% and the .400+ OBP’s are split about 60-40%. The latter is the much smaller gap, as we’d expect. (Some players whose careers straddled 1961 are double-counted, but you get the basic idea.)

But let’s see if we can trace the dynamics that brought us to this point. What follows is basically part of part one in a part three format (part). In other words, we’re going to look at select seasons, and in those seasons, compare the number of players above and below the average OBP and ISO. Unfortunately, it’s hard to park-adjust those numbers, so a player just above the average ISO at Coors and a player just below it at Safeco are probably in each other’s proper place. But that’s a minor thing.

After the league-wide non-pitcher OBP and ISO are listed, you’re going to see what might look like the results of monkeys trying to write Hamlet. But “++” refers to the number of players with above-average OBP and ISO; “+-” means above-average OBP, below-average ISO; “-+” means below-average OBP and above-average ISO; and “- -” means, obviously, below-average OBP and ISO. The years were picked for various reasons, including an attempt at spreading them out chronologically. Notes are sparse as the percentages are the main thing to notice.
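(A sketch of how each season's four percentages can be tallied, if you want to reproduce them; the handful of (OBP, ISO) pairs below are invented just to show the mechanics.)

    def quadrant_shares(players, lg_obp, lg_iso):
        """Share of qualifiers above/below the league non-pitcher OBP and ISO.

        Keys are OBP sign then ISO sign: '++', '+-', '-+', '--'.
        """
        counts = {"++": 0, "+-": 0, "-+": 0, "--": 0}
        for obp, iso in players:
            key = ("+" if obp >= lg_obp else "-") + ("+" if iso >= lg_iso else "-")
            counts[key] += 1
        n = len(players)
        return {k: round(100 * v / n) for k, v in counts.items()}

    # Toy 1930-style example: four hypothetical qualifiers against that year's .356/.146 averages.
    print(quadrant_shares([(.400, .200), (.380, .120), (.330, .160), (.310, .100)],
                          lg_obp=.356, lg_iso=.146))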

1901: .330 OBP, .091 ISO. Qualified for batting title: 121. 35% ++, 25% +-, 12% -+, 28% – –

1908: .295 OBP, .069 ISO. Qualified for batting title: 127. 41% ++, 23% +-, 8% -+, 28% – –

The sum of OBP and ISO was its lowest ever in 1908.

1921: .346 OBP, .117 ISO. Qualified for batting title: 119. 42% ++, 24% +-, 8% -+, 26% – –

Baseball rises from the dead ball era. Still relatively few players are hitting for power while not getting on base as much.

1930: .356 OBP, .146 ISO. Qualified for batting title: 122. 45% ++, 23% +-, 3% -+, 29% – –

The best pre-WWII season for OBP and ISO. Almost nobody was about average at hitting for power while not as good at reaching base. Two-thirds of qualifiers had an above-average OBP vs. fewer than half with an above-average ISO.

1943: .327 OBP, .096 ISO. Qualified for batting title: 106. 41% ++, 24% +-, 10% -+, 25% – –

World War II, during which OBPs stayed near the average but ISOs tanked. That would not necessarily appear in these numbers, because the players in this segment are categorized vs. each year’s average.

1953: .342 OBP, .140 ISO. Qualified for batting title: 88. 44% ++, 22% +-, 14% -+, 20% – –

1953 was the first year in which sISO exceeded OBP, and it was easily the lowest year so far in terms of players below average in both OBP and ISO. (Note: So few players qualified on account of the Korean War.)

1969: .330 OBP, .127 ISO. Qualified for batting title: 121. 45% ++, 17% +-, 14% -+, 23% – –

1983: .330 OBP, .131 ISO. Qualified for batting title: 133. 43% ++, 16% +-, 17% -+, 25% – –

1969 and 1983 were picked because of their historically average league-wide numbers for both OBP and ISO. The percentages for each of the four categories are about equal in both seasons.

2000: .351 OBP, .171 ISO. Qualified for batting title: 165. 39% ++, 16% +-, 15% -+, 29% – –

The sum of OBP and ISO was its highest ever in 2000.

2011: .325 OBP, .147 ISO. Qualified for batting title: 145. 50% ++, 17% +-, 12% -+, 21% – –

2012: .324 OBP, .154 ISO. Qualified for batting title: 144. 44% ++, 24% +-, 14% -+, 18% – –

2013: .323 OBP, .146 ISO. Qualified for batting title: 140. 45% ++, 24% +-, 17% -+, 14% – –

Originally, this part ended with just 2013, but that showed an abnormally low “- -” percentage, so now 2011-13 are all listed. From 2011 to 2012, the split groups (above-average at 1 of the 2 statistics, “+-” or “-+”) increased sharply while the number of generally good and generally bad hitters decreased. From 2012 to 2013, there was almost no change in qualifiers based on OBP (the “++” and “+-” groups). Among those with below-average OBPs, the number with above-average power increased as the number with below-average power decreased. Most significantly, 2011-13 has produced an overall drop in players who are below average at both.

I don’t want to draw too many conclusions from this set of 12 out of 113 seasons. But a few more things come up besides the recent decline in players below average in both OBP and ISO.

Regarding “++” Players

Unsurprisingly, limiting the samples to qualifiers consistently shows a plurality of players to be good at both the OBP and ISO things.

Regarding “- -” Players 

Essentially, until 2012, this group was always at least 1/5 of qualifiers, and usually it was 1/4 or more. The last couple years have seen a decline here. Is it a trend to keep an eye on in the future (along with the league-wide OBP slump from Part 3)?

Regarding “++” and “- -” Players

Meanwhile, the majority of players will be above average at both getting on base and hitting for power, or below average at both. The sum of those percentages is just about 60% at minimum each year. Of the twelve seasons above, the lowest sum is actually from 2013, mostly on account of the 14% of players who were below average at both.

This also means that it’s a minority of players who “specialize” in one or the other.

Regarding “+-” vs. “-+” Players

The “-+” players, those with below-average OBPs and above-average ISOs, show the best-defined trends of any of the four categorizations. In general, before 1953, when OBP was always “easier” to be good at than ISO (via OBP vs. sISO as seen in Parts 2 and 3), you saw fewer ISO-only players than you see today. Either they were less valuable because power was less a part of the game and of the leagues’ offenses, or they were less common since it was harder to exceed the league average.

The number of OBP-only players is more complicated, because they too were more common in the pre-1953 days. But they have jumped in the last two years from 1/6 of qualifiers from ’69-’11 to 1/4 of qualifiers in 2012 and 2013. Overall, the recent decline in “- -” players has come at the expense of “+-” players. This can also be interpreted as indicating that players are becoming better at reaching base while remaining stagnant at hitting for power (important distinction: that’s compared to the annual averages, not compared to the historical average; as we saw last week, OBP is in a historical decline at the league level).

Conclusion

The key takeaway for all of this is that there are always going to be more players who are above-average in both OBP and ISO or below average in both. Even if the correlations between OBP and ISO on the individual level aren’t overly high, clearly more players are good at both or at neither.

This isn’t just on account of players with enough PA to qualify for the league leaders being better hitters in general, because while the number of players above-average in both who qualify is always a plurality, it’s almost never a majority. It takes a number of players who are below-average at both to create a majority in any given year.

In terms of OBP-only players and ISO-only players, the former have almost always outnumbered the latter. This is sufficiently explained in that reaching base is often key to being a good hitter, while hitting for power is optional. (That’s why OPS has lost favor, because it actually favors slugging over OBP.) Even when batting average was the metric of choice throughout baseball, those who got the plate appearances have, in general, always been good at getting on base, but not necessarily at hitting for power.

Next week this series concludes by looking at the careers of some selected individual players. The most interesting ones will be the either-or players, with a significantly better OBP or ISO. We won’t look much at players like Babe Ruth or Bill Bergen, but instead players like Matt Williams or Wade Boggs. Stay tuned.


How Much Work are AL Starters Doing, and What Difference Has It Made in Team Success?

Baseball fans have been treated to incredible starting pitching performances in recent years, with several ace staffs leading their teams to regular-season and postseason success. Initially, I set out to examine the number of innings pitched by AL starting rotations because I expected that there would be a big disparity from team to team. And more specifically, I thought that the percentage of innings pitched by a team’s starting rotation would correlate positively to either its W-L record, or more likely, its Pythagorean W-L record.

I gathered five years of data (2009 – 2013 seasons) and calculated the Starting Pitcher Innings Pitched Percentage (SP IP%). This number is simply the number of innings a team’s starters pitched divided by the total innings the team pitched. If a starter was used in relief, those innings didn’t count. I only looked at AL teams, because I assumed that NL starting pitchers could be pulled from games prematurely for tactical, pinch-hitting purposes, while AL starters were likely to stay in games as long as they weren’t giving up runs, fatigued, or injured.
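(The statistic itself is simple; here is a sketch, with the team total below back-computed from the percentage cited later rather than looked up independently.)

    def sp_ip_pct(starter_ip, team_ip):
        """Share of a team's innings thrown by its starting pitchers, relief appearances by starters excluded."""
        return 100 * starter_ip / team_ip

    # 2013 Twins: 871 starter IP; the team total is derived from the 60.06% figure cited below.
    twins_team_ip = 871 / 0.6006
    print(round(sp_ip_pct(871, twins_team_ip), 2))   # 60.06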

Two things struck me about the results:

1. There was little correlation between a team's SP IP% and its W-L record, or between its SP IP% and its Pythagorean W-L record

2. The data showed little variance and was normally distributed

I looked at 71 AL team seasons from 2009 – 2013 and found that on average, AL Teams used starting pitchers for 66.8% of innings, with a standard deviation of 2.83%. The data followed a rather normal distribution, with teams SP IP% breaking down as follows:

Standard Deviations # of Teams % of Total Teams
-2 or lower 2 2.82%
-1 to -2 10 14.08%
-1 to 0 22 30.99%
0 to 1 26 36.62%
1 to 2 10 14.08%
2 or higher 1 1.41%

Over two-thirds of the teams (48 of 71) fell within the range of 63.6 to 69.2 SP IP%, which is much less variance than I expected to find.  And only three seasons fall outside the range of two standard deviations from the mean: two outliers on the negative end and one on the positive end. Those teams are:

Negative Outliers:

2013 Minnesota Twins: 60.06 SP IP%

2013 Chicago White Sox: 60.25 SP IP%

Positive Outlier:

2011 Tampa Bay Rays: 73.02 SP IP%

Taken at the extremes, these numbers show a huge gap in the number of innings the teams got out of their starters. Minnesota, for example, got only 871 innings out of starters in 2013, while the 2011 Tampa Bay Rays got 1,058 innings in a season with fewer overall innings pitched. Another way of conceptualizing it: Minnesota starters averaged just over 5 1/3 innings of each nine-inning game in 2013, while the 2011 Rays starters averaged nearly 6 2/3 innings. But when the sample is viewed as a whole, the number of innings is quite close, as seen on this graph of SP IP% for the last five years:

[Scatter plot: SP IP% for all AL teams, 2009-2013]

The correlation between SP IP% and team success (measured via W-L or Pythagorean W-L) was minimal. (The Pearson coefficient values of the correlations were .1692 and .1625, respectively.) Team victories are dependent on too many variables to isolate a connection between team success (measured via team wins) and SP IP%; a runs scored/runs allowed formula for calculating W-L record was barely an improvement over the traditional W-L measurement. Teams like the Seattle Mariners exemplify the issue with correlating the variables: their starters have thrown above-average numbers of innings in most of the years in the study, but the team has rarely finished with a winning record.

What I did find, to my surprise, was a relatively narrow range of SP IP% over the last five years, with teams distributed normally around an average of 66% of innings. In the future, it might be helpful to expand the sample, or look at a historic era to see how the SP IP% workload has changed over time. The relative consistency of SP IP% over five seasons and across teams could make this metric useful for future studies of pitching workloads, even if these particular correlations proved unsuccessful.


Revenue Sharing Deal Cubs Struck with Rooftop Owners Holding Up Wrigley Field Renovations

During the 2013 baseball season, the City of Chicago approved a $500 million plan to renovate Wrigley Field and build an adjacent office building and hotel.  Included in the renovation plan is the proposed construction of a large video board behind the left field bleachers and signs advertising Budweiser behind the right field bleachers.  The Cubs have delayed the start of this project, however, because the owners of the rooftop businesses across from the ballpark have threatened to file a lawsuit against the Cubs because the proposed signage will obstruct the views of the field from their respective rooftop businesses.

Rooftop Litigation History

Detroit Base-Ball Club v. Deppert, 61 Mich. 63, 27 N.W. 856 (Mich., 1886)

Disputes over neighbors viewing ballgames are nothing new. In 1885, John Deppert, Jr. constructed a rooftop stand on his barn that overlooked Recreation Park, home to the National League's Detroit Wolverines, future Hall of Famer Sam Thompson and a rotation featuring the likes of men named Stump Wiedman, Pretzels Getzien and Lady Baldwin. The Wolverines claimed that they had to pay $3,000 per month in rent and that the 50-cent admission fees helped to offset this cost. They were thereby "annoyed" by Deppert charging the 25 to 100 people per game who watched the games from his property, and they asked the court to forever ban Deppert from using his property in this manner.

Deppert countered that the ballgames had ruined the quiet enjoyment of his premises, that ballplayers often trespassed on his land in pursuit of the ball and that he often had to call the police to “quell fights and brawls of the roughs who assemble there to witness the games.”  He further claimed that his viewing stand had passed the city’s building inspection and that he had the legal right to charge admission and sell refreshments.

The trial court dismissed the Wolverines case and the ball club appealed.  The Supreme Court of Michigan agreed that the Wolverines had no right to control the use of the adjoining property; therefore, Deppert was within his rights to erect a stand on his barn roof and sell refreshments to fans that wanted to watch the game.  Furthermore, there was no evidence that Deppert’s rooftop customers would otherwise have paid the fees to enter Recreation Park.

Similarly, the rooftops of the buildings across the street from Shibe Park were frequently filled with fans wanting a view of the Philadelphia Athletics game action.  While never happy about the situation, Connie Mack was pushed too far in the early 1930s when the rooftop operators started actively poaching fans from the ticket office lines.  Mack responded by building the “Spite Fence,” a solid wall that effectively blocked the view of the field from the buildings across 20th Street.

Lawsuits were filed but the “Spite Fence” remained in place throughout the remainder of the use of Shibe Park, later renamed Connie Mack Stadium.

The Current Dispute

Chicago National League Ball Club, Inc. v. Skybox on Waveland, LLC, 1:02-cv-09105 (N.D.IL.)

In this case, the Cubs sued the rooftop owners on December 16, 2002 seeking compensatory damages, disgorgement to the Cubs of the defendants’ profits and a permanent injunction prohibiting the rooftop owners from selling admissions to view live baseball games at Wrigley Field, among other remedies and under several causes of action.  According to the complaint, the Cubs alleged that the defendant rooftop operators “…have unlawfully misappropriated the Cubs’ property, infringed its copyrights and misleadingly associated themselves with the Cubs and Wrigley Field.  By doing so, Defendants have been able to operate multi-million dollar businesses in and atop buildings immediately outside Wrigley Field and unjustly enrich themselves to the tune of millions of dollars each year, while paying the Cubs absolutely nothing.”

In their statement of undisputed facts, the defendants countered that the rooftops had been used to view games since the park opened on April 23, 1914 as home of the Chicago Federal League team and that the Cubs conceded that their present management knew the rooftop businesses were selling admissions since at least the late 1980s.

In May 1998, the City of Chicago enacted an ordinance authorizing the rooftops to operate as "special clubs," which allowed them to sell admissions to view Cubs games under city license. The City wanted its piece of the action, and interestingly, the Cubs made no formal objection to the ordinance. Based on the licensure and the lack of any opposition from the Cubs, the rooftop owners made substantial improvements to enhance the experience and to meet new City specifications.

By January 27, 2004, the Cubs had reached a written settlement with the owners of 10 of the defendant rooftop businesses, which assured that the Cubs "would not erect windscreens or other barriers to obstruct the views of the [settling rooftops]" for a period of 20 years. The remaining rooftop owners later settled, and the case was dismissed on April 8, 2004, just days ahead of the Cubs' home opener set for April 12th.

After the 2004 agreement legitimized their businesses, the rooftop owners made further improvements to the properties.  Long gone are the days that a rooftop experience meant an ice-filled trough of beer and hot dogs made on a single Weber.  The rooftop operations are now sophisticated businesses with luxurious accommodations, enhanced food and beverage service and even electronic ticketing.

As a result of the settlement agreement in the Cubs' 2002 lawsuit, the team now has legitimate concerns that a subsequent lawsuit by the rooftop owners to enforce the terms of the contract could ultimately result in the award of monetary damages to the rooftop owners; cause further delays in the commencement of the construction project due to a temporary restraining order; or be the basis of an injunction preventing the Cubs from erecting the revenue-producing advertising platforms for the remainder of the rooftop revenue-sharing agreement.

It is obvious that the rooftop owners need the Cubs more than the Cubs need them; however, the Cubs wanted their piece of the rooftop owners’ profits (estimated to be a payment to the Cubs in the range of $2 million annually) and now the Cubs have to deal with the potential that their massive renovation project will be held up by the threat of litigation over the blocking of the rooftop views.


The Silver Sluggers: Another Award to Get Angry About!

Note: I have no idea if I’m the first to do this, but quite frankly I don’t care.

Every year, the Gold Gloves are awarded, and people get pissed off about whom they are awarded to. Every year, the Silver Sluggers are also awarded, and…well, no one really gives a fuck about the Silver Sluggers. Why? Hell, I don’t know. They don’t have the “storied tradition” of the Gold Gloves, the “time-honored legends” or the…uh…”legendary honors”? Look, people like to use weird cliches about how things used to be, and then Mike Bates writes quasi-racist articles¹ about it.

Personally, I enjoy the Silver Sluggers–sarcastically using them as superlatives for a player (“He’s won four, he must be good!”), looking forward to the nominations and announcements of the winners, but most importantly, arguing over them. It’s no secret that most awards are controversial–not just in baseball, but in all walks of life. People have differing opinions, and the technology available today makes it easier than ever for those opinions to be spouted furiously for the whole world to hear. In baseball, though, we are different. We have FACTS! We have EVIDENCE! We have STATISTICS!

What was the point of that disjointed rant? As I mentioned earlier, there has been many a bad pick for the Gold Gloves. However, the same is also true for the Silver Sluggers, and aside from Jeff Sullivan, no one seems to give a damn about it. Well, given that I am no one (see what I did there?), I decided that a damn should be given about it. I tracked down all of the Silver Slugger winners, back to 1980 (when they were first awarded), and saw if their wRC+ was the best at their respective position². What did I find?

Well…There were quite a few snubs. There are now 34 seasons of Silver Sluggers, which means there are 613³ Silver Slugger winners. Of those, 226 (36.9%) were undeserving by my methodology. Most of these were forgivable oversights, but some were simply awful choices; I have presented to you today several of the latter, for your viewing pleasure.
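(The bookkeeping behind that snub count can be sketched as follows; the data structures and the single 1991 row are just placeholders to show the idea.)

    def rank_snubs(winners, leaders):
        """List Silver Slugger winners who did not post the top wRC+ at their position.

        winners: {(year, league, position): winner_name}
        leaders: {(year, league, position): (best_name, best_wrc_plus, winner_wrc_plus)}
        """
        snubs = []
        for key, winner in winners.items():
            best_name, best_wrc, winner_wrc = leaders[key]
            if winner != best_name:
                snubs.append((key, winner, winner_wrc, best_name, best_wrc, best_wrc - winner_wrc))
        # Sort the worst picks first, by the wRC+ gap between deserving and actual winner.
        return sorted(snubs, key=lambda s: s[-1], reverse=True)

    winners = {(1991, "AL", "OF"): "Joe Carter"}
    leaders = {(1991, "AL", "OF"): ("Danny Tartabull", 168, 123)}
    print(rank_snubs(winners, leaders)[:10])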

Below, you’ll see the 10 worst Silver Sluggers of all time, as measured by difference between the winner’s wRC+ and the deserved winner’s wRC+.

10. AL Outfield–1991

Winner: Joe Carter (123 wRC+)

Deserving Winner: Danny Tartabull (168 wRC+)

In his last season with the Royals before he headed to the Bronx (and to Seinfeld), Tartabull had the best season of his career, putting up 4.5 WAR for the Royals despite accruing only 557 plate appearances. His fielding was just as poor as it had ever been (-21 Def), meaning that all of his excellence had to be derived from his offense, and it was. In those 557 plate appearances, he batted .316/.397/.593, for a .430 wOBA and a 168 wRC+, highest among all outfielders. But was he good enough to win the award that is given to the best offensive players? Evidently not, as that honor went to Joe Carter and his .273/.330/.503 triple-slash, .361 wOBA, and 123 wRC+. Tartabull was clearly superior to Carter, so why did he get robbed?

It wasn't for a lack of consistent position–although he would become a full-time DH later in his career, Tartabull started in right field for 124 of the 132 games that he played. Looking at traditional stats, Carter is only marginally better than Tartabull (33 HRs and 108 RBIs for the former, 31 HRs and 100 RBIs for the latter), and Tartabull still had a 43-point lead in batting average. Neither of them had won any Silver Sluggers prior to this, although Carter would win one the following year⁴. In this case, I suppose the voters picked Carter because he played the most, even if his aggregate offense was worth less than half that of Tartabull (23.4 wRAA to 47.9 wRAA). As you'll soon see, this oversight was acceptable compared to some of the other egregious ones.

9. AL Designated Hitter–1998

Winner: Jose Canseco (110 wRC+)

Deserving Winner: Edgar Martinez (156 wRC+)

The 35-year-old Martinez was still going strong at this point, putting up at least 5 WAR for the fourth of six consecutive years. His 5-win season in 1998 was primarily based on his ability with the bat, as he played 150 of his 154 games at DH. The Mariners were certainly happy with his production, as he hit .322/.429/.565, for a .427 wOBA and a 157 wRC+. However, a certain time-traveling outfielder was instead rewarded with the Silver Slugger, and it’s not hard to see why.

While Martinez hit for a good amount of power, Canseco outslugged him by a mile, or at least in the one area the voters care about. Martinez only had 29 round-trippers, compared to 46 for Canseco. Yes, Canseco also only batted .237 with a .318 OBP, .354 wOBA, and 110 wRC+, but that’s not important–he hit 46 dingers!

Reputation probably didn’t play a huge role with this one, as each player had won three times before (1992, 1995, and 1997 for Martinez; 1988, 1990, 1991 for Canseco⁵). The (theoretical) ability to drive in runners also wasn’t important, as the two players had nearly identical RBI lines (102 for Martinez, 107 for Canseco); moreover, both were equally durable (672 PAs for Martinez, 658 PAs for Canseco). In the end, the ability to hit the ball out of the park was what stole the award from Martinez, even though both rate stats and cumulative stats (12.7 wRAA for Canseco, 53.5 for Martinez) agreed that other factors were important as well.

8. AL Third Base–1995

Winner: Gary Gaetti (111 wRC+)

Deserving Winner: Jim Thome (158 wRC+)

In his first season qualifying for the batting title, Thome didn’t disappoint, as he gave the Indians six wins above replacement level; he was solid with the glove (1.1 Def at third), but his work with the bat set him apart: He smashed 25 home runs in 557 plate appearances, while hitting .314/.438/.558 with a .433 wOBA and 158 wRC+. Nevertheless, he would be disappointed at season’s end–no, not because the Indians lost the World Series, but because he got robbed of an award meant to measure the best offensive players at any given position!

Anyway, while Thome’s blossoming power was nothing to sneeze at, Gaetti’s power was even more impressive, as he hit 35 homers in only 21 more plate appearances. However, his game suffered everywhere else, as he batted only .261, got on base at a .329 clip, and had a wOBA and wRC+ of .360 and 111, respectively. Both of them played the majority of their games at third base, so both were judged against each other; Thome, though, was unarguably better, which was reflected in wRC+ and wRAA (46.2 for Thome, 13.1 for Gaetti). However, the voters have a tendency to not listen to rational arguments, so Gaetti’s superior home run and RBI totals (35 and 96, compared to 25 and 73 for Thome) gave him the sought-after crown.

7. AL Outfield–1994

Winner: Kirby Puckett (124 wRC+)

Deserving Winner: Paul O’Neill (171 wRC+)

Because 1994 was shortened by the strike, counting stats from this season have to be taken with a grain of salt. One counting stat in particular was the deciding factor in this race, and I’ll soon reveal what it was. O’Neill was in pinstripes for the second of nine straight seasons, and he lived up to the lofty standard that the garb carries. In 443 plate appearances, O’Neill had 4.3 WAR, despite a Def of -10.7; this was due, then, to the fact that he demolished his way to a .359/.460/.603 line, with a .450 wOBA and 171 wRC+. But did the voters care? No, because a wife-beater was supposedly better.

Puckett was certainly good in 1994, hitting .317/.362/.540 with a .381 wOBA and 124 wRC+ in 482 plate appearances. O’Neill, though, had more than double the wRAA (43.6 to 19.3), and a sizable wRC+ lead; in addition, O’Neill actually outhomered him, 21 to 20, and had the aforementioned advantage in batting average. Going down the Triple Crown checklist, that leaves one category: RBIs. O’Neill brought 83 runners home–an acceptable total, to say the least. Puckett, however, blew him out of the water, with 112 RBIs–in 108 games! That’s pretty impressive, if you care about such things, and God knows the voters care. Hence, the Silver Slugger was not given to its rightful owner, all because of one useless stat.

6. AL Designated Hitter–1996

Winner: Paul Molitor (114 wRC+)

Deserving Winner: Edgar Martinez (163 wRC+)

Should Martinez make the Hall of Fame? Probably. Will he make the Hall of Fame? Given his recent history, I’m inclined to say no. Would winning two deserved Silver Sluggers have helped his case? Well…Again, nobody really cares about this thing, so probably not. But the point of all of these rhetorical questions is: Martinez was a boss in 1996 (as he was in 1998). The second of six straight five-win seasons, Martinez was a full-time DH, meaning that he had to crank out the offense constantly if he wanted to remain a high performer. He most certainly did crank, to the tune of a .327/.464/.595 triple-slash, with a .450 wOBA and 163 wRC+ in 634 trips to the plate. You wouldn’t know that from looking at the awards, though, as the guy that deserves to be in Cooperstown was shut out by the only guy at his position that is in Cooperstown. What caused this?

While Martinez didn’t hit a whole lot of long balls–his .269 ISO was derived primarily from his 52 doubles, not his 26 homers–Molitor was even worse, hitting only 9 round trippers in 728 plate appearances. What the voters proved in 1996 was that they didn’t depend solely on primitive statistics like “home runs” to determine a player’s worth. They used advanced statistics for the modern age, like batting average and runs batted in! In those regards, Molitor had clear advantages over Martinez, with a .341 average and 113 RBIs. Now, Molitor’s dearth of walks and power meant that his OBP and SLG were a mere .390 and .468, respectively, which in turn meant that his wOBA was .372 and his wRC+ was 114, which in turn meant that he was completely inferior to Martinez in rate and counting stats (23.0 wRAA, compared to 62.2 for Martinez), but he had 113 RBIs! And a .341 average! That’s gotta count for something!

This was not, however, the only big-boned brouhaha that brewed in 1996…

5. NL First Base–1996

Winner: Andres Galarraga (123 wRC+)

Deserving Winner: Jeff Bagwell (173 wRC+)

In Bagwell’s second of four 7-WAR seasons, he put up some serious numbers for the Astros, hitting .315/.451/.570 with a .433 wOBA and a 173 wRC+ in 719 plate appearances as a first baseman; with a -7.8 Def, he needed to mash to earn his keep. Galarraga was also a relatively poor defender (-7.5 Def), so the same went for him. He also hit quite well, or so it would appear; his triple-slash was .304/.357/.601, which gave him a .402 wOBA in 691 plate appearances at first–not that far off from Bagwell. Why, then, was the wRC+ gap so large?

‘Twas about the elevation, dearie. Galarraga played for the Rockies, meaning he played half of his games at Coors Field, meaning he was expected to hit like a monster. While the aforementioned batting line was rather good by major-league standards, it was merely adequate by the mountain standard, and his 123 wRC+ and 39.3 wRAA reflected that. By contrast, Bagwell played in the Astrodome half of the time, which was not particularly good to hitters as a whole⁶; thus, his 173 wRC+ and his 60.1 wRAA.

Obviously, the voters were unaware of the effects a player’s home park can have on his all-around production, or else they would have discounted Galarraga’s 47 home runs and 150 RBIs. With this next case (well, these next few cases, really), though, there’s no excuse.

4. NL Pitcher–1985

Winner: Rick Rhoden (18 wRC+)

Deserving Winner: Mike Krukow (71 wRC+)

My theory is that the voters are all secretly supporters of the DH, and they all want to see it implemented across both leagues. How else can you explain 15 of the 34 pitchers that have won (44.1%) being undeserving, or that the four worst picks (of any position) were all pitchers? Anyway, Krukow was quite good (for a pitcher) with the bat in 1985, slugging his way to a .218/.259/.345 line, with a .271 wOBA and a 71 wRC+; looking at more traditional stats, he hit one home run and had three RBIs in 66 trips to the plate. He was also pretty good with the arm, accruing 3.1 WAR in 194.2 innings for the Giants in his second of six years by the bay.

Rhoden was also solid on the mound in his seventh of eight years with the Pirates, putting up 2.6 WAR in 213.1 innings pitched. He won the Silver Slugger the year before (and actually deserved to), so maybe the voters were just lazy and assumed he hit well the next year. Make no mistake, though–he did not hit well at all in 1985. His triple-slash was an anemic .189/.211/.230, meaning his wOBA was .200 and his wRC+ was 18; he also went homerless, and had only 6 RBIs. His offense (or lack thereof) cost the Pirates 7.2 runs, three times what Krukow’s cost the Giants (-2.4). For reasons that escape me, that performance was apparently Silver Slugger-worthy, and now the wrong man has gone home with the award for yet another year. But don’t you worry–it gets much, much worse…

3. NL Pitcher–1998

Winner: Tom Glavine (37 wRC+)

Deserving Winner: Mike Hampton (91 wRC+)

Hampton is best remembered for two things: Signing the largest contract in baseball history (for the time) with the Rockies and proceeding to stink up the joint before getting traded to the Braves; and being a pretty damn good hitter. Like, a better career wRC+ than Ozzie Guillen good. Yeah, that’s a bad comparison to make, whatever. The point is, Hampton could hit, and 1998 was no exception–in his penultimate year with the Astros, he had a .262/.348/.328 batting line, which translated to a .312 wOBA and a more than satisfactory 91 wRC+. Glavine, on the other hand, was a less than satisfactory hitter, both for his career and in this year⁷. He batted a mere .239/.250/.282, which only gave him a .237 wOBA and a 37 wRC+. Cumulative stats reflect this as well, as Hampton’s offense was worth 5.6 runs more than Glavine’s (-1.2 to -6.8 wRAA). Triple-crown stats don’t reveal anything–neither player homered, although Glavine had seven RBIs to Hampton’s two.

The reason for Glavine’s victory here was likely twofold. First, Glavine pitched better than Hampton, the former’s 2.47 ERA in 229.1 innings besting the latter’s 3.36 ERA in 211.2 innings. Second, Glavine had a better reputation, which is where it gets complicated. See, Hampton was a good hitter, and Glavine wasn’t (as footnote 7 should make perfectly clear); however, according to reputation, both of these men were good hitters (for their position), as they took home a combined nine Silver Sluggers. The difference between the two? 1998 was the end of Glavine’s run of Silver Sluggers, whereas the next year (i.e. 1999) was the first of five straight for Hampton⁸. In this case, Glavine’s notoriety, which was built up prior to 1998, won him the award, while Hampton’s fame won him a few later (see footnote 8).

Without a doubt, the 1998 pitcher’s Silver Slugger was one of the worst in the history of the award. Sadly, there are two years that were even worse.

2. NL Pitcher–1983

Winner: Fernando Valenzuela (20 wRC+)

Deserving Winner: Tim Lollar (78 wRC+)

Lollar’s career was pretty unremarkable–he put up 2.5 WAR in 906.0 innings for four teams. In 1983, he pitched for the Padres, and he was in line with his career numbers–0.4 WAR and a 4.61 ERA in 175.2 innings. At the plate, though, he was a revelation–well, comparatively speaking. He hit .241/.292/.345 in 65 plate appearances, which gave him a .285 wOBA and a 78 wRC+, best in the National League among qualified pitchers. Valenzuela’s career was notably more remarkable, as his career WAR was 38.5 over 2930.0 innings for six teams. In the year in question, he pitched well for the Dodgers, accruing 3.9 WAR over 257.0 innings (with a 3.75 ERA). Hitting did not work out quite as well, to say the least: In 105 plate appearances, his triple-slash was .187/.194/.253, which translated to a .199 wOBA and a 20 wRC+.

Lollar was much better than Valenzuela, by both advanced and basic stats–they both hit one homer, but Lollar had 11 RBIs to Valenzuela’s 7. Lollar’s offense cost the Padres only 1.7 runs below average, whereas Valenzuela’s took nearly 10 runs away from the Dodgers. This is one of the more puzzling awards (though not as puzzling as the next one); my best guess is that Valenzuela rode on the coattails of his incredible rookie year in 1981⁹. Unfortunately, this was not the darkest hour for the prestigious honor that is the Silver Slugger award; no, that time would come six years later, in a travesty greater than any that came before it.

1. NL Pitcher–1989

Winner: Don Robinson (43 wRC+)

Deserving Winner: Bob Knepper (111 wRC+)

Given that Bob Knepper’s career wRC+ is 3–yes, three–I’m inclined to believe that his 1989 season was a fluke. The second-to-last season of his career, 1989 didn’t go well for him as a pitcher–he put up a 5.13 ERA while costing the Astros and Giants 0.8 wins over 165.0 innings. As a hitter, though, he was never better–somehow, he managed to get on base in 32.7% of his trips to the plate, with a decent .372 slugging percentage for good measure. His competence in these two areas was enough to compensate for his sub-Mendozan batting average (.186) and bring his wOBA and wRC+ to .324 and 111, respectively.

The antithesis to this would be Robinson, who was quite good on the mound (at least by traditional stats), with a 3.43 ERA in 197.0 innings for the Giants, but was completely ineffective at the plate (even for a pitcher). The owner of a respectable career wRC+ of 60, Robinson sank to 43 in 1989, as he only batted .185/.195/.309 (.226 wOBA). In what world was that worth more than Knepper? A world where the voters for most major awards rely on archaic means of measuring player performance–i.e. homers and RBIs. Knepper knocked one out only once in 55 plate appearances, while Robinson did it thrice in 82 PAs; Robinson also out-ribbied Knepper, seven to three. When the dust had settled, Knepper was worth 0.5 wRAA, while Robinson was worth -5.4; despite this, the Silver Slugger went to Robinson.

***

Having finally finished with this torturous exercise, I now see why people don’t place any value in the Silver Sluggers. They’re pointless awards, given out solely on reputation and not actual performance. Anyone who takes them seriously is just asking for…Wait, what’s that? The Orioles won HOW MANY Silver Sluggers?

In summation: The Silver Slugger is the best award in baseball, and it’s a shame that the level of respect for it is as low as it is.

———————————————————————————————–

¹Just to be clear: I thoroughly enjoyed the article, and don’t consider Bates to be racist in any way.

²A little bit about the methodology: I decided on wRC+ (as opposed to, say, Off) for two reasons. First, I wanted to see who the best hitters were, not the best offensive players, meaning baserunning was not to be included. Is that small-minded? Probably. Is the award in question called the Silver Slugger/Baserunner, and is the award itself a combination of a silver bat and a silver pair of cleats? Certainly not. Second (and more rationally), I wanted to measure the best hitters overall, not in terms of aggregate value; using Off or wRAA would benefit players that played longer. To pull some numbers out of my ass as an example, a guy with a 140 wRC+ is a better hitter than a guy with a 130 wRC+, but if the latter received 700 plate appearances while the former only received 550, Off or wRAA (or any counting stat) wouldn’t reflect that. But, just to be safe, I also put each player’s wRAA somewhere in the writeup.

³In 2004, there were two AL catchers that won–neither of whom was deserving.

⁴That would also be undeserved; Carter’s 120 wRC+ in 1992 paled in comparison to Shane Mack and his 142 wRC+.

⁵Oddly enough, all of those were deserving. Canseco’s wRC+s of 169, 157, and 152 in 1988, 1990, and 1991, respectively, were among the three best among qualified outfielders in those years, and Martinez’s wRC+s of 165, 182, and 164, respectively, were the best among qualifying DHs in those years.

⁶Side note: Why did this never happen? Come on, people–I expected more from you.

⁷Glavine won four Silver Sluggers over the course of his career (in 1991, 1995, 1996, and 1998). Care to speculate as to how many of those were justified? That’s right, none! In those years, Glavine’s wRC+s were 50, 41, 81, and 37, respectively, when Tommy Greene (94 wRC+ in 1991), Kevin Foster (65 wRC+ in 1995), Jason Isringhausen (84 wRC+ in 1996), and Hampton were far better. Also, in case you were wondering, Glavine’s career wRC+ is 22.

⁸Of those five, three (1999, 2001, and 2002, with wRC+s of 111, 106, and 112, respectively) were the right choice, and two (2000 and 2003, with wRC+s of 56 and 52, respectively, when Omar Daal and Russ Ortiz had wRC+s of 83 and 81, respectively) were not.

⁹He won the Silver Slugger in that year as well, and that was also undeserved, as his 55 wRC+ was outshined by Gaylord Perry’s 71 wRC+.


Power and Patience (Part III of a Study)

So, last week we hopefully learned a few things. Let’s continue looking at league-wide trends.

Getting on base or not, and hitting for power or not: combine the two and you get four possibilities, and there is actually a mostly-distinct period in baseball history for each combination. Define these terms against the historical averages and you get:

  • 1901-18 – Players aren’t getting on base or hitting for power

  • 1919-52 – Players are getting on base but not hitting for power

  • 1953-92 – Players aren’t getting on base but are hitting for power

  • 1993-present – Players are getting on base and hitting for power

There are some exceptions, but this paradigm mostly holds true. Here’s another depiction of the “eras” involved:

Years            OBP (avg .333)    ISO (avg .130)

1901-18          .316              .081

1919-52          .343              .120

1953-92          .329              .131

1993-present     .338              .158

The periods from 1901-52 and since 1993 really are quite distinct, but the 1953-92 period is the hardest to truly peg and kind of has to be squeezed in there. In fact, those 1953-92 figures are quite close to the historical averages; the era’s OBP is about as far below the average as the post-1993 OBP is above it. When the same era, categorized by offense, includes both 1968 and 1987, there is going to be some finagling.

So, really, there hasn’t been a clear period in MLB history with above-average power and below-average on-base percentages, while the “Ruth-Williams Era” (1919-52) had below-average power (again, vs. the historical average) but above-average on-base percentages.
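As a minimal sketch of the classification behind that table, assuming each era (or season) is judged simply by whether its OBP and ISO sit above or below the historical averages of .333 and .130:

    # Sketch only: bucket an era or season into one of the four combinations
    # above by comparing its OBP and ISO to the historical averages.
    HIST_OBP, HIST_ISO = 0.333, 0.130

    def classify(obp, iso):
        on_base = "getting on base" if obp >= HIST_OBP else "not getting on base"
        power = "hitting for power" if iso >= HIST_ISO else "not hitting for power"
        return f"{on_base}, {power}"

    # Era averages from the table above:
    eras = {
        "1901-18": (0.316, 0.081),
        "1919-52": (0.343, 0.120),
        "1953-92": (0.329, 0.131),
        "1993-present": (0.338, 0.158),
    }

    for era, (obp, iso) in eras.items():
        print(era, "->", classify(obp, iso))

Run over the era averages, this reproduces the four labels above; run season by season, it would surface the exceptions just mentioned.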

Still, breaking things down into four eras is too simplistic. What follows is a walk-through, not of every season in MLB history, but key seasons, using some of the “metrics” from the first two parts of this series.

1918: .207 XB/TOB, -.038 sISO-OBP, 95 OBP+, 57 ISO+

In 1918, MLB hitters earned an average of .207 extra bases per time on base. By 1921, they were earning .300 extra bases after year-to-year gains of 19%, 8%, and 12%. How much of this was on account of the Sultan of Swat? In 1918, Babe Ruth was already earning .523 extra bases, but had only 382 plate appearances. In 1921, however, he had 693 plate appearances and averaged .717 extra bases. Without him, the 1918 and 1921 ratios change to .205 and .295, respectively. So he’s only responsible for .003 of the increase. (My guess from a couple weeks ago was way off. He’s still just one player.) Perhaps the effect of his individual efforts on the power boom is overstated. However, his success was clear by 1921, so his influence on how other hitters hit seems properly stated. While Ruth’s 11 HR in 1918 tied Tillie Walker for the MLB lead, five other players had 20+ home runs in 1921.
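For anyone who wants to replicate that with-and-without-Ruth arithmetic, here is a rough sketch. It assumes XB/TOB is simply extra bases (total bases minus hits) divided by times on base (hits plus walks plus hit-by-pitches); the totals in the commented usage are placeholders, not real 1918 or 1921 figures:

    # Sketch only: league XB/TOB with and without a single player's lines.
    # Assumes XB = TB - H and TOB = H + BB + HBP; the series' exact
    # definition may differ slightly.
    def xb_per_tob(totals):
        """totals: dict with TB, H, BB, and HBP summed over a pool of hitters."""
        extra_bases = totals["TB"] - totals["H"]
        times_on_base = totals["H"] + totals["BB"] + totals["HBP"]
        return extra_bases / times_on_base

    def without_player(league, player):
        """Subtract one player's counting lines from the league totals."""
        return {k: league[k] - player[k] for k in league}

    # Hypothetical usage, with league_1921 and ruth_1921 as dicts of totals:
    # print(xb_per_tob(league_1921),
    #       xb_per_tob(without_player(league_1921, ruth_1921)))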

OBP was low in 1918, as it was in most seasons up to that point, but the dead ball era really was mostly a power vacuum: OBP had already had two seasons (1911-12) around the historical average, even though it would not get back there until 1920.

1921: .300 XB/TOB, -.027 sISO-OBP, 104 OBP+, 90 ISO+

So we touched on the 1918-21 period moments ago. Power skyrocketed, but still to about 10% below its current norm. Meanwhile, OBP was well on its way to a long above-average stretch: OBP+ was 100 or higher every single year from 1920 through 1941.

1930: .364 XB/TOB, -.007 sISO-OBP, 107 OBP+, 112 ISO+

1930 was the most power-heavy MLB season until 1956 and is even today the second-highest OBP season in MLB history at .35557, just behind the .35561 mark set in 1936. Non-pitchers hit .303/.356/.449 in 1930. Ten players hit 35 or more home runs, including 40+ for Wilson, Ruth, Gehrig and Klein.

As we’ll see with 1987, however, 1930 was really the peak of a larger trend: XB/TOB grew 6+% for the third straight year before dropping 14% in 1931 and another 12% in 1933 (with a 9% spike in 1932).

1943: .261 XB/TOB, -.028 sISO-OBP, 98 OBP+, 74 ISO+

World War II in general was a bad time for hitters, at least from a power standpoint, with 1943 the worst season among them and 1945 almost as bad. From 1940-45, the XB/TOB ratio fell 23%. It remained low until 1947. (But even at its lowest point in this time frame, in 1942, it was still a better year for power than 1918.) OBP, however, was actually at about its current historical average during the war (within one standard deviation of the mean throughout), so there wasn’t a total offensive collapse. However, it was the first time since the dead ball era that OBP+ was below 100. Either way, perhaps the coming look at individual players will tell us what happened.

1953: .365 XB/TOB, .001 sISO-OBP, 103 OBP+, 108 ISO+

Thanks to an 11% increase in XB/TOB, it was finally “easier,” relatively, to hit a double or homer than it was to make it to base in the first place. Also playing a role, however, was OBP: in 1950, hitting for power was only “harder” because players were reaching base at a pretty good clip; the OBP+ and ISO+ that year were 106 and 110.

1968: .320 XB/TOB, .003 sISO-OBP, 93 OBP+, 84 ISO+

1968 is often considered the all-time nadir for Major League hitters outside of the dead ball era, and non-pitchers only earned an average of .320 extra bases per time on base that year. It wasn’t just power that suffered, however (although it did): 1968 also featured the worst league-wide OBP in 51 years. In fact, OBP was so low that it was actually ever so slightly easier to hit for power in 1968 than it was to reach base.

The thing about 1968 is that, while 1969 featured a lower mound, no 1.12 ERAs, and a solid recovery for both OBP and ISO, it didn’t automatically revert baseball hitters to their pre-mid-60s form. Power fluctuated wildly in the roughly 25-year period from 1968-93.

1977: .378 XB/TOB, .010 sISO-OBP, 100 OBP+, 108 ISO+

1977, rather than 1930 or 1987, may really be the flukiest offensive season in MLB history. ISO+ shot up from 83 to 108, after having not been above 96 since 1970. MLB hitters earned 26% more extra bases per time on base than in 1976, easily the biggest one-year increase in MLB history. XB/TOB then promptly decreased 10% in 1978; it’s the only time that figure has gone up 10% in one year and declined 10% the next. It was also the only season from 1967-84 in which sISO was .010 above OBP. 35 players homered 25 times or more, the most in MLB history until 1987. 1977 was a banner year for getting on base as well, although, as usual, not to the same degree as power. It was the highest OBP season from 1970-78 and one of just four seasons from 1963-92 with an OBP at or above the historical average.

1987: .416 XB/TOB, .023 sISO-OBP, 101 OBP+, 120 ISO+

1987 has a big reputation as a fluky power season, and players earned .416 extra bases per time on base that year, but that was “only” a 9% spike from the prior season. Additionally, XB/TOB had actually increased every year from 1982-87, except for a 2% drop in 1984. The 1987 season was mostly the peak of a larger trend, which came crashing down in 1988, when the ratio dropped more than 15% to .353 extra bases. The .400 mark would not be reached again until 1994’s .412, but from that point on, the ratio would never fall below the .400 it posted in 1995.

This season was, however, the only one in the Eighties with an OBP+ over 100. From 1963-92, in fact, OBP was at or above the historical norm in just four seasons (1970, 1977, 1979, 1987). As with power, however, OBP collapsed in 1988 by more than it had gained in 1987, falling to 1981 levels (97 OBP+).

1994: .412 XB/TOB, .017 sISO-OBP, 103 OBP+, 122 ISO+

XB/TOB leapt over 10% from 1992-93, and another 9.5% in 1994, ushering in a power era that hasn’t quite yet flamed out. 1994 was the year power really took off relative to OBP: in 1992, sISO and OBP were even; in 1993, the gap in favor of sISO was still only about half of what it would become in 1994. 1994 also featured the highest ISO to that point, higher even than in 1987, the culmination of the mid-80s power trend. While there would be some years between 1993 and 2009 with modest decreases in power, even in 2013, ISO+ was 112–its lowest mark since 1993. More on the current power and OBP environment momentarily.

1901-2013: Changes in XB/TOB

Extra bases per time on base was our first choice of metric. How has this particular one changed in certain years?

Overall, nine times has this ratio spiked at least 10% in one season: 1902-03 (+12%), 1918-19 (+19%), 1920-21 (+12%), 1945-46 (+11%), 1949-50 (+10%), 1952-53 (+11%), 1976-77 (+26%), 1981-82 (+12%), and 1992-93 (+10%).

Meanwhile, it decreased by 10 or more percent on six occasions: 1901-02 (-11%), 1930-31 (-14%), 1932-33 (-12%), 1941-42 (-11%), 1977-78 (-10%), 1987-88 (-15%).
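Identifying those swings is straightforward once the year-by-year XB/TOB series exists; here is a rough sketch, assuming the series is just a dict keyed by season. The sample values are placeholders back-solved from the 1977 figures quoted above, not the real series:

    # Sketch only: flag year-over-year XB/TOB moves of 10% or more.
    def big_swings(xb_tob, threshold=0.10):
        """xb_tob: dict mapping season -> league-wide XB/TOB."""
        years = sorted(xb_tob)
        for prev, curr in zip(years, years[1:]):
            change = xb_tob[curr] / xb_tob[prev] - 1
            if abs(change) >= threshold:
                yield f"{prev}-{curr}", round(change * 100)

    # Placeholder slice of the series (only 1977's .378 comes from the text):
    sample = {1976: 0.300, 1977: 0.378, 1978: 0.340}
    print(list(big_swings(sample)))  # the 1976-77 jump (+26%) and the 1977-78 drop (-10%)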

2014-???

We’ll try to make this a little more interesting: where is baseball going from here? Can we look at these trends throughout history and determine what the next few years might look like?

XB/TOB dropped 4.8% in 2013. It was the sharpest one-year drop since a 5.6% fall in 1992, but that season only preceded a power boom. Both were modest declines historically, and this one is unlikely to portend much. However, this year’s 112 ISO+ was a new low for the post-strike era.

Yet the bigger issue in 2013 was a stagnant OBP, which has been below the current average since 2009 after being above it every year since 1992. OBP never deviates very much from its norm, but 26/30 seasons from 1963-92 featured a below average OBP.

Will OBP continue to stay low? It has fallen every year since 2006, from .342 to .323, which represents the longest continuous decline in MLB history. It may be unlikely that it decreases further, but the below-average-since-2009 fact is worrisome if you enjoy offense. Stagnation for such a length of time has nearly always been part of a larger trend, mostly in the dead ball era and that 30-year period from 1963-92.

One thing we can probably say is that the “Steroid Era” is over. From 1993-2009, OBP+ was never below 101 and ISO+ never below 109. Take 1993 out of the sample and ISO+ is never below 118; from 1996-2009, a span of 14 years, ISO was 20% or more above the historical norm every season.

But since 2009, that 20% threshold has never been reached, although 2012’s ISO+ of 119 comes close. Nonetheless, power from 2010-present has yet to reach mid-90s, early-2000s levels. Power could still increase in the future, but likely for reasons other than PEDs (although the Melky Cabreras and Ryan Brauns of the world always leave a doubt).
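If OBP+ and ISO+ simply index a season’s OBP and ISO against the historical averages of .333 and .130 (the working assumption here), the indexing and the “Steroid Era” check described above can be sketched like this; the 2013 values in the usage comment are approximations from the figures quoted in this piece:

    # Sketch only: index a season against the historical averages and check it
    # against the peak "Steroid Era" bars noted above (OBP+ >= 101, ISO+ >= 120).
    HIST_OBP, HIST_ISO = 0.333, 0.130

    def obp_plus(obp):
        return round(100 * obp / HIST_OBP)

    def iso_plus(iso):
        return round(100 * iso / HIST_ISO)

    def steroid_era_check(seasons):
        """seasons: dict mapping year -> (OBP, ISO)."""
        return {yr: (obp_plus(obp) >= 101 and iso_plus(iso) >= 120)
                for yr, (obp, iso) in seasons.items()}

    # 2013: roughly a .323 OBP and .146 ISO per the numbers above.
    # print(steroid_era_check({2013: (0.323, 0.146)}))  # -> {2013: False}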

If I had to guess, I’d say power and home runs are here to stay, even if 2000’s .171 stands as the highest non-pitcher ISO for years to come. (That really is a crazy figure if you think about it: non-pitchers that year hit for power at roughly the career rates of Cal Ripken or Ken Caminiti. In 2013, they were down to more “reasonable” levels similar to Johnny Damon or Barry Larkin.)

The on-base drought is more of a concern for offenses; because OBP is so consistent, however, that drought could be persistent but minor.

This concludes the league-wide observations of power and patience. Part IV next week will look at things like “X players with an OBP of Y and ISO of Z in year 19-something.” Part V will then look at individual players. Maybe we can even wrap up with the ones who started this whole series: Joe Mauer, Rickey Henderson, and Wade Boggs. I guess we’ll have to find out.