How Well Did the FanGraphs Playoff Odds Work?

One of the more fan-accessible advanced stats is playoff odds [technically postseason probabilities]. Playoff odds range from 0% to 100% and tell the fan the probability that a certain team will reach the MLB postseason. They are determined by a Monte Carlo simulation, which runs the baseball season thousands of times [10,000 times, specifically, for FanGraphs]. If a team reaches the postseason in 5,000 of those simulated seasons, the team is given a 50% probability of making the postseason. FanGraphs runs these simulations every day, so playoff odds can be collected daily and, when graphed, tell the story of a team’s season.
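The Monte Carlo idea can be sketched in a few lines. This is a toy illustration, not FanGraphs’ actual model: the per-game win probability, ten-game “season,” and six-win playoff bar are all made-up assumptions.

```python
import random

def playoff_odds(win_prob, games, wins_needed, n_sims=10_000, seed=1):
    """Estimate playoff probability by simulating the season n_sims times."""
    rng = random.Random(seed)
    made_it = 0
    for _ in range(n_sims):
        # Each game is an independent coin flip with the team's win probability.
        wins = sum(rng.random() < win_prob for _ in range(games))
        if wins >= wins_needed:
            made_it += 1
    # Fraction of simulated seasons that ended in a postseason trip.
    return made_it / n_sims

# A .600 team needing 6 wins in its last 10 games (illustrative numbers).
odds = playoff_odds(win_prob=0.60, games=10, wins_needed=6)
```

The real model simulates full schedules with projected team strengths, but the counting logic is the same: odds are simply made-it divided by simulations run.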

Above is a composite graph of the three different types of teams. The Dodgers were identified as a good team early in the season, and their playoff odds stayed high because of consistently good play. The Brewers started their season strong but had two steep drop-offs in early July and early September. Even though the Brewers had more wins than the Dodgers, the FanGraphs playoff odds never valued the Brewers more than the Dodgers. The Royals started slow and finished strong to secure their first postseason berth since 1985. All these seasons are different, and their stories are captured by the graph. Generally, this is how fans will remember their team’s season: by the storyline.

Since the playoff odds change every day and settle at either 100% or 0% by the end of the season, the projections need to be compared against the actual end-of-season results. A playoff probability of 85% means that 85% of the time, teams with the given parameters will make the postseason.

I gathered the entire 2014 season of playoff odds from FanGraphs and put the predictions in buckets spanning 10% increments of playoff probability. If the odds are well calibrated, 20% of the predictions in the 20% bucket should belong to teams that went on to the postseason, and the same logic applies to the 0%, 10%, 30%, and all the other buckets.
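The bucketing exercise can be sketched like this; the (prediction, outcome) pairs below are made-up toy data, not the actual 2014 odds.

```python
def calibration_buckets(predictions):
    """Group (probability, made_postseason) pairs into 10% bins and return
    the observed postseason rate for each bin that has any predictions."""
    buckets = {}
    for prob, made_it in predictions:
        bin_low = min(int(prob * 10), 9) * 10   # 0, 10, 20, ..., 90
        buckets.setdefault(bin_low, []).append(made_it)
    # Observed rate per bin: fraction of team-days that ended in October baseball.
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

# Toy data: two low-odds misses, a coin-flip pair, two high-odds hits.
toy = [(0.05, 0), (0.08, 0), (0.22, 0), (0.25, 1), (0.81, 1), (0.85, 1)]
rates = calibration_buckets(toy)   # {0: 0.0, 20: 0.5, 80: 1.0}
```

For a well-calibrated model, each bin’s observed rate should sit near the middle of the bin, which is exactly the comparison the chart below makes.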

Above is a chart comparing the buckets to the actual results. Since this uses only one year of data, and only 10 teams made the playoffs, the results don’t quite match the buckets. The pattern is encouraging, but I would insist on looking at multiple years before drawing any real conclusions. The results for any given year are subject to the ‘stories’ of the 30 teams that played that season. For example, the 2014 season did not have a team like the 2011 Red Sox, who failed to make the postseason after having a > 95% playoff probability. That is colloquially considered an epic ‘collapse’, but a 95% probability not only implies there’s a chance the team might fail, it PREDICTS that 5% of such teams will fail. So there would be nothing wrong with the playoff odds model if ‘collapses’ like the Red Sox’s happened only once in a while.

The playoff probability model relies on an expected winning percentage. Unlike a binary variable like making the postseason, a winning percentage is more continuous, which makes the model easier to evaluate. For the most part, teams stay near their initial predicted winning percentage and come quite close to it by the end of the season. Not every prediction is correct, but if there are enough good predictions, the predictive model is useful.

Teams also aren’t static: they can become worse by trading away players at the trade deadline or improve by acquiring those good players who were traded. There are also factors, like injuries or player improvement, that the prediction system can’t account for because they are by definition unpredictable. The following line graph allows you to pick a team and check how it did relative to the predicted winning percentage. Some teams, like the Pirates, are spot on, but a few, like the Orioles, are really far off.

The residual distribution [the actual values − the predicted values] should be a normal distribution centered around 0 wins. The following graph shows the residual distribution in number of wins; the teams in the middle had actual results close to the predicted values, while the values on the edges of the distribution are more extreme deviations. You would expect improved teams to balance out the teams that got worse. However, the graph is skewed toward the teams that became much worse, implying there is some mechanism that makes bad teams lose even more often. This is where attitude, trades, and changes in strategy come into play. I’d go so far as to say this is evidence that a team’s soft skills, like chemistry, can break down.
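One simple way to quantify that skew is the sample skewness of the residuals: a clearly negative value means the distribution leans toward teams that underperformed their projection. The residuals below are illustrative, not the real 2014 numbers.

```python
def skewness(xs):
    """Sample skewness: third central moment over the 1.5 power of the second."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Toy residuals (actual wins - predicted wins): mostly small misses,
# plus one team that fell apart badly.
residuals = [3, 1, 0, 2, -1, 1, 0, -12]
skew = skewness(residuals)   # negative: the tail points toward collapses
```

A symmetric, well-behaved residual distribution would have skewness near zero; the left tail here is what the paragraph above is describing.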

Since I don’t have access to more years of FanGraphs projections or other projection systems, I can’t do a full evaluation of the team projections. More years of playoff odds should yield probability buckets that reflect the expectation much better than a single year. This would allow for more than 10 different paths to the postseason to be present in the data. In the absence of this, I would say the playoff odds and predicted win expectancy are on the right track and a good predictor of how a team will perform.

Evaluating the Eno Sarris Pitcher Analysis Method

For regular listeners of the Sleeper and the Bust podcast, I do not need to explain the Eno Sarris Pitcher Analysis Method (let’s drop the Eno and keep the Sarris so we can call it SPAM). For those who aren’t familiar, you can see it at work in this article and this one over here. Basically, it is based on the idea that a pitcher can be evaluated by comparing his performance in several key metrics against league averages, primarily swinging-strike rates and groundball rates by pitch type.

I wanted to see how well this method works, so I grabbed my handy Excel toolkit and pulled down lots of pitching data. Unfortunately, pitch-type PITCHf/x data is not on the FanGraphs leaderboard (come on, Appelman!), so I headed over to Baseball Prospectus to use their PITCHf/x leaderboards. I pulled the GB/BIP, swing%, whiff/swing, and velocity data for all starters who threw at least 50 pitches of a given type in a season. Is 50 pitches an arbitrary cut-off? Yes, yes it is.

I included four-seam fastballs, two-seam fastballs, cut fastballs, curves, sliders, changeups, and split-fingers. I used all the data that was available, which goes back to 2007. And, because I am impatient and couldn’t wait until the 2014 season was in the books, I didn’t include data from the last two weeks of this season. I calculated the swinging strike % by multiplying the swing % and the whiff/swing values together. After this, I pulled the K%, ERA, and WHIP data from the FanGraphs leaderboards. In all, I analyzed 1,851 pitcher-seasons.
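That swinging-strike calculation is just the product of two rates: the share of pitches swung at, times the share of swings that miss. A minimal sketch, with illustrative numbers:

```python
def swinging_strike_rate(swing_pct, whiff_per_swing):
    """SwStr% = fraction of pitches swung at * fraction of swings that miss."""
    return swing_pct * whiff_per_swing

# e.g. a slider swung at 50% of the time with a 30% whiff-per-swing rate
# yields a 15% swinging-strike rate on that pitch.
swstr = swinging_strike_rate(0.50, 0.30)
```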

Note: the swinging strike rates I calculated do differ from those on the player pages at FanGraphs. I’m not sure why there is a discrepancy since they are both based on PITCHf/x data, but there is one. Therefore, I did not use the FanGraphs pitch-type benchmarks in this analysis.

I pulled K%, ERA, and WHIP because I wanted to use these as proxies for pitching outcomes (i.e. my dependent variables). I amended SPAM to include four-seam velocity, because we all know how much of an effect velocity has on run prevention.

Here’s how I did this. I first calculated the league averages for each metric for each season to account for the pitching environment of that season. The table below shows the league average values for each of the metrics for each season.

League-Average Swinging-Strike Rate (SwStr%) by Pitch Type

Year   FF     FT     FC     CU     SL     CH     FS
2007   6.1%   4.6%   10.0%  10.2%  13.4%  13.1%  13.9%
2008   5.9%   4.5%   9.7%   9.7%   14.2%  13.0%  14.1%
2009   6.1%   4.7%   9.7%   10.1%  14.1%  12.5%  15.2%
2010   6.0%   4.8%   9.8%   9.5%   14.1%  13.5%  14.5%
2011   6.3%   4.5%   9.1%   9.9%   14.9%  12.8%  14.7%
2012   6.6%   5.0%   10.3%  10.9%  15.6%  13.1%  15.5%
2013   6.7%   5.1%   9.3%   10.5%  15.0%  13.8%  17.2%
2014   6.6%   5.1%   9.8%   10.7%  15.5%  14.3%  17.4%

League-Average Groundball Rate (GB%) by Pitch Type, Four-Seam Velocity, and Walk Rate

Year   FF      FT      FC      CU      SL      CH      FS      FF Velocity   BB%
2007   33.8%   49.8%   44.8%   47.2%   42.9%   48.1%   52.9%   91.06         8.92%
2008   33.2%   49.9%   44.1%   48.7%   44.1%   46.8%   52.0%   90.87         9.17%
2009   33.1%   48.9%   42.9%   50.6%   43.7%   47.2%   53.5%   91.17         9.13%
2010   35.6%   48.9%   43.9%   50.1%   44.0%   47.6%   52.9%   91.22         8.61%
2011   33.8%   49.9%   45.2%   48.9%   45.8%   47.3%   54.7%   91.57         8.23%
2012   34.0%   50.9%   43.8%   52.2%   43.9%   48.6%   53.2%   91.76         8.36%
2013   34.6%   51.4%   45.0%   50.2%   45.8%   47.4%   54.6%   92.02         8.33%
2014   35.8%   50.6%   46.1%   49.9%   45.3%   50.3%   52.7%   92.24         7.84%

I then gave each pitcher one point for each metric that was above the league average. For example, King Felix this year gets above-average whiffs on five pitches and above-average grounders on four, and he has above-average four-seam velocity, so he gets ten points. A pitcher’s SPAM score for a season is simply the sum of these points.
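The scoring rule can be sketched as follows. The metric names and league-average values here are illustrative stand-ins for a single season, not the full per-pitch set described above.

```python
def spam_score(pitcher, league_avg):
    """One point for each metric on which the pitcher beats the league average
    for that season; the SPAM score is the total."""
    return sum(1 for metric, value in pitcher.items()
               if metric in league_avg and value > league_avg[metric])

# Illustrative league averages for one season (a subset of the real metrics).
league_avg = {"FF_swstr": 0.066, "SL_swstr": 0.155, "FF_gb": 0.358, "FF_velo": 92.2}

# A hypothetical pitcher: above average on whiffs and velocity, below on grounders.
pitcher = {"FF_swstr": 0.09, "SL_swstr": 0.22, "FF_gb": 0.30, "FF_velo": 95.0}
score = spam_score(pitcher, league_avg)   # 3 of the 4 metrics beat the average
```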

Here is a table of some randomly-selected pitcher-seasons to give you an idea of the range of SPAM scores I found. It shows that there are certainly outliers: guys with good results and bad scores, or vice versa.

Player            Year   Score   ERA    WHIP
Felix Hernandez   2014   10      2.07   0.91
Zach McAllister   2014    6      5.51   1.49
Yu Darvish        2012   11      3.90   1.28
Bronson Arroyo    2011    2      5.07   1.37
Drew Pomeranz     2011    1      5.40   1.31
Johan Santana     2008    7      2.53   1.15
Zack Greinke      2008    8      3.47   1.28
Edinson Volquez   2010    9      4.31   1.50

Before we dive into the results, a caveat: I am not a statistician, but I am an engineer, so maybe I’m not completely off the hook. I am looking at these results from a high level and a simple perspective; maybe I can build off them and look for deeper connections in the future. First, let’s just look at some averages.

SPAM without BB%: Averages in Each SPAM Bin

SPAM Score   ERA    WHIP   K%      # of Pitcher-Seasons
0            7.27   1.77   13.3%    44
1            5.92   1.62   14.1%   120
2            5.64   1.56   15.4%   218
3            5.05   1.49   15.9%   298
4            4.72   1.42   16.9%   297
5            4.52   1.40   17.2%   293
6            4.14   1.34   18.7%   226
7            4.02   1.31   19.7%   182
8            3.79   1.30   19.9%   110
9            3.60   1.27   20.9%    38
10           3.39   1.20   22.1%    17
11           3.42   1.21   23.0%     7
12           3.45   1.12   26.8%     1

The above table shows the average K%, ERA, WHIP for each SPAM score, along with the number of pitcher-seasons that earned that score.

Finally, onto the scatter plots! First up, we have the K% vs. SPAM score graph. We expect this one to have a strong positive correlation, since whiff rates and velocity normally correspond to strikeouts (ground balls, not so much). I used a simple linear regression, since it seemed to be the best fit and the easiest to understand.

Here is the WHIP vs. SPAM score graph.

Here is the ERA vs. SPAM score graph.

Obviously, none of these show strong R² values, but the table of averages above and these graphs do show a clear trend, with higher scores mostly leading to lower ERAs and WHIPs and higher K%.

None of the above accounts for control directly, so I tried adding BB% as another metric to the SPAM score. I computed the league-average walk rate for each season and handed out the points. The addition of BB% changed the values but didn’t really impact the trends. Below is the averages table for the SPAM scores with BB%, followed by the three graphs again. The linear trend lines are a slightly better fit now, but nothing earth-shattering.

SPAM with BB%: Averages in Each SPAM Bin

SPAM Score   ERA    WHIP   K%      # of Pitcher-Seasons
0            7.70   1.88   13.1%    27
1            6.38   1.73   13.7%    73
2            6.02   1.65   15.0%   160
3            5.24   1.52   15.7%   254
4            4.89   1.45   16.2%   287
5            4.69   1.42   17.2%   289
6            4.34   1.37   17.4%   270
7            4.03   1.31   19.1%   197
8            3.90   1.29   20.0%   160
9            3.71   1.27   20.1%    80
10           3.44   1.22   21.0%    34
11           3.41   1.20   22.5%    14
12           3.36   1.20   21.5%     5
13           3.45   1.12   26.8%     1

So, what does all this tell us? Well, it seems that Eno’s SPAM method does a pretty good job of identifying pitchers who will be successful, and it is useful for spotting breakout pitchers. The beauty of this method is that it does not require a lot of data. Per-pitch metrics stabilize faster than per-plate-appearance ones, so we can start to evaluate pitchers after only a start or two instead of waiting for the 170 PA required for BB% or the 70 PA for K%. I plan on digging deeper into this data over the offseason to see if I can pull any more insights from it. Please let me know in the comments if you think of something worth investigating further. Eno, if you are reading this, I hope I gave your method the treatment it deserves. And, as I do in all of my online ramblings, I will end with Tschüs!

Another Look at Momentum in October

It’s coming to that time of year when baseball fans hunker down into their deep-seated trenches of pro-momentum and anti-momentum factions regarding playoff baseball. Dave Cameron wrote just the other day about how teams that do better in the second half don’t do any better come playoff time, and just about every Baseball Prospectus article these days will mention that narratives can be written either way after the fact; before the fact, we simply don’t know whether the hot team will “stay hot” or the struggling team will manage “to right the ship” come playoff time.

Since this post is showing up on FanGraphs, the readers will likely surmise (correctly) that I have historically sided with the anti-momentum crowd. However, an interesting thing happened the other day.

I was trying to make my case to a friend about how hot teams don’t have an inherent advantage, so I made my way over to Baseball-Reference to check in on some of the most recent World Series winners. I wanted to see how they had performed in September (or those few regular season games that spill into October in certain years) to prove that you didn’t need to be hot to end the season in order to capture baseball’s biggest prize. So I started with last season.

As it turns out, the Red Sox went 16-9 over baseball’s final month, their second-best month of the season. Well, that doesn’t prove anything; it’s simply one year. Then I went to 2012. As it turns out, the Giants went 20-10 from the beginning of September, their best month of baseball all season. Still, that’s only two. The Cardinals of 2011 would certainly be different. Nope. Eighteen and eight, their best month of baseball of the season. This could go on for a while, but let’s simply go to the chart:

World Series Champions’ Late Season Success 2002-2013

WS Champ    Year   Sept/Oct W-L   Month W/L%   Season W-L   Season W/L%   Month Rank
Red Sox     2013   16-9           0.640        97-65        0.599         2nd
Giants      2012   20-10          0.667        94-68        0.580         1st
Cardinals   2011   18-8           0.692        90-72        0.556         1st
Giants      2010   19-10          0.655        92-70        0.568         2nd
Yankees     2009   20-11          0.645        103-59       0.636         3rd
Phillies    2008   17-8           0.680        92-70        0.568         1st
Red Sox     2007   16-11          0.593        96-66        0.593         3rd
Cardinals   2006   12-17          0.414        83-78        0.516         5th
White Sox   2005   19-12          0.613        99-63        0.611         4th
Red Sox     2004   21-11          0.656        98-64        0.605         3rd
Marlins     2003   18-8           0.692        91-71        0.562         2nd
Angels      2002   18-9           0.667        99-63        0.611         2nd
Total              214-124        0.633        1134-809     0.584

As the reader can see, the World Series champs of the past twelve years were almost always playing some of their best baseball in the final month (with an occasional October nubbin) of the season. Sure, the 2006 Cardinals were under .500, but they were a pretty fluky team in general, owning the fewest regular-season wins ever for a championship team. Other than those Cardinals, however, every team was above .500 during those final four-to-five weeks. And sure, some of that can be explained by the fact that these are top teams, likely to be near .500 or above every month, but in seven of the last twelve years these teams had either their best or second-best month right at the end of the season. Their September/October winning percentage was nearly fifty points higher than their season totals, and the gap would be even larger if those strong September/October records were removed from the season totals.

I began to wonder if I had stumbled onto something.

Sure, Cameron and the Prospectus gang had shown that momentum didn’t matter in a large-N analysis of the entire second half and the playoffs as a whole, but maybe on a smaller scale this phenomenon held some water. So I expanded my search back to the beginning of the Wild Card era, which seemed a natural breaking point given that before 1995 (well, technically 1994, but we all know how that played out) only four teams made the playoffs (itself an expansion from the two teams that made it throughout baseball history until 1969). Let’s check out the chart:

World Series Champions’ Late Season Success 1995-2013

WS Champ       Year   Sept/Oct W-L   Month W/L%   Season W-L   Season W/L%   Month Rank
Red Sox        2013   16-9           0.640        97-65        0.599         2nd
Giants         2012   20-10          0.667        94-68        0.580         1st
Cardinals      2011   18-8           0.692        90-72        0.556         1st
Giants         2010   19-10          0.655        92-70        0.568         2nd
Yankees        2009   20-11          0.645        103-59       0.636         3rd
Phillies       2008   17-8           0.680        92-70        0.568         1st
Red Sox        2007   16-11          0.593        96-66        0.593         3rd
Cardinals      2006   12-17          0.414        83-78        0.516         5th
White Sox      2005   19-12          0.613        99-63        0.611         4th
Red Sox        2004   21-11          0.656        98-64        0.605         3rd
Marlins        2003   18-8           0.692        91-71        0.562         2nd
Angels         2002   18-9           0.667        99-63        0.611         2nd
Diamondbacks   2001   14-13          0.519        92-70        0.568         5th
Yankees        2000   13-18          0.419        87-74        0.540         5th
Yankees        1999   17-14          0.548        98-64        0.605         5th
Yankees        1998   16-11          0.593        114-48       0.704         6th
Marlins        1997   12-15          0.444        92-70        0.568         6th
Yankees        1996   16-11          0.593        92-70        0.568         t-3rd
Braves         1995   16-12          0.571        90-54        0.625         4th
Total                 318-218        0.593        1799-1259    0.588

And wouldn’t you know it. Yet more proof that a big enough sample size can debunk almost any baseball myth. From 1995-2001, there were a pair of losing records, and the best any team did was its tied-for-third-best month of the season. With the addition of only those seven years, the winning-percentage gap that had been so big before is now nearly even: the September/October mark almost exactly matches the season as a whole. Now, if the World Series rolls around in a month and the Orioles and Cardinals (the two best September records in 2014, so far) are playing, maybe we can pay a little bit of attention to this trend, since it has been prevalent for over a decade. But if we get to the World Series and it ends up as a 1989 Bay Bridge Series rematch, with the ice-cold A’s and the only slightly warmer Giants squaring off, we’ll know that once again the large-sample-size guys have won.

2014 Ken Giles: 2011 Craig Kimbrel’s Long-Lost Brother

With 2014’s baseball season winding down, end of year award discussion is starting to kick into high gear. It seems every day there’s a new article discussing X player’s case for winning Y award, when likely Z will win it.

Mike Petriello wrote an article discussing the NL Rookie of the Year race, and in it stated that it comes down to two players: Billy Hamilton of the Reds or Jacob deGrom of the Mets. Ken Giles of the Phillies may not be considered a contender for the award, but by every statistical measure Giles’s 2014 rookie season compares favorably with Craig Kimbrel’s 2011 RoY-winning season.

In 2011, the NL Rookie of the Year award was a unanimous decision — Craig Kimbrel! Ice in his veins! 46 saves! Those strikeouts! That slider! Could you vote for anyone else in good conscience?

Kimbrel was (and still is) a fantastic pitcher. But if his case for Rookie of the Year was unanimous, does that mean Ken Giles should also garner some consideration? And if Ken Giles had started the season at the Major League level and produced like he has so far, what would that look like? Would he have a better shot then? Let’s dive into the numbers.

Note: I am not an expert with projections. Therefore, all rate stats will stay the same between Ken Giles’s actual 2014 season and the full-season extrapolation. Sorry to disappoint.

First, some dashboard stats:

Both pitchers allow a very low AVG despite having average to below-average luck on BABIP. Their LOB% is well above average, and they don’t allow a lot of home runs. As a result, their accumulated WAR values are both very good. Let’s dig into some rate stats to see how they compare there.

By FIP and xFIP, these pitchers are comparable. By ERA Giles has the advantage, which likely can be explained by the difference in BABIP.

Both pitchers have K rates that are simply awesome. Kimbrel gives up a few more free passes, but he makes up for it with a few more K’s. As a result, their xFIPs are nearly identical.

Now let’s look at how they achieve these results:

Stuff-wise, they mirror one another: both are fastball/slider guys, with real heat on their fastballs and sliders that fare rather well.
The real eye-opener is that they even attack hitters the same way. Take a look at Kimbrel’s Pitch% heat chart in comparison with Giles’s. They are remarkably close to one another.

So we have two pitchers that have great stuff and get great results, but Giles is not considered a candidate. Why? Oh right:

Kimbrel was the closer, and Giles was stuck behind the 13-million-dollar man.

That should not sway our opinion and lead us to devalue the year Giles has had. We are smarter than that! If Giles had been up since April (and ready to face major-league hitters), in all likelihood we’d be talking about him when it came to NL RoY voting.

One last note: among rookies with a minimum of 40 IP, only two have ever had a lower FIP than Ken Giles, and those seasons came in 1884 (Henry Porter, 1.27) and 1908 (Roy Witherup, 1.31). Baseball history is long and filled with many numbers. Ken Giles ranks near the top of that list, and the two players in front of him played in the dead-ball era. What Giles is doing is special and should be recognized.

Your One-Stop Shop for Postseason Narrative Debunking

I, like you, have been hearing and reading a lot about the postseason and which teams are best positioned to go deep into October. The rationales aren’t always based on more tangible factors — like, say, which teams are good — but rather on “hidden” or “insider” clues (use of scare quotes completely intentional) drawn from other qualities. I decided to test each of the factors I’ve heard or read about.

Full disclosure: This isn’t exactly original research. Well, it is, in that I made a list of various hypotheses to test, decided how to test them, and then spent hours pulling down data from Baseball-Reference and FanGraphs in order to create an unwieldy 275×23 spreadsheet full of logical operators. But it isn’t original in that some of the questions I’m addressing have been addressed elsewhere. For example, I’m going to consider whether postseason experience matters. David Gassko at the Hardball Times addressed the impact of players’ postseason experience on postseason outcomes in 2008, and Russell Carleton at Baseball Prospectus provided an analysis recently. I’m not claiming to be the first person to have thought of these items or of putting them to the test. What I’ve got here, though, is an attempt to combine a lot of narratives in one place, and to bring the research up to date through the 2013 postseason.

I’m going to look at six questions:

• Does prior postseason experience matter?
• Do veteran players have an edge?
• How important is momentum leading into the postseason?
• Does good pitching stop good hitting?
• Are teams reliant on home runs at a disadvantage?
• Is having one or more ace starters an advantage?

For each question, I’ll present my research methodology and my results. Then, once I’ve presented all the conclusions, I’ll follow up with a deeper discussion of my research methodology for those of you who care. (I imagine a lot of you do. This is, after all, FanGraphs.) In all cases, I’ve looked at every postseason series since the advent of the current Divisional Series-League Championship Series-World Series format in 1995. (I’m ignoring the wild card play-in/coin-flip game.) That’s 19 years, seven series per year (four DS, two LCS, one WS), 133 series in total.

DOES POSTSEASON EXPERIENCE MATTER?

The Narrative: Teams that have been through the crucible of baseball’s postseason know what to expect and are better equipped to handle the pressures–national TV every game, zillions of reporters in the clubhouse, distant relations asking for tickets–than teams that haven’t been there before.

The Methodology: For each team, I checked the number of postseason series they played over the prior three years. The team with the most series was deemed the most experienced. If there was a tie, no team was more experienced. I also excluded series in which the more experienced team had just one postseason series under its belt, i.e., a Divisional Series elimination. I figured a team had to do more than a single one-and-done in the past three years to qualify as experienced. In last year’s NLCS, for example, the Cardinals had played in five series over the past three years (three in 2011, two in 2012), while the Dodgers had played in none. So St. Louis got the nod. In the Dodgers’ prior Divisional Series, LA played an Atlanta team that had lost a Divisional Series in 2010, its only postseason appearance in the prior three years, so neither team got credit for experience.

The Result: Narrative debunked. There have been 101 series in which one team was more experienced than the other, per my definition. The more experienced team won 50 of those series, or 49.5%. There is, at least since 1995, no relationship between postseason experience and success in the postseason.

DO VETERAN PLAYERS HAVE AN EDGE?

The Narrative: The pressure on players grows exponentially in October. A veteran presence helps keep the clubhouse relaxed and helps players perform up to their capabilities, yet stay within themselves. Teams lacking that presence can play tight, trying to throw every pitch past the opposing batters and trying to hit a three-run homer with the bases empty on every at bat. (Sorry, I know, I’m laying it on thick, but that’s what you hear.)

The Methodology: For each team, I took the average of the batters’ weighted (by at bats + games played) age and the pitchers’ weighted (by 3 x games started + games + saves) age. I considered one team older than the other if its average age was at least 1.5 years older than its opponent’s. For example, in the 2012 ALCS, the Yankees’ average age was 31.5 and the Tigers’ was 28.1, so the Yankees had a veteran edge. When the Tigers advanced to the World Series against San Francisco, the Giants’ average age was 28.9, so neither team had an advantage.
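The weighting scheme above can be sketched as follows; the two-player rosters are made-up toy data, just to show the mechanics.

```python
def weighted_age(players, weight_fn):
    """Average age weighted by the playing-time weight each player gets."""
    total_w = sum(weight_fn(p) for p in players)
    return sum(p["age"] * weight_fn(p) for p in players) / total_w

# Toy rosters: one veteran regular and one younger part-timer on each side.
batters = [{"age": 34, "ab": 500, "g": 140}, {"age": 26, "ab": 300, "g": 90}]
pitchers = [{"age": 30, "gs": 32, "g": 33, "sv": 0},
            {"age": 24, "gs": 0, "g": 60, "sv": 40}]

# Batters weighted by AB + G; pitchers by 3*GS + G + SV, as described above.
bat_age = weighted_age(batters, lambda p: p["ab"] + p["g"])
pit_age = weighted_age(pitchers, lambda p: 3 * p["gs"] + p["g"] + p["sv"])
team_age = (bat_age + pit_age) / 2
```

The playing-time weights keep a September call-up from dragging down the average the way a simple roster mean would.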

The Result: Narrative in doubt. There have been 51 series in which one team’s average age was 1.5 or more years greater than the other. The older team won 27 of those series, or 53%. That’s not enough to make a definite call. And if you take away just one year–2009, when the aging Yankees took their most recent World Series–the percentage drops to 50%–no impact at all.

HOW IMPORTANT IS MOMENTUM LEADING INTO THE POSTSEASON?

The Narrative: Teams that end the year on a hot streak can carry that momentum right into the postseason. By contrast, a team that plays mediocre ball leading up to October develops bad habits, or forgets how to win, or something. (Sorry, but I have a really hard time with this one. We’re hearing it a lot this year–think of the hot Pirates or the cold A’s–but there are other teams, like the Orioles, who have the luxury of resting their players and lining up their starting rotation. I have a hard time believing that the O’s 3-3 record since Sept. 17 means anything.)

The Methodology: I looked up each team’s won-lost percentage over the last 30 days of the season and deemed a team as having more momentum if its winning percentage was 100 or more percentage points higher than that of its opponent. For example, in one of last year’s ALDS, the A’s were 19-8 (.704 winning percentage) over their last 30 days and the Tigers were 13-13 (.500), so the A’s had momentum. The Red Sox entered the other series on a 16-9 run (.640) and the Rays were 17-12 (.586), so neither team had an edge.
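The 100-point rule can be sketched directly from the two 2013 ALDS examples above:

```python
def momentum_edge(record_a, record_b, threshold=0.100):
    """Return which team (if either) has momentum: a winning percentage over
    the last 30 days at least `threshold` (100 points) above the opponent's."""
    pct = lambda w, l: w / (w + l)
    diff = pct(*record_a) - pct(*record_b)
    if diff >= threshold:
        return "A"
    if diff <= -threshold:
        return "B"
    return None

edge1 = momentum_edge((19, 8), (13, 13))   # A's (.704) vs. Tigers (.500): A's have momentum
edge2 = momentum_edge((16, 9), (17, 12))   # Red Sox (.640) vs. Rays (.586): no edge
```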

The Result: Narrative in doubt, and even then only for the Divisional Series. There have been 64 series in which one team’s winning percentage over its past 30 days was 100 percentage points higher than that of its opponent. In those series, the team with the better record won 33, or 51.6% of the time. That’s not much of an edge. And when you consider that a lot of those were in the Divisional Series, where the rules are slanted in favor of the better team (the team with the better record generally gets home-field advantage), it goes away completely. Looking just at the ALCS, NLCS, and World Series, the team with the better record over the last 30 days of the season won 13 of 27 series, or 48%, debunking the narrative. In the Divisional Series, the hotter team over the last 30 days won 20 of 37 series, or 54%. That’s an edge, but not much of one.

DOES GOOD PITCHING STOP GOOD HITTING?

The Narrative: Pitching and defense win in October. Teams that hit a lot get shut down in the postseason.

The Methodology: I struggled with a methodology for this one. I came up with this: When a team whose hitting (measured by park-adjusted OPS) was 5% better than average faced a team whose pitching (by park-adjusted ERA) was 5% better than average, I deemed it as a good-hitting team meeting a good-pitching team. For example, the 2012 ALCS featured a good-hitting Yankees team (112 OPS+) against a good-pitching Tigers team (113 ERA+). The Yankees were also good-pitching (110 ERA+), but the Tigers weren’t good-hitting (103 OPS+).

The Result: Narrative in doubt. There have been 65 series in which a good-hitting team faced a good-pitching team, as defined above. (There were four in which both teams qualified as good-hitting and good-pitching; in those cases, I went with the better-hitting team for compiling my results.) In those series, the better-hitting team won 32 times, or 49%. That is, good hitting beat good pitching about half the time. That pretty much says it.

ARE TEAMS RELIANT ON HOME RUNS AT A DISADVANTAGE?

The Narrative: Teams that sit back and wait for home runs are at a disadvantage in the postseason, when better pitching makes run manufacture more important. Scrappy teams advance, sluggers go home.

The Methodology: I calculated each team’s percentage of runs derived from home runs. In every series, if one team derived its runs from homers at a rate at least five percentage points higher than its opponent, I deemed that team reliant on home runs. For example, in last year’s NLCS, the Cardinals scored 204 of their 783 runs on homers (26%), while the Dodgers scored 207 of their 649 via the long ball (32%), so the Dodgers were more reliant on home runs. In the ALCS, the Red Sox scored 36% of their runs (305/853) on homers compared to 38% for the Tigers (301/796), so neither team had an edge.
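The calculation behind that example is just a ratio and a gap check; here it is as a sketch using the Cardinals/Dodgers figures above:

```python
def hr_run_share(hr_runs, total_runs):
    """Share of a team's runs that scored via the home run."""
    return hr_runs / total_runs

cards = hr_run_share(204, 783)    # Cardinals, 2013: about 26% of runs on homers
dodgers = hr_run_share(207, 649)  # Dodgers, 2013: about 32%

# The Dodgers clear the five-percentage-point gap, so they count as HR-reliant.
more_reliant = dodgers - cards >= 0.05
```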

The Result: Narrative in doubt. There have been 60 series in which one team’s share of runs from homers was at least five percentage points greater than its opponent’s. In those series, the more homer-happy team won 27 series, or 45% of the time. So the less homer-reliant team won 55%, which is OK, but certainly not a strong majority. And if you remove just one year–2012, when the less homer-reliant team won six series (three of those victories were by the Giants)–the percentage drops to 50%.

IS HAVING ONE OR MORE ACE STARTERS AN ADVANTAGE?

The Narrative: An ace starter can get two starts in a postseason series (three if he goes on short rest in the seventh game of a Championship or World Series). Assuming he wins, that means his team needs to win only one of the three remaining games in a Divisional Series, and only two of five or one of four in a Championship or World Series. A team lacking such a lights-out starter is at a disadvantage.

The Methodology: This is another one I struggled with. Defining an “ace” isn’t easy. I arrived at this: I totaled the Cy Young Award points for each team’s starters. If one team’s total exceeded the other’s by 60 or more points — the difference between the total number of first- and second-place votes since 2010 — I determined that team had an edge in aces. (The threshold was half that prior to 2010, because the voting system changed that year: ballots went from three deep to five deep, and the gap between a first- and second-place vote rose from one point to two.) For example, in last year’s Boston-Tampa Bay Divisional Series, the only starter to receive Cy Young consideration was Tampa Bay’s Matt Moore, who got four points for two fourth-place votes. That’s not enough to give the Rays an edge. But in the other series, the Tigers’ Max Scherzer (203 points) and Anibal Sanchez (46) combined for 249 points, while the A’s got 25 points for Bartolo Colon. That gives the edge to the Tigers.
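The points comparison can be sketched from the Tigers/A’s example above; the 60-point threshold is the post-2010 value given in the text.

```python
def ace_edge(points_a, points_b, threshold=60):
    """Return which team (if either) has the ace edge: a Cy Young vote-point
    total for its starters at least `threshold` points above the opponent's."""
    diff = sum(points_a) - sum(points_b)
    if diff >= threshold:
        return "A"
    if diff <= -threshold:
        return "B"
    return None

tigers = [203, 46]   # Scherzer and Sanchez, 2013 Cy Young points
athletics = [25]     # Colon
edge = ace_edge(tigers, athletics)   # 249 vs. 25: Tigers get the edge
```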

The Result: Narrative in doubt. There have been 82 series in which one team’s starters got significantly more Cy Young Award vote points than its opponent’s. The team with the higher total won 44 series, or just under 54%. That’s not much better than a coin flip. And again, one year — in this case, 2001, when the team with the significantly higher Cy Young tally won six series — tipped the balance. Without the contributions of Randy Johnson, Curt Schilling, Roger Clemens, Freddy Garcia, Jamie Moyer, and Mike Mussina to that year’s postseason, the team with the apparent aces has won just 38 of 76 series, exactly half.

Conclusion: None of the narratives I examined stand up to scrutiny. Maybe the team that wins in the postseason, you know, just plays better.

Now, About the Methodology: I know there are limitations and valid criticisms of how I analyzed these data. Let me explain myself.

For postseason experience, I feel pretty good about counting the number of series each team played over the prior three years. One could argue that I should’ve looked at the postseason experience of the players rather than the franchise, but I’ll defend my method. There isn’t so much roster and coaching staff turnover from year to year as to render franchise comparisons meaningless.

For defining veteran players, there are two issues. First, my choice of an age difference of 1.5 years is admittedly arbitrary. My thinking was pretty simple: one year doesn’t amount to much, and there were only 35 series in which the age difference was greater than two years. So 1.5 was a good compromise. Second, I know age isn’t the same as years of experience. But it’s an OK proxy, it’s readily available, and it’s the kind of thing the narrative’s built on. Bryce Harper has more plate appearances than J.D. Martinez, but he’s also over five years younger. Whom do you think the announcers will describe as the veteran?

For momentum, I think the 30-day split’s appropriate. I could’ve chosen 14 days instead of 30 — FanGraphs’ splits on its Leaders boards totally rock — but I thought that’d include too many less meaningful late-season games when teams, as I mentioned, might be resting players and setting up their rotations. As for the difference of 100 points for winning percentage, that’s also a case of an admittedly arbitrary number that yields a reasonable sample size. A difference of 150 points, for example, would yield similar results but a sample size of only 39 compared to the 64 I got with 100 points.

For good hitting and good pitching, I realize that there are better measures of “good” than OPS+ and ERA+: wRC+ and FIP-, of course, among others. But I wanted to pick statistics that were consistent with the narrative. When a sportswriter or TV announcer says “good pitching beats good hitting,” I’ll bet you that at least 99 times out of a hundred that isn’t shorthand for “low FIP- beats high wRC+.” If you and I were asked to test whether good pitching beats good hitting, that’s probably how we’d do it. But that’s not what we’re looking at here; OPS+ and ERA+ are more consistent with the narrative.

For reliance on home runs, it seems pretty clear to me that the right measure is percentage of runs scored via the long ball. Again, my choice of a difference of five percentage points is arbitrary, but it’s a nice round number that yields a reasonable sample size.

Finally, my use of Cy Young voting to determine a team’s ace or aces: Go ahead, open fire. I didn’t like it, either. But once again, we’re looking at a narrative, which may not be the objective truth. Look, Roger Clemens won the AL Cy Young Award in 2001 because he went 20-3. He was fourth in the league in WAR. He was ninth in ERA. He was third in FIP. He was, pretty clearly to me, not only not the best pitcher in the league, but only the third-best pitcher on his own team (I’d take Mussina and Pettitte first). But I’ll bet you that when the Yankees played the Mariners in the ALCS that year (too long ago for me to remember clearly), part of the storyline was how the Yankees got stretched to five games in the Divisional Series and therefore wouldn’t have their ace, Roger Clemens, available until the fourth game against the Mariners. Never mind that Pettitte was the MVP of the ALCS. The ace narrative is based on who’s perceived as the ace, not who actually is. (And a technical note: Until the Astros moved from the NL to the AL, the difference between first- and second-place votes in the two leagues was different, since there were 28 voters in the AL and 32 in the NL. The results I listed aren’t affected by that small difference. I checked.)

What the Hell Happened to Rafael Soriano?

Now that the division title belongs to the Nats, and the race for the number one seed in the NL is pretty much locked up, there are still a few reasons to watch the rest of the regular-season games (if you are a Nats fan). If I were an unbiased observer, I would find the whole Rafael Soriano situation fascinating. He was having a fantastic first half, and while his ERA was beating his peripherals by a decent margin, his peripherals were still pretty strong. There was reason to expect regression, but not reason to expect a full-on collapse. But Soriano has picked up over two runs on his ERA during the second half and gone from closer to “cross-your-fingers mop-up guy.” While watching another mentally exhausting Soriano “save” on Sunday, I wanted to figure out what exactly had happened to a season that started out so promisingly.

One thing that is important to remember is that relievers are volatile, and a few bad outings can throw things out of whack. In September, Soriano has given up 7 ER in 7.2 IP. That’s awful, but it’s also only seven-plus innings. You could find quite a few SPs this season who have had a stretch of 7.2 IP giving up 7 runs. Strikeouts haven’t been the problem either, as he is averaging over a K per inning and a 3:1 K/BB ratio. And despite the recent blow-up (by recent I mean the entire second half), Soriano is still sporting a 0.59 HR/9 rate for the season, which is much lower than his career mark of 0.86 HR/9. His BABIP is exactly the same as last year, his strikeouts are up significantly (6.89 K/9 in ’13 versus 8.70 this year), and while his walks are up too, the overall K-BB differential is stronger. Not to mention that he has the second-best SwStr% of his career after posting a career low in the same metric last year. So with all these seemingly positive things happening, what’s the deal? Where has this implosion come from if it doesn’t stem from gophers or a high BABIP against?

I think the answer is two-fold: extra-base hits, and a lack of infield fly balls. Below is a chart from 2013 of hit types against Soriano:

Here is 2014:

There are two important takeaways from these charts. One, even though Soriano gave up more dingers in 2013, he has given up significantly more extra-base hits this season. By my count, he gave up 15 extra-base hits last year, and 21 this year (including home runs). Six may not sound like a lot, but that’s a 40% increase. When you only throw 60 innings a season, that makes a huge difference.

Two, look at the location of the outs in 2014 compared to 2013. Notice how there are way more silver dots in the infield in 2013. For a pitcher, infield fly balls are the next best thing to strikeouts. They are an out basically 100% of the time, and runners can’t advance on them the way they can on a deep fly ball or a grounder. Soriano went from a 16.3% infield fly ball rate in 2013 to 7.4% this year. A pattern is forming here. For a guy who pitches with runners on base fairly frequently, infield fly balls and strikeouts are a fantastic way to get out of a jam. Even though Soriano has more Ks this year, he also has far fewer IFFBs, and the two almost offset one another. A lower IFFB% despite a higher overall fly ball percentage than in 2013 explains a lot of what’s happening here. More balls to the outfield mean more extra-base hits, and more runners advancing or scoring on outs.

I came up with a quick metric I’ll call “Nearly Automatic Out Percentage” to illustrate my point. Soriano has faced 248 total batters this year compared to 277 last year. He has 59 Ks and 6 IFFBs this year (65 total nearly automatic outs) for a NAO% of 26.2%, compared to 51 Ks and 14 IFFBs in 2013 (also 65 nearly automatic outs), a NAO% of 23.5%. These numbers are closer than I would have thought considering how much better Soriano has been with Ks this season. But when you factor in the additional extra-base hits and a few additional walks/HBP, it explains how the end result in 2014 can be so similar to 2013 (nearly the same WAR, ERA, and xFIP) in two completely different ways.
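The metric is simple enough to express in a few lines of Python. This is my own sketch of the calculation described above, using the article’s counts for Soriano:

```python
# "Nearly Automatic Out Percentage": (K + IFFB) / total batters faced.

def nao_pct(strikeouts, iffb, batters_faced):
    """Share of plate appearances ending in a strikeout or infield fly."""
    return (strikeouts + iffb) / batters_faced

# Soriano's counts from the article
print(round(nao_pct(59, 6, 248) * 100, 1))   # 2014 -> 26.2
print(round(nao_pct(51, 14, 277) * 100, 1))  # 2013 -> 23.5
```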

Chris Iannetta’s Peculiar Season

The BABIP gods are a most fickle bunch. They come and go as they please, gracing the bats of some while abandoning others altogether. Take Chris Johnson, for example. Aided by a .394 BABIP (roughly 10% greater than his career average), Johnson finished second to Michael Cuddyer in pursuit of the 2013 NL batting title. This season, however, Johnson’s batting average has dropped 58 points following a BABIP regression. Losing a portion of his hits has certainly hurt Johnson’s offensive production — this season, Johnson has produced runs at a rate 19% below league average.

BABIP is not entirely driven by luck, however. In fact, each hitter’s batted ball profile influences their BABIP. Generally speaking, players who hit more line drives and ground balls carry a higher BABIP than fly ball hitters. While it seems reasonable for Derek Jeter and Joe Mauer to carry career BABIPs in the neighborhood of .350, expecting Adam Dunn to sustain a similar BABIP would be folly.

Now, to Chris Iannetta. Sporting a career fly ball rate of 42.8%, the Angels’ backstop is a true fly ball hitter. Iannetta’s 2014 batted ball profile bears a striking resemblance to that of his 2013 campaign. Observe the table below:

Table 1: Batted Ball Profiles for Chris Iannetta, 2013 & 2014

| Year | FB% | League FB% | LD% | League LD% | GB% | League GB% | BABIP |
|------|-------|------------|-------|------------|-------|------------|-------|
| 2013 | 43.4% | 34.3% | 19.3% | 21.2% | 37.3% | 44.5% | .284 |
| 2014 | 42.5% | 34.4% | 20.3% | 20.7% | 37.2% | 44.9% | ? |

Very similar. Although a hitter’s BABIP is not solely dependent on his batted ball profile, we might reasonably expect Iannetta’s 2014 BABIP to reside in the neighborhood of his 2013 mark. Well, ladies and gentlemen, at the time of this writing, Chris Iannetta carries a 2014 BABIP of .330, a mark 16.6% above his career average of .283!

A peculiar development indeed. Let’s take a step back and examine Iannetta’s run production in a broader context:

Table 2: Offensive Production for Chris Iannetta, 2013 & 2014

| Year | BABIP | AVG | BB% | ISO | wRC+ |
|------|-------|------|-------|------|------|
| 2013 | .283 | .225 | 17.0% | .148 | 112 |
| 2014 | .330 | .252 | 14.7% | .148 | 128 |

The BABIP gods have certainly smiled on Iannetta this season. Despite identical power numbers (.148 ISO in both years) and a minor dip in plate discipline, Iannetta’s BABIP spike has fueled a 16-point jump in wRC+. Among catchers with a minimum of 350 plate appearances, Iannetta’s wRC+ currently ranks him as the sixth-best hitting catcher in the league. Iannetta’s newfound singles are certainly helping the Angels’ cause.

Because of random variation and luck, it is hardly rare for a hitter to experience a jump in BABIP. What is truly remarkable, however, is that Iannetta’s BABIP has jumped 15% above his career average while he has produced fly balls at a rate 20% greater than league average. To experience such a spike in BABIP while hitting a high percentage of fly balls seems quite rare. But how rare?

In order to better appreciate the peculiarity of Iannetta’s season and look for possible comparisons, I searched the past five seasons for players who experienced a BABIP jump 15% greater than career average while producing fly balls at a rate 20% above league average. Consider the table below:
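A sketch of that screen in Python follows. The thresholds come from the text; the example BABIP and fly-ball rates are Iannetta’s 2014 figures quoted above, while the plate-appearance total is an assumed placeholder, not his actual count:

```python
# Screen: season BABIP 15%+ above career average, FB% 20%+ above
# league average, minimum 400 PA.

def qualifies(babip, career_babip, fb_pct, league_fb_pct, pa,
              babip_jump=1.15, fb_ratio=1.20, min_pa=400):
    """True if the season clears both rate thresholds and the PA floor."""
    return (pa >= min_pa
            and babip >= career_babip * babip_jump
            and fb_pct >= league_fb_pct * fb_ratio)

# Iannetta-like line: .330 BABIP vs .283 career, 42.5% FB vs 34.4% league
# (450 PA is a placeholder, not his actual total)
print(qualifies(0.330, 0.283, 0.425, 0.344, 450))  # -> True
```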

Table 3: From 2009-2013, Player Seasons with a BABIP 15% Greater than Career Average, Fly Ball Rate 20% Greater than League Average (Minimum 400 PA)

| Year/Player | Career BABIP | BABIP Y1 | BABIP Y2 | AVG Y1 | AVG Y2 | BB% Y1 | BB% Y2 | ISO Y1 | ISO Y2 | wRC+ Y1 | wRC+ Y2 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2009 Mark Reynolds | .293 | .338 (’09) | .257 (’10) | .260 | .198 | 11.5% | 13.9% | .284 | .234 | 127 | 96 |
| 2010 Adam Dunn | .286 | .329 (’10) | .240 (’11) | .260 | .159 | 11.9% | 15.1% | .276 | .118 | 136 | 60 |
| 2010 Colby Rasmus | .298 | .354 (’10) | .267 (’11) | .276 | .225 | 11.8% | 9.5% | .222 | .166 | 130 | 90 |
| 2010 Nelson Cruz | .299 | .348 (’10) | .288 (’11) | .318 | .263 | 8.5% | 6.4% | .258 | .246 | 147 | 116 |
| 2010 Nick Swisher | .290 | .335 (’10) | .295 (’11) | .288 | .260 | 9.1% | 15.0% | .223 | .180 | 134 | 124 |
| 2013 Colby Rasmus | .298 | .356 (’13) | .294 (’14) | .276 | .225 | 8.1% | 7.7% | .225 | .223 | 129 | 102 |

That’s a motley crew. At first glance, one commonality emerges. Unsurprisingly, each hitter experienced significant BABIP regression the year after his jump. The BABIP gods hit some harder than others. Adam Dunn seems like an unfair comparison for what might happen to Iannetta — his remarkably terrible 2011 was fueled by more than BABIP regression. Similarly, Nick Swisher, Mark Reynolds, and 2011 Colby Rasmus each saw fairly significant erosion in their power numbers. Swisher retained a good portion of his productivity by dramatically increasing his BB%, but I don’t think that’s a fair expectation for Iannetta.

Perhaps the best example of what might happen to Iannetta is 2013-14 Colby Rasmus. In the midst of a BABIP regression, Rasmus has maintained his power numbers and plate discipline. Nonetheless, he’s currently producing runs at a rate 27% lower than last year. Those extra outs sure do add up.

Ultimately, if Iannetta can sustain his ISO and BB%, he should remain valuable for the Angels. Although Iannetta is on the wrong side of the aging curve, a mild BABIP regression with minor skill erosion would forecast a wRC+ somewhere in the neighborhood of 105-115. The Angels will certainly take that from their catcher.

Interestingly enough, the only hitter besides Iannetta to fit the parameters of a BABIP 15% greater than career average and fly ball rate 20% greater than league average this season is Devin Mesoraco. Mesoraco, however, is currently enjoying a well-documented swing renaissance, rendering his career BABIP rate generally unreliable for the purposes of this study. Going forward, Mesoraco is much more likely to sustain his present success than Iannetta.

The Straw Man of the Pitcher-for-MVP “Debate”

There has been much discussion lately regarding people who supposedly believe that pitchers are not deserving candidates for the MVP award.  What I don’t see are many people who actually come out and say pitchers don’t deserve the MVP award.  Perhaps, in my daily consumption of hours of baseball news, analysis, and commentary across various media, I am somehow missing a significant demographic that holds this belief, making it more prevalent than what I observe, but in reality very few seem to consider it such a black-and-white issue.

In fact, I would argue that the sabermetric community and the less-analytically-inclined community both agree that it is a gray area; they just approach it in different ways.

In Ken Rosenthal’s recent post on the topic, he points out that it is far from black-and-white; the last time we had a pitcher named MVP (Verlander in 2011), he was on 27 of 28 ballots.  So maybe there is one sportswriter in 28 or so who believes pitchers shouldn’t be MVPs.  Though we shouldn’t even assume said writer would never vote for a pitcher; maybe he just felt it wasn’t Verlander’s year.

In fact 2011 was an interesting year (especially for those WAR-lubbers), in that (non-MVP) Roy Halladay in the NL had a WAR of 8.1, which was ahead of NL MVP Ryan Braun’s 7.2 (though not ahead of non-MVP and non-cheater Matt Kemp’s 8.4!).  Over in the AL, Ellsbury’s WAR was 9.1 compared to Verlander’s 6.9.  In fact 10 AL hitters had a WAR of 6.3 or greater.

On the flip side, take Jeff Sullivan’s recent post:

Say the best position player comes in around 8. Say the best pitcher comes in around 8. Say, for simplicity, that all of the different WARs are even in agreement. Doesn’t that function as a conversation-ender? You can always debate a given individual’s WAR, but doesn’t that rather matter-of-factly put pitchers and position players on the same scale?

Overall I’m very much in the camp that pitchers deserve the MVP.  But we do need to acknowledge that WAR is based on an up-front division of the 1,000 WAR given out per season, with 43% going to pitchers and 57% going to hitters.  It’s not that these numbers are arbitrary; a great deal of thought has been put into how to value the relative contributions of various positions (WAR’s positional adjustments are in a similar vein), and this is an interesting problem across all team sports.

Nevertheless, it holds true that in any given year, the top WAR leaders tend to be position players.  When people make sweeping statements like “position players play every day, starters only play every 5 days,” I don’t think (many of) those people are unwilling to acknowledge that starters’ contributions on the day they pitch are far more impactful than position players’ contributions; they’re just saying that in general, they see more cases where the best position players are the most valuable to their teams than the best starting pitchers — which is exactly what the WAR leaderboards say as well.

Regarding the valuation of different positions in team sports: oftentimes, the nature of the game is such that certain positions are inherently more impactful; this ends up being a great example of why replacement level is an invaluable tool.  Consider the case of kickers in the NFL.  Suppose we modified the rules so that touchdowns didn’t immediately award 6 points; rather, the scoring team got the opportunity to kick an extra point worth 7 points.  Would this make kickers more valuable?  It certainly would make them more important, but I’m not convinced kickers’ salaries would change much.  The difference between the success rates of the best kicker in the league and the worst kicker in the league (or a replacement-level kicker) would be very small — they all make extra points about 99.7% of the time.  You’d still care more about having offensive players who can score those touchdowns (and defensive players who can prevent touchdowns).

Now, if the rules were different, and that “7-point-extra-point” actually had to be kicked from 58 yards deep, then there would suddenly be a huge difference between the success rates of the best kickers and the replacement-level kickers.  The kickers capable of hitting those 7-pointers at a high success rate would suddenly command enormous contracts and be kings of the league.

To me this is the essence of the Pitcher-for-MVP Debate: almost everyone agrees that, as a whole, pitchers are less valuable than hitters.  We give hitters more WAR and bigger contracts.  That doesn’t mean there are never years when the best pitcher is better than the best hitter, but almost everyone, sabermetrically inclined or not, seems to come to the conclusion that, in general, “position players have more impact.”

Yasiel Puig’s Struggles vs. Lefties

It’s well documented that Yasiel Puig has been having a rough second half of the season. FanGraphs’ own Jeff Sullivan covered Puig’s troubles in a great piece here, and other articles like this one, and this one, and this one, continue to pop up. Further, a recent dugout altercation with veteran Matt Kemp has only tightened the media scrutiny on baseball’s most volatile player. Jeff discussed Puig’s inability of late to do anything but roll over inside pitches, and his failure to lift fastballs as well. Let’s keep that information in the back of our minds for a second and look at Puig’s L/R splits for 2013 and 2014.

| Season | Handedness | G | AB | PA | H | 1B | 2B | 3B | HR | BB | SO | HBP | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2013 | vs L | 46 | 103 | 117 | 35 | 23 | 5 | 1 | 6 | 16 | 25 | 1 | 0.340 |
| 2013 | vs R | 100 | 279 | 315 | 87 | 57 | 16 | 1 | 13 | 23 | 72 | 10 | 0.312 |

| Season | Handedness | G | AB | PA | H | 1B | 2B | 3B | HR | BB | SO | HBP | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2014 | vs L | 64 | 121 | 146 | 30 | 24 | 3 | 1 | 2 | 20 | 20 | 4 | 0.248 |
| 2014 | vs R | 135 | 405 | 457 | 126 | 74 | 32 | 8 | 12 | 47 | 96 | 6 | 0.311 |

Notice the drastic drop in Puig’s performance against left-handed pitching. Now both samples are limited in terms of plate appearances, but I don’t think you can attribute this drop in performance entirely to luck. First, see the difference in how right-handed and left-handed pitchers have attacked Puig by location in 2014.

Left-handed pitchers have made a significantly more concerted effort to pitch Puig inside, the same area where Jeff acutely pointed out Puig has been struggling. However, this isn’t much different from the way left-handers pitched Puig a year ago. See the 2013 chart below:

What has changed, though, is Puig’s ability to hit left-handed change-ups, and off-speed pitches in general. In 2013, Puig swung and missed at a lot of change-ups (28% whiff rate), but when he did make contact he did damage (.539 SLG in the 26 ABs where he put a change-up in play). In 2014, though, Puig has cut down on the misses (20% whiff rate) but also lost his ability to impact the baseball against the pitch (no extra-base hits vs. LHP change-ups). A similar, though less exaggerated, trend can be found if you look at Puig vs. breaking pitches.

This isn’t a secret either. In last night’s contest, during his at bats against the Cubs lefty Tsuyoshi Wada, 4 of the 7 pitches Puig saw were change-ups. Wada did let one creep over the plate in his second at bat and Puig was able to hit a grounder through the left side.

But let’s go back to the examples in Jeff’s article. In the at bats where Puig is successful, he gets to the ball out front and is able to get extension through his swing. Yet in the examples where Puig is unsuccessful, he rolls over the ball, is late, hits the ball deeper in relation to his body, and cannot get the same extension. Granted, both of the examples are against righties, but they illustrate the greater point: Puig’s timing right now is off against fastballs (particularly fastballs in on his hands and up).

And the problem with being late against the fastball is that the rest of the game starts to speed up. To try to account for his deficiency, Puig has likely started to cheat (start his swing earlier), leaving him more vulnerable to off-speed pitches away. And if you’re a lefty with a good change-up, you have a serious advantage versus Puig right now.

The question you might be asking yourself is why can’t righties take advantage of the same flaw. Well, since August 1, they have to an extent, and against right-handed four-seam fastballs Puig is a mere 5 for 35.

However, against off-speed pitches it’s a different story. For his career Puig recognizes and hits breaking balls considerably better than change-ups. Against sliders and curveballs, he’s batted .327 and .298 respectively, compared to a lowly .219 against change-ups.

And given that Puig is right-handed he’s a lot less likely to see change-ups from right-handed pitchers. Per Max Marchi’s data, pitchers are more than twice as likely to throw change-ups to opposite-side hitters than same-side hitters. This holds true for Puig, who in 2014, has seen 16% of pitches from left-handers be change-ups, compared to only 7% of pitches from right-handers. So while the advantage is still there for righties, it’s less likely they’ll get to it, or can do so within the limits of their arsenal.

What’ll be interesting to see is whether a team will actually bring a lefty out of the bullpen to face Puig in the postseason. If it happens, one likely scenario would be Marco Gonzales of St. Louis (if he makes the playoff roster), whose profile suggests he could be Puig’s kryptonite. He throws over 30% change-ups against right-handers, and 51% of his fastballs to righties have been located inside.

Another poor match-up would be if the Dodgers face the Nationals and Gio Gonzalez is on the mound. Gonzalez has upped his change-up usage against right-handers to 23% in 2014, and has limited hitters to a .230 average against the pitch with a 23% whiff rate.

I also think it’s important to watch how Puig handles inside fastballs the remainder of the season. It’s conceivable the adrenaline of a playoff series could help him regain his timing against the pitch and get him back in sync. Like any hitter his swing is constantly adjusting, and it could start clicking for the Cuban slugger at any point in time. The Dodgers are hoping it clicks soon, or else they’ll be stuck searching elsewhere for offensive production when October rolls around.

Data courtesy of FanGraphs and Brooks Baseball

Featured Image courtesy of USA Today

Defining Balanced Lineups

We’re used to hearing about teams having balanced or deep lineups. Other teams are defined as “stars and scrubs”. While I think we all know what these terms mean, it’s not something that’s ever been quantified (at least, not to my knowledge). Since the issue of depth is an interesting one to me, I thought it’d be fun to tackle it using wOBA.

For each team, I calculated wOBA at the team level, then the weighted standard deviation across all its players. This produces each team’s distribution, but since the size of the standard deviation depends on the average (meaning it’s not standardized when comparing teams), I used the coefficient of variation (aka CV, simply standard deviation/average) as the final measure of consistency. The lower the CV, the smaller the spread of wOBA performance.
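The calculation described above can be sketched in a few lines of Python. This is my own minimal version, assuming plate appearances as the weights (the article doesn’t specify its weighting variable), and the player lines are hypothetical, not real data:

```python
# PA-weighted wOBA mean and standard deviation for a lineup, then
# CV = sd / mean, so teams with different average wOBAs are comparable.

def weighted_cv(wobas, pas):
    """Coefficient of variation of wOBA, weighting each player by PA."""
    total_pa = sum(pas)
    mean = sum(w * pa for w, pa in zip(wobas, pas)) / total_pa
    var = sum(pa * (w - mean) ** 2 for w, pa in zip(wobas, pas)) / total_pa
    return (var ** 0.5) / mean

# Hypothetical lineups: one tightly clustered, one stars-and-scrubs
balanced  = weighted_cv([0.330, 0.325, 0.320, 0.315], [600, 580, 560, 540])
top_heavy = weighted_cv([0.400, 0.360, 0.280, 0.260], [600, 580, 560, 540])
print(balanced < top_heavy)  # the balanced lineup shows the lower CV
```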