Does the Home Run Derby Affect Batted Ball Distribution?
by Rylan Edwards
August 19, 2015

Last week on RotoGraphs' The Sleeper and the Bust podcast, Eno and Paul briefly discussed the possibility that Todd Frazier's second half swoon in 2014, and again here in 2015, might have something to do with his participation in the Home Run Derby. While debunking the Derby Curse has been a popular topic of many data-driven pieces in recent years, research has largely focused on outcomes, such as changes in first and second half OPS and HR% for participants. Eno considered that the effects of the Derby might reveal themselves in other, more subtle manifestations like batted ball data. Looks like he was onto something.

Most of the research on the subject that I've read takes a binary approach to participation, comparing splits of those who participated to those of players who didn't. However, the Derby Curse's narrative is that dozens of max-effort and mostly pull-side swings ruin a player's second half approach at the plate. So why would Bret Boone's 2003 zero-homer first round exit lead to a 6% decrease in HR/FB rate in the second half? After all, his *cough* economical Derby performance required that he take only the minimum number of swings possible. Could it be that changes in batted ball distribution are correlated with Derby performance rather than mere participation?

To find out, I exported the first and second half batted ball data from the FanGraphs leaderboards for all Derby participants dating back to 2002, the earliest season for which batted ball data is available. I then added a column for home runs hit by each participant and regressed changes in batted ball rates for each BIP type against the number of home runs hit in each Derby performance. In doing so I found three relationships at or near statistical significance: ΔOppo% and ΔMed% are negatively correlated with Derby HR, and ΔHard% is positively correlated.
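For readers who want to replicate the approach, the regression can be sketched in a few lines of Python. This is a minimal, self-contained version using made-up numbers in place of the FanGraphs export; the real inputs would come from the leaderboard CSVs, and the variable names here are illustrative, not FanGraphs' actual column headers.

```python
# Sketch of the regression above: change in a batted ball rate
# (2nd-half minus 1st-half) regressed on Derby home runs hit.
# The data below are made up for illustration only.

def simple_ols(x, y):
    """Ordinary least squares with one predictor: returns slope, intercept, R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = (sxy * sxy) / (sxx * syy)  # square of the Pearson correlation
    return slope, intercept, r2

# Hypothetical participants: Derby HR totals and Oppo% in each half.
derby_hr    = [1, 5, 9, 13, 17, 20, 24]
oppo_first  = [26.0, 24.5, 25.8, 23.9, 25.1, 24.0, 26.2]
oppo_second = [26.1, 24.2, 24.9, 22.5, 23.0, 21.6, 23.4]

# Dependent variable: change in Oppo% from the first half to the second.
delta_oppo = [s - f for f, s in zip(oppo_first, oppo_second)]

slope, intercept, r2 = simple_ols(derby_hr, delta_oppo)
print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, R^2 = {r2:.4f}")
```

In practice you would also want a p-value for each slope (for example from scipy.stats.linregress), which is what the table reports alongside the coefficients.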
          Coeff     R²       p-value
ΔOppo%   -0.10531  0.06042   0.01149
ΔMed%    -0.11301  0.04041   0.03977
ΔHard%    0.10106  0.03567   0.05365

As one might expect from running only simple regressions, the R² values are low, indicating that other factors explain the majority of the variance. And I'm not sure that even a great Derby performance, with its repeated max-effort and mostly pull-side swings, has so significant an effect on rest-of-season batted ball data. That said, a participant who hit 20 HR could expect, on average, a 2% decrease in both Oppo% and Med% and a 2% increase in Hard%.

It's interesting that, if anything, the data suggest that a better Derby performance correlates with an increased Hard% in the second half, although the gain seems to come at the expense of Med%, not Soft%. Nevertheless, an increase in hard-hit balls runs contrary to the notion that success at the Derby leads to a second half swoon.

[Scatter plots: ΔMed% and ΔHard% vs. Derby HR]

And while there's no statistically significant increase in Pull%, it's worth noting that the opposite hit type, Oppo%, decreases for those who do well at the Derby. Is that because players think more about pulling the ball and favor the inside pitch post-Derby, or is there some temporary loss of skill in going the other way? Looking at how Pull/Center/Oppo distributions and heatmaps change in the weeks following the Derby might shed more light on that.

[Scatter plot: ΔOppo% vs. Derby HR]

So while we may not have proved or disproved the existence of a Derby Curse, we at least discovered that an exciting Derby performance is, if anything, more likely to precede an increase in the amount of hard contact a participant makes in the second half. Unfortunately for Bret Boone, this news may have come 12 years too late.
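As a footnote, the back-of-the-envelope expectations quoted above fall straight out of the regression coefficients; multiplying each by a 20-HR Derby performance reproduces the roughly 2% figures:

```python
# The fitted coefficients from the table: change per Derby HR, in percentage points.
coeffs = {"Oppo%": -0.10531, "Med%": -0.11301, "Hard%": 0.10106}

hr = 20  # a strong Derby performance
for stat, b in coeffs.items():
    print(f"Δ{stat} for a {hr}-HR Derby: {b * hr:+.1f} points")
# Δ Oppo%: -2.1, Δ Med%: -2.3, Δ Hard%: +2.0
```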