xHitting (Part 2): Improved Model, Now with 2013 Leaders/Laggards

Happy holidays, all.  It took me a while, but I finally have the second installment of xHitting ready.  First off, thank you to all those who read/commented on the first piece.  For those who didn’t get a chance to read it, the goal here is to devise luck-neutralized versions of popular hitter stats, like OPS or wOBA.  A main extension over existing xBABIP calculators is that this approach offers an empirical basis to recover slugging and ISO, by estimating each individual hit type.

I’ve returned today with an improved version of the model.  Highlights:

  • One more year of data (now 2010-2013)
  • Batted-ball direction now included (all player-seasons with at least 100 PA)
  • Fly-ball distance now recorded for all player-seasons with at least 100 PA

(There’s no theoretical reason for the 100 PA cutoff, only that I was grabbing some of the new data by hand and couldn’t justify the time to fetch literally every single player.)

I have also relaxed the uniformity of peripherals used for each outcome.  At least one reader asked for this, and after thinking about it a while, I decided I agree more than I disagree.  The main advantage of imposing uniformity was that it ensured the predicted rates (when an outs model was also included) summed to 100%.  But certain interactions or non-linearities are important for some outcomes and not others, and including them where they don’t belong comes at a cost to standard errors/precision and to intuitive interpretation.  To keep the rates summing to 100%, there is no longer an explicit ‘outs’ model; outs are simply taken to be the remainder.
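Since outs are now just the leftover share of plate appearances, the sum-to-100% property comes for free.  Below is a minimal sketch of that bookkeeping; the rate values and the treatment of walks/HBP alongside the hit models are illustrative assumptions, not the exact setup used here.

```python
# Minimal sketch: outs as the remainder of the predicted outcome rates.
# The specific rates and the walk/HBP handling are illustrative assumptions.
x1b, x2b, x3b, xhr = 0.155, 0.045, 0.004, 0.030  # predicted per-PA hit rates
xbb = 0.085                                       # walks/HBP, taken as given here

# No explicit outs model: outs are whatever share of PA is left over,
# so the predicted rates are guaranteed to sum to 100%.
xout = 1.0 - (x1b + x2b + x3b + xhr + xbb)
assert abs(x1b + x2b + x3b + xhr + xbb + xout - 1.0) < 1e-12
print(f"implied out rate: {xout:.3f}")
```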

For those curious, the regression results for each outcome and its respective peripherals are displayed below.  If these are not of direct interest, feel free to skip ahead.

(The sample includes all player-years with at least 100 plate appearances between the 2010 and 2013 MLB seasons.  Park factors denote the outcome-specific park factors available on FanGraphs.  Robust standard errors, clustered by player, are in parentheses; *** p<0.01, ** p<0.05, * p<0.1.)
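For readers who want to see the mechanics, here is a hedged sketch of this kind of outcome-rate regression with player-clustered robust standard errors, using Python/statsmodels.  The peripheral names are hypothetical and the data are synthetic so the example runs end to end; the real models use the peripherals shown in the tables.

```python
# Hedged sketch of one outcome-rate regression (singles) with robust standard
# errors clustered by player. Peripheral names are hypothetical; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "player_id": rng.integers(0, 150, n),      # repeated players -> clusters
    "ld_rate": rng.uniform(0.15, 0.30, n),     # line-drive rate (hypothetical)
    "gb_rate": rng.uniform(0.30, 0.55, n),     # ground-ball rate (hypothetical)
    "park_factor_1b": rng.normal(100, 3, n),   # singles park factor
})
df["single_rate"] = (0.05 + 0.30 * df["ld_rate"] + 0.10 * df["gb_rate"]
                     + 0.0002 * df["park_factor_1b"] + rng.normal(0, 0.01, n))

fit = smf.ols("single_rate ~ ld_rate + gb_rate + park_factor_1b", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["player_id"]}
)
print(fit.summary())
```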

The new variables seem to help, as each outcome is now modeled more accurately than before (by either R2 or RMSE).  For comparison, here are the R2’s of the original specification:

  • 0.367 for singles rate
  • 0.236 for doubles rate
  • 0.511 for triples rate
  • 0.631 for HR rate

Something else I noticed: for balls that stay “inside the fence,” both pull/opposite-field tendency and the actual side of the field matter.  Consider singles: the ball has to be thrown to first base (the right side of the infield) specifically, so an otherwise-equivalent ball hit to the left side is not the same as one hit to the right side; the defensive play is harder to make from the left side.  Similarly, hitting the ball to left field is less conducive to triples than hitting it to right field.

But hitting the ball to the left side as a lefty is not the same as hitting it there as a righty, since one group is “pulling” while the other group is “slapping.”  The direction x handedness interactions help account for this.
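As a toy illustration (with hypothetical column names), these interactions simply let the same directional share carry a different coefficient for lefties and righties:

```python
# Toy illustration of the direction x handedness interaction. A ball to the
# left side is pulled by a righty but hit the other way by a lefty, so the
# model gives the two cases separate terms. Column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "bats": ["L", "R", "L", "R"],
    "left_side_rate": [0.28, 0.45, 0.31, 0.40],  # share of balls hit to the left side
})
# Interaction columns: left-side share counted separately by handedness.
df["left_side_x_lhb"] = df["left_side_rate"] * (df["bats"] == "L")
df["left_side_x_rhb"] = df["left_side_rate"] * (df["bats"] == "R")
print(df)
# In a statsmodels/patsy formula, the equivalent term is "C(bats):left_side_rate".
```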

How well do the predicted rates do in forecasting?  For singles, doubles, and triples, the predicted rates do unambiguously better than realized rates in forecasting next season’s rates.  Things are a little less clear for home runs, which I will expand on below.

Although predicted HR rate shows a slight edge in Table 1, the pattern often reverses (for HR only) if you use a different sample restriction — say requiring 300 PA in the preceding season.  (For other outcomes, the qualitative pattern from Table 1 still holds even under alternative sample restrictions.)
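The comparison itself is straightforward.  Here is a hedged sketch of the idea using synthetic data, judging each candidate predictor by RMSE against next season’s realized rate (the tables here may use a different metric or sample):

```python
# Hedged sketch of the forecasting horse race: does this year's actual rate or
# this year's predicted (x) rate better forecast next year's rate? Synthetic
# data stand in for real paired player-seasons.
import numpy as np

rng = np.random.default_rng(1)
n = 500
talent = rng.uniform(0.10, 0.20, n)            # latent singles-rate "talent"
actual_t = talent + rng.normal(0, 0.02, n)     # noisy realized rate, year t
x_rate_t = talent + rng.normal(0, 0.01, n)     # model estimate, year t
actual_t1 = talent + rng.normal(0, 0.02, n)    # realized rate, year t+1

def rmse(pred, target):
    return float(np.sqrt(np.mean((target - pred) ** 2)))

print("forecast RMSE, using actual rate:   ", round(rmse(actual_t, actual_t1), 4))
print("forecast RMSE, using predicted rate:", round(rmse(x_rate_t, actual_t1), 4))
```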

So home runs appear to be a potential problem area.  What should we do when we need HR to compute xAVG/xSLG/xOPS/xWOBA, etc.?  Should we:

  1. Use predicted HR anyway?
  2. Use actual HR instead?
  3. Use some combo of actual and predicted HR?

Empirically there is a clear answer for which choice is best.  But before getting to that, let’s take a look at whether predicted home-run rate tells us anything at all in terms of regression.  That is, if you’ve been hitting HR’s above/below your “expected” rate, do you tend to regress toward the prediction?

The answer to this seems to be “yes,” evidenced by the negative coefficient on ‘lagged rate residual’ below.
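In other words, regress the year-over-year change in HR rate on last year’s residual (actual minus predicted); a negative coefficient means over- and underachievers drift back toward their expected rate.  A hedged sketch of that check with synthetic data:

```python
# Hedged sketch of the regression-toward-the-prediction check. Over/under-
# performance relative to the predicted HR rate in year t should predict
# movement back toward the expected rate in year t+1 (a negative coefficient).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
true_rate = rng.uniform(0.01, 0.06, n)          # latent HR-rate talent
hr_t = true_rate + rng.normal(0, 0.010, n)      # realized HR rate, year t
hr_t1 = true_rate + rng.normal(0, 0.010, n)     # realized HR rate, year t+1
xhr_t = true_rate + rng.normal(0, 0.003, n)     # predicted HR rate, year t

df = pd.DataFrame({
    "delta_hr_rate": hr_t1 - hr_t,
    "lagged_rate_residual": hr_t - xhr_t,
})
fit = smf.ols("delta_hr_rate ~ lagged_rate_residual", data=df).fit()
print(fit.params)  # the lagged_rate_residual coefficient comes out negative
```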

So, although realized HR rate is sometimes a better standalone forecaster of future home runs, predicted HR rate is still highly useful in predicting regression.  Making use of both, it seems intuitively best to use some combo of actual and predicted HR rate for forecasting.

This does, in fact, seem to be the best option empirically.  And this is true whether your end outcome of interest is AVG, OBP, SLG, ISO, OPS, or wOBA.

Observations:

  • (Option 1 = predicted HR only; Option 2 = actual HR only; Option 3 = combo)
  • Whether you use option 1, 2, or 3, xAVG and xOBP make better forecasters than actual past AVG or OBP
  • Option 1 does not do well for SLG, ISO, OPS, or wOBA
  • This was not the case in the previous article, but those earlier results relied on a somewhat quirky sample, since fly-ball distance was recorded for only a partial list of players
  • Option 2 “saves” things for xOPS and xWOBA, but still isn’t best for SLG or ISO
  • Option 3 makes the predicted version the better forecaster for every one of AVG, OBP, SLG, ISO, OPS, and wOBA
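As a concrete illustration of option 3, a simple way to operationalize the combo is a weighted blend of actual and predicted HR rate before building the composite stats.  The 50/50 weight below is purely illustrative; the exact mix is not pinned down here.

```python
# Illustrative sketch of "option 3": blend actual and predicted HR rate.
# The weight w is an assumption, not a value estimated in the article.
def blended_hr_rate(actual_hr_rate: float, predicted_hr_rate: float,
                    w: float = 0.5) -> float:
    """Weighted combination of realized and model-predicted HR rate."""
    return w * actual_hr_rate + (1.0 - w) * predicted_hr_rate

print(blended_hr_rate(0.040, 0.030))  # -> 0.035
```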

End takeaways:

  • The original premise, that you can use “expected hitting” estimated from peripherals to remove luck effects and better predict future performance, seems to hold, though you might need to make a slight HR adjustment.
  • The main reason I estimate each hit type individually is the flexibility it offers in subsequent computations.  Whether you want xAVG, xOPS, xWOBA, etc., you have the component pieces you need (see the sketch after this list).  That would not be true if I estimated only a single xWOBA while other users preferred xOPS or xISO.
  • A major extension over existing xBABIP methods is that this offers an empirical basis to recover xSLG.  The previous piece actually provides more commentary on this.
  • Natural next steps are to test partial-season performance, and also whether projection systems like ZiPS can make use of the estimated luck residuals to become more accurate.
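For instance, once the expected hit counts are in hand, xWOBA is just the usual linear-weights formula with the expected components swapped in.  A hedged sketch: the weights below are approximately the published 2013 linear weights, and the denominator omits the intentional-walk adjustment.

```python
# Hedged sketch: rebuild xWOBA from the predicted hit components. Weights are
# approximately the published 2013 linear weights; use the actual season
# constants in practice. The IBB adjustment in the denominator is omitted.
def x_woba(x1b, x2b, x3b, xhr, bb, hbp, ab, sf):
    numerator = (0.690 * bb + 0.722 * hbp + 0.888 * x1b
                 + 1.271 * x2b + 1.616 * x3b + 2.101 * xhr)
    return numerator / (ab + bb + sf + hbp)

# Example: expected hit counts alongside actual BB/HBP (all values illustrative).
print(round(x_woba(x1b=100, x2b=30, x3b=3, xhr=20, bb=50, hbp=5, ab=500, sf=5), 3))
```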

Finally, I promised to list the leading over- and underachievers for the 2013 season.  Ranked by the gap between actual wOBA and xWOBA, they are as follows:

Overachievers (250+ PA)

Name             2013 wOBA   2013 xWOBA   Difference
Jose Iglesias      0.327       0.259        0.068
Yasiel Puig        0.398       0.338        0.060
Colby Rasmus       0.365       0.315        0.050
Ryan Braun         0.370       0.321        0.049
Ryan Raburn        0.389       0.344        0.045
Mike Trout         0.423       0.379        0.044
Junior Lake        0.335       0.292        0.043
Matt Adams         0.365       0.323        0.042
Justin Maxwell     0.336       0.295        0.041
Chris Johnson      0.354       0.314        0.040

Underachievers (250+ PA)

Name                  2013 wOBA   2013 xWOBA   Difference
Kevin Frandsen          0.286       0.335       -0.049
Alcides Escobar         0.247       0.296       -0.049
Todd Helton             0.322       0.369       -0.047
Ryan Hanigan            0.252       0.296       -0.044
Darwin Barney           0.252       0.296       -0.044
Edwin Encarnacion       0.388       0.429       -0.041
Josh Rutledge           0.281       0.319       -0.038
Wilson Ramos            0.337       0.374       -0.037
Yuniesky Betancourt     0.257       0.294       -0.037
Brian Roberts           0.309       0.345       -0.036
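For anyone who wants to reproduce the ranking from their own spreadsheet of results, the leaderboard is just the wOBA-minus-xWOBA gap, filtered to 250+ PA and sorted.  A hypothetical sketch follows; the PA values are placeholders rather than the real 2013 totals, and the column layout is an assumption.

```python
# Hypothetical sketch of building the leaderboard from per-player results.
# wOBA/xWOBA values for the three names come from the table above; the PA
# column and the DataFrame layout are illustrative assumptions.
import pandas as pd

results = pd.DataFrame({
    "name": ["Jose Iglesias", "Edwin Encarnacion", "Mike Trout"],
    "pa": [400, 600, 700],                 # placeholder PA totals
    "woba": [0.327, 0.388, 0.423],
    "xwoba": [0.259, 0.429, 0.379],
})
results["diff"] = results["woba"] - results["xwoba"]
qualified = results[results["pa"] >= 250]
print(qualified.nlargest(2, "diff"))   # overachievers
print(qualified.nsmallest(2, "diff"))  # underachievers
```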

Comments/suggestions?





Sam is an Oakland A's fan and economist who received his Ph.D. from UC San Diego in 2017.

6 Comments
Jonah Pemstein
10 years ago

This is really interesting, and opens up a lot of possibilities for new research, but right now I don’t think it is at a point where we should be using it to measure things if the r^2 for most of the regressed statistics is under .5. It certainly has some credibility – the r^2s aren’t tiny, so the numbers do mean something, and Jose Iglesias is exactly who I would expect to see atop the overachievers leaderboard as well – but there is a lot of room for improvement. Great job though.

Brandon Reppert
10 years ago

Great work Sam, I remember your first post and this is exactly the kind of work we need to be doing to better understand hitter evaluation. We’re at a bit of a disadvantage without HITf/x data, but the stuff provided by spray charts/BIS is decent enough.

Again, excellent work in the right direction.

Ronald
10 years ago

It’s wild to me that EE has a higher xWOBA than Mike Trout’s actual wOBA.

Economists do it with models
10 years ago

Is there any way that we can see the entire list?