Offering a Solution to the fWAR League Adjustments
by AC_Butcha_AC, January 7, 2015

This article is a response to Noah's thought-provoking articles about a modification to the FIP-based pitching fWAR and his issues with the fWAR league adjustments. In it I want to lay out a possible solution to the somewhat "flawed" league adjustments currently in use. My method could be applied in a divisional context as well, so I won't address that specifically. I am not a native speaker, so please forgive any grammar or spelling mistakes.

Let's start with the basics of the current concept. 1,000 WAR has to be given out each year to all players, implying a replacement level of .294. Even if for some reason every player on all the current 25-man rosters happened to be abducted by aliens, this would not change. Even if both leagues consisted entirely of "replacement" players, 1,000 WAR would be handed out. This is our model, and it is a great one because it includes context so beautifully and effortlessly.

Here is a little thought experiment: say these aliens are huge fans of the NL for some reason and decide to abduct the entire league's player population. We would be left with the untouched AL (we assume the AL and NL are of exactly equal strength for this thought experiment). Again, 1,000 WAR has to be distributed among all big-league players. If our current model handled league adjustments correctly, we would expect to see 0 WAR in the NL and 1,000 WAR in the AL. Unfortunately, the current fWAR model wouldn't spit out a result anywhere close to this. Here is why: even though about 88% of all games are played within a given league, a great portion of the fWAR calculation treats MLB as being ONE league instead of two rather independent leagues. The consequences show up strongly in my thought experiment.
Because every player in the NL would be a replacement player, we could hardly find a hint of the changed talent level in the NL's stats. Replacement-level hitters would be facing replacement-level pitching, and my guess is that the NL's overall batting line and R/G would barely change, even though the talent changed dramatically. Now, wOBA is calculated using both leagues, so the offensive output of these replacement hitters would be weighted as if they had put up their numbers against actual major-league competition. Thus, the NL would be undeservedly credited with batting runs, and with run prevention for its pitchers (again, versus replacement hitters). This scenario is certainly an exaggeration, but the same effect occurs whenever one league is weaker.

The only place the changed talent level would show up is the interleague record against the AL. In a perfectly balanced world with two equally strong and talented leagues, we would see a .500 record, and our 1,000 WAR could be handed out 50/50 between the AL and NL and 57/43 between position players and pitchers. What would the interleague record be in our thought experiment? What would it have to be? The answer is pretty easy: .294, aka replacement level. Now this is interesting, and it seems like we are getting somewhere. Here lies the key to proper league adjustments, because how much WAR should be handed out to a league that wins at a replacement-level rate against a "true" major league? That sounds pretty darn like a league full of replacement players, which are by definition worth 0 WAR. And 0 WAR is exactly the correct answer based on our assumptions in this thought experiment. How do we get there?

1) Calculate every aspect that goes into WAR (R/PA, wOBA, FIP, etc.) separately for both leagues. In effect, we have to treat both leagues as independent. This would mean 500 WAR for each league by default, distributed 57/43 between position players and pitchers.

2) Figure out the interleague record.
I would suggest using something like a three-year regressed rolling average (just like the five-year regressed rolling park factors on FanGraphs, which can actually change a player's WAR retroactively if his home park happens to play very hitter- or pitcher-friendly in the immediate future). I will use a .525 record in favor of the AL in the examples below.

3) Based on the "true" replacement levels of .294 for teams, .380 for starters, and .470 for relievers, calculate an "artificial replacement level" for the weaker and the stronger league via the odds ratio. Using the .525 interleague record for the AL as an example, this comes out to an artificial replacement level of:

.315 for NL teams / .274 for AL teams
.404 for NL starting pitchers / .357 for AL starting pitchers
.495 for NL relievers / .445 for AL relievers

To help interpret these numbers, think about it this way: the .475 NL is the weaker league. A "replacement team" would have a .294 record in the NL (forget about interleague for a moment). If this team played a .294 AL team, we would expect a .500 winning percentage IF both leagues were equally strong. But we already established that the AL wins at a .525 clip when two teams with "equal" records in their respective leagues match up. The .315 "artificial" replacement level for the NL means that we expect a .315 NL team to win 50% of its games against a .294 AL team. Thus, the replacement-level bar to clear should be set a little higher in the NL, because it is easier to accumulate value in the weaker league. The opposite is true for the AL, where the replacement-level bar should be set a little lower, both for the mirror-image reason and to stay consistent with handing out 1,000 WAR each year.

4) Derive the correct distribution of WAR for both leagues based on the artificial replacement levels.
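The odds-ratio step above can be sketched in a few lines of Python. This is my reading of the method as described (the function names are mine): convert each winning percentage to odds, multiply by the interleague odds for the weaker league (raising the bar) or divide for the stronger league (lowering it), and convert back to a winning percentage.

```python
def to_odds(p):
    """Convert a winning percentage to odds."""
    return p / (1 - p)

def to_pct(odds):
    """Convert odds back to a winning percentage."""
    return odds / (1 + odds)

def artificial_replacement(true_level, interleague_pct):
    """Shift a 'true' replacement level by the stronger league's
    interleague winning percentage via the odds ratio.
    Returns (weaker_league_level, stronger_league_level)."""
    strength = to_odds(interleague_pct)                # .525 -> ~1.105
    weaker = to_pct(to_odds(true_level) * strength)    # bar raised
    stronger = to_pct(to_odds(true_level) / strength)  # bar lowered
    return round(weaker, 3), round(stronger, 3)

# Reproduce the figures from the text (.525 AL interleague record):
print(artificial_replacement(0.294, 0.525))  # teams:     (0.315, 0.274)
print(artificial_replacement(0.380, 0.525))  # starters:  (0.404, 0.357)
print(artificial_replacement(0.470, 0.525))  # relievers: (0.495, 0.445)
```

Note that the three pairs printed at the bottom match the .315/.274, .404/.357 and .495/.445 figures given above, which is how I convinced myself this is the intended odds-ratio construction.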
In my thought experiment at the beginning, we would end up with a 0/1,000 WAR distribution, because replacement level would actually be .500 for the NL using the methodology from 3). A balanced MLB would have a 500/500 WAR distribution with a replacement level of .294 for both leagues. With the AL winning at a .525 clip against the NL, this yields a WAR distribution close to 450/550 in favor of the AL. For comparison, the actual 2014 WAR distribution on FanGraphs was 472/528 in favor of the AL.

Conclusion

There are some beautiful and elegant side effects. The independence of both leagues' calculations means interleague adjustments are not necessary at all: even though about 12% of games are interleague, pitchers and hitters are only compared to the stats other players in the same league have put up, interleague games included. The adjustment takes place when we evaluate the interleague record, because that is the only direct way to measure a difference in strength/talent. The current league adjustments are a bit flawed, in my opinion, because wOBA and the run environment are calculated for all of MLB, interleague records are not taken into consideration at all, and therefore a fixed replacement level is used for all years. My methodology addresses these problems and scales an artificial replacement level for each year and league based on a multi-year regressed interleague record, while still keeping the overall replacement level for all of MLB at .294 and handing out 1,000 WAR each year.

To be honest, I am not a huge fan of divisional adjustments because of small samples and differing opponents. An entire season's interleague schedule should carry a lot more signal. When applying divisional adjustments we would have to regress heavily, and I am not entirely sold on including a possibly very complicated divisional adjustment when its heavy regression doesn't leave us much to learn from anyway. But I am open to being sold the other way.
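To make the step-4 claims concrete: the same odds-ratio construction maps a .294 NL interleague record onto an artificial NL replacement level of exactly .500, which is the thought-experiment endpoint (0 WAR for the NL). The sketch below also includes one possible way to derive the WAR split, a straight linear interpolation between the two anchor points given in the text (.500 record gives 500/500, .294 record gives 0/1,000). This interpolation is my own assumption, not necessarily the author's exact step 4; with a .475 NL record it gives about 439/561, in the same ballpark as the ~450/550 figure above.

```python
def to_odds(p):
    return p / (1 - p)

def to_pct(o):
    return o / (1 + o)

def nl_artificial_level(interleague_al_pct, true_level=0.294):
    """Artificial replacement level for the weaker (NL) side."""
    return to_pct(to_odds(true_level) * to_odds(interleague_al_pct))

# Thought-experiment check: if the AL wins interleague at a .706 clip
# (i.e. the NL wins at the .294 replacement level), the NL's artificial
# replacement level comes out to .500, so the NL earns 0 WAR.
print(round(nl_artificial_level(0.706), 6))  # 0.5

def war_split(nl_interleague_pct, total=1000, balanced=0.500,
              replacement=0.294):
    """Hypothetical linear interpolation between the article's two
    anchors: a .500 record -> 500/500 and a .294 record -> 0/1000.
    Returns (nl_war, al_war)."""
    nl = total / 2 * (nl_interleague_pct - replacement) / (balanced - replacement)
    return round(nl), round(total - nl)

print(war_split(0.475))  # (439, 561), near the ~450/550 quoted above
print(war_split(0.294))  # (0, 1000), the thought-experiment endpoint
```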
Look out for a follow-up in which I walk through some real-life examples and present some of the changes my methodology brings. Feel free to comment and discuss! Prost!