Much criticism has been leveled at baseball managers for their inability to see past the archetypal dominant closer who pitches only in save situations. Writers in the statistical community have observed and critiqued the many flaws that come with the save statistic, and how it's perceived by fans, managers, and baseball decision-makers, since at least 2008. Accumulating saves is a function of opportunity and degree of difficulty, and is certainly not the best way to get at a relief pitcher's ability to get outs. More objective methods such as ERA and its estimators, like Fielding Independent Pitching (FIP) and Skill-Interactive Earned Run Average (SIERA), are better ways to evaluate a pitcher's talent, and Win Probability Added (WPA) is better for measuring a pitcher's importance to winning specific games. This criticism has clearly been heard in the intervening years by the people running ball clubs, which can be shown by the number of pitchers recording saves on each team and the variance of save totals for a given team.
A team with high variance in its save totals has one player who accumulates a lot of saves and some number of others who have very few, whereas lower variance represents a more even distribution of saves among pitchers. This variance metric is heavily negatively correlated (-0.74) with the number of pitchers a team has record a save in a given season. In other words, the more pitchers recording a save on a team, the more likely the distribution is to be equitable and the weaker the insistence on using your best pitcher only in a save situation. Based on this analysis, somewhere between 2008 and 2011 was the peak of the capital "C" Closer in the majors. A rather precipitous drop occurred in 2016 and has continued on a downward trajectory, to the point where last year saw the most equitable distribution of saves among teams since 1987, excluding the lockout-shortened 1994 campaign.
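The variance-and-correlation approach described above can be sketched as follows. This is a minimal illustration, not the author's actual pipeline: the team rosters and save totals are hypothetical, and the functions assume each team is represented as a list of save totals for pitchers who recorded at least one save.

```python
from statistics import pvariance

def save_spread(team_saves):
    """Return (variance of save totals, number of pitchers with a save)
    for one team. team_saves is a list of per-pitcher save totals."""
    with_save = [s for s in team_saves if s > 0]
    return pvariance(with_save), len(with_save)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical teams: a capital-"C" Closer team concentrates saves in
# one arm; a committee team spreads them around.
closer_team = [42, 3, 2, 1]
committee_team = [12, 10, 9, 8, 7, 5]

for team in (closer_team, committee_team):
    var, n = save_spread(team)
    print(f"{n} pitchers with a save, variance {var:.1f}")
```

Run over every team-season, the per-team variances and pitcher counts form the two samples whose correlation the article reports as -0.74; with the toy teams above, the Closer team shows far higher variance despite fewer pitchers recording a save, which is exactly the pattern that correlation captures.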