
Home Runs and Temperature: Can We Test a Simple Physical Relationship With Historical Data?

Unlike most home-run-related articles written this year, this one has nothing to do with the recent home run surge, juiced balls, or the fly-ball revolution. Instead, this one’s about the influence of temperature on home-run rates.

Now, if you’re thinking here comes another readily disproven theory about home runs and global warming (à la Tim McCarver in 2012), don’t worry – that’s not where I’m going with this. Alan Nathan nicely settled that issue in his 2012 Baseball Prospectus piece, demonstrating that temperature comes nowhere near accounting for the large changes in home-run rates throughout MLB history.

In this article, I want to revisit Nathan’s conclusion because it presents a potentially testable hypothesis given a large enough data set. If you haven’t read his article or thought about the relationship between temperature and home runs, it comes down to simple physics. Warmer air is less dense. The drag force on a moving baseball is proportional to air density. Therefore (all else being equal), a well-hit ball headed for the stands will experience less drag in warmer air and thus have a greater chance of clearing the fence. Nathan took HitTracker and HITf/x data for all 2009 and 2010 home runs and, using a model, estimated how far they would have gone if the air temperature were 72.7°F rather than the actual game-time temperature. From the difference between estimated 72.7°F distances and actual distances, Nathan found a linear relationship between game-time temperature and distance. (No surprise, given that there’s a linear dependence of drag on air density and a linear dependence of air density on temperature.) Based on his model, he suggests that a warming of 1°F leads to a 0.6% increase in home runs.
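(If you want to see the density piece of that argument in numbers, here’s a minimal back-of-the-envelope sketch using the ideal gas law for dry air. This isn’t Nathan’s model – just an illustration that warming the air by 1°F near room temperature thins it by roughly 0.2%, which is the quantity the drag force scales with.)

```python
# Approximate dry-air density from the ideal gas law: rho = P / (R_dry * T).
def air_density(temp_f, pressure_pa=101325.0):
    """Dry-air density (kg/m^3) at a given temperature in degrees Fahrenheit."""
    temp_k = (temp_f - 32.0) * 5.0 / 9.0 + 273.15
    r_dry_air = 287.05  # specific gas constant for dry air, J/(kg*K)
    return pressure_pa / (r_dry_air * temp_k)

rho_base = air_density(72.7)
rho_warm = air_density(73.7)
print(f"density change per degree F: {(rho_warm - rho_base) / rho_base:.3%}")
# ~ -0.19%: a 1 degree F warming thins the air by about 0.2%, and drag is
# proportional to density, so a well-hit fly ball feels about 0.2% less drag.
```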

This should in principle be a testable hypothesis based on historical data: that the sensitivity of home runs per game to game-time temperature is roughly 0.6% per °F. The issue, of course, is that the temperature dependence of home-run rates is a tiny signal drowned out by much bigger controls on home-run production [e.g. changes in batting approach, pitching approach, PED usage, juiced balls (maybe?), field dimensions, park elevation, etc.]. To try to actually find this hypothesized temperature sensitivity we’ll need to (1) look at a massive number of realizations (i.e. we need a really long record), and (2) control for as many of these variables as possible. With that in mind, here’s the best approach I could come up with.

I used data (from Retrosheet) to find game-time temperature and home runs per game for every game played from 1952 to 2016. I excluded games for which game-time temperature was unavailable (not a big issue after 1995 but there are some big gaps before) and games played in domed stadiums where the temperature was constant (e.g. every game played at the Astrodome was listed as 72°F). I was left with 72,594 games, which I hoped was a big enough sample size. I then performed two exercises with the data, one qualitatively and one quantitatively informative. Let’s start with the qualitative one.
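(For anyone following along at home, the filtering step looks something like the sketch below. The file and column names – temp_f, park_id, dome, season, total_hr – are placeholders for however you’ve organized your Retrosheet game logs, not Retrosheet’s actual field names.)

```python
import pandas as pd

# Hypothetical game-level table built from Retrosheet game logs, 1952-2016
# (placeholder file and column names).
games = pd.read_csv("retrosheet_games_1952_2016.csv")

# Drop games with no recorded game-time temperature, plus games in domes where
# the listed temperature is constant (e.g. every Astrodome game at 72 F).
games = games[games["temp_f"].notna() & ~games["dome"]]
print(len(games))  # for the data set described above, this left 72,594 games
```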

In this exercise, I crudely controlled for park effects by converting the whole data set from raw game-time temperatures (T) and home runs per game (HR) to what I’ll call T* and HR*: differences from the long-term median T and HR values at each ballpark over the whole record. Formally, for any game, T* and HR* are defined such that T* = T − Tmed,park and HR* = HR − HRmed,park, where Tmed,park and HRmed,park are the median temperature and HR/game, respectively, at a given ballpark over the whole data set. A positive value of HR* for a given game means that more home runs were hit than in a typical game at that ballpark. A positive value of T* means that the game was warmer than is typical at that ballpark. Next, I defined “warm” games as those for which T* > 0 and “cold” games as those for which T* < 0. I then generated three probability distributions of HR* for: 1) all games, 2) warm games, and 3) cold games. Here’s what those look like:

The tiny shifts of the warm-game distribution toward more home runs and of the cold-game distribution toward fewer home runs suggest that the influence of temperature on home runs is indeed detectable. It’s encouraging, but only useful in a qualitative sense. That is, we can’t test for Nathan’s 0.6% HR increase per °F based on this exercise. So, I tried a second, more quantitative approach.
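For those following along in code, the park-by-park anomaly calculation and warm/cold split behind that comparison go roughly like this (continuing the hypothetical games table from above):

```python
# T* and HR*: per-game anomalies relative to each park's long-term medians.
park_medians = games.groupby("park_id")[["temp_f", "total_hr"]].transform("median")
games["t_star"] = games["temp_f"] - park_medians["temp_f"]
games["hr_star"] = games["total_hr"] - park_medians["total_hr"]

warm_hr_star = games.loc[games["t_star"] > 0, "hr_star"]  # warmer than typical
cold_hr_star = games.loc[games["t_star"] < 0, "hr_star"]  # colder than typical
# Histograms of hr_star for all games, warm games, and cold games reproduce
# the three distributions compared above.
```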

The idea behind this second exercise was to look at the sensitivity of home runs per game to game-time temperature over a single season at a single ballpark, then repeat this for every season (since 1952) at every ballpark and average all the regression coefficients (sensitivities). My thinking was that by only looking at one season at a time, significant changes in the game were unlikely to unfold (i.e. it’s possible but doubtful that there could be a sudden mid-season shift in PED usage, hitting approach, etc.) but changes in temperature would be large (from cold April night games to warm July and August matinees). In other words, this seemed like the best way to isolate the signal of interest (temperature) from all other major variables affecting home run production.

Let’s call a single season of games at a single ballpark a “ballpark-season.” I included only ballpark-seasons for which there were at least 30 games with both temperature and home run data, leading to a total of 930 ballpark-seasons. Here’s what the regression coefficients for these ballpark-seasons look like, with units of % change in HR (per game) per °F:

A few things are worth noting right away. First, there’s quite a bit of scatter, but 75.1% of these 930 values are positive, meaning that in roughly three-quarters of ballpark-seasons, higher home-run rates were associated with warmer game-time temperatures, as expected. Second, unlike a time series of HR/game over the past 65 years, there’s no trend in these regression coefficients over time. That’s reasonably good evidence that we’ve controlled for major changes in the game at least to some extent, since the (linear) temperature dependence of home-run production should not have changed over time even though temperature itself has gradually increased (in the U.S.) by 1-2 °F since the early ’50s. (Third, and not particularly important here, I’m not sure why so few game-time temperatures were recorded in the mid-’80s Retrosheet data.)
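For reference, the per-ballpark-season regression loop might look something like the sketch below. Converting each slope to a percentage by dividing by that ballpark-season’s mean HR/game is my reading of the method described above, so treat the details as illustrative.

```python
import numpy as np

sensitivities = []  # % change in HR/game per degree F, one per ballpark-season
for (park, season), grp in games.groupby(["park_id", "season"]):
    if len(grp) < 30:  # require at least 30 games with temperature and HR data
        continue
    slope, intercept = np.polyfit(grp["temp_f"], grp["total_hr"], 1)
    sensitivities.append(100.0 * slope / grp["total_hr"].mean())

sensitivities = np.array(sensitivities)
print(len(sensitivities),           # number of qualifying ballpark-seasons
      sensitivities.mean(),         # mean sensitivity, % per degree F
      (sensitivities > 0).mean())   # fraction of positive coefficients
```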

Now, with these 930 realizations, we can calculate the mean sensitivity of HR/game to temperature, which comes out to 0.76% per °F. [Note that the scatter is large and the distribution doesn’t look very Gaussian (see below) – it’s more sharply peaked, almost Dirac-delta-like (1 standard deviation ≈ 1.66%, but the middle 33% of values cluster within ~0.4% of the mean).]

Nonetheless, the mean value is remarkably similar to Alan Nathan’s 0.6% per °F.

Although the data are pretty noisy, the fact that the mean is consistent with Nathan’s physical model-based result is somewhat satisfying. Now, just for fun, let’s crudely estimate how much of the league-wide trend in home runs can be explained by temperature. We’ll assume that the temperature change across all MLB ballparks uniformly follows the mean U.S. temperature change from 1952-2016 (using NOAA data). In the top panel below, I’ve plotted total MLB-wide home runs per season, normalized to a complete season (30 teams, 162 games) by upscaling totals from 154-game seasons (before 1961 in the AL, 1962 in the NL), strike-shortened seasons, and years with fewer than 30 teams accordingly. In blue is the expected MLB-wide HR total if temperature were the only influence on home runs, assuming the true sensitivity to be 0.6% per °F. No surprise, the temperature effect pales in comparison to everything else. Shown in the bottom plot is the estimated difference, due to temperature alone, in MLB-wide season home run totals relative to the 1952 value of 3,079 (again, after scaling to account for differences in the number of games and teams). You can think of this plot as telling you how many of the total home runs hit in a season wouldn’t have made it over the fence if air temperatures had remained constant at 1952 levels.
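The arithmetic behind those two panels is simple. Here’s a sketch, assuming a hypothetical year-indexed season table with columns for raw HR totals, team and game counts, and the NOAA mean U.S. temperature:

```python
import pandas as pd

# Hypothetical year-indexed season summary (placeholder file and column names).
seasons = pd.read_csv("season_totals_with_noaa_temp.csv", index_col="year")

# Scale each season's raw HR total to a 30-team, 162-game schedule.
seasons["hr_scaled"] = (
    seasons["total_hr"] * (30 * 162) / (seasons["n_teams"] * seasons["games_per_team"])
)

hr_1952 = 3079  # scaled 1952 MLB-wide total cited above
delta_t = seasons["us_mean_temp_f"] - seasons.loc[1952, "us_mean_temp_f"]
seasons["hr_temp_only"] = hr_1952 * (1 + 0.006 * delta_t)       # blue curve, top panel
seasons["hr_from_warming"] = seasons["hr_temp_only"] - hr_1952  # bottom panel
```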

While these anomalies comprise a tiny fraction of the thousands of home runs hit per year, one could make the case (with considerable uncertainty admitted) that as many as 59 of these extra temperature-driven home runs were hit in 2016 (or about two per team!).


Not All One-Run Games are Created Equal

It’s the bottom of the fourth. No outs. Your beloved Milwaukee Brewers are up to bat, trailing the Dodgers 1-0 with Clayton Kershaw on the mound. They’ve picked up two scattered hits and drawn a walk over four innings, but the sentiment in the dugout and the stands seems to read: if they haven’t scored yet, chances don’t look so good.

Consider the same situation, now, with one small change. Your Brewers are still down by a run. It’s still the bottom of the fourth. Kershaw is still dealing. But it’s 2-1 Los Angeles this time. Milwaukee has still only gotten two hits and drawn a single walk, but the timing has worked out such that a run scored. By the numbers, things are almost exactly the same. No question about it. The sentiment, though, is certainly different. We’ve broken through once already, think the players, manager, and fans. We can do it again. Well, of course the Brewers can do it again. But, statistically speaking, will they? That is: when trailing by one run as they enter a half-inning, is a team more likely to come back in a non-shutout than in a game in which they haven’t yet scored?

The answer is “yes,” although only by what initially appears to be a small margin. In 2013, 5705 half-innings began with the batting team trailing by a run. 11.4% (651) of those half-innings ended with the batting team tied or in the lead. The same year, 2915 half-innings began with the batting team trailing specifically by the score of 1 to 0. 11.1% (324) of those ended in a lead change or tie.
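For concreteness, those percentages can be computed from a half-inning-level table derived from Retrosheet play-by-play data, along the lines of the sketch below (the file and column names are placeholders):

```python
import pandas as pd

# Hypothetical half-inning table: each row has the score when the half-inning
# began and the batting team's score when it ended (placeholder column names).
half = pd.read_csv("half_innings_2013.csv")

# Batting team starts the half-inning down by exactly one run...
down_one = half[half["fielding_score"] - half["batting_score"] == 1]
# ...and the subset where the score is specifically 1-0.
down_1_0 = down_one[down_one["batting_score"] == 0]

def pct_tie_or_lead(df):
    # Fraction of half-innings ending with the batting team tied or ahead
    # (the fielding team's score can't change while it's in the field).
    return (df["end_batting_score"] >= df["fielding_score"]).mean()

print(pct_tie_or_lead(down_one), pct_tie_or_lead(down_1_0))
```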

At first glance, a 0.3% difference between the odds of scoring when down by a run and the odds in the specific case of being down 1-0 seems minor. And it is, really. For years with complete-season data available since 1871, the percentage of half-innings that began with the batting team trailing by one run and ended with a lead change or tie (let’s call this %ORLC) averages out to 11.5% ± 1.3% (1 σ). The subset of these in which the batting team was being shut out (let’s call this %ORSLC) has an average of 10.6% ± 1.1% (1 σ). Middle-school statistics will tell you that while, yes, %ORSLC is on average nearly a full percentage point lower than %ORLC, the two fall within a standard deviation of each other and, thus, their difference is not statistically significant.

That’s true. But baseball isn’t middle-school statistics, and two subsets whose error ranges overlap are not necessarily equal for all practical purposes. Quite remarkably, %ORLC has exceeded %ORSLC in every season of Major League Baseball since 1977 (the last year in which %ORSLC was higher, by 0.2%), and in every year since 1871 except for five seasons (out of the 111 years of complete-season data that were available).

That is: in 106 out of the last 111 seasons for which box scores have been logged every game, a batting team behind in a one-run ballgame has successfully erased the deficit more often when not trailing 1-0. The margin isn’t huge, of course, but the trend is meaningful.

Above: Percentage of one-run game situations and specific 1-0 game situations (%ORLC and %ORSLC, respectively) in which the team losing scores to tie or take the lead

After all, baseball is a game of small but meaningful margins. The 111-year average relative difference between these two metrics (10.6% vs 11.5%) is proportional to a .277 batting average versus .300, or 89 wins in a 162-game season instead of 97. The latter is perhaps a more relevant comparison, since it is gaining (and maintaining) a lead that is crucial to winning games.

Among teams in 2013, however, these differences aren’t so marginal. In %ORLC (percentage of half-innings in which a team trailing by a run ties it up or takes the lead) the Royals finished first at 16.7% and the Cubs finished last at 6.5%. In %ORSLC (same stat but for the score 1-0), the Rays finished first at 16.7% (same number, coincidentally) and the Red Sox finished last at 4.9%. Considering the Royals didn’t make the playoffs in 2013 and the Red Sox won the World Series, I wouldn’t use %ORLC and %ORSLC as indicators of a team’s ultimate success unless you’re looking to lose a lot of money in Vegas.

While one could theorize for hours on the meaning and utility of each made-up statistic, it sure doesn’t seem like %ORLC and %ORSLC are indicative of much on a team-by-team basis. But that doesn’t mean they’re useless. Let’s go back to the long-term trend of %ORLC and %ORSLC, where the former was higher than the latter 106 out of 111 times.

Some underlying process, it would seem, must be responsible for this impressive stat. If we are to believe that teams truly underperform, ever so slightly, when they’re losing 1-0 due only to the fact that they’re being shut out, shouldn’t we be able to see the effect of psychology on performance somewhere else?

As it turns out, you don’t have to look far. Let’s consider the general situation of a team coming up to bat down by a run (not only the specifically 1-0 case), which is colloquially termed a “one-run game.” We’ll abbreviate any instance of this (a trailing team coming to bat in any half-inning) as OR. Now, this situation could happen at any point in a game. A visiting team leads off with a run in the top of the 1st and the home team comes up to bat – that’s an OR. It’s all tied up in the top of the 13th, the third baseman slugs a solo shot to left, three outs are recorded, and the home team steps up to the plate with one chance to stay alive – that’s an OR. So, in what inning, on average, does an OR occur?

In 2013, the answer was the 4.95th inning. In 2012 and also for the last 111 years of available records, the 4.91st inning. Baseball amazes us once again with its year-to-year consistency in obscure statistics. But this obscure stat isn’t all that meaningful on its own. Okay, so most one-run situations occur near the 5th inning – so what?

Well, let’s take a look now at the average inning in which a team scored in an OR to tie or take the lead. We’ll call this a one-run game situation where the lead changes, or ORLC. In 2013, the average inning in which ORLCs occurred was the 5.18th. In 2012, the 5.10th. And for the same 111 seasons of recorded game data, the 5.20th. Once again, we see a marginal but nonetheless compelling deviation from the average, just as we saw with %ORSLC. Teams score in one-run situations about a third of an inning later than one-run situations tend to occur in the first place. That may not seem like a whole lot, but consider that in our 111-season dataset only two years – 1902 and 1912 – saw earlier ORLCs than ORs on average. Just two years in one hundred eleven.

Above: Average innings of occurrence for one-run game situations (OR) and one-run game situations in which the trailing team scores to tie or take the lead (ORLC)
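Continuing the half-inning sketch from earlier, those two averages are just the mean inning numbers of the OR subset and of the subset in which the trailing team came back (the inning column is a placeholder name as well):

```python
# Average inning of all one-run situations (OR) vs. the subset in which the
# trailing team tied it up or took the lead (ORLC).
orlc = down_one[down_one["end_batting_score"] >= down_one["fielding_score"]]
print(down_one["inning"].mean(),  # average OR inning, ~4.9 in the data above
      orlc["inning"].mean())      # average ORLC inning, ~5.2 in the data above
```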

So what’s going on? I like to think of average ORLC minus average OR as a league-wide statistic for urgency. Consider the following: if the inning number had no effect on the performance of a trailing team in a one-run situation, then we would see roughly the same average inning of occurrence for both OR and ORLC. Out of 111 years, we’d expect to see about 55 years in which OR occurred earlier on average than ORLC and around 55 in which it didn’t. But we don’t see this at all, which strongly suggests that inning number has an effect on how a team does at the plate when down by a run. This is the urgency statistic. It describes a trend that has rung true for the past 101 consecutive seasons of Major League Baseball – when time is running out and the 9th inning is rapidly approaching, teams in close games get their acts together and produce runs. Not every time, of course, but we’re speaking in averages of massive sample sizes here.
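To put a rough number on how surprising 109 out of 111 is under that coin-flip assumption, here’s a quick binomial check (which treats seasons as independent – a simplification, but it makes the point):

```python
from scipy.stats import binomtest

# Probability of at least 109 "ORLC later than OR" seasons out of 111 if each
# season were really a 50/50 coin flip: roughly 2e-30.
print(binomtest(109, n=111, p=0.5, alternative="greater").pvalue)
```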

So, while your Brewers are likely to fare worse trailing Kershaw and the Dodgers 1-0 than 2-1, take solace in the fact that it’s the fourth inning. Statistically speaking, they’ll have a better chance breaking through as the game goes on and their need for a run becomes more urgent. The effect of team psychology has left its imprint on the records of baseball games since the sport’s earliest days.