
Adding to the K-vs.-Clutch Dilemma

A few researchers have recently been doing fascinating work on the relationship between strikeouts and clutch and high-leverage performance. Good work has been done, and there has even been good discussion in the comment sections of the respective articles. But before starting any conversation about clutch performance, a few things need to be settled first.

What is clutch?

The stat called ‘Clutch’ has aptly been called into question recently; the main issue is whether it measures what it is intended to measure. Clutch captures a player’s performance in high-leverage situations relative to his performance in all other situations. If someone is notably poor in important PAs compared to his performance in lower-leverage spots, Clutch will let us know. However, if someone is a .310 hitter in all situations, that hitter is very good, but Clutch is not going to tell us much.

I think the topic has been popularized partly because of Aaron Judge, who had a notoriously low ‘Clutch’ number last season. Many have blamed his proneness to striking out, which could very well be a factor in his relative situational performance gap. However, Judge helped his team win last year despite his record-setting strikeout pace. Still, Judge wasn’t even in the top 40 in WPA last year, but then again neither were a lot of good players. But are high-strikeout guys really worse off in high-leverage spots? The rationale for sending a strong contact hitter to the plate in a high-leverage, game-changing spot is intuitively obvious, but all else equal, is someone like Ichiro really better in those situations than someone like Judge?

Many have been using Clutch to look for relationships with other stats. To be honest, I can’t find much of a statistical relationship between anything and ‘Clutch,’ so I am opting for a different route. We know that a player’s high-leverage PAs matter far more to his team than his low-leverage ones, by roughly a factor of 10. If we assume WPA is the best way of measuring a player’s impact on his team’s chances of winning, in terms of coming through in leverage spots, then we can tackle the clutch problem in the traditional sense of the word.
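For reference, WPA is just the running sum of win-expectancy changes credited to the hitter over his plate appearances, something like:

$$\mathrm{WPA} = \sum_{i=1}^{n}\left(\mathrm{WE}_{\mathrm{after},\,i} - \mathrm{WE}_{\mathrm{before},\,i}\right)$$

where WE is the team’s win expectancy and the sum runs over the player’s n plate appearances.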

WPA is not perfect; no statistic is. Several factors play into a player’s potential WPA: place in the batting order and strength of teammates, among others. But in terms of measuring performance in high leverage, it works quite well.

Examining the correlation matrix between WPA and several other variables tells us some interesting things.

**K = K% and BB = BB%

We already assume that a more skilled hitter will perform better in high-leverage situations than a less skilled one. What we see is that K% has a negative relationship with WPA, but not a strong one, and not as strong as BB%, which has a positive relationship. Looking at statistics like wOBA, K% and BB% alongside WPA can be tricky, because players with good wRC numbers can also strike out a lot; see Mike Trout a few years back. Those same players can also walk a lot. I like this correlation matrix because it also shows the relationship between stats like wOBA and K%, which you can see are negatively correlated, but only thinly. The relationships between stats like these will never be perfect: again, productive hitters can still strike out a lot, and those same players can also walk a lot. This lends evidence to the idea that a walk is much more valuable than a strikeout is detrimental.
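For anyone who wants to reproduce a matrix like this, a minimal sketch with pandas follows; the file name and column labels are assumptions about however your season-level export is formatted:

```python
import pandas as pd

# Season-level batting data, e.g. a leaderboard export (file name assumed)
df = pd.read_csv("batters.csv")

# Pearson correlation matrix for the stats of interest
cols = ["WPA", "wOBA", "K%", "BB%"]
print(df[cols].corr().round(2))
```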

I’ll add a few more variables to the correlation matrix without trying to make it too messy.

We see again that WPA and wOBA show the strongest relationship. The matrix also suggests we can debunk the myth that contact-oriented, ground-ball hitters perform better in high-leverage situations.

So why do we judge players like Judge (no pun intended) so harshly for their proneness to striking out when, overall, they are very productive hitters who still produce runs for their teams? The answer is that we probably shouldn’t. But it wouldn’t be right to just stop there.

So how exactly should we value strikeouts? A commenter on a recent article mentioned that when measuring Clutch against K% and BB%, he or she found a statistically significant negative relationship between K% and Clutch. However, that statistical significance goes away when also controlling for batting average. Interestingly, I found the same to be true when using WPA as the dependent variable, except with wOBA in place of batting average.

To further test this, I use ordinary least squares (OLS) linear regression to test WPA against several variables. I run several models, based mainly on prior studies that suggest relationships between high-leverage performance and other variables. Before I go into the models, I should say a little more about the data.

More about the data:

I wanted a large sample of recent data, so I use a reference period encompassing the 2007-2017 seasons. I include position players with at least 200 PAs in each year they appear in the data, which captures players with significant playing time beyond just everyday starters. This also gives me a fairly normal distribution of the data. The summary statistics are shown below.

There aren’t really abnormalities in the data to discuss. I find the standard deviations of the variables especially useful, as they will help with the analysis below. All in all, I get a fairly normal distribution of data, which is what I am going for. The only problems I found with observations straying far from the mean were with ISO and wOBA. To account for this, I square both variables, which produces the most normal adjustment of any transformation I tried. The squared wOBA and ISO variables are what I will be using in the models.
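As a rough sketch of how the sample and the squared variables were put together (continuing the pandas example above; column names are assumptions):

```python
import pandas as pd

df = pd.read_csv("batters.csv")  # season-level data, as above

# Keep 2007-2017 position-player seasons with at least 200 PA
sample = df[df["Season"].between(2007, 2017) & (df["PA"] >= 200)].copy()

# Square wOBA and ISO to pull their skewed tails toward normal;
# these are the squared variables used in the models below
sample["wOBA2"] = sample["wOBA"] ** 2
sample["ISO2"] = sample["ISO"] ** 2

print(sample[["WPA", "wOBA2", "ISO2", "K%", "BB%"]].describe())
```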

I use multiple regression and probability techniques to try to shed light on the relationship between strikeouts and high-leverage performance. First I use an OLS linear regression model with a few different specifications, which can be found below.
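A minimal version of the first specification, using statsmodels (again continuing the sketch above; this is an illustration, not the exact code behind the results):

```python
import statsmodels.formula.api as smf

# Formula names can't contain '%', so rename the rate columns first
sample = sample.rename(columns={"K%": "K_pct", "BB%": "BB_pct",
                                "GB%": "GB_pct", "FB%": "FB_pct",
                                "Hard%": "Hard_pct"})

# Specification 1: WPA on squared wOBA, walk rate, and strikeout rate
model1 = smf.ols("WPA ~ wOBA2 + BB_pct + K_pct", data=sample).fit()
print(model1.summary())
```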

For the first equation, I find that wOBA, BB% and K% all have statistically significant relationships with WPA at the one percent level. That is not exactly groundbreaking, but it gives us a better idea of the magnitudes of the relationships. The results of the first regression are below.

I find that these three variables alone account for about 60% of the variance in WPA. Per the model, a one percentage point increase in K% corresponds to about a 1.14 percentage point decrease in WPA. Upping your walk rate one percentage point has a greater effect in the other direction, corresponding to about a 5 percentage point increase in WPA. Also per the model, a one percentage point increase in squared wOBA corresponds to about a 35.50 percentage point increase in WPA. These interpretations, however, are tricky and do not mean much on their own. Since WPA usually runs on a scale from about -3 to +6, percentage point changes do not tell us anything tangible, but they do give a sense of magnitude.

To account for this, I convert the coefficients into changes by standard deviation so we can compare the variables apples-to-apples on a level field. The betas of the variables are shown below.

Unsurprisingly, wOBA has the greatest effect on WPA, while K% has the smallest. All else equal, a one standard deviation increase in K% corresponds with just a 0.04 standard deviation decrease in WPA. A one standard deviation increase in BB% has a larger upward effect on WPA than K% has downward, albeit not by much. The standard deviations of these variables are not very big, so the movement increments are small, but we can now make level comparisons across the variables in terms of magnitude.
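The standardized betas can be recovered from the raw coefficients by scaling each one by the ratio of its predictor’s standard deviation to WPA’s, roughly:

```python
# Standardized beta: raw coefficient times sd(x) / sd(y)
y_sd = sample["WPA"].std()
for var in ["wOBA2", "BB_pct", "K_pct"]:
    beta = model1.params[var] * sample[var].std() / y_sd
    print(f"{var}: {beta:+.3f}")
```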

This brings us back to the fact that good hitters can still strike out a good portion of the time. We like to think that high-strikeout hitters are all power hitters, but Mike Trout was not that when he won his MVP while striking out more than anyone in the league. Not completely gone are the days when the only hitters allowed to strike out were the ones hitting 40+ round-trippers a year. I’m not necessarily trying to argue one way or the other, but getting comfortable with high-strikeout yet productive players could take some getting used to. We value pitchers who can rack up strikeouts because doing so eliminates batted-ball variance, but comparing high-K pitchers and high-K batters is not an even trade: simply putting the ball in play is not quite enough in MLB when you’re a hitter, while eliminating batted-ball variance through strikeouts is important for pitchers.

Speaking of batted-ball variance, we can account for it in the models. I add ISO, Hard%, GB% and FB%. I would have liked to add launch angle, but I do not have time to match that data right now; it would likely improve the model. Instead, I do my best to account for exit velocity with Hard%. I do not include Soft% or Med% because preliminary tests showed no statistical significance; the same goes for LD%, which was a bit surprising. I am mainly looking at how the K% coefficient changes while controlling for these new variables, and whether I can account for any more of the variance in the model.

When controlling for the new variables, the K% coefficient shows a stronger negative relationship. We find that, despite popular belief, ground balls seem to be negatively correlated with WPA, though not as strongly as fly balls. wOBA and BB% show the strongest positive relationships with WPA. Hard% shows a positive relationship with WPA but is only significant at the 10% level. This model accounts for about 65% of the variation in WPA.

Batted-ball profiling for WPA is still a little tricky. Running F-tests on GB% and FB%, I find that the two together are indeed jointly significant in the model. However, when controlling for season-to-season variance, GB% and FB% are not significant and don’t help the model. I think it’s likely that extreme fly-ball hitters, all else equal, will not be as strong in high-leverage situations. Kris Bryant fits the profile of a hitter who consistently puts the ball in the air yet struggled in high-leverage spots last year. On the opposite end of the spectrum, extreme ground-ball hitters were not WPA magicians either. It is likely that over the whole sample, FB and GB rates play a part, but at the individual season level, the variance in these rates doesn’t tell us much.
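The joint test is a one-liner in statsmodels; a sketch of the expanded specification and the F-test (variable names as renamed above):

```python
# Specification 2: add the batted-ball profile variables
model2 = smf.ols(
    "WPA ~ wOBA2 + BB_pct + K_pct + ISO2 + Hard_pct + GB_pct + FB_pct",
    data=sample,
).fit()

# H0: the GB% and FB% coefficients are jointly zero
print(model2.f_test("GB_pct = 0, FB_pct = 0"))
```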

The explanation may be as simple as this: MLB fielders are good. Batted-ball variance is very real, but simply making contact, all else equal, does not do much more for your team’s chances of winning than striking out does. Don’t get me wrong, putting a ball in play is always better, but the act of putting the ball in play by itself is not much more helpful. In addition, striking out a lot could suggest mechanical or timing issues with a player’s swing, though that should not be a blanket generalization. Mike Trout (I like mentioning Trout, but many others fit this profile) may strike out a lot (not so much anymore), but he also has a well-controlled swing and hits the ball at optimal launch angles and speeds, making him good at performing in high-leverage situations.

Perhaps the shift has hurt the ability of extreme pull hitters to produce, to the point where it hurts their WPA. A better test would be to look at platoon splits to see whether extreme pull lefties are hurt more than extreme pull righties, since lefties get shifted much more often. The next explanation is more of an opinion from my playing days and could easily be debated: the ability to use the whole field is the sign of a better-rounded hitter. Being an extreme pull hitter often means locking yourself into one approach, one swing, and one pitch. I have no statistical evidence to back that up; it is simply what I gathered while on the field. I think it is good to throw the eye test into statistical analysis now and then to keep a study grounded.

It seems that performance in high-leverage situations is more about mentality and the ability to adjust one’s approach to the situation. The overall conclusion I draw is that K% is detrimental to one’s ability to perform in high-leverage situations, but not by much. There are good hitters who strike out quite a bit, but those good hitters are still good hitters, as demonstrated by the strong relationship between stats like wOBA and WPA. Yes, Aaron Judge struck out a lot last season and had a big dip in relative high-leverage performance, as seen in his Clutch metric, but all 29 other teams wish they had him. And while the very top of the BB/K leaderboard also shows the highest WPAs, the leaders just beyond them do not follow suit.

To see the relationship between K% and WPA more visually, below is a scatter plot comparing the two metrics with a line of best fit.
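Something like the following produces that plot (matplotlib plus numpy’s polyfit for the trend line):

```python
import matplotlib.pyplot as plt
import numpy as np

x, y = sample["K_pct"], sample["WPA"]
plt.scatter(x, y, s=10, alpha=0.4)

# Line of best fit from a simple least-squares fit
slope, intercept = np.polyfit(x, y, 1)
xs = np.linspace(x.min(), x.max(), 100)
plt.plot(xs, slope * xs + intercept, color="red")

plt.xlabel("K%")
plt.ylabel("WPA")
plt.show()
```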

Looking at the scatter plot of WPA vs. K%, we can see a slight downward relationship, but the data is mostly scattered around the means, which helps confirm my conclusion above. There are not as many high-K guys with high WPAs as there are high-K guys with lower WPAs, but that doesn’t tell us much, because there will always be more average and below-average players than above-average ones. I’ll let you guess the player who had an over-30% K rate yet a WPA well over 5.

I know the matrix graph is a little overwhelming, but we can see that K% does not show a strong visual relationship with anything. We see a slight upward tick in the slope when plotting K% against ISO, but the points are still predominantly scattered around the means. We also see a slight downward tick in the slope of GB% against K%. Besides the obvious strong relationship between wOBA and WPA, BB% does indeed show a positive visual relationship with WPA. The fact that ISO shows a relationship with both K% and WPA is interesting; perhaps ISO helps capture the batted-ball quality I have been trying to account for. The 2s after wOBA and ISO indicate their squared variables.

It seems that no single trait makes a hitter good in high-leverage situations. Exceptionally well-rounded hitters, such as Joey Votto and Mike Trout, seem to be consistently ahead of everyone else in high-leverage spots. Even then, they are not exactly the same type of hitter, though both walk a lot and make quality contact with the baseball. I believe performance in high-leverage situations comes down to mentality and the ability to keep a solid approach in the face of pressure. The Clutch metric itself is probably better for looking at how batters deal with pressure, since players know what is high leverage and what is not and respond accordingly.

Interestingly enough, though I won’t go into much detail here, I took O-Swing% and Z-Swing% and measured each against WPA, both independently and within the full model. I found that O-Swing%’s effect on WPA is statistically distinguishable from zero, while Z-Swing%’s is not. O-Swing%, of course, showed a negative relationship with WPA. Disciplined batters who don’t chase pitches, and thereby recognize good ones, are indeed poised to do better in big spots (if that is not stating the obvious). I don’t think anyone will pinpoint the exact qualities of a good situational hitter, but the best pure hitters will have the edge in WPA, even if they are prone to striking out.


Overcoming Imperfect Information

When a team trades a veteran for a package of prospects, only minor-league data and the keen eyes of scouts can be used to assess the likely future major-league contributions of those players. Teams have relied on the trained eyes of scouts for generations, but of course the analytics community wants its foot in the door too. Developments such as Chris Mitchell’s KATOH system make some strides, as it is helpful to compare historical information. Does a prospect’s rank on MLB.com’s or Baseball America’s top-prospect list really indicate how productive that player will be in the major leagues? Of course, baseball players are human, and production will always vary as the result of numerous factors that can change the course of a career. Perhaps a player meets a coach who dramatically turns his game around, or a pitcher discovers a new-found talent for an impressive curveball that jumps him from fringe prospect to MLB-ready. The dilemma of imperfect information will always be present, so teams must use the best resources available to tackle the problem.

To start my analysis of imperfect information, I look at the top 100 position-player prospects from 2009, using data from Baseball-Reference.com. I break the prospects into three groups based on their ranking: position players ranked 1-10, 11-20 and 21-100. I then look at the value those prospects contributed in their first six seasons in the major leagues, as well as their to-date total contributions, using fWAR. I look at the first six seasons of a player’s career because that is how long a player is under team control before reaching free agency. This study does not take into account any contract extensions given before a player reached free-agent eligibility. For players who have not yet played six full seasons, I look at their total contributions so far. The general idea for this study was inspired by a 2008 article by Victor Wang on imperfect prospect information.

I convert the prospects’ production into monetary value based on the relative WAR values commanded in the free-agent market that year, using fWAR as the best available measure of total value. When teams trade for prospects, they understand that they are trading wins today for wins in the future. Since baseball is a business and teams care about their on-field performance each year, I need to account for that in the analysis. To do so, I assume that, all else equal, a win today is more valuable than a win in the future, and I apply an 8% discount rate to each prospect’s WAR to create a discounted WAR value (dWAR). The exact discount rate can be debated, but 8% seems appropriate for the time frame examined.
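Mechanically, the discounting works like any present-value calculation; a minimal sketch, assuming the discount starts with a player’s first season:

```python
DISCOUNT_RATE = 0.08

def discounted_war(war_by_season):
    """Present value of a prospect's WAR over his first seasons,
    discounting season t (1-indexed) by (1 + r)^t."""
    return sum(
        war / (1 + DISCOUNT_RATE) ** t
        for t, war in enumerate(war_by_season, start=1)
    )

# A hypothetical prospect worth 1, 2, 3, 3, 4, 4 WAR over six seasons
print(round(discounted_war([1, 2, 3, 3, 4, 4]), 2))
```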

From here, I break the prospects into a few subgroups based on their average WAR contributed over their first six seasons in the major leagues, following the guidelines laid out in other studies with some slight modifications. Players with zero or negative WAR per year are labeled busts; players above 0 and up to 2 WAR are contributors; players with 2-4 WAR are starters; and players with 4+ WAR are stars. As described previously, I estimate each player’s monetary savings to his team by comparing his WAR-based monetary value to what similar production would have commanded on the free-agent market that year. There is some debate about the value of one WAR on the free-agent market, but my calculations show that about $7 million bought one WAR leading up to the 2009 season. Victor Wang suggests the price of one WAR inflated about 10% from year to year. I find the present value of each player’s WAR, then convert it at that $7 million per WAR to find the player’s effective savings to his team based on production.
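Putting the pieces together, the bucketing and the savings estimate look roughly like this (the thresholds and the $7 million per WAR are the values discussed above; the example inputs are hypothetical):

```python
DOLLARS_PER_WAR = 7.0  # $M per win on the 2009 free-agent market (estimated above)

def classify(avg_war_per_year):
    """Bucket a prospect by average WAR per year over his first six seasons."""
    if avg_war_per_year <= 0:
        return "bust"
    if avg_war_per_year < 2:
        return "contributor"
    if avg_war_per_year < 4:
        return "starter"
    return "star"

def savings_per_year(dwar_per_year):
    """Effective savings ($M per year): what the same discounted production
    would have cost on the free-agent market."""
    return dwar_per_year * DOLLARS_PER_WAR

print(classify(5.17), savings_per_year(4.0))  # hypothetical star-level prospect
```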

Position Prospects Ranked 1-10

             Bust     Contributor   Starters   Star     AVG WAR/Y
Count        1        2             5          2        2.83
Probability  10.00%   20.00%        50.00%     20.00%

                    Bust     Contributor   Starters   Star
WAR/Y               0.43     1.53          2.73       5.17
Probability         10.00%   20.00%        50.00%     20.00%
PV Savings/Y ($M)   1.88     8.46          10.91      27.98

Interestingly enough, this prospect class panned out quite well compared to some other recent classes. The only bust in terms of discounted WAR turned out to be Travis Snider of Toronto, who was ranked the sixth-best prospect in 2009 but managed a cumulative WAR only slightly above zero over his first six seasons. Though the top 10 position-player prospects from this class feature names such as Jason Heyward and Mike Moustakas, the one who contributed the most WAR over his first six seasons was Buster Posey of San Francisco, who posted nearly 6 WAR a year. It is important to understand that the savings a player provides his team based on his production does not indicate any “deserved” salary for that player. Instead, it merely indicates the amount of money the team would have had to spend on the free-agent market to acquire that exact same production. The top 10 position-player prospects from this class turned out very productive for their respective teams, with a 70% chance of being either a starter or a star.

Position Prospects Ranked 11-20

             Bust     Contributor   Starters   Star     AVG WAR/Y
Count        5        2             1          2        2.16
Probability  50.00%   20.00%        10.00%     20.00%

                    Bust     Contributor   Starters   Star
WAR/Y               0.67     1.60          3.56       5.71
Probability         50.00%   20.00%        10.00%     20.00%
PV Savings/Y ($M)   3.21     8.36          19.10      30.90

The next group is the position players ranked 11-20. As perhaps expected, there are more busts in this group of ranked prospects, with the rest of the small sample spread through the other categories. Giancarlo Stanton, the 16th-ranked prospect, and Andrew McCutchen turned out to be the two stars from the list. As the chart shows, the probability of landing a bust at this ranking level is much higher than among the 1-10 group. The variance does show, however, that outcomes can still be promising at this level: there was an identical chance of a player becoming a star compared to the first group, and a 50% chance of being at least a contributor. In total, four of the top 20 prospects from 2009 have turned out to be stars to this point in their careers, though not all have reached six full service years in the majors.

Position Prospects Ranked 21-100

             Bust     Contributor   Starters   Star
Count        12       7             3          1
Probability  38.71%   22.58%        9.68%      3.23%

                    Bust     Contributor   Starters   Star
WAR/Y               0.35     1.46          3.22       3.87
Probability         38.71%   22.58%        9.68%      3.23%
PV Savings/Y ($M)   1.48     7.52          17.18      20.80

The next group of charts shows the rest of the top 100 ranked position players. There is much more potential for busts in this range; however, we must keep in mind that the variance will differ in this group simply because its sample is larger than the first two groups’. Nearly 40% of position players ranked 21-100 turned out to be busts. In addition, only Freddie Freeman of Atlanta managed to clear the 4+ dWAR-per-year threshold to qualify as a star. In fact, the most common outcome among these ranked position players is the bust. When drafting a player, a team never knows for certain what production the pick will provide in the major leagues, no matter the pick number. Likewise, prospect rankings based on minor-league performance are still not a completely accurate indicator of future MLB productivity. Higher-ranked prospects in 2009 did have a higher probability of contributing more to their major-league club, though rankings are understandably volatile. A variety of factors play into the volatile nature of prospect outcomes and the prospect-risk premium. Part of the reason I chose to look only at position players is that they are traditionally safer from injury than pitchers, and therefore carry slightly less of a risk premium.

Looking at the distribution of dWAR for the prospect group, most of the mass sits at the low end with a long tail of stars, which is to be expected: prospects do not all turn out equally strong, and most will not become stars. It also makes sense because in any given year, only a few top prospects become very strong players, while most hover around average. We also see that the interquartile range runs from about 0.5 dWAR per year to slightly above 2.5 dWAR per year. A team could therefore expect production in that range from a given prospect ranked 1-100, varying slightly by rank group. A useful follow-up would be a distribution chart for each rank group, but in the interest of brevity, I do not do that here.

New ways of evaluating both minor-league and amateur players to relieve some of the prospect-risk premium are useful, although risk will always be present. In the next part of this study, I try to find statistically significant correlations between college and major-league performance in order to reduce the noise of the prospect-risk premium. One of the great things about baseball’s player-development structure is that it allows players with the right work ethic and dedication, including those overlooked in the high rounds of the draft, to prove themselves in the minor leagues. That can seldom be said of other professional sports. The famous example is Mike Piazza, who was one of the last overall picks in his draft class and worked his way to a Hall of Fame career.

With perfect information, rankings would line up exactly with outcomes, each ranked prospect achieving a higher dWAR than the next ranked prospect. Some may attribute the imperfect-information dilemma to drafting or the evaluation of minor-league performance, some to differences in player-development systems, and some may rationally note that both players and scouts are human and will never be perfect. Prospect rankings for a given year are based on several factors, including a player’s proximity to contributing at the major-league level. The most talented minor-league players could be ranked lower in a given year because of their age or development level, which could introduce some unwanted variance into the data. Looking at just the top 100 prospects helps limit this problem, but does not make it disappear. It is difficult to know when teams plan to call up prospects anyway; it depends on the needs of the team. Some players make the jump at 20, while others make it at 25 or even later.

This type of analysis could be useful for estimating the opportunity cost of a trade involving prospects, both in financial trade-offs and in present versus future on-field production. A lot of factors play into the success of a prospect. When evaluating any player, things such as makeup and work ethic are just as big a factor as measurable statistics. Evaluating college and high-school players for the annual Rule 4 draft is especially difficult because of the limited statistical information that is accessible. Team scouts work very hard to accurately evaluate the top amateur players in the United States and around the world in order to put their teams in a good position for the draft. Despite the immense baseball knowledge scouts bring to player evaluation, statistical analysis of college players is still being explored and used to complement traditional scouting reports. The prospect-risk premium will always be something teams must deal with, but efficiently allocating players into a major-league pipeline is essential for every front office.

There have been a few other articles on sites such as FanGraphs and The Hardball Times on statistical analysis of college players. Cubs president Theo Epstein told writer Tom Verducci that the Cubs’ analytics team has developed a specific algorithm for evaluating college players; the process involved sending interns to photocopy old stat sheets on college players from before the data was recorded electronically.

Though I do not doubt the Cubs have an accurate and useful algorithm for this purpose, it is not publicly available for review, and understandably so. For the several articles that tackle this question on other baseball statistics sites, however, I think there is room for improvement. First, the various complex statistical techniques used to compare college and MLB statistics all yield about the same disappointing results, which suggests some of the models are unnecessarily complicated. Second, though the authors may imply it by default, statistical models in no way account for the character and makeup of a college player; even in the age of advanced analytics, the human and leadership elements of the game still hold great value. Statistical rankings therefore should not be taken as a precise recommended draft order. In addition, they do not account for a player’s injury history and risk. Teams can improve their odds of adding a future starter or star over a player’s first six seasons by drafting position players, who have historically been safer bets than pitchers due to lower injury risk.

The model in this post attempts to find statistically significant correlations between players’ college stats and their stats over their first six seasons in MLB, which, as covered earlier, is the amount of time a team controls a drafted player before he reaches free agency and gains the power to negotiate with any team. However, the relationship between college batting statistics and MLB fWAR can only go so far, because of the lack of fielding and other data for college players.

The first thing I did was merge databases of Division I college players for the years 2002-2007 with their statistics for their first six years in MLB. There is some noise in the model, since some players drafted in the later years of my sample have not yet spent six years in MLB, which is accounted for. I only look at the first 100 players drafted each year. I then calculate each player’s college career wOBA per the methods recommended by Victor Wang in his 2009 article on a similar topic. However, since wOBA weights are not calculated for college players, the statistic is an arbitrary wOBA that borrows the weights from the 2013 MLB season. Since wOBA weights do not vary much from year to year, this will do the trick for the purposes of the analysis. For MLB players, wOBA correlates with wRC and wRC+ at about 97% (varying slightly with sample size), so I did not feel it necessary to calculate wRC in addition to wOBA; in the ordinary least squares and multiple regressions, including both would have caused pairwise collinearity problems anyway, so calculating both statistics would have been pointless. Along with ordinary least squares, I also run multiple regressions and change the functional form to double logarithmic. (A future study I hope to tackle soon is to use logistic regression to estimate the odds of a college player ending up in each of the four WAR groups over his first six major-league seasons.)
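For reference, a sketch of that arbitrary-weight wOBA; the weights below are my reading of the 2013 FanGraphs linear weights, so treat the exact values as approximations:

```python
def college_woba(bb, hbp, singles, doubles, triples, hr, ab, ibb=0, sf=0):
    """wOBA using (approximate) 2013 MLB linear weights, applied to college
    counting stats in the absence of college-specific weights."""
    num = (0.690 * bb + 0.722 * hbp + 0.888 * singles
           + 1.271 * doubles + 1.616 * triples + 2.101 * hr)
    den = ab + bb - ibb + sf + hbp
    return num / den
```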

Due to limitations in the data, as well as how few top-100 picks actually make it to MLB, the analysis is somewhat limited, yet it still produces some valuable results. Interestingly, though perhaps unsurprisingly, my calculated college-career wOBA shows a strong, statistically significant relationship with wOBA produced in MLB. To a lesser extent, college wOBA also shows a statistically significant relationship with MLB-produced WAR, even though this study does not take into account defense, baserunning, etc. Looking at a correlation matrix, I find that college wOBA and MLB wOBA have a pairwise correlation of about 25%. The matrix shows a similar pairwise correlation of about 25% between college wOBA and MLB WAR, though at a lower level of confidence. Using ordinary least squares, I then try different functional forms to further evaluate the strength of the relationship between college and MLB statistics.

The first model confirms a fairly strong, statistically significant relationship at the 1% level between college and MLB wOBA, with a correlation coefficient of about .25. College strikeout-to-walk ratio is also statistically significant at the 1% level, albeit with a weaker coefficient. Even so, the matrix indicates that players who are less prone to the strikeout in college see, on average, better success in MLB. Interestingly enough, college wOBA and strikeout-to-walk ratio are about the only two statistically significant variables I can find across several models with different functional forms. Per the model, we can also say that college hitters with extra-base-hit ability likely have better prospects in the majors. The R-squared for model one is about .20, which is not terrible, but certainly not enough to call the model set in stone. The constant in the regressions seems to capture noise that is difficult to explain, hinting at the extreme variance and unpredictability of the draft.

For model 2, I use a double logarithmic functional form with a multiple linear regression to examine the variance in MLB wOBA explained by college wOBA and strikeout-to-walk ratio. The results of this regression are slightly stronger and lend a bit more support to the conclusion that the calculated college wOBA is a strong predictor of MLB wOBA.
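A sketch of the double-log specification (logs of both sides, so coefficients read as elasticities; the DataFrame and column names here are assumptions):

```python
import numpy as np
import statsmodels.formula.api as smf

# college: one row per drafted hitter, with college and first-six-seasons MLB stats
college["log_mlb_woba"] = np.log(college["mlb_woba"])
college["log_college_woba"] = np.log(college["college_woba"])
college["log_kbb"] = np.log(college["college_k_per_bb"])

log_model = smf.ols(
    "log_mlb_woba ~ log_college_woba + log_kbb", data=college
).fit()
print(log_model.summary())
```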

According to the results of the double log model, a player will, on average and all else equal, have about a one percent higher MLB wOBA for every 36 percent increase in his college wOBA relative to other players. (Since the model is in double log form, the interpretation is in percent changes rather than percentage points.) The coefficient is significant at the one percent level. Similarly, a one percent higher MLB wOBA corresponds to about a six percent lower college strikeout-to-walk ratio. Again, I get an R-squared of about 0.20.

Perhaps the most interesting thing these regressions show is that college batting average has almost no correlation with MLB success. This may be a little misleading, since hitters who get drafted in high rounds and do well in MLB will likely have had high college batting averages, but the regressions show there are other things teams should look for in their draft picks besides a good batting average: traits such as low strikeout totals, especially relative to walks, help indicate a player’s pure ability to get on base. When evaluating college players, factors such as character, work ethic and leadership ability will be just as good indicators of success for strong college ballplayers. Perhaps the linear-weights measurements used in wOBA calculations are on to something. Accurate weights obviously cannot be applied to college statistics without the proper data, but comparisons using MLB weights for college players can still be useful. It is also well known that position players are traditionally safer high-round picks than pitchers due to injury risk. I would argue that strong college hitters are oftentimes the most productive top prospects, while younger pitchers who can develop in a team’s player-development system are beneficial for a strong farm system and pipeline to the major leagues. Many high-upside arms can be found coming out of high school, rather than taking polished college power pitchers, and arms from smaller schools are oftentimes overlooked due to the competitive environment they play in. Nevertheless, hidden and undervalued talent exists that could yield high-upside rewards for teams, both financially and productively.