The Case for Dennis Rodman, Part 2/4 (a)(ii)—Player Valuation and Unconventional Wisdom

In my last post in this series, I outlined and criticized the dominance of gross points (specifically, points per game) in the conventional wisdom about player value. Of course, serious observers have recognized this issue for ages, responding in a number of ways—the most widespread still being ad hoc (case by case) analysis. Not satisfied with this approach, many basketball statisticians have developed advanced “All in One” player valuation metrics that can be applied broadly.

In general, Dennis Rodman has not benefitted much from the wave of advanced “One Size Fits All” basketball statistics. Perhaps the most notorious example of this type of metric—easily the most widely disseminated advanced player valuation stat out there—is John Hollinger’s Player Efficiency Rating:

[Image: Player Efficiency Rating]

In addition to ranking Rodman as the 7th best player on the 1995-96 Bulls championship team, PER is weighted to make the league average exactly 15—meaning that, according to this stat, Rodman (career PER: 14.6) was actually a below average player. While Rodman does significantly better in a few predictive stats (such as David Berri’s Wages of Wins) that value offensive rebounding very highly, I think that, generally, those who subscribe to the Unconventional Wisdom typically accept one or both of the following: 1) that despite Rodman’s incredible rebounding prowess, he was still just a very good role-player, and likely provided less utility than those who were more well-rounded, or 2) that, even if Rodman was valuable, a large part of his contribution must have come from qualities that are not typically measurable with available data, such as defensive ability.

My next two posts in this series will put the lie to both of those propositions. In section (b) of Part 2, I will demonstrate Rodman’s overall per-game contributions—not only their extent and where he fits in the NBA’s historical hierarchy, but exactly where they come from. Specifically, contrary to both conventional and unconventional wisdom, I will show that his value doesn’t stem from quasi-mystical unmeasurables, but from exactly where we would expect: extra possessions stemming from extra rebounds. In part 3, I will demonstrate (and put into perspective) the empirical value of those contributions to the bottom line: winning. These two posts are at the heart of The Case for Dennis Rodman, qua “case for Dennis Rodman.”

But first, in line with my broader agenda, I would like to examine where and why so many advanced statistics get this case wrong, particularly Hollinger’s Player Efficiency Rating. I will show how, rather than being a simple outlier, the Rodman data point is emblematic of major errors that are common in conventional unconventional sports analysis – both as a product of designs that disguise rather than replace the problems they were meant to address, and as a product of uncritically defending and promoting an approach that desperately needs reworking.

Player Efficiency Ratings

John Hollinger deserves much respect for bringing advanced basketball analysis to the masses. His Player Efficiency Ratings are available on ESPN.com under Hollinger Player Statistics, where he uses them as the basis for his Value Added (VA) and Expected Wins Added (EWA) stats, and regularly features them in his writing (such as in this article projecting the Miami Heat’s 2010-11 record), as do other ESPN analysts. Basketball Reference includes PER in its “Advanced” statistical tables (present on every player and team page), and also uses it to compute player Value Above Average and Value Above Replacement (definitions here).

The formula for PER is extremely complicated, but its core idea is simple: combine everything in a player’s stat-line by rewarding everything good (points, rebounds, assists, blocks, and steals), and punishing everything bad (missed shots, turnovers). The values of particular items are weighted by various league averages—as well as by Hollinger’s intuitions—then the overall result is calculated on a per-minute basis, adjusted for league and team pace, and normalized on a scale averaging 15.

Undoubtedly, PER is deeply flawed. But sometimes apparent “flaws” aren’t really “flaws,” but merely design limitations. For example: PER doesn’t account for defense or “intangibles,” it is calculated without recourse to play-by-play data (which didn’t exist prior to the last few seasons), and it compares players equally, regardless of position or role. For the most part, I will refrain from criticizing these constraints, instead focusing on a few important ways that it fails or even undermines its own objectives.

Predictivity (and: Introducing Win Differential Analysis)

Though Hollinger uses PER in his “wins added” analysis, its complete lack of any empirical component suggests that it should not be taken seriously as a predictive measure. And indeed, empirical investigation reveals that it is simply not very good at predicting a player’s actual impact:

[Figure: bubble graph of PER vs. win percentage differential]

This bubble-graph is a product of a broader study I’ve been working on that correlates various player statistics to the difference in their team’s per-game performance with them in and out of the line-up. The study’s dataset includes all NBA games back to 1986, and this particular graph is based on the 1300ish seasons in which a player who averaged 20+ minutes per game both missed and played at least 20 games. Win% differential is the difference in the player’s team’s winning percentage with and without him (for the correlation, each data point is weighted by the smaller of games missed or played; I will have much more to write about the nitty-gritty of this technique in separate posts).
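A minimal sketch of the core calculation (the game counts below are hypothetical; the real study uses every NBA game back to 1986):

```python
# Win% differential: the difference in a team's winning percentage with
# and without a given player. Each player-season is weighted by the
# smaller of games played or games missed. Numbers here are made up.

def win_pct_differential(wins_with, games_with, wins_without, games_without):
    return wins_with / games_with - wins_without / games_without

def sample_weight(games_with, games_without):
    return min(games_with, games_without)

# Example: a team goes 45-15 with the player and 8-14 without him.
diff = win_pct_differential(45, 60, 8, 22)   # ~0.386
weight = sample_weight(60, 22)               # 22
```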

So PER appears to do poorly, but how does it compare to other valuation metrics?

[Figure: predictive power of PER compared to other valuation metrics]

SecFor (or “Secret Formula”) is the current iteration of an empirically-based “All in One” metric that I’m developing—but there is no shame in a speculative, purely a priori metric losing (even badly) as a predictor to the empirical cutting edge.

However, as I admitted in the introduction to this series, my statistical interest in Dennis Rodman goes way back. One of the first spreadsheets I ever created was in the early 1990s, when Rodman still played for San Antonio. I knew Rodman was a sick rebounder, but rarely scored—so naturally I thought: “If only there were a formula that combined all of a player’s statistics into one number that would reflect his total contribution.” So I came up with this crude, speculative, purely a priori equation:

Points + Rebounds + 2*Assists + 1.5*Blocks + 2*Steals – 2*Turnovers.

Unfortunately, this metric (which I called “PRABS”) failed to shed much light on the Rodman problem, so I shelved it.  PER shares the same intention and core technique, albeit with many additional layers of complexity.  For all of this refinement, however, Hollinger has somehow managed to make a bad metric even worse, getting beaten by my OG PRABS by nearly as much as he is able to beat points per game—the Flat Earth of basketball valuation metrics.  So how did this happen?
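For concreteness, here is PRABS as a function (the stat lines below are hypothetical illustrations, not actual career averages):

```python
# PRABS: Points + Rebounds + 2*Assists + 1.5*Blocks + 2*Steals - 2*Turnovers

def prabs(points, rebounds, assists, blocks, steals, turnovers):
    return (points + rebounds + 2 * assists
            + 1.5 * blocks + 2 * steals - 2 * turnovers)

# A hypothetical Rodman-like line: few points, huge rebounds...
rodman_like = prabs(points=6, rebounds=16, assists=2.5,
                    blocks=0.5, steals=0.7, turnovers=1.8)
# ...vs. a hypothetical volume scorer with a more "complete" stat line.
scorer_like = prabs(points=25, rebounds=5, assists=3,
                    blocks=0.5, steals=1, turnovers=3)
```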

Minutes

The trend in much of basketball analysis is to rate players by their per-minute or per-possession contributions. This approach does produce interesting and useful information, which may be especially valuable to a coach deciding who should get more minutes, or to a GM evaluating which bench player to sign in free agency.

But a player’s contribution to winning is necessarily going to be a function of how much extra “win” he is able to get you per minute and the number of minutes you are able to get from him.  Let’s turn again to win differential:

[Figure: normalized regression coefficients for rate stats plus minutes played]

For this graph, I set up a regression using each of the major rate stats, plus minutes played (TS%=true shooting percentage, or one half of average points per shot, including free throws and 3 pointers).  If you don’t know what a “normalized coefficient” is, just think of it as a stat for comparing the relative importance of regression elements that come in different shapes and sizes. The sample is the same as above: it only includes players who average 20+ minutes per game.
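For readers who want the mechanics: a normalized (standardized) coefficient comes from standardizing every variable before running the regression. A minimal sketch on synthetic placeholder data (not the study's real sample):

```python
# Standardize predictors and response to mean 0 / stddev 1, then fit
# ordinary least squares; the resulting coefficients are comparable
# across stats measured in different units. Data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))   # stand-ins for, e.g., MPG, TS%, rebound rate
y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.5, size=n)

def normalized_coefficients(X, y):
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    coefs, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return coefs

coefs = normalized_coefficients(X, y)
# The first predictor should dominate; the third should be near zero.
```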

Unsurprisingly, “minutes per game” is more predictive than any individual rate statistic, including true shooting.  Simply multiplying PER by minutes played significantly improves its predictive power, managing to pull it into a dead-heat with PRABS (which obviously wasn’t minute-adjusted to begin with).

I’m hesitant to be too critical of the “per minute” design decision, since it is clearly an intentional element that allows PER to be used for bench or rotational player valuation, but ultimately I think this comes down to telos: So long as PER pretends to be an arbiter of player value—which Hollinger himself relies on for making actual predictions about team performance—then minutes are simply too important to ignore. If you want a way to evaluate part-time players and how they might contribute IF they could take on larger roles, then it is easy enough to create a second metric tailored to that end.

Here’s a similar example from baseball that confounds me: Rate stats are fine for evaluating position players, because nearly all of them are able to get you an entire game if you want—but when it comes to pitching, how often someone can play and the number of innings they can give you are of paramount importance. E.g., at least for starting pitchers, it seems to me that ERA is backwards: rather than calculate runs allowed per inning, why don’t they focus on runs denied per game? Using a benchmark of 4.5, it would be extremely easy to calculate: Innings Pitched/2 – Earned Runs. So, if a pitcher gets you 7 innings and allows 2 runs, their “Earned Runs Denied” (ERD) for the game would be 1.5. I have no pretensions of being a sabermetrician, and I’m sure this kind of stat (and much better) is common in that community, but I see no reason why this kind of statistic isn’t mainstream.
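The arithmetic is simple enough to write down (the 4.5 benchmark is the one proposed above; ERD is this post's invented stat, not an established one):

```python
# Earned Runs Denied: runs prevented relative to a 4.5 runs-per-9-innings
# benchmark. With benchmark 4.5, expected runs = IP * 4.5 / 9 = IP / 2.

def earned_runs_denied(innings_pitched, earned_runs, benchmark=4.5):
    return innings_pitched * benchmark / 9 - earned_runs

erd = earned_runs_denied(7, 2)   # 7/2 - 2 = 1.5, matching the example
```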

More broadly, I think this minutes SNAFU is reflective of an otherwise reasonable trend in the sports analytical community—to evaluate everything in terms of rates and quality instead of quantity—that is often taken too far. In reality, both may be useful, and the optimal balance in a particular situation is an empirical question that deserves investigation in its own right.

PER Rewards Shooting (and Punishes Not Shooting)

As described by David Berri, PER is well-known to reward inefficient shooting:

“Hollinger argues that each two point field goal made is worth about 1.65 points. A three point field goal made is worth 2.65 points. A missed field goal, though, costs a team 0.72 points. Given these values, with a bit of math we can show that a player will break even on his two point field goal attempts if he hits on 30.4% of these shots. On three pointers the break-even point is 21.4%. If a player exceeds these thresholds, and virtually every NBA player does so with respect to two-point shots, the more he shoots the higher his value in PERs. So a player can be an inefficient scorer and simply inflate his value by taking a large number of shots.”

The consequences of this should be properly understood: Since this feature of PER applies to every shot taken, it is not only the inefficient players who inflate their stats.  PER gives a boost to everyone for every shot: Bad players who take bad shots can look merely mediocre, mediocre players who take mediocre shots can look like good players, and good players who take good shots can look like stars. For Dennis Rodman’s case—as someone who took very few shots, good or bad— the necessary converse of this is even more significant: since PER is a comparative statistic (even directly adjusted by league averages), players who don’t take a lot of shots are punished.
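Berri's break-even percentages can be checked directly from the quoted weights (1.65 and 2.65 points of credit per make, 0.72 per miss):

```python
# Solve p * make_value - (1 - p) * miss_cost = 0 for p: the shooting
# percentage at which extra attempts neither raise nor lower PER.

def break_even_pct(make_value, miss_cost=0.72):
    return miss_cost / (make_value + miss_cost)

two_point = break_even_pct(1.65)    # ~30.4%
three_point = break_even_pct(2.65)  # ~21.4%
```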

Structurally, PER favors shooting—but to what extent? To get a sense of it, let’s plot PER against usage rate:

[Figure: PER vs. usage rate]

Note: Data includes all player seasons since 1986. Usage % is the percentage of team possessions that end with a shot, free throw attempt, or turnover by the player in question. For most practical purposes, it measures how frequently the player shoots the ball.

That R-squared value corresponds to a correlation of .628, which might seem high for a component that should be in the denominator. Of course, correlations are tricky, and there are a number of reasons why this relationship could be so strong. For example, the most efficient shooters might take the most shots. Let’s see:

[Figure: shooting efficiency (TS%) vs. usage rate]

Actually, that trend-line doesn’t quite do it justice: that R-squared value corresponds to a correlation of .11 (even weaker than I would have guessed).

I should note one caveat: The mostly flat relationship between usage and shooting efficiency may be skewed, in part, by the fact that better shooters are often required to take worse shots, not just more shots—particularly if they are the shooter of last resort. A player that manages to make a mediocre shot out of a bad situation can increase his team’s chances of winning, just as a player that takes a marginally good shot when a slam dunk is available may be hurting his team’s chances. Presently, no well-known shooting metrics account for this (though I am working on it), but to be perfectly clear for the purposes of this post: neither does PER. The strong correlation between usage rate and PER is unrelated. There is nothing in its structure to suggest this is an intended factor, and there is nothing in its (poor) empirical performance that would suggest it is even unintentionally addressed. In other words, it doesn’t account for complex shooting dynamics either in theory or in practice.

Duplicability and Linearity

PER strongly rewards broad mediocrity, and thus punishes lack of the same. In reality, not every point that a player scores means their team will score one more point, just as not every rebound grabbed means that their team will get one more possession.  Conversely—and especially pertinent to Dennis Rodman—not every point that a player doesn’t score actually costs his team a point.  What a player gets credit for in his stat line doesn’t necessarily correspond with his actual contribution, because there is always a chance that the good things he played a part in would have happened anyway. This leads to a whole set of issues that I typically file under the term “duplicability.”

A related (but sometimes confused) effect that has been studied extensively by very good basketball analysts is the problem of “diminishing returns” – which can be easily illustrated like this: if you put a team together with 5 players that normally score 25 points each, it doesn’t mean that your team will suddenly start scoring 125 points a game. Conversely—and again pertinent to Rodman—say your team has 5 players that normally score 20 points each, and you replace one of them with somebody who normally scores only 10; that does not mean that your team will suddenly start scoring only 90. Only one player can take a shot at a time, and what matters is whether the player’s lack of scoring hurts his team’s offense or not. The extent of this effect can be measured individually for different basketball statistics, and, indeed, studies have shown wide disparities.

As I will discuss at length in Part 2(c), despite hardly ever scoring, differential stats show that Rodman didn’t hurt his teams’ offenses at all: even after accounting for extra possessions that Rodman’s teams gained from offensive rebounds, his effect on offensive efficiency was statistically insignificant. In this case (as with Randy Moss), we are fortunate that Rodman had such a tumultuous career: as a result, he missed a significant number of games in a season several times with several different teams—this makes for good indirect data. But, for this post’s purposes, the burning question is: Is there any direct way to tell how likely a player’s statistical contributions were to have actually converted into team results?

This is an extremely difficult and intricate problem (though I am working on it), but it is easy enough to prove at least one way that a metric like PER gets it wrong: it treats all of the different components of player contribution linearly. In other words, one more point is worth one more point, whether it is the 15th point that a player scores or the 25th, and one more rebound is worth one more rebound, whether it is the 8th or the 18th. While this equivalency makes designing an all-in-one equation much easier (at least for now, my Secret Formula metric is also linear), it is ultimately just another empirically testable assumption.

I have theorized that one reason Rodman’s PER stats are so low compared to his differential stats is that PER punishes his lack of mediocre scoring, while failing to reward the extremeness of his rebounding.  This is based on the hypothesis that certain extreme statistics would be less “duplicable” than mediocre ones.  As a result, the difference between a player getting 18 rebounds per game vs. getting 16 per game could be much greater than the difference between them getting 8 vs. getting 6.  Or, in other words, the marginal value of rebounds would (hypothetically) be increasing.

Using win percentage differentials, this is a testable theory. Just as we can correlate an individual player’s statistics to the win differentials of his team, we can also correlate hypothetical statistics the same way.  So say we want to test a metric like rebounds, except one that has increasing marginal value built in: a simple way to approximate that effect is to make our metric increase exponentially, such as using rebounds squared. If we need even more increasing marginal value, we can try rebounds cubed, etc.  And if our metric has several different components (like PER), we can do the same for the individual parts:  the beauty is that, at the end of the day, we can test—empirically—which metrics work and which don’t.

For those who don’t immediately grasp the math involved, I’ll go into a little detail: A linear relationship is really just an exponential relationship with an exponent of 1.  So let’s consider a toy metric, “PR,” which is calculated as follows: Points + Rebounds.  This is a linear equation (exponent = 1) that could be rewritten as follows: (Points)^1 + (Rebounds)^1.  However, if, as above, we thought that both points and rebounds should have increasing marginal values, we might want to try a metric (call it “PRsq”) that combined points and rebounds squared, as follows:  (Points)^2 + (Rebounds)^2.  And so on.  Here’s an example table demonstrating the increase in marginal value:

[Table: marginal values of points and rebounds under PR vs. PRsq]
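The toy metrics above are easy to compute directly; note how PRsq makes the 17th rebound worth far more than the 7th, while PR treats them identically:

```python
def pr(points, rebounds):
    return points ** 1 + rebounds ** 1    # linear: exponent 1

def prsq(points, rebounds):
    return points ** 2 + rebounds ** 2    # increasing marginal value

# Marginal value of one additional rebound, holding points fixed at 10:
pr_low    = pr(10, 7) - pr(10, 6)         # 1
pr_high   = pr(10, 17) - pr(10, 16)       # 1 (constant margin)
prsq_low  = prsq(10, 7) - prsq(10, 6)     # 13
prsq_high = prsq(10, 17) - prsq(10, 16)   # 33 (growing margin)
```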

The fact that each different metric leads to vastly different magnitudes of value is irrelevant: for predictive purposes, the total value for each component will be normalized — the relative value is what matters (just as “number of pennies” and “number of quarters” are equally predictive of how much money you have in your pocket).  So applying this concept to an even wider range of exponents for several relevant individual player statistics, we can empirically examine just how “exponential” each statistic really is:

[Figure: correlation vs. exponent curves for each player statistic]

For this graph, I looked at each of the major rate metrics (plus points per game) individually. So, for each player-season in my (1986-) sample, I calculated the number of points, points squared, points cubed, and so on up to points to the 10th power, and then correlated all of these to that player’s win percentage differential. From those calculations, we can find roughly how much the marginal value for each metric increases, based on what exponent produces the best correlation: The smaller the number at the peak of the curve, the more linear the metric is—the higher the number, the more exponential (i.e., extreme values are that much more important). When I ran this computation, the relative shape of each curve fit my intuitions, but the magnitudes surprised me: That is, many of the metrics turned out to be even more exponential than I would have guessed.

As I know this may be confusing to many of my readers, I need to be absolutely clear:  the shape of each curve has nothing to do with the actual importance of each metric.  It only tells us how much that particular metric is sensitive to very large values.  E.g., the fact that Blocks and Assists peak on the left and sharply decline doesn’t make them more or less important than any of the others, it simply means that having 1 block in your scoreline instead of 0 is relatively just as valuable as having 5 blocks instead of 4.  On the other extreme, turnovers peak somewhere off the chart, suggesting that turnover rates matter most when they are extremely high.

For now, I’m not trying to draw a conclusive picture about exactly what exponents would make for an ideal all-in-one equation (polynomial regressions are very, very tricky, though I may wade into those difficulties more in future blog posts). But as a minimum outcome, I think the data strongly supports my hypothesis: that many stats—especially rebounds—are exponential predictors. Thus, I mean this less as a criticism of PER than as an explanation of why it undervalues players like Dennis Rodman.

Gross, and Points

In subsection (i), I concluded that “gross points” as a metric for player valuation had two main flaws: gross, and points. Superficially, PER responds to both of these flaws directly: it attempts to correct the “gross” problem both by punishing bad shots, and by adjusting for pace and minutes. It attacks the “points” problem by adding rebounds, assists, blocks, steals, and turnovers. The problem is, these “solutions” don’t match up particularly well with the problems “gross” and “points” present.

The problem with the “grossness” of points certainly wasn’t minutes (note: for historical comparisons, pace adjustments are probably necessary, but the jury is still out on the wisdom of doing the same on a team-by-team basis within a season). The main problem with “gross” was shooting efficiency: If someone takes a bunch of shots, they will eventually score a lot of points. But scoring points is just another thing that players do that may or may not help their teams win. PER attempted to account for this by punishing missed shots, but didn’t go far enough. The original problem with “gross” persists: As discussed above, taking shots helps your rating, whether they are good shots or not.

As for “points”: in addition to any problems created by having arbitrary (non-empirical) and linear coefficients, the strong bias towards shooting causes PER to undermine its key innovation—the incorporation of non-point components. This “bias” can be represented visually:

[Figure: pie chart of the relative importance of PER’s components]

Note: This data comes from a regression to PER including each of the rate stats corresponding to the various components of PER.

This pie chart is based on a linear regression including rate stats for each of PER’s components. Strictly, what it tells us is the relative value of each factor in predicting PER if each of the other factors were known. Thus, the “usage” section of this pie represents the advantage gained by taking more shots—even if all your other rate stats were fixed. Or, in other words, pure bias (note that the number of shots a player takes is almost as predictive as his shooting ability).

For fun, let’s compare that pie to the exact same regression run on Points Per Game rather than PER:

[Figure: pie chart of the same regression run on Points Per Game]

Note: These would not be the best variables to select if you were actually trying to predict a player’s Points Per Game.  Note also that “Usage” in these charts is NOT like “Other”—while other variables may affect PPG, and/or may affect the items in this regression, they are not represented in these charts.

Interestingly, Points Per Game was already somewhat predictable by shooting ability, turnovers, defensive rebounding, and assists. While I hesitate to draw conclusions from the aesthetic comparison, we can guess at why PER doesn’t beat PPG as significantly as we might expect: it appears to share much of the same DNA. (My more wild and ambitious thoughts suspect that these similarities reflect the strength of our broader pro-points bias: even when designing an All-in-One statistic, Hollinger’s linear, non-empirical, a priori coefficients still mostly reflect the conventional wisdom about the importance of many of the factors, as reflected in the way that they relate directly to points per game.)

I could make a similar pie-chart for Win% differential, but I think it might give the wrong impression: these aren’t even close to the best set of variables to use for that purpose.  Suffice it to say that it would look very, very different (for an imperfect picture of how much so, you can compare to the values in the Relative Importance chart above).

Conclusions

The deeper irony with PER is not just that it could theoretically be better, but that it adds many levels of complexity to the problem it purports to address, ultimately failing in strikingly similar ways. It has been dressed up around the edges with various adjustments for team and league pace, incorporation of league averages to weight rebounds and the value of possession, etc. This is, as the saying goes, like putting lipstick on a pig. The energy that Hollinger has spent on dressing up his model could have been better spent rethinking its core.

In my estimation, this pattern persists among many extremely smart people who generate innovative models and ideas: once created, they spend most of their time—entire careers, even—doing the following, in order: 1) defending it, 2) applying it to new situations, and 3) tweaking it. This happens in just about every field: hard and soft sciences, economics, history, philosophy, even literature. Give me an academic who creates an interesting and meaningful model, and then immediately devotes their best efforts to tearing it apart! In all my education, I have had perhaps two professors who embraced this approach, and I would rank both among my very favorites.

This post and the last were admittedly relatively light on Rodman-specific analysis, but that will change with a vengeance in the next two.  Stay tuned.


Update (5/13/11): Commenter “Yariv” correctly points out that an “exponential” curve is technically one in the form y^x (such as 2^x, 3^x, etc), where the increasing marginal value I’m referring to in the “Linearity” section above is about terms in the form x^y (e.g., x^2, x^3, etc), or monomial terms with an exponent not equal to 1.  I apologize for any confusion, and I’ll rewrite the section when I have time.

The Aesthetic Case Against 18 Games

By most accounts, the NFL’s plan to expand the regular season from 16 to 18 games is a done deal.  Indulge me for a moment as I take off my Bill-James-Wannabe cap and put on my dusty old Aristotle-Wannabe kausia:  In addition to various practical drawbacks, moving to 18 games risks disturbing the aesthetic harmony—grounded in powerful mathematics—inherent in the 16 game season.

Analytically, it is easy to appreciate the convenience of having the season break down cleanly into 8-game halves and 4-game quarters.  Powers of 2 like this are useful and aesthetically attractive: after all, we are symmetrical creatures who appreciate divisibility.  But we have a possibly even more powerful aesthetic attachment to certain types of asymmetrical relationships:  Mozart’s piano concertos aren’t divided into equally-sized beginnings, middles and ends.  Rather, they are broken into exposition, development, and recapitulation—each progressively shorter than the last.

Similarly, the 16 game season can fairly cleanly be broken into 3 or 4 progressively shorter but more important sections. Using roughly the same proportions that Mozart would, the first 10 games (“exposition”) would set the stage and reveal who we should be paying attention to; the next 3-4 games (“development”) would be where the race for playoff positioning really begins in earnest; and the final 2-3 weeks (“recapitulation”) would be where hopes are realized and hearts are broken—including the final weekend, when post-season fates are settled. Now, let’s represent the season as a rectangle with sides 16 (the length of the season) and 10 (the length of the “exposition”), broken down into successively smaller squares representing each section:

[Figure: 16 x 10 rectangle broken into golden-ratio squares]

Note: The “last” game gets the leftover space, though if the season were longer we could obviously keep going.

At this point many of you probably know where this is going: The ratio of each square to all of the smaller pieces combined is roughly constant, corresponding to the “divine proportion,” which is practically ubiquitous in classical music, as well as in everything from book and movie plots to art and architecture to fractal geometry to unifying theories of “all animate and inanimate systems.”  Here it is again (incredibly clumsily sketched) in the more recognizable spiral form:

[Figure: hand-sketched golden spiral]

The golden ratio is represented in mathematics by the irrational constant phi, which is:

1.6180339887…

Which, when divided into 1, gets you:

.6180339887…

Beautiful, right? So the roughly 10/4/1/1 breakdown above is really just 16 multiplied by 1/phi, with the remainder multiplied by 1/phi, etc.—9.9, 3.8, 1.4, .9—rounded to the nearest game.  Whether this corresponds to your thinking about the relative significance of each portion of the season is admittedly subjective.  But this is an inescapably powerful force in aesthetics (along with symmetry and symbols of virility and fertility), and can be found in places most people would never suspect, including in professional sports.  Let’s consider some anecdotal supporting evidence:

  • The length of a Major League Baseball season is 162 games.  Not 160, but 162.  That should look familiar.
  • Both NBA basketball and NHL hockey have 82-game seasons, or roughly half-phi.  Note 81 games would be impractical, because of the need for an equal number of home and road games (but bonus points if you’ve ever felt like the NBA season was exactly 1 game too long).
  • The “exposition” portion of a half-phi season would be 50 games.  The NHL and NBA All-Star breaks both take place right around game 50, or a little later, each year.
  • Though still solidly in between 1/2 and 2/3 of the way through the season, MLB’s “Midsummer Classic” usually takes place slightly earlier, around game 90 (though I might submit that the postseason crunch doesn’t really start until after teams build a post-All Star record for people to talk about).
  • The NFL bye weeks typically end after week 10.
  • Fans and even professional sports analysts are typically inclined to value “clutch” players—i.e., those who make their bones in the “Last” quadrant above—way more than a non-aesthetic analytical approach would warrant.

Etc.

So fine, say you accept this argument about how people observe sports. Your next question may be: well, what’s wrong with 18 games? Any number of games can be divided into phi-sized quadrants, right?  Well, the answer is basically yes, it can, but it’s not pretty:

[Figure: golden-section breakdown of an 18-game season]

The numbers 162, 82, and 16 all share a couple of nice qualities: first, they are all roughly divisible by 4, so you have nice clean quarter-seasons.  Second, they each have aesthetically pleasing “exposition” periods: 100 games in MLB, 50 in the NBA and NHL, and 10 in the NFL.  The “exposition” period in an 18-game season would be 11 games.  Yuck!  These season-lengths balance our competing aesthetic desires for the harmony of symmetry and the excitement of asymmetry.  We like our numbers round, but not too round.  We want them dynamic, but workable.
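The season breakdowns above can be generated mechanically; this is just the repeated multiply-by-1/phi-and-round procedure described earlier:

```python
PHI = (1 + 5 ** 0.5) / 2   # 1.6180339887...

def season_sections(games, parts=4):
    # Repeatedly peel off 1/phi of the remaining games, rounding each
    # section to the nearest game; the last section gets the leftover.
    sections, remaining = [], games
    for _ in range(parts - 1):
        section = remaining / PHI
        sections.append(round(section))
        remaining -= section
    sections.append(round(remaining))
    return sections

sixteen = season_sections(16)    # [10, 4, 1, 1]
eighteen = season_sections(18)   # [11, 4, 2, 1] -- an 11-game "exposition"
```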

Finally, as to why the NFL should care about vague aesthetic concerns that it takes a mathematician to identify, I can only say: I don’t think these patterns would be so pervasive in science, art, and in broader culture if they weren’t really important to us, whether we know it or not.  Human beings are symmetrical down the middle, but as some guy in Italy noticed, golden rectangles are not only woven into our design, but into the design of the things we love.  Please, NFL, don’t take that away from us.

C.R.E.A.M. (Or, “How to Win a Championship in Any Sport”)

Does cash rule everything in professional sports?  Obviously it keeps the lights on, and it keeps the best athletes in fine bling, but what effect does the root of all evil have on the competitive bottom line—i.e., winning championships?

For this article, let’s consider “economically predictable” a synonym for “Cash Rules”:  I will use extremely basic economic reasoning and just two variables—presence of a salary cap and presence of a salary max in a sport’s labor agreement—to establish, ex ante, which fiscal strategies we should expect to be the most successful.  For each of the 3 major sports, I will then suggest (somewhat) testable hypotheses, and attempt to examine them.  If the hypotheses are confirmed, then Method Man is probably right—dollar dollar bill, etc.

Conveniently, on a basic yes/no grid of these two variables, our 3 major sports in the U.S. fall into 3 different categories:

image

So before treating those as anything but arbitrary arrangements of 3 letters, we should consider the dynamics each of these rules creates independently.  If your sport has a team salary cap, getting “bang for your buck” and ferreting out bargains is probably more important to winning than overall spending power.  And if your sport has a low maximum individual salary, your ability to obtain the best possible players—in a market where everyone knows their value but must offer the same amount—will also be crucial.  Considering permutations of thriftiness and non-economic acquisition ability, we end up with a simple ex ante strategy matrix that looks like this:

image

These one-word commandments may seem overly simple—and I will try to resolve any ambiguity looking at the individual sports below—but they are only meant to describe the most basic and obvious economic incentives that salary caps and salary maximums should be expected to create in competitive environments.

Major League Baseball: Spend

Hypothesis:  With free-agency, salary arbitration, and virtually no payroll restrictions, there is no strategic downside to spending extra money.  Combined with huge economic disparities between organizations, this means that teams that spend the most will win the most.

Analysis:  Let’s start with the New York Yankees (shocker!), who have been dominating baseball since 1920, when they got Babe Ruth from the Red Sox for straight cash, homey.  Note that I take no position on whether the Yankees’ filthy lucre is destroying the sport of baseball, etc.  Also, I know very little about the Yankees’ payroll history prior to 1988 (the earliest the USA Today database goes).  But I did come across this article from several years ago, which looks back as far as 1977.  For a few reasons, I think the author understates the case.  First, the Yankees’ low-salary period came at the tail end of a 12-year playoff drought (I don’t have the older data to manipulate, but I took the liberty of doodling on his original graph):

image

Note: Smiley-faces are Championship seasons.  The question mark is for the 1994 season, which had no playoffs.

Also, as a quirk that I’ve discussed previously, I think including the Yankees in the sample from which the standard deviation is drawn can be misleading: they have frequently been such a massive outlier that they’ve set their own curve.  Comparing the Yankees to the rest of the league, from last season back to 1988, looks like this:

image

Note: Green are Championship seasons.  Red are missed playoffs.

In 2005 the rest-of-league average payroll was ~$68 million, and the Yankees’ was ~$208 million (the rest-of-league standard deviation was $23m, but including the Yankees, it would jump to $34m).

While they failed to win the World Series in some of their most expensive seasons, don’t let that distract you:  money can’t guarantee a championship, but it definitely improves your chances.  The Yankees have won roughly a quarter of the championships over the last 20 years (which is, astonishingly, below their average since the Ruth deal).  But it’s not just them.  Many teams have dramatically increased their payrolls in order to compete for a World Series title—and succeeded! Over the past 22 years, the top 3 payrolls (per season) have won a majority of titles:

image

As they make up only 10% of the league, this means that the most spendy teams improved their title chances, on average, by almost a factor of 6.

National Basketball Association: Recruit (Or: “Press Your Bet”)

Hypothesis:  A fairly strict salary cap reins in spending, but equally strict salary regulations mean many teams will enjoy massive surplus value by paying super-elite players “only” the max.  Teams that acquire multiple such players will enjoy a major championship advantage.

Analysis: First, in case you were thinking that the 57% in the graph above might be caused by something other than fiscal policy, let’s quickly observe how the salary cap kills the “spend” strategy:

image

Payroll information from USA Today’s NBA and NFL Salary Databases (incidentally, this symmetry is being threatened, as the Lakers, Magic, and Mavericks have the top payrolls this season).

I will grant there is a certain apples-to-oranges comparison going on here: the NFL and NBA salary-cap rules are complex and allow for many distortions.  In the NFL teams can “clump” their payroll by using pro-rated signing bonuses (essentially sacrificing future opportunities to exceed the cap in the present), and in the NBA giant contracts are frequently moved to bad teams that want to rebuild, etc.  But still: 5%.  Below expectation if championships were handed out randomly.

And basketball championships are NOT handed out randomly.  My hypothesis predicts that championship success will be determined by who gets the most windfall value from their star player(s).  Fifteen of the last 20 NBA championships have been won by Kobe Bryant, Tim Duncan, or Michael Jordan.  Clearly star-power matters in the NBA, but what role does salary play in this?

Prior to 1999, the NBA had no salary maximum, though salaries were regulated and limited in a variety of ways.  Teams had extreme advantages signing their own players (such as Bird rights), but lack of competition in the salary market mostly kept payrolls manageable.  Michael Jordan famously signed a lengthy $25 million contract extension basically just before star player salaries exploded, leaving the Bulls with the best player in the game for a song (note: Hakeem Olajuwon’s $55 million payday came after he won 2 championships as well).  By the time the Bulls were forced to pay Jordan his true value, they had already won 4 championships and built a team around him that included 2 other All-NBA caliber players (including one who also provided extreme surplus value).  Perhaps not coincidentally, year 6 in the graph below is their record-setting 72-10 season:

image

Note: Michael Jordan’s salary info found here.  Historical NBA salary cap found here.

The star player salary situation caught the NBA off-guard.  Here’s a story from Time magazine in 1996 that quotes league officials and executives:

“It’s a dramatic, strategic judgment by a few teams,” says N.B.A. deputy commissioner Russ Granik. . . .
Says one N.B.A. executive: “They’re going to end up with two players making about two-thirds of the salary cap, and another pair will make about 20%. So that means the rest of the players will be minimum-salary players that you just sign because no one else wants them.” . . .
Granik frets that the new salary structure will erode morale. “If it becomes something that was done across the league, I don’t think it would be good for the sport,” he says.

What these NBA insiders are explaining is basic economics:  Surprise!  Paying better players big money means less money for the other guys.  Among other factors, this led to 2 lockouts and the prototype that would eventually lead to the current CBA (for more information than you could ever want about the NBA salary cap, here is an amazing FAQ).

That the best players in the NBA are now underpaid relative to their value is certain.  As a back-of-the-envelope calculation:  there are 5 players each year who are All-NBA 1st team, while 30+ players each season are paid roughly the maximum.  So how valuable are All-NBA 1st team players compared to the rest?  Let’s start with: How likely is an NBA team to win a championship without one?

image

In the past 20 seasons, only the 2003-2004 Detroit Pistons won the prize without a player who was a 1st-Team All-NBAer in their championship year.

To some extent, these findings are hard to apply strategically.  All but those same Pistons had at least one home-grown All-NBA (1st-3rd team) talent—to win, you basically need the good fortune to catch a superstar in the draft.  If there is an actionable take-home, however, it is that most (12/20) championship teams have also included a second All-NBA talent acquired through trade or free agency: the Rockets won after adding Clyde Drexler, the second Bulls 3-peat added Dennis Rodman (All-NBA 3rd team with both the Pistons and the Spurs), the Lakers and Heat won after adding Shaq, the Celtics won with Kevin Garnett, and the Lakers won again after adding Pau Gasol.

Each of these players was/is worth more than their market value, in most cases as a result of the league’s maximum salary constraints.  Also, in most of these cases, the value of the addition was well known to the league, but the inability of teams to outbid each other meant that basketball money was not the determining factor in the players choosing their respective teams.  My “Recruit” strategy anticipated this – though it perhaps understates the relative importance of your best player being the very best.  This is more a failure of the “Recruit” label than of the ex ante economic intuition, the whole point of which was that cap + max –> massive importance of star players.

National Football League: Economize (Or: “WWBBD?”)

Hypothesis:  The NFL’s strict salary cap and lack of contract restrictions should nullify both spending and recruiting strategies.  With elite players paid closer to what they are worth, surplus value is harder to identify.  We should expect the most successful franchises to demonstrate both cunning and wise fiscal policy.

Analysis: Having a cap and no max salaries is the most economically efficient fiscal design of any of the 3 major sports.  Thus, we should expect massively dominating strategies to be much harder to identify.  Indeed, the dominant strategies in the other sports are seemingly ineffective in the NFL: as demonstrated above, there seems to be little or no advantage to spending the most, and the abundant variance in year-to-year team success in the NFL would seem to rule out the kind of individual dominance seen in basketball.

Thus, to investigate whether cunning and fiscal sense are predominant factors, we should imagine what kinds of decisions a coach or GM would make if his primary qualities were cunning and fiscal sensibility.  In that spirit, I’ve come up with a short list of 5 strategies that I think are more or less sound, and that are based largely on classically “economic” considerations:

1.  Beg, borrow, or steal yourself a great quarterback:
Superstar quarterbacks are probably underpaid—even with their monster contracts—thus making them a good potential source for surplus value.  Compare this:

Note: WPA (wins added) stats from here.

With this:

The obvious caveat here is that the entanglement question is still empirically open:  How much do good QB’s make their teams win v. How much do winning teams make their QB’s look good?  But really quarterbacks only need to be responsible for a fraction of the wins reflected in their stats to be worth more than what they are being paid. (An interesting converse, however, is this: the fact that great QB’s don’t win championships with the same regularity as, say, great NBA players, suggests that a fairly large portion of the “value” reflected by their statistics is not their responsibility).

2. Plug your holes with the veteran free agents that nobody wants, not the ones that everybody wants:
If a popular free agent intends to go to the team that offers him the best salary, his market will act substantially like a “common value” auction.  Thus, beware the Winner’s Curse. In simple terms: If 1) a player’s value is unknown, 2) each team offers what they think the player is worth, and 3) each team is equally likely to be right; then: 1) The player’s expected value will correlate with the average bid, and 2) the “winning” bid probably overpaid.

Moreover, even if the winner’s bid is exactly right, that just means they will have successfully gained nothing from the transaction.  Assuming equivalent payrolls, the team with the most value (greatest chance of winning the championship) won’t be the one that pays the most correct amount for its players, it will—necessarily—be the one that pays the least per unit of value.  To accomplish this goal, you should avoid common value auctions as much as possible!  In free agency, look for the players with very small and inefficient markets (for which #3 above is least likely to be true), and then pay them as little as you can get away with.
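The winner’s-curse dynamic described above is easy to simulate.  In this sketch (all numbers hypothetical), every team’s bid is an unbiased estimate of the player’s true value, yet the winning bid still overpays on average:

```python
import random

random.seed(0)
TRUE_VALUE = 10.0   # player's true worth in $M/year (hypothetical)
N_TEAMS = 8         # bidders, each with an unbiased but noisy estimate

overpays = []
for _ in range(10_000):
    bids = [random.gauss(TRUE_VALUE, 2.0) for _ in range(N_TEAMS)]
    overpays.append(max(bids) - TRUE_VALUE)  # winner's bid vs. true value

avg_overpay = sum(overpays) / len(overpays)
# Each individual bid is fair on average, but the *highest* of 8
# fair bids systematically overshoots the true value.
```

With these toy parameters the winner overpays by several million per year on average, even though no single team’s estimate is biased; that is the whole curse.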

3. Treat your beloved veterans with cold indifference.
If a player is beloved, they will expect to be paid.  If they are not especially valuable, they will expect to be paid anyway, and if they are valuable, they are unlikely to settle for less than they are worth.  If winning is more important to you than short-term fan approval, you should be both willing and prepared to let your most beloved players go the moment they are no longer a good bargain.

4. Stock up on mid-round draft picks.
Given the high cost of signing 1st round draft picks, 2nd round draft picks may actually be more valuable.  Here is the crucial graph from the Massey-Thaler study of draft pick value (via Advanced NFL Stats):

image

The implications of this outcome are severe.  All else being equal, if someone offers you an early 2nd round draft pick for your early 1st round draft pick, they should be demanding compensation from you (of course, marginally valuable players have diminishing marginal value, because you can only have/play so many of them at a time).

5. When the price is right: Gamble.

This rule applies to fiscal decisions, just as it does to in-game ones.  NFL teams are notoriously risk-averse in a number of areas: they are afraid that a player coming off one down season is washed up, or that an outspoken player will ‘disrupt’ the locker room, or that a draft pick might have ‘character issues’.  These sorts of questions regularly lead to lengthy draft slides and dried-up free agent markets.  And teams are right to be concerned: these are valid possibilities that increase uncertainty.  Of course, there are other possibilities: your free agent target simply may not be as good as you hope, or your draft pick may simply bust out.  Compare to late-game 4th-down decisions: going for it and failing can cost you the game immediately and bring a maelstrom of criticism from fans and press, while punting or kicking may quietly lead to losing more often.  Similarly, when a team takes a high-profile personnel gamble and it fails, it faces that same maelstrom, while the less controversial choice might quietly lead to more failure.

The economizing strategy here is to favor risks when they are low cost but have high upsides.  In other words, don’t risk a huge chunk of your cap space on an uncertain free agent prospect, risk a tiny chunk of your cap space on an even more uncertain prospect that could work out like gangbusters.
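The expected-value logic behind this strategy can be made concrete.  In this sketch (all dollar figures and probabilities are hypothetical, chosen only to illustrate the shape of the argument), a cheap long-shot carries more expected surplus than a safe, fairly priced veteran:

```python
# Hypothetical cap dollars ($M) and outcomes; illustrative only.
safe_cost, safe_value = 8.0, 8.5        # reliable vet: small, near-certain surplus
risky_cost = 1.0                        # cheap flier on an uncertain prospect
risky_value, hit_rate = 12.0, 0.15      # usually busts, huge value when it hits

safe_surplus = safe_value - safe_cost                 # 0.5 expected surplus
risky_surplus = hit_rate * risky_value - risky_cost   # 1.8 - 1.0 = 0.8
# The gamble "fails" 85% of the time, yet it is the better expected play.
```

The design point is that the downside is capped at a tiny fraction of payroll while the upside is not, which is exactly why the risk-averse team leaves value on the table.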

Evaluation:

Now, if only there were a team and coach dedicated to these principles—or at least, for contrapositive’s sake, a team that seemed to embrace the opposite.

Oh wait, we have both!  In the last decade, Bill Belichick and the New England Patriots have practically embodied these principles, and in the process they’ve won 3 championships, had a 16-0/18-1 season, set the overall NFL win-streak records, and are presently the #1 overall seed in this year’s playoffs. OTOH, the Redskins have practically embodied the opposite, and they have… um… not.

Note that the Patriots’ success has come despite a league fiscal system that allows teams to “load up” on individual seasons, distributing the cost onto future years (which, again, helps explain the extreme regression effect present in the NFL).  Considering the long odds of winning a Super Bowl—even with a solid contender—this seems like an unwise long-run strategy, and the most successful team of this era has cleverly taken the long view throughout.

Conclusions

The evidence in MLB and in the NBA is ironclad: Basic economic reasoning is extremely probative when predicting the underlying dynamics behind winning titles.  Over the last 20 years of pro baseball, the top 3 spenders in the league each year win 57% of the championships.  Over a similar period in basketball, the 5 (or fewer) teams with 1st-Team All-NBA players have won 95%.

In the NFL, the evidence is more nuance and anecdote than absolute proof.  However, our ex ante musing does successfully predict that neither excessive spending nor recruiting star players at any cost (excepting possibly quarterbacks) is a dominant strategy.

On balance, I would say that the C.R.E.A.M. hypothesis is substantially more supported by the data than I would have guessed.

The Case for Dennis Rodman, Part 2/4 (a)(i)—Player Valuation and Conventional Wisdom

Dennis Rodman is a – perhaps the – classic hard case for serious basketball valuation analysis.  The more you study him, the more you are forced to engage in meta-analysis: that is, examining the advantages and limitations of the various tools in the collective analytical repertoire.  Indeed, it’s even more than a hard case; it’s an extremely important one: it is in just these conspicuously difficult situations that reliable analytical insight could be most useful, yet depending on which metric you choose, Rodman is either a below-average NBA player or one of the greatest of all time.  Moreover, while Rodman may be an “extreme” of sorts, this isn’t Newtonian physics: the problems with player valuation modeling that his case helps reveal – in both conventional and unconventional forms – apply very broadly.

This section will use Dennis Rodman as a case study for my broader critique of both conventional and unconventional player valuation methods.  Sub-section (i) introduces my criticism and deals with conventional wisdom, and sub-section (ii) deals with unconventional wisdom and beyond.  Section (b) will then examine how valuable Rodman was specifically, and why.  Background here, here, here, here, and here.

First – A Quick Meta-Critique:

Why is it that so many sports-fans pooh-pooh advanced statistical analysis, yet, when making their own arguments, spout nothing but statistics?

  • [So-and-so] scored 25 points per game last season, solidifying their position in the NBA elite.
  • [Random QB] had ten 3000-yard passing seasons, he is sooo underrated.
  • [Player x]’s batting average is down 50 points, [team y] should trade him while they still can.

Indeed, the vast majority of people are virtually incapable of making sports arguments that aren’t stats-based in one way or another.  Whether he knows it or not, Joe Average is constantly learning and refining his preferred models, which he then applies to various problems, for a variety of purposes — not entirely unlike Joe Academic.  Yet chances are he remains skeptical of the crazy-talk he hears from the so-called “statistical experts” — and there is truth to this skepticism: a typical “fan” model is extremely flexible, takes many more variables from much more diverse data into account, and ultimately employs a very powerful neural network to arrive at its conclusions.  Conversely, the “advanced” models are generally rigid, naïve, over-reaching, hubristic, prove much less than their creators believe, and claim even more.  Models are to academics like screenplays are to Hollywood waiters: everyone has one, everyone thinks theirs is the best, and most of them are garbage.  The broad reliability of “common sense” over time has earned it the benefit of the doubt, despite its high susceptibility to bias and its abundance of easily-provable errors.

The key is this: While finding and demonstrating such error is easy enough, successfully doing so should not – as it so often does – lead one (or even many) to presume that it qualifies them to replace that wisdom, in its entirety, with their own.

I believe something like this happened in the basketball analytic community:  reacting to the manifest error in conventional player valuation, the statisticians have failed to recognize the main problem – one which I will show actually limits their usefulness – and instead have developed an “unconventional” wisdom that ultimately makes many of the same mistakes.

Conventional Wisdom – Points, Points, Points:

The standard line among sports writers and commentators today is that Dennis Rodman’s accomplishments “on the court” would easily be sufficient to land him in the Hall of Fame, but that his antics “off the court” may give the voters pause.  This may itself be true, but it is only half the story:  If, in addition to his other accomplishments, Rodman had scored 15 points a game, I don’t think we’d be having this discussion, or really even close to having this discussion (note, this would be true whether or not those 15 points actually helped his teams in any way).  This is because the Hall of Fame reflects the long-standing conventional wisdom about player valuation: that points (especially per game) are the most important measure of a player’s (per game) contribution.

Whether most people would explicitly endorse this proposition or not, it is still reflected in systematic bias.  The story goes something like this:  people watch games to see the players do cool things, like throw a ball from a long distance through a tiny hoop, and experience pleasure when it happens.  Thus, because pleasure is good, they begin to believe that those players must be the best players, which is then reinforced by media coverage that focuses on point totals, best dunks and plays of the night, scoring streaks, scoring records, etc.  This emphasis makes them think these must also be the most important players, and when they learn about statistics, that’s where they devote their attention.

Everyone knows about Kobe’s 81 points in a game, but how many people know about Scott Skiles’s 30 assists? or Charles Oakley’s 35 rebounds? or Rodman’s 18 offensive boards? or Shaq’s 15 blocks?  Many fans even know that Mark Price is the all-time leader in free throw percentage, or that Steve Kerr is the all-time leader in 3-point percentage, but most have never even heard of rebound percentage, much less assist percentage or block percentage.  And, yes, for those who vote for the Hall of Fame, it is also reflected in their choices.  Thus, before dealing with any fall-out for his off-court “antics,” the much bigger hurdle to Dennis Rodman’s induction looks like this:

image

This list is the bottom-10 per-game scorers (of players inducted within 25 years of their retirement).  If Rodman were inducted, he would be the single lowest point-scorer in HoF history.  And looking at the bigger picture, it may even be worse than that.  Here’s a visual of all 89 Hall of Famers with stats (regardless of induction time), sorted from most points to fewest:

image

So not only would he be the lowest point scorer, he would actually have significantly fewer points than a (linear) trend-line would predict the lowest point scorer to have (and most of the smaller bars just to the left of Rodman were Veterans Committee selections).  Thus, if historical trends reflect the current mood of the HoF electorate, resistance is to be expected.

The flip-side, of course, is the following:

image

Note: this graphic only contains the players for whom this stat is available, though, as I demonstrated previously, there is no reason to believe that earlier players were any better.

Clearly, my first thought when looking at this data was, “Who the hell is this guy with a TRB% of only 3.4?”  That’s only 1 out of every *30* rebounds!*  The league average is (obviously) 1 out of 10.  Muggsy Bogues — the shortest player in the history of the NBA (5’3”) — managed to pull in 5.1%, about 1 out of every 20.  On the other side, of course, Rodman would pace the field by a wide margin – wider, even, than the gap between Jordan/Chamberlain and the field for scoring (above).  Of course, the Hall of Fame traditionally doesn’t care that much about rebounding percentages:

image

So, of eligible players, 24 of the top 25 leaders in points per game are presently in the Hall (including the top 19 overall), while only 9 of the top 25 leaders in total rebound percentage can say the same.  This would be perfectly rational if, say, PPG was way way more important to winning than TRB%.  But this seems unlikely to me, for at least two reasons: 1) As a rate stat, TRB% shouldn’t be affected significantly by game or team pace, as PPG is; and 2) TRB% has consequences on both offense and defense, whereas PPG is silent about the number of points the player/team has given up.  To examine this question, I set up a basic correlation of team stats to team winning percentage for the set of every team season since the introduction of the 3-point shot.  Lo and behold, it’s not really close:

image

Yes, correlation does not equal causation, and team scoring and rebounding are not the same as individual scoring and rebounding.  This test isn’t meant to prove conclusively that rebounding is more important than scoring, or even gross scoring — though, at the very least, I do think it strongly undermines the necessity of the opposite: that is, the assumption that excellence in gross point-scoring is indisputably more significant than other statistical accomplishments.

Though I don’t presently have the data to confirm, I would hypothesize (or, less charitably, guess) that individual TRB% probably has a more causative effect on team TRB% than individual PPG does on team PPG [see addendum] (note, to avoid any possible misunderstanding, I mean this only w/r/t PPG, not points-per-possession, or anything having to do with shooting percentages, true or otherwise).  Even with the proper data, this could be a fairly difficult hypothesis to test, since it can be hard to tell (directly) whether a player scoring a lot of points causes his team to score a lot of points, or vice versa.  However, that hypothesis seems to be at least partially supported by studies that others have conducted on rebound rates – especially on the offensive side (where Rodman obviously excelled).

The conventional wisdom regarding the importance of gross points is demonstrably flawed on at least two counts: gross, and points.  In sub-section (ii), I will look at how the analytical community attempted to deal with these problems, as well as at how they repeated them.
*(It’s Tiny Archibald)


Addendum (4/20/11):

I posted this as a Graph of the Day a while back, and thought I should add it here:
image

More info in the original post, but the upshot is that my hypothesis that “individual TRB% probably has a more causative effect on team TRB% than individual PPG does on team PPG” appears to be confirmed (the key word is “differential”).

UPDATE: Advanced NFL Stats Admits I Was Right. Sort Of.

Background:  In January, long before I started blogging in earnest, I made several comments on this Advanced NFL Stats post that were critical of Brian Burke’s playoff prediction model, particularly that, with 8 teams left, it predicted that the Dallas Cowboys had about the same chance of winning the Super Bowl as the Jets, Ravens, Vikings, and Cardinals combined. This seemed both implausible on its face and extremely contrary to contract prices, so I was skeptical.  In that thread, Burke claimed that his model was “almost perfectly calibrated. Teams given a 0.60 probability to win do win 60% of the time, teams given a 0.70 probability win 70%, etc.”  I expressed interest in seeing his calibration data, “especially for games with considerable favorites, where I think your model overstates the chances of the better team,” but did not get a response.

I brought this dispute up in my monstrously-long passion-post, “Applied Epistemology in Politics and the Playoffs,” where I explained how, even if his model was perfectly calibrated, it would still almost certainly be underestimating the chances of the underdogs.  But now I see that Burke has finally posted the calibration data (compiled by a reader from 2007 on).  It’s a very simple graph, which I’ve recreated here, with a trend-line for his actual data:

image

Now I know this is only 3+ years of data, but I think I can spot a trend:  for games with considerable favorites, his model seems to overstate the chances of the better team.  Naturally, Burke immediately acknowledges this error:

On the other hand, there appears to be some trends. the home team is over-favored in mismatches where it is the stronger team and is under-favored in mismatches where it is the weaker team. It’s possible that home field advantage may be even stronger in mismatches than the model estimates.

Wait, what? If the error were strictly based on stronger-than-expected home-field advantage, the red line should be above the blue line, as the home team should win more often than the model projects whether it is a favorite or not – in other words, the actual trend-line would be parallel to the “perfect” line but with a higher intercept.  Rather, what we see is a trend-line with what appears to be a slightly higher intercept but a somewhat smaller slope, creating an “X” shape, consistent with the model being least accurate for extreme values.  In fact, if you shifted the blue line slightly upward to “shock” for Burke’s hypothesized home-field bias, the “X” shape would be even more perfect: the actual and predicted lines would cross even closer to .50, while diverging symmetrically toward the extremes.

Considering that this error compounds exponentially in a series of playoff games, this data (combined with the still-applicable issue I discussed previously), strongly vindicates my intuition that the market is more trustworthy than Burke’s playoff prediction model, at least when applied to big favorites and big dogs.
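To illustrate the compounding point with hypothetical numbers: suppose the model gives a favorite a 0.70 chance per game when its true chance is 0.65.  Over a four-round playoff run, that 5-point per-game overstatement balloons into roughly a 35% overstatement of its title chances:

```python
model_p, true_p = 0.70, 0.65   # hypothetical per-game win probabilities
rounds = 4                     # wins needed for a championship

model_title = model_p ** rounds              # ~0.240
true_title = true_p ** rounds                # ~0.179
overstatement = model_title / true_title - 1 # ~0.345, i.e. ~35% too high
```

And the error cuts both ways: every point wrongly credited to the favorite is a point taken from the underdogs, whose already-small championship odds get squeezed proportionally even harder.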

Yes ESPN, Professional Kickers are Big Fat Chokers

A couple of days ago, ESPN’s Peter Keating blogged about “icing the kicker” (i.e., calling timeouts before important kicks, sometimes mere instants before the ball is snapped).  He argues that the practice appears to work, at least in overtime.  Ultimately, however, he concludes that his sample is too small to be “statistically significant.”  This may be one of the few times in history where I actually think a sports analyst underestimates the probative value of a small sample: as I will show, kickers are generally worse in overtime than they are in regulation, and practically all of the difference can be attributed to iced kickers.  More importantly, even with the minuscule sample Keating uses, their performance is so bad that it actually is “significant” beyond the 95% level.

In Keating’s 10 year data-set, kickers in overtime only made 58.1% of their 35+ yard kicks following an opponent’s timeout, as opposed to 72.7% when no timeout was called.  The total sample size is only 75 kicks, 31 of which were iced.  But the key to the analysis is buried in the spreadsheet Keating links to: the average length of attempted field goals by iced kickers in OT was only 41.87 yards, vs. 43.84 yards for kickers at room temperature.  Keating mentions this fact in passing, mainly to address the potential objection that perhaps the iced kickers just had harder kicks — but the difference is actually much more significant.
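As a rough sketch of the significance question (mine, not Keating’s, and ignoring the distance adjustment for the moment): treat the 72.7% non-iced make rate as the null hypothesis and ask how surprising the iced kickers’ 58.1% over 31 attempts would be if icing had no effect:

```python
from math import comb

n, p_null = 31, 0.727            # iced attempts; non-iced make rate as the null
made = round(0.581 * n)          # ~18 makes by iced kickers

# One-sided exact binomial: P(X <= made) under the null of no icing effect.
p_value = sum(comb(n, k) * p_null**k * (1 - p_null)**(n - k)
              for k in range(made + 1))
# Note: the iced kicks were also *shorter* on average, so the true
# discrepancy is larger than this naive test suggests.
```

Even this crude version puts the result in borderline-significant territory on its own; folding in the distance difference, as the analysis below does, is what pushes it over the line.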

To evaluate this question properly, we first need to look at made field goal percentages broken down by yard-line.  I assume many people have done this before, but in 2 minutes of googling I couldn’t find anything useful, so I used play-by-play data from 2000-2009 to create the following graph:

[Image: field goal percentage by distance, with logistic regression fit]

The blue dots indicate the overall field-goal percentage from each yard-line for every field goal attempt in the period (around 7500 attempts total – though I’ve excluded the one 76 yard attempt, for purely aesthetic reasons).  The red dots are the predicted values of a logistic regression (basically a statistical tool for predicting things that come in percentages) on the entire sample.  Note this is NOT a simple trend-line — it takes every data point into account, not just the averages.  If you’re curious, the corresponding equation (for predicted field goal percentage based on yard line x) is as follows:

\large{1 - \dfrac{e^{-5.5938+0.1066x}}{1+e^{-5.5938+0.1066x}}}
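For convenience, the fitted curve can be wrapped in a small function (a sketch using the coefficients from the equation above; the function name is mine):

```python
from math import exp

def fg_make_prob(yards):
    """Predicted field goal make probability at a given distance,
    using the logistic coefficients from the regression above."""
    z = -5.5938 + 0.1066 * yards
    return 1 - exp(z) / (1 + exp(z))  # algebraically equal to 1 / (1 + exp(z))

print(round(fg_make_prob(41.87), 3))  # 0.756 -- the iced kickers' average distance
```

Plugging in 41.87 yards (the average iced-kick distance) recovers the 75.6% expected conversion rate used in the binomial calculation later in this post.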

The first thing you might notice about the graph is that the predictions appear to be somewhat (perhaps unrealistically) optimistic about very long kicks.  There are a number of possible explanations for this, chiefly that there are comparatively few really long kicks in the sample, and beyond a certain distance the angle of the kick relative to the offensive and defensive linemen becomes a big factor that is not adequately reflected by the rest of the data (fortunately, this is not important for where we are headed).  The next step is to look at a similar graph for overtime only — since the sample is so much smaller, this time I’ll use a bubble-chart to give a better idea of how many attempts there were at each distance:

[Image: overtime field goal percentage by distance (bubble chart)]

For this graph, the sample is about 1/100th the size of the one above, and the regression line is generated from the OT data only.  As a matter of basic spatial reasoning — even if you’re not a math whiz — you may sense that this line is less trustworthy.  Nevertheless, let’s look at a comparison of the overall and OT-based predictions for the 35+ yard attempts only:

[Image: overall vs. overtime predicted conversion rates for 35+ yard attempts]

Note: These two lines are slightly different from their counterparts above.  To avoid bias created by smaller or larger values, and to match Keating’s sample, I re-ran the regressions using only 35+ yard distances that had been attempted in overtime (they turned out virtually the same anyway).

Comparing the two models, we can create a predicted “Choke Factor,” which is the percentage of the original conversion rate that you should knock off for a kicker in an overtime situation:

[Image: predicted “Choke Factor” by distance]

A weighted average (by the number of OT attempts at each distance) gives us a typical Choke Factor of just over 6%.  But take this graph with a grain of salt: the fact that it slopes upward so steeply is a result of the differing coefficients in the respective regression equations, and could certainly be a statistical artifact.  For my purposes however, this entire digression into overtime performance drop-offs is merely for illustration:  The main calculation relevant to Keating’s iced kick discussion is a simple binomial probability:  Given an average kick length of 41.87 yards, which carries a predicted conversion rate of 75.6%, what are the odds of converting only 18 or fewer out of 31 attempts?  OK, this may be a mildly tricky problem if you’re doing it longhand, but fortunately for us, Excel has a BINOM.DIST() function that makes it easy:
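For those without Excel handy, the same number falls out of a few lines of Python (a sketch; the 75.6% figure is the regression’s predicted conversion rate at 41.87 yards):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Chance that kickers with a 75.6% expected make rate would convert
# 18 or fewer of 31 attempts purely by chance (the post reports ~2.4%):
print(round(binom_cdf(18, 31, 0.756), 3))
```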

[Image: binomial probability calculation for OT kickers, iced vs. not iced]

Note: for people who might nitpick:  Yes, the predicted conversion rate at the average kick length is not going to be exactly the same as the average of the predicted rates for each kick’s length.  But it is very close, and close enough for our purposes.

As you can see, the OT kickers who were not iced actually did very slightly better than average, which means that all of the negative bias observed in OT kicking stems from the poor performance seen in just 31 iced kick attempts.  The probability of this result occurring by chance — assuming the expected conversion rate for OT iced kicks were equal to the expected conversion rate for kicks overall — would be only 2.4%.  Of course, this “probability of occurring by chance” is precisely what statistical significance measures, and since 95% against (i.e., less than a 5% chance of happening) is the typical threshold for people to make bold assertions, I think Keating’s statement that this “doesn’t reach the level of improbability we need to call it statistically significant” is unnecessarily humble.  Moreover, when I stated that the key to this analysis was the 2 yard difference that Keating glossed over, that wasn’t for rhetorical flourish:  if the length of the average OT iced kick had been the same as the length of the average OT regular kick, the 58.1% would correspond to a “by chance” probability of 7.6%, which obviously doesn’t make it under the magic number.

A History of Hall of Fame QB-Coach Entanglement

Last week on PTI, Dan LeBatard mentioned an interesting stat that I had never heard before: that 13 of 14 Hall of Fame coaches had Hall of Fame QB’s play for them.  LeBatard’s point was that he thought great quarterbacks make their coaches look like geniuses, and he was none-too-subtle about the implication that coaches get too much credit.  My first thought was, of course: Entanglement, anyone? That is to say, why should he conclude that the QB’s are making their coaches look better than they are instead of the other way around?  Good QB’s help their teams win, for sure, but winning teams also make their QB’s look good.  Thus – at best – LeBatard’s stat doesn’t really imply that HoF Coaches piggyback off of their QB’s success, it implies that the Coach and QB’s successes are highly entangled.  By itself, this analysis might be enough material for a tweet, but when I went to look up these 13/14 HoF coach/QB pairs, I found the history to be a little more interesting than I expected.

First, I’m still not sure exactly which 14 HoF coaches LeBatard was talking about.  According to the official website, there are 21 people in the HoF as coaches.  From what I can tell, 6 of these (Curly Lambeau, Ray Flaherty, Earle Neale, Jimmy Conzelman, Guy Chamberlain and Steve Owen) coached before the passing era, so that leaves 15 to work with.  A good deal of George Halas’s coaching career was pre-pass as well, but he didn’t quit until 1967 – 5 years later than Paul Brown – and he coached a Hall of Fame QB anyway (Sid Luckman).  Of the 15, 14 did indeed coach HoF QB’s, at least technically.

To break the list down a little, I applied two threshold tests:  1) Did the coach win any Super Bowls (or league championships before the SB era) without their HoF QB?  And 2) In the course of his career, did the coach have more than one HoF QB?  I think a ‘yes’ answer to either of these questions precludes the stereotype of a coach piggybacking off his star player (of course, having coached 2 or more Hall of Famers might just mean that coach got extra lucky, but subjectively I think the proxy is fairly accurate).  Here is the list of coaches eliminated by these questions:

[Table: HoF coaches eliminated by the threshold tests]
Joe Gibbs wins the outlier prize by a mile: not only did he win 3 championships “on his own,” he did it with 3 different non-HoF QB’s.  Don Shula had 3 separate eras of greatness, and I think would have been a lock for the hall even with the Griese era excluded.  George Allen never won a championship, but he never really had a HoF QB either: Jurgensen (HoF) served as backup to Billy Kilmer (non-HoF) for the 4 years he played under Allen.  Sid Gillman had a long career, his sole AFL championship coming with the Chargers in 1963 – with Tobin Rote (non-HoF) under center.  Weeb Ewbank won 2 NFL championships in Baltimore with Johnny Unitas, and of course won the Super Bowl against Baltimore and Unitas with Joe Namath.  Finally, George Halas won championships with Pard Pearce (5’5”, non-HoF), Carl Brumbaugh (career passer rating: 34.9, non-HoF), Sid Luckman (HoF) and Billy Wade (non-HoF).  Plus, you know, he’s George Halas.
[Table: borderline cases]
Though Chuck Noll won all of his championships with Terry Bradshaw (HoF), those Steel Curtain teams weren’t exactly carried by the QB position (e.g., in the 1974 championship season, Bradshaw averaged less than 100 passing yards per game).  Bill Walsh is a bit more borderline: not only did all of his championships come with Joe Montana, but Montana also won a Super Bowl without him.  However, considering Walsh’s reputation as an innovator, and especially considering his incredible coaching tree (which has won nearly half of all the Super Bowls since Walsh retired in 1989), I’m willing to give him credit for his own notoriety.  Finally, Vince Lombardi, well, you know, he’s Vince Lombardi.

Which brings us to the list of the truly entangled:
[Table: the truly entangled coach/QB pairs]
I waffled a little on Paul Brown, as he is generally considered an architect of the modern league (and, you know, a team is named after him), but unlike Lombardi, Walsh and Noll, Brown’s non-Otto-Graham-entangled accomplishments are mostly unrelated to coaching.  I’m sure various arguments could be made about individual names (like, “You crazy, Tom Landry is awesome”), but the point of this list isn’t to denigrate these individuals, it’s simply to say that these are the HoF coaches whose coaching successes are the most difficult to isolate from their quarterbacks’.

I don’t really want to speculate about any broader implications, both because the sample is too small to make generalizations, and because my intuition is that coaches probably do get too much credit for their good fortune (whether QB-related or not).  But regardless, I think it’s clear that LeBatard’s 13/14 number is highly misleading.

Applied Epistemology in Politics and the Playoffs

Two nights ago, as I was watching cable news and reading various online articles and blog posts about Christine O’Donnell’s upset win over Michael Castle in Delaware’s Republican Senate primary, the hasty, almost ferocious emergence of consensus among the punditocracy – to wit, that the GOP now has virtually zero chance of picking up that seat in November – reminded me of an issue that I’ve wanted to blog about since long before I began blogging in earnest: NFL playoff prediction models.

Specifically, I have been critical of those models that project the likelihood of each surviving team winning the Super Bowl by applying a logistic regression model (i.e., “odds of winning based on past performance”) to each remaining game.  In January, I posted a number of comments to this article on Advanced NFL Stats, in which I found it absurd that, with 8 teams left, his model predicted that the Dallas Cowboys had about the same chance of winning the Super Bowl as the Jets, Ravens, Vikings, and Cardinals combined. In the brief discussion, I gave two reasons (in addition to my intuition): first, that these predictions were wildly out of whack with contract prices in sports-betting markets, and second, that I didn’t believe the model sufficiently accounted for “variance in the underlying statistics.”  Burke suggested that the first point is explained by a massive epidemic of conjunction-fallacyitis among sports bettors.  On its face, I think this is a ridiculous explanation: i.e., does he really believe that the market-movers in sports betting — people who put up hundreds of thousands (if not millions) of dollars of their own money — have never considered multiplying the odds of several games together?  Regardless, in this post I will put forth a much better explanation for this disparity than either of us proffered at the time, hopefully mooting that discussion.  On my second point, he was more dismissive, though I was being rather opaque (and somehow misspelled “beat” in one reply), so I don’t blame him.  However, I do think Burke’s intellectual hubris regarding his model (aka “model hubris”) is notable – not because I have any reason to think Burke is a particularly hubristic individual, but because I think it is indicative of a massive epidemic of model-hubrisitis among sports bloggers.

In Section 1 of this post, I will discuss what I personally mean by “applied epistemology” (with apologies to any actual applied epistemologists out there) and what I think some of its more-important implications are.  In Section 2, I will try to apply these concepts by taking a more detailed look at my problems with the above-mentioned playoff prediction models.

Section 1: Applied Epistemology Explained, Sort Of

For those who might not know, “epistemology” is essentially a fancy word for the “philosophical study of knowledge,” which mostly involves philosophers trying to define the word “knowledge” and/or trying to figure out what we know (if anything), and/or how we came to know it (if we do).  For important background, read my Complete History of Epistemology (abridged), which can be found here: In Plato’s Theaetetus, Socrates suggests that knowledge is something like “justified true belief.”  Agreement ensues.  In 1963, Edmund Gettier suggests that a person could be justified in believing something, but it could be true for the wrong reasons.  Debate ensues.  The End.

A “hot” topic in the field recently has been dealing with the implications of elaborate thought experiments similar to the following:

*begin experiment*
Imagine yourself in the following scenario:  From childhood, you have one burning desire: to know the answer to Question X.  This desire is so powerful that you dedicate your entire life to its pursuit.  You work hard in school, where you excel greatly, and you master every relevant academic discipline, becoming a tenured professor at some random elite University, earning multiple doctorates in the process.  You relentlessly refine and hone your (obviously considerable) reasoning skills using every method you can think of, and you gather and analyze every single piece of empirical data relevant to Question X available to man.  Finally, after decades of exhaustive research and study, you have a rapid series of breakthroughs that lead you to conclude – not arbitrarily, but completely based on the proof you developed through incredible amounts of hard work and ingenuity — that the answer to Question X is definitely, 100%, without a doubt: 42.  Congratulations!  To celebrate the conclusion of this momentous undertaking, you decide to finally get out of the lab/house/library and go celebrate, so you head to a popular off-campus bar.  You are so overjoyed about your accomplishment that you decide to buy everyone a round of drinks, only to find that some random guy — let’s call him Neb – just bought everyone a round of drinks himself.  What a joyous occasion: two middle-aged individuals out on the town, with reason to celebrate (and you can probably see where this is going, but I’ll go there anyway)!  As you quickly learn, it turns out that Neb is around your same age, and is also a professor at a similarly elite University in the region.  
In fact, it’s amazing how much you two have in common:  you have relatively similar demographic histories, identical IQ, SAT, and GRE scores, you both won multiple academic awards at every level, you have both achieved similar levels of prominence in your academic community, and you have both been repeatedly published in journals of comparable prestige.  As it turns out, you have both spent your entire lives studying the same question!  You have both read all the same books, you have both met, talked or worked with many comparably intelligent — or even identical — people:  It is amazing that you have never met!  Neb, of course, is feeling so celebratory because finally, after decades of exhaustive research and study, he has just had a rapid series of breakthroughs that lead him to finally conclude – not arbitrarily, but completely based on the proof he developed through incredible amounts of hard work and ingenuity — that the answer to Question X is definitely, 100%, without a doubt: 54.

You spend the next several hours drinking and arguing about Question X: while Neb seemed intelligent enough at first, everything he says about X seems completely off base, and even though you make several excellent points, he never seems to understand them.  He argues from the wrong premises in some areas, and draws the wrong conclusions in others.  He massively overvalues many factors that you are certain are not very important, and is dismissive of many factors that you are certain are crucial.  His arguments, though often similar in structure to your own, are extremely unpersuasive and don’t seem to make any sense, and though you try to explain yourself to him, he stubbornly refuses to comprehend your superior reasoning.  The next day, you stumble into class, where your students — who had been buzzing about your breakthrough all morning — begin pestering you with questions about Question X and 42.  In your last class, you had estimated that the chances of 42 being “the answer” were around 90%, and obviously they want to know if you have finally proved 42 for certain, and if not, how likely you believe it is now.  What do you tell them?

All of the research and analysis you conducted since your previous class had, indeed, led you to believe that 42 is a mortal lock.  In the course of your research, everything you have thought about or observed or uncovered, as well as all of the empirical evidence you have examined or thought experiments you have considered, all lead you to believe that 42 is the answer.  As you hesitate, your students wonder why, even going so far as to ask, “Have you heard any remotely persuasive arguments against 42 that we should be considering?”  Can you, in good conscience, say that you know the answer to Question X?  For that matter, can you even say that the odds of 42 are significantly greater than 50%?  You may be inclined, as many have been, to “damn the torpedoes” and act as if Neb’s existence is irrelevant.  But that view is quickly rebutted:  Say one of your most enterprising students brings a special device to class:  when she presses the red button marked “detonate,” if the answer to Question X is actually 42, the machine will immediately dispense $20 bills for everyone in the room; but if the answer is not actually 42, it will turn your city into rubble.  And then it will search the rubble, gather any surviving puppies or kittens, and blend them.

So assuming you’re on board that your chance encounter with Professor Neb implies that, um, you might be wrong about 42, what comes next?  There’s a whole interesting line of inquiry about what the new likelihood of 42 is and whether anything higher than 50% is supportable, but that’s not especially relevant to this discussion.  But how about this:  Say the scenario proceeds as above, you dedicate your life, yadda yadda, come to be 100% convinced of 42, but instead of going out to a bar, you decide to relax with a bubble bath and a glass of Pinot, while Neb drinks alone.  You walk into class the next day, and proudly announce that the new odds of 42 are 100%.  Mary Kate pulls out her special money-dispensing device, and you say sure, it’s a lock, press the button.  Yay, it’s raining Andrew Jacksons in your classroom!  And then: **Boom** **Meow** **Woof** **Whirrrrrrrrrrrrrr**.  Apparently Mary Kate had a twin sister — she was in Neb’s class.

*end experiment*

In reality, the fact that you might be wrong, even when you’re so sure you’re right, is more than a philosophical curiosity: it is a mathematical certainty.  The processes that lead you to form beliefs, even extremely strong ones, are imperfect.  And even when you are 100% certain that a belief-generating process is reliable, the process that led you to that certainty is itself likely imperfect.  This line of thinking is sometimes referred to as skepticism — which would be fine if it weren’t usually meant as a pejorative.

When push comes to shove, people will usually admit that there is at least some chance they are wrong, yet they massively underestimate just what those chances are.  In political debates, for example, people may admit that there is some miniscule possibility that their position is ill-informed or empirically unsound, but they will almost never say that they are more likely to be wrong than to be right.  Yet, when two populations hold diametrically opposed views, either one population is wrong or both are – all else being equal, the correct assessment in such scenarios is that no-one is likely to have it right.

When dealing with beliefs about probabilities, the complications get even trickier:  Obviously many people believe some things are close to 100% likely to be true, when the real probability may be some-much if not much-much lower.  But in addition to the extremes, people hold a whole range of poorly-calibrated probabilistic beliefs, like believing something is 60% likely when it is actually 50% or 70%.  (Note: Some Philosophically trained readers may balk at this idea, suggesting that determinism entails everything having either a 0 or 100% probability of being true.  While this argument may be sound in classroom discussions, it is highly unpragmatic: If I believe that I will win a coin flip 60% of the time, it may be theoretically true that the universe has already determined whether the coin will turn up heads or tails, but for all intents and purposes, I am only wrong by 10%).

But knowing that we are wrong so much of the time doesn’t tell us much by itself: it’s very hard to be right, and we do the best we can.  We develop heuristics that tend towards the right answers, or — more importantly for my purposes — that allow the consequences of being wrong in both directions even out over time.  You may reasonably believe that the probability of something is 30%, when, in reality, the probability is either 20% or 40%.  If the two possibilities are equally likely, then your 30% belief may be functionally equivalent under many circumstances, but they are not the same, as I will demonstrate in Section 2 (note to the philosophers: you may have noticed that this is a bit like the Gettier examples: you might be “right,” but for the wrong reasons).

There is a science to being wrong, and it doesn’t mean you have to mope in your study, or act in bad faith when you’re out of it.  “Applied Epistemology” (at least as this armchair philosopher defines it) is the study of the processes that lead to knowledge and beliefs, and of the practical implications of their limitations.

Section 2:  NFL Playoff Prediction Models

Now, let’s finally return to the Advanced NFL Stats playoff prediction model.  Burke’s methodology is simple: using a logistic regression based on various statistical indicators, the model estimates a probability for each team to win their first round matchup.  It then repeats the process for all possible second round matchups, weighting each by its likelihood of occurring (as determined by the first round projections), and so on through the championship.  With those results in hand, a team’s chances of winning the tournament are simply the product of its chances of winning in each round.  With 8 teams remaining in the divisional stage, the model’s predictions looked like this:

[Image: playoff prediction model’s Super Bowl chances for the 8 remaining teams]
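The bracket arithmetic itself is straightforward to reproduce.  Here is a minimal sketch of the weighted-matchup calculation for an 8-team single-elimination bracket, where `p[i][j]` is a hypothetical matrix of single-game win probabilities (Burke’s model would supply these from its logistic regression):

```python
def champ_probs(p):
    """Chance of each of 8 bracketed teams winning a 3-round
    single-elimination tournament.  p[i][j] = prob. that team i
    beats team j; round 1 pairs are (0,1), (2,3), (4,5), (6,7)."""
    n = 8
    reach = [1.0] * n   # probability of reaching the current round
    block = 1           # size of the opposing bracket block this round
    for _ in range(3):
        new = [0.0] * n
        for t in range(n):
            start = (t // (2 * block)) * (2 * block)
            opp_start = start + block if t < start + block else start
            # weight each possible opponent by its chance of being there
            win = sum(reach[o] * p[t][o] for o in range(opp_start, opp_start + block))
            new[t] = reach[t] * win
        reach = new
        block *= 2
    return reach

# Sanity check: with evenly matched teams, every team comes out at 1/8.
even = [[0.5] * 8 for _ in range(8)]
print(champ_probs(even)[0])  # 0.125
```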

Burke states that the individual game prediction model has a “history of accuracy” and is well “calibrated,” meaning that, historically, of the teams it has predicted to win 30% of the time, close to 30% of them have won, and so on.  For a number of reasons, I remain somewhat skeptical of this claim, especially when it comes to “extreme value” games where the model predicts very heavy favorites or underdogs.  (E.g.:  What validation safeguards do they deploy to avoid over-fitting?  How did they account for the thinness of data available for extreme values in their calibration method?)  But for now, let’s assume this claim is correct, and that the model is calibrated perfectly:  The fact that teams predicted to win 30% of the time actually won 30% of the time does NOT mean that each team actually had a 30% chance of winning.

That 30% number is just an average.  If you believe that the model perfectly nails the actual expectation for every team, you are crazy.  Since there is a large and reasonably measurable amount of variance in the very small sample of underlying statistics that the predictive model relies on, it necessarily follows that many teams will have significantly under or over-performed statistically relative to their true strength, which will be reflected in the model’s predictions.  The “perfect calibration” of the model only means that the error is well-hidden.

This doesn’t mean that it’s a bad model: like any heuristic, the model may be completely adequate for its intended context.  For example, if you’re going to bet on an individual game, barring any other information, the average of a team’s potential chances should be functionally equivalent to their actual chances.  But if you’re planning to bet on the end-result of a series of games — such as in the divisional round of the NFL playoffs — failing to understand the distribution of error could be very costly.

For example, let’s look at what happens to Minnesota and Arizona’s Super Bowl chances if we assume that the error in their winrates is uniformly distributed in the neighborhood of their predicted winrate:

[Image: Minnesota and Arizona Super Bowl chances under uniform error pools]

For Minnesota, I created a pool of 11 possible expectations that includes the actual prediction plus teams that were 5% to 25% better or worse.  I did the same for Arizona, but with half the deviation.  The average win prediction for each game remains constant, but the overall chances of winning the Super Bowl change dramatically.  To some of you, the difference between 2% and 1% may not seem like much, but if you could find a casino that would regularly offer you 100-1 on something that is actually a 50-1 shot, you could become very rich very quickly.  Of course, this uniform distribution is a crude one of many conceivable ways that the “hidden error” could be distributed, and I have no particular reason to think it is more accurate than any other.  But one thing should be abundantly clear: the winrate model on which this whole system rests tells us nothing about this distribution either.
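To see the mechanism numerically, here is a stripped-down version of that experiment: a divisional-round team must win 3 straight games, and its true per-game winrate is drawn uniformly from an 11-value pool centered on the model’s prediction (the 40% base rate below is hypothetical; the pool of 5% to 25% better or worse mirrors the construction described above):

```python
# Simplification: treat every game as having the same winrate, so winning
# the Super Bowl from the divisional round means winning 3 straight games.
w = 0.40  # hypothetical predicted per-game winrate
pool = [w + 0.05 * k for k in range(-5, 6)]  # 11 equally likely true winrates

naive  = w ** 3                                  # model's point estimate
spread = sum(x ** 3 for x in pool) / len(pool)   # average over the error pool

print(round(naive, 3), round(spread, 3))  # the spread estimate is notably larger
```

The average winrate in the pool is unchanged, but because x³ is convex, spreading the possibilities out raises the expected tournament probability (Jensen’s inequality): exactly the “hidden error” effect described above.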

The exact structure of this particular error distribution is mostly an empirical matter that can and should invite further study.  But for the purposes of this essay, speculation may suffice.  For example, here is an ad hoc distribution that I thought seemed a little more plausible than a uniform distribution:

[Image: hypothetical error distribution for a team with a 35% average predicted winrate]

This table shows the chances of winning the Super Bowl for a generic divisional round playoff team with an average predicted winrate of 35% for each game.  In this scenario, there is a 30% chance (3/10) that the prediction gets it right on the money, a 40% chance that the team is around half as good as predicted (the bottom 4 values), a 10% chance that the team is slightly better, a 10% chance that it is significantly better, and a 10% chance that the model’s prediction is completely off its rocker.  These possibilities still produce a 35% average winrate, yet, as above, the overall chances of winning the Super Bowl increase significantly (this time by almost double).  Of course, 2 random hypothetical distributions don’t yet indicate a trend, so let’s look at a family of distributions to see if we can find any patterns:

[Image: Super Bowl chances by predicted winrate under uniform error distributions of various sizes]

This chart compares the chances of a team with a given predicted winrate to win the Super Bowl based on uniform error distributions of various sizes.  So the percentages in column 1 are the odds of the team winning the Super Bowl if the predicted winrate is exactly equal to their actual winrate.  Then each subsequent column is the chances of them winning the Super Bowl if you increase the “pool” of potential actual winrates by one on each side.  Thus, the second number after 35% is the odds of winning the Super Bowl if the team is equally likely to have a 30%, 35%, or 40% chance in reality, etc.  The maximum possible change in Super Bowl winning chances for each starting prediction is contained in the light yellow box at the end of each row.  I should note that I chose this family of distributions for its ease of cross-comparison, not its precision.  I also experimented with many other models that produced a variety of interesting results, yet in every even remotely plausible one of them, two trends – both highly germane to my initial criticism of Burke’s model – endured:
1.  Lower predicted game odds lead to greater disparity between predicted and actual chances.
To further illustrate this, here’s a vertical slice of the data, containing the net change for each possible prediction, given a discrete uniform error distribution of size 7:

[Image: net change in Super Bowl chances by predicted winrate, uniform error distribution of size 7]

2.  Greater error ranges in the underlying distribution lead to greater disparity between predicted and actual chances.

To further illustrate this, here’s a horizontal slice of the data, containing the net change for each possible error range, given an initial winrate prediction of 35%:

[Image: net change in Super Bowl chances by error range, for a 35% winrate prediction]
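Both trends can be checked with a quick sketch, again collapsing each path to the Super Bowl into 3 straight wins at a single per-game winrate, with a uniform pool around the prediction standing in for the hidden error:

```python
def sb_chance(w, steps, step=0.05):
    """Mean chance of 3 straight wins when the true per-game winrate is
    uniform over w +/- steps*step (a toy stand-in for the hidden error)."""
    pool = [w + step * k for k in range(-steps, steps + 1)]
    return sum(x ** 3 for x in pool) / len(pool)

# Trend 1: the relative boost from hidden error is biggest for underdogs.
for w in (0.25, 0.35, 0.45):
    print(w, round(sb_chance(w, 3) / sb_chance(w, 0), 2))

# Trend 2: wider error ranges mean bigger boosts.
print([round(sb_chance(0.35, s) / sb_chance(0.35, 0), 2) for s in range(4)])
```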

Of course these underlying error distributions can and should be examined further, but even at this early stage of inquiry, we “know” enough (at least with a high degree of probability) to begin drawing conclusions.  That is, we know there is considerable variance in the statistics that Burke’s model relies on, which strongly suggests that there is a considerable amount of “hidden error” in its predictions.  We know greater “hidden error” leads to greater disparity in predicted Super Bowl winning chances, and that this disparity is greatest for underdogs.  Therefore, it is highly likely that this model significantly under-represents the chances of underdog teams at the divisional stage of the playoffs going on to win the Super Bowl.  Q.E.D.

This doesn’t mean that these problems aren’t fixable: the nature of the error distribution of the individual game-predicting model could be investigated and modeled itself, and the results could be used to adjust Burke’s playoff predictions accordingly.  Alternatively, if you want to avoid the sticky business of characterizing all that hidden error, a Super Bowl prediction model could be built that deals with that problem heuristically: say, by running a logistic regression that uses the available data to predict each team’s chances of winning the Super Bowl directly.

Finally, I believe this evidence both directly and indirectly supports my intuition that the large disparity between Burke’s predictions and the corresponding contract prices was more likely to be the result of model error than market error.  The direct support should be obvious, but the indirect support is also interesting:  Though markets can get it wrong just as much or more than any other process, I think that people who “put their money where their mouth is” (especially those with the most influence on the markets) tend to be more reliably skeptical and less dogmatic about making their investments than bloggers, analysts or even academics are about publishing their opinions.  Moreover, by its nature, the market takes a much more pluralistic approach to addressing controversies than do most individuals.  While this may leave it susceptible to being marginally outperformed (on balance) by more directly focused individual models or persons, I think it will also be more likely to avoid pitfalls like the one above.

Conclusions, and My Broader Agenda

The general purpose of this post is to demonstrate both the importance and difficulty of understanding and characterizing the ways in which our beliefs – and the processes we use to form them — can get it wrong.  This is, at its heart, a delicate but extremely pragmatic endeavor.  It involves being appropriately skeptical of various conclusions — even when they seem right to you – and recognizing the implications of the multitude of ways that such error can manifest.

I have a whole slew of ideas about how to apply these principles when evaluating the various pronouncements made by the political commentariat, but the blogosphere already has a Nate Silver (and Mr. Silver is smarter than me anyway), so I’ll leave that for you to consider as you see fit.

Easy NFL Predictions, the SkyNet Way

In this post I briefly discussed regression to the mean in the NFL, as well as the difficulty one can face trying to beat a simple prediction model based on even a single highly probative variable.  Indeed, for all the extensive research and cutting-edge analysis they conduct at Football Outsiders, they are seemingly unable to beat “Koko,” which is just about the simplest regression model known to primates.
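For reference, a Koko-style baseline can be written in one line: regress each team's win total partway back toward the league mean. The halfway-to-eight coefficients below are my assumption about the spirit of the model, not Football Outsiders' published formula.

```python
# Naive regression-to-the-mean projection: next season's wins are assumed to
# land halfway between last season's wins and the league average of 8.
def koko_projection(prev_wins, league_mean=8.0, regression=0.5):
    return league_mean + regression * (prev_wins - league_mean)

print(koko_projection(12))   # → 10.0 (a 12-win team projects to 10 wins)
```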

Of course, since there’s no way I could out-analyze F.O. myself — especially if I wanted to get any predictions out before tonight’s NFL opener – I decided to let my computer do the work for me: this is what neural networks are all about.  In case you’re not familiar, a neural network is a learning algorithm that can be used as a tool to process large quantities of data with many different variables — even if you don’t know which variables are the most important, or how they interact with each other.

The graphic to the right is the end result of several whole minutes of diligent configuration (after a lot of tedious data collection, of course).  It uses 60 variables (which are listed under the fold below), though I should note that I didn’t choose them for their incredible probative value (many are extremely collinear, if not pointless); I mostly just took what was available on the team and league summary pages on Pro Football Reference, and then calculated a few (non-advanced) rate stats and such in Excel.

Now, I don’t want to get too technical, but there are a few things about my methodology that I need to explain.  First, predictive models of all types have two main areas of concern: underfitting and overfitting.  Football Outsiders, for example, creates models that “underfit” their predictions.  That is to say, however interesting the individual components may be, they’re not very good at predicting what they’re supposed to predict.  Honestly, I’m not sure if F.O. even checks their models against the data, but this is a common problem in sports analytics: the analyst gets so caught up designing their model a priori that they forget to check whether it actually fits the empirical data.  On the other hand, to the diligent empirically-driven model-maker, overfitting — which is what happens when your model tries too hard to explain the data — can be just as pernicious.  When you complicate your equations or add more and more variables, you give your model more opportunity to find an “answer” that fits even relatively large data sets, but which may not be nearly as accurate when applied elsewhere.

For example, to create my model, I used data from the introduction of the Salary Cap in 1994 onward.  After excluding seasons where a team had no previous or next season to compare to, this left me with a sample of 464 seasons.  Even with a sample this large, if you include enough variables you should get good-looking results: a linear regression will appear to make “predictions” that would make any gambler salivate, and a neural network will make “predictions” that would make Nostradamus salivate.  But when you take those models and try to apply them to new situations, the gambler and Nostradamus may be in for a big disappointment.  This is because there’s a good chance your model is “overfit,” meaning it is tailored specifically to explain your data set rather than to identify the underlying factors that the data set reveals.  Obviously it can be problematic if we simply use the present data to explain the present data.  “Model validation” is a process (woefully ignored in typical sports analysis) by which you make sure that your model is capable of predicting data as well as explaining it.  One of the simplest such methods is called “split validation.”  This involves randomly splitting your sample in half, creating a “practice set” and a “test set,” and then deriving your model from the practice set while applying it to the test set.  If “deriving” a model is confusing to you, think of it like this: you are using half of your data to find an explanation for what’s going on and then checking the other half to see if that explanation seems to work.  The upside is that if your method of model-creation can pass this test reliably, your models should be just as accurate on new data as they are on the data you already have.  The downside is that you have to cut your sample size in half, which leads to bigger swings in your results, meaning you have to repeat the process multiple times to be sure that your methodology didn’t just get lucky on one round.
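The split-validation procedure described above can be sketched in a few lines. All the data here is synthetic (with an assumed SRS-to-wins relationship); in practice you would repeat the whole loop many times and average, as noted above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 464                                   # matches the sample size in the text
srs = rng.normal(0.0, 5.0, n)             # this season's Simple Rating System
next_wins = 8.0 + 0.45 * srs + rng.normal(0.0, 2.5, n)   # assumed relationship

# Randomly halve the sample into a "practice set" and a "test set".
idx = rng.permutation(n)
practice_idx, test_idx = idx[: n // 2], idx[n // 2:]

# Derive the model (a one-variable least-squares fit) from the practice half...
slope, intercept = np.polyfit(srs[practice_idx], next_wins[practice_idx], 1)

# ...then apply it to the held-out half and measure the fit there.
pred = intercept + slope * srs[test_idx]
r = np.corrcoef(pred, next_wins[test_idx])[0, 1]
print(round(r, 3))
```

Because the score comes from data the model never saw, it estimates how the model would do on genuinely new seasons rather than how well it explains its own inputs.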

For this model, the main method I am going to use to evaluate predictions is a simple correlation between predicted outcomes and actual outcomes.  The dependent variable (or variable I am trying to predict), is the next season’s wins.  As a baseline, I created a linear correlation against SRS, or “Simple Rating System,” which is PFR’s term for margin of victory adjusted for strength of schedule.  This is the single most probative common statistic when it comes to predicting the next season’s wins, and as I’ve said repeatedly, beating a regression of one highly probative variable can be a lot of work for not much gain.  To earn any bragging rights as a model-maker, I think you should be able to beat the linear SRS predictions by at least 5%, since that’s approximately the edge you would need to win money gambling against it in a casino.  For further comparison, I also created a “Massive Linear” model, which uses the majority of the variables that go into the neural network (excluding collinear variables and variables that have almost no predictive value).  For the ultimate test, I’ve created one model that is a linear regression using only the most probative variables, AND I allowed it to use the whole sample space (that is, I allowed it to cheat and use the same data that it is predicting to build its predictions).  For my “simple” neural network, of course, I didn’t do any variable-weighting or analysis myself, and it required very little configuration:  I used a very slow ‘learning rate’ (.025 if that means anything to you) with a very high number of learning cycles (5000), with decay on.  For the validated models, I repeated this process about 20 times and averaged the outcomes.  I have also included the results from running the data through the “Koko” model, and added results from the last 2 years of Football Outsiders predictions.  As you will see, the neural network was able to beat the other models fairly handily:

Football Outsiders’ numbers obviously do not go back to 1994.  Note that Koko actually performs on par with F.O. overall, though both are pretty weak compared to the SRS regression or the cheat regression.  “Koko” performed very well last season, posting a .560 correlation, though apparently last season was highly “predictable,” as all of the models based on previous patterns performed extremely well.  Note also that the Massive Linear model performs poorly: this is a result of overfitting, as explained above.
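To make the contest concrete, here is a toy version of the comparison: a tiny one-hidden-layer network (trained with the slow learning rate and many cycles mentioned earlier) against a single-variable linear baseline, both scored by the correlation between predicted and actual wins. The data is synthetic, the architecture is my own simplification, and both models are scored on the sample they were fit to, so this illustrates the mechanics rather than reproducing the results above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 464
x = rng.normal(0.0, 5.0, (n, 1))                       # SRS-like feature
y = 8.0 + 0.45 * x[:, 0] + rng.normal(0.0, 2.5, n)     # next-season wins

# Baseline: one-variable linear regression.
slope, intercept = np.polyfit(x[:, 0], y, 1)
lin_pred = intercept + slope * x[:, 0]

# "Neural network": 1 input -> 8 tanh units -> 1 output, squared-error loss,
# trained by plain batch gradient descent.
h = 8
W1 = rng.normal(0, 0.5, (1, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)
lr = 0.025                                             # learning rate from the text
for _ in range(5000):                                  # learning cycles from the text
    a = np.tanh(x @ W1 + b1)
    out = (a @ W2 + b2)[:, 0]
    g = 2 * (out - y)[:, None] / n                     # dLoss/dOutput
    W2 -= lr * (a.T @ g);   b2 -= lr * g.sum(0)
    da = (g @ W2.T) * (1 - a ** 2)                     # backprop through tanh
    W1 -= lr * (x.T @ da);  b1 -= lr * da.sum(0)

corr = lambda p: np.corrcoef(p, y)[0, 1]
print(round(corr(lin_pred), 3), round(corr(out), 3))
```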

Now here is where it gets interesting.  When I first envisioned this post, I was planning to title it “Why I Don’t Make Predictions; And: Predictions!” — on the theory that, given the extreme variance in the sport, any highly accurate model would probably produce incredibly boring results.  That is, most teams would end up relatively close to the mean, and the “better” teams would normally just be the better teams from the year before.  But when I applied the neural network to the data for this season, I was extremely surprised by its apparent boldness:


I should note that the numbers will not add up perfectly as far as divisions and conferences go.  In fact, I slightly adjusted them proportionally to make them fit the correct number of games for the league as a whole (which should have little, or even a positive, effect on its predictive power).  SkyNet does not know the rules of football or the structure of the league; its main goal is to make the most accurate predictions on a team-by-team basis, and then destroy humanity.
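The proportional adjustment works roughly like this: ignoring ties, a 32-team, 16-game league must produce exactly 256 wins in total, so every team's projection is scaled by the same factor until the totals agree. The raw projections below are placeholders.

```python
# Scale every team's projected wins so the league total equals the number of
# wins actually available (32 teams x 16 games / 2 = 256, ignoring ties).
def fit_to_league(projections, league_wins=256.0):
    scale = league_wins / sum(projections)
    return [w * scale for w in projections]

raw = [11.3, 9.1, 7.8, 4.9] * 8           # 32 hypothetical team projections
adjusted = fit_to_league(raw)
print(round(sum(adjusted), 6))            # → 256.0
```

Because every projection is multiplied by the same constant, the teams' rank order (and hence most of the predictive content) is unchanged.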

Wait, what?  New Orleans struggling to make the playoffs?  Oakland with a better record than San Diego?  The Jets as the league’s best team?  New England is out?!?  These are not the predictions of a milquetoast forecaster, so I am pleased to see that my simple creation has gonads.  Of course, there is a huge amount of variance in this process, and a .43 correlation still leaves a lot to chance.  But just to be completely clear, this is exactly the same model that soundly beat Koko, Football Outsiders, and several reasonable linear regressions (some of which were allowed to cheat) over the past 15 years.  In my limited experience, neural networks are often capable of beating conventional models even when they produce some bizarre outcomes:  for example, one of my early NBA playoff-wins-predicting neural networks was able to beat most linear regressions by a similar (though slightly smaller) margin, even though it predicted negative wins for several teams.  Anyway, I look forward to seeing how the model does this season.  Though, in my heart of hearts, if the Jets win the Super Bowl, I may fear for the future of mankind.

A list of all the input variables, after the jump:

Read the rest of this entry »

Quantum Randy Moss—An Introduction to Entanglement

[Update: This post from 2010 has been getting some renewed attention in response to Randy Moss’s mildly notorious statement in New Orleans. I’ve posted a follow-up with more recent data here: “Is Randy Moss the Greatest?” For discussion of the broader idea, however, you’re in the right place.]

As we all know, even the best-intentioned single-player statistical metrics will always be imperfect indicators of a player’s skill.  They will always be impacted by external factors such as variance, strength of opponents, team dynamics, and coaching decisions.  For example, a player’s shooting % in basketball is a function of many variables – such as where he takes his shots, when he takes his shots, how often he is double-teamed, whether the team has perimeter shooters or big space-occupying centers, how often his team plays Oklahoma, etc. – only one of which is that player’s actual shooting ability.  Some external factors will tend to even out in the long run (like opponent strength in baseball).  Others persist if left unaccounted for, but are relatively easy to model (such as the extra value of made 3-pointers, which has long been incorporated into “true shooting percentage”).  Some can be extremely difficult to work with, but should at least be possible to model in theory (such as adjusting a running back’s yards per carry based on the run-blocking skill of his offensive line).  But some factors can be impossible (or at least practically impossible) to isolate, thus creating systematic bias that cannot be accurately measured.  One of these near-impossible external factors is what I call “entanglement,” a phenomenon that occurs when more than one player’s statistics determine and depend on each other.  When it comes to evaluating one of the players involved, the entangled statistic is an information black hole: it can be literally impossible to determine which player was responsible for the relevant outcomes.

While this problem exists to varying degrees in all team sports, it is most pernicious in football.  As a result, I am extremely skeptical of all statistical player evaluations for that sport, from the most basic to the most advanced.  For a prime example, no matter how detailed or comprehensive your model is, you will not be able to detangle a quarterback’s statistics from those of his other offensive skill position players, particularly his wide receivers.  You may be able to measure the degree of entanglement, for example by examining how much various statistics vary when players change teams.  You may even be able to make reasonable inferences about how likely it is that one player or another should get more credit, for example by comparing the careers of Joe Montana with Kansas City and Jerry Rice with Steve Young (and later Oakland), and using that information to guess who was more responsible for their success together.  But even the best statistics-based guess in that kind of scenario is ultimately only going to give you a probability (rather than an answer), and will be based on a minuscule sample.

Of course, though stats may never be the ultimate arbiter we might want them to be, they can still tell us a lot in particular situations.  For example, if only one element (e.g., a new player) in a system changes, corresponding with a significant change in results, it may be highly likely that that player deserves the credit (note: this may be true whether or not it is reflected directly in his stats).  The same may be true if a player changes teams or situations repeatedly with similar outcomes each time.  With that in mind, let’s turn to one of the great entanglement case-studies in NFL history: Randy Moss.

I’ve often quipped to my friends or other sports enthusiasts that I can prove that Randy Moss is probably the best receiver of all time in 13 words or less.  The proof goes like this:

Chad Pennington, Randall Cunningham, Jeff George, Daunte Culpepper, Tom Brady, and Matt Cassel.

The entanglement between QB and WR is so strong that I don’t think I am overstating the case at all by saying that, while a receiver needs a good quarterback to throw to him, ultimately his skill-level may have more impact on his quarterback’s statistics than on his own.  This is especially true when coaches or defenses key on him, which may open up the field substantially despite having a negative impact on his stat-line.  Conversely, a beneficial implication of such high entanglement is that a quarterback’s numbers may actually provide more insight into a wide receiver’s abilities than the receiver’s own – especially if you have had many quarterbacks throwing to the same receiver with comparable success, as Randy Moss has.

Before crunching the data, I would like to throw some bullet points out there:

  • There have been 6 quarterbacks who have started 9 or more games in a season with Randy Moss as one of their receivers (for obvious reasons, I have replaced Chad Pennington with Kerry Collins for this analysis).
  • Only two of them had starting jobs in the seasons immediately prior to those with Moss (Kerry Collins, Tom Brady).
  • Only one of them had a starting job in the season immediately following those with Moss (Matt Cassel).
  • Pro Bowl appearances of quarterbacks throwing to Moss: 6.  Pro Bowl appearances of quarterbacks after throwing to Moss: 0.
  • Daunte Culpepper made the Pro Bowl 3 times in his 5 seasons throwing to Moss.  He has won a combined 5 games as a starting quarterback in 5 seasons since.

With the exception of Kerry Collins, all of the QB’s who have thrown to Moss have had “career” years with him (Collins improved, but not by as much as the others).  To illustrate this point, I’ve compiled a number of popular statistics for each quarterback for their Moss years and their other years, in order to figure out the average effect Moss has had.  To qualify as a “Moss year,” they had to have been his quarterback for at least 9 games.  I have excluded all seasons where the quarterback was primarily a reserve, or was only the starting quarterback for a few games.  The “other” seasons include all of that QB’s data in seasons without Moss on his team.  This is not meant to bias the statistics: the reason I exclude partial seasons in one case and not the other is that I don’t believe occasional sub work or participation in a QB controversy accurately reflects the benefit of throwing to Moss, but those things reflect the cost of not having Moss just fine.  In any case, to be as fair as possible, I’ve included the two Daunte Culpepper seasons where he was seemingly hampered by injury, and the Kerry Collins season where Oakland seemed to be in turmoil, all three of which could arguably be unrepresentative.
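The averaging scheme described above can be sketched as follows: for each quarterback, average a stat over his qualifying Moss years and over his other seasons, then take the difference. The quarterback labels and stat values below are hypothetical placeholders, not the real data.

```python
# Each row: (quarterback label, was it a "Moss year"?, stat value).
# All numbers are invented for illustration.
seasons = [
    ("QB_A", True, 98.0), ("QB_A", False, 77.0), ("QB_A", False, 81.0),
    ("QB_B", True, 111.0), ("QB_B", True, 103.0), ("QB_B", False, 94.0),
]

def moss_effect(rows, qb):
    """Average stat in Moss years minus average stat in other years."""
    with_m  = [v for name, m, v in rows if name == qb and m]
    without = [v for name, m, v in rows if name == qb and not m]
    return sum(with_m) / len(with_m) - sum(without) / len(without)

print(moss_effect(seasons, "QB_A"))   # → 19.0
```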

As you can see in the table below, the quarterbacks throwing to Moss posted significantly better numbers across the board:

[Edit to note: in this table’s sparklines and in the charts below, the 2nd and 3rd positions are actually transposed from their chronological order.  Jeff George was Moss’s 2nd quarterback and Culpepper was his 3rd, rather than vice versa.  This happened because I initially sorted the seasons by year and team, forgetting that George and Culpepper both came to Minnesota at the same time.]

Note: Adjusted Net Yards Per Attempt incorporates yardage lost due to sacks, plus gives bonuses for TD’s and penalties for interceptions.  Approximate Value is an advanced stat from Pro Football Reference that attempts to summarize all seasons for comparison across positions.  Details here.
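Pro Football Reference's definition of Adjusted Net Yards per Attempt can be computed directly; the sample stat line in the example is invented.

```python
# ANY/A per Pro Football Reference:
# (pass yards + 20 * pass TD - 45 * INT - sack yards) / (attempts + sacks)
def any_per_attempt(pass_yds, pass_td, ints, sack_yds, attempts, sacks):
    return (pass_yds + 20 * pass_td - 45 * ints - sack_yds) / (attempts + sacks)

# e.g., 4000 yards, 30 TD, 10 INT, 200 sack yards, 550 attempts, 30 sacks:
print(round(any_per_attempt(4000, 30, 10, 200, 550, 30), 2))   # → 6.81
```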

Out of 60 metrics, only 3 times did one of these quarterbacks fail to post better numbers throwing to Moss than in the rest of his career:  Kerry Collins had a slightly lower completion percentage and slightly higher sack percentage, and Jeff George had a slightly higher interception percentage for his 10-game campaign in 1999 (though this was still the highest-rated season of his career).  For many of these stats, the difference is practically mind-boggling:  QB Rating may be an imperfect statistic overall, but it is a fairly accurate composite of the passing statistics that the broader football audience cares the most about, and 19.8 points is about the difference in career rating between Peyton Manning and J.P. Losman.
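Since QB Rating keeps coming up, the NFL passer rating composite itself is easy to state in code: four per-attempt components (completion %, yards per attempt, TD %, INT %), each clamped to [0, 2.375], averaged, and scaled by 100.

```python
# NFL passer rating: each component is clamped to [0, 2.375], so a "perfect"
# game maxes out at (4 * 2.375) / 6 * 100 = 158.3.
def passer_rating(comp, att, yards, td, ints):
    clamp = lambda v: max(0.0, min(2.375, v))
    a = clamp((comp / att - 0.3) * 5)       # completion percentage
    b = clamp((yards / att - 3) * 0.25)     # yards per attempt
    c = clamp((td / att) * 20)              # touchdown percentage
    d = clamp(2.375 - (ints / att) * 25)    # interception percentage
    return (a + b + c + d) / 6 * 100

print(round(passer_rating(20, 25, 320, 3, 0), 1))   # → 158.3 (a perfect game)
```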

Though obviously Randy Moss is a great player, I still maintain that we can never truly measure exactly how much of this success was a direct result of Moss’s contribution and how much was a result of other factors.  But I think it is very important to remember that, as far as highly entangled statistics like this go, independent variables are rare, and this is just about the most robust data you’ll ever get.  Thus, while I can’t say for certain that Randy Moss is the greatest receiver in NFL History, I think it is unquestionably true that there is more statistical evidence of Randy Moss’s greatness than there is for any other receiver.

Full graphs for all 10 stats after the jump:

Read the rest of this entry »