UPDATE: Advanced NFL Stats Admits I Was Right. Sort Of.

Background:  In January, long before I started blogging in earnest, I made several comments on this Advanced NFL Stats post that were critical of Brian Burke’s playoff prediction model, particularly that, with 8 teams left, it predicted that the Dallas Cowboys had about the same chance of winning the Super Bowl as the Jets, Ravens, Vikings, and Cardinals combined. This seemed both implausible on its face and extremely contrary to contract prices, so I was skeptical.  In that thread, Burke claimed that his model was “almost perfectly calibrated. Teams given a 0.60 probability to win do win 60% of the time, teams given a 0.70 probability win 70%, etc.”  I expressed interest in seeing his calibration data, “especially for games with considerable favorites, where I think your model overstates the chances of the better team,” but did not get a response.

I brought this dispute up in my monstrously-long passion-post, “Applied Epistemology in Politics and the Playoffs,” where I explained how, even if his model was perfectly calibrated, it would still almost certainly be underestimating the chances of the underdogs.  But now I see that Burke has finally posted the calibration data (compiled by a reader from 2007 on).  It’s a very simple graph, which I’ve recreated here, with a trend-line for his actual data:

[Figure: Burke’s calibration data (2007 on), predicted win probability vs. actual win rate, with a trend-line fitted to the actual results.]

Now I know this is only 3+ years of data, but I think I can spot a trend:  for games with considerable favorites, his model seems to overstate the chances of the better team.  Naturally, Burke immediately acknowledges this error:

On the other hand, there appears to be some trends. the home team is over-favored in mismatches where it is the stronger team and is under-favored in mismatches where it is the weaker team. It’s possible that home field advantage may be even stronger in mismatches than the model estimates.

Wait, what? If the error were strictly based on stronger-than-expected home-field advantage, the red line should be above the blue line, as the home team should win more often than the model projects whether it is a favorite or not – in other words, the actual trend-line would be parallel to the “perfect” line but with a higher intercept.  Rather, what we see is a trend-line with what appears to be a slightly higher intercept but a somewhat smaller slope, creating an “X” shape, consistent with the model being least accurate for extreme values.  In fact, if you shifted the blue line slightly upward to “shock” for Burke’s hypothesized home-field bias, the “X” shape would be even more perfect: the actual and predicted lines would cross even closer to .50, while diverging symmetrically toward the extremes.

Considering that this error compounds exponentially in a series of playoff games, this data (combined with the still-applicable issue I discussed previously) strongly vindicates my intuition that the market is more trustworthy than Burke’s playoff prediction model, at least when applied to big favorites and big dogs.

Applied Epistemology in Politics and the Playoffs

Two nights ago, as I was watching cable news and reading various online articles and blog posts about Christine O’Donnell’s upset win over Michael Castle in Delaware’s Republican Senate primary, the hasty, almost ferocious emergence of consensus among the punditocracy – to wit, that the GOP now has virtually zero chance of picking up that seat in November – reminded me of an issue that I’ve wanted to blog about since long before I began blogging in earnest: NFL playoff prediction models.

Specifically, I have been critical of those models that project the likelihood of each surviving team winning the Super Bowl by applying a logistic regression model (i.e., “odds of winning based on past performance”) to each remaining game.  In January, I posted a number of comments to this article on Advanced NFL Stats, in which I found it absurd that, with 8 teams left, his model predicted that the Dallas Cowboys had about the same chance of winning the Super Bowl as the Jets, Ravens, Vikings, and Cardinals combined. In the brief discussion, I gave two reasons (in addition to my intuition): first, that these predictions were wildly out of whack with contract prices in sports-betting markets, and second, that I didn’t believe the model sufficiently accounted for “variance in the underlying statistics.”  Burke suggested that the first point is explained by a massive epidemic of conjunction-fallacyitis among sports bettors.  On its face, I think this is a ridiculous explanation: i.e., does he really believe that the market-movers in sports betting — people who put up hundreds of thousands (if not millions) of dollars of their own money — have never considered multiplying the odds of several games together?  Regardless, in this post I will put forth a much better explanation for this disparity than either of us proffered at the time, hopefully mooting that discussion.  On my second point, he was more dismissive, though I was being rather opaque (and somehow misspelled “beat” in one reply), so I don’t blame him.  However, I do think Burke’s intellectual hubris regarding his model (aka “model hubris”) is notable – not because I have any reason to think Burke is a particularly hubristic individual, but because I think it is indicative of a massive epidemic of model-hubrisitis among sports bloggers.

In Section 1 of this post, I will discuss what I personally mean by “applied epistemology” (with apologies to any actual applied epistemologists out there) and what I think some of its more-important implications are.  In Section 2, I will try to apply these concepts by taking a more detailed look at my problems with the above-mentioned playoff prediction models.

Section 1: Applied Epistemology Explained, Sort Of

For those who might not know, “epistemology” is essentially a fancy word for the “philosophical study of knowledge,” which mostly involves philosophers trying to define the word “knowledge” and/or trying to figure out what we know (if anything), and/or how we came to know it (if we do).  For important background, read my Complete History of Epistemology (abridged), which can be found here: In Plato’s Theaetetus, Socrates suggests that knowledge is something like “justified true belief.”  Agreement ensues.  In 1963, Edmund Gettier suggests that a person could be justified in believing something, but it could be true for the wrong reasons.  Debate ensues.  The End.

A “hot” topic in the field recently has been dealing with the implications of elaborate thought experiments similar to the following:

*begin experiment*
Imagine yourself in the following scenario:  From childhood, you have one burning desire: to know the answer to Question X.  This desire is so powerful that you dedicate your entire life to its pursuit.  You work hard in school, where you excel greatly, and you master every relevant academic discipline, becoming a tenured professor at some random elite University, earning multiple doctorates in the process.  You relentlessly refine and hone your (obviously considerable) reasoning skills using every method you can think of, and you gather and analyze every single piece of empirical data relevant to Question X available to man.  Finally, after decades of exhaustive research and study, you have a rapid series of breakthroughs that lead you to conclude – not arbitrarily, but completely based on the proof you developed through incredible amounts of hard work and ingenuity — that the answer to Question X is definitely, 100%, without a doubt: 42.  Congratulations!  To celebrate the conclusion of this momentous undertaking, you decide to finally get out of the lab/house/library and go celebrate, so you head to a popular off-campus bar.  You are so overjoyed about your accomplishment that you decide to buy everyone a round of drinks, only to find that some random guy — let’s call him Neb – just bought everyone a round of drinks himself.  What a joyous occasion: two middle-aged individuals out on the town, with reason to celebrate (and you can probably see where this is going, but I’ll go there anyway)!  As you quickly learn, it turns out that Neb is around your same age, and is also a professor at a similarly elite University in the region.  In fact, it’s amazing how much you two have in common:  you have relatively similar demographic histories, identical IQ, SAT, and GRE scores, you both won multiple academic awards at every level, you have both achieved similar levels of prominence in your academic community, and you have both been repeatedly published in journals of comparable prestige.  In fact, as it turns out, you have both spent your entire lives studying the same question!  You have both read all the same books, you have both met, talked or worked with many comparably intelligent — or even identical — people:  It is amazing that you have never met!  Neb, of course, is feeling so celebratory because finally, after decades of exhaustive research and study, he has just had a rapid series of breakthroughs that lead him to finally conclude – not arbitrarily, but completely based on the proof he developed through incredible amounts of hard work and ingenuity — that the answer to Question X is definitely, 100%, without a doubt: 54.

You spend the next several hours drinking and arguing about Question X: while Neb seemed intelligent enough at first, everything he says about X seems completely off base, and even though you make several excellent points, he never seems to understand them.  He argues from the wrong premises in some areas, and draws the wrong conclusions in others.  He massively overvalues many factors that you are certain are not very important, and is dismissive of many factors that you are certain are crucial.  His arguments, though often similar in structure to your own, are extremely unpersuasive and don’t seem to make any sense, and though you try to explain yourself to him, he stubbornly refuses to comprehend your superior reasoning.  The next day, you stumble into class, where your students — who had been buzzing about your breakthrough all morning — begin pestering you with questions about Question X and 42.  In your last class, you had estimated that the chances of 42 being “the answer” were around 90%, and obviously they want to know if you have finally proved 42 for certain, and if not, how likely you believe it is now.  What do you tell them?

All of the research and analysis you conducted since your previous class had, indeed, led you to believe that 42 is a mortal lock.  In the course of your research, everything you have thought about or observed or uncovered, as well as all of the empirical evidence you have examined or thought experiments you have considered, all lead you to believe that 42 is the answer.  As you hesitate, your students wonder why, even going so far as to ask, “Have you heard any remotely persuasive arguments against 42 that we should be considering?”  Can you, in good conscience, say that you know the answer to Question X?  For that matter, can you even say that the odds of 42 are significantly greater than 50%?  You may be inclined, as many have been, to “damn the torpedoes” and act as if Neb’s existence is irrelevant.  But that view is quickly rebutted:  Say one of your most enterprising students brings a special device to class:  when she presses the red button marked “detonate,” if the answer to Question X is actually 42, the machine will immediately dispense $20 bills for everyone in the room; but if the answer is not actually 42, it will turn your city into rubble.  And then it will search the rubble, gather any surviving puppies or kittens, and blend them.

So assuming you’re on board that your chance encounter with Professor Neb implies that, um, you might be wrong about 42, what comes next?  There’s a whole interesting line of inquiry about what the new likelihood of 42 is and whether anything higher than 50% is supportable, but that’s not especially relevant to this discussion.  But how about this:  Say the scenario proceeds as above, you dedicate your life, yadda yadda, come to be 100% convinced of 42, but instead of going out to a bar, you decide to relax with a bubble bath and a glass of Pinot, while Neb drinks alone.  You walk into class the next day, and proudly announce that the new odds of 42 are 100%.  Mary Kate pulls out her special money-dispensing device, and you say sure, it’s a lock, press the button.  Yay, it’s raining Andrew Jacksons in your classroom!  And then: **Boom** **Meow** **Woof** **Whirrrrrrrrrrrrrr**.  Apparently Mary Kate had a twin sister — she was in Neb’s class.

*end experiment*

In reality, the fact that you might be wrong, even when you’re so sure you’re right, is more than a philosophical curiosity: it is a mathematical certainty.  The processes that lead you to form beliefs, even extremely strong ones, are imperfect.  And even when you are 100% certain that a belief-generating process is reliable, the process that led you to that conclusion is itself likely imperfect.  This line of thinking is sometimes referred to as skepticism — which would be fine if it weren’t usually meant as a pejorative.

When push comes to shove, people will usually admit that there is at least some chance they are wrong, yet they massively underestimate just what those chances are.  In political debates, for example, people may admit that there is some minuscule possibility that their position is ill-informed or empirically unsound, but they will almost never say that they are more likely to be wrong than to be right.  Yet, when two populations hold diametrically opposed views, either one population is wrong or both are – all else being equal, the correct assessment in such scenarios is that neither side is more likely to be right than wrong.

When dealing with beliefs about probabilities, the complications get even trickier:  Obviously many people believe some things are close to 100% likely to be true, when the real probability may be some-much if not much-much lower.  But in addition to the extremes, people hold a whole range of poorly-calibrated probabilistic beliefs, like believing something is 60% likely when it is actually 50% or 70%.  (Note: Some philosophically trained readers may balk at this idea, suggesting that determinism entails everything having either a 0% or 100% probability of being true.  While this argument may be sound in classroom discussions, it is highly unpragmatic: If I believe that I will win a coin flip 60% of the time, it may be theoretically true that the universe has already determined whether the coin will turn up heads or tails, but for all intents and purposes, I am only wrong by 10%.)

But knowing that we are wrong so much of the time doesn’t tell us much by itself: it’s very hard to be right, and we do the best we can.  We develop heuristics that tend towards the right answers, or — more importantly for my purposes — that allow the consequences of being wrong in both directions to even out over time.  You may reasonably believe that the probability of something is 30%, when, in reality, the probability is either 20% or 40%.  If the two possibilities are equally likely, then your 30% belief may be functionally equivalent under many circumstances, but they are not the same, as I will demonstrate in Section 2 (note to the philosophers: you may have noticed that this is a bit like the Gettier examples: you might be “right,” but for the wrong reasons).

There is a science to being wrong, and it doesn’t mean you have to mope in your study, or act in bad faith when you’re out of it.  “Applied Epistemology” (at least as this armchair philosopher defines it) is the study of the processes that lead to knowledge and beliefs, and of the practical implications of their limitations.

Section 2:  NFL Playoff Prediction Models

Now, let’s finally return to the Advanced NFL Stats playoff prediction model.  Burke’s methodology is simple: using a logistic regression based on various statistical indicators, the model estimates a probability for each team to win its first round matchup.  It then repeats the process for all possible second round matchups, weighting each by its likelihood of occurring (as determined by the first round projections), and so on through the championship.  With those results in hand, a team’s chances of winning the tournament are simply the product of its chances of winning in each round.  With 8 teams remaining in the divisional stage, the model’s predictions looked like this:

[Table: the model’s Super Bowl probabilities for the 8 remaining divisional-round teams.]
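
To make the chaining concrete, here is a minimal sketch of that round-by-round calculation for a generic eight-team bracket.  It is not Burke’s code, and the pairwise win probabilities are invented stand-ins for whatever his game-level logistic regression would produce:

```python
import numpy as np

# Invented pairwise matrix: p[i][j] = probability that team i beats team j.
# In Burke's setup these would come from his game-level logistic regression;
# here they are generated from made-up "strength" numbers purely to illustrate.
rng = np.random.default_rng(0)
strength = rng.uniform(0.3, 0.7, size=8)
p = np.array([[si / (si + sj) for sj in strength] for si in strength])

def bracket_win_probs(p):
    """Championship probability for each of 8 bracket-seeded teams.
    Round 1 pairs (0,1), (2,3), (4,5), (6,7); winners advance by bracket."""
    n = len(p)
    alive = np.ones(n)                 # P(team has survived to the current round)
    for rnd in range(3):               # three rounds, e.g. divisional, conference, Super Bowl
        block = 2 ** (rnd + 1)         # size of the bracket block a team sits in
        new_alive = np.zeros(n)
        for i in range(n):
            lo = (i // block) * block
            half = block // 2
            # Possible opponents: the other half of this team's bracket block.
            opp_lo = lo + half if i < lo + half else lo
            # P(win this round) = sum over opponents of
            #   P(opponent reached this round) * P(beat that opponent),
            # i.e. each matchup weighted by its likelihood of occurring.
            win = sum(alive[j] * p[i][j] for j in range(opp_lo, opp_lo + half))
            new_alive[i] = alive[i] * win
        alive = new_alive
    return alive

champ = bracket_win_probs(p)
for seed, prob in enumerate(champ):
    print(f"team {seed}: {prob:.3f}")
print("total:", round(champ.sum(), 3))   # the field sums to 1
```

The key step is the weighted sum over possible opponents, which is exactly the “weighting each by its likelihood of occurring” that Burke describes.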

Burke states that the individual game prediction model has a “history of accuracy” and is well “calibrated,” meaning that, historically, of the teams it has predicted to win 30% of the time, close to 30% of them have won, and so on.  For a number of reasons, I remain somewhat skeptical of this claim, especially when it comes to “extreme value” games where the model predicts very heavy favorites or underdogs.  (E.g.: What validation safeguards do they deploy to avoid over-fitting?  How did they account for the thinness of data available for extreme values in their calibration method?)  But for now, let’s assume this claim is correct, and that the model is calibrated perfectly:  The fact that teams predicted to win 30% of the time actually won 30% of the time does NOT mean that each team actually had a 30% chance of winning.

That 30% number is just an average.  If you believe that the model perfectly nails the actual expectation for every team, you are crazy.  Since there is a large and reasonably measurable amount of variance in the very small sample of underlying statistics that the predictive model relies on, it necessarily follows that many teams will have significantly under or over-performed statistically relative to their true strength, which will be reflected in the model’s predictions.  The “perfect calibration” of the model only means that the error is well-hidden.

This doesn’t mean that it’s a bad model: like any heuristic, the model may be completely adequate for its intended context.  For example, if you’re going to bet on an individual game, barring any other information, the average of a team’s potential chances should be functionally equivalent to their actual chances.  But if you’re planning to bet on the end-result of a series of games — such as in the divisional round of the NFL playoffs — failing to understand the distribution of error could be very costly.

For example, let’s look at what happens to Minnesota and Arizona’s Super Bowl chances if we assume that the error in their winrates is uniformly distributed in the neighborhood of their predicted winrate:

[Table: Minnesota’s and Arizona’s Super Bowl chances under the point predictions vs. uniformly distributed pools of possible winrates.]

For Minnesota, I created a pool of 11 possible expectations that includes the actual prediction plus versions of the team that were 5% to 25% better or worse.  I did the same for Arizona, but with half the deviation.  The average win prediction for each game remains constant, but the overall chances of winning the Super Bowl change dramatically.  To some of you, the difference between 2% and 1% may not seem like much, but if you could find a casino that would regularly offer you 100-1 on something that is actually a 50-1 shot, you could become very rich very quickly.  Of course, this uniform distribution is only one crude choice among many conceivable ways that the “hidden error” could be distributed, and I have no particular reason to think it is more accurate than any other.  But one thing should be abundantly clear: the winrate model on which this whole system rests tells us nothing about this distribution either.
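
The arithmetic driving this effect is easy to reproduce.  Below is a minimal sketch, under my own simplifying assumption that all three remaining games carry the same winrate (as in the generic-team tables that follow), using a 35% prediction and the same ±5% to ±25% pool shape described above:

```python
import numpy as np

def championship_odds(winrate_pool, games_left=3):
    """Average chance of winning `games_left` straight games when the team's
    true per-game winrate is drawn uniformly from `winrate_pool`."""
    pool = np.asarray(winrate_pool, dtype=float)
    return float(np.mean(pool ** games_left))

p_hat = 0.35                              # the model's predicted winrate
pool = p_hat + np.arange(-5, 6) * 0.05    # 11 possible "true" winrates, 10% to 60%

print("average of pool:", round(pool.mean(), 2))                 # still 0.35: calibration looks perfect
print("point prediction:", round(p_hat ** 3, 4))                  # chance of 3 straight wins at exactly 35%
print("with hidden error:", round(championship_odds(pool), 4))    # noticeably higher
```

Because winning three straight games is a convex function of the per-game winrate, any symmetric spread around the prediction raises the average championship probability; the wider the spread, the bigger the boost.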

The exact structure of this particular error distribution is mostly an empirical matter that can and should invite further study.  But for the purposes of this essay, speculation may suffice.  For example, here is an ad hoc distribution that I thought seemed a little more plausible than a uniform distribution:

[Table: an ad hoc error distribution for a generic team with a 35% average predicted winrate, and the resulting Super Bowl chances.]

This table shows the chances of winning the Super Bowl for a generic divisional round playoff team with an average predicted winrate of 35% for each game.  In this scenario, there is a 30% chance (3/10) that the prediction gets it right on the money, a 40% chance that the team is around half as good as predicted (the bottom 4 values), a 10% chance that the team is slightly better, a 10% chance that it is significantly better, and a 10% chance that the model’s prediction is completely off its rocker.  These possibilities still produce a 35% average winrate, yet, as above, the overall chances of winning the Super Bowl increase significantly (this time by almost double).  Of course, 2 random hypothetical distributions don’t yet indicate a trend, so let’s look at a family of distributions to see if we can find any patterns:

[Table: Super Bowl chances by predicted winrate under uniform error distributions of increasing size.]

This chart compares the chances that a team with a given predicted winrate wins the Super Bowl under uniform error distributions of various sizes.  So the percentages in column 1 are the odds of the team winning the Super Bowl if the predicted winrate is exactly equal to their actual winrate.  Then each subsequent column is the chances of them winning the Super Bowl if you increase the “pool” of potential actual winrates by one on each side.  Thus, the second number after 35% is the odds of winning the Super Bowl if the team is equally likely to have a 30%, 35%, or 40% chance in reality, etc.  The maximum possible change in Super Bowl winning chances for each starting prediction is contained in the light yellow box at the end of each row.  I should note that I chose this family of distributions for its ease of cross-comparison, not its precision.  I also experimented with many other models that produced a variety of interesting results, yet in every even remotely plausible one of them, two trends – both highly germane to my initial criticism of Burke’s model – endured:
1.  Lower predicted game odds lead to greater disparity between predicted and actual chances.
To further illustrate this, here’s a vertical slice of the data, containing the net change for each possible prediction, given a discrete uniform error distribution of size 7:

[Table: net change in Super Bowl chances by predicted winrate, for a discrete uniform error distribution of size 7.]

2.  Greater error ranges in the underlying distribution lead to greater disparity between predicted and actual chances.

To further illustrate this, here’s a horizontal slice of the data, containing the net change for each possible error range, given an initial winrate prediction of 35%:

[Table: net change in Super Bowl chances by error-range size, for an initial winrate prediction of 35%.]

Of course these underlying error distributions can and should be examined further, but even at this early stage of inquiry, we “know” enough (at least with a high degree of probability) to begin drawing conclusions.  That is, we know there is considerable variance in the statistics that Burke’s model relies on, which strongly suggests that there is a considerable amount of “hidden error” in its predictions.  We know greater “hidden error” leads to greater disparity in predicted Super Bowl winning chances, and that this disparity is greatest for underdogs.  Therefore, it is highly likely that this model significantly under-represents the chances of underdog teams at the divisional stage of the playoffs going on to win the Super Bowl.  Q.E.D.

This doesn’t mean that these problems aren’t fixable: the nature of the error distribution of the individual game-predicting model could be investigated and modeled itself, and the results could be used to adjust Burke’s playoff predictions accordingly.  Alternatively, if you want to avoid the sticky business of characterizing all that hidden error, a Super Bowl prediction model could be built that deals with that problem heuristically: say, by running a logistic regression that uses the available data to predict each team’s chances of winning the Super Bowl directly.
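
A sketch of what that direct approach might look like, using scikit-learn.  Nothing here is a real dataset or Burke’s method: the features, labels, and sample sizes are placeholders, and the only substantive steps are training on a binary “won the Super Bowl” label and renormalizing the remaining field so its probabilities sum to one:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder training data: one row per historical playoff team, columns for
# whatever regular-season indicators you trust (SRS, efficiency stats, etc.);
# y = 1 if that team went on to win the Super Bowl.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 4))                # e.g. 10 seasons x 12 playoff teams
y_train = (rng.random(120) < 1 / 12).astype(int)   # roughly one champion per 12 teams (fake)

model = LogisticRegression().fit(X_train, y_train)

# The 8 remaining divisional-round teams (again, fabricated feature rows).
X_remaining = rng.normal(size=(8, 4))
raw = model.predict_proba(X_remaining)[:, 1]

# Exactly one of these teams will win it all, so renormalize the field to sum to 1.
champ_probs = raw / raw.sum()
print(np.round(champ_probs, 3))
```

Whether such a direct model would actually beat the chained approach is an empirical question; the point is only that it sidesteps the need to characterize the hidden error explicitly.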

Finally, I believe this evidence both directly and indirectly supports my intuition that the large disparity between Burke’s predictions and the corresponding contract prices was more likely to be the result of model error than market error.  The direct support should be obvious, but the indirect support is also interesting:  Though markets can get it wrong just as much or more than any other process, I think that people who “put their money where their mouth is” (especially those with the most influence on the markets) tend to be more reliably skeptical and less dogmatic about making their investments than bloggers, analysts or even academics are about publishing their opinions.  Moreover, by its nature, the market takes a much more pluralistic approach to addressing controversies than do most individuals.  While this may leave it susceptible to being marginally outperformed (on balance) by more directly focused individual models or persons, I think it will also be more likely to avoid pitfalls like the one above.

Conclusions, and My Broader Agenda

The general purpose of this post is to demonstrate both the importance and difficulty of understanding and characterizing the ways in which our beliefs – and the processes we use to form them — can get it wrong.  This is, at its heart, a delicate but extremely pragmatic endeavor.  It involves being appropriately skeptical of various conclusions — even when they seem right to you – and recognizing the implications of the multitude of ways that such error can manifest.

I have a whole slew of ideas about how to apply these principles when evaluating the various pronouncements made by the political commentariat, but the blogosphere already has a Nate Silver (and Mr. Silver is smarter than me anyway), so I’ll leave that for you to consider as you see fit.

Easy NFL Predictions, the SkyNet Way

In an earlier post I briefly discussed regression to the mean in the NFL, as well as the difficulty one can face trying to beat a simple prediction model based on even a single highly probative variable.  Indeed, for all the extensive research and cutting-edge analysis they conduct at Football Outsiders, they are seemingly unable to beat “Koko,” which is just about the simplest regression model known to primates.

Of course, since there’s no way I could out-analyze F.O. myself — especially if I wanted to get any predictions out before tonight’s NFL opener – I decided to let my computer do the work for me: this is what neural networks are all about.  In case you’re not familiar, a neural network is a learning algorithm that can be used as a tool to process large quantities of data with many different variables — even if you don’t know which variables are the most important, or how they interact with each other.

The graphic to the right is the end result of several whole minutes of diligent configuration (after a lot of tedious data collection, of course).  It uses 60 variables (which are listed under the fold below), though I should note that I didn’t choose them because of their incredible probative value – many are extremely collinear, if not pointless — I mostly just took what was available on the team and league summary pages on Pro Football Reference, and then calculated a few (non-advanced) rate stats and such in Excel.

Now, I don’t want to get too technical, but there are a few things about my methodology that I need to explain.  First, predictive models of all types have two main areas of concern: under-fitting and over-fitting.  Football Outsiders, for example, creates models that “under-fit” their predictions.  That is to say, however interesting the individual components may be, they’re not very good at predicting what they’re supposed to predict.  Honestly, I’m not sure if F.O. even checks their models against the data, but this is a common problem in sports analytics: the analyst gets so caught up designing their model a priori that they forget to check whether it actually fits the empirical data.  On the other hand, to the diligent, empirically-driven model-maker, over-fitting — which is what happens when your model tries too hard to explain the data — can be just as pernicious.  When you complicate your equations or add more and more variables, you give your model more opportunity to find an “answer” that fits even relatively large data-sets, but that may not be nearly as accurate when applied elsewhere.

For example, to create my model, I used data from the introduction of the Salary Cap in 1994 on.  After excluding seasons where a team had no previous or next season to compare to, this left me with a sample of 464 seasons.  Even with a sample this large, if you include enough variables you should get good-looking results: a linear regression will appear to make “predictions” that would make any gambler salivate, and a neural network will make “predictions” that would make Nostradamus salivate.  But when you take those models and try to apply them to new situations, the gambler and Nostradamus may be in for a big disappointment.  This is because there’s a good chance your model is “overfit,” meaning it is tailored specifically to explaining your dataset rather than to identifying the outside factors that the dataset reveals.  Obviously it can be problematic if we simply use the present data to explain the present data.  “Model validation” is a process (woefully ignored in typical sports analysis) by which you make sure that your model is capable of predicting data as well as explaining it.  One of the simplest such methods is called “split validation.”  This involves randomly splitting your sample in half, creating a “practice set” and a “test set,” and then deriving your model from the practice set while applying it to the test set.  If “deriving” a model is confusing to you, think of it like this: you are using half of your data to find an explanation for what’s going on and then checking the other half to see if that explanation seems to work.  The upside to this is that if your method of model-creation can pass this test reliably, your models should be just as accurate on new data as they are on the data you already have.  The downside is that you have to cut your sample size in half, which leads to bigger swings in your results, meaning you have to repeat the process multiple times to be sure that your methodology didn’t just get lucky on one round.
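
For readers who want to see the mechanics, here is a minimal sketch of repeated split validation on fabricated data (the 464-row shape just mirrors the sample size mentioned above); the model being validated is a plain least-squares regression, not any of the actual models compared below:

```python
import numpy as np

def split_validate(X, y, fit, predict, n_rounds=20, seed=0):
    """Repeated 'practice set' / 'test set' validation as described above:
    derive the model on one random half, score it on the other half, and
    average the out-of-sample correlation over several rounds."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_rounds):
        idx = rng.permutation(len(y))
        half = len(y) // 2
        practice, test = idx[:half], idx[half:]
        model = fit(X[practice], y[practice])
        preds = predict(model, X[test])
        scores.append(np.corrcoef(preds, y[test])[0, 1])
    return float(np.mean(scores))

# Fake stand-in data: 464 "seasons", 10 team stats, next-season wins driven by
# one probative variable plus a lot of noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(464, 10))
y = 8 + 2 * X[:, 0] + rng.normal(scale=3, size=464)

# A plain least-squares regression (with intercept) as the model being validated.
fit = lambda X, y: np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]
predict = lambda beta, X: np.c_[np.ones(len(X)), X] @ beta

print("average out-of-sample correlation:", round(split_validate(X, y, fit, predict), 3))
```

Passing this kind of test reliably is what separates a model that merely explains the data it was built on from one that can be expected to predict new seasons.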

For this model, the main method I am going to use to evaluate predictions is a simple correlation between predicted outcomes and actual outcomes.  The dependent variable (the variable I am trying to predict) is the next season’s wins.  As a baseline, I created a linear regression against SRS, or “Simple Rating System,” which is PFR’s term for margin of victory adjusted for strength of schedule.  This is the single most probative common statistic when it comes to predicting the next season’s wins, and as I’ve said repeatedly, beating a regression on one highly probative variable can be a lot of work for not much gain.  To earn any bragging rights as a model-maker, I think you should be able to beat the linear SRS predictions by at least 5%, since that’s approximately the edge you would need to win money gambling against it in a casino.  For further comparison, I also created a “Massive Linear” model, which uses the majority of the variables that go into the neural network (excluding collinear variables and variables that have almost no predictive value).  For the ultimate test, I’ve created one model that is a linear regression using only the most probative variables, AND I allowed it to use the whole sample space (that is, I allowed it to cheat and use the same data that it is predicting to build its predictions).  For my “simple” neural network, of course, I didn’t do any variable-weighting or analysis myself, and it required very little configuration:  I used a very slow “learning rate” (.025, if that means anything to you) with a very high number of learning cycles (5000), with decay on.  For the validated models, I repeated this process about 20 times and averaged the outcomes.  I have also included the results from running the data through the “Koko” model, and added results from the last 2 years of Football Outsiders predictions.  As you will see, the neural network was able to beat the other models fairly handily:

The Football Outsiders numbers obviously do not go back to 1994.  Note that Koko actually performs on par with F.O. overall, though both are pretty weak compared to the SRS regression or the cheat regression.  “Koko” performed very well last season, posting a .560 correlation, though apparently last season was highly “predictable,” as all of the models based on previous patterns performed extremely well.  Note also that the Massive Linear model performs poorly: this is a result of overfitting, as explained above.
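
To show how such a comparison can be wired up, here is a rough sketch in scikit-learn.  Everything in it is a stand-in: the data is randomly generated, the network is scored in-sample (so the printed numbers mean nothing), and the post doesn’t say which tool was actually used; the hyperparameters simply echo the slow learning rate and high cycle count described above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Fabricated stand-ins for the real inputs: 60 prev-season stat columns, SRS,
# prev-season wins, and next-season wins as the target.
rng = np.random.default_rng(2)
n = 464
stats = rng.normal(size=(n, 60))
srs = stats[:, 0] * 5 + rng.normal(size=n)
prev_wins = np.clip(8 + srs / 2 + rng.normal(size=n), 0, 16)
next_wins = np.clip(8 + srs / 3 + 2.5 * rng.normal(size=n), 0, 16)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# "Koko": 6 wins plus a quarter of last season's wins.
koko = 6 + prev_wins / 4

# One-variable linear regression on SRS (fit in-sample here; a real comparison
# would use the repeated split validation sketched earlier).
slope, intercept = np.polyfit(srs, next_wins, 1)
srs_pred = intercept + slope * srs

# A small neural network configured roughly as described above
# (slow learning rate, many learning cycles, some weight decay).
net = MLPRegressor(hidden_layer_sizes=(20,), learning_rate_init=0.025,
                   max_iter=5000, alpha=1e-3, random_state=0)
net_pred = net.fit(stats, next_wins).predict(stats)

print("Koko:          ", round(corr(koko, next_wins), 3))
print("SRS regression:", round(corr(srs_pred, next_wins), 3))
print("Neural network:", round(corr(net_pred, next_wins), 3), "(in-sample)")
```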

Now here is where it gets interesting.  When I first envisioned this post, I was planning to title it “Why I Don’t Make Predictions; And: Predictions!” — on the theory that, given the extreme variance in the sport, any highly-accurate model would probably produce incredibly boring results.  That is, most teams would end up relatively close to the mean, and the “better” teams would normally just be the better teams from the year before.  But when I applied the neural network to the data for this season, I was extremely surprised by its apparent boldness:


I should note that the numbers will not add up perfectly as far as divisions and conferences go.  In fact, I slightly adjusted them proportionally to make them fit the correct number of games for the league as a whole (which should have little effect on its predictive power, and possibly a positive one).  SkyNet does not know the rules of football or the structure of the league, and its main goal is to make the most accurate predictions on a team-by-team basis, and then destroy humanity.

Wait, what?  New Orleans struggling to make the playoffs?  Oakland with a better record than San Diego?  The Jets as the league’s best team?  New England is out?!?  These are not the predictions of a milquetoast forecaster, so I am pleased to see that my simple creation has gonads.  Of course there is obviously a huge amount of variance in this process, and a .43 correlation still leaves a lot to chance. But just to be completely clear, this is exactly the same model that soundly beat Koko, Football Outsiders, and several reasonable linear regressions — some of which were allowed to cheat – over the past 15 years.  In my limited experience, neural networks are often capable of beating conventional models even when they produce some bizarre outcomes:  For example, one of my early NBA playoff wins-predicting neural networks was able to beat most linear regressions by a similar (though slightly smaller) margin, even though it predicted negative wins for several teams.  Anyway, I look forward to seeing how the model does this season.  Though, in my heart of hearts, if the Jets win the Super Bowl, I may fear for the future of mankind.

A list of all the input variables, after the jump:


Hidden Sources of Error—A Back-Handed Defense of Football Outsiders

So I was catching up on some old blog-reading and came across this excellent post by Brian Burke, Pre-Season Predictions Are Still Worthless, showing that the Football Outsiders pre-season predictions are about as accurate as picking 8-8 for every team would be, and that a simple regression based on one variable — 6 wins plus 1/4 of the previous season’s wins — is significantly more accurate.

While Brian’s anecdote about Billy Madison humorously skewers Football Outsiders, it’s not entirely fair, and I think these numbers don’t prove as much as they may appear to at first glance.  Sure, a number of conventional or unconventional conclusions people have reached are probably false, but the vast majority of sports wisdom is based on valid causal inferences with at least a grain of truth.  The problem is that people have a tendency to over-rely on the various causes and effects that they observe directly, conversely underestimating the causes they cannot see.

So far, so obvious.  But these “hidden” causes can be broken down further, starting with two main categories, which I’ll call “random causes” and “counter-causes”:

“Random causes” are not necessarily truly random, but they do not bias your conclusions in any particular direction.  This category is the truly random combined with the may-as-well-be-random, and it generates the inherent variance of the system.

“Counter-causes” are those which you may not see, but which relate to your variables in ways that counteract your inferences.  The salary cap in the NFL is one of the most ubiquitous offenders:  E.g., an analyst sees a very good quarterback, and for various reasons believes that a QB with that particular skill-set is worth an extra 2 wins per season.  That QB is obtained by an 8-8 team in free agency, so the analyst predicts that team will win 10 games.  But in reality, the team that signed that quarterback had to pay handsomely for that +2 addition, and may have had to cut 2 wins’ worth of players to do it.  If you imagine this process repeating itself over time, you will see that the correlation between QBs with those skills and their teams’ actual winrates may be small or non-existent (in reality, of course, the best quarterbacks are probably underpaid relative to their value, so this is not a problem).  In closed systems like sports, these sorts of scenarios crop up all the time, and thus it is not uncommon for a perfectly valid and logical-seeming inference to be, systematically, dead wrong (by which I mean that it not only leads to an erroneous conclusion in a particular situation, but will lead to bad predictions routinely).
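
A toy simulation with made-up numbers shows how this plays out: the “+2 wins” inference is built directly into the data, yet the observed correlation between having the quarterback and winning comes out near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                   # hypothetical team-seasons
has_elite_qb = rng.random(n) < 0.3           # 30% of teams sign the +2-win QB (made up)
qb_boost = 2.0 * has_elite_qb                # the effect the analyst correctly identifies
cap_cost = 2.0 * has_elite_qb                # the ~2 wins of roster cut to afford him
other_talent = rng.normal(0, 2, n)           # everything else
wins = 8 + qb_boost - cap_cost + other_talent

# The valid inference ("this QB adds 2 wins") leaves no trace in the correlation,
# because the counter-cause exactly offsets it.
print("corr(has elite QB, wins):", round(np.corrcoef(has_elite_qb, wins)[0, 1], 3))
```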

So how does this relate to Football Outsiders, and how does it amount to a defense of their predictions?  First, I think the suggestion that FO may have created “negative knowledge” is demonstrably false:  The key here is not to be fooled by the stat that they could barely beat the “coma patient” prediction of 8-8 across the board.  8 wins is the most likely outcome for any team ex ante, and every win above or below that number is less and less likely.  E.g., if every outcome were the result of a flip of a coin, your best strategy would be to pick 8-8 for every team, and picking *any* team to go 10-6 or 12-4 would be terrible.  Yet Football Outsiders (and others) — based on their expertise — pick many teams to have very good and very bad records.  The fact that they break even against the coma patient shows that their expertise is worth something.
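
The coin-flip point is easy to quantify: under a pure 50/50 model of a 16-game season, 8-8 is the modal record, and the extreme records that FO sometimes predicts are far less likely.

```python
from math import comb

# Probability of each record in a 16-game season if every game were a coin flip.
for wins in (8, 10, 12, 14):
    p = comb(16, wins) * 0.5 ** 16
    print(f"{wins}-{16 - wins}: {p:.3f}")
```

So a forecaster who ever strays far from 8-8 gives up a lot of that “coma patient” safety; breaking even with the coma patient while making bold picks suggests the picks carried real information.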

Second, I think there’s no shame in being unable to beat a simple regression based on one extremely probative variable:  I’ve worked on a lot of predictive models, from linear regressions to neural networks, and beating a simple regression can be a lot of work for marginal gain (which, combined with the rake, is the main reason that sports-betting markets can be so tough).

Yet, getting beaten so badly by a simple regression is a definite indicator of systematic error — particularly since there is nothing preventing Football Outsiders from using a simple regression to help them make their predictions. Now, I suspect that FO is underestimating football variance, especially the extent of regression to the mean.  But this is a blanket assumption that I would happily apply to just about any sports analyst — quantitative or not — and is not really of interest.  However, per the distinction I made above, I believe FO is likely underestimating the “counter causes” that may temper the robustness of their inferences without necessarily invalidating them entirely.  A relatively minor bias in this regard could easily lead to a significant drop in overall predictive performance, for the same reason as above:  the best and worst records are by far the least likely to occur.  Thus, *ever* predicting them, and expecting to gain accuracy in the process, requires an enormous amount of confidence.  If Football Outsiders has that degree of confidence, I would wager that it is misplaced.

Favre’s Not-So-Bad Interception

This post on Advanced NFL Stats (which is generally my favorite NFL blog), quantifying the badness of Brett Favre’s interception near the end of regulation, is somewhat revealing of a subtle problem I’ve noticed with simple win-share analysis of football plays.  To be sure, Favre’s interception “cost” the Vikings a chance to win the game in regulation, and after a decent return, even left a small chance of the Saints winning before overtime.  So in an absolute sense, it was a “bad” play, which is reflected by Brian’s conclusion that it cost the Vikings .38 wins.  But I think there are a couple of issues with that figure that are worth noting:

First, while it may have cost .38 wins versus the start of that play, a more important question might be how bad it was on the spectrum of possible outcomes.  For example, an incomplete pass still would not have left the Vikings in a great position, as they were outside of field goal range with enough time on the clock to run probably only one more play before making a FG attempt.  Likewise, if they had run the ball instead — with the Saints seemingly keyed up for the run — it is unlikely that they would have picked up the necessary yards to end the game there either.  It is important to keep in mind that many other negative outcomes, like a sack or a run for minus yards, would be nearly as disastrous as the interception.  In fact, by the nature of the position the Vikings were in, most “bad” outcomes would be hugely bad (in terms of win-shares), and most “good” outcomes would be hugely good.

The formal point here is that while Favre’s play was bad in absolute terms, it wasn’t much worse than a large percentage of other possible outcomes.  For an extreme comparison, imagine a team with 4th and goal at the 1 with 1 second left in the game, needing a touchdown to win, and the quarterback throws an incomplete pass.  The win-shares system would grade this as a terrible mistake!  I would suggest that a better way to quantify this type of result might be to ask the question: how many standard deviations worse than the mean was the outcome?  In the 4th down case, I think it’s hard to make either a terrible mistake or an incredible play, because practically any outcome is essentially normal.  Similarly, in the Favre case, while the interception was a highly unfavorable outcome, it wasn’t nearly as drastic as the basic win-shares analysis might make it seem.
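
Here is a rough sketch of that standard-deviations framing, using the 40/40/20 outcome split floated in the next paragraph; the win-share deltas other than the -.38 figure are invented for illustration.

```python
import numpy as np

# Hypothetical distribution of outcomes for that pass attempt, in win-share
# terms relative to the start of the play. Only the -0.38 interception figure
# comes from the post; the other deltas (and the 40/40/20 split) are stand-ins.
outcomes = {                     # (win-share delta, probability)
    "completion":   (+0.45, 0.40),
    "incompletion": (-0.05, 0.40),
    "interception": (-0.38, 0.20),
}
deltas = np.array([d for d, _ in outcomes.values()])
probs = np.array([p for _, p in outcomes.values()])

mean = float(np.sum(probs * deltas))
std = float(np.sqrt(np.sum(probs * (deltas - mean) ** 2)))
z_interception = (outcomes["interception"][0] - mean) / std

print(f"expected win-share delta of the play: {mean:+.3f}")
print(f"interception: {z_interception:+.2f} standard deviations from the mean")
```

Compare that with the 4th-and-goal example, where the spread of possible outcomes is so wide that even an incompletion lands only about one standard deviation below the mean (under a roughly coin-flip chance of scoring), so no single result can register as a catastrophe.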

Second, to rate this play based on the actual result is, shall we say, a little results-oriented.  As should be obvious, a completion of that length would have been an almost sure victory for the Vikings, so it’s unclear whether Favre’s throw was even a bad decision.  Considering they were out of field goal range at the start of the play, if the distribution of outcomes of the pass were 40% completions, 40% incompletions, and 20% interceptions, it would easily have been a win-maximizing gamble.  Regardless of the exact distribution ex ante, the -.38 wins outcome is way on the low end of the possible outcomes, especially considering that it reflects a longer than average return on the pick.  As should be obvious, many interceptions are the product of good quarterbacking decisions (I may write separately at a later point on the topic “Show me a quarterback that doesn’t throw interceptions, and I’ll show you a sucky quarterback”), and in this case it is not clear to me which type this was.

This should not be taken as a criticism of Advanced NFL Stats’ methodology. I’m certain Brian understands the difference between the resulting win-shares a play produces and the question of whether that result was the product of a poor decision.  When it comes to 4th downs, for example, everyone with even an inkling of analytical skill understands that Belichick’s infamously going for it against the Colts was definitely the win-maximizing play, even though it had a terrible result.  It doesn’t take a very big leap from there to realize that the same reasoning applies equally to players’ decisions.

These issues partly relate to my broader agenda (which I will hopefully expand on significantly in the future): while I believe win-share analysis is the best — and in some sense the only — way to evaluate football decisions, I am also concerned with the many complications that arise when attempting to expand its purview to player evaluation.