So in case any of you haven’t been following, the 2012 edition of the ESPN True Hoop Stat Geek Smackdown is underway. Now, obviously this competition shouldn’t be taken too seriously, as it’s roughly the equivalent of picking a weekend’s worth of NFL games, and last year I won only after picking against my actual opinion in the Finals (with good reason, of course). That said, it’s still a lot of fun to track, and basketball is a deterministic-enough sport that I do think skill is relevant. At least enough that I will talk shit if I win again.
To that end, the first round is going pretty well for me so far. Like last year, the experts are mostly in agreement. While there is a fair amount of variation in the series-length predictions, only two matchups had any dissent as to the likely winner: the 6 actual stat geeks split 4-2 in favor of the Lakers over the Nuggets, and 3-3 between the Clippers and the Grizzlies. As it happens, I have both Los Angeles teams (yes, I am a homer), as does Matthew Stahlhut (though my having the Lakers in 5 instead of 7 gives me a slight edge for the moment). No one has gained any points on anyone else yet, but here is my rough account of possible scenarios:
| Outcome | Points Scored (Relative) |
| --- | --- |
| Bulls in 7 | -2 v. Arturo |
| Heat in 5 | +2 v. Hollinger |
| Heat in 6 | -2 v. Hollinger |
| Pacers in 5 | +2 v. Hollinger, Ilardi, Ma |
| Celtics in 5 | -2 v. Arturo |
| Celtics in 6 | +2 v. Ma, Arturo |
| Lakers in 5 | +7 v. Arturo, Ilardi; +2 v. Stahlhut, Ma, Hollinger |
| Lakers in 6 | +5 v. Arturo, Ilardi |
| Lakers in 7 | +5 v. Arturo, Ilardi; -2 v. Ma, Hollinger, Stahlhut |
| Nuggets in 7 | -5 v. Arturo, Ilardi |
| Clippers in 5 or 7 | +5 v. Hollinger, Ilardi, Ma |
| Clippers in 6 | +7 v. Hollinger, Ilardi, Ma |
| Grizzlies in 7 | -5 v. Ma; -7 v. Hollinger, Ilardi |
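For anyone who wants to check the table, here's a minimal sketch of how these relative-point swings come out. The 5-points-for-the-winner, 2-point exact-length-bonus rubric is inferred from the scenarios above, not taken from an official Smackdown source:

```python
def score(pick, result, win_pts=5, exact_bonus=2):
    """Smackdown-style points for one series pick.
    `pick` and `result` are (team, games) tuples. The 5/2 rubric is
    inferred from the scenario table, not from an official source."""
    team, games = pick
    pts = 0
    if team == result[0]:
        pts += win_pts          # picked the right winner
        if games == result[1]:
            pts += exact_bonus  # nailed the series length too
    return pts

# Example: Lakers-Nuggets, with me on Lakers in 5 and Arturo on Nuggets in 7.
result = ("Lakers", 5)
me, arturo = ("Lakers", 5), ("Nuggets", 7)
print(score(me, result) - score(arturo, result))  # 7, matching "+7 v. Arturo"
```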
On to some odds and ends:
The Particular Challenges of Predicting 2012
Making picks this year was a bit harder than in years past. At one point I seriously considered picking Dallas against OKC (in part for strategic purposes), before reason got the better of me. Abbott only published part of my comment on the series, so here’s the full version I sent him:
Throughout NBA history, defending champions have massively over-performed in the playoffs relative to their regular season records, so I wouldn’t count Dallas out. In fact, the spot Dallas finds itself in is quite similar to Houston’s in 1995, and this season’s short lead-time and compressed schedule should make us particularly wary of the usual battery of predictive models.
Thus, if I had to pick which of these teams is more likely to win the championship, I might take Dallas (or at least it would be a closer call). But that’s a far different question from who is most likely to win this particular series: Oklahoma City is simply too solid and Dallas too shaky to justify an upset pick. E.g., my generic model makes OKC a >90% favorite, so even a 50:50 chance that Dallas really is the sleeping giant Mark Cuban dreams about probably wouldn’t put them over the top.
That last little bit is important: The “paper gap” between Dallas and OKC is so great that even if Dallas were considerably better than they appeared during the regular season, that would only make them competitive, while if they were about as good as they appeared, they would be a huge dog (this kind of situation should be very familiar to any serious poker players out there).
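To make that concrete, here's a back-of-the-envelope version of the argument. The >90% figure and the 50:50 weighting come from the quote above; the 60% "sleeping giant" series-win probability is purely an illustrative assumption (that's what "only competitive" might look like):

```python
# Mixture of two hypotheses about Dallas (illustrative numbers).
p_giant = 0.5            # chance Dallas really is a "sleeping giant" (from the 50:50 above)
p_win_if_giant = 0.60    # series-win prob if so -- assumed; "competitive," not dominant
p_win_if_not = 0.10      # series-win prob otherwise (model makes OKC a >90% favorite)

# Total probability across the two hypotheses.
p_dallas = p_giant * p_win_if_giant + (1 - p_giant) * p_win_if_not
print(f"P(Dallas wins series) = {p_dallas:.2f}")  # 0.35 -- still well under 50%
```

Even spotting Dallas a coin flip on being secretly great, the series pick stays OKC.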
But why on earth would I think Dallas might be any good in the first place? Well, I’ll discuss more below why champions should never be ignored, but the “paper difference” this year should be particularly inscrutable. The normal methods for predicting playoff performance (both my own and others) are particularly ill-suited for the peculiar circumstances of this season:
- Perhaps most obviously, fewer regular season games means smaller sample sizes. In turn, this means that sample-sensitive indicators (like regular season statistics) should have less persuasive value relative to non-sensitive ones (like championship pedigree). It also affects things like head-to-head record, which is probably more valuable than a lot of stats people think, though less valuable than a lot of non-stats people think. I’ve been working on some research about this, but for an example, see this post on what I thought was a market error w/r/t Dallas vs. Miami in game 6, partly b/c of the Bayesian value of Dallas’s head-to-head advantage.
- Injuries are a bigger factor. It’s not just that there are more of them (which is debatable); there’s also less flexibility to manage them effectively: e.g., there’s obv less time to rehab players, but also less time to develop new line-ups and workarounds or make other necessary adjustments. In other words, a very good team might be hurt more than usual by an injury to a mere role-player.
- What is the most reliable data? Two things I discussed last year were that (contra unconventional wisdom) Win% is more reliable for post-season predictions than MOV-type stats, and that (contra conventional wisdom) early season performance is typically more predictive than late season performance. But both of these are undermined by the short season. The fundamental value of MOV is as a proxy for W% that is more accurate for smaller sample sizes. And the predictive power of early-season performance most likely stems from its being more representative of playoff basketball: e.g., players are more rested and everyone tries their hardest. However, not only are these playoffs not your normal playoffs, but this season was thrown together so quickly that a lot of teams had barely figured out their lineups by the quarter-pole. While late-season records have the same problems as usual, they may be more predictive just from being more similar to years past.
- Finally, it’s not just the nature of the data, but the nature of the underlying game as well. For example, in a lockout year, teams concerned with injury may be quicker to pull starting players in less lopsided scenarios than usual, making MOV less useful, etc. I won’t go into every possible difference, but here’s a related Twitter exchange:
@skepticalsports Pop is the lockout-ball king. DNP-OLD motherf—er!
— Ignarus (@thegreatIgnarus) April 18, 2012
Which brings us to the next topic:
The Simplest Playoff Model You’ll Never Beat
The thing that Henry Abbott most highlighted from my Smackdown picks (which he quoted at least 3 times in 3 different places) was my little piece of dicta about the Spurs:
I have a ‘big pot’ playoff model (no matchups, no simulations, just stats and history for each playoff team as input) that produces some quirky results that have historically out-predicted my more conventional models. It currently puts San Antonio above 50 percent. Not just against Utah, but against the field. Not saying I believe it, but there you go.
I really didn’t mean for this to be taken so seriously: it’s just one model. And no, I’m not going to post it. It’s experimental, and it’s old and needs updating (e.g., I haven’t adjusted it to account for last season yet).
But I can explain why it loves the Spurs so much: it weights championship pedigree very strongly, and the Spurs this year are the only team near the top that has any.
Now some stats-loving people argue that the “has won a championship” variable is unreliable, but I think they are precisely wrong. Perhaps this will change going forward, but, historically, there are no two ways to cut it: No matter how awesomely designed and complicated your models/simulations are, if you don’t account for championship experience, you will lose to even the most rudimentary model that does.
So, as a case in point, I came up with this 2-step method for picking NBA Champions:
- If any teams within 5 games of the best record have won a title within the past 5 years, pick the one whose title is most recent.
- Otherwise, pick the team with the best record.
Following this method, you would correctly pick the eventual NBA Champion in 64.3% of years since the league moved to a 16-team playoff in 1984 (with due respect to the slayer, I call this my “5-by-5” model).
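For concreteness, the 2-step rule can be sketched in code. The team data here is hypothetical, and I'm using raw win totals as a crude stand-in for "games back":

```python
def five_by_five_pick(teams, current_year):
    """Pick a champion via the '5-by-5' rule.
    `teams` is a list of dicts: {"name", "wins", "last_title" (year or None)}.
    """
    best_wins = max(t["wins"] for t in teams)
    # Step 1: recent champions within 5 games of the best record.
    contenders = [
        t for t in teams
        if best_wins - t["wins"] <= 5
        and t["last_title"] is not None
        and current_year - t["last_title"] <= 5
    ]
    if contenders:
        # Among them, take the most recent champion.
        return max(contenders, key=lambda t: t["last_title"])["name"]
    # Step 2: otherwise, take the best record.
    return max(teams, key=lambda t: t["wins"])["name"]

# Hypothetical field:
field = [
    {"name": "Team A", "wins": 62, "last_title": None},
    {"name": "Team B", "wins": 58, "last_title": 2010},
]
print(five_by_five_pick(field, 2012))  # Team B: a recent champ within 5 games
```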
Of course, thinking back, it seems like picking the winner is sometimes easy, as the league often has an obvious “best team” that is extremely unlikely to lose a 7-game series. So perhaps the better question to ask is: How much do you gain by including the championship test in step 1?
The answer is: a lot. Over the same period, the team with the league’s best record has won only 10/28 championships, or ~35%. So the 5-by-5 model almost doubles your hit rate.
And in case you’re wondering, using Margin of Victory, SRS, or any other advanced stat instead of W-L record doesn’t help: other methods range from doing slightly worse to slightly better. While there may still be room to beef up the complexity of your predictive model (advanced stats, situational simulations, etc.), your gains will be comparatively marginal at best. Moreover, there is also room for improvement on the other side: by setting up a more formal and balanced tradeoff between regular season performance and championship history, the macro-model can get up to 70+% without danger of significant over-fitting.
In fairness, I should note that the 5-by-5 model has had a bit of a rough patch recently—but, in its defense, so has every other model. The NBA has had some wacky results recently, but there is no indication that stats have supplanted history. Indeed, if you break the historical record into groups of more-predictable and less-predictable seasons, the 5-by-5 model trumps pure statistical models in all of them.
Uncertainty and Series Lengths
Finally, I’d like to quickly address the series-length analysis I put forward last year, which I completely botched. Not only did I make a really elementary mistake in my explanation (which an emailer thankfully pointed out), but I’ve since come to reject my ultimate conclusion as well.
Aside from strategic considerations, I’m now fairly certain that picking the home team in 5 or the away team in 6 is always right, no matter how close you think the series is. I first found this result when running playoff simulations that included margin for error (in other words, accounting for the fact that teams may be better or worse than their stats indicate, or that they may match up more or less favorably than the underlying records suggest). At first I had some difficulty squaring this with the empirical data, which still showed “home team in 6” as the most common outcome. But I think I’ve now figured that problem out: a lot of those outcomes came in spots where you should have picked the other team to begin with. Despite the simple-sounding conclusion, it’s a rich and interesting topic, so I’ll save the bulk of it for another day.
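A minimal Monte Carlo sketch of the margin-for-error idea looks something like this. All the specific numbers (per-game win probability, the 4% home-court swing, the uncertainty level) are illustrative assumptions, not my actual model:

```python
import random

def simulate_series_lengths(p_home_game, sigma=0.08, trials=100_000, seed=0):
    """Distribution of (winner, length) for a 2-2-1-1-1 best-of-7 series,
    where the favorite's true per-game strength is uncertain."""
    rng = random.Random(seed)
    counts = {}
    home_slots = [True, True, False, False, True, False, True]
    for _ in range(trials):
        # Margin for error: true strength is drawn around the measured value.
        p = min(0.95, max(0.05, rng.gauss(p_home_game, sigma)))
        h = a = 0
        for game, at_home in enumerate(home_slots, start=1):
            # Crude +/- 4% home-court swing (assumed).
            p_game = p + 0.04 if at_home else p - 0.04
            if rng.random() < p_game:
                h += 1
            else:
                a += 1
            if h == 4 or a == 4:
                key = ("home" if h == 4 else "away", game)
                counts[key] = counts.get(key, 0) + 1
                break
    return counts

dist = simulate_series_lengths(p_home_game=0.62)
best = max(dist, key=dist.get)
print(best, round(dist[best] / sum(dist.values()), 3))
```

Sweeping `p_home_game` and `sigma` is the interesting part: it shows how the most likely single outcome shifts between the "home in 5" and "away in 6" picks as your uncertainty about the matchup grows.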