So in Monday’s post, I included my “5-by-5” method (I probably shouldn’t call it a “model”) for picking NBA champions. In case you missed it, here it is again:
- If there are any teams within 5 games of the best record that have won a title within the past 5 years, pick the most recent winner.
- Otherwise, pick the team with the best record.
In the 28 seasons since the NBA moved to a 16-team playoff format, this method correctly picked the eventual champion 18 times (64%), comparing favorably to the 10/28 (36%) success rate of the team with the league’s best record.
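For concreteness, here's roughly what the rule looks like in code. This is a minimal sketch: the data structures are my own, and the example inputs are hypothetical, not real standings or title data.

```python
# A minimal sketch of the "5-by-5" rule. Inputs are hypothetical;
# no real standings or title data are used here.

def pick_champion(records, recent_winners):
    """records: team -> regular-season wins.
    recent_winners: team -> year of most recent title,
    restricted to titles won within the past 5 years."""
    best_wins = max(records.values())
    # Step 1: recent champions within 5 games of the best record
    contenders = {team: year for team, year in recent_winners.items()
                  if records.get(team, 0) >= best_wins - 5}
    if contenders:
        # Pick the most recent winner among them
        return max(contenders, key=contenders.get)
    # Step 2: otherwise, the team with the best record
    return max(records, key=records.get)
```

So, e.g., `pick_champion({"A": 60, "B": 57, "C": 50}, {"B": 2009})` picks "B" even though "A" has the better record, because "B" is a recent champ within 5 games of the top.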
Henry Abbott blogged about it on ESPN yesterday, raising the obvious follow-up:
The question is, why? Why are teams that have won before so much better at winning again? I’ll kick off the brainstorming:
- Maybe most teams fall short of their potential because of team dynamics of selfishness — and maybe champions are the teams that know how to move past that.
- Maybe there are only a few really special coaches, and these teams have them.
- Maybe there are only a few really special teams, and these teams are them.
- Maybe there are special strategies to the playoffs that only some teams know. Not even sure what I’m talking about here — Sleep schedules? Nutrition? Injury prevention?
- Maybe champions get better treatment from referees.
Anyway, it’s certainly fascinating.
UPDATE: John Hollinger with a good point that fits this and other data: Maybe title-winning teams don’t value the regular season much.

Though I think some of these ideas are more on point than others, I won’t try to parse every possibility. On balance, I’m sympathetic to the idea that “winning in the playoffs” has its own skillset independent of just being good at winning basketball games. Conceptually, it’s not too big a leap from the well-documented idea that winning games has its own skillset independent of scoring and allowing points (though the evidence is a lot more indirect).
That said, I think the biggest factor behind this result may be a bit less sexy: It may simply be a matter of information reliability.
Winning Championships is Harder than Winning Games
In stark contrast to other team sports, the NBA Playoffs are extremely deterministic. The best team usually wins (and, conversely, the winner is usually the best team). I’ve made this analogy many times before, but I’ll make it again: The NBA playoffs are a lot more like a Major tournament in men’s tennis than any other crowning competition in popular sports.
This is pretty much a function of design: A moderately better team becomes a huge favorite in a 7-game series. So even if the best team is only moderately better than the 2nd-best team, it can be in a dominant position.
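The math behind that claim can be sketched with a simple binomial model, assuming a fixed, independent per-game win probability (and ignoring home court):

```python
from math import comb

def series_win_prob(p, wins_needed=4):
    """Chance of taking a best-of-7 (first to 4 wins), given a fixed
    per-game win probability p and independent games."""
    # Win the clinching game after exactly `losses` losses along the way
    return sum(comb(wins_needed - 1 + losses, losses)
               * p ** wins_needed * (1 - p) ** losses
               for losses in range(wins_needed))

# A team that wins 60% of its games wins the series about 71% of the time
print(round(series_win_prob(0.6), 3))  # 0.71
```

Under this toy model, a merely moderate per-game edge (60/40) already translates into a lopsided series favorite.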
Combine this with an uneven distribution of talent (which, incidentally, is probably a function of salary structure), and mix in the empirical reality that the best teams normally don’t change very much from year to year, and it’s unsurprising that “dynasties” are so common.
On the other side of the equation, regular season standings and leaderboards (whether based on wins or their most stable proxies) are highly variable. Note that a 95% confidence interval on an 82-game sample (aka, the “margin of error”) is roughly +/- 10 games.
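That back-of-envelope figure follows from treating a season as 82 independent coin-flip-ish games; for a true .500 team (where the variance is largest), the standard binomial calculation gives:

```python
from math import sqrt

# Margin of error on 82 games, modeled as independent Bernoulli trials.
# p = 0.5 is the worst case, where win variance is largest.
games = 82
p = 0.5
sd = sqrt(games * p * (1 - p))   # standard deviation of wins, ~4.5
margin = 1.96 * sd               # 95% interval: roughly +/- 9 games
```

About +/- 9 games under these assumptions, close to the rough “+/- 10” quoted above (real teams aren’t exactly .500, and games aren’t fully independent, so this is only a ballpark).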
If you think of the NBA regular season as a lengthy 30-team competition for the #1 seed, its structure is much, much less favorable to the best teams than the playoffs: It’s more like a golf tournament than a tennis tournament.
The Rest is Bayes
Obviously better teams win more often, and vice-versa. It’s just that these results have to be interpreted in a context where all results were not equally likely ex ante. For example, teams that post top records and also have recent championships are far more likely than others to actually be as good as their records indicate. This is pure Bayesian inference.
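To make that concrete, here’s a toy Bayes’ rule update. All of the numbers here are illustrative assumptions, not estimates; the point is only the shape of the inference.

```python
def posterior(prior_best, p_top_record_if_best, p_top_record_if_not):
    """P(truly best | posted a top record), by Bayes' rule."""
    evidence = (prior_best * p_top_record_if_best
                + (1 - prior_best) * p_top_record_if_not)
    return prior_best * p_top_record_if_best / evidence

# Same record evidence, different priors (all numbers made up):
no_title   = posterior(0.10, 0.60, 0.05)  # no recent title: low prior
with_title = posterior(0.40, 0.60, 0.05)  # recent champ: higher prior
```

With these made-up inputs, the identical top record moves the no-title team to roughly a 57% chance of being truly best, but the recent champion to roughly 89%: the same evidence means more when the prior is higher.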
Quick tangent: In my writing, I often reach a point where I say something along the lines of: “From there, it’s all Bayesian inference.” I recognize that, for a lot of readers, this is barely a step up from an Underpants Gnomes argument. When I go there, it’s pretty much shorthand for “this is where results inform our beliefs about how likely various causes are to be true” (and all that entails).
There was an interesting comment on Abbott’s ESPN post, pointing out that the 5-by-5 method only picked 5/14 (35.7%) of champions correctly between 1967 and 1980. While there may be unrelated empirical reasons for this, I think this stat may actually confirm the underlying concept. Fewer playoff teams, shorter series, a smaller league: basically any of the structural differences between the two eras I can think of would undermine the combined informational value of [having a championship + having a top record].
To be fair, there may be any number of things in a particular season that undermine our confidence in this inference (I can think of some issues with this season’s inputs, obv). That’s the tricky part of Bayesian reasoning: It turns on how plausible you thought things were already.