So in Monday’s post, I included my “5-by-5” method (I probably shouldn’t call it a “model”) for picking NBA champions. In case you missed it, here it is again:
- If there are any teams within 5 games of the best record that have won a title within the past 5 years, pick the most recent winner.
- Otherwise, pick the team with the best record.
In the 28 seasons since the NBA moved to a 16-team playoff format, this method correctly picked the eventual champion 18 times (64%), comparing favorably to the 10/28 (36%) success rate of the team with the league’s best record.
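For concreteness, here’s what the rule looks like in code: a minimal sketch with hypothetical input shapes (the post doesn’t specify any implementation, and the tie-breaking is my own assumption):

```python
def five_by_five_pick(standings, recent_champs):
    """Sketch of the 5-by-5 rule.

    standings: dict mapping team -> regular-season wins.
    recent_champs: dict mapping team -> year of most recent title,
                   restricted to titles from the past 5 seasons.
    """
    best_wins = max(standings.values())
    # Teams within 5 games of the league's best record...
    contenders = [t for t, w in standings.items() if best_wins - w <= 5]
    # ...that have won a title within the past 5 years: take the most recent winner.
    champs = [t for t in contenders if t in recent_champs]
    if champs:
        return max(champs, key=lambda t: recent_champs[t])
    # Otherwise, the team with the best record (ties broken arbitrarily here).
    return max(standings, key=standings.get)
```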
Henry Abbott blogged about it on ESPN yesterday, raising the obvious follow-up:
The question is, why? Why are teams that have won before so much better at winning again? I’ll kick off the brainstorming:
- Maybe most teams fall short of their potential because of team dynamics of selfishness — and maybe champions are the teams that know how to move past that.
- Maybe there are only a few really special coaches, and these teams have them.
- Maybe there are only a few really special teams, and these teams are them.
- Maybe there are special strategies to the playoffs that only some teams know. Not even sure what I’m talking about here — Sleep schedules? Nutrition? Injury prevention?
- Maybe champions get better treatment from referees.
Anyway, it’s certainly fascinating.
UPDATE: John Hollinger with a good point that fits this and other data: Maybe title-winning teams don’t value the regular season much.
Though I think some of these ideas are more on point than others, I won’t try to parse every possibility. On balance, I’m sympathetic to the idea that “winning in the playoffs” has its own skillset, independent of just being good at winning basketball games. Conceptually, it’s not too big a leap from the well-documented idea that winning games has its own skillset independent of scoring and allowing points (though the evidence here is a lot more indirect).
That said, I think the biggest factor behind this result may be a bit less sexy: It may simply be a matter of information reliability.
Winning Championships is Harder than Winning Games
In stark contrast to other team sports, the NBA Playoffs are extremely deterministic. The best team usually wins (and, conversely, the winner is usually the best team). I’ve made this analogy many times before, but I’ll make it again: The NBA playoffs are a lot more like a Major tournament in men’s tennis than any other crowning competition in popular sports.
This is pretty much a function of design: A moderately better team becomes a huge favorite in a 7-game series. So even if the best team is only moderately better than the 2nd-best team, it can be in a dominant position.
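To put a rough number on that, here’s a quick sketch treating each game as an independent coin flip with a fixed per-game win probability for the better team (a simplification, but it makes the point):

```python
from math import comb

def best_of_7(p):
    """Probability the better team wins a best-of-7 series,
    given it wins each game independently with probability p."""
    q = 1 - p
    # Win the clinching 4th game after exactly k losses (k = 0..3).
    return sum(comb(3 + k, k) * p**4 * q**k for k in range(4))

print(best_of_7(0.60))  # ~0.71: a moderate per-game edge
print(best_of_7(0.70))  # ~0.87: a solid per-game edge is series dominance
```

A 60% per-game edge (moderate by regular-season standards) already makes the team a 71% favorite over seven games; at 70% per game, the series is close to a lock.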
Combine this with an uneven distribution of talent (which, incidentally, is probably a function of salary structure), and mix in the empirical reality that the best teams normally don’t change very much from year to year, and it’s unsurprising that “dynasties” are so common.
On the other side of the equation, regular season standings and leaderboards—whether of wins or their most stable proxies—are highly variable. Note that a 95% confidence interval on an 82-game sample (aka, the “margin of error”) is roughly +/- 10 games.
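(For the curious, that ballpark falls out of the binomial distribution for a true .500 team; a back-of-the-envelope check:)

```python
from math import sqrt

n = 82
sd_wins = sqrt(n * 0.5 * 0.5)   # binomial sd for a true .500 team: ~4.5 wins
print(1.96 * sd_wins)           # ~8.9 wins, i.e. roughly +/- 9-10 games
```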
If you think of the NBA regular season as a lengthy 30-team competition for the #1 seed, its structure is much, much less favorable to the best teams than the playoffs: It’s more like a golf tournament than a tennis tournament.
The Rest is Bayes
Obviously better teams win more often and vice versa. It’s just that these results have to be interpreted in a context where all results were not equally likely ex ante. For example, teams that post top records and also have recent championships are far more likely than others to actually be as good as their records indicate. This is pure Bayesian inference.
Quick tangent: In my writing, I often reach a point where I say something along the lines of: “From there, it’s all Bayesian inference.” I recognize that, for a lot of readers, this is barely a step up from an Underpants Gnomes argument. When I go there, it’s pretty much shorthand for “this is where results inform our beliefs about how likely various causes are to be true” (and all that entails).
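To unpack that shorthand with a toy calculation (every number below is invented purely for illustration, not estimated from anything):

```python
# Toy Bayesian update: how much more should we trust a top record
# when the team also has a recent title? All inputs are made up.
prior_champ   = 0.60  # prior that a recent champ with a top record is truly elite
prior_nochamp = 0.20  # prior for a top-record team with no recent title
p_top_if_elite = 0.70 # chance a truly elite team posts a top record
p_top_if_good  = 0.15 # chance a merely good team does (luck, schedule, etc.)

def posterior(prior):
    # Bayes' rule: P(elite | top record)
    num = p_top_if_elite * prior
    return num / (num + p_top_if_good * (1 - prior))

print(posterior(prior_champ))    # ~0.88
print(posterior(prior_nochamp))  # ~0.54
```

Same record, very different conclusions, because the prior (a recent title) was different going in.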
There was an interesting comment on Abbott’s ESPN post, pointing out that the 5-by-5 method only picked 5/14 (35.7%) of champions correctly between 1967 and 1980. While there may be unrelated empirical reasons for this, I think this stat may actually confirm the underlying concept. Structurally, having fewer teams in the playoffs, shorter series lengths, a smaller number of teams in the league—basically any of the structural differences between the two eras I can think of—all undermine the combined informational value of [having a championship + having a top record].
To be fair, there may be any number of things in a particular season that undermine our confidence in this inference (I can think of some issues with this season’s inputs, obv). That’s the tricky part of Bayesian reasoning: It turns on how plausible you thought things were already.
One other thought that might be a corollary to Hollinger’s thought – maybe non-championship teams value the regular season more because they don’t know that they are expending valuable energy when they win those regular season games.
Winning games is harder/more tiring than losing games, so in addition to the idea that some teams can just turn it on at playoff time, it might be the case that some teams are resting during the regular season.
I suppose it would be fairly easy to figure out whether this is the case or not by looking at minutes played for the championship teams that fit the 5+5 model.
But what level of confidence does a 7-game series provide that the “moderately” better team won?
This question is actually a bit more complicated than you might think. In any given series between “moderately” different teams, the better team might only win ~70-75% of the time. But the best team will be more than moderately better than some of its early-round opponents, and may not even have to play the actual 2nd-best team, since that team will only be a “moderate” favorite against the next level of opponents, and so on. Ultimately, you don’t need a huge edge on the field to be a favorite to win the championship.
But then, as per ElGee’s point below, don’t most seasons result in the best team (albeit having the best odds) having a 50% shot?
* “less than” 50%…
Have you or someone else written elsewhere about the +/- 10 games regular-season idea? I’m weak with statistics, but this seems to me to suggest that there’s a 5% chance, for example, that the 1995-96 Bulls could have gone undefeated. Or if the season had played out 100,000,000 times they would likely have gone undefeated in 5,000,000 of them. And won only 62 games in 5,000,000 others. Is this a correct interpretation? I’d love to read more. Thanks!!!
No, it doesn’t work that way. The 10-game “margin of error” of an 82-game season technically only applies to a 41-win team. The further you get from .500, the more the range skews. A rough way to approximate it is to look at the equivalent percentage change between each % and the maximum and minimum possible. So, e.g., 50% +/- 10% is similar to 75% +5% to -15% (20% of the available range in either direction), though note that the likelihood of each value in the range is not equal. It’s similar to what I did for adjusted win % differential in the Rodman series.
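If you want to see the skew directly, a standard 95% Wilson score interval (not the rough proportional method I described above, but it tells the same story) looks like this:

```python
from math import sqrt

def wilson_interval(wins, n, z=1.96):
    """95% Wilson score interval for a team's true win probability."""
    p = wins / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

for wins in (41, 62):  # a .500 team vs. a 62-win team, 82 games each
    lo, hi = wilson_interval(wins, 82)
    print(wins, round(lo * 82, 1), round(hi * 82, 1))
# 41 -> roughly 32 to 50 wins (symmetric around 41);
# 62 -> roughly 54 to 69 wins (more room below than above).
```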
Championship experience drives the model because the very nature and rules of the game change in the playoffs. I would propose that there are more than sufficient differences between the regular season and the playoffs to account for the lack of correlation between them (minutes management, time off, and especially refereeing — regular season fouls become playoff play-ons, and regular season flagrant fouls are just good hard playoff fouls (see Bynum’s takedown of McGee the other night)).
The culture of the NBA worships hard physical playoff basketball – which is why the Nash-led Suns never had a prayer of winning a championship.
Additionally, as much as I love the game, the NBA’s focus on championships makes for an unfair playing field – in what other sport would the league allow players as valuable as Boris Diaw and Stephen Jackson to end up on San Antonio? It’s a front-runner league, no question…
Hi Benjamin — I made the comment about the test period affecting the results on Abbott’s post. Sample size IS an issue here in terms of placing a lot of stock in the predictive power of this method. Obviously, I can’t really demonstrate what I’m saying unless we come back to this post in 40 or 50 years, but let’s use your other point, about the “ease” of winning a 7-game series, to illustrate this.
It’s not that easy, depending on the absolute difference between the teams. A huge difference – sure it’s easy. But I don’t see a lot of evidence for huge differences in many series. To wit:
Let’s suppose a team is closer and closer in quality to its opponent as the postseason unfolds, and its chance of winning a game is:
1st round: 90% every game
2nd round: 70% every game
3rd round: 60% every game
4th round: 55% every game
(Obviously this method ignores home-court, but there’s no need to overly complicate this to make a point.) With such an edge throughout the playoffs, the odds the favored team wins all 4 series are 37.6% (99.7%, 87.4%, 71%, and 60.8% per series). So only about one in three times would we expect a team with such comfortable margins to win out.
Based on game-by-game analysis, and quarter-by-quarter variance, I think 90% for a first-round opponent is quite high. But perhaps the opponent in the Finals isn’t so close to our team. If the distribution looks like this:
1st Round: 75% every game
2nd Round: 70% …
3rd Round: 65% …
4th Round: 65% …
The odds of winning all 4 series are 52%. Basically, when a team has no one really close to it, the odds of winning the title are going to be strong (better than 50%). But that’s being liberal by assuming no matchup at any point is a challenge.
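(A quick simulation sketch checking both scenarios above, under the same assumption of independent games with fixed per-game probabilities:)

```python
import random

def win_series(p):
    """Simulate one best-of-7: does the favorite reach 4 wins first?"""
    wins = losses = 0
    while wins < 4 and losses < 4:
        if random.random() < p:
            wins += 1
        else:
            losses += 1
    return wins == 4

def title_odds(round_probs, trials=200_000):
    """Fraction of simulated postseasons where the favorite wins all 4 rounds."""
    titles = sum(all(win_series(p) for p in round_probs) for _ in range(trials))
    return titles / trials

print(title_odds([0.90, 0.70, 0.60, 0.55]))  # ~0.376
print(title_odds([0.75, 0.70, 0.65, 0.65]))  # ~0.52
```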
The point: It’s actually quite easy to lose a 7-game series given the high-variance nature of basketball. (I wrote about this last week: http://www.backpicks.com/2012/05/01/how-often-does-the-worse-team-win-sample-size-and-variance-in-a-7-game-series/ ) I’m using liberal numbers here, but in a sample of 28, it’s not hard to get results that say “40% of the time the best team won” or “60% of the time the best team won.”
Now, when you say the ’67–’80 data supports your theory, you lose me. The more series there are, the less likely it is that the best team wins. What am I missing there?
I’d be interested in knowing regular season and past playoff volatility of performance. Are champs and repeat champs high or low on past playoff volatility? Is their playoff-performance volatility or regular-season volatility notably different from other contenders’?