LeBron’s High-Usage Shooting Efficiency (Featuring Adrian Dantley)

As anyone (statistically-inclined or not) can tell you, LeBron James is having a pretty good year. His 26.8 points, 8 rebounds and 7.3 assists per game (through 81 team games) make for another entry in his already stunning portfolio of versatile seasons: This will be his 6th time hitting 25/7/7+, a feat that has only been accomplished 8 times since the merger:

Rk Player Season Age Tm G FGA FG% 3P% FT% PTS TRB AST TS%
1 LeBron James 2012-13 28 MIA 76 1354 .565 .406 .753 26.8 8.0 7.3 .640
2 Michael Jordan* 1988-89 25 CHI 81 1795 .538 .276 .850 32.5 8.0 8.0 .614
3 Larry Bird* 1986-87 30 BOS 74 1497 .525 .400 .910 28.1 9.2 7.6 .612
4 LeBron James 2009-10 25 CLE 76 1528 .503 .333 .767 29.7 7.3 8.6 .604
5 LeBron James 2010-11 26 MIA 79 1485 .510 .330 .759 26.7 7.5 7.0 .594
6 LeBron James 2008-09 24 CLE 81 1613 .489 .344 .780 28.4 7.6 7.2 .591
7 LeBron James 2007-08 23 CLE 75 1642 .484 .315 .712 30.0 7.9 7.2 .568
8 LeBron James 2004-05 20 CLE 80 1684 .472 .351 .750 27.2 7.4 7.2 .554
Provided by Basketball-Reference.com (generated 4/17/2013).

But the thing that sticks out (which stat-heads have been going berserk about) is his shooting, which has been by far the most efficient of his career.  Indeed, it may be one of the greatest shooting efficiency seasons of all time.

While his raw shooting % wouldn't break the top 100 seasons, and his "true" shooting % (adjusted for free throws and 3-point shots) would still only rank about 60th, the key here is that James's shooting efficiency is remarkable for someone with his role as both a primary option and a shooter of last resort.  Generally, when you increase a player's shot-taking responsibilities, it comes at the cost of marginal shot efficiency. This doesn't mean that increasing them is a bad decision or that the player is doing anything wrong—what may be a bad shot "for them" may be a great shot under the circumstances in which they are asked to take it (like when the shot clock is running down, etc).

While there’s no simple stat that describes the degree to which someone is a “shot creator,” we can use usage rate as a decent (though obviously imperfect) proxy. There have been around 150 seasons in which one player “used” >=30% of their team’s possessions:
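
For reference, here are the two stats in question, sketched in Python using the standard Basketball-Reference-style formulas (the 0.44 coefficient approximates the share of free throws that end a possession; note that usage rate needs team totals, which is part of why it's only a rough proxy for "shot creation"):

```python
def true_shooting(pts, fga, fta):
    """TS% = points per two "true" shooting attempts."""
    return pts / (2 * (fga + 0.44 * fta))

def usage_rate(fga, fta, tov, mp, tm_fga, tm_fta, tm_tov, tm_mp):
    """USG% = estimated share of team possessions a player uses while on the floor."""
    return 100 * ((fga + 0.44 * fta + tov) * (tm_mp / 5)) / (
        mp * (tm_fga + 0.44 * tm_fta + tm_tov))

# LeBron's 2012-13 line from the table above (FTA = 535 comes from
# Basketball-Reference and isn't shown in the table):
print(f"{true_shooting(2036, 1354, 535):.3f}")  # -> 0.640
```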

Usage >30% vs. TS%

All player seasons with USG% >= 30. LeBron’s in red.

As we would expect, the best shooting percentages decline as players' usage rates get larger and larger.  The red points are LeBron's seasons (which are pretty excellent across the board), and as we can see from this scatter, his 2012-13 campaign is about to set the record for this group (though we should note that it's NOT a Rodman-esque outlier).

Amazingly, the previous record-holder was Adrian Dantley! Dantley is a Hall of Famer whom I had practically never heard of until his name kept popping up in my historical research as possibly one of the most underrated players ever.

Dantley never made an All-NBA first team or won an NBA championship, but he does extremely well in a variety of plus-minus and statistical plus-minus style metrics. While he didn't have the all-around game of a LeBron James (though he did average a respectable 6-7 rebounds and 3-4 assists in his prime), Dantley was an extremely efficient high-usage shooter. For example, if we look at the top True Shooting seasons among players with a usage rate greater than 27.5%, guess who occupies fully 5 of the top 10 spots:

Rk Player Season Age Tm G FG FGA PTS FG% TS% USG%
1 Amare Stoudemire 2007-08 25 PHO 79 714 1211 1989 .590 .656 28.2
2 Adrian Dantley* 1983-84 27 UTA 79 802 1438 2418 .558 .652 28.2
3 Kevin Durant 2012-13 24 OKC 81 731 1433 2280 .510 .647 29.8
4 LeBron James 2012-13 28 MIA 76 765 1354 2036 .565 .640 30.1
5 Charles Barkley* 1990-91 27 PHI 67 665 1167 1849 .570 .635 29.1
6 Adrian Dantley* 1979-80 23 UTA 68 730 1267 1903 .576 .635 27.8
7 Adrian Dantley* 1981-82 25 UTA 81 904 1586 2457 .570 .631 27.9
8 Adrian Dantley* 1985-86 29 UTA 76 818 1453 2267 .563 .629 30.0
9 Karl Malone* 1989-90 26 UTA 82 914 1627 2540 .562 .626 32.6
10 Adrian Dantley* 1980-81 24 UTA 80 909 1627 2452 .559 .622 28.4
Provided by Basketball-Reference.com (generated 4/17/2013).

Dantley was also in the news a bit last month for working part-time as a crossing guard:


Key quotes from that story:

“It’s not a big thing to me … I just do it. I have a routine. I exercise, I go to work, I go home. I have a spring break next week. I have a summer off, just like when I was a basketball player.”

“I just did it for the kids … I just didn’t want to sit around the house all day.”

“I’ve definitely saved two lives. I’ve almost gotten hit by a car twice. And I would say 70 percent of the people who go across my route are on their telephone or on their BlackBerry, text-messaging. I never would have seen that if I had not been on the post.”

What a character!

Graph of the Day: Second Look at Stan Van?

Granted, “of the Day” isn’t really accurate considering how often I post, but I found it amusing enough to share:

Win % in games played by Dwight Howard. Red years were with Stan Van Gundy coaching.

This came up in a discussion about the possibility that Dwight Howard might not be leveraged optimally on teams that aren't composed mostly of small 3-point shooters. That would have interesting implications.

The Clock: A Graph and Some Thoughts

If you're a hardcore follower of this blog, you know that one of the things I have frequently complained about is the failure of NBA play-by-play data to include the shot clock. It's so obviously important and—relative to other play-by-play data—so easy to track, that it's a complete mystery to me why doing so isn't standard. OTOH, I see stats broken down by "early" and "late" in the shot clock all the time, so someone must have this information.

In the meantime, I went through the 2010 play-by-play dataset and kluged a proxy stat from the actual clock, reflecting the number of seconds passed since a team took possession. Here’s a chart summarizing the number and outcomes of possessions of various lengths:

The orange X’s represent the number of league-wide possessions in which the first shot took place at the indicated time. The red diamonds represent the average number of points scored on those possessions (including from any subsequent shots following an offensive rebound, etc).
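
A rough sketch of that kluge, assuming a hypothetical play-by-play DataFrame with a game-clock column (seconds remaining in the period) and an offensive-team column; all the column names here are made up:

```python
import pandas as pd

def add_possession_clock(pbp: pd.DataFrame) -> pd.DataFrame:
    """Add 'poss_elapsed': a proxy for seconds since the offense took possession."""
    pbp = pbp.sort_values(["game_id", "period", "event_num"]).copy()
    # A new possession starts whenever the offensive team (or game/period) changes
    new_poss = (
        (pbp["off_team"] != pbp["off_team"].shift())
        | (pbp["game_id"] != pbp["game_id"].shift())
        | (pbp["period"] != pbp["period"].shift())
    )
    poss_id = new_poss.cumsum()
    # The game clock counts down, so elapsed = clock at possession start - clock now
    start_clock = pbp.groupby(poss_id)["clock"].transform("first")
    pbp["poss_elapsed"] = start_clock - pbp["clock"]
    return pbp
```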

We should expect there to be a constant trade-off at any given time between taking a shot “now” and waiting for a better one to open up: the deeper you get into a possession, the more your shot standards should drop. And, indeed, this is reflected in the graph by the downward-sloping curve.

For now, I’m just throwing this out there. Though it represents a very basic idea, it is difficult to overstate its importance:

  1. Accounting for the clock can help evaluate players where standard efficiency ratings break down. Most simply, you can take the results of each shot and compare them to the expected value of a shot taken under the same amount of time-pressure. E.g., if someone averages .9 points per attempt with only a couple of seconds left, you can spot value where normal efficiency calculations wouldn't (see the sketch after this list).
  2. Actually, I’ve calculated just such preliminary “value-added” shooting for the entire league (with pretty interesting results), but I’d like to see more accurate data before posting or basing any substantial analysis on it. Among other problems, I think the right side of the curve is overly generous, as it includes possessions where it took a while to get the clock started (a process that is, unfortunately, highly variable), or where time was added and the cause wasn’t scored (also disappointingly common).
  3. Examining this information can tell you some things about the league generally: For example, it's interesting to me that there's a noticeable dip right around where the most shots actually take place (14 to 16 seconds in). Though speculative, I suspect that this is when players are most likely to settle for mediocre 2-point jumpers. Similarly, though with a bit more difficulty, you can compare the actual curve with a derived curve to examine whether NBA players, on the whole, seem to wait too long (or not long enough) to pull the trigger.
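
And here's the shape of the "value-added" calculation from #1, continuing with the hypothetical columns above (and inheriting all the clock-noise caveats from #2):

```python
import pandas as pd

def value_added_shooting(shots: pd.DataFrame) -> pd.Series:
    """shots: one row per first shot of a possession, with 'shooter',
    'poss_elapsed' (seconds into the possession, rounded to the second)
    and 'pts' (points ultimately scored on the possession)."""
    # League-wide expected points for shots taken under the same time-pressure
    expected = shots.groupby("poss_elapsed")["pts"].transform("mean")
    # Positive totals = more points than an average shot at that clock time
    return (
        (shots["pts"] - expected)
        .groupby(shots["shooter"])
        .sum()
        .sort_values(ascending=False)
    )
```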

With better data, the possibilities would open up further (even more so when combined with other play-by-play information, like shot type, position, defense, etc). For example, you could look at the curve for individual players and impute whether they should be more or less aggressive with their shot selection.

So, yeah, if any of you can direct me to a dataset that has what I want, please let me know.

Sports Geek Mecca: Recap and Thoughts, Part 2

This is part 2 of my “recap” of the Sloan Sports Analytics Conference that I attended in March (part 1 is here), mostly covering Day 2 of the event, but also featuring my petty way-too-long rant about Bill James (which I’ve moved to the end).

Day Two

First I attended the Football Analytics panel, despite finding it disappointing last year, and, alas, it wasn't any better. Eric Mangini must be the only former NFL coach willing to attend, b/c they keep bringing him back:

Overall, I spent more of day 2 going to niche panels and research paper presentations, and talking to people.

The last, in particular, was great. For example, I had a fun conversation with Henry Abbott about Kobe Bryant’s lack of “clutch.” This is one of Abbott’s pet issues, and I admit he makes a good case, particularly that the Lakers are net losers in “clutch” situations (yes, relative to other teams), even over the periods where they have been dominant otherwise.

Kobe is kind of a pivotal case in analytics, I think. First, I'm a big believer in "Count the Rings, Son" analysis: That is, leading a team to multiple championships is really hard, and only really great players do it. I also think he stands at a kind of nexus, in that stats like PER give spray shooters like him an unfair advantage, while more finely tuned advanced metrics probably over-punish those same shooters. Part of the burden of Kobe's role is that he has to take a lot of bad shots—the relevant question is how good he is at his job.

Abbott also mentioned that he liked one of my tweets, but didn’t know if he could retweet the non-family-friendly “WTF”:

I also had a fun conversation with Neil Paine of Basketball Reference. He seemed like a very smart guy, but this may be attributable to the fact that we seemed to be on the same page about so many things. Additionally, we discussed a very fun hypo: How far back in time would you have to go for the Charlotte Bobcats to be the odds-on favorites to win the NBA Championship?

As for the "sideshow" panels, they're generally more fruitful and interesting than the ESPN-moderated super-panels, but they offer fewer easy targets for blog-griping. If you're really interested in what went down, there is a ton of info at the SSAC website. The agenda can be found here. Information on the speakers is here. And, most importantly, videos of the various panels can be found here.

Box Score Rebooted

Featuring Dean Oliver, Bill James, and others.

This was a somewhat interesting, though I think slightly off-target, panel. They spent a lot of time talking about new data and metrics, pooh-poohing things like RBI (and even OPS), and touting the brave new world of play-by-play and video tracking, etc. But too much of this concerned moving to a different granularity of data rather than what can be improved at the granularity we already have. Or, in other words:

James acquitted himself a bit on this subject, arguing that boatloads of new data isn’t useful if it isn’t boiled down into useful metrics. But a more general way of looking at this is: If we were starting over from scratch, with a box-score-sized space to report a statistical game summary, and a similar degree of game-scoring resources, what kinds of things would we want to include (or not) that are different from what we have now?  I can think of a few:

  1. In basketball, it's archaic that free throws aren't broken down into bonus free throws and shot-replacing free throws.
  2. In football, I’d like to see passing stats by down and distance, or at least in a few key categories like 3rd and long.
  3. In baseball, I’d like to see “runs relative to par” for pitchers (though this can be computed easily enough from existing box scores).

In this panel, Dean Oliver took the opportunity to plug ESPN’s bizarre proprietary Total Quarterback Rating. They actually had another panel devoted just to this topic, but I didn’t go, so I’ll put a couple of thoughts here.

First, I don't understand why ESPN is pushing this as a proprietary stat. Sure, no one knows how to calculate the regular old-fashioned quarterback rating, but there's a certain comfort in at least knowing it's a real thing. It's a bit like Terms of Service agreements, which people regularly sign without reading: at least you know the terms are out there, someone actually cares enough to read them, and presumably they would raise a stink if you had to sign away your soul.

As for what we do know, I may write more on this come football season, but I have a couple of problems:

One, I hate the "clutch effect." TQBR makes a special adjustment to value clutch performance even more than its generic contribution to winning. If anything, clutch situations in football are so bizarre that they should count less. In fact, when I've done NFL analysis, I've often just cut the 4th quarter entirely, and I've found I get better results. That may sound crazy, but it's a bit like how some very advanced soccer analysts have cut goal-scoring from their models, instead just focusing on how well a player advances the ball toward his goal: even if the former matters more, its unreliability may make it less useful.

Two, I’m disappointed in the way they “assign credit” for play outcomes:

Division of credit is the next step. Dividing credit among teammates is one of the most difficult but important aspects of sports. Teammates rely upon each other and, as the cliché goes, a team might not be the sum of its parts. By dividing credit, we are forcing the parts to sum up to the team, understanding the limitations but knowing that it is the best way statistically for the rating.

I’m personally very interested in this topic (and have discussed it with various ESPN analytics guys since long before TQBR was released). This is basically an attempt to address the entanglement problem that permeates football statistics.  ESPN’s published explanation is pretty cryptic, and it didn’t seem clear to me whether they were profiling individual players and situations or had created credit-distribution algorithms league-wide.

At the conference, I had a chance to talk with their analytics guy who designed this part of the metric (his name escapes me), and I confirmed that they modeled credit distribution for the entire league and are applying it in a blanket way.  Technically, I guess this is a step in the right direction, but it's purely a reduction of noise and doesn't address the real issue.  What I'd really like to see is a recursive model that imputes how much credit various players deserve broadly, then uses those numbers to re-assign credit for particular outcomes (rinse and repeat).
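
To make "rinse and repeat" concrete, here's a toy sketch of that kind of recursive credit loop (emphatically not ESPN's method; the plays and players are entirely made up):

```python
# Each play: (outcome in expected points added, participating players)
plays = [
    (0.8, ["qb", "wr1", "ol"]),
    (-0.3, ["qb", "wr2", "ol"]),
    (1.5, ["qb", "wr1", "ol"]),
]

players = {p for _, group in plays for p in group}
quality = {p: 1.0 for p in players}  # initial guess: everyone equal

for _ in range(20):  # iterate until estimates (hopefully) stabilize
    credited = {p: [] for p in players}
    for outcome, group in plays:
        total = sum(quality[p] for p in group)
        for p in group:
            # Split each play's outcome in proportion to current estimates
            credited[p].append(outcome * quality[p] / total)
    # Re-estimate broad player quality from the credit just assigned
    quality = {p: max(1e-6, 1.0 + sum(v) / len(v)) for p, v in credited.items()}

print(quality)  # wr1 should pull ahead of wr2 after a few passes
```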

Deconstructing the Rebound With Optical Tracking Data

Rajiv Maheswaran, and other nerds.

This presentation was so awesome that I offered them a hedge bet for the “Best Research Paper” award. That is, I would bet on them at even money, so that if they lost, at least they would receive a consolation prize. They declined. And won. Their findings are too numerous and interesting to list, so you should really check it out for yourself.

Obviously my work on the Dennis Rodman mystery makes me particularly interested in their theories of why certain players get more rebounds than others, as I tweeted in this insta-hypothesis:

Following the presentation, I got the chance to talk with Rajiv for quite a while, which was amazing. Obviously they don’t have any data on Dennis Rodman directly, but Rajiv was also interested in him and had watched a lot of Rodman video. Though anecdotal, he did say that his observations somewhat confirmed the theory that a big part of Rodman’s rebounding advantage seemed to come from handling space very well:

  1. Even when away from the basket, Rodman typically moved to the open space immediately following a shot. This is a bit different from how people often think about rebounding, as aggressively attacking the ball (or as being able to near-psychically predict where the ball is going to come down).
  2. Also, rather than simply attacking the board directly, Rodman's first inclination was to insert himself between the nearest opponent and the basket. In theory, this might slightly decrease his chances of getting the ball when it headed in toward his previous position, but would make up for it by dramatically increasing his chances of getting the ball when it went toward the other guy.
  3. Though a little less purely strategic, Rajiv also thought that Rodman was just incredibly good at #2. That is, he was just exceptionally good at jockeying for position.

To some extent, I guess this is just rebounding fundamentals, but I still think it’s very interesting to think about the indirect probabilistic side of the rebounding game.

Live B.S. Report with Bill James

Quick tangent: At one point, I thought Neil Paine summed me up pretty well as a "contrarian to the contrarians."  Of course, I don't think I'm contrary for the sake of contrariness, or that I'm a negative person (I don't know how many times I've explained to my wife that just because I hated a movie doesn't mean I didn't enjoy it!); it's just that my mind is naturally inclined toward considering the limitations of whatever is put in front of it. Sometimes that means criticizing the status quo, and sometimes that means criticizing its critics.

So, with that in mind, I thought Bill James’s showing at the conference was pretty disappointing, particularly his interview with Bill Simmons.

I have a lot of respect for James.  I read his Historical Baseball Abstract and enjoyed it considerably more than Moneyball.  He has a very intuitive and logical mind. He doesn’t say a bunch of shit that’s not true, and he sees beyond the obvious. In Saturday’s “Rebooting the Box-score” panel, he made an observation that having 3 of 5 people on the panel named John implied that the panel was [likely] older than the rest of the room.  This got a nice laugh from the attendees, but I don’t think he was kidding.  And whether he was or not, he still gets 10 kudos from me for making the closest thing to a Bayesian argument I heard all weekend.  And I dutifully snuck in for a pic with him:

James was somewhat ahead of his time, and perhaps he’s still one of the better sports analytic minds out there, but in this interview we didn’t really get to hear him analyze anything, you know, sportsy. This interview was all about Bill James and his bio and how awesome he was and how great he is and how hard it was for him to get recognized and how much he has changed the game and how, without him, the world would be a cold, dark place where ignorance reigned and nobody had ever heard of “win maximization.”

Bill Simmons going this route in a podcast interview doesn’t surprise me: his audience is obviously much broader than the geeks in the room, and Simmons knows his audience’s expectations better than anyone. What got to me was James’s willingness to play along, and everyone else’s willingness to eat it up. Here’s an example of both, from the conference’s official Twitter account:

Perhaps it's because I never really liked baseball, and I didn't really know anyone did any of this stuff until recently, but I'm pretty certain that Bill James had virtually zero impact on my own development as a sports data-cruncher.  When I made my first PRABS-style basketball formula in the early 1990s (which was absolutely terrible, but is still more predictive than PER), I had no idea that any sports stats other than the box score even existed. By the time I first heard the word "sabermetrics," I was deep into my own research, and didn't bother looking into it deeply until maybe a few months ago.

Which is not to say I had no guidance or inspiration.  For me, a big epiphanous turning point in my approach to the analysis of games did take place—after I read David Sklansky's Theory of Poker. While ToP itself was published in 1994, Sklansky's similar offerings date back to the 70s, so I don't think you can draw any broader causal conclusions.

More broadly, I think the claim that sports analytics wouldn't have developed without Bill James is preposterous. Especially if, as I assume we do, we firmly believe we're right.  This isn't like L. Ron Hubbard and Incident II: being for sports analytics isn't like having faith in a person or his religion. It simply means trying to think more rigorously about sports, and using all of the available analytical techniques we can to gain an advantage. Eventually, those who embrace the right approach will win out, as we've begun to see happen in sports, and as has already happened in nearly every other discipline.

Indeed, by his own admission, James liked to stir controversy, piss people off, and talk down to the old guard whenever possible. As far as we know, he may have set the cause of sports analytics back, either by alienating the people who could have helped it gain acceptance, or by setting an arrogant and confrontational tone for his disciples (e.g., the uplifting “don’t feel the need to explain yourself” message in Moneyball). I’m not saying that this is the case or even a likely possibility, I’m just trying to illustrate that giving someone credit for all that follows—even a pioneer like James—is a dicey game that I’d rather not participate in, and that he definitely shouldn’t.

On a more technical note, one of his oft-quoted and re-tweeted pearls of wisdom goes as follows:

Sounds great, right? I mean, not really, I don’t get the metaphor: if the sea is full of ignorance, why are you collecting water from it with a bucket rather than some kind of filtration system? But more importantly, his argument in defense of this claim is amazingly weak. When Simmons asked what kinds of things he’s talking about, he repeatedly emphasized that we have no idea whether a college sophomore will turn out to be a great Major League pitcher.  True, but, um, we never will. There are too many variables, the input and outputs are too far apart in time, and the contexts are too different.  This isn’t the sea of ignorance, it’s a sea of unknowns.

Which gets at one of my big complaints about stats-types generally.  A lot of people seem to think that stats are all about making exciting discoveries and answering questions that were previously unanswerable. Yes, sometimes you get lucky and uncover some relationship that leads to a killer new strategy or to some game-altering new dynamic. But most of the time, you’ll find static. A good statistical thinker doesn’t try to reject the static, but tries to understand it: Figuring out what you can’t know is just as important as figuring out what you can know.

On Twitter I used this analogy:

Success comes with knowing more true things and fewer false things than the other guy.

Graphs of the Day: Bird vs. Bron

One of my favorite stat-nuggets ever is that “Larry Bird never had a losing month.” So, yesterday, I figured it was about time to check whether or not it’s, you know, true.

To do this, I first had to figure out which Celtics games Bird actually played in. The problem there is that his career began well before 1986, meaning the box score data aren’t in Basketball Reference’s database. But they do have images of the actual box scores, like so:

Fortunately, Bird played in every game in his first two seasons, so figuring this out was just a matter of poring through 4 years of these pics: Easy peasy! (I've done more grueling work for even more trivial questions, to be sure.) But more on those results later.

Independently, I was trying to come up with a fun way to illustrate the fact that LeBron James won a lot more games in his last two seasons on the lowly Cleveland Cavaliers than he has so far on the perma-hyped Miami Heat:

So that graph reflects every game of LeBron’s career, including the regular season and playoffs (through last night). It’s pretty straightforward: With LeBron an 18-year-old rookie, the Cavs (though much improved) were still pretty shaky, and they pretty much got better and better each year. After a slight decline from their soaring 2008 performance, LeBron left to join the latest Big 3—which is a solid contender, but no threat to the greatest Big 3. (BTW, I would like to thank the Heat for becoming Exhibit A for my long-time contention that having multiple “primary” options is less valuable than having a well-designed supporting cast—even one with considerably less talent.)

But with Mr. Trifecta on my mind (not to mention overloading my browser history), I thought it might be fun to compare the two leading contenders for the small forward spot on any NBA GOAT team. So here’s Larry:

Wow, pretty crazy consistent, yes? Keep in mind that, despite the Celtics' long winning tradition, they had won only 29 games the year before Bird's arrival.  Note the practically opposite gradient from LeBron's: Bird started out hot, and basically stayed hot until injuries cooled him down.

As for the results of the original inquiry: It turns out Bird’s Celtics started the season 2-4 in November 1988, just before Bird had season-ending ankle surgery (of course, Bird’s 1988 games ARE in my database, so this was a bit of a “Doh!” finding). And, of course, he also had losing months in the playoffs.

His worst full month in the regular season, however, was indeed exactly .500: He went 8-8 in March of 1982. So, properly qualified (like, “In the regular season, Bird never had a losing month in which he played more than 6 games”), the claim holds up. If I were a political fact-checker, I would deem it “Mostly True.”

In case you’re interested, here is the complete list of months in Larry Bird’s career:


The Case Against the Case for Dennis Rodman: Initial Volleys

When I began writing about Dennis Rodman, I was so terrified that I would miss something and the whole argument would come crashing down that I kept pushing it further and further and further, until a piece I initially planned to be about 10 pages of material ended up being more like 150. [BTW, this whole post may be a bit too inside-baseball if you haven’t actually read—or at least skimmed—my original “Case for Dennis Rodman.” If so, that link has a helpful guide.]

The downside of this, I assumed, was that the extra material would open up many angles of attack. It was a conscious trade-off, knowing that individual parts in the argument would be more vulnerable, but the Case as a whole would be thorough and redundant enough to survive any battles I might end up losing.

Ultimately, however, I’ve been a bit disappointed in the critical response. Most reactions I’ve seen have been either extremely complimentary or extremely dismissive.

So a while ago, I decided that if no one really wanted to take on the task, I would do it myself. In one of the Rodman posts, I wrote:

Give me an academic who creates an interesting and meaningful model, and then immediately devotes their best efforts to tearing it apart!

And thus The Case Against the Case for Dennis Rodman is born.

Before starting, here are a few qualifying points:

  1. I’m not a lawyer, so I have no intention of arguing things I don’t believe. I’m calling this “The Case Against the Case For Dennis Rodman,” because I cannot in good faith (barring some new evidence or argument I am as yet unfamiliar with) write The Case Against Dennis Rodman.
  2. Similarly, where I think an argument is worth being raised and discussed but ultimately fails, I will make the defense immediately (much like “Objections and Replies”).
  3. I don’t have an over-arching anti-Case hypothesis to prove, so don’t expect this series to be a systematic takedown of the entire enterprise. Rather, I will point out weaknesses as I consider them, so they may not come in any kind of predictable order.
  4. If you were paying attention, of course you noticed that The Case For Dennis Rodman was really (or at least concurrently) about demonstrating how player valuation is much more dynamic and complicated than either conventional or unconventional wisdom gives it credit for. But, for now, The Case Against the Case will focus mainly on the Dennis Rodman part.

Ok, so with this mission in mind, let me start with a bit of what’s out there already:

A Not-Completely-Stupid Forum Discussion

I admit, I spend a fair amount of time following back links to my blog. Some of that is just ego-surfing, but I’m also desperate to find worthy counter-arguments.

As I said above, that search is sometimes more fruitless than I would like. Even the more intelligent discussions usually include a lot of uninspired drivel. For example, let's look at a recent thread on RealGM. After one person lays out a decent (though imperfect) summary of my argument, there are several responses along the lines of this one from poster "SVictor":

I won’t pay attention to any study that states that [Rodman might be more valuable than Michael Jordan].

Actually, I’m pretty sympathetic to this kind of objection. There can be a bayesian ring of truth to “that is just absurd on its face” arguments (I once made a similar argument against an advanced NFL stat after it claimed Neil O’Donnell was the best QB in football). However, it’s not really a counter-argument, it’s more a meta-argument, and I think I’ve considered most of those to death. Besides, I don’t actually make the claim in question, I merely suggest it as something worth considering.

A much more detailed and interesting response comes from poster “mysticbb.” Now, he starts out pretty insultingly:

The argumentation is biased, it is pretty obvious, which makes it really sad, because I know how much effort someone has to put into such analysis.

I cannot say affirmatively that I have no biases, or that bias never affects my work. Study after study shows that this is virtually impossible. But I can say that I am completely and fundamentally committed to identifying it and stamping it out wherever I can. So, please—as I asked in my conclusion—please point out where the bias is evident and I will do everything in my power to fix it.

Oddly, though, mysticbb seems to endorse (almost verbatim) the proposition that I set out to prove:

Let me start with saying that Dennis Rodman seems to be underrated by a lot of people. He was a great player and deserved to be in the HOF, I have no doubt about that. He had great impact on the game and really improved his team while playing.

(People get so easily distracted: You write one article about a role-player maybe being better than Michael Jordan, and they forget that your overall claim is more modest.)

Of course, my analysis could just be way off, particularly in ways that favor Rodman. To that end, mysticbb raises several valid points, though with various degrees of significance.

Here he is on Rodman’s rebounding:

Let me start with the rebounding aspect. From 1991 to 1998 Rodman was leading the league in TRB% in each season. He had 17.7 ORB%, 33 DRB% and overall 25.4 TRB%. Those are AWESOME numbers, if we ignore context. Let us take a look at the numbers for the playoffs during the same timespan: 15.9 ORB%, 27.6 DRB% and 21.6 TRB%. Still great numbers, but obviously clearly worse than his regular season numbers. Why? Well, Rodman had the tendency to pad his rebounding stats in the regular season against weaker teams, while ignoring defensive assignments and fighting his teammates for rebounds. All that was eliminated during the playoffs and his numbers took a hit.

Now, I don’t know how much I talked about the playoffs per se, but I definitely discussed—and even argued myself—that Rodman’s rebounding numbers are likely inflated. But I also argued that if that IS the case, it probably means Rodman was even more valuable overall (see that same link for more detail). He continues:

Especially when we look at the defensive rebounding part, during the regular season he is clearly ahead of Duncan or Garnett, but in the playoffs they are all basically tied. Now imagine, Rodman brings his value via rebounding, what does that say about him, if that value is matched by players like Duncan or Garnett who both are also great defenders and obviously clearly better offensive players?

Now, as I noted at the outset, Rodman's career offensive rebounding percentage is approximately equal to Kevin Garnett's career overall rebounding percentage, so I think mysticbb is making a false equivalency based on a few cherry-picked stats.

But, for a moment, let's assume it were true that Garnett and Duncan had rebounding numbers similar to Rodman's—so what? Rodman's crazy rebounding numbers cohere nicely with the rest of the puzzle as an explanation of why he was so valuable—his absurd rebounding stats make his absurd impact stats more plausible and vice versa—but they're technically incidental. Indeed, they're even incidental to his rebounding contribution: The number (or even percent) of rebounds a player gets does not correlate very strongly with the number of rebounds he has actually added to his team (nor does a player's offensive "production" correlate very strongly with improvement in a team's offense), and what correlation there is shows up mostly at the extremes.

But I give the objection credit in this regard: The playoff/regular season disparity in Rodman's rebounding numbers (though let's not overstate the case: Rodman has 3 of the top 4 TRB%'s in playoff history) does serve to highlight how dynamic basketball statistics are. The original Case For Dennis Rodman is perhaps too willing to draw straight causal lines, and that may be worth looking into. Also, a more thorough examination of Rodman's playoff performance may be in order as well.

On the indirect side of The Case, mysticbb has this to say:

[T]he high difference between the team performance in games with Rodman and without Rodman is also caused by a difference in terms of strength of schedule, HCA and other injured players.

I definitely agree that my crude calculation of Win % differentials does not control for a number of things that could be giving Rodman, or any other player, a boost. Controlling for some of these things is probably possible, if more difficult than you might think. This is certainly an area where I would like to implement some more robust comparison methods (and I’m slowly working on it).

But, ultimately, all of the factors mysticbb mentions are noise. Circumstances vary and lots of things happen when players miss games, and there are a lot of players and a lot of circumstances in the sample that Rodman is compared to: everyone has a chance to get lucky. That chance is reflected in my statistical significance calculations.

Mysticbb makes some assertions about Rodman having a particularly favorable schedule, but cites only the 1997 Bulls, and it’s pretty thin gruel:

If we look at the 12 games with Kukoc instead of Rodman we are getting 11.0 SRS. So, Rodman over Kukoc made about 0.5 points.

Of course, if there is evidence that Rodman was especially lucky over his career, I would like to see it. But, hmm, since I’m working on the Case Against myself, I guess that’s my responsibility as well. Fair enough, I’ll look into it.

Finally, mysticbb argues:

The last point which needs to be considered is the offcourt issues Rodman caused, which effected the outcome of games. Take the 1995 Spurs for example, when Rodman refused to guard Horry on the perimeter leading to multiple open 3pt shots for Horry including the later neck-breaker in game 6. The Spurs one year later without Rodman played as good as in 1995 with him.

I don't really have much to say on the first part of this. As I noted at the outset, there's some chance that Rodman caused problems on his team, but I feel completely incompetent to judge that sort of thing. But the other part is interesting: It's true that the Spurs were only 5% worse in 95-96 than they were in 94-95 (OFC, they would look worse measured only against games Rodman played in), but cross-season comparisons are obviously tricky, for a number of reasons. And even where they can be made, I'm not sure they would break the way suggested. For example, the 2nd Bulls 3-peat teams were about as much better than the first Bulls 3-peat as the first Bulls 3-peat was better than the 93-95 teams that were sans Michael Jordan.

That said, I actually do find multi-season comparisons to be a valid area for exploration. So, e.g., I’ve spent some time looking at rookie impact and how predictive it is of future success (answer: probably more than you think).

Finally, a poster named “parapooper” makes some points that he credits to me, including:

He also admits that Rodman actually has a big advantage in this calculation because he missed probably more games than any other player due to reasons other than health and age.

I don't actually remember making this point, at least this explicitly, but it is a valid concern IMO. A lot of the In/Out numbers my system generated include seasons where players were old or infirm, which disadvantages them. In fact, I initially tried to excise these seasons, and tried accounting for them in a variety of ways, such as comparing "best periods" to "best periods", etc. But I found such attempts to be pretty unwieldy and arbitrary, and they shrank the sample size more than I thought they were worth, without affecting the bottom line: Rodman just comes out on top of a smaller pile. That said, some advantage to Rodman relative to others must exist, and quantifying that advantage is a worthy goal.

A similar problem that “para” didn’t mention specifically is that a number of the in/out periods for players include spots where the player was traded. In subsequent analysis, I’ve confirmed what common sense would probably indicate: A player’s differential stats in trade scenarios are much less reliable. Future versions of the differential comparison should account for this, one way or another.

The differential analysis in the series does seem to be the area that most needs upgrading, though the constant trade-off between more information and higher quality information means it will never be as conclusive as we might want it to be. Not mentioned in this thread (that I saw), but what I will certainly deal with myself, are broader objections to the differential comparisons as an enterprise. So, you know. Stay tuned.

Championship Experience Matters! (Un-Sexy Version)

So in Monday’s post, I included my “5-by-5” method (I probably shouldn’t call it a “model”) for picking NBA champions. In case you missed it, here it is again:

  1. If there are any teams within 5 games of the best record that have won a title within the past 5 years, pick the most recent winner.
  2. Otherwise, pick the team with the best record.

In the 28 seasons since the NBA moved to a 16-team playoff format, this method correctly picked the eventual champion 18 times (64%), comparing favorably to the 10/28 (36%) success rate of simply picking the team with the league's best record.
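
In code, the rule is nearly a one-liner. This sketch assumes a hypothetical list of team records per season, and simplifies "within 5 games of the best record" to a difference in wins:

```python
def pick_champion(season):
    """season: list of dicts like {"team": str, "wins": int, "last_title": int},
    where last_title is years since the team's most recent championship
    (None if it has never won one)."""
    best = max(t["wins"] for t in season)
    # Step 1: recent champions within 5 wins of the best record
    contenders = [
        t for t in season
        if best - t["wins"] <= 5
        and t["last_title"] is not None
        and t["last_title"] <= 5
    ]
    if contenders:
        return min(contenders, key=lambda t: t["last_title"])["team"]
    # Step 2: otherwise, take the best record
    return max(season, key=lambda t: t["wins"])["team"]

# 1996-97 leaders (win totals from memory, purely for illustration):
season_1997 = [
    {"team": "CHI", "wins": 69, "last_title": 1},
    {"team": "UTA", "wins": 64, "last_title": None},
    {"team": "MIA", "wins": 61, "last_title": None},
]
print(pick_champion(season_1997))  # -> CHI, the eventual champion
```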

Henry Abbott blogged about it on ESPN yesterday, raising the obvious follow-up:

The question is, why? Why are teams that have won before so much better at winning again? I’ll kick off the brainstorming:

  • Maybe most teams fall short of their potential because of team dynamics of selfishness — and maybe champions are the teams that know how to move past that.
  • Maybe there are only a few really special coaches, and these teams have them.
  • Maybe there are only a few really special teams, and these teams are them.
  • Maybe there are special strategies to the playoffs that only some teams know. Not even sure what I’m talking about here — Sleep schedules? Nutrition? Injury prevention?
  • Maybe champions get better treatment from referees.

Anyway, it’s certainly fascinating.

UPDATE: John Hollinger with a good point that fits this and other data: Maybe title-winning teams don't value the regular season much.

Though I think some of these ideas are more on point than others, I won't try to parse every possibility. On balance, I'm sympathetic to the idea that "winning in the playoffs" has its own skillset independent of just being good at winning basketball games. Conceptually, it's not too big a leap from the well-documented idea that winning games has its own skillset independent of scoring and allowing points (though the evidence is a lot more indirect).

That said, I think the biggest factor behind this result may be a bit less sexy: It may simply be a matter of information reliability.

Winning Championships is Harder than Winning Games

In stark contrast to other team sports, the NBA Playoffs are extremely deterministic. The best team usually wins (and, conversely, the winner is usually the best team). I’ve made this analogy many times before, but I’ll make it again: The NBA playoffs are a lot more like a Major tournament in men’s tennis than any other crowning competition in popular sports.

This is pretty much a function of design: A moderately better team becomes a huge favorite in a 7-game series. So even if the best team is only moderately better than the 2nd-best team, it can be in a dominant position.
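
To put a rough number on "huge favorite": ignore home-court and imagine all 7 games are played (which doesn't change who wins the series). A team that wins each game 60% of the time wins at least 4 of 7 about 71% of the time:

P = C(7,4)(.6)^4(.4)^3 + C(7,5)(.6)^5(.4)^2 + C(7,6)(.6)^6(.4) + (.6)^7 ≈ .290 + .261 + .131 + .028 ≈ .710

And a title requires winning four of those series in a row, which compounds the advantage of genuinely better teams.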

Combine this with an uneven distribution of talent (which, incidentally, is probably a function of salary structure), and mix in the empirical reality that the best teams normally don't change very much from year to year, and it's unsurprising that "dynasties" are so common.

On the other side of the equation, regular season standings and leaderboards—whether of wins or its most stable proxies—are highly variable. Note that a 95% confidence interval on an 82 game sample (aka, the “margin of error”) is +/- roughly 10 games.
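
(For the curious: treating each game as a coin flip with p = .5, the standard deviation of wins over 82 games is √(82 × .5 × .5) ≈ 4.5, so the 95% interval is about ±1.96 × 4.5 ≈ ±9 wins, in line with the roughly 10 games above.)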

If you think of the NBA regular season as a lengthy 30-team competition for the #1 seed, its structure is much, much less favorable to the best teams than the playoffs: It’s more like a golf tournament than a tennis tournament.

The Rest is Bayes

Obviously better teams win more often and vice-versa. It’s just that these results have to be interpreted in a context where all results were not equally likely ex ante. For example, the teams who post top records who also have recent championships are far more likely than others to actually be as good as their records indicate. This is pure bayesian inference.

Quick tangent: In my writing, I often reach a point where I say something along the lines of: “From there, it’s all bayesian inference.” I recognize that, for a lot of readers, this is barely a step up from an Underpants Gnomes argument. When I go there, it’s pretty much shorthand for “this is where results inform our beliefs about how likely various causes are to be true” (and all that entails).
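
A toy version of the title-plus-record inference, with made-up numbers purely for illustration: suppose 40% of teams that post a top record are genuinely dominant (the rest over-performed), and that genuinely dominant teams are three times as likely as over-performers to also own a recent title (60% vs. 20%). Then:

P(dominant | top record + recent title) = (.4 × .6) / (.4 × .6 + .6 × .2) = .24 / .36 ≈ 67%

versus the 40% you would have assigned on the record alone.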

There was an interesting comment on Abbott’s ESPN post, pointing out that the 5-by-5 method only picked 5/14 (35.7%) of champions correctly between 1967 and 1980. While there may be unrelated empirical reasons for this, I think this stat may actually confirm the underlying concept. Structurally, having fewer teams in the playoffs, shorter series lengths, a smaller number of teams in the league—basically any of the structural differences between the two eras I can think of—all undermine the combined informational value of [having a championship + having a top record].

To be fair, there may be any number of things in a particular season that undermine our confidence in this inference (I can think of some issues with this season’s inputs, obv). That’s the tricky part of bayesian reasoning: It turns on how plausible you thought things were already.

Stat Geek Smackdown 2012, Round 1: Odds and Ends

So in case any of you haven’t been following, the 2012 edition of the ESPN True Hoop Stat Geek Smackdown  is underway.  Now, obviously this competition shouldn’t be taken too seriously, as it’s roughly the equivalent of picking a weekend’s worth of NFL games, and last year I won only after picking against my actual opinion in the Finals (with good reason, of course).  That said, it’s still a lot of fun to track, and basketball is a deterministic-enough sport that I do think skill is relevant. At least enough that I will talk shit if I win again.

To that end, the first round is going pretty well for me so far.  Like last year, the experts are mostly in agreement. While there is a fair amount of variation in the series length predictions, there are only two matchups that had any dissent as to the likely winner: the 6 actual stat geeks split 4-2 in favor of the Lakers over the Nuggets, and 3-3 between the Clippers and the Grizzlies.  As it happens, I have both Los Angeles teams (yes, I am a homer), as does Matthew Stahlhut (though my having the Lakers in 5 instead of 7 gives me a slight edge for the moment).  No one has gained any points on anyone else yet, but here is my rough account of possible scenarios:


On to some odds and ends:

The Particular Challenges of Predicting 2012

Making picks this year was a bit harder than in years past.  At one point I seriously considered picking Dallas against OKC (in part for strategic purposes), before reason got the better of me.  Abbott only published part of my comment on the series, so here’s the full version I sent him:

Throughout NBA history, defending champions have massively over-performed in the playoffs relative to their regular season records, so I wouldn't count Dallas out.  In fact, the spot Dallas finds itself in is quite similar to Houston's in 1995, and this season's short lead-time and compressed schedule should make us particularly wary of the usual battery of predictive models.

Thus, if I had to pick which of these teams is more likely to win the championship, I might take Dallas (or at least it would be a closer call).  But that’s a far different question from who is most likely to win this particular series: Oklahoma City is simply too solid and Dallas too shaky to justify an upset pick. E.g., my generic model makes OKC a >90% favorite, so even a 50:50 chance that Dallas really is the sleeping giant Mark Cuban dreams about probably wouldn’t put them over the top.

That last little bit is important: The “paper gap” between Dallas and OKC is so great that even if Dallas were considerably better than they appeared during the regular season, that would only make them competitive, while if they were about as good as they appeared, they would be a huge dog (this kind of situation should be very familiar to any serious poker players out there).
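
To spell out that arithmetic with the numbers above: say there's a 50% chance Dallas is secretly OKC's equal (making the series a coin flip) and a 50% chance they're what they appeared (OKC wins the series >90% of the time). Dallas's overall chances are then at most about .5 × .50 + .5 × .10 = .30: much better than the naive 10%, but still a clear underdog.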

But why on earth would I think Dallas might be any good in the first place? Well, I’ll discuss more below why champions should never be ignored, but the “paper difference” this year should be particularly inscrutable.  The normal methods for predicting playoff performance (both my own and others) are particularly ill-suited for the peculiar circumstances of this season:

  1. Perhaps most obviously, fewer regular season games means smaller sample sizes.  In turn, this means that sample-sensitive indicators (like regular season statistics) should have less persuasive value relative to non-sensitive ones (like championship pedigree).  It also affects things like head to head record, which is probably more valuable than a lot of stats people think, though less valuable than a lot of non-stats people think.  I’ve been working on some research about this, but for an example, look at this post about how I thought there seemed to be a market error w/r/t Dallas vs. Miami in game 6, partly b/c of the bayesian value of Dallas’s head to head advantage.
  2. Injuries are a bigger factor. This is not just that there are more of them (which is debatable), but there is less flexibility to effectively manage them: e.g., there’s obv less time to rehab players, but also less time to develop new line-ups and workarounds or make other necessary adjustments. In other words, a very good team might be hurt more by a role-player being injured than usual.
  3. What is the most reliable data? Two things I discussed last year were that (contra unconventional wisdom) Win% is more reliable for post-season predictions than MOV-type stats, and that (contra conventional wisdom) early season performance is typically more predictive than late season performance.  But both of these are undermined by the short season.  The fundamental value of MOV is as a proxy for W% that is more accurate for smaller sample sizes. And the predictive power of early-season performance most likely stems from its being more representative of playoff basketball: e.g., players are more rested and everyone tries their hardest.  However, not only are these playoffs not your normal playoffs, but this season was thrown together so quickly that a lot of teams had barely figured out their lineups by the quarter-pole. While late-season records have the same problems as usual, they may be more predictive just from being more similar to years past.
  4. Finally, it’s not just the nature of the data, but the nature of the underlying game as well. For example, in a lockout year, teams concerned with injury may be quicker to pull starting players in less lopsided scenarios than usual, making MOV less useful, etc. I won’t go into every possible difference, but here’s a related Twitter exchange:


Which brings us to the next topic:

The Simplest Playoff Model You’ll Never Beat

The thing that Henry Abbott most highlighted from my Smackdown picks (which he quoted at least 3 times in 3 different places) was my little piece of dicta about the Spurs:

I have a ‘big pot’ playoff model (no matchups, no simulations, just stats and history for each playoff team as input) that produces some quirky results that have historically out-predicted my more conventional models. It currently puts San Antonio above 50 percent. Not just against Utah, but against the field. Not saying I believe it, but there you go.

I really didn’t mean for this to be taken so seriously: it’s just one model.  And no, I’m not going to post it. It’s experimental, and it’s old and needs updating (e.g., I haven’t adjusted it to account for last season yet).

But I can explain why it loves the Spurs so much: it weights championship pedigree very strongly, and the Spurs this year are the only team near the top that has any.

Now some stats-loving people argue that the “has won a championship” variable is unreliable, but I think they are precisely wrong.  Perhaps this will change going forward, but, historically, there are no two ways to cut it: No matter how awesomely designed and complicated your models/simulations are, if you don’t account for championship experience, you will lose to even the most rudimentary model that does.

So case in point, I came up with this 2-step method for picking NBA Champions:

  1. If there are any teams within 5 games of the best record that have won a title within the past 5 years, pick the most recent.
  2. Otherwise, pick the team with the best record.

Following this method, you would correctly pick the eventual NBA Champion in 64.3% of years since the league moved to a 16-team playoff in 1984 (with due respect to the slayer, I call this my "5-by-5" model).

Of course, thinking back, it seems like picking the winner is sometimes easy, as the league often has an obvious “best team” that is extremely unlikely to ever lose a 7 game series.  So perhaps the better question to ask is: How much do you gain by including the championship test in step 1?

The answer is: a lot. Over the same period, the team with the league’s best record has won only 10/28 championships, or ~35%. So the 5-by-5 model almost doubles your hit rate.

And in case you’re wondering, using Margin of Victory, SRS, or any other advanced stat instead of W-L record doesn’t help: other methods vary from doing slightly worse to slightly better. While there may still be room to beef up the complexity of your predictive model (such as advanced stats, situational simulations, etc), your gains will be (comparatively) marginal at best. Moreover, there is also room for improvement on the other side: by setting up a more formal and balanced tradeoff between regular season performance and championship history, the macro-model can get up to 70+% without danger of significant over-fitting.

In fairness, I should note that the 5-by-5 model has had a bit of a rough patch recently—but, in its defense, so has every other model. The NBA has had some wacky results recently, but there is no indication that stats have supplanted history. Indeed, if you break the historical record into groups of more-predictable and less-predictable seasons, the 5-by-5 model trumps pure statistical models in all of them.

Uncertainty and Series Lengths

Finally, I’d like to quickly address the complete botching of series-length analysis that I put forward last year. Not only did I make a really elementary mistake in my explanation (that an emailer thankfully pointed out), but I’ve come to reject my ultimate conclusion as well.

Aside from strategic considerations, I’m now fairly certain that picking the home team in 5 or the away team in 6 is always right, no matter how close you think the series is. I first found this result when running playoff simulations that included margin for error (in other words, accounting for the fact that teams may be better or worse than their stats would indicate, or that they may match up more or less favorably than the underlying records would suggest), but I had some difficulty getting this result to comport with the empirical data, which still showed “home team in 6” as the most common outcome.  But now I think I’ve figured this problem out, and it has to do with the fact that a lot of those outcomes came in spots where you should have picked the other team, etc. But despite the extremely simple-sounding outcome,  it’s a rich and interesting topic, so I’ll save the bulk of it for another day.
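
For anyone who wants to poke at the claim, here's a minimal enumeration of best-of-7 outcomes under the 2-2-1-1-1 format. The per-game probabilities are placeholders, not estimates; layering in "margin for error" would mean mixing this distribution over a range of possible probabilities rather than fixing them:

```python
def series_distribution(p_home, p_away, home_games=frozenset({1, 2, 5, 7})):
    """Probability of each (winner, series length), favorite 'F' vs underdog 'U',
    where the favorite hosts games 1, 2, 5 and 7."""
    dist = {}

    def walk(game, f_wins, u_wins, prob):
        if f_wins == 4 or u_wins == 4:
            key = ("F" if f_wins == 4 else "U", game - 1)
            dist[key] = dist.get(key, 0.0) + prob
            return
        p = p_home if game in home_games else p_away  # favorite's per-game odds
        walk(game + 1, f_wins + 1, u_wins, prob * p)
        walk(game + 1, f_wins, u_wins + 1, prob * (1 - p))

    walk(1, 0, 0, 1.0)
    return dist

# Placeholder numbers: favorite wins 65% at home, 50% on the road
for (winner, games), p in sorted(
        series_distribution(0.65, 0.50).items(), key=lambda kv: -kv[1]):
    print(f"{winner} in {games}: {p:.3f}")
```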

Sports Geek Mecca: Recap and Thoughts, Part 1

So, over the weekend, I attended my second MIT Sloan Sports Analytics Conference. My experience was much different than in 2011: Last year, I went into this thing barely knowing that other people were into the same things I was. An anecdote: In late 2010, I was telling my dad how I was about to have a 6th or 7th round interview for a pretty sweet job in sports analysis, when he speculated, “How many people can there even be in that business? 10? 20?” A couple of months later, of course, I would learn.

A lot has happened in my life since then: I finished my Rodman series, won the ESPN Stat Geek Smackdown (which, though I am obviously happy to have won, is not really that big a deal—all told, the scope of the competition is about the same as picking a week’s worth of NFL games), my wife and I had a baby, and, oh yeah, I learned a ton about the breadth, depth, and nature of the sports analytics community.

For the most part, I used Twitter as sort of my de facto notebook for the conference.  Thus, I’m sorry if I’m missing a bunch of lengthier quotes and/or if I repeat a bunch of things you already saw in my live coverage, but I will try to explain a few things in a bit more detail.

For the most part, I’ll keep the recap chronological.  I’ve split this into two parts: Part 1 covers Friday, up to but not including the Bill Simmons/Bill James interview.  Part 2 covers that interview and all of Saturday.

Opening Remarks:

From the pregame tweets, John Hollinger observed that 28 NBA teams sent representatives (that we know of) this year.  I also noticed that the New England Revolution sent 2 people, while the New England Patriots sent none, so I'm not sure the number of official representatives reliably indicates much.

The conference started with some bland opening remarks by Dean David Schmittlein.  Tangent: I feel like political-speak (thank everybody and say nothing) gets more and more widespread every year. I blame it on fear of the internet. E.g., in this intro segment, somebody made yet another boring joke about how there were no women present (personally, I thought there were significantly more than last year), and was followed shortly thereafter by a female speaker, understandably creating a tiny bit of awkwardness. If that person had been more important (like, if I could remember his name to slam him), I doubt he would have made that joke, or any other joke. He would have just thanked everyone and said nothing.

The Evolution of Sports Leagues

Featuring Gary Bettman (NHL), Rob Manfred (MLB), Adam Silver (NBA), Steve Tisch (NYG) and Michael Wilbon moderating.

This panel really didn’t have much of a theme; it was mostly Wilbon creatively folding a bunch of predictable questions into arbitrary league issues. E.g.: “What do you think about Jeremy Lin?!? And, you know, overseas expansion blah blah.”

I don’t get the massive cultural significance of Jeremy Lin, personally. I mean, he’s not the first ethnically Chinese player to have NBA success (though he is perhaps the first short one). The discussion of China, however, was interesting for other reasons: Adam Silver claimed that basketball is already more popular in China than soccer, with over 300 million Chinese people playing it. Those numbers, if true, are pretty mind-boggling.

Finally, there was a whole part about labor negotiations that was pretty well summed up by this tweet:

Hockey Analytics

Featuring Brian Burke, Peter Chiarelli, Mike Milbury and others.

The panel started with Peter Chiarelli being asked how the world champion Boston Bruins use analytics, and in an ominous sign, he rambled on for a while about how, when it comes to scouting, they’ve learned that weight is probably more important than height.

Overall, it was a bit like any scene from the Moneyball war room, with Michael Schuckers (the only pro-stats guy) playing the part of Jonah Hill, but without Brad Pitt to protect him.

When I think of Brian Burke, I usually think of Advanced NFL Stats, but apparently there’s one in hockey as well. Burke is GM/President of the Toronto Maple Leafs. At one point he was railing about how teams that use analytics have never won anything, which confused me, since I haven’t seen Toronto hoisting any Stanley Cups recently; but apparently he did win a championship with the Anaheim Ducks in 2007, so he clearly speaks with absolute authority.

This guy was a walking, talking quote machine for the old school. I didn’t take note of all the hilarious and/or nonsensical things he said, but for some examples, try searching Twitter for “#SSAC Brian Burke.” To give you a sense of how extreme he was: someone tweeted this quote at me, and I have no idea whether he actually said it or the tweeter was kidding.

In other words, Burke was literally too over the top to effectively parody.

On the other hand, in the discussion of concussions, I thought Burke had sort of a folksy realism that seemed pretty accurate to me.  I think his general point is right, if a bit insensitive: If we really changed hockey so much as to eliminate concussions entirely, it would be a whole different sport (which he also claimed no one would watch, an assertion which is more debatable imo).  At the end of the day, I think professional sports mess people up, including in the head.  But, of course, we can’t ignore the problem, so we have to keep proceeding toward some nebulous goal.

Mike Milbury, presently a card-carrying member of the media, seemed to mostly embrace the alarmist media narrative, though he did raise at least one decent point about how the increase in concussions—which most people are attributing to an increase in diagnoses—may relate to recent rules changes that have sped up the game.

But for all that, the part that frustrated me the most was when Michael Schuckers, the legitimate hockey statistician at the table, was finally given the opportunity to talk. 90% of the things that came out of his mouth were various snarky ways of asserting that face-offs don’t matter. I mean, I assume he’s 100% right, but he seemed to have no clue how to talk to these guys. Find common ground: you both care about scoring goals, preventing goals, and winning. Good face-off skill gets you the puck more often in the right situations. The questions are: how many extra possessions does that skill actually get you, how valuable are those possessions, and what’s the actual decision in question?
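E.g., a back-of-envelope version of that pitch might look like the following (every number here is a hypothetical placeholder, not anything from the panel):

```python
# Hypothetical framing: turn face-off skill into goals per game.
faceoffs_per_game = 60           # assumed league-typical face-off volume
win_rate_edge = 0.55 - 0.50      # assumed elite-vs-average face-off win rate
extra_possessions = faceoffs_per_game * win_rate_edge   # ~3 per game

value_per_possession = 0.02      # assumed goal value of one extra possession
print(extra_possessions * value_per_possession)  # ~0.06 goals per game
```

Whether ~0.06 goals per game is worth paying for is then an actual decision a GM can argue about, which beats “face-offs don’t matter” as an opening line.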

Baseball Analytics

Featuring Scott Boras, Scott Boras, Scott Boras, some other guys, Scott Boras, and, oh yeah, Bill James.

In stark contrast to the Hockey panel, the Baseball guys pretty much bent over backwards to embrace analytics as much as possible. As I tweeted at the time:

Scott Boras seems to like hearing Scott Boras talk. Which is not so bad, because Scott Boras actually did seem pretty smart and well informed: Among other things, Scott Boras apparently has a secret internal analytics team. To what end, I’m not entirely sure, since Scott Boras also seemed to say that most GMs overvalue players relative to what Scott Boras’s people tell Scott Boras.

At this point, my mind wandered:

How awesome would that be, right?

Anyway, in between Scott Boras’s insights, someone asked this Bill James guy about his vision for the future of baseball analytics, and he gave two answers:

  1. Evaluating players from a variety of contexts other than the minor leagues (like college ball, overseas, Cubans, etc).
  2. Analytics will expand to look at the needs of the entire enterprise, not just individual players or teams.

Meh, I’m a bit underwhelmed.  He talked a bit about #1 in his one-on-one with Bill Simmons, so I’ll look at that a bit more in my review of that discussion. As for #2, I think he’s just way way off: The business side of sports is already doing tons of sophisticated analytics—almost certainly way more than the competition side—because, you know, it’s business.

E.g., in the first panel, there was a fair amount of discussion of how the NBA used “sophisticated modeling” for many different lockout-related analyses (I didn’t catch the Ticketing Analytics panel, but from its reputation, and from related discussions on other panels, it sounds like that discipline has some of the nerdiest analysis of all).

Scott Boras let Bill James talk about a few other things as well:  E.g., James is not a fan of new draft regulations, analogizing them to government regulations that “any economist would agree” inevitably lead to market distortions and bursting bubbles.  While I can’t say I entirely disagree, I’m going to go out on a limb and guess that his political leanings are probably a bit Libertarian?

Basketball Analytics

Featuring Jeff Van Gundy, Mike Zarren, John Hollinger, and Dean Oliver (in place of Mark Cuban).

If every one of these panels was Mark Cuban + foil, it would be just about the most awesome weekend ever (though you might not learn the most about analytics). So I was excited about this one, which, unfortunately, Cuban missed. Filling in on zero/short notice was Dean Oliver.  Overall, here’s Nathan Walker’s take:

This panel actually had some pretty interesting discussions, but they flew by pretty fast and often followed predictable patterns, something like this:

  1. Hollinger says something pro-stats, though likely way out of his depth.
  2. Zarren brags about how they’re already doing that and more on the Celtics.
  3. Oliver says something smart and nuanced that attempts to get at the underlying issues and difficulties.
  4. Jeff Van Gundy uses forceful pronouncements and “common sense” to dismiss his strawman version of what the others have been saying.

E.g.:

Zarren talked about how there is practically more data these days than they know what to do with.  This seems true and I think it has interesting implications. I’ll discuss it a little more in Part 2 re: the “Rebooting the Box Score” talk.

There was also an interesting discussion of trades, and whether they’re more a result of information asymmetry (in other words, teams trying to fleece each other) or more a result of efficient trade opportunities (in other words, teams trying to help each other). Though it really shouldn’t matter (you trade when you think it will help you; whether it helps your trade partner is mostly irrelevant), Oliver endorsed the latter. He made the point that, with such a broad universe of trade possibilities, looking for mutually beneficial situations is the easiest way to find actionable deals. Fair enough.

Coaching Analytics

Featuring coaching superstars Jeff Van Gundy, Eric Mangini, and Bill Simmons.  Moderated by Daryl Morey.

OK, can I make the obvious point that Simmons and Morey apparently accidentally switched role cards?  As a result, this talk featured a lot of Simmons attacking coaches and Van Gundy defending them.  I honestly didn’t remember Mangini was on this panel until looking back at the book (which is saying something, b/c Mangini usually makes my blood boil).

There was almost nothing on, say, how to evaluate coaches by analyzing how well their various decisions comport with the tenets of win maximization. There was, however, a lengthy (and almost entirely non-analytical) discussion of that all-important question of whether an NBA coach should foul or not while up by 3 with little time left. Fouling probably has a tiny edge, but I think it’s too close and too infrequent to be very interesting (it reminds me a bit, though it’s obviously not as rare, of the impassioned debates you used to see on poker forums about whether you should fast-play or slow-play flopped quads in limit hold’em).

There was what I thought was a funny moment when Bill Simmons was complaining about how teams seem to recycle mediocre older coaches rather than try out young, fresh talent. But when challenged by Van Gundy, Simmons drew a blank and couldn’t think of anyone.  So, Bill, this is for you.  Here’s a table of NBA coaches who have coached at least 1000 games for at least 3 different teams, while winning fewer than 60% of their games and without winning any championships:

[Table 8: NBA coaches with 1000+ games across 3+ teams, sub-.600 records, and no championships]

Note that I’m not necessarily agreeing with Simmons: Winning championships in the NBA is hard, especially if your team lacks uber-stars (you know, Michael Jordan, Magic Johnson, Dennis Rodman, et al).

Part 2 coming soon!

Honestly, I got a little carried away with my detailed analysis/screed on Bill James, and I may have to do a little revising. So due to some other pressing writing commitments, you can probably expect Part 2 to come out this Saturday (Friday at the earliest).

Bayes’ Theorem, Small Samples, and WTF is Up With NBA Finals Markets?

Seriously, I am dying to post about something non-NBA related, and I should have my Open-era tennis Elo ratings by surface out in the next day or so. But last night I finally got around to checking the betting markets to see how the NBA Finals—and thus my chances of winning the Smackdown—were shaping up, and I was shocked by what I found. Anyway, I tossed a few numbers around, and thought you all might find them interesting. Plus, there’s a nice little object lesson about the usefulness of small-sample information for making Bayesian inferences. This is actually one area where I think the normal stat-geek-vs.-public dichotomy gets turned on its head: most statistically-oriented people reflexively dismiss any empirical evidence that doesn’t come with a giant data-set. But in certain cases—particularly those with a wide range of coherent possibilities—I think the general public may even be a little too conservative about the implications of seemingly minor statistical anomalies.

Freaky Finals Odds:

First, I found that most books seem to see the series as a tossup at this point.  Here’s an example from a European sports-betting market:

[Figure: Finals odds from a European sports-betting market, showing the series as a near-tossup]

Intuitively, this seemed off to me.  Dallas needs to win 1 out of the 2 remaining games in Miami.  Assuming the odds for both games are identical (admittedly, this could be a dubious assumption), here’s a plot of Dallas’s chances of winning the series relative to Miami’s expected winrate per home game:

[Figure: Dallas’s probability of winning the series as a function of Miami’s expected win rate per home game]

So for the series to be a tossup, Miami needs to be about a 71% favorite per game. Even at home in the playoffs, this is extremely high. Depending on what dataset you use, the home team wins around 60-65% of the time in the NBA regular season and about 65-70% of the time in the postseason. But that latter number is a bit deceptive, since the playoffs are structured so that more games are played in the homes of the better teams: aside from the 2-3-2 Finals, any series that ends in an odd number of games gives the higher-seeded team (which is often much better) an extra game at home. In fact, while I haven’t looked into the issue, that extra 5% could theoretically be smaller than what the typical skill disparity between home and away playoff teams would predict on its own, which would actually make home court less advantageous than in the regular season.
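For concreteness, here’s the one-line math behind that curve (my sketch, not anything from the books): Dallas takes the series unless Miami wins both remaining home games.

```python
# Dallas needs just 1 of the remaining 2 games in Miami, so if Miami
# wins each home game with probability p, P(Dallas wins series) = 1 - p**2.
from math import sqrt

def dallas_series_prob(p_miami_home):
    return 1 - p_miami_home ** 2

print(dallas_series_prob(0.65))  # 0.5775 -- Dallas favored even if Miami wins 65% at home
print(sqrt(0.5))                 # 0.7071 -- the per-game rate that makes the series a tossup
```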

Now, Miami has won only 73% of their home games this season, and that was against below-average competition (overall, they had one of the weakest schedules in the league). Counting the playoffs, at this point Dallas actually has a better record than Miami (by one game), and they played an above-average schedule. More importantly, the Mavs won 68% of their games on the road (compared to a league average of 35-40%). Not to mention, Dallas is 5-2 against the Heat overall, and 2-1 against them in Miami (more on that later).

So how does the market tilt so heavily to this side?  Honestly, I have no idea. Many people are much more willing to dismiss seemingly incongruent market outcomes than I am.  While I obviously think the market can be beaten, when my analytical results diverge wildly from what the money says, my first inclination is to wonder what I’m doing wrong, as the odds of a massive market failure are probably lower than the odds that I made a mistake. But, in this case, with comparatively few variables, I don’t really get it.

It is a well-known phenomenon in sports-betting that huge games often have the juiciest (i.e., least efficient) lines.  This is because the smart money that normally keeps the market somewhat efficient can literally start to run out.  But why on earth would there be a massive, irrational rush to bet on the Heat?  I thought everyone hated them!

Fun With Meta-Analysis:

So, for amusement’s sake, let’s imagine a few different lines of reasoning (I’ll call them “scenarios”) that might lead us to a range of different conclusions about the present state of the series:

  1. Miami won at home ~73% of the time, while Dallas won on the road (a fairly stunning) 68% of the time. Taking those numbers at face value, a generic Miami home team would be roughly 5% better than a generic Dallas road team, making Miami a 52.5% favorite in each game.
  2. The average home team in the NBA wins about 63% of the time.  Miami and Dallas seem pretty evenly matched, so Miami should win each game ~63% of the time as well.
  3. Let’s go with the very generous end of broader statistical models (discounting early-season performance, giving Miami credit for championship experience, best player, and other factors), and assume that Miami is about 5-10% better than Dallas on a neutral site.  The exact math on this is complicated (since winning is a logistic function), but, ballpark, this would translate into about a 65.5% chance at home.
  4. Markets rule! The approximate market price for a Miami series win is ~50%, translating into the ~71% per-game chance mentioned above.

Here’s a scatter-plot of the chances of Dallas winning the series based on those per-game estimates:

[Figure: per-scenario probabilities of Dallas winning the series (blue dots) and of Dallas going 2-1 or better in Miami (red dots)]

Ignore the red dots for now—we’ll get back to those. The blue dots are the probability of Dallas winning at least one of the next two games (using the same binomial formula as the function above). Now, hypothetically, if you thought each of these four analyses was equally plausible, your overall probability for Dallas winning the title would simply be the average of the four scenarios’ results, or right around 60%. Note: I am NOT endorsing any of these lines of reasoning or any actual conclusions about this series here—it’s just a thought experiment.
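Here’s a quick sketch reproducing those blue dots and that average (the scenario labels are just my shorthand):

```python
# Miami's per-game (home) win probability under each scenario above,
# and the implied probability that Dallas wins at least one of the
# two remaining games in Miami.
scenarios = {
    "S1 (face-value records)": 0.525,
    "S2 (generic home court)": 0.63,
    "S3 (generous models)":    0.655,
    "S4 (market price)":       0.71,
}

blue_dots = {name: 1 - p ** 2 for name, p in scenarios.items()}
for name, prob in blue_dots.items():
    print(f"{name}: {prob:.3f}")   # 0.724, 0.603, 0.571, 0.496

# Equal weight on all four scenarios gives the ~60% figure:
print(sum(blue_dots.values()) / len(blue_dots))  # ~0.599
```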

A Little Bayesian Inference:

As I mentioned above, the Mavericks are 5-2 against the Heat this season, including 2-1 against them in Miami.  Let’s focus on the second stat: Sticking with the assumption that you found each of these 4 lines of reasoning equally plausible prior to knowing Dallas’s record in Miami, how should your newly-acquired knowledge that they were 2-1 affect your assessment?

Well, wow! 3 games is such a minuscule sample, it can’t possibly be relevant, right? I think most people—stat geek and layperson alike—would find this statistical event pretty unremarkable. In the abstract, they’re right: certainly you wouldn’t let such a thing invalidate a method or process built on an entire season’s worth of data. Yet sometimes these little details can be more important than they seem. Which brings us to perhaps the most ubiquitously useful tool discovered by man since the wheel: Bayes’ Theorem.

Bayes’ Theorem, at its heart, is a fairly simple conceptual tool that allows you to do probability backwards: garden-variety probability involves taking a number of probabilistic variables and using them to calculate the likelihood of a particular result. But sometimes you have the result, and would like to know how it affects the probabilities of your conditions: Bayesian analysis makes this possible.

So, in this case, instead of looking at the games or series directly, we’re going to look at the odds of Dallas pulling off their 2-1 record in Miami under each of our scenarios above, and then use that information to adjust the probabilities of each.  I’ll go into the detail in a moment, but the relevant Bayesian concept is that, given a result, the new probability of each precondition will be adjusted proportionally to its prior probability of producing that result.  Looking at the red dots above (which are technically the cumulative binomial probability of Miami winning 0 or 1 out of 3 games), you should see that Dallas is far more likely to go 2-1 or better on Miami’s turf if they are an even match than if Miami is a huge favorite—over twice as likely, in fact.  Thus, we should expect that scenarios suggesting the former will become much more likely, and scenarios suggesting the latter will become much less so.
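In code, the red dots are just a quick cumulative binomial, again using the scenario numbers from above:

```python
# Probability that Dallas wins at least 2 of 3 games in Miami (i.e.,
# Miami wins 0 or 1 of 3), given Miami's per-game win probability.
from math import comb

def dallas_2_or_better(p_miami):
    q = 1 - p_miami                       # Dallas's per-game win probability
    return comb(3, 2) * q**2 * p_miami + q**3

print(dallas_2_or_better(0.525))  # ~0.463 if the teams are nearly even
print(dallas_2_or_better(0.71))   # ~0.204 if Miami is a 71% favorite
# 0.463 / 0.204 is roughly 2.3 -- the "over twice as likely" claim above
```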

In its simplest form, Bayes’ Theorem states that the probability of A given B is equal to the probability of B given A times the prior probability of A (probability before our new information), divided by the prior probability of B:

P(A|B) = \frac{P(B|A) \, P(A)}{P(B)}

Though our case looks a little different from this, it is actually a very simple example.  First, I’ll treat the belief that the four analyses are equally likely to be correct as a “discrete uniform distribution” of a single variable.  That sounds complicated, but it simply means that there are 4 separate options, one of which is actually correct, and each of which is equally likely. Thus, the odds of any given scenario are expressed exactly as above (B is the 2-1 outcome):

P(S_x|B) = \frac{P(B|S_x) \, P(S_x)}{P(B)}

The prior probability for each Sx is .25. The prior probability of our result (the denominator) is simply the sum of the probabilities of each scenario producing that result, weighted by each scenario’s prior probability. But since these four scenarios are our only options and they are all equally likely, that element will factor out, as follows:

P(B) = P(S_x) \left( P(B|S_1) + P(B|S_2) + P(B|S_3) + P(B|S_4) \right)

Since P(Sx) appears in both the numerator and the denominator, it cancels out, leaving our posterior probability for each scenario as follows:

P(S_x|B) = \frac{P(B|S_x)}{P(B|S_1) + P(B|S_2) + P(B|S_3) + P(B|S_4)}

The calculations of P(B|Sx) are the binomial probabilities of Dallas winning exactly 2 out of 3 games under each scenario (note this is slightly different from the “at least 2” version above, so that Dallas is sufficiently punished for not winning all 3), and Excel’s binom.dist() function makes this easy. Plugging those calculations in with everything else, we get the following adjusted probabilities for each scenario:
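Here’s a minimal sketch of the whole update in Python (math.comb standing in for Excel’s binom.dist), using the per-game numbers from the four scenarios:

```python
from math import comb

# Miami's per-game home win probability under each scenario
scenarios = {"S1": 0.525, "S2": 0.63, "S3": 0.655, "S4": 0.71}

def dallas_exactly_2_of_3(p_miami):
    # Binomial probability that Dallas wins exactly 2 of 3 in Miami
    q = 1 - p_miami
    return comb(3, 2) * q**2 * p_miami

# With equal priors, each posterior is just the scenario's likelihood
# divided by the sum of all four likelihoods (the P(S_x) cancels).
likelihoods = {s: dallas_exactly_2_of_3(p) for s, p in scenarios.items()}
total = sum(likelihoods.values())
posteriors = {s: lik / total for s, lik in likelihoods.items()}
print(posteriors)  # roughly S1: .35, S2: .25, S3: .23, S4: .17

# Re-weight each scenario's series probability by its posterior:
new_estimate = sum(posteriors[s] * (1 - scenarios[s] ** 2) for s in scenarios)
print(round(new_estimate, 3))  # ~0.62, up from the ~0.60 prior average
```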

Note that the most dramatic changes are in our most extreme scenarios, which should make sense both mathematically and intuitively: going 2-1 is much more meaningful if you’re a big dog.

Our new weighted average is about 62%, meaning the 2-1 record improves our estimate of Dallas’s chances by about 2 points, widening the gap between the two teams from 20 points (60-40) to 24 (62-38). That may not sound like much, but a few percentage points of edge aren’t easy to come by. For example, to a gambler, that shift could be pretty huge: you normally need a 5% edge to beat the house (i.e., you have to win 52.5% of the time), so imagine you were the only person in the world who knew of Dallas’s miniature triumph—in this case, that info alone could get you 80% of the way to profit-land.

Making Use:

I should note that, yes, this analysis makes some massively oversimplifying assumptions—in reality, there can be gradients of truth between the various scenarios, with a variety of interactions and hidden variables, etc.—but you’d probably be surprised by how similar the results are whether you do it the more complicated way or not. One of the things that makes Bayesian inference so powerful is that it often reveals trends and effects that are relatively insulated from incidental design decisions. I.e., the results of extremely simplified models are fairly good approximations of those produced by arbitrarily more robust calculations. Consequently, once you get used to it, you will find that you can make quick, accurate, and incredibly useful inferences and estimates in a broad range of practical contexts. The only downside is that, once you start down this path, it’s a bit like getting Tetrisized: you start seeing Bayesian implications everywhere you look, and you can’t turn it off.

Of course, you also have to be careful: despite the flexibility Bayesian analysis provides, using it in abstract situations—like a meta-analysis of nebulous hypotheses based on very little new information—is very tricky business, requiring good logical instincts, a fair capacity for introspection, and much practice.  And I can’t stress enough that this is a very different beast from the typical talking head that uses small samples to invalidate massive amounts of data in support of some bold, eye-catching and usually preposterous pronouncement.

Finally, while I’m not explicitly endorsing any of the actual results of the hypo I presented above, I definitely think there are real-life equivalents where even stronger conclusions can be drawn from similarly thin data.  E.g., one situation that I’ve tested both analytically and empirically is when one team pulls off a freakishly unlikely upset in the playoffs: it can significantly improve the chances that they are better than even our most accurate models (all of which have significant error margins) would indicate.