Michael Lewis’s book Moneyball tells the story of how a Major League Baseball team, the Oakland Athletics, stopped relying on scouts to judge ballplayers’ talents and turned instead to statistical analyses. Moneyball expresses one of Lewis’s favorite themes: the so-called experts don’t know what really matters and, to make matters worse, they are highly confident in their own judgments. They don’t know what they don’t know. Moneyball also expresses another favorite Lewis theme: smart outsiders outwitting the so-called experts (see also The Big Short). Moneyball was published in 2003 and is still a bestseller; it was turned into a movie starring Brad Pitt as Billy Beane, the General Manager of the Oakland Athletics.

Lewis is a sufficiently talented writer not to let facts get in the way of entertainment. His version of events is reasonably but not entirely accurate (e.g., see the 2012 essay by Will Braund, How true is Moneyball?). These inaccuracies don’t bother me very much.

My real concern is with the central message of Moneyball. The myths it presents appear to be taking hold, not only within the general public but within the judgment and decision making community, whose members often cite Moneyball to back up their professional recommendations.

This essay describes the three Moneyball myths that worry me the most. I should make clear that the essay is not an attack on Michael Lewis. I have greatly enjoyed each of his books I have read, and I have read most of them. I have also met him for lunch a few times and found him to be witty and charming. I am a big fan.

The Moneyball myths are necessary for telling the story — a more nuanced treatment wouldn’t have become a bestseller. It would have been better suited as an article in a professional journal.

However, when I see actual articles in professional journals that cite Moneyball, I feel a need to sound a warning.

Here are the three myths I want to examine more closely:

First, baseball scouts don’t know which players are genuinely skilled.

Second, baseball scouts have a façade of expertise but nothing of substance.

Third, baseball scouts are allergic to statistics.  

Let’s examine these three claims.

Claim #1. Baseball scouts don’t know which players are genuinely skilled. They rely on scouting prejudices, including absurd features: “Some of the scouts still believed they could tell by the structure of a young man’s face not only his character but his future in pro ball. They had a phrase they used: ‘the Good Face’” (p. 7).

Even if we accept Moneyball’s premise that the statistical analyses outperformed the scouts’ judgment, it doesn’t follow that the scouts’ judgments were useless. Remember: before the wide availability of baseball data, all that teams had available was the judgment of the scouts. If that judgment were useless, it would mean that scouts were no better at identifying talented young players than fans in the stands, no better than people selected at random. And no one has ever done that research.

We have anecdotes to the contrary, such as the 1988 incident in which a scout heard about a Panamanian shortstop with no training as a pitcher who had come in as a replacement for a struggling starting pitcher and had done well. The scout watched the shortstop/pitcher throw and liked his smooth motion and athleticism. Even though he weighed only 155 pounds and threw only 85–87 miles per hour, the scout signed him to a $2,500 contract with the Yankees. His name was Mariano Rivera.

Anecdotes like these are not data, but we don’t have data. To my knowledge, no one has compared the judgment of professional scouts against a control group. It wouldn’t be hard: show video clips of hitters swinging at pitches, or of pitchers winding up and releasing the ball, just the motions, not the outcomes. Some of the clips would show minor leaguers about to be released; others would show successful players. Or else use clips from Japan’s Nippon Professional Baseball league and its affiliated minor league, so the participants couldn’t recognize the successful players.

Would the baseball scouts do better than the controls at identifying which players became successful major leaguers and which washed out at the bottom of the minor leagues? Perhaps there would be no difference. I suspect that the scouts would greatly outperform the controls.

Why haven’t the decision researchers who tout Moneyball been skeptical about Claim #1? I think it is because Claim #1 fits their preconceptions and they don’t feel a need to test those preconceptions — which is the criticism that Moneyball makes of the scouts.

Claim #2. Baseball scouts have a façade of expertise but nothing of substance.

If, contrary to Claim #1, scouts do have expertise, what might it consist of? One possibility is that scouts appreciate athleticism and mechanics.

Back to the Mariano Rivera anecdote. The scout who watched Rivera admired his smooth motion. Similarly, scouts might admire a hitter’s smooth swing. Statistics do not capture these kinds of subtleties, but we see them in action: baseball commentators on TV regularly remark on a pitcher’s motion, or use replays to show how a hitter is “stepping into the bucket,” compromising his ability.

Consider other sports. During the Summer Olympics diving competition, experienced television commentators alerted us when a diver rotated too far to enter the water cleanly, creating a bigger splash. Sure enough, the slow-motion replay showed the problem, but the commentator had seen it as it happened. Surely that counts as expertise, gained only after countless hours of watching athletes perform their craft.

Even though baseball scouts may have important skills, their ability to judge talent may still be inferior to statistical analyses. Though this observation does not bear on the thesis of this essay, it should be taken into account, and tested, by comparing the judgments of scouts to those of the analysts rather than to a control group.

Presumably, the analysts would use the most advanced statistics currently available. So it makes little sense to compare these analysts to the kind of scouts lampooned in Moneyball. It is only fair to engage the most advanced scouts: those selected by a rigorous vetting process and then further trained through current state-of-the-art methods for developing expertise. In Superforecasting, Tetlock and Gardner showed how to develop predictive expertise regarding world events; it can hardly be more difficult to develop predictive expertise in baseball scouting. Let’s put Moneyball to the test.

Claim #3. Baseball scouts are allergic to statistics. (“Billy was forever telling Paul that when you try to explain probability theory to baseball guys, you just end up confusing them.” p. 34).

This claim is fun to make, backed up by anecdotal evidence. But Lewis himself describes how baseball scouts were already using statistics; they just weren’t using very good ones, relying instead on the traditional statistics of the time. The scouts happily studied the batting averages of hitters and the earned run averages of pitchers. They weren’t scared off by numbers. They depended on these misleading statistics too much, not too little.

Moneyball cites the work of Bill James, who, in the 1970s and 1980s, showed what could be done with the data that were just becoming available. Even before I read Moneyball, I had devoured several of the Bill James Baseball Abstract books when I started playing fantasy baseball in the early 1990s, and I became an instant convert to the power of statistical analyses in baseball. So this essay is not criticizing statistical approaches. Rather, it explains why we might want to reconsider the contempt heaped on baseball scouts.

Moneyball has become a metaphor for why we shouldn’t trust the experts. It is a prime exhibit in the War on Experts. The objective of that war is to devalue experts in a variety of fields and recommend that experts be replaced by algorithms, checklists, and analyses. That recommendation is sometimes helpful but often reckless and counterproductive. 

Of course, the real issue is not scouts vs. analysts but rather how to design statistical and computational tools to allow scouts and other decision makers to be more successful. But we will have trouble pursuing that path if we buy into the myths of Moneyball.

You are reading

Seeing What Others Don't
