In his fascinating new book, Everything Is Obvious: Once You Know the Answer, Duncan Watts, a principal research scientist at Yahoo! Research and former professor of sociology at Columbia University, takes a critical look at common sense and shows how dangerously bad we are at predicting certain outcomes.

CF: In a nutshell, what is wrong with common sense?

DW: I'm going to start off by saying what's right with common sense. I was in the Navy. I went to an all-boys school in Toowoomba, Australia. The students were mostly sons of farmers or rugby players, or both. There's a lot of emphasis on common sense in those environments, and it's legitimate, because common sense is extremely good at handling all kinds of day-to-day social interactions--things that we encounter over and over again and that are highly context-dependent.

What's so great about common sense is that when you possess it, you don't even know why you know what you know. You just know it.

CF: Do you equate common sense with intuition, then?

DW: Well, all of these terms are a little bit blurry. I use it to mean two related but different things. First, there's common sense knowledge, by which I mean the collection of facts, rules, and pieces of received wisdom that we accumulate in the course of everyday life, such as knowing that when you walk into an elevator you don't face the other people, or that you don't go to work naked.

But the other kind of common sense I refer to is what you might call common sense reasoning, which is how we reason about cause and effect and how we explain the things we observe.

The two kinds of common sense are related because a lot of things that sound like statements of fact turn out to contain hidden reasoning processes about cause and effect. For example, it sounds like a common sense fact to say that the police are more likely to investigate incidents of serious crime than non-serious crime. But it turns out that there's a huge correlation between the crimes that police investigate and the socioeconomic status of the neighborhood where the crime takes place. It's not surprising when you think about it: wealthy people in wealthy suburbs are more likely to report crimes that happen, and police are more likely to respond to those crimes because they come from people who are wealthy, who are likely to vote, and who have resources at their disposal if they're not happy.

As a result, the crimes that are reported by those people are more likely to get classified as serious, because the police investigated them. Thus the common sense statement that police are more likely to investigate serious crimes is true, but the causal arrow is the opposite of what common sense reasoning would suggest (i.e., that police investigate crimes because they are more serious).

This is one of the problems of common sense. It blurs the distinction between statements about how things are and statements about how things come to be.

CF: So everything is obvious if we agree with it?

DW: Almost. We can agree that something is common sense as long as we share the same set of assumptions. I think people really don't understand how much of a problem this is, because they assume that what they think is common sense is right.

Someone was just telling me the other day that she was mystified at how we couldn't just use common sense to make policy. It seemed completely ridiculous to her--why isn't there someone in politics who could say that it's common sense that everyone should have equal access to healthcare? But that's a belief that is deeply disputed! And the people who disagree with her also feel that they are using common sense.

So common sense is actually very good for resolving everyday situations, where everyone shares the same set of assumptions. The problem is that it feels so effective to us in these circumstances that we're tempted to use it to make decisions and plans and predictions about situations that are not everyday situations. We want to use it to make decisions that are about people who are very different from us, who are interacting with each other in complex ways over extended intervals of time.

This sounds abstract, but it's precisely what you encounter when you make policy. For example: How can we change the incentives that doctors face when they deal with patients to improve the efficiency and effectiveness of health care?

To many politicians, this all sounds like an exercise in common sense. As Bill Frist, former United States senator from Tennessee, put it in an op-ed piece that I cite, "This is not rocket science." But that's really, really misleading. Because you're not talking about a problem that's like you disciplining your child, or you trying to figure out how to get a person whom you know very well to do something that you want him to do. This is about you making a set of rules that are going to apply to hundreds of thousands of individuals who are all very different, and who have different types of circumstances and live in different parts of the country. As soon as you try to apply your common sense to solve this problem, you're going to fail, and you're going to generate all kinds of unanticipated consequences.

CF: So common sense is very adaptive for certain situations but maladaptive for others, and we don't understand the distinction?

DW: Right. The problem is that when thinking about a very large-scale, abstract problem, what we tend to do is reduce it down to a single scenario, like, "Oh, I go to the doctor. How would my doctor respond differently if he were being paid this way or that way?" But very often there is some other factor that we didn't put into our simulation that turns out to be important, and that means our prediction will be wrong.

CF: How else can common sense go wrong?

DW: If our model of individual behavior is flawed in this way, our model of collective behavior is even worse. When marketers think about their demographic, they actually construct stories about individual people. For example, when they're trying to come up with ideas for how to sell a particular product, they might say something like "Emily is a 27-year-old woman who lives in Chicago and she has a four-year college education, and she's just gotten engaged to Doug." Then they ask, "How are we going to sell this product to Emily?"

But of course Emily doesn't exist. In reality you have a very diverse population of people who have all kinds of needs and incentives and who are also interacting with each other in ways that are hard to anticipate. Stories like the one about Emily paper over all of that complexity, effectively replacing the whole system with a single "representative individual," and then we reason about her behavior as if she were an actual person. It's a big error.

CF: Why do advertisers and policy makers keep making this error, then?

DW: It's a good question: If we're making such obvious mistakes with common sense, why are the mistakes not obvious? When we study physics, we have all of this common sense intuition that we bring to physics class. But very quickly, you realize that your intuition sucks. You can't guess the answers. You have to work through the math.

But the social world is different. To begin with, we have a lot more intuition, so we're much better at coming up with explanations than we are in physics. Even then, we still get things wrong for the reasons I've been describing, but this is where the third problem of common sense reasoning comes into play: there's something about how we learn from history that prevents us from realizing what it is that we're doing wrong.

Take what's happening now in the Middle East. Once something has come to an end, we'll construct a story about what happened. It's probably going to involve Facebook, and it's probably going to involve the Google exec Wael Ghonim and his cohort of tech-savvy young leaders who got together in Egypt, and some rebel leaders we haven't yet named, and very likely some critical battle or protest--and it's going to involve these things not because they actually explain anything but because that's how we tell stories. Stories have critical moments where things change, they have critical characters around whom the action revolves, and they have a beginning, middle, and end.

The end is particularly important, because at the end, everything makes sense. At the end, we finally get to see the causal thread that holds everything together. But what's discarded is all the other stuff--the other characters who might have been important but who turned out not to be and all of the uninteresting events that also happened. In a sense, there's no way to contest our nice story, because most of the data that is inconsistent with it will be buried with history.

No matter how complex the past is, we can always come up with a story after the fact that makes what happened seem sort of inevitable. Even if we didn't know at the time that it was going to happen, now we know that it was going to happen, and we know that because it did happen.

The fact that the persuasiveness of the story depends on the way it's told should tip you off that we judge historical explanations in terms of their properties as stories, not in terms of their ability to explain data. It's fine to love stories, if they're in the fiction category. When we take those stories and call them non-fiction, we get in trouble.

CF: What's wrong with stories?

DW: Up to a point, nothing. Stories are helpful. They connect you to the past. They motivate people and give them a sense of meaning and make the world seem not totally random.

But the problem is that they sound like causal explanations. And because they do, as soon as we have them, we immediately turn them around and use them to make predictions. We immediately try to generalize from them.

Once we have a statement like: "Wael Ghonim and his cohort of influencers triggered the revolution in Egypt," or "Steve Jobs is responsible for the amazing success of Apple after he came back"--even if we mean them just to be descriptions of events we have witnessed, we can't help but treat them as evidence of underlying rules that apply generally. Whenever we ask why a particular revolution happened, or why a particular company succeeded, we're trying to understand why revolutions in general happen and companies in general succeed.

This is critical to the argument because this is also why we never realize what we're doing wrong. To give a simple example, many people think they'll be happy if they win the lottery. Why do they think this? Well, probably what they're thinking is, "I have these problems in my life, and those problems would go away if I had more money." But what they don't think about is that if they had more money, they'd have a whole other set of problems, and those problems are going to make them at least as miserable as they are now.

What we should learn from examples like this one is that it's surprisingly difficult to anticipate how people will react to circumstances beyond the immediate here and now--we're even bad at predicting our own future happiness. But typically we don't learn this lesson. Instead we think, "If only I'd known what I know now, I would have predicted correctly." We think our inability to predict is simply due to a lack of knowledge. But what we don't appreciate is that we only know what we know now because we made the mistake. It's simply not knowledge we could have possessed at the time! Some things, in other words, are impossible to predict, even in principle.

This sounds simple, because "everyone knows" that the future is unpredictable, but it's probably the most difficult point to grasp in my book. It's related to the concept I discuss in Chapter 5 that history can't be told while it's happening--that we can't say what something means at the time, because its meaning is something that will only be settled in the future, as a consequence of events that are yet to happen. In other words, to say what something means requires you not only to predict future events but to predict how historians of the future beyond that future are going to look back and interpret everything that happened between now and then! This is not prediction at all, really. It's prophecy.

"Black Swan" events, like the Storming of the Bastille, or the development of the Internet, or Hurricane Katrina, are great examples of when we look back and wish we'd been able to predict something. But Black Swans turn out to be precisely the sort of thing that we can't predict, not just because future events are unpredictable, but also because Black Swan events aren't really events at all.

Hurricane Katrina, for example, was a big storm, but it wasn't the size of the storm that made it what it was. It wasn't even the biggest storm that summer. Rather it was a whole complicated sequence of factors and consequences that exacerbated its impact: its location right on top of New Orleans, the fact that the levees failed, that officials were unprepared, that all those people died and suffered, the perceived racism in the media coverage, and the long-term economic consequences of so many people leaving and not returning. All kinds of stuff accreted around that event, and that's what made it a Black Swan. But then the Black Swan isn't really a thing at all--it's a label for a whole chunk of history.

CF: How do these errors of prediction apply to the stories we hear about successful people--stories we love to share in our culture?

DW: That's another example where we pay too much attention to the story and not enough to the data. It sounds like common sense to say that if you want to understand why successful people are successful, you can go out and study them--where the implication, of course, is that you can then replicate their success.

The problem is that if you also looked at unsuccessful people, you'd find that they share many of the same qualities as the successful people. For example, there are probably many people who talk just like Donald Trump, but you just don't know them, because they're not successful!

What this means is that even if there are certain attributes that are shared by many successful people, they are still not very predictive of success. Even if most people who are successful have attribute X, if lots of unsuccessful people also have attribute X, then having X doesn't tell you much about your likelihood of being successful.

From a prediction standpoint, what you want is an attribute that all successful people have, and no unsuccessful people have. But if you only ever look at the successful people, you'll never know if you have the right one.

In fact, probably no such "perfect predictors" exist, because when it comes to success there is invariably a lot of randomness. The same applies to other kinds of rare events, whether airplane crashes or school shootings: Most school shooters are teenage boys who wear a lot of black, are alienated from their peers, and have occasionally expressed violent thoughts. But that description fits thousands of boys who have never gone on to harm anybody. So simply identifying the things that known school shooters have in common actually gives you a lousy predictive algorithm. Once again there's probably no good predictive algorithm, because there's way too much randomness involved.
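Watts's point here is a base-rate problem, and a quick back-of-the-envelope calculation makes it concrete. The minimal sketch below, in Python, uses entirely hypothetical numbers chosen only for illustration--nothing here comes from the book or any real dataset:

# All numbers are hypothetical, for illustration only. Even if nearly
# every known case has attribute X, X is a poor predictor when the
# outcome is rare and X is also common in the general population.
population = 10_000_000      # e.g., a large cohort of teenagers (hypothetical)
cases = 10                   # rare outcome (hypothetical)
p_x_given_case = 0.9         # 90% of known cases have attribute X
p_x_given_noncase = 0.01     # but so does 1% of everyone else

cases_with_x = cases * p_x_given_case
noncases_with_x = (population - cases) * p_x_given_noncase

# Positive predictive value: P(case | has attribute X)
ppv = cases_with_x / (cases_with_x + noncases_with_x)
print(f"P(case | X) = {ppv:.6f}")  # ~0.00009, roughly 1 in 11,000

Even with a 90 percent hit rate among known cases, someone with attribute X in this toy example has roughly a one-in-eleven-thousand chance of being a case--exactly the "lousy predictive algorithm" Watts describes.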

CF: So, what can we do differently?

DW: First, it's important to note that I'm not saying that everything is unpredictable. Some things are predictable and some things aren't, and it's important to differentiate between the two. Credit card default rates are one phenomenon that can be predicted with surprising accuracy, on average. Or seasonal flu caseloads. Or a lot of online behavior, such as which content people like. You can also make reasonable predictions about the weather, at least in the very near future. These predictions might not be as good as you would like them to be, but they are better than simply guessing, and that may be good enough for many purposes.

Second, for the sorts of predictions you can't make, it's important to avoid making plans that depend sensitively on these predictions.

We probably can't reliably predict the timing of the next financial crisis, or the next breakthrough innovation in technology or social media, and so we probably shouldn't act as if we can. Instead, we should develop plans that acknowledge our uncertainty about the future and hedge as much risk as possible.

CF: Another method for dealing with our inability to predict is a "searching" rather than a "planning" orientation. Tell me more about that.

DW: "Measuring and reacting" is what I call it. The traditional view of planning has prediction at its core: you have some vision of the future, and you build around that. This other view is about trying to find out what's happening right now and asking how you can react to it as productively as possible.

Also, you can do field experiments. Going back to the health care example, if you don't have historical data on doctors and patients, you could run a series of experiments where you try different compensation schemes and bureaucratic rules at different hospitals in different states to see which ones work. Then you could roll out the ones that work to larger groups of people. If you do this in a systematic way, you'll learn something.
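As a rough illustration of what such a field experiment might look like, here is a minimal sketch in Python. The compensation schemes, hospitals, and outcome numbers are all simulated placeholders invented for this example--not real health care data and not anything from the book:

import random
import statistics

# Minimal sketch of a randomized field experiment: hospitals are
# randomly assigned a compensation scheme, an outcome is measured,
# and the schemes are compared. All data are simulated placeholders.
random.seed(42)
schemes = ["fee_for_service", "salary", "pay_for_performance"]
hospitals = [f"hospital_{i}" for i in range(90)]
random.shuffle(hospitals)

# Random assignment: 30 hospitals per scheme.
assignment = {h: schemes[i % 3] for i, h in enumerate(hospitals)}

def measure_outcome(hospital, scheme):
    # Placeholder for a real measurement (cost, readmission rate, etc.);
    # the hospital id is unused in this toy model.
    base = random.gauss(100, 10)
    effect = {"fee_for_service": 0, "salary": -2, "pay_for_performance": -5}
    return base + effect[scheme]

results = {s: [] for s in schemes}
for h, s in assignment.items():
    results[s].append(measure_outcome(h, s))

for s in schemes:
    print(f"{s}: mean={statistics.mean(results[s]):.1f}, "
          f"sd={statistics.stdev(results[s]):.1f}, n={len(results[s])}")

The essential ingredients are random assignment and a measured outcome; with those in place, the comparison across schemes tells you something causal rather than merely anecdotal.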

And in the realm of policy, you can use the "bright spots and bootstrapping" approach. Instead of deciding on and implementing a solution in a top-down manner, you reverse the process: you go out to the periphery, you try to learn what's already working, and then you figure out how to generalize these homegrown solutions.

CF: It sounds like you're advocating for more humility among experts.

DW: In a sense. Bootstrapping and other bottom-up techniques require that experts stop thinking of themselves as experts at coming up with solutions, and instead become experts at finding and leveraging solutions that already exist, or could exist if only circumstances were slightly different. They have to say, "I don't actually know how to solve this problem, because I can't possibly understand all the specific circumstances. But I can become very good at helping people help themselves."

CF: How have you used the "measuring and reacting" method at Yahoo?

DW: Well, Yahoo is filled with such examples. Bucket testing is totally standard in the web world. You have a huge audience and you're trying to predict what people want. Take, for example, the stories on the front page of Yahoo. In part those stories are there because of editorial input, but they're also a product of real-time learning--because we show groups of people different arrangements of stories and we see which ones they click on. And if we find that one story gets a lot more clicks, then that's the one that gets rolled out to the whole audience. There's a surprising amount of science that goes into the content that a user sees on a given web page, not to mention the arrangement of search results, ads, etc. The web is in many respects ideal for measuring and reacting, because the scale of the audience is so huge and the cost of running experiments is relatively low.
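For readers curious about the mechanics, here is a minimal sketch of a bucket test in Python. The hashing scheme and the click numbers are invented for illustration and are not a description of Yahoo's actual system:

import hashlib

# Minimal sketch of bucket testing: users are deterministically split
# into buckets, each bucket sees a different story arrangement, and
# click-through rates decide the winner. All tallies are placeholders.
def bucket_for(user_id: str, n_buckets: int = 2) -> int:
    """Stable hash-based assignment: a user always lands in the same bucket."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % n_buckets

print(bucket_for("u123"))  # same user, same bucket, every time

# Simulated impression/click tallies per arrangement (placeholder data).
tallies = {0: {"impressions": 50_000, "clicks": 2_100},   # arrangement A
           1: {"impressions": 50_000, "clicks": 2_400}}   # arrangement B

ctr = {b: t["clicks"] / t["impressions"] for b, t in tallies.items()}
winner = max(ctr, key=ctr.get)
print(f"CTRs: {ctr} -> roll out arrangement {winner} to everyone")

Hashing the user ID makes the assignment stable, so each person keeps seeing the same arrangement for the duration of the test; the winning arrangement is then rolled out to the whole audience.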

CF: But measure and react is not restricted to the online world.

DW: An "offline" example is Harrah's casino. The guy who runs it says there are two ways to get fired from Harrah's: one is stealing money, and the other is not including a control group in your experiment. It's scary that casinos are doing this, but slot machines are perfect for this kind of experimentation because you can program each one with a slightly different algorithm and see which ones make more money. You can change the noise they make, the lights, the decorations. They are constantly fiddling with all of the variables that affect the payout--all of which may be socially destructive, but it is still very smart!

CF: Why do you think the web is exposing the "breakable" aspects of common sense?

DW: The web forces us to confront the data. Once you test a theory-- like Malcolm Gladwell's theory about the "influencers" driving popularity--against thousands of tweets, for example, you see that that nice story is not doing very well.

Once your theories are proved wrong many times, you realize that your intuition is pretty unreliable, so you have to keep testing things. As more of the economy moves online, and more people in power are confronted with data, they're going to learn that their intuition is not what they think it is. Hopefully that will lead to a virtuous cycle in which they start to demand more data: "Let's build an infrastructure to run these experiments. Now that we've got this infrastructure, we're generating even more data, and we keep finding more ways in which our intuition doesn't work." I'm hopeful that that is going to happen. The more you go down this path, the more apparent its value becomes.

CF: If people accept what you're saying, it might squash some of their motivation. Believing that we can predict things and get certain outcomes if we do the "right" things helps us go on in a largely random world.

DW: I think that explains some of the defensive reactions I'm getting to the book. Some people associate determinism with meaning. If you look at all the major religions, they're all deterministic. Someone is in charge and he or she has a plan for us. In a way, it's an oppressive view. But people believe in fate and in destiny, that things happen because of the ends that they're going to reach.

I would argue that meaning is separate from what I'm saying about predictions. When I joined the Navy, I went through a process of random allocation to a unit. Two guys were with me throughout the recruiting process; we all lined up together, and we each went to separate divisions. In that moment, the rest of our lives got determined. When I look back 24 years later, many of my best and oldest friends, people with whom I had all kinds of life experiences and without whom I wouldn't be the person that I am--they all were in my division. Do I think that wasn't random? No, of course I think it was random! There was no hand of God coming down and moving me around so that I would meet these people and go on to have the life I had. But do I think it doesn't mean anything? No. Meaning is something that we create, so it's just as relevant when we associate it with random outcomes as it is with non-random ones.

The mistake is to assume that because things mean something to us, they're not random. If someone is successful, we assume they must have been more talented, because of the socially meaningful outcome they achieved. We think that can't be arbitrary. If Facebook is a 50 billion dollar company, Mark Zuckerberg must be a genius, because it couldn't have happened otherwise. I have no doubt that Mark Zuckerberg is a smart guy, and I think Facebook is a great company, but there are lots of reasons to think that it could have been otherwise.

