
Verified by Psychology Today


The Downside of Having Nothing to Hide

In the future, our motives will be transparent. That's not good.

Source: Mike Mozart, used under Creative Commons License

Imagine driving down a street and hearing impatient honking behind you. Annoying, right? But there might be extenuating circumstances. Perhaps there's a woman in labor in that car. Or someone who could miss her plane because of all the traffic from that accident two miles back. On the other hand, that honker could have reasons that make him more annoying than you'd expect. Maybe that driver is a cheating spouse rushing to get home so an alibi will hold up. Maybe it's a goon out collecting protection money for a guy named Ice Pick Willie.

At any given moment, such details are usually unknown and, practically speaking, unknowable. So it's reasonable to take an average of all possible motives—one that is typical, and neither especially bad nor especially good—and ascribe that to the stranger. This is a sound application of the Copernican principle, which holds it most likely that there's nothing special about the place and time in which you are observing anything. So whatever you're perceiving is probably in the average range for its type. The driver behind you is unlikely to be 4 feet tall, and equally unlikely to be 7 feet 2. Analogously, that driver is unlikely to have an extremely good or extremely bad reason to badger you.

An alternative to the Copernican approach is to use stereotypes. But a stereotype like "New Yorkers are rude drivers" or "old people are bad drivers" is still a rough generalization, in which you invoke (supposed) facts about a category of people to explain the behavior of an individual. You only do this because you don't know enough about the person in front of you and their motivations of the moment.

Our widespread lack of information about individuals and their circumstances is a veil of ignorance, hiding important moral facts. Lack of knowledge means that most people who interact with me only see my actions, not my motives. They know I'm honking, but they don't know why.

Despite the temptation to use stereotypes to fill in the gaps in our knowledge of one another, the veil of ignorance has not been bad for social relations. You could even argue that it makes them, on balance, more smooth and fair.

Yes, people sometimes use harmful stereotypes—but often they don't. And when we aren't using a negative stereotype, the veil forces us to see each person who imposes on us as an equal, whose motives and values are, most likely, much like our own. Thus, we refrain from judging him.

What happens, though, if the veil of ignorance is removed? If it becomes possible to know, in real time, all the relevant details about what made the person honk? This is the prospect raised by the Internet of Things, that near-future Web over which phones, cars, highways, fitness trackers, offices and stores will all talk to one another. Soon, when the IoT is part of daily life, as Daniel Burrus noted here, concrete in a bridge will detect ice and alert your car to the hazard. Then the car "will instruct the driver to slow down, and if the driver doesn’t, then the car will slow down for him."

But how much will it slow down? The right answer will weigh the risks of a given speed against the driver's reason for going fast—an emergency trip to the hospital requires a different solution than wanting to get to a restaurant before someone else scores your favorite table. Therefore, objects that work for us won't just report what we do. They will also explain what we do.

What brings this to mind is a growing movement to change the way we talk about cars bashing into one another. As Matt Richtel reports here, traffic-safety advocacy groups are campaigning to rebrand the "car accident." That term invites people to think that colliding with another car or plowing into a ditch are events that just happen, like lightning strikes or heavy rain. In fact, almost all car crashes are caused by people's decisions. Driving while drunk, driving while texting, driving while asleep and other choices are the cause of "accidents," which kill some 38,000 people a year on American roads.

So police departments around the U.S. have been replacing the blank veiling concept of "accident" with the word "crash." The authorities want to shift to a concept that nudges people to see drivers as responsible for what happens to them. In this, they are like their colleagues elsewhere in government, who are trying to reclassify as controllable a lot of experiences that people used to see as fate. Heart attacks and diabetes don't just happen, we're told—they're the consequences of choices we make about food and exercise. Catastrophic global warming isn't something that's occurring by accident; we're making it worse by flying and wasting electricity. If you're poor in old age, don't blame capitalism—you were supposed to be saving for retirement!

You might think that such an outlook would encourage people to take more responsibility for their own conduct, and thus help them not only to avoid accidents but to see themselves as empowered agents of their own lives and authors of their own fates.

I think, though, that the long-term consequences will be the opposite.

This is because today's message of individual responsibility coincides with an equally powerful message of individual incompetence. The psychologists who have the attention of our media and our leaders are the ones who say that people make mistakes all the time, about everything: what to eat, what to buy, what to fear, what to plan for. Driving is no exception: One of the most commonly cited justifications for the coming age of self-driving cars is that way fewer people will die on the road. Driving, like so many other skills, will soon be—if it is not already—an activity that machines are better at than humans.

When you combine the message of individual responsibility with the message of human incompetence, the only consistent stance left is that people ought to let the machines do the work. Let the cars drive themselves, and thousands of people will live who would have died at the hands of human drivers. Perhaps that is a worthwhile trade, even though, as a result, all the motives and drives that we once kept to ourselves will become public matters, available for public judgment.

Yet it is not a trivial matter to shred the veil of ignorance that forced us to assume others' motives were no better or worse than our own. When we know the precise drivers of one another's behavior, another layer of autonomy—the capacity to decide for yourself how much of a hurry you are entitled to be in, how much you should impose on others—will be gone. The Internet of Things is committing us to a transparency that is so complete, and so different from life today, that we can scarcely imagine the world we are creating.

More from David Berreby
More from Psychology Today