
Does Truth Still Exist, or Are There Just Alternative Facts?

Evolution, "alternative facts," and what it means to be human.

About a month ago, Kellyanne Conway, advisor to President Trump, introduced a new term into most people's lexicon: "alternative facts."

She was responding to assertions made by Chuck Todd, host of NBC’s Meet The Press, that the White House had lied about attendance at Trump’s inauguration. Specifically, Press Secretary Sean Spicer had stated that this had been “the largest audience to ever witness an inauguration, period.” This claim, as Todd correctly observed, was demonstrably false.

“You’re saying it’s a falsehood,” Conway responded. “And they’re giving—Sean Spicer, our press secretary—gave alternative facts.”

Social media channels immediately lit up. Many people were appalled at the White House’s misinformation, while others were left wondering, “What’s the big deal?”

The truth is that the exact attendance at Trump’s inauguration probably doesn’t matter in itself. I want to make the case that there is an even bigger and more important issue at stake: Our society’s evolving relationship with the truth.

The issue of truth has come up a lot lately. Oxford Dictionaries even chose “post-truth” as its international word of the year for 2016, noting an approximately 2,000 percent increase in its usage in news articles and social media in the United States and the United Kingdom.

As the word “post-truth” communicates, the question isn’t as simple as whether someone is lying or not. We’ve known that politicians lie for a long time. The issue is that the very notion that an objective truth exists (or matters) is eroding. Instead, people seem to think that they’re entitled to their own truths.

Why is this happening? As I see it, there are two reasons—one that everyone is talking about and the other that most people are overlooking.

As Obama noted in his farewell address, the first reason is that we increasingly live in “bubbles.” We get our news from sources that agree with our already-formed opinions and surround ourselves with Facebook friends who confirm our biases. Web sites serve up information that is consistent with our user profiles and search histories. For the first time in history, we can choose exactly what messages we hear and ignore the rest. It’s easy to live in our own personal informational worlds. This wasn’t possible just a couple of decades ago, when most people got their information from a few highly trusted, largely non-partisan sources like the evening news and major newspapers.

But the second reason that we are increasingly living in a “post-truth” society is much more fundamental to who we are as human beings, and this takes a bit more explaining.

Let's back up for a second and reflect on exactly who—and what—we are. In addition to whatever else we are, we are animals with big brains. Evolution brought our species (Homo sapiens sapiens) into being about 200,000 years ago. Although there is some debate among scientists, it seems clear that by approximately 50,000 years ago our species existed in pretty much its fully formed version, complete with language and art. If you kidnapped someone from that era, dressed them in modern clothes, and put them in the middle of New York City, you probably wouldn't be able to tell them apart from anyone else. There hasn't been much evolution—biologically, at least—in the intervening tens of thousands of years.

So, we are evolved to fit an ancient environmental niche. Our species likely adapted to the African environment of those many millennia ago, and that environment looked very different from today's. We evolved to live in small groups or tribes of perhaps 20 to 50 individuals. Many of these people, in fact, would have been related to us. Certainly, all of these people would have known each other. It would be like living with 50 of your closest friends. Most of the threats to our well-being would have come from outside of this tribal group—from things like animal predators, natural hazards, or human beings from competing tribal groups. People likely spent much more of their time than we do today on survival activities, like hunting, gathering food, and finding or building shelters. To avoid danger, they would have had to trust their intuition and their powers of observation much more than most of us do today, living in our relatively safe and comfortable homes.

Although nobody knows for sure, it was in this kind of environment that the concepts of true and false were likely born. But there were no newspapers, televisions, or web sites. People probably would have considered something "true" if it met one of two criteria. First, something would have felt true if you could directly observe it. Seeing is believing. Second, something would have felt true (even if you couldn't directly observe it) if it were told to you by somebody like you whom you trusted, somebody in your tribe. These criteria for telling truth from fiction aren't much different from the ones most of us use today. And they've worked pretty well—until now.

Our social and technological environment has radically shifted in just the last decade or two, however, and these criteria are beginning to fail us. For one thing, the amount of information possessed by our species has grown exponentially, and it's now impossible to use seeing something as a viable criterion for believing it. It is increasingly important that we believe things that we cannot directly see. Take climate change, for instance. Most of us cannot directly observe greenhouse gas emissions warming the atmosphere. Most of us cannot directly see the polar ice caps slowly melting. In fact, in the realm of science in general, "seeing is believing" is impossible for most of us without access to a research lab. We have to take someone else's word for it. And, to bring in the second criterion discussed above, the people whose word we have to take might not be people like ourselves. They might not be part of our "tribe"—our ethnic, national, religious, age, or socioeconomic group.

Given all this, it's very easy for us to doubt climate change and other findings of science. Believing abstract information that we cannot see and that does not come from the lips of a close ally goes against our genetic programming as a species. And yet, the consequences of this doubt could be catastrophic.

At this point, you might be asking, "But hasn't technology given each of us an unprecedented ability to observe things? After all, we have streaming video and countless news sites, not to mention good old-fashioned radio and TV." It's true that over the last century, improvements in technology have dramatically increased ordinary people's ability to see things for themselves. Take the Vietnam War, for instance. That conflict was the first to be widely televised, providing ordinary citizens the opportunity to see with their own eyes the carnage of war. I would argue that it was this ability to actually observe what was happening in Vietnam that gave rise to the outpouring of moral outrage in America during the 1960s. The war suddenly seemed real and "true" to us.

Here's the problem: In the year 2017, for the first time in history, we can no longer believe what we see in video. We've had the ability to doctor photographs for decades. But now we also have the ability to dramatically alter and even falsify video. In December, people turned out in droves to see Rogue One: A Star Wars Story. While it was an excellent addition to the beloved science fiction saga, history will probably remember the film for something different: faking a human being. It was arguably the first film to cast a fully computer-generated recreation of a deceased actor in a major role. The villain Governor Tarkin was a realistic CGI version of Peter Cushing, who died more than 20 years before the film's release.

While such films are wildly entertaining, I believe we haven't come to grips with what this ability to simulate people in video does to our relationship with "truth." In Rogue One, this ability was used in an honest and entertaining way. But it will undoubtedly also be used to counterfeit reality, to lead people to believe things that simply are not true. Videos of people in authority doing unethical or violent things have been the basis of recent outrage in America—whether we're talking about videos of Donald Trump bragging about groping women or cell phone footage of police shooting unarmed citizens. So far, all of these videos have been real. But imagine the damage that could be done if fake videos of this nature were circulated and widely believed. Using our classic criteria for telling truth from fiction, could we even tell the difference?

Creating video of realistic computer-generated human beings is currently very expensive, difficult, and imperfect, so it’s unlikely to be used frequently. But if we have learned nothing else from the last decade or two, it is that technology gets cheaper and better very quickly, and as it does, more and more people gain access to that technology. In the next 10 years, it’s likely that we will be able to believe very little of what we see in video.

We as a species have solved many problems and confronted many sticky issues during our long history. We overcome difficulties when people of good faith join together to understand the truth of whatever is happening. This shared understanding of “fact” then functions as the basis for solutions. What makes our modern dilemma fundamentally different is that this very basis is in question. If this doesn’t worry you, it should. And though we’re not talking about it much as a society, we need to. That’s the big deal.

David B. Feldman is an Associate Professor of Counseling Psychology at Santa Clara University and the author of Supersurvivors: The Surprising Link Between Suffering and Success.
