By Gary Drevitch, published on September 5, 2017 - last reviewed on December 2, 2017
The artificial intelligence revolution is here, and MIT physics professor Max Tegmark believes the implications are vaster than most of us imagine. Tegmark, cofounder and president of the Future of Life Institute, says that as technology gives us the power either to flourish or to self-destruct, "we prefer the former." In Life 3.0: Being Human in the Age of Artificial Intelligence, he lays out both utopian and dystopian visions of a world dominated by AI. His prescription for the day we cease being Earth's most intelligent minds? Humility.
What do you make of the idea that self-driving cars will produce safer roads only if human drivers stay out of the equation?
I find it humbling. At the same time, if we keep saying that in our human-machine interactions it's the humans who are the problem, then maybe we've been asking the wrong question. After all, we want to create these machines to make life better for us. Ultimately, you could have much more efficient traffic flow if there were no humans at all on the planet and just robots driving around. But is that what we want?
Are AI engineers designing our obsolescence?
I think it's important that they ask themselves about the social implications of what they build. You can't just say, "That's not my department."
How could AI design become more responsible?
When a German pilot deliberately crashed a plane into a mountain in 2015, all he did was set the autopilot to descend. There's no reason an autopilot can't be built that would refuse to do that. And ideally we will have cars that will refuse to drive into a crowd of people. For that to work, engineers not only have to think about how something they build will do what the user tells it, but also make sure that if the user violates fundamental goals of society, it's not going to go along.
Can we envision the moment when our computers realize they are smarter than us?
Your brain is a collection of quarks and electrons that processes information in a very complex way. There's no law of physics saying that you can't put quarks and electrons together in an even better way in a computer. Some think it's never going to happen; others think it won't happen for hundreds of years. But about half of leading AI researchers think it is going to happen in our lifetime.
You were amazed by an AI system that figured out the best strategy for the video game Breakout. Why?
That was a moment when it really hit me that an extremely simple algorithm can learn to do really smart things. When the Deep Blue computer beat Kasparov in chess, its intelligence had been programmed by humans, and it won because it could compute faster than he could. But for this kind of deep reinforcement learning, the human only has to put in a simple learning rule and then the algorithm, like a child, learns to get really good.
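The "simple learning rule" Tegmark describes can be illustrated with the most basic form of reinforcement learning: tabular Q-learning. The toy environment below (a five-cell corridor with a reward at the far end, with all parameters chosen for illustration, not taken from the interview or from the Breakout system) is a minimal sketch of the idea that one short update rule, applied repeatedly to trial-and-error experience, lets an agent discover good behavior on its own:

```python
import random

random.seed(0)

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + ACTIONS[action], 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True   # reached the goal
    return nxt, 0.0, False

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r, done = step(s, a)
        # The entire learning rule: nudge Q toward reward + discounted future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy in every non-goal cell is "move right"
policy = ["right" if Q[s][1] >= Q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

Deep reinforcement learning, as in the Breakout system, replaces this lookup table with a neural network that estimates Q-values from raw pixels, but the update rule at its core is the same one-line nudge shown above.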
So an algorithm that thinks like a child is a major advance?
When a kid is born, it has 100 billion neurons wired up in such a way that it isn't able to do much except scream and nurse. Four years later, it's able to have fantastically interesting conversations. Where did that intelligence come from? It wasn't programmed; the brain uses some very simple algorithms to take all the data that come in through our senses and learn from them. The biggest breakthroughs in AI have come the same way—by computers learning for themselves.
You've sketched scenarios for humanity's future that could go very well, or very badly. What's a really positive vision of the future?
Everything we love about civilization is the product of intelligence. So if we can amplify our intelligence with AI, we can make our civilization better and solve problems from curing disease to creating a sustainable source of energy. I feel the step from cave dwelling to today is a small step, technology-wise, compared with what we can have in the future with AI. We can imagine life spreading through much of our cosmos and lasting not for mere decades but for billions of years.
We've created AI, and yet it may surpass us. Has anything in nature ever created something that superseded it?
If we create machines with the ability to learn, and give them more memory and computing power than our own brains, then they have the potential to supersede us. This has never happened in the history of our planet, but we are at that threshold now. If we create something that's above us in intelligence, it too will be above that threshold. So it can create a smarter version of itself, which can in turn create a version of itself that is dramatically smarter than us. We underestimate the potential to create incredibly powerful technologies because we're used to inventing things ourselves. But if machines do the inventing, those technologies might come much sooner.
And leave us far behind.
Or maybe not. It depends on whether we plan well. The sooner we discuss it in earnest, the better our odds for a happy outcome.
With the advent of AI, you suggest no longer imagining ourselves as Homo sapiens, but as Homo sentiens. Why?
We have tended to derive our self-worth from being smarter than all the other animals on the planet. That's not going to work in the long term. But I think it's an outdated idea that we need to define ourselves as the best. If we think of ourselves as Homo sentiens, we're emphasizing that our ability to have wonderful experiences and feel love and joy is what's really valuable about us, not that we're smarter than everybody else. That's something we can cherish regardless of whether there are smarter machines out there.