Gary Marcus on Why AI Needs a Reboot
A contrarian’s view on how to cross the “AI Chasm”
Posted September 20, 2019 | Reviewed by Kaja Perina
Amid this global groundswell of enthusiasm, a few voices are going against popular opinion and calling for a reboot. Robust.AI CEO Gary Marcus and NYU computer science professor Ernest Davis sound a warning bell for AI in their book Rebooting AI, released in September 2019.
Gary Marcus is a modern-day polymath. He is a cognitive scientist, successful technology entrepreneur, prolific author, keynote speaker, professor emeritus at New York University (NYU), juggler, unicyclist, and erstwhile guitarist who literally wrote the book on it with his bestseller Guitar Zero: The Science of Becoming Musical at Any Age.
Marcus founded Robust.AI, based in Palo Alto, California, in June 2019 with industry luminaries Rodney Brooks, Mohamed R. Amer, Anthony Jules, and Henrik Christensen as co-founders.
This is the second company that Marcus has founded. Previously, Marcus was the CEO of Geometric Intelligence, a machine learning startup aimed at enabling computers to make inferences from relatively small data sets and sparse amounts of information. He founded Geometric Intelligence in 2014 with co-founders Douglas Bemis, Zoubin Ghahramani, and Ken Stanley. Uber acquired Geometric Intelligence in 2016 to create its artificial intelligence division—Uber AI Labs.
Artificial intelligence is at the epicenter of strategic planning for forward-thinking CEOs, economists, scientists, academics, government policymakers, and entrepreneurs. By 2022, global spending on cognitive and artificial intelligence systems is projected to reach 77.6 billion USD, according to an IDC (International Data Corporation) report released in March 2019. IDC estimates that worldwide spending on artificial intelligence systems will reach 35.8 billion USD in 2019, an increase of 44 percent over the amount spent the year prior. In the U.S. last year, venture capital investment in AI-related companies rose sharply to 9.3 billion USD, a 72 percent increase from 2017, according to the PwC / CB Insights MoneyTree Report for Q4 2018.
The trends contributing to this global boom include the increase of computing power with the massive parallel processing capabilities of GPUs for general purpose computing, the rise of cloud computing, the increase of availability of big data sets, and breakthroughs in machine learning, principally due to advances in deep learning techniques.
“I think that we are entering an era where we are trusting machines more and more, but the machines don’t yet deserve that trust—they haven’t earned that trust,” said Marcus. “I don’t think we can go backward and stop trusting machines. And so, we need to make machines that we can trust. And that entails really thinking about AI pretty differently than how people are doing it now.”
Rebooting AI is an informative analysis of the strengths and weaknesses of the current state-of-the-art in artificial intelligence. It is also entertaining. For example, Marcus and Davis’ advice in the event of a robot attack illustrates the serious limitations of current robot intelligence with irreverent wit.
“Worried about superintelligent robots rising up and attacking us?” wrote Marcus and Davis. “Close your doors, and for good measure, lock them.”
“Contemporary robots struggle greatly with doorknobs, sometimes even falling over as they try to open them,” wrote the authors. “Still worried? Paint your doorknob black, against a black background, which will greatly reduce the chance that the robot will even be able to see it.”
The authors posit that current AI solutions based on deep learning are fragile and narrow—the current paradigm needs rework in order to get on a path toward AI that is secure, truly intelligent, and reliable. In Rebooting AI, they point out the foibles of recent AI breakthrough achievements. Marcus and Davis believe there is a massive gap "between ambition and reality" that they call the "AI Chasm"—a fundamental gap marked by gullibility, illusory progress, and a lack of robustness.
“People are looking for a silver bullet,” said Marcus. “They’re looking for the one equation to rule them all. I don’t think it’s going to work that way. If you look at biological cognition, there are many different mechanisms that work in different principles that reinforce and counter-balance each other. It’s not one thing. There’s lots of knowledge, there are lots of mechanisms … I think it’s a mistake to say, 'When we find the right mechanism for unsupervised learning we’ll be done.' It’s just not like that.”
In a nod to social psychology, Marcus and Davis liken the gullibility gap to the fundamental attribution error (correspondence bias)—the tendency to over-emphasize dispositional or personality-based explanations for the behavior of others rather than situational or environmental forces. Marcus and Davis maintain that "we humans did not evolve to distinguish between humans and machines—which leaves us easily fooled."
“It’s mostly about statistics,” said Marcus. “You know the old saying about causation versus correlation. Deep learning doesn’t just do, let’s say, linear regression, which is a form of correlation. It’s doing something that is more sophisticated in the same vein. It’s about statistical analysis of this thing and this other thing that tend to occur together, which is not the same thing as understanding why things are related to one another.”
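Marcus's correlation-versus-causation point can be sketched in a few lines of Python. This is a toy example with made-up numbers, not anything from the book: two quantities driven by a hidden confounder correlate almost perfectly and fit a clean linear regression, yet intervening on one leaves the other unchanged—exactly the distinction a purely statistical model misses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: hot weather drives both ice-cream sales
# and sunburn counts. Neither causes the other.
temperature = rng.normal(25, 5, n)
ice_cream = 2.0 * temperature + rng.normal(0, 2, n)
sunburn = 1.5 * temperature + rng.normal(0, 2, n)

# A linear regression (a correlation-based model) finds a strong link...
r = np.corrcoef(ice_cream, sunburn)[0, 1]
slope, intercept = np.polyfit(ice_cream, sunburn, 1)
print(f"correlation: {r:.2f}")  # close to 1.0

# ...but intervening to double ice-cream sales while the weather stays
# the same does nothing to sunburn. The statistical model predicts a
# jump that never happens.
predicted = slope * (2 * ice_cream) + intercept
actual = sunburn  # unaffected by the intervention
print(f"predicted mean sunburn after intervention: {predicted.mean():.1f}")
print(f"actual mean sunburn after intervention:    {actual.mean():.1f}")
```

The fitted model is a perfectly good "first approximation" for prediction, which is Marcus's point about why the approach is seductive; it only fails when you act on the world rather than passively observe it.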
"I’m a fan of Judea Pearl’s recent book, The Book of Why, which is about causality,” said Marcus. “Causality is really important to genuine intelligence. We just haven’t done enough work on that yet because it’s seductive. It’s really seductive to use these statistical calculators to get a good first approximation to a lot of things. All you need is a first approximation … If you’re recommending ads, you don’t have to know exactly what ad a person likes. You just find something that is better than throwing darts to make a lot of money. And AI has kind of shifted towards these ‘How can I make a lot of money with this statistical approximation?” rather than ‘How can I actually understand the world?’ And if we want to get to elder-care robots, or driverless cars that we can actually trust, then we’re going to have to get over that hump. We’re going to have to stop relying on these first-order approximations and get to things that are causal and more sophisticated.”
“The negative thing that I’m trying to avoid and help the world to avoid is things like driverless cars that are not actually reliable and can kill people,” said Marcus. “They can be easily fooled. They don’t really understand the world around them.”
Despite his warnings of the pitfalls of narrow AI, overall Marcus is optimistic about the possibilities. “There are all these negative ways in which bad AI is either leading to fatalities or undermining democracy, et cetera,” said Marcus. “The positive part is, if we made smarter AI, we could be solving all kinds of medical and scientific problems. It could be [an] enormously useful set of techniques I think if we took some hints from the human cognitive sciences and made it a bit more sophisticated.”
Marcus and Davis point out that there is an illusory progress gap where people are “mistaking progress in AI on easy problems for progress on hard problems.” Although recent breakthroughs have occurred with AI defeating world-class human players in various types of classic strategy board games such as chess and Go, the authors point out that these are games of “perfect information” where “both players can see the entire board at any moment.” In the real world, not all the possibilities and variables can be known—it’s inherently complex and uncertain.
“The more intelligence you have, the more you can be robust,” said Marcus. “One of the techniques that allows one to be robust is the capacity to think about a problem in different ways. So, when you do math and do a so-called ‘sanity-check’ after you’ve done the basic arithmetic, well that’s because you’re intelligent and you have different techniques available to cross-correlate with each other.”
“Right now, people are thinking it’s cool to have a system that starts with seeing pixels on its input and has maybe joysticks on its output and there’s no structure internally—just do a lot of training, and what you need emerges,” said Marcus. “Our view is that stuff is never robust. That stuff works in particular cases, but then you change things a little bit and it breaks down. For example, the Atari game system from DeepMind is exactly what I describe. It’s pixels on the input, joystick motions on the output with no or very little internal structure. But after gathering a very large amount of data, it plays a very good game of ‘Breakout’ or ‘Space Invaders.’ But if you play ‘Breakout’ and move the paddle by three pixels, the whole thing falls apart.”
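The brittleness Marcus describes can be illustrated with a deliberately crude toy—this is not DeepMind's actual agent, just a sketch of the failure mode. A policy that memorizes raw observations answers perfectly on the situations it was trained on but has nothing to say when the paddle row shifts by three pixels, while a policy with a little relational structure (compare ball and paddle positions) shrugs off the shift.

```python
# Toy illustration of brittle pixel-level memorization vs. a policy
# with internal (relational) structure. Observations are hypothetical
# (ball_col, paddle_col, paddle_row) triples, not real Atari frames.

def memorized_policy(table, obs):
    """Pure lookup: returns the trained answer, or breaks on unseen input."""
    return table.get(obs, "falls apart")

def relational_policy(obs):
    """Structured rule: move the paddle toward the ball's column."""
    ball_col, paddle_col, _paddle_row = obs
    if ball_col > paddle_col:
        return "right"
    if ball_col < paddle_col:
        return "left"
    return "stay"

# "Training": every observation ever seen has the paddle on row 95.
table = {}
for ball_col in range(100):
    for paddle_col in range(100):
        obs = (ball_col, paddle_col, 95)
        table[obs] = relational_policy(obs)  # correct labels

seen = (40, 30, 95)
shifted = (40, 30, 92)  # same situation, paddle moved up three pixels

print(memorized_policy(table, seen))     # handles the training distribution
print(memorized_policy(table, shifted))  # unseen observation: no answer
print(relational_policy(shifted))        # the structured rule still works
```

Real deep networks interpolate rather than memorize exactly, so the failure is a degradation rather than a hard miss, but the underlying point Marcus makes is the same: without internal structure, small distribution shifts fall outside what the training data covered.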
Marcus plans to tackle the hard problem of AI robustness at Robust.AI by taking a hybrid approach rather than relying on “end-to-end deep learning.” Marcus draws a distinction between autonomy and automation. He views current robots as good for automating rote tasks in controlled environments. But, according to Marcus, today’s AI is not robust enough to enable robots to be autonomous in changing environments such as a home or a construction site.
“The thing I find most inspiring is nature itself,” said Marcus. “If you compare what our machines can do, with say, what little children can do or maybe animals can do, evolution is not a conscious system, but it has arrived at answers that are so much more powerful than what the world’s best engineers have come up with—that’s amazing, that’s inspiring.”
With an extensive background in evolutionary psychology, Marcus thinks it is important to understand not only what human brains do but also why. “Evolutionary psychology is the field that most asks the question why we do the things that we do,” said Marcus. “A lot of my recent work is about how we can best use an understanding of the human mind to make AI better in order to make the world a better place.”
Marcus believes we should look to the human mind for clues, but does not believe in replicating it. In his book Kluge, Marcus illustrates “all kinds of warts on human psychology—things that could have been better if designed from scratch.” According to Marcus, “Evolutionary happenstance has left us with a really lousy memory system. At the same time, human beings are amazing with the way in which we learn language in a few years. I think we should be asking, ‘How do humans do the techniques that they do? What can we borrow from that?’ It’s not that we want to make airplanes that fly exactly like birds. If they can do that, then we can do that. What’s involved? What can be borrowed? The ultimate AI systems are going to have memory capabilities of machines, not people. They’re going to do arithmetic like machines, not people. But we want them to be as flexible as people. And so, we can try to understand better how people manage to be flexible, and use that to make better machines.”
In the future, Marcus thinks that machines will be able to do “pretty much every cognitive thing that people can do, except experience emotions.”
Like Elon Musk, Marcus is in favor of universal basic income (UBI). “I think it’s inevitable that we will get there,” said Marcus. “I think that it’s just a question of if we will get there gradually, or through a lot of social unrest, and so forth.”
“I expect that machines will eventually be able to do world-class scientific reasoning, but we are nowhere near that right now,” said Marcus. “I don’t think it’s going to happen in the next five years. It might happen in the next 15 years. It certainly will happen in the next 500 years—there’s no mathematical reason why it can’t be done.”
Copyright © 2019 Cami Rosso All rights reserved.
Marcus, Gary, and Ernest Davis. Rebooting AI: Building Artificial Intelligence We Can Trust. Random House, 2019.