The Neocortex Wins the Darwin Award
Has our extraordinary mind created the possibility of its own demise?
Posted May 12, 2022 | Reviewed by Lybi Ma
- The development of AI has advanced to the point where its creators often can't predict or understand what it will do.
- Will AI machines "take over," as they do in science fiction? Maybe. We really don't know what will happen.
- Has the extraordinary human mind created another way, in addition to nuclear weapons, to destroy our species?
Every year, the Darwin Awards honor “Charles Darwin, the father of evolution” by commemorating “those who improve our gene pool by removing themselves from it in the most spectacular way possible” (Darwin Awards: Evolution in Action). For example, a Darwin Award was given to a young man in Australia who attempted to do a handstand on the railing of the Cave Garden Sinkhole, a 100-foot-deep cenote, lost his balance, and fell to his death. Another award went to a man who used a .22 bullet as a fuse in his car. The bullet fired and shot the award winner in the testicles, thereby impairing his reproductive success.
It may seem hyperbolic to give a Darwin Award to the human cerebral cortex, but the creation of Artificial Intelligence (AI) may put our entire species in danger. In The Age of AI and Our Human Future, Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher discuss both the wonders and the dangers of artificial intelligence. For example, AlphaZero, an AI chess program, has never been beaten—by grandmasters or by other computer programs. It has created chess moves and strategies that no human has ever conceived. When former world chess champion Garry Kasparov was introduced to AlphaZero, he said that “chess has been shaken to its roots by AlphaZero.” M.I.T. researchers created an AI to try to find a new antibiotic that could kill bacteria that had become resistant to other antibiotics. They fed the AI the descriptions of 61,000 molecules to examine and test, and only one satisfied the AI’s criteria. The researchers jokingly named the molecule “halicin” after the rogue computer “HAL” in the film 2001: A Space Odyssey.
According to the authors of The Age of AI, artificial intelligence is everywhere these days. Cars that drive themselves are controlled by AI, cryptocurrency is “mined” by AI, research that requires enormous amounts of data is done by AI, Facebook uses AI to extract and organize information about its users, and on and on. Artificial intelligence can do extraordinary things, but because it has, well, a mind of its own, the growing reliance on AI is cause for concern.
When intangible software acquires logical capabilities and, as a result, assumes social roles once considered exclusively human (paired with those never experienced by humans), we must ask ourselves: How will AI’s evolution affect human perception, cognition, and interaction? What will AI’s impact be on our culture, our concept of humanity, and, in the end, our history? (Kissinger, Schmidt, and Huttenlocher, 2021, p. 15).
The digital world has little patience with wisdom; its values are shaped by approbation, not introspection. It inherently challenges the Enlightenment proposition that reason is the most important element of consciousness. Nullifying restrictions that historically have been imposed on human conduct by distance, time, and language, the digital world proffers that connection, in and of itself, is meaningful (p. 52).
The first reported death of a pedestrian struck by a self-driving car occurred in 2018. What will your AI-guided automobile do when faced with a situation like this: On an icy road in winter, a pedestrian walks off the curb and into the street. Your car begins to skid. Does the AI controller drive off the road to miss the pedestrian? Or does it hit the pedestrian to protect the passenger (you)? Because AI operations are, in some ways, beyond human understanding, nobody really knows.
And what about AI-controlled weapons?
Because AIs are dynamic and emergent, even those powers creating or wielding an AI-designed or AI-operated weapon may not know exactly how powerful it is or develop a strategy—offensive or defensive—for something that perceives aspects of the environment that humans may not, or may not as quickly, and that can learn and change through processes that, in some cases, exceed the pace or range of human thought (p. 157).
Whoa! The authors don’t anticipate an AI takeover, à la 2001 or countless other science fiction stories of machine rebellion, but they warn that the humans creating AIs need to build limits into their creations, sort of like the First Law of Robotics in Asimov’s classic, I, Robot: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But what if Vladimir Putin is running the robot?
Perhaps the cortex has undone itself at last. Is our wonderful brain, evolved in hunter-gatherer societies to facilitate reproduction and survival, in the process of making itself redundant by creating machines that it can’t completely understand? Is the feature that made our species the masters of the planet about to encounter one unintended consequence too many?
Kissinger, Henry; Eric Schmidt; and Daniel Huttenlocher. 2021. The Age of AI and Our Human Future. New York: Little, Brown and Company.