The Human Bias in the AI Machine
How artificial intelligence is subject to cognitive bias.
Posted Feb 06, 2018
Artificial intelligence (AI) can produce positive advancements as well as unintended negative consequences. A key area that warrants further research is the impact of human cognitive bias on AI. Harvard and MIT Professor George Church, Singularity University's Neil Jacobstein, MIT Physicist Max Tegmark, Behavioral Economics and Data Scientist Colin W.P. Lewis, Ph.D., Oxford Professor of Philosophy Nick Bostrom, SpaceX and Tesla Motors Founder Elon Musk, Apple Co-founder Steve Wozniak, and Cambridge Physicist Stephen Hawking are among the over 8,000 people who have signed an open letter on artificial intelligence that calls for research on how to reap the benefits of AI while avoiding the pitfalls.
"Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst." - Stephen Hawking, Physicist
Like the human brain, artificial intelligence is subject to cognitive bias. Human cognitive biases are heuristics: mental shortcuts that skew decision-making and reasoning, resulting in reasoning errors. Examples of cognitive biases include stereotyping, the bandwagon effect, confirmation bias, priming, selective perception, the gambler's fallacy, and observational selection bias. The catalog of cognitive biases keeps growing as researchers identify new ones.
Human cognitive bias influences AI through data, algorithms, and interaction. Machine learning, a subset of AI, is the ability of computers to learn without being explicitly programmed. AI's learning is shaped by data, algorithms, and experience gained through interactions and iterations. The size, structure, collection methodology, and sources of the data all affect machine learning, which depends on the quality of its training data sets. Just as with humans, the more objective the data and the larger the data set, the lower the possibility of distortion.
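The effect of collection methodology on what a model learns can be illustrated with a minimal sketch. The scenario, labels, and numbers below are invented for illustration: a statistic estimated from a skewed sample diverges from the ground truth a model should have learned.

```python
from collections import Counter

# Hypothetical toy population: 1,000 loan records, half repaid and half
# defaulted, evenly split across two branches (all values invented).
population = (
    [("north", "repaid")] * 250 + [("north", "defaulted")] * 250 +
    [("south", "repaid")] * 250 + [("south", "defaulted")] * 250
)

# A biased collection method: records are gathered only from the "north"
# branch, and (because of filing order) defaults are under-represented.
biased_sample = [r for r in population if r[0] == "north"][:300]

def repayment_rate(records):
    """Fraction of records labeled 'repaid' -- the statistic a model learns."""
    counts = Counter(label for _, label in records)
    return counts["repaid"] / len(records)

print(repayment_rate(population))     # ground truth: 0.5
print(repayment_rate(biased_sample))  # skewed estimate: ~0.83
```

A model trained on the biased sample would conclude that most applicants repay, not because the world works that way but because the collection method did.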
The common underlying factor in cognitive biases is inclination. In AI, inclination is introduced through the weights assigned to the parameters and nodes of a neural network, a computer system modeled on the human brain. These weights may inadvertently bias the machine learning algorithm from inception via data input, through supervised training, and through manual adjustments made along the way. The absence or inclusion of indicators, together with the inherent cognitive biases of the human programmer, can produce machine learning bias.
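How weight assignment can tilt a decision can be sketched with a single artificial neuron. This is not any specific framework's API; the feature names, weights, and labels are hypothetical, chosen only to show that changing the weights, not the input, flips the outcome.

```python
# Minimal sketch: one neuron whose hand-set weights decide which
# input features dominate its output.
def neuron(features, weights, bias=0.0):
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 if score > 0 else 0  # 1 = "approve", 0 = "reject" (illustrative)

# Hypothetical applicant features: [income_score, zip_code_score]
applicant = [0.9, 0.2]

# With even weights, the strong income score carries the decision.
print(neuron(applicant, [1.0, 1.0], bias=-1.0))  # -> 1 (approve)

# Overweighting the zip-code feature -- a proxy a programmer might
# include without noticing its bias -- flips the same applicant.
print(neuron(applicant, [0.2, 4.0], bias=-1.0))  # -> 0 (reject)
```

The same applicant receives opposite decisions depending solely on the weights a human chose, which is the mechanism of bias the paragraph above describes.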
The artificial intelligence revolution (AIR) is well underway. Artificial intelligence is currently a tool used to assist humans and is being deployed as point solutions across a wide variety of functions, such as personal digital assistants, email filtering, search, fraud prevention, engineering, marketing models, digital distribution, voice recognition, facial recognition, content classification, natural language, video production, news generation, play and game-play analytics, customer service, financial reporting, marketing optimization, energy cost management, pricing, inventory, enterprise applications, and more. Some of the greatest thinkers of the 21st century have warned of the dangers of AI left unchecked. The increasing pervasiveness of AI necessitates minimizing human cognitive bias in the machine. The future of humanity may very well depend on it.
Copyright © 2018 Cami Rosso All rights reserved.
1. "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter." Future of Life Institute. Retrieved February 2, 2018.
2. Rosso, Cami. "The Conundrum of Machine Learning and Cognitive Biases." Medium. July 14, 2015.
3. Rosso, Cami. "Why Artificial Intelligence is the Next Revolution – AI Will Change Almost Every Aspect of Our Daily Lives." Medium. March 16, 2016.
4. Rosso, Cami. "Why AI is Trending Now." Medium. February 21, 2017.