Why AI Is More About Humans Than Technology

... and why we should put our egos aside.

Key points

  • Artificial intelligence has taken on an increasingly important place in people's lives.
  • Embracing AI's full potential requires self-awareness, intellectual humility, and breaking down silos.
  • In the end, the significance of AI will be more about how humans change than about the technology itself.

While AI takes on an increasingly important place in our daily lives, resistance to it and misconceptions about its capabilities persist. These are largely inherited from media coverage that sensationally conflates intelligence with domination, which leads AI and algorithms to be judged more harshly than humans when they make a mistake. Nevertheless, while it would be inappropriate to deny that some uses of AI are dangerous in practice, it would be even more naive to condemn it wholesale because of overgeneralization from poor past results, misunderstanding of how it works, or the dishonesty and opacity of some vendors.

Looking at AI solely through the lens of a threat of dehumanization is therefore a mistake that hinders our ability to see it as a true catalyst for human potential and collaboration. Technology itself doesn’t change society: It is its resignification, and its acceptance by humans, that bring real change and impact. This nuance in our understanding of technology highlights the essential role people play in building a future for AI in which the benefits prevail.

Intellectual humility

The tendency to prefer recommendations made by another person over those made by a recommender system, even when the latter is more accurate, can be explained by an illusion of understanding human decision-making better than algorithmic decision-making. It arises because people more easily project their intuitive understanding of a decision-making process onto another person than onto an algorithm. However, when asked to explain another person's decision, people’s sense of understanding diminishes. The same is true of our understanding of algorithms: If we think we understand how they work, we are displaying a form of overconfidence stemming from the illusion of explanatory depth (IOED). Other studies, at MIT, also show the importance of cognitive style in acceptance of or aversion to AI: System 2 thinking, the analytical and logical mode our brain uses when solving complex problems, has been shown to be positively related to algorithmic appreciation.

These conclusions don’t suggest that algorithms are always right. Rather, they invite everyone to take stock of their own biases, egos, and credulity, to develop their intellectual humility, and to rethink their convictions. Intellectual emancipation becomes a necessity if everyone is to forge an enlightened understanding of algorithms’ potential and benefits, beyond the obscurantism and misinformation spread by certain lobbies. At every level of society, it therefore appears essential to promote scientific openness, develop critical thinking, and welcome opportunities for our assumptions to be challenged.

Collective intelligence

AI also urges us to strengthen synergies between people and to break down the silos that exist in companies and in education. While building AI solutions was long perceived as something reserved for "technical" professionals, it is becoming increasingly obvious that greater cross-disciplinarity and multidisciplinary skills are needed to make everyone feel connected to AI and to develop more efficient and impactful systems.

Today, it is far too common, and regrettable, to see stakeholders misaligned about what AI can really deliver, or to see products developed to automate specific business processes without consulting the experts in that field. Companies therefore have every interest in moving away from a "product-first" vision toward a "service" vision that treats end users and other relevant experts as essential co-designers. Improving harmony, complementarity, and interdependence between people and fields of study, through a more systemic approach to AI, its development, and its capabilities, is therefore a precondition for reaping its benefits and reducing algorithmic bias. Some ecosystems are already setting up this type of multidisciplinary collaboration.

In the end, AI is more about humans and our ability to put aside our egos, update our knowledge, and improve our ways of working together for the benefit of all. Focusing only on the potential threats AI could bring, however legitimate, makes us miss an incredible and essential opportunity to evolve individually and collectively.

Debates should therefore no longer dwell on a showdown between humans and AI. Whether we like it or not, AI already outperforms us on many tasks, and its capabilities are growing exponentially. A more stimulating and challenging question is what role you want to play in all this and in building tomorrow’s world. Would you rather remain a spectator and put up with choices made by others, or take an active part in shaping the world you want?
