Verified by Psychology Today

Artificial Intelligence

Is Generative AI a Pandora's Box?

Exploring the promise and challenges of artificial intelligence.

Key points

  • Technology is a mixed blessing that brings moral and social challenges.
  • What questions should people be asking themselves to achieve beneficial outcomes?
  • People can adapt their mindset to rapid technological evolution and focus on hope rather than fear.

Why do Greek myths still seem relevant after thousands of years of history? A cynic might say that the norms of human society simply haven’t evolved very much. A kinder philosopher might suggest that these myths persist because they reveal essential characteristics of who we are and how we behave. What does this say about our strengths and weaknesses?

Pandora’s box was said to contain all of the world’s miseries, everything bad that could possibly happen. Before the box was opened, we lived in a Garden of Eden, to mix metaphors, where all was only good and simple. Afterward, life became more complex and challenging. But also in the box was hope, a tiny light that could inspire us to have the courage and creativity to persist in striving for a better future. How often do we need to be reminded of this lesson?

Why do we develop tools?

For the most part, technologists are problem-solvers. More importantly, they actively seek out problems to work on because that is their idea of fun. Entrepreneurs, too, seek out problems, seeing challenges as opportunities for personal and financial growth.

We want to apply our knowledge of science and technology to make our lives easier and more enjoyable. Isn’t the essence of this effort basically a drive toward greater productivity and enjoyment? What followed the implementation of assembly-line manufacturing of Ford’s Model T? Automobiles were demonized, and early laws were passed to protect horses as well as people.

Eventually, after several decades of innovation, the advantages were generally accepted to outweigh the perceived risks. The evolution of safety regulations regarding the construction of automobiles is still ongoing. Laws governing drivers and accident liability remain a work in progress.

According to SAS, one of the leading companies offering business intelligence and decision support software, the goal of artificial intelligence (AI) is to provide software that can reason on input and explain on output. The input is data; the output is a description of the connections among the data. From this kind of capability, we can mistakenly infer that the software has “intelligence” in the sense of human understanding of the content.

Generative AI (GenAI) takes this assumption to the next level because its interface uses natural language and even voice. This is significant because we have been socialized to learn from and believe in words and speakers. Because GenAI is built on large language models (LLMs), the technical term for this particular kind of AI, anyone who can read and write can access the power of these tools simply by asking a question in common, everyday, colloquial English. A similar shift occurred when the World Wide Web was invented as an interface to Internet resources that previously could be accessed only via instructions written in technical software languages.

ChatGPT is a GenAI that has already been fed training sets containing all the data that is “on the Internet.” Does that mean it is an all-knowing fount of knowledge and wisdom, like the Delphic Oracle? What does the software program actually know, and how is it programmed to work?
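One way to demystify that question: at its core, a language model predicts the next word from statistical patterns in its training text; it does not look up facts. The sketch below is a toy bigram model in Python, purely illustrative; production LLMs use neural networks trained at vastly larger scale, but the underlying idea of generating text from learned patterns is the same. The sample text and function names here are invented for the illustration.

```python
import random
from collections import defaultdict

# A tiny stand-in for "all the data on the Internet."
text = "the box held hope and the box held misery and hope endured".split()

# Count which word follows which: this is the "training."
follows = defaultdict(list)
for prev, nxt in zip(text, text[1:]):
    follows[prev].append(nxt)

def generate(start, length=6):
    """Produce text by repeatedly sampling a word that followed the
    previous one in training -- pattern continuation, not knowledge."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the box held hope and the box"
```

Nothing in this process checks whether the output is true; the model only continues patterns it has seen, which is the point behind calling every answer a “hallucination.”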

There are plenty of competitors in this hotly contested space, including challengers from China as well as leading domestic tech giants and brash startups. Google offers Bard, its own version of a GenAI that resembles ChatGPT but has somewhat different strengths and weaknesses as a technology. Meta seeks to leverage the data from its massive following on social media platforms. In this game, whoever has the most data has the greatest advantage in developing and refining the technology.

As a technology, how different is GenAI from other “tech breakthroughs”?

Artificial intelligence, a family of techniques including machine learning, computer vision, natural language processing, and deep learning, has actually been around for more than 70 years, in fiction and in reality. Today, AI is pervasive in virtually every industry and all aspects of our lives, from smartphones to self-driving cars to healthcare, banking, and investment advisory services. The furious competition for our attention among Apple, Google, Meta (Facebook), Amazon, and other tech giants is evidence that AI is embedded in every interaction we have with these companies and their products and services.

The buzz about GenAI results from the confluence of several massive social and technological trends. The power of social media has been increasing for a few decades now, especially with the integration of entertainment and e-commerce available 24/7/365. We even call this the Attention Economy, as we exchange our free will and our personal information for “no-cost entertainment or convenience.” Analysts and pundits in all industries are glorifying GenAI as a miraculous tool for productivity, and, at the same time, others are demonizing this technology as an existential threat to human workers. What does this say about how we understand ourselves and how we develop tools?

Who’s asking the questions? Data science and computing have exploded as areas of academic interest, to the point where the University of California, Berkeley has committed to creating its first new college in more than 50 years: the College of Computing, Data Science, and Society, a reflection of the advances in technology and their impact on society. When academics focus on a topic, it has deeper roots than just hype. What academics love to do is ask questions endlessly, from different perspectives.

Why are thousands of prominent researchers and business people raising alarms about this technology and demanding some kind of government regulation, as well as greater accountability within the tech industry? Some are even suggesting that research in this area should be halted or at least slowed down. We have seen this scenario in other areas of technology. Nuclear power is a notable example, where concerns about possible misuse or abuse of the technology may have caused decades of delay in mitigating environmental crises. Similarly, research into human stem cells has huge potential for advancing knowledge of medical conditions and possible therapies, but it too has been underfunded for decades. The cost in human lives may be incalculable.

In each situation, there are moral, societal, legal, and political ramifications that absolutely must be discussed and sorted through by all segments of society. Every new technology, every tool brings promise, often massive, of a brighter future. At the same time, those same tools can be acquired and used by people who may be unscrupulous or even criminal, to the point of causing death and destruction. The same old story about how power corrupts applies in this context.

According to Bill Franks, Director of the Center for Data Science and Analytics at Kennesaw State University, “ChatGPT… is literally making each answer up based on the patterns in its training data. While we call the things it gets wrong hallucinations, in reality, every answer is a hallucination.”

Where’s the hope in using GenAI for good to help humans do more of what only humans can do?

The real challenge presented by AI is that we humans have to evolve our thinking beyond what we are accustomed to. How can this tool, e.g., ChatGPT, be applied to achieve beneficial outcomes in practical use cases?

What kinds of questions can we be asking? What new kinds of problems can we pose to ourselves that will stretch our abilities and our vision of what we can be? What makes the human experience unique for each individual and not replicable? Our cognitive abilities clearly distinguish us from other animals, but what about our emotional and spiritual natures that allow us to enhance our problem-solving abilities through collaborative and supportive relationships?

Instead of objectifying our fear of the unknown by blaming technology we don’t understand, doesn’t it make more sense to learn about ourselves and how we want to design tools that support us in being even more creative and inventive, in ways we may not have imagined before? Our creative energy is our most precious, uniquely human asset, so we must protect and leverage it as much as we can. This is the ultimate source of hope for each of us and for humanity.

More from Po Chi Wu Ph.D.