
What Is the Opposite of AI?

As we rely more and more on AI, it is worth considering what is lost in the process.

Key points

  • AI was designed to make tasks easier and more precise.
  • Overreliance on AI and blind acceptance of it can be intellectually limiting.
  • With AI, we lose something in the exploration of ideas and the productive uncertainty that journey provides.
Brain / Mind. Source: Elisa / Pixabay

From Hollywood to Wall Street and into our homes on Main Street, artificial intelligence (AI) seems to be on the tip of everyone’s tongue. Depending on whom you are speaking with, AI may either destroy or save humankind. To my college-student patients, it is a way to make their research projects less onerous. To some, however, it is a way to get out of thinking much at all.

My patient, “Tim,” told me in his therapy session that he was not particularly interested in Shakespeare and that ChatGPT could do much of the research for his paper. To his credit, Tim made sure that he stayed on the “right side of the honor code” by not resorting to plagiarism. Unfortunately, the assignment became more of a text-editing exercise than a research paper.

As a psychologist (and former English major), I have done my fair share of research. What may be lost through the use of AI is what I call meaningful meanderings. AI will give you exactly what you are looking for. It is, after all, trying to solve a very specific problem in a specific way, with varying degrees of complexity based on the model. It is designed not to go off course, and when it does, it often ends in an inaccurate “hallucination.” My concern is that the use of AI may discourage our young people from thinking beyond the confines of their given problem, and it may prevent them from identifying and solving their problems the hard way. What is the opposite of AI? Possibly patient and diligent research, free intellectual exploration, or maybe good old-fashioned uncertainty. Here are some considerations when it comes to AI.

The Benefit of Getting Lost

In what can be a black-and-white world, getting lost, intellectually speaking, can lead to wonderful conclusions. Humanity is filled with examples of important inventions discovered by accident. Penicillin, a life-saving medication, was found after Alexander Fleming returned from a holiday to find mold in a petri dish killing nearby bacteria. The microwave oven came to be after a candy bar melted near a magnetron. Beyond this, humans go down the wrong intellectual path all the time, sometimes with amazing results for art and for science. Much of this is lost when we rely too greatly on the specific capacities of AI.

Overreliance on Machine Learning and Thinking

Technology can be amazing and additive for nearly every field. It becomes concerning, however, when we see an overreliance on its output. I work with many young adults who trust AI almost without question. Whether the source is a political leader, a philosopher, or a machine, it is wiser to treat its information as a single data point among many. In this way, young people can form their own questions and seek their own answers. My worry is that this process is being diluted by the nearly immediate gratification that comes with AI.

Open Your Mind and Open Your World

As an educator and supervisor of psychiatry residents and trainees in other “helping fields,” I encourage students to think for themselves. I even encourage them not to accept everything I say as entirely accurate, but simply as one clinician’s opinion based on my particular years of experience. I encourage them to find their own voices and their own ways of doing things within the system they find themselves in. Yes, much of medicine can be black and white, but the ability to fill in the gray—when and where it is needed—is what separates a good doctor from a great one. I believe the same holds true for the rest of our society as well.
