Will ChatGPT Erode Our Ability to Tell Human from Machine?
Does a conversant AI challenge what we think of as an essential human trait?
Posted February 5, 2023 | Reviewed by Abigail Fagan
Key points
- ChatGPT is a revolutionary new AI-driven online tool.
- Its capabilities range from writing novel term papers to creating songs to quickly making complex calculations.
- There are new challenges in determining what is human-made vs. computer-generated, especially in creative and written work.
- An AI with the capacity to have genuine, empathetic, and engaging conversations is still a long way off.
There has been a lot of buzz recently about the new AI-driven conversational model ChatGPT, which convincingly carries on a back-and-forth conversation. Even more striking, its ability to create content such as pretty decent song lyrics and surprisingly coherent term papers has already caused well-founded consternation among teachers, academics, and creative types.
For those unfamiliar with ChatGPT, it is basically an interactive online tool powered by deep learning on enormous amounts of training data. When asked to describe itself, ChatGPT characterized itself as “a program that can answer questions or have conversations by predicting the most likely response based on its training.”
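To make that "predicting the most likely response" idea a little more concrete, here is a toy sketch in Python of next-word prediction. The word list and probabilities below are invented purely for illustration; ChatGPT's actual model is vastly larger and more sophisticated, but the basic idea of scoring candidate continuations and favoring the likeliest one is the same.

```python
# Toy illustration of next-word prediction (not ChatGPT's actual code).
# A language model assigns probabilities to possible next words and,
# at its simplest, picks the most likely one.

next_word_probs = {   # hypothetical probabilities "learned" from training text
    "mat": 0.62,
    "roof": 0.21,
    "keyboard": 0.15,
    "moon": 0.02,
}

def most_likely_next_word(probs: dict[str, float]) -> str:
    """Return the word with the highest predicted probability."""
    return max(probs, key=probs.get)

prompt = "The cat sat on the"
print(prompt, most_likely_next_word(next_word_probs))
# -> The cat sat on the mat
```

Real systems repeat this kind of step word after word, sampling from the probabilities rather than always taking the single top choice, which is part of why their responses feel varied and conversational.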
With this very human-like conversational interactivity, this revolutionary chatbot might lead us to wonder whether sentient use of language, a defining characteristic of our species, will now also be shared by the smart machines of our future. The thought is simultaneously terrifying and exciting.
The next generation of human-machine interactions
We have all experienced automated customer service assistants and, often, such early AI chatbots have not been very impressive, limited by the prompts to which they were trained to respond. But what if, instead of giving the automated assistant your account number and birthdate for the third time to no avail before you eventually decide to just hang up, that chatbot could actually respond intelligently to your query? And then shift and refine its answers as it learned more about your specific context and needs, much like a human?
This is exactly what the next generation of chatbots, like the recently unveiled ChatGPT, are able to do. Trained on vast quantities of data and refined with feedback from human AI trainers who helped provide nuance, these new chatbots hold conversations that are much more human-like, and they will revolutionize the way we learn, work, and create, which is exactly what has many people worried.
Boon or big problem?
Using deep learning algorithms, ChatGPT does not just spit out information in a Google-esque sort of way but can learn from and synthesize previous knowledge to create new content and use what it has learned to come up with ideas and conclusions, just like we good old humans do.
This means college professors, high school teachers and employers may no longer be able to rely on tried-and-true tools (like plagiarism checkers) to catch those who are not doing their own work. Now our computers can actually write the paper or produce work reports for us rather than simply spellcheck and edit the words that come from our brains.
That doesn’t mean, though, that its output is always correct. It makes factual errors fairly often, something that should give pause to those who might rely too heavily on its analysis and writing skills. As well, bias inherent in the training data becomes part of its output; for instance, it may assign certain groups particular traits more than others, a likely result of data scraped from the internet, even with filters in place.
Aside from these issues, ChatGPT can create songs, poems, literary treatises, and philosophical musings to rival most of us. It can also craft emails to common queries and do almost instantaneous calculations, such as estimating mortgage payments at different interest rates or solving tricky homework problems.
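As a concrete example of the kind of calculation mentioned above, here is a short Python sketch of the standard fixed-rate amortization formula, compared across a few interest rates. The loan amount, term, and rates are illustrative assumptions, not figures from this article.

```python
# Monthly payment on a fixed-rate loan, using the standard amortization formula:
#   payment = P * r * (1 + r)^n / ((1 + r)^n - 1)
# where P is the principal, r the monthly rate, and n the number of payments.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Compute the fixed monthly payment for a fully amortizing loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    if r == 0:
        return principal / n      # edge case: an interest-free loan
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Compare a hypothetical $300,000, 30-year mortgage at several annual rates.
for rate in (0.04, 0.05, 0.06, 0.07):
    print(f"{rate:.0%}: ${monthly_payment(300_000, rate, 30):,.2f}/month")
```

This is the sort of routine arithmetic a chatbot can rattle off in seconds, though, as noted above, its answers still deserve a sanity check.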
This is certainly going to present a serious challenge for those who value humans' unique artistic creativity, and it will probably lead to job loss as more tasks can be accomplished by sophisticated AI generation. Yet, we are far from being replaced by humanoid robots that can carry on conversations that rival those we have every day when talking to our friends, our loved ones and our colleagues.
Human vs. machine
ChatGPT, for instance, does not have the capacity for emotion or empathy that humans do. It can analyze text for tone and sentiment, but it cannot understand the experiences and contexts that created that sentiment, or how a lived experience might have led to it. It is limited to its training data, vast as it may be, to draw from in crafting answers.

In contrast, humans have beliefs as well as a store of memories and personal experiences that inform how we respond and communicate, and it is these unique features that define real and substantive conversations. This is why shifting to online formats in classes and in workplaces might help us work and study comfortably from our home offices wearing our bunny slippers, but it has also led to a loss of that intangible social aspect that helps people connect and forge relationships. Likewise, talking with a computer just doesn’t satisfy all our human needs.
For instance, when talking to another human, we might shift subconsciously to using more fully articulated -ing endings on our progressive participles (talking instead of talkin’) and fewer contractions (do not instead of don’t) when we get angry or excited. Or we might change our voice pitch as we come to the end of a sentence when we aren’t sure our co-conversationalist is really listening. These types of conversational features require intentionality and conversational goals, two things that ChatGPT is lacking but that humans have in spades.
Certainly, as this type of AI becomes increasingly sophisticated and widespread in our daily lives, it brings with it many ethical, educational and professional dilemmas. But the capacity to have genuine, empathetic, and engaging conversations remains the domain of humans.
WALL-E is still a long way off
There is no doubt that our future will involve a lot more communication with chatbots and almost undetectably non-human virtual assistants. This is in many ways a boon to those of us who could use an assist with mundane tasks such as responding to repetitive email requests, answering commonly asked questions, or sending out personalized information on products or activities. But people do much more than simply learn the structures of language and how to combine them in linguistically meaningful ways; they also learn to recognize the subtle social meanings and metamessages that we convey when we talk to one another.
So, while AI may be getting more and more human-like, it is still a long way off from understanding what being a human is like.
References
OpenAI. “ChatGPT.” Accessed at openai.com/models/chatgpt/