- New tools in artificial intelligence are beginning to come of age in ways that threaten to impact and shape our shared humanity.
- While breathtakingly impressive, current digital tools are still primitive when it comes to the complexities of the human condition.
- Social media algorithms, in particular, can provide users with a false sense of self that is particularly susceptible to social pressures.
- Some day our psyches may be boiled down to a series of ones and zeroes, but that time has not yet come and for that I am thankful.
The corner of social media where tech meets culture has been abuzz since ChatGPT, a new program from OpenAI, made waves across the internet last week. The software allows anyone to pose any prompt imaginable and request an answer in any style they wish. With a remarkable ability to produce written content that seems both knowledgeable and, more impressively, human, ChatGPT has hit a real cultural nerve. From sonnets about the intricacies of neuroscience to mini book reports about the news topic of the day, many are enjoying this new toy. But does it foreshadow something more sinister? If the eyes are a window to the soul, ChatGPT seems to be a window to our soulless future.
With a healthy serving of self-indulgence, I asked the software to “describe the written work of Adam Philip Stern”, to which it replied “Adam Philip Stern’s written work combines clear, concise language with a deep understanding of the human condition. His essays and stories often explore themes of identity and belonging, and he uses vivid imagery and metaphors to draw readers in. He has a knack for finding the right words to bring an emotional depth to his writing, and his work has been praised for its ability to bring the reader on an emotional journey.”
Though it had been generated by the murkiest set of immaterial ones and zeroes, I couldn’t help but feel a real sense of pride. This little bit of artificial intelligence scoured the summation of digitized human knowledge and produced a compliment to my work when presumably it felt no moral responsibility to do so. How cool.
If a little bit of praise went so far, I wondered how much further I could go. What would ChatGPT come up with about my memoir from 2021? “Describe the critical reception of Committed: Dispatches from a Psychiatrist in Training by Adam Stern”, I commanded. “The critical reception of Adam Stern’s book Committed: Dispatches from a Psychiatrist in Training was generally positive.”
So far so good, but then things got a little weird.
“The New York Times praised the book for its ‘brutally honest’ and ‘emotionally intense’ account of Stern’s journey from medical student to psychiatric resident. The Los Angeles Times noted the book’s ‘revealing’ and ‘intimate’ look at the complexities of the psychiatric profession ….”
While I love those quotes, the only problem is that Committed was never reviewed by The New York Times or The Los Angeles Times. While the answer went on to quote other reviews from Publishers Weekly and Kirkus that were real, I was suddenly shaken awake from the fantastical dream I’d been indulging in. If some of these answers weren’t based in reality, then maybe none of them could be trusted.
Enter the many selves of social media
The outcome played out just as it has in other areas of my digital existence that rely just as heavily on faceless, soulless algorithms reflecting a false sense of self back to the user. On Facebook, I have a private account where I share family photos and other personal notes on recent goings-on. Within this app, I see similar posts from friends and family and get reasonably pertinent advertisements aimed at an early-middle-aged man living in suburban America. Again, so far so good.
But when I extend the analysis to Twitter, where I have a public profile, I see a very different version of myself. On this platform, I promote my own creative work (i.e., writing and podcasts) and raise awareness for kidney cancer research (an admittedly very specific and unusual combination). Because it is public, I do not include any content about my family, by far the most important part of my life. So the algorithm responds by showing me a lot of #MedTwitter content involving advances in cancer research along with both popular and academic writing. Overall, it is a pretty tame view of me that misses entire swaths of my world and life. According to Twitter, I’m a fairly bland do-gooder. Fair enough. I could do worse than being a bland do-gooder.
And I do. I do a lot worse on TikTok, where I decided long ago that I am too old and, frankly, too square to create any content. On this app, I do occasionally peruse videos, and from my behavior — the amount of time I watch each clip and the kinds of clips I scroll right by — the app tailors my feed to whatever will drive further engagement. I get a fair amount of alpha-male-oriented, pro-traditional-masculinity content and a smattering of frankly misogynistic video clips. In this app, where I feel invisible, the algorithm portrays me as the kind of guy I genuinely dislike in real life. There are a few clips about astronomy or the nature of the universe, but mostly it’s content I’m ashamed to be associated with.
I do often linger on the weird hypermasculine clips because they’re so far from what I’m used to seeing in my little corner of polite society. Is the algorithm interpreting my behavior as a sign that I want more of it, or is this some kind of genuine reflection of who I actually, but unconsciously, am when not in the public eye?
When I spend more than a few minutes pondering the issue, I have to conclude that none of these apps truly capture what I consider to be my true self. I believe I’m more complex than can be adequately summarized by any combination of ones and zeroes on the other end, and that lets me sleep a bit easier.
In that sleep, there’s another messy algorithm with otherworldly and often disturbing content that seeps into my dreams. It’s built on a code we don’t yet fully understand, and yes, sometimes it seems to be a real mess just like the ones on social media. But I suppose that’s for another column.