Navigating Algospeak on Social Media
The struggle for free expression in the age of algorithmic censorship.
Posted September 14, 2025 | Reviewed by Lybi Ma
Key points
- Social media’s language police scour platforms and punish posters for terms considered dangerous or subversive.
- Using workarounds like slang, emojis, and code words reinforces the idea that sexuality must be hidden.
- Changing language to evade algorithms limits open and honest therapeutic discussion on social media.
- Posts that trigger negative transference prompt viewers to report them rather than look within.
For many years now, I’ve relied heavily on numerous social-media outlets to share ideas and knowledge, as well as engage in dialogue with others. Between Instagram, Facebook, LinkedIn, TikTok, and others, I’ve garnered many followers—probably more than readers of my books.
But I never suspected that eventually I’d have to begin censoring my words or using cyphers, trying to outwit the non-human language police—algorithm robots. A few years ago, I began noticing that while some of my posts found thousands of readers in a matter of minutes, others only garnered a few over days’ time. At first, I thought, Well, some topics were just not as interesting as others. After a while, however, I began to suspect that it was not me writing about uninteresting topics, but rather the words I was using.
I began hearing from others who found their social media posts blocked and themselves put in “Facebook Jail” for a period of time, with no explanation as to why. I put on my Sherlock Holmes deerstalker cap and began doing some research.
That’s when I found out that we secretly were at war with robots—the language police. I am reminded of 1984, George Orwell’s classic dystopian novel, in which the state manages to control people’s thoughts by controlling the language, which he calls “Newspeak.” Maybe a better term for our age would be “algospeak.”
The psychological dangers of censorship
As a sex and relationship therapist who deals every day with clients’ repressed identities, shame, projections, and so on, one of the things that disturbs me about this algorithmic censorship is having to use workarounds like slang, emojis, and code words (“seggs,” “corn”), reinforcing the idea that sexuality must be hidden. This is anathema to healing one’s sexual traumas. My entire career has been devoted to helping people use correct language and get explicitly accurate information. Social media compromises some of this with its censorship.
And it's not just the robots doing the censoring. When social media users can flag content to be removed because it violates their personal, prejudiced norms, it inevitably favors privileged majority identities and experiences. This worsens systemic inequities, further compounding the psychological problems minorities face.
Social media isn’t therapy
My posts are not meant to be therapy in themselves, but rather to spark viewers’ questions or insights about their thoughts and behaviors. Some viewers have told me that they cannot afford to go to therapy because of either financial or time constraints, and that my videos educate them and offer something rather than nothing from a therapeutic point of view. I state clearly on my social media that this is not therapy but psychoeducation.
In too many cases, I find that a viewer has projected their own meaning onto what I’ve presented and even accused me of inciting immoral or illegal behavior. This can be shocking and can lead to my being censored further. In a therapy session, I would be able to skillfully help the client reflect on their overreaction, negative transference, and projections so they could benefit from what it brought up for them. On social media, I am unable to do this, and the content gets removed or blocked by the algorithm.
For content creators, censors and misplaced criticisms breed an “inner censor” voice that is always second-guessing what is “safe” to post, thus feeding anxiety or shame. For example, when I begin to film something to post, I often will have to start over again several times, fearing that it won’t pass a censor. I find myself looking up other words to use that won’t change my intent or restrict how I say it. I am an educator and want to be fully present and accurate, but this kind of self-censorship erodes my creativity as I’m sure it does others’.
Consequently, creators will tend to produce more generic, less authentic, and less vulnerable content. When the fear of being flagged outweighs the desire to share, people tend to play small or stop posting altogether.
‘Free therapeutic speech’
It’s common knowledge today that the social media police are constantly scouring platforms and punishing posters for terms they consider dangerous or subversive. But here’s my problem: I’m a sexual health therapist. I must try to be frank and specific when I use words. Throughout my therapeutic career, I have believed that my authenticity is a gift, one that encourages others to be true to themselves. But if I post something on social media that contains the words “sex” or “vagina,” I find I have to express myself like a kid. For example, I’ve had to replace those words with “seggs” or, worse, “va-jay-jay,” respectively. As a result, I’ve had readers post comments like, “You’re a PhD, for God’s sake! Why are you speaking like a child?” Embarrassing. I started as a kid not knowing the proper terminology, became a sex educator with the right terminology, and now am back to having to speak like a little boy.
God forbid that you are a woman using medically or anatomically correct words like “labia,” “period,” “vulva,” “uterus,” or “clitoris.” Or anyone who, instead of “dick pics,” has to use the absurd “Richard photography” or “pictures down there.” Or who, instead of “anal sex,” has to say “backdoor play.”
In one recent TikTok video I posted about “sides,” a word I coined for gay men who don’t engage in penetration, I had to say things like, “that means I don’t put the carrot in anyone else’s donut, and nobody puts the carrot in my donut. Neither am I interested in seeing anyone’s ‘starfish’ picture on Grindr.” You get the picture? I have found users who circumvent the robots by using “corn,” instead of “porn,” “grape,” instead of “rape,” or “S3x,” instead of “sex.” Is all this subterfuge really necessary, or even helpful?
These examples are funny, but how about the words we use to talk about serious topics such as suicide, a censored word that people have had to replace with the ludicrous “unaliving”? Like, “He felt so bad that he had to unalive himself.” What else is going to become the next f-word or n-word to be banned?
I admire, however, the cleverness of social media users in coming up with workaround words to trick the robots. I have my suspicions about the motives of the coders and social media companies who are “protecting” us from these “volatile words.” It’s not like by using them we’re spewing anti-government or antisemitic speech, inciting riots, or otherwise polluting the logosphere (yes, that’s a word).
Some of us are simply disturbed by the awkward attempts to sanitize speech, especially speech that therapists like me feel is essential to help people better understand and accept themselves for who they are.
At present, I can offer no solutions to these problems, only the hope that in the future we will again be able to openly and honestly discuss such important matters without having to dodge the algorithmic censors and critics that are gaining more control over the way we think and speak. Only then can viewers and readers on social media get the full therapeutic and psychoeducational value that therapists and educators like me have to offer.
