
Combating Cyberbullying on TikTok

TikTok has introduced new features to curb cyberbullying.

Key points

  • Cyberbullying is a persistent problem in social media, and TikTok creators have reported harassment.
  • TikTok has implemented two new features to address bullying.
  • One tool lets video posters filter comments; another nudges commenters to reconsider when they attempt to post inappropriate content.
  • There is a need to teach users to stop cyberbullying before it occurs, such as through social media literacy training.
Photo by Solen Feyissa on Unsplash

Cyberbullying is a persistent problem on social media, and more companies have begun to recognize their social responsibility to curb cyberbullying and online harassment. TikTok, a video-sharing platform that has risen to prominence over the past couple of years, is no exception. Many people have positive experiences on TikTok, using the platform to express themselves, provide social support, engage with current political issues, and connect with physically distant friends and family. However, it is also common for TikTok videos to be filled with offensive and hostile comments.

TikTok’s widespread use, as well as the relative anonymity it affords users, may also contribute to this problem. On TikTok, people often engage with content from creators they aren’t directly connected to offline. Therefore, they may not feel that they’re held accountable for their behavior.

TikTok’s duet feature, which allows users to create video responses that are displayed alongside another user’s video, can also lend itself to harassment. Creators report getting “hate duets” with violent or sexual harassment video responses. This type of harassment can be very hard to report due to the limited reporting criteria available. Although video creators can choose to turn off the duet feature, many choose not to, as it also limits opportunities for positive interactions, which are needed for videos to gain popularity on the site.

In response to this harassment, TikTok has implemented two new in-app features to take a stand against cyberbullying. First, with its new Filter All Comments feature, TikTok is putting greater bullying-prevention power into the hands of its creators. This tool enables video posters to approve or delete comments, granting them greater curation power and agency over their content. TikTok is also placing responsibility in the hands of viewers with its new nudges: its software can now flag offensive comments before they are even posted. When a viewer attempts to submit a comment with offensive or inappropriate content, they receive a prompt asking them to reconsider posting. This multi-pronged approach to curbing cyberbullying is promising, as it puts prevention measures in place for both creators and viewers.

While TikTok’s new measures are a step in the right direction, there is certainly still room for improvement, as they are primarily downstream interventions for reducing cyberbullying. For example, the Filter All Comments feature places the onus on creators to screen every comment they receive for bullying. Rather than preventing online harassment from occurring altogether, comment filtering simply gives creators the option to keep other viewers from seeing it. Furthermore, the comment prompts rely on AI to accurately flag offensive language, and on the assumption that an automated message alone will discourage individuals from posting negative comments.

Source: Social Media TestDrive

As social media use and cyberbullying become more widespread, there is greater urgency to teach users to stop cyberbullying before they even type out a comment. Social media literacy training is one way to prevent cyberbullying before it occurs. Cornell University’s Social Media Lab, in collaboration with Common Sense, has developed a national program, Social Media TestDrive, which I co-founded. The program’s goal is to teach young social media users digital citizenship and online literacy skills through a lifelike social media simulation. Over 250,000 students have used the platform, learning and practicing how to be responsible digital citizens through simulated digital dilemmas and scenarios. It is our hope that Social Media TestDrive will teach the next generation of social media users proper digital citizenship.

This post was co-authored by Amanda Wong, Aparajita Bhandari, and Dr. Natalie Bazarova.
