
We Are Not Immune to the Threat of Virtual Hatred

Algorithms, social media filter bubbles, and hate.

Key points

  • When confronted with worldviews we don’t agree with, online disinhibition can result in hate speech.
  • Social media filter bubbles are resilient accelerators of prejudice, reinforcing and amplifying extreme viewpoints.

As a criminologist researching hate crime, I’ve had to accept that being victimised myself is part of the job. Becoming a victim of hate crime in the 1990s is what steered me towards criminology as a profession. While I’ve managed to avoid physical forms of violence ever since, I continue to experience online hate in various forms. I am not alone: online abuse and hatred affect the lives of millions. Half of all 12- to 15-year-olds in the UK reported seeing online hate on social media in 2021 (Ofcom 2021). It seems no social media user is immune from the threat of virtual tongue-lashings.

Behavioural studies on how digital communications can shape behaviour date back to the 1980s. Online spaces can free us to engage in behaviours that would normally not see the light of day offline. Psychologists point to anonymity, deindividuation, the perceived physical distance between interlocutors, and the low chance of being caught and punished by police to explain why we sometimes feel less inhibited when expressing ourselves online. When confronted with worldviews we don’t agree with, this disinhibition can result in hate speech.

Online Filter Bubbles and Hate

More recent research suggests that the abundance of online hate speech may be driven in part by the algorithms that reinforce online filter bubbles. Partisan information sources are amplified in networks of like-minded social media users, where they go largely unchallenged because ranking algorithms filter out challenging posts. These filter bubbles are resilient accelerators of prejudice, reinforcing and amplifying extreme viewpoints on both sides of the spectrum.

Looking at over half a million tweets covering the issues of gun control, same-sex marriage, and climate change, New York University’s Social Perception and Evaluation Lab found that hateful posts related to these issues increased retweeting within filter bubbles, but not between them (Brady et al. 2017). The lack of retweeting between filter bubbles is facilitated by Twitter’s timeline algorithm, which prioritises content from the accounts that users most frequently engage with (via retweeting or liking). Because these behaviours are heavily biased towards accounts that share users’ views, the algorithm minimises exposure to challenging content. Filter bubbles therefore become further entrenched through a form of online confirmation bias, sustained by posts and reposts whose emotional content matches users’ views on deeply moral issues. It therefore seems likely that when such issues come to the fore, say during court cases, political votes or following a school shooting, occupants of filter bubbles (likely a significant number of us who don’t sit on the fence) hunker down and polarise the debate, sometimes with hate speech.
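To make the mechanism concrete, here is a minimal, hypothetical sketch of an engagement-weighted feed. It is not Twitter’s actual ranking code; the function, weights, and example data are all illustrative assumptions, used only to show how scoring posts by past engagement with each author can push cross-bubble content out of view.

```python
# A deliberately simplified, hypothetical sketch of an engagement-weighted
# timeline. NOT Twitter's real ranking system: all names, weights and example
# data are invented for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def rank_timeline(posts, engagement_counts, top_n=2):
    """Order posts by how often the user has previously engaged
    (retweeted/liked) with each author, and keep only the top few."""
    def score(post):
        return engagement_counts.get(post.author, 0)
    return sorted(posts, key=score, reverse=True)[:top_n]

# The user mostly engages with like-minded accounts...
engagement_counts = {"ally_1": 42, "ally_2": 37, "challenger_1": 1}

posts = [
    Post("challenger_1", "A view that challenges the user's position"),
    Post("ally_1", "A view the user already agrees with"),
    Post("ally_2", "Another agreeable view"),
]

# ...so the challenging post never reaches the short, visible top of the feed.
for p in rank_timeline(posts, engagement_counts):
    print(p.author, "->", p.text)
```

In this toy version, the feedback loop is plain: engaging mostly with allies raises their scores, which surfaces more of their posts, which invites yet more engagement with them.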

Unsurprisingly, Facebook’s algorithms display similar biases generated from partisan filter bubble content. In 2016 and 2017, the non-profit investigative journalism organisation ProPublica found that Facebook’s algorithmic advertising service was facilitating prejudiced targeting. The system allowed advertisers to direct their products and events to users who had expressed interest in topics such as "Jew Hater", "How to Burn Jews", and "History of Why Jews Ruin the World" (Angwin et al. 2017). As with Twitter’s timeline, Facebook’s advertising code is shaped by what users post, share, and like. In this instance, the algorithm pulled information from far-right and alt-right filter bubbles where Facebook users had listed these hateful topics as ‘interests’. Once notified, Facebook altered its advertising service but claimed it was not at fault, since the categories had been generated by algorithms rather than staff. Despite these changes, advertisers were still allowed, for a period of time, to block housing advertisements from being shown to African Americans, Latinxs and Asian Americans (Angwin and Parris 2016).
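A similarly hedged toy sketch shows how this can happen without anyone intending it. This is not Facebook’s real advertising system; the function, thresholds, and placeholder interest labels are invented, purely to illustrate how categories harvested from user activity can become targetable unless a human review step or blocklist intervenes.

```python
# A toy, hypothetical illustration of interest-based ad targeting.
# NOT Facebook's real system: names, thresholds and data are invented.

from collections import Counter

def build_ad_categories(user_interest_lists, min_audience=2, blocklist=()):
    """Aggregate self-declared 'interests' into targetable categories,
    keeping only those with a minimum audience size and not on a blocklist."""
    counts = Counter(
        interest
        for interests in user_interest_lists
        for interest in interests
    )
    return {
        interest: audience
        for interest, audience in counts.items()
        if audience >= min_audience and interest not in blocklist
    }

# Interests scraped from a cluster of like-minded (here, extremist) profiles.
profiles = [
    ["hiking", "hateful_topic_a"],
    ["hateful_topic_a", "hateful_topic_b"],
    ["hateful_topic_a", "photography"],
]

# Without review or a blocklist, the hateful category becomes targetable...
print(build_ad_categories(profiles))
# ...and only an explicit blocklist (or human oversight) removes it.
print(build_ad_categories(profiles, blocklist={"hateful_topic_a"}))
```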

Bursting the Bubble

Even if internet users are willing to listen to the opinions of those who don’t share their views, is such open-mindedness enough to pop the filter bubble? To test the resilience of filter bubbles to alternative viewpoints, the Polarization Lab at Duke University ran an experiment to see whether bubbles could be dismantled by forced exposure to challenging content, effectively counteracting the effect of Twitter’s timeline algorithm (Bail et al. 2018).

Republican- and Democrat-supporting Twitter users were paid to follow Twitter bots set up by the research team. For one month, these bots automatically posted 24 messages a day that challenged the participants’ political viewpoints. The team found that Republicans, and to a lesser extent Democrats, actually became more entrenched in their ideology when exposed to opposing views on Twitter. When exposed to alternative viewpoints online, we tend to use them to reinforce what we already believe: those of us with a tolerant mindset can become more liberal when challenged by hate speech, and those of us with an intolerant mindset can become more conservative when challenged by counter-hate speech.

The Duke study is depressing, but we need many more studies to confirm the findings before we give up hope. The effect may also be localised to Twitter. Given its reputation for heated exchanges, people who log on may be ‘primed’ to expect a more aggressive style of communication, making them less susceptible to interventions designed to soften worldviews. Running the same experiment in a local community centre would likely produce a different outcome, thanks to the expectation of civil discourse and the effect of positive contact, both of which are lacking on platforms like Twitter.

References

Ofcom, ‘Children and Parents: Media Use and Attitudes’, London: Ofcom, 2021.

W. J. Brady et al., ‘Emotion Shapes the Diffusion of Moralized Content in Social Networks’, Proceedings of the National Academy of Sciences 114 (2017), 7313–18.

C. A. Bail et al., ‘Exposure to Opposing Views on Social Media Can Increase Political Polarization’, Proceedings of the National Academy of Sciences 115 (2018), 9216–21.

J. Angwin et al., ‘Facebook Enabled Advertisers to Reach “Jew Haters”’, ProPublica, 14 September 2017.

J. Angwin and T. Parris, ‘Facebook Lets Advertisers Exclude Users by Race’, ProPublica, 28 October 2016.
