Verified by Psychology Today

Sarah Langan

On a Social Dilemma: The Black Box

Part I: People as the product.

The Netflix documentary The Social Dilemma argues that people are social media’s product, delivered to advertisers; the platforms’ mission is to keep us in their thrall so they can track our data for as long as possible.

This isn’t a new business model. Broadcast television lived and died by its viewership, which it translated into advertising dollars. It’s fun to be in media’s thrall. I marathon-view Buffy the Vampire Slayer at least once a year. That’s how Hulu knows I’m middle-aged. What’s unique to this model is the specificity of social media: the way it customizes its advertising to individual users and, in the cases of Facebook and YouTube, customizes our streams. What’s also unique is its utter lack of federal regulation.

Source: Netflix / The Social Dilemma / Fair Use

The Social Dilemma reminded me of something I’d chosen to forget: There’s an algorithm. Facebook follows that algorithm to maximize views. Because bad news tends to get more eyeballs than good, it’s in Facebook’s DNA to spread false information and instigate arguments that wouldn’t ordinarily exist. Much like Brundlefly, there is no morality in an algorithm.

Another thing the documentary addresses that bears repeating: Users are the product.

The scope of The Social Dilemma is broad, encompassing all social networking platforms as well as e-mail. I’m interested more specifically in Facebook, which to me has been the greatest agent of harm. Specifically, it has hijacked our neurology, and this hijacking has affected our daily human interactions.

Facebook has become inescapable. My older daughter’s school regularly posts updates via Facebook, and when she has problems logging into her classes, Facebook is where I crowdsource the answers. My younger daughter’s school updates information via Instagram and WhatsApp, both owned by Facebook. Facebook is where I have the most followers. As a writer with a product to sell, it’s the easiest way to communicate with my readers.

Some number salad: Roughly 51 percent of the global population (3.96 billion people) are on social media. Facebook remains the leading platform, with 2.6 billion monthly active users (Hootsuite and We Are Social). The average Facebook user spends an hour per day on the platform (SimilarWeb, via Review42). Women tend to post their opinions less often on social media because when they do, they receive a disproportionate number of negative comments. So, instead, they post about kids and cats. Men tend to troll more often (The Atlantic, Social Media Today, Gender Inequality).

The tradeoff for this free platform is that Facebook gathers all our data and follows us across the internet. It packages and sells this data to advertisers. But that’s not all it does. To keep us engaged so that it can continue tracking us, its algorithm shows us the things we’re most likely to react to, and because we mammals are hard-wired to respond to threat, we’re accordingly shown what the algorithm predicts will threaten, shock, or excite us most. Once it has that data, the algorithm reinforces those threats with more of the same. It has discovered the specific streams that keep us engaged.
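The feedback loop described above can be made concrete with a toy sketch. This is an illustration of the general technique of engagement-ranked feeds, not Facebook's actual code; the scoring formula, field names, and the doubling boost for previously engaged topics are all invented for the example.

```python
# Toy sketch of an engagement-maximizing feed ranker (hypothetical, not
# Facebook's real algorithm): score each post by how alarming it is,
# boost topics the user has reacted to before, and surface the
# highest-scoring posts first -- reinforcing what already hooked them.

def rank_feed(posts, user_history):
    """posts: list of dicts with a 'topic' and an 'outrage' score (0-1).
    user_history: set of topics the user has previously engaged with."""
    def score(post):
        base = post["outrage"]             # alarming content draws eyeballs
        if post["topic"] in user_history:  # reinforce past reactions
            base *= 2                      # (invented boost factor)
        return base
    return sorted(posts, key=score, reverse=True)

feed = rank_feed(
    [{"topic": "cats", "outrage": 0.1},
     {"topic": "politics", "outrage": 0.9},
     {"topic": "health scare", "outrage": 0.7}],
    user_history={"health scare"},
)
# The previously engaged "health scare" topic (0.7 * 2 = 1.4) now
# outranks even the more alarming "politics" post (0.9).
```

The point of the sketch: nothing in the ranking function weighs truth or harm, only predicted reaction, which is the amorality the documentary describes.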

The fake news articles are bad. Once we’ve seen them, we can’t forget those crazy headlines. We can’t help but wonder if they’re real. Even as our logical minds doubt them, they linger in our hindbrains, where threatening things get coded for long-term storage. Even as we skim and avert our eyes, we become conditioned and increasingly receptive to false, alarming information.

But for me, what’s far worse than the fake news is the way Facebook foments arguments between friends and strangers. We might post something and discover, a day later, that someone reacted negatively to it in the middle of the night while we were sleeping. Sneak attack. Because this happens in a public forum, we’re hard-wired to respond to it.

But as we attempt to respond, we face a conundrum: We can’t read the room. We can’t see the room. We don’t know who’s in the room. The room is a black box. We can’t determine how serious this comment is, or whether a functioning person posted it. We don’t know who in our audience agrees with this post, or whether they’ve noticed at all. What’s worse, because some clown we may never have met in real life has posted a comment in our stream, the algorithm will now show our posts to the clown’s friends, who are not our friends. Now, these new clowns will also troll. I’ve been there. It’s a sh*t show.

Because we’re told that this small section of the internet is our brand, we feel obliged to protect it. We feel obliged to protect anyone attacked on our turf. But we’re lost in the black box. We can’t assess the level of threat. Our hindbrains default to everything matters.

In our heightened state of alarm, we engage. It doesn’t work out well. The memory of this interaction doesn’t fade like it would in real life. It sinks into our hindbrains, the place where fear lives. We wrap it in a box and name it bad. We categorize the people we argued with bad. We don’t reopen this box because it’s upsetting to do so. We’ve become polarized.

Facebook’s party line on this is:

  1. As users, we’re at fault. We ought to be more discriminating.
  2. Facebook has recently skewed its algorithm to emphasize connections to family and friends over politics.
  3. Facebook can’t possibly fact check all the false articles posted. As a platform, that’s not its job.
  4. The way the algorithm skews is not purposeful. Nobody made an active decision.

I’d argue:

  1. We’re the ones who aren’t making conscious decisions, and our streams are expressly not our fault.
  2. It changed its algorithm, which it still won’t show us. Uh, yeah.
  3. Facebook made roughly $70 billion in revenue in 2019, according to its own press release. It can afford to hire some fact-checkers. We created federal regulatory agencies like the FDA and the FCC expressly to protect our populace from harm. Harm is happening. Why isn’t the FCC regulating Facebook?
  4. A company that makes billions every year based on the data it collects from algorithms is absolutely making conscious decisions about how and when it employs those algorithms.

We’re told by Facebook that through its platform, it’s offering every human on earth the freedom to create her own brand. Facebook has given us freedom. But what is a brand and why do we need one? How can a complex human be a brand? What is it we think we’re selling, and why are we selling it on Facebook? Are we supposed to advertise ourselves to our own “Friends,” even though our “Friends” are products, too? What control do we have over our brands, when we’re not writing the algorithms?

Though we all use Facebook, we don’t actually know the rules of engagement. It kind of just showed up in our lives, like Buffy’s creepy little sister. Facebook has never made clear what’s polite and what isn’t. Are we supposed to post about our work? Leisure? Our kids? Are we supposed to share our political opinions? Should we have public breakdowns? Share our health scares? Is it okay to post on other people’s threads, pick fights with strangers, pick fights with friends? Facebook claims this, too, is our responsibility. But we’re not the ones who created it. Facebook has never been open about its business model or the fact that it tracks everything we do. It’s as if it invited us all to play a soccer game, is making money off the outcome, but can’t spring for a referee.


About the Author

Sarah Langan has received three Bram Stoker Awards, and her work has been included in numerous best-of-the-year anthologies. Her upcoming book is Good Neighbors.