
Virtual Justice: How Big Data Could Undermine Society

We're beginning to recognize how socially irresponsible some uses of algorithms can be.

Algorithms used in facial recognition have raised serious ethical questions.
Source: Algorithmic Justice League

“There is a battle going on for fairness, inclusion and justice in the digital world.” Darren Walker, president of the Ford Foundation, was referring to burgeoning research that has uncovered systematic racial, gender, and other biases built into algorithms used for everything from Netflix “recommended” titles to surveillance systems. In one dramatic case, researchers testing facial-recognition software found that it was reliable when analyzing photos of white males. For photos of darker-skinned women, however, the systems misidentified gender as much as 35 percent of the time (Lohr, 2018). Such facial-recognition technology is being rapidly developed for a range of applications.

The marketing potential is obvious: the software can help craft specific advertising pitches based on social-media profile photos. But law enforcement agencies are also eager to integrate the software; according to recent estimates, the facial photos of 117 million Americans are now held in facial-recognition systems used by law enforcement. Research suggests that black men are too often singled out for scrutiny because they are disproportionately represented in mugshot databases. And the problem of algorithms simply reflecting the biases built into them can be even more blatant: in 2015, Google had to apologize when its new Google Photos app began labeling photos of black people as “gorillas” (Dougherty, 2015).
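Disparities like these come to light only when a system's accuracy is measured separately for each demographic group rather than in the aggregate. The snippet below is a minimal, hypothetical sketch of that kind of disaggregated audit; the function name and the evaluation records are illustrative inventions, not the tooling or data from the studies cited here.

```python
# A minimal sketch of a demographic audit: tally a classifier's error
# rate separately for each subgroup of a labeled evaluation set.
# The subgroup labels and records below are hypothetical.
from collections import defaultdict

def audit_error_rates(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples."""
    totals, errors = defaultdict(int), defaultdict(int)
    for subgroup, truth, prediction in records:
        totals[subgroup] += 1
        if prediction != truth:
            errors[subgroup] += 1
    # Error rate per subgroup: errors divided by total examples seen.
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical evaluation records for a gender classifier.
results = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "female", "male"),    # misclassification
    ("darker-skinned women", "female", "female"),
]

for group, rate in audit_error_rates(results).items():
    print(f"{group}: {rate:.0%} error rate")
```

Numbers broken out this way are what exposed the gap described above; a single aggregate accuracy figure can easily mask it.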

As the public’s behaviors, interests, routines and even physical appearances are recorded and stored, individuals are transformed into commodities themselves. Among media technology folks, there’s a saying: if you’re getting something for free online, YOU are the commodity. As a society, we are only beginning to seriously question an arrangement that works terrifically for powerful corporate entities yet arguably threatens the very fabric of society and its civic and democratic foundations. How might we find a better balance between our own interest in controlling data about ourselves and the fact that this same data is the currency driving a multi-billion-dollar economy? “Society has sacrificed fairness for efficiency,” is how mathematician Cathy O’Neil, who catalogues the disturbing biases built into pervasive algorithms in her 2016 book, Weapons of Math Destruction, describes our predicament.

And that much-acclaimed Netflix “recommender” algorithm that populates your account? It and similar systems are just as likely to give offense and raise ethical questions when race becomes part of the calculations. In late 2017, Netflix rolled out a new version of the algorithm that manipulated the “thumbnail” images of movies to tailor them to individual viewer profiles. Soon, viewers whose histories included African-American-themed films began seeing films with predominantly white casts in their recommended lists, presented with thumbnails featuring minor black characters. “This feels like a step too far,” said Tobi Aremu, 26, a film-maker from Brooklyn. He had recently watched the film Set It Up, “which was made to look like a two-hander between Taye Diggs and Lucy Liu, but they were secondary characters in the love story of a young white couple!” (Iqbal, 2018).

These and other examples of algorithmic bias have prompted analysts, technologists and scholars to call for more systematic auditing of machine-learning systems and more conscientious construction of the “learning sets” used to teach computers. The researcher who documented systematic discrimination in facial-recognition systems, Joy Buolamwini, launched the Algorithmic Justice League to do just that. “Algorithms, like viruses, can spread bias on a massive scale, at a rapid pace,” Buolamwini noted (2018, ajlunited.org). But toleration of algorithmic bias raises two deeper philosophical questions. The first is Kantian in its focus on what it means to design and use media technology in ways that ensure all individuals are treated with the dignity they are owed. The second is Aristotelian in its focus on how we might envision a digital landscape designed to cultivate social engagement and a healthy polis.

Regarding the first, algorithmic bias arguably perpetuates the treatment of individuals as a mere means and disregards our moral obligation to approach all individuals with the dignity owed to beings with rationality and free will. In Kant’s deontology, this is a significant moral failure made clear through his “categorical imperative,” a mental exercise that asks us to consider whether an action would be acceptable if everyone were to act the same way. Clearly, disregard for the discriminatory effects of a system would fail Kant’s imperative. Similarly, according to W.D. Ross, we are all duty-bound to promote a sense of justice. Regarding the second, we all understand that failure to prioritize the project of a just society, however imperfect, can quickly trigger an erosion of civil life. The pursuit of justice, like the cultivation of all virtues, should be a prime concern for all of us, because such virtues are essential for building a community in which all individuals have the opportunity to reach their full potential, to flourish, as Aristotle put it. As philosopher Philippa Foot argued, vice, including injustice and discriminatory behavior, is a defect in human beings in the same way that poor roots are a defect in an oak tree: its presence plainly undermines the capacity to flourish.

As media consumers and citizens whose lives are increasingly lived online, we expect companies to pursue sustainable and responsible business practices. In the realm of data ethics, this means we will need policies that take more seriously the potential harms of Big Data and the question of who actually “owns” personal data. In other words, there is a “demand for virtue”: a social expectation that corporations not only produce products and services in a sustainable manner but also invest in the communities from which they profit.

References

Dougherty, C. (2015, July 1). Google Photos mistakenly labels black people ‘gorillas.’ The New York Times. Available: https://bits.blogs.nytimes.com/2015/07/01/google-photos-mistakenly-labe…

Iqbal, N. (2018, October 20). Film fans see red over Netflix ‘targeted’ posters for black viewers. The Guardian. Available: https://www.theguardian.com/media/2018/oct/20/netflix-film-black-viewer…

Lohr, S. (2018, February 9). Facial recognition is accurate, if you’re a white guy. The New York Times. Available: https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-a…
