
Can a Selfie Predict Your Politics?

Machine learning can predict whether someone is liberal or conservative.

  • A new machine learning algorithm can identify political orientation from a photo with 73 percent accuracy.
  • Algorithms may identify politics and other characteristics not through a genuine connection but by intermediate factors. For example, people who are liberal may also be more likely to wear certain clothes.
  • Self-presentation cues such as head pose and expression contribute to the algorithm’s predictions, but they don’t entirely explain its accuracy.
  • Machine learning raises important ethical questions, such as the potential for discrimination based on appearance or personality traits.
New research predicts political orientation from pictures of faces.
Source: Photo by Sora Shimazaki on Pexels.

You are given two pictures of people’s faces, closely cropped. One is a conservative, one is a liberal. Can you guess which is which, just by looking at the faces?

This is the set-up for Stanford University psychologist Michal Kosinski’s new machine learning paper, one in a series he has published over the last several years examining how well machine learning algorithms can predict personal characteristics such as scores on personality tests and, more controversially, sexual orientation. The headline finding, that features derived from an uninterpretable “black box” neural network can identify political orientation with 73 percent accuracy, raises both questions and concerns.
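
To make the setup concrete, here is a minimal sketch of the general recipe behind studies like this one: represent each face as a numeric feature vector (an embedding from a pretrained face-recognition network) and fit a cross-validated linear classifier to predict self-reported orientation. This is not Kosinski’s code; the data below are synthetic placeholders, and every variable name is my own.

```python
# Illustrative sketch, not the study's code: predict a binary label from
# face-derived feature vectors with cross-validated logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for face embeddings: in a real study these would come from a
# pretrained face-recognition network; here they are random 512-d vectors.
n_people, n_dims = 2000, 512
embeddings = rng.normal(size=(n_people, n_dims))

# Stand-in for self-reported orientation (0 = conservative, 1 = liberal),
# weakly tied to a few embedding dimensions so the example has some signal.
signal = embeddings[:, :5].sum(axis=1)
labels = (signal + rng.normal(scale=2.0, size=n_people) > 0).astype(int)

# Cross-validated accuracy of a linear classifier on the embeddings.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, embeddings, labels, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

The only output such a pipeline produces is an accuracy number; it says nothing by itself about why the features predict the label.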

Ethical Questions about Machine Learning

First, the concerns. Facial physiognomy, the practice of judging a person’s character from their face, has historically been used to justify scientific racism. If you can tell someone’s personality, whether they are likely to commit a crime, or whether they have a stigmatized sexual identity from their face, then you can decide how to treat that person based solely on their appearance, not on their actions. Treating people differently based on their physical appearance and personal characteristics, rather than on what they do, is the definition of discrimination.

Machine learning puts a new scientific sheen on ideas that are at odds with basic social ethics. These new results could easily be used to suggest that some people are just “born liberals” (or conservatives) and that you can tell from looking at their faces. The very set-up of the study assumes that our political views are as consistent as our faces, and the only result it could give is an accuracy score relating aspects of the face to politics.

How Algorithms Make Predictions

There is no reason (on my first reading) to believe the results rest on statistical errors, but that does not mean they necessarily capture true relationships in nature. It is common in machine learning research to use huge datasets with known biases. For example, a dataset containing the photographs and criminal records of 1 million people could be used to train an algorithm to predict whether a person has a criminal record.

But we have to remember that having a criminal record is not the same as being “likely to commit a crime.” Some groups of people are more likely to be prosecuted for a crime, while others are more likely to be let off without being entered into the system. For example, marijuana use is similar among white and black Americans, but black Americans are much more likely to be arrested for it. In this case, the machine learning algorithm would not be learning whether someone really is a criminal, but whether the person was likely to get arrested, which is influenced by biases in which areas are policed and in who is and isn’t arrested.

Training an algorithm to give the best possible prediction on this one dataset essentially freezes the current system’s biases in place, turning preferential treatment for attractive people, lighter-skinned people, less wrinkled people, and so on into a system that will from then on be relied on to give “objective” answers. Machine learning can therefore be a way of laundering bias, converting discrepancies between types of people into a computerized system that can’t be questioned.
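
To see the mechanics, consider a toy simulation (my own construction, not from the article or the paper): two groups engage in a behavior at exactly the same rate, but one group is arrested for it far more often. A classifier trained on arrest records, given any feature that correlates with group membership, will confidently assign that group a higher “risk,” even though the underlying behavior is identical.

```python
# Toy simulation: a model trained on arrest records learns
# enforcement bias, not underlying behavior.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical rates of the underlying behavior...
group = rng.integers(0, 2, size=n)      # group 0 or group 1
behavior = rng.random(n) < 0.10         # 10% in both groups

# ...but unequal enforcement: group 1 is arrested far more often when the
# behavior occurs (and never otherwise, to keep the toy simple).
arrest_rate = np.where(group == 1, 0.60, 0.15)
arrested = behavior & (rng.random(n) < arrest_rate)

# Train on "arrested" as if it measured behavior, using group membership as
# the feature (a stand-in for any appearance cue that tracks group).
X = group.reshape(-1, 1)
clf = LogisticRegression().fit(X, arrested)

probs = clf.predict_proba([[0], [1]])[:, 1]
print(f"predicted risk, group 0: {probs[0]:.3f}")  # ~0.015
print(f"predicted risk, group 1: {probs[1]:.3f}")  # ~0.060
# Same behavior in both groups, very different "risk" predictions.
```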

Algorithms can be used to make biased judgments seem more scientific.
Source: Photo from This Is Engineering on Pexels.

It is easy to see how this might happen in Kosinski’s political orientation detector. A YouTuber posting makeup tutorials might become particularly popular among large groups of Evangelical Christian women, leading them to all have similar makeup in their photos. Gay men in large cities might prefer to have shaped eyebrows (in fact, researchers did report that gay men were more likely than straight men to wear glasses in dating profiles, and lesbians less likely to wear makeup than straight women). Southerners might be more likely to post selfies taken outdoors, which would lead to a different quality of light. Actors might post headshots, which include professional studio lighting.

Each of these groups is likely to have particular voting preferences, but the reasons they look similar are social trends and self-presentation. In other words, the same facial feature might relate to political orientation differently if, for example, the style of makeup promoted by our (hypothetical) YouTuber were suddenly adopted by people with very different political beliefs.

Determining Political Preferences

As Kosinski notes in an online supplement, the black box algorithm could be picking up on aspects of self-presentation like this. In fact, the paper shows that some of these features are indeed useful for predicting political orientation. Head pose alone gives 58 percent accuracy (liberals face the camera more often), and facial expression alone gives 57 percent accuracy (liberals express disgust less).

However, these interpretable features still don’t explain the extra 15 percentage points of accuracy achieved by the neural network he used. Further, Kosinski notes that relying on information about posing isn’t likely to be a problem for the ways companies, political parties, and government agencies would use this type of research: they would likely be using photos scraped from public social media accounts, just as Kosinski did, so the same biases in self-presentation should hold.
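
One hedged sketch of how that gap could be quantified, assuming you had interpretable measurements (head pose, a disgust-expression score) alongside the full embeddings: fit the same classifier on each feature set and compare cross-validated accuracy. Everything below is synthetic, and the feature names are mine, not the paper’s.

```python
# Sketch: compare cross-validated accuracy from a few interpretable
# features vs. a high-dimensional embedding (all data synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 3000

# Hypothetical interpretable features: head pitch/yaw and a disgust score.
pose = rng.normal(size=(n, 2))
disgust = rng.normal(size=(n, 1))
interpretable = np.hstack([pose, disgust])

# Hypothetical embedding: the interpretable features plus many other dims.
other_dims = rng.normal(size=(n, 128))
embedding = np.hstack([interpretable, other_dims])

# Synthetic label influenced by pose/expression AND by some "other" dims,
# mimicking a gap between interpretable and black-box accuracy.
logits = (0.6 * pose[:, 0] - 0.5 * disgust[:, 0]
          + 0.3 * other_dims[:, :10].sum(axis=1))
labels = (logits + rng.normal(scale=1.5, size=n) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
acc_interp = cross_val_score(clf, interpretable, labels, cv=5).mean()
acc_embed = cross_val_score(clf, embedding, labels, cv=5).mean()
print(f"interpretable features: {acc_interp:.2f}, full embedding: {acc_embed:.2f}")
```

In a comparison like this, whatever accuracy the embedding model has beyond the interpretable model is exactly the part that remains unexplained.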

Kosinski's algorithm for predicting political orientation relies on uninterpretable "black box" algorithms.
Source: Photo by Manuel Geissinger on Pexels.

This leads us to question the deeper scientific significance of this work. Why can the machine learning algorithm predict political orientation from photographs? If it’s not just due to typical ways different types of people like to pose their head, or the facial expressions they’re making, what could the explanation be?

In his online supplement, Kosinski lays out three broad possibilities: (1) Who you are changes your face. For example, if you are a happy person, smiling frequently will give you crow’s feet wrinkles. (2) Your face changes who you are. For example, attractive people may be given special treatment, leading them to be more optimistic. (3) Some other factor, like genes or hormones, changes both faces and who you are. For example, if your body produces more testosterone you might be naturally more aggressive and have a more prominent brow ridge. Unfortunately, the study does not attempt to distinguish between these possibilities; they are only compared in the online supplement.

These findings are interesting, but it’s hard to know what to make of them at this point. What does this paper teach us about how faces (or politics) work? As in the earlier work on sexual orientation, the kind of careful exploration needed to really make sense of the results (and tease out effects like how makeup can help distinguish lesbians from heterosexual women) will likely fall to other, future researchers. As with the black box algorithm itself, right now all we are seeing is an interesting output; we aren’t getting the big scientific prize: an explanation.

References

Kosinski, M. Facial recognition technology can expose political orientation from naturalistic facial images. Sci Rep 11, 100 (2021). https://doi.org/10.1038/s41598-020-79310-1
