All Over Your Face
Advanced facial-recognition technology can deduce aspects of our personality as well as our identity. Will this new fact of life change the way we act?
By Matthew Hutson published January 2, 2018 - last reviewed on October 9, 2019

Last year, a Russian firm launched the website FindFace, which matches submitted photos to profiles on the social networking site VK, a regional Facebook imitator. If a stranger photographs you in the street or spots your image on another site, and you're on VK, then FindFace can likely identify you by name. Trolls immediately began using the site to out actresses in adult videos, harassing them and their friends and shaming them on discussion boards with epithets like "burnt whore."
Meanwhile, Moscow police use facial recognition on a network of 160,000 security cameras across the city, and China is using cameras with facial recognition to tag jaywalkers. You can also use your face to pay at some KFCs in China, and it's required before toilet paper can be dispensed at some public restrooms. In Dubai, police wear Google Glass devices that identify the faces of people in front of them. Here at home, the faces of half of all American adults are already in the government's facial-recognition system. It's becoming harder to go about your life in private, online or off, anywhere in the world. You don't need to be a porn star or a crook to find that unnerving.
Now researchers are developing techniques that not only identify people by their faces but also infer what's in their minds. Our expressions signal our emotions, and our facial structure can hint at our genetic makeup. We've always known that faces convey information to others, but now ever-present electronic eyes can watch us with untiring attention and with the training to spot our most fleeting micro-expressions.
Even as we debate the ethics, facial analysis advances at an accelerating rate. Amazon, for example, is testing grocery stores that track users as they shop. Such technology has the potential to make our lives safer, more convenient, and better customized to our individual needs, but it can also entrap us behind literal bars or those of social norms or paranoia. As the machines' learning advances, step by step, we must make or accept tradeoffs, explicitly or implicitly. That's why it's worth looking into those electronic eyes, to understand their applications and their social risks and benefits.
The End of Hiding
Users of dating sites delicately curate what they reveal online, hiding information that they consider unbecoming or that unwanted suitors might use to pursue them beyond the site. But a pseudonym doesn't deliver what it used to. To see how easily a stranger can learn personal information about you, Carnegie Mellon University privacy researcher Alessandro Acquisti conducted an experiment. He and two collaborators first used a web browser to collect profile photos of about 5,000 Match.com users in a North American city. They also collected the primary photos of about 100,000 Facebook users in the same city. Using a commercially available piece of software called PittPatt, they were able to match about one in 10 Match faces to a Facebook face. Before the introduction of such algorithms, the task would have required 500 million comparisons by hand.
For the researchers' next act, they pulled in college students walking by their building and took three photos of each of them. They asked the students how they'd feel if a stranger could photograph them and predict their interests and Social Security numbers. On a scale from 1 (most comfortable) to 7 (most uncomfortable), the average ratings were about 5 and 6, respectively. The researchers then proceeded to do just that. They matched the students' photos to Facebook profiles and grabbed their real names, interests, and other information. Then they used those data points and another algorithm to search online and dredge up Social Security numbers. For about a quarter of the participants, they were able to guess the first five digits—enough to run a brute-force attack on the remaining four—within a few attempts. The method could easily be improved with more photos or slightly better algorithms. Sample responses from the students: "very worrisome," "surprised and shocked," "freaky ... makes me reassess what I should ever reveal on the internet." Just for fun, Acquisti's team coded up a demo augmented-reality iPhone app: Point the phone's camera at a stranger, and next to the person's head it displays his or her name, SSN, and date and state of birth.
Acquisti relied only on primary profile photos, but people upload billions of other photos to Facebook every month, many of them tagged by name. A recent study found that by using albums, comments, information about where and when photos were taken, friend networks, and the bodies and backgrounds displayed, even people in untagged photos with their faces blurred could be identified. "People like to think that they're anonymous and invisible, despite posting lots and lots of information about themselves all over the internet," says psychologist Nicholas Rule of the University of Toronto, who studies social perception. "It all feels private from your living room, but it's the digital equivalent of posting a billboard on the side of a major highway."
Such unintentional broadcasting has become possible mainly through recent advances in an area of artificial intelligence known as machine learning, in which computers discover patterns in data by themselves. Landmarks in machine learning—self-driving cars, Go-playing computers, automatic language translation—have resulted from three main factors. First, computing power has steadily increased, and new specialized chips tailored for machine learning can run algorithms orders of magnitude faster and more efficiently.

Second, "big data" has gotten bigger; remember those billions of Facebook photos. We're surrounded by sensors collecting information about the world and feeding it into databases. This information doesn't just open up our personal lives; it helps to train the computers, which need massive numbers of examples to learn from. A child can see one hot dog and recognize other hot dogs for life, but a computer needs to "see" thousands or millions.
Third, the algorithms have improved. Developing artificial neural networks, or neural nets, is the hottest area of machine learning right now. These software models work somewhat similarly to the brain. "Neurons" each process little bits of information, then pass them on to other layers of the net. At first the strength of the connections between neurons is random. But over time, as the network guesses correctly or incorrectly (nope, not a hot dog), it receives feedback and adjusts accordingly. Once it's trained, it's ready to be used in situations where the answer is not known in advance.
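To make that feedback loop concrete, here is a minimal sketch in Python of a single artificial "neuron" learning from its mistakes. The data and labels are invented for illustration, and a real network stacks many layers of such units and uses a subtler update rule (backpropagation), but the adjust-on-error idea is the same.

```python
import random

# Toy training data, invented for illustration: each example is
# (features, label), where 1 = "hot dog" and 0 = "not a hot dog".
data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 0.9], 0), ([0.2, 1.0], 0)]

# Connection strengths start out random, as described above.
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
rate = 0.1  # how strongly each piece of feedback adjusts the weights

for epoch in range(50):
    for features, label in data:
        # Forward pass: weigh the inputs and make a yes/no guess.
        total = sum(w * x for w, x in zip(weights, features)) + bias
        guess = 1 if total > 0 else 0
        # Feedback: nudge each weight in proportion to the error
        # (nope, not a hot dog -> error is nonzero -> weights shift).
        error = label - guess
        weights = [w + rate * error * x for w, x in zip(weights, features)]
        bias += rate * error

print(weights, bias)  # after training, the weights separate the two classes
```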
Neural nets can have millions of neurons arranged in dozens of layers for what is called deep learning. There are many ways to arrange the neurons, but one of the most important architectures at the moment is a convolutional neural net, or ConvNet. These algorithms rely on convolution, a mathematical operation that allows them to recognize patterns even as they vary slightly, the way you can recognize a face no matter where it falls on your retina.
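A minimal sketch of that idea, using an invented one-dimensional "image" for brevity (real ConvNets slide small two-dimensional filters over pixels, and what deep-learning libraries call convolution is technically cross-correlation, but the principle is the same): one tiny filter is applied at every position, so a pattern triggers the same response wherever it appears.

```python
# A made-up one-dimensional "image": a bright spot appears at two positions.
signal = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
kernel = [1, -1]  # a tiny filter that responds to a drop in brightness

# Slide the filter across the signal (a 1-D convolution, no padding).
response = [
    sum(k * signal[i + j] for j, k in enumerate(kernel))
    for i in range(len(signal) - len(kernel) + 1)
]
print(response)  # peaks at both positions: same pattern, same response
```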
Since 2012, ConvNets have been the standard tool for image recognition. For facial recognition, neural nets are sometimes used to translate an image like a Match.com photo into a manageable set of numbers representing facial features. Then another algorithm looks for a target image—say, a Facebook photo—with the most similar set of features. As computers, data sets, and algorithms keep improving, so will their ability to recognize us. The longer they work, the more they learn, and the more powerful and accurate they become. And they're just getting started.
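Here is a hedged sketch of that matching step in Python. The four-number "embeddings" are invented stand-ins for what a trained network would actually produce (real systems use 128 or more dimensions), but the search logic is the same: find the enrolled face whose numbers lie closest to the probe's.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented embeddings for three enrolled faces (e.g., Facebook photos).
gallery = {
    "alice": [0.9, 0.1, 0.3, 0.5],
    "bob":   [0.2, 0.8, 0.7, 0.1],
    "carol": [0.4, 0.4, 0.9, 0.6],
}
probe = [0.85, 0.15, 0.35, 0.45]  # features extracted from a new photo

# Identification: pick the enrolled face whose features are most similar.
best = max(gallery, key=lambda name: cosine_similarity(gallery[name], probe))
print(best)  # -> "alice"
```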
Will facial recognition change how we act? One possibility: People won't care. Unless we can see the cameras, see the people looking at our online profiles, and see how they're using our information, we may forget about it. And even if we do care about our privacy in theory, we simply might not be able to maintain it in practice. "It would require a nearly superhuman effort for an individual to properly manage their privacy," Acquisti says. We're fairly good at managing our privacy offline: If you're having a sensitive conversation at dinner and the waiter walks by, you lower your voice. Online, however, you can't see the waiter. It's informational asymmetry, Acquisti says: We know little about what people know about us, who knows it, or what their intentions are.
The interpersonal effects of facial recognition remain clouded as well. What happens when we can no longer separate our personas and prevent our social worlds from mutual contamination? Or when someone meets you at a bar or clicks on your LinkedIn profile and can use your image to dredge up every other iota of your web presence, including footage of you at a kink club or political rally? Maybe we'll learn to forgive youthful indiscretion when photographic evidence of our entire lives is out there. Maybe we'll learn to see each other as more complete people. Or maybe we'll become paranoid and stop trusting one another. In any case, we won't have the space and control to nurture new relationships organically. "Privacy offers the ability to modulate your degree of intimacy with another person," Acquisti says.
We have some data on how people change when they feel watched. Studies in Sweden, England, and the United States show that security cameras moderately reduce crime in their immediate vicinity. After Edward Snowden revealed many federal surveillance practices, traffic to Wikipedia pages for topics such as "terrorism" and "dirty bomb" dropped.
A study in Helsinki took tracking to an intimate extreme: In each of 12 participating households, researchers installed microphones and three or four cameras, and also monitored wireless traffic and computer and cellphone activity (keypresses, screen shots). The intrusion lasted six months. In surveys and interviews, the subjects reported annoyance, anxiety, and occasional anger. As for sharing their video data, they said they'd be least comfortable with the authorities seeing it, even if they hadn't broken any laws, followed by public media, which could spin it into "commercial drama or something," and then friends and acquaintances. But on average, they adapted to the tech over time. Half said they assumed their internet use was already being monitored post-9/11. And while some changed their routines ("I kind of cannot have sex in the kitchen because of the camera"), others didn't ("After I realized that I'd already walked naked to the kitchen a couple of times, my threshold...got lower").

The Finnish group's varied responses should come as no surprise. Acquisti and colleagues have written about our inconsistencies when considering privacy. In one study, when participants were asked to rate their concern that a stranger might discern their sexual orientation, half of those who rated it a 7 out of 7 had already revealed their orientation on their Facebook profiles. People will also pay more to keep privacy than to acquire it, an example of the endowment effect. Even the temperature of a room can irrationally affect how much people will reveal.
If we wanted to be fully informed and calculating, we'd have to put the rest of our lives on hold: By one rough estimate, reading the privacy policies of every website we visit would cost Americans $1 trillion a year in lost time.
The effects of accepting facial identification are both nefarious and salutary, often in combination. Surrendering anonymity denies us agency by holding us to our pasts and to who we are elsewhere. When strangers can call up your biography, warts, laurels, and all, you can't start fresh each time you walk into a room or meet someone new. On the flip side, you can more easily avoid people with bad reputations; serial con artists will need to invest in a fresh selection of fake moustaches. We'll also enjoy a range of new conveniences and security advantages like walletless checkout and terrorist identification in crowds. Unfortunately, at this point, we can't know whether those benefits will outweigh the costs. "We are all part of a gigantic social experiment," Acquisti says.
What's Behind Our Faces
AI can not only identify us by our faces but also read the emotions on them. Our faces reveal more than just our biographies—who we are, what we've done, where we've been. They also reveal what's inside our heads. Facial expressions evolved to signal our mental state to others. Communication can occur strategically, as when we smile politely at a coworker's joke, or subconsciously, as when we display tells at the poker table. People are pretty good at reading expressions already, but machines open new opportunities. They can be more accurate, they don't get tired or distracted, and they can watch us when no one else is around.
One opportunity this opens up is helping people who aren't naturals at face reading. Dennis Wall, a biomedical data scientist at Stanford, has given Google Glass devices to children with autism. They wear frames with a built-in camera connected to software that detects faces and categorizes their emotions. The devices can then display words, colors, or emoticons on a little screen attached to the glasses, which the child can view by looking up. The software can run constantly, or the children can play training games, such as one in which they try to guess someone's emotion. Parents can review recordings with a child and explain tricky social interactions. Children can't wear the device in the classroom, but teachers report that the training has improved engagement and eye contact. Wall says similar applications might help people with PTSD or depression, who, research shows, are prone to missing smiles.
Ned Sahin, a neuroscientist who has developed Glass apps for autistic children, says anyone could benefit from such assistance. "I make a joke any time I talk about it: Good thing we're doing this for people on the spectrum, because they need it and we don't. We've got this all dialed in," he says, emphasizing the irony. "And each of you knows exactly what your wife or husband is thinking at any time."
There are indeed some situations in which face-reading tech performs better than neurotypical people. In one study, individuals were recorded doing two tasks: watching a video of a baby laughing, which elicited smiles of delight, and filling out a frustrating web form, which elicited natural expressions of frustration, closely resembling smiles. When other participants viewed the recordings to categorize the smiles as delighted or frustrated, they performed no better than chance. A machine-learning algorithm, however, got them all right. In the real world, people would have contextual clues beyond facial expressions. "Coding facial movements in the absence of context will not reveal how someone feels or what they think most of the time," says Lisa Feldman Barrett, a psychologist and neuroscientist at Northeastern University.
In another experiment, participants watched videos of people holding an arm in ice water or holding an arm in warm water and pretending to look anguished. Subjects' scores at distinguishing real from faked pain expressions remained below 60 percent even after training. A machine-learning algorithm scored around 85 percent.
These studies raise the possibility of AI lie detectors—possibly deployed on something like Google Glass. What happens when our polite smiles stop working? When white lies become transparent? When social graces lose their lubricating power? Even if we have the technology to create such a dystopia, we may decide not to use it. After all, if someone says he likes your haircut, how hard do you currently try to test the comment's veracity? We prefer to maintain certain social fictions. "There will be a sector of humanity that will want stuff like that," Wall says, "but I think a majority will prefer just to sit down and have a conversation with somebody the old-school way."

Face-reading algorithms generally fall into one of two types. In the first, a machine-learning algorithm (often a neural network) is trained to translate an image directly into an emotional label. This approach is relatively simple, but it works best on stereotypical facial configurations, which can be rare. In the second, a machine-learning algorithm (again, a neural network, or one called a support vector machine) detects a set of active "action units" in an image—facial movements linked to underlying muscle contractions—and a second algorithm then translates those action units into an emotional expression. This method is more flexible, but detecting action units can be tricky: Once you add variations in lighting, head pose, and personal idiosyncrasy, accuracy drops.
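A rough sketch of that second, two-stage approach in Python. The action-unit detector is stubbed out with an invented result, and the unit-to-emotion table is a simplified illustration loosely based on the Facial Action Coding System, not a complete mapping.

```python
# Stage 1 would be a trained classifier (a neural net or support vector
# machine) reporting which facial action units are active in an image.
# Here it is stubbed with an invented result for illustration.
def detect_action_units(image):
    return {6, 12}  # AU6 = cheek raiser, AU12 = lip-corner puller

# Stage 2: translate combinations of action units into an emotion label.
# These rules are a simplified illustration, not the full FACS mapping.
EMOTION_RULES = [
    ({6, 12}, "happiness"),
    ({1, 4, 15}, "sadness"),
    ({4, 5, 7, 23}, "anger"),
]

def read_expression(image):
    active = detect_action_units(image)
    for required_units, emotion in EMOTION_RULES:
        if required_units <= active:  # all required units are present
            return emotion
    return "neutral/unknown"

print(read_expression("photo.jpg"))  # -> "happiness"
```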
Automatic face reading has wide applicability. Couples might use it to better understand each other—or to understand themselves and what signals they're really displaying in a conversation. Public speakers might use it to help read their audience during online or offline seminars or to practice their own body language. Teams might use it to monitor and improve group dynamics. Treaty negotiators or criminal investigators could use it for peace and security (or for manipulation).
In a recent book chapter, computer scientists Brais Martinez and Michel Valstar of the University of Nottingham outlined face reading's potential benefits for behavioral medicine in the diagnosis and treatment of such disorders as depression, anxiety, autism, and schizophrenia, as well as in pain management (evaluating injuries and tracking rehab). Louis-Philippe Morency, a computer scientist at Carnegie Mellon University, has used video analysis to find that depressed people don't smile less than other people but that their smiles are different—shorter and less intense. He's also found that depression makes men frown more and women frown less. He recently reported that using machine learning to analyze conversations with depressed people can predict suicidality. Algorithms can be more objective than people, and they can be deployed when doctors aren't around, monitoring people as they live their lives. They can also track subtle changes over time. Morency hopes that by giving doctors more objective, consistent measures of internal states to help them in their assessments, he can create "the blood test of mental health."
Affectiva, a company spun out of MIT's Media Lab, has collected data on six million faces from 87 countries and put facial analysis to work for dozens of clients. Uses include making a cute robot more responsive to learners during language lessons, making a giant light display respond to crowds, and analyzing legal depositions. The company is also working on automotive solutions that both monitor drivers' alertness to make sure they're always ready to take back control in semi-autonomous vehicles and measure mood for better customization of the driving experience.
Facial analysis is frequently used to measure audience response to ads, because a good deal of the money in tech is in advertising. It's also where much of the potential for abuse lies. In one study of supermarket shoppers, some participants expressed discomfort with the potential for micro-expression monitoring. "Understanding how you really feel about this product even though you might not know it yourself... that's a little spooky," one participant said. "It's like mining your thoughts more than just your buying habits."
Obviously, we need some extensive discussions about consent for facial analysis. Which norms and laws are necessary to maintain a sense of inner privacy? Facial analysis clearly has great value for users, but to the extent that we don't understand or think about our privacy, informed consent may be an illusion, and people will increasingly come to know us much better than we may be comfortable with.
Typecasting, With Accuracy
In 2014, an Israeli company named Faception launched, with the promise that its AI could classify several character types from faces—including the Bingo Player, the Academic Researcher, and the Pedophile. The company doesn't reveal much about its clients but claims to have done homeland-security work, presumably keeping citizens safe from bingo extremists. Questionable marketing pitches aside, the evidence suggests that facial structure really does reveal some internal traits.
Recently, a paper made a big splash by demonstrating that machine learning could guess sexual orientation from dating-site headshots much better than chance. The algorithm's AUC—a statistical measure that accounts for both false positives and false negatives, where 0.5 is chance and 1.0 is perfect—was 0.81 for men and 0.71 for women. Human guessers scored only 0.61 and 0.54, respectively. In practical terms, if the computer selected the 10 men most likely to be gay from a group of 1,000 photos, it would be right about 9 of them.
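For readers who want the statistic unpacked: an AUC of 0.81 means that, given one gay and one straight face at random, the algorithm scores the gay face as more likely gay 81 percent of the time. A minimal sketch in Python, with invented scores:

```python
def auc(positive_scores, negative_scores):
    """Probability a random positive outscores a random negative (ties count half)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in positive_scores
        for n in negative_scores
    )
    return wins / (len(positive_scores) * len(negative_scores))

# Invented classifier scores, purely for illustration.
positives = [0.9, 0.8, 0.75, 0.4]  # cases that truly belong to the class
negatives = [0.7, 0.5, 0.3, 0.2]   # cases that do not
print(auc(positives, negatives))   # prints 0.875; 0.5 = chance, 1.0 = perfect
```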

Michal Kosinski, a psychologist at Stanford University, and his collaborator wrote the paper as a warning of what's possible. They used an off-the-shelf neural network and other standard algorithms—ones available to any government, including those in countries where homosexuality is a crime punishable by death. Critics have argued that the algorithms might be relying on subtle differences in posing or grooming that track sexuality, rather than on facial structure itself, thus reducing the study's validity, but even if that's the case, such cues can still be exploited in the real world. What's more, the method doesn't need to be perfect to have an impact: It might simply be used as a prescreening device to narrow the range of people to investigate.
One danger in automating judgment about traits or inclinations is the risk of encoding biases while offering the illusion of objectivity. A recent paper by Chinese researchers used machine learning (including a standard convolutional neural net) to assess "criminality" based on headshots. But their basis for measuring criminality was not the committing of a crime, or even traits such as aggression or impulsiveness; it was the existence of a criminal conviction. And one's path through the justice system depends on subjective judgment at every step, including biases based on appearance. Maybe someone looks mean. That person is more likely than someone else to be caught and convicted for a similar crime. The algorithm then learns that mean-looking people have more "criminality." It uses that to catch and convict more mean-looking people. The cycle repeats. It's easy to see race and class biases becoming embedded and amplified.
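A toy simulation in Python makes the loop concrete. Every number here is invented: appearance has no effect on offending, only on the chance of being caught, yet retraining on convictions widens the apparent "criminality" gap round after round.

```python
import random

random.seed(0)

# Invented assumptions: everyone offends at the same 10 percent rate, but
# "mean-looking" people (20 percent of the population) start out more
# likely to be caught and convicted when they do.
POPULATION = 10_000
extra_scrutiny = 0.0  # bias added by retraining on past convictions

for round_number in range(5):
    counts = {"mean-looking": [0, 0], "other": [0, 0]}  # [people, convictions]
    for _ in range(POPULATION):
        group = "mean-looking" if random.random() < 0.2 else "other"
        counts[group][0] += 1
        if random.random() < 0.1:  # offending is independent of appearance
            catch = 0.5 + (0.3 + extra_scrutiny if group == "mean-looking" else 0.0)
            if random.random() < min(catch, 1.0):
                counts[group][1] += 1
    rates = {g: c[1] / c[0] for g, c in counts.items()}
    # "Retraining" on conviction data: the apparent gap between the groups
    # feeds back into how heavily mean-looking people are scrutinized.
    extra_scrutiny += rates["mean-looking"] - rates["other"]
    print(round_number, {g: round(r, 3) for g, r in rates.items()})
```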
Kosinski is hopeful, however, that AI can actually minimize inaccurate profiling. Even if absolute objectivity is an illusion, a computer might rely on relatively more objective signals than humans. He sees another possible benefit to automated profiling: increased tolerance. He would not out anyone without their consent, but imagines that if everyone were outed, homosexuality might become less taboo. "Do you really think," he asks, "that if people in Saudi Arabia realized that 7 percent of their neighbors, cousins, uncles, people in the royal family are gay, they would burn them all at the stake?"
Other traits appear in the face, too, whether through genetics, environment, or some combination. Nicholas Rule, the Toronto psychologist who studies social perception, recently co-authored a book chapter surveying the field. Based on faces, he found, people can predict personality, political orientation, Jewishness, Mormonism, and business and political success. Predicting professional success, though, is a bit like predicting criminal convictions: It should not be mistaken for predicting a truly inherent trait.
Humans who make these predictions score at or just above chance. At the most recent Psychology of Technology conference, Kosinski's graduate student Poruz Khambatta revealed that AI can do better. But it might never be great. Even if it is, it might not cause as much disruption as we fear, because we already have better ways to identify sexual preference, political ideology, and the rest: what people say, how they move, and what they wear.
"Past behavior is a better predictor of future behavior than how you look in the moment," says Alexander Todorov, a psychologist at Princeton University and the author of Face Value. If possible employers want to know if they're hiring a future terrorist or a bingo player, they're better off looking at your Facebook feed than your profile picture. Kosinski has done work showing that, based on Facebook likes, a computer can predict a wide variety of characteristics, including personality, intelligence, religion, and drug use. In fact, computers judged personality better than people's own spouses could. In many ways, they're getting to know us better than we know ourselves.
Aside from character and demographic traits, AI can also read genetic and developmental disorders in our faces. The majority of clinical geneticists now use an app called Face2Gene, which can evaluate the probabilities of 2,000 disorders. It helps to distinguish between different disorders when faces look somewhat abnormal and can suggest diagnoses even when a face shows no obvious signs to a physician's eye. Face2Gene has been trained mostly on white faces, so Maximilian Muenke, a geneticist at the National Human Genome Research Institute, is developing an app suitable for a variety of races, since many poorer countries don't have the resources to manually screen children. He notes that Nigeria has a population exceeding 180 million but not a single clinical geneticist. While such technology could be used to diagnose people against their will, "the benefits outweigh the possible negatives," he says.
As with all technology, there are tradeoffs. Our faces are rich with information, and we won't know what will happen when we harvest it all until we do. Judging from past advances—cars, televisions, the internet—many of our worries will turn out to be for nothing, while other, unforeseen, social dilemmas will surely crop up.
Our most public-facing body part is simultaneously our most intimate. We've evolved to share it with people in our close vicinity—and to have equal access to theirs. Someday soon that most basic social compact may be disrupted.
Matthew Hutson is a science and technology writer and the author of The 7 Laws of Magical Thinking.
Cracking Facial Recognition in Our Own Neural Networks
While some researchers try to engineer ever-better facial recognition technology, others are trying to reverse-engineer the facial recognition circuitry inside our own heads. These groups may soon be able to meet in the middle, advancing both neuroscience and computer science.
A part of the brain called the inferotemporal cortex, or IT, plays a key role in facial recognition, but its coding scheme has been a matter of debate for years. In recent research, Le Chang and Doris Tsao, neuroscientists at the California Institute of Technology, appear to have broken that code. They started by using a computer to generate 2,000 photo-realistic faces that differed from each other based on 50 variables, or independent dimensions. The researchers then recorded electrical activity in the IT cells of two monkeys—whose visual processing closely resembles our own—as the monkeys looked at the faces. They found that each cell was tuned to only one of the 50 dimensions. Chang and Tsao noted how remarkable it is that this area of the brain performs such an abstract calculation. Once they had the code, they could read a monkey's mind and reconstruct whatever face it was seeing.
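A minimal sketch in Python of why that code is so convenient. All the tuning parameters below are invented, but if each cell's firing rate is a linear function of a single face dimension, as Chang and Tsao reported, then reconstructing the face amounts to inverting 50 simple equations.

```python
import random

random.seed(1)

DIMS = 50  # each synthetic face is a point in a 50-dimensional "face space"

# Invented linear tuning: cell i fires in proportion to dimension i alone,
# with its own gain and baseline, plus a little noise.
gains = [random.uniform(0.5, 2.0) for _ in range(DIMS)]
baselines = [random.uniform(5.0, 10.0) for _ in range(DIMS)]

def record_firing(face):
    """Simulate the firing rates of 50 cells viewing one face."""
    return [g * x + b + random.gauss(0, 0.01)
            for g, x, b in zip(gains, face, baselines)]

def decode(firing):
    """Invert each cell's (known) tuning curve to recover its dimension."""
    return [(f - b) / g for f, b, g in zip(firing, baselines, gains)]

face = [random.uniform(-1, 1) for _ in range(DIMS)]  # the face being viewed
reconstruction = decode(record_firing(face))
print(max(abs(a - b) for a, b in zip(face, reconstruction)))  # tiny error
```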
The IT, however, represents only a face's shape and appearance, not its identity. A recent paper by neuroscientists Sofia Landi and Winrich Freiwald of The Rockefeller University explored this next step of processing. They recorded cortical activity in four monkeys as they looked at photos of faces and objects. The images of personally familiar faces activated two new areas of the brain—the perirhinal cortex and the temporal pole, both of which are important for memory. These areas also responded to familiar faces differently than the previously known face areas did. As a face gradually came into focus, instead of slowly becoming more active, the areas suddenly jumped to attention in an aha! moment when the face revealed itself as connected to everything the monkey knew about its owner.
These findings reveal how our brains make sense of the visual world, and the more we learn about the brain's elusive codes, the better we can implement them in silicon.