Karen Yu, Ph.D. and Warren Craft, Ph.D., MSSW

Choice Matters

Why Ask a Machine for a Recommendation?

New research suggests that machines can give better recommendations than people.

Posted May 02, 2019

Source: Gerd Altmann/Pixabay

Suppose you’re seeking a recommendation: perhaps you’re searching for a book, movie, or restaurant you might like. You could ask a friend who knows you well, a stranger, or a computer algorithm. Which would you turn to?

Perhaps not surprisingly, most people would prefer a recommendation from another person. Yet a recent study suggests that machine-based recommendations can be better at predicting what people will like. Indeed, a fairly basic computer algorithm can outperform recommendations from strangers, friends, and family. And it can do so without information about the nature of the items it is recommending.

Wait, what?

That’s right. Some computer-based algorithms can better match a person’s actual preferences without any information about the nature of the items being recommended—they don’t need details about the books, movies, or restaurants under consideration. In fact, the algorithms don’t even need information about which category—e.g., books, movies, or restaurants—they are making a recommendation about.

In a recent study, researchers compared how well recommendations from people and from computers matched people’s actual preferences. And they did so in a realm that might be rather difficult for computers: humor. In particular, the researchers considered computer-generated and person-generated recommendations about jokes that people would find funny. Because humor is arguably a uniquely human experience, predicting what jokes people will find funny is expected to be challenging for a machine-based system without knowledge about the topics of the jokes, what tends to make a joke funny, or other such information. Yet the researchers found that an algorithm using ratings of a sample of jokes from a number of people generates recommendations that match people’s joke preferences better than recommendations from people who know them well.
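The study's exact model isn't detailed here, but recommenders that work from ratings alone typically use some form of collaborative filtering: a person's rating for an unseen item is predicted from the ratings of people who rated the seen items similarly. The sketch below is a minimal, hypothetical illustration of that idea — the names, ratings, and functions are invented for this example, not taken from the paper — showing that nothing about the jokes' content is needed.

```python
# Minimal user-based collaborative filtering over joke ratings.
# Hypothetical data for illustration only; not the study's algorithm.
import math

# Ratings (0-10) from four people for five jokes, labeled A-E.
ratings = {
    "ana":  {"A": 9, "B": 2, "C": 8, "D": 3, "E": 7},
    "ben":  {"A": 8, "B": 3, "C": 9, "D": 2, "E": 6},
    "cara": {"A": 2, "B": 9, "C": 1, "D": 8, "E": 3},
    "dev":  {"A": 3, "B": 8, "C": 2, "D": 9, "E": 2},
}

def cosine_similarity(u, v, shared):
    """Cosine similarity between two raters over jokes both have rated."""
    dot = sum(u[j] * v[j] for j in shared)
    nu = math.sqrt(sum(u[j] ** 2 for j in shared))
    nv = math.sqrt(sum(v[j] ** 2 for j in shared))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(target_ratings, unseen_joke):
    """Similarity-weighted average of other raters' scores for the joke."""
    num = den = 0.0
    for other in ratings.values():
        if unseen_joke not in other:
            continue
        shared = target_ratings.keys() & other.keys() - {unseen_joke}
        sim = cosine_similarity(target_ratings, other, shared)
        num += sim * other[unseen_joke]
        den += abs(sim)
    return num / den if den else 0.0

# A newcomer who has rated only jokes A and B; predict C, D, and E,
# then recommend the highest-scoring unseen joke.
newcomer = {"A": 9, "B": 1}
scores = {j: predict(newcomer, j) for j in "CDE"}
best = max(scores, key=scores.get)
print(best, scores)
```

Note that the algorithm never inspects a joke's text or topic; it compares only patterns of agreement among raters, which is what lets the same code recommend books or restaurants unchanged.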

In one experiment, the researchers varied both the actual source of the joke recommendations—a person or a computer algorithm—and the perceived source of those recommendations. For some participants, perceived source and actual source matched: if the recommendations were from a person, participants were told they were from a person, and if they were from a computer, they were told so. For other participants, perceived source and actual source differed: participants were told the recommendations were from a person when in fact they were generated by a computer, or vice versa. This allowed the researchers to separate the accuracy of the predictions from their perceived source. Even though the computer algorithm’s recommendations more accurately matched their own joke preferences, participants gave higher ratings to the recommender when they thought it was a person than when they thought it was a machine.

The researchers also found that people agreed more strongly with statements such as “I could understand why the recommender thought I would like those jokes” when the recommender was a human. This suggests that people may prefer human recommendations in part because they feel they better understand how other people make recommendations (whether they actually do is another question!). When given a more detailed explanation of the computer recommendation process, people rated it as easier to understand and also rated the quality of its recommendations more highly than did those given less information about the process.

Note that this makes some sense: it’s reasonable to consider recommendations cautiously when you don’t know much about what led to them. It will be interesting to see whether the understanding of the process in and of itself leads people to judge recommendations as better; it seems likely that judgments might depend not only on the understanding of the process but also on whether that meshes with people’s general beliefs about what leads to accurate recommendations.

Several points are worth considering:
  • Fairly basic, general-purpose algorithms can generate recommendations that better predict people’s actual preferences than recommendations from other people. Current technology makes it easier than ever to gather and access the sorts of ratings that are the input to these algorithms. Might such algorithms offer improved recommendations across a variety of domains beyond jokes? Indeed, algorithmic approaches are already in use in some venues — consider, for example, your Netflix home screen and what does and doesn’t appear there.
  • Useful recommendations can be generated from fairly little information. Note that the algorithms used in this recent research had no information about a participant beyond his or her ratings of a set of jokes—demographic and personality information that one might think would be needed to accurately predict what a person will find funny was not part of the algorithm. And as noted above, the algorithm used very limited information about the recommended items themselves—just ratings of the jokes from a number of people, with no information required about the general nature of those items (i.e., that they were jokes) or their specific content. In what other contexts might one piece of information from many people be used to generate improved recommendations?
  • Although people favor person-generated recommendations over machine-generated ones, the findings of Yeomans and colleagues suggest this can be modified. If it is primarily the perceived source of the recommendations that matters, then manipulations that lead people to believe the source is a person could presumably increase trust in the recommendations. One might imagine situations in which the source is explicitly (mis)represented as a person, or where the machine-based system is given more human-like characteristics. Indeed, Yeomans and colleagues offer the possibility of having “algorithms pause, as if ‘thinking,’ before making a recommendation” (p. 10). Or one might simply say nothing about the source of the recommendations. Yet another approach suggested by the above findings would aim to improve people’s perceived and/or actual understanding of how a given algorithm generates predictions and what algorithms can (and cannot) do for us.
  • Even if some algorithm-based recommendations are more accurate, there may still be value to person-generated recommendations. Generating and sharing such recommendations and receiving and using them may contribute to social connectedness and stronger confidence in one’s decisions, among other things. There are many interesting questions to ask about how the potential benefits of person-generated recommendations interact with the potential benefits of machine-generated ones.

Ultimately, it remains to be seen which sorts of computer algorithms actually lead to better predictions in which contexts. Clearly, their value will depend not only on their accuracy but also on how people think about, understand, and feel about them. With that in mind, rather than man versus machine, we might do well to consider man and machine.


Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making. Advance online publication. https://doi.org/10.1002/bdm.2118