Experts Project Less

There’s another interpretation of the "surprisingly popular algorithm."

Posted Sep 18, 2019

When people make judgments under uncertainty, they can make all sorts of errors, and many of these errors can be treated and modeled as random. If there is any truth signal in the data, aggregation will reveal it. For binary decisions, majority votes are more accurate than a randomly chosen judge (Hastie & Kameda, 2005). Prelec et al. (2017) introduce an algorithm that identifies the surprisingly popular choice, and they show that the choice made more often than the judges themselves predict tends to be the right one, even if it is made by a minority. Their new algorithm therefore outperforms the simple majority rule.
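For the binary case, the rule itself is short. Here is a minimal sketch in Python (my own function and variable names, not code from Prelec et al.): each judge casts a vote and predicts the share of ‘yes’ votes, and the answer whose actual share exceeds its mean predicted share is declared surprisingly popular.

```python
def surprisingly_popular(votes, predicted_yes_shares):
    """Surprisingly popular rule for a binary question (sketch; ties ignored).

    votes: list of 'yes'/'no' answers, one per judge.
    predicted_yes_shares: each judge's prediction (0-1) of the share of 'yes' votes.
    """
    n = len(votes)
    actual_yes = sum(v == 'yes' for v in votes) / n
    predicted_yes = sum(predicted_yes_shares) / n  # mean prediction across judges
    # 'yes' wins if it got more votes than the crowd predicted; because actual and
    # predicted shares each sum to 1, 'no' is otherwise the surprisingly popular answer.
    return 'yes' if actual_yes > predicted_yes else 'no'
```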

The conceptual idea is simple enough, but the assumptions underlying it and their mathematical articulation are less intuitive. You need, among other things, Bayesian priors about actual and counterfactual worlds. Perhaps a simpler approach can get us to the same result. Consider Prelec’s primary example. Your task is to tell whether Philadelphia is the capital of Pennsylvania. Most people mistakenly answer ‘yes.’ These majority judges also think that most people respond as they themselves do. The minority judges who answer ‘no’ also feel that many agree with them, but their assumed consensus is less extreme than the consensus assumed by the majority. The result is that more judges answer correctly than the aggregated predictions suggest. The surprisingly popular choice turns out to be the right one [but see below for why this is not categorically true].

Suppose 75% choose ‘yes’ and believe, on average, that 15% say ‘no.’ The members of this majority strongly project their own choice onto the whole group, a common result (Krueger, 1998). Also suppose that the members of the 25% minority who say ‘no’ believe that 40% say ‘no.’ The aggregated prediction of ‘no’ votes is .75 x .15 + .25 x .4 = .2125. Being less than .25, this estimate makes the actual choice proportion of .25 'surprisingly large,' thereby revealing it as the true choice, notwithstanding its being the choice of a minority. If the correct minority projected more strongly, the surprise effect would disappear once their estimate of their own group size reached 55%.
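The arithmetic is easy to check. A quick sketch (illustrative variable names; nothing here goes beyond the numbers given above):

```python
# 75% vote 'yes' and predict that 15% vote 'no'; the 25% who vote 'no'
# predict that 40% vote 'no'.
share_yes, share_no = 0.75, 0.25
pred_no_by_yes_voters, pred_no_by_no_voters = 0.15, 0.40

# Size-weighted aggregated prediction of the 'no' share.
predicted_no = share_yes * pred_no_by_yes_voters + share_no * pred_no_by_no_voters
print(predicted_no)   # 0.2125 < 0.25, so 'no' is surprisingly popular

# Projection level at which the correct minority would erase the surprise:
# solve share_yes * 0.15 + share_no * p = share_no for p.
p_threshold = (share_no - share_yes * pred_no_by_yes_voters) / share_no
print(p_threshold)    # 0.55
```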

When the majority is correct, the surprise algorithm can also yield the correct result, but, using the numbers of our example, it does so only when the members of the majority reduce the estimate of their own group size from 85% to below 80%. In both cases, the surprise effect emerges to the extent that members of the correct group project less or members of the incorrect group project more. The surprise effect is nil (neither positive nor negative) if A/(1-A) = eA|(1-A)/(1-eA|A), where A is the proportionate size of the correct group, eA|A is group A's estimate of its own size (i.e., its projection), and eA|(1-A) is the incorrect group's estimate of the size of group A (i.e., one minus that group's own projection).
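Assuming the size-weighted averaging used above, the nil condition can also be written as A x eA|A + (1-A) x eA|(1-A) = A and solved for the correct group's projection. The sketch below (my own function name) reproduces both thresholds mentioned so far:

```python
def nil_projection(A, e_A_given_other):
    """Projection of the correct group at which the surprise effect is exactly zero.

    A: proportionate size of the correct group.
    e_A_given_other: the incorrect group's estimate of the size of the correct group.
    Derived from A * e + (1 - A) * e_A_given_other = A, i.e., the nil condition.
    """
    return 1 - (1 - A) * e_A_given_other / A

# Correct minority (A = .25); the incorrect majority says 15% give the correct answer:
print(nil_projection(0.25, 0.15))   # 0.55 -- the 55% threshold from the first example

# Correct majority (A = .75); the incorrect minority says 60% give the correct answer:
print(nil_projection(0.75, 0.60))   # 0.80 -- the 'below 80%' threshold
```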

Why might those who have access to accurate knowledge project less strongly to the population than members of equally large groups of non-experts? It may be in the nature of expertise that the possession of knowledge is accompanied by the realization of its relative rarity. Nothing else may be needed to explain the surprise algorithm effect and to improve forecasting while keeping it democratic.

To place the surprise effect in the context of social projection, consider a choice task in which the truth signal affects only the choices, not the predictions. Two-thirds of the judges are correct, but neither the correct majority nor the incorrect minority has any information about how others choose beyond their own choice. As optimal Bayesians, the members of both groups think that two-thirds of the others choose as they themselves do (Dawes, 1989; Krueger, 1998). The surprise algorithm identifies the majority choice as the correct one because the aggregated prediction for this choice is .558 (.67 x .67 + .33 x .33), well short of the actual .67. If the minority do not change their prediction, the majority can overproject considerably without disarming the surprise; by the nil condition above, their own-group estimate would have to rise above roughly .83 before the effect vanishes. However, if the minority underproject, that is, if they start predicting as if they were the experts, the surprise effect yields a false finding as soon as the minority's own-choice projection falls below .33. Caveat projector!
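A numerical check of this scenario, again assuming size-weighted averaging (my own function name):

```python
A = 2 / 3   # share of correct (majority) voters

def predicted_majority_share(maj_projection, min_projection):
    """Crowd's size-weighted mean prediction of the majority choice's vote share."""
    return A * maj_projection + (1 - A) * (1 - min_projection)

# Both groups project 2/3 onto their own choice:
print(predicted_majority_share(2/3, 2/3))   # ~0.556 < 0.667 (the .558 above uses .67/.33 rounding):
                                            # the majority choice is surprisingly popular
# The majority can overproject up to about 5/6 before the effect is nil:
print(predicted_majority_share(5/6, 2/3))   # ~0.667: surprise effect is exactly zero

# If the minority's own-choice projection drops below 1/3, the algorithm misfires:
print(predicted_majority_share(2/3, 0.30))  # ~0.678 > 0.667: the wrong answer becomes surprisingly popular
```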

References

Dawes, R. M. (1989). Statistical criteria for establishing a truly false consensus effect. Journal of Experimental Social Psychology, 25, 1-17.


Hastie, R., & Kameda, T. (2005). The robust beauty of majority rules in group decisions. Psychological Review, 112, 494-508.

Krueger, J. (1998). On the perception of social consensus. Advances in experimental social psychology, 30, 163-240. San Diego, CA: Academic Press.

Prelec, D., Seung, H. S., & McCoy, J. (2017). A solution to the single-question crowd wisdom problem. Nature, 541, 532-535. https://www.nature.com/articles/nature21054