One Among Many

The self in social context

Stupid People

Why don’t they smarten up and realize how stupid they are?

A word to the wise ain’t necessary – it’s the stupid ones that need advice.

~ Bill Cosby

Stupid people have lots of problems. They mess up, foul up, and screw up. In a world in which intelligence is adaptive, stupid people obtain less value, and they suffer the consequences. To combat stupidity, society has invented education and training. Used wisely, education and training can build skills and even raise intelligence itself. A barrier to this course of action is that many, perhaps most, of the stupid people fail to understand that they are stupid. Like others, they believe that they are better than average. Except that in their case, this belief is grossly in error, and it keeps them from taking remedial action.

In a paper that is now a modern classic, Justin Kruger and David Dunning (1999) gave their participants various tests, scored the tests, and asked each participant to estimate what percentage of test takers did worse than they themselves did. The data showed two things. First, the average of the estimated percentiles was above 60. The difference between this value and the average of the true percentiles (i.e., 50) reflects a self-enhancement bias at the group level. Second, the correlation between estimated and true percentiles over participants was positive but imperfect. We (Krueger & Mueller, 2002) felt that these two findings were sufficient to explain the pattern of interest, namely large overestimation errors among the low scorers (the stupid ones) and small underestimation errors among the high scorers (the smart ones).

Dunning and colleagues argue that this explanation is insufficient because the asymmetrical errors remain after measurement is corrected for unreliability. However, their argument overlooks the fact that regression to the mean will always occur if a correlation is neither +1 nor -1. Even perfectly reliable measures produce regression effects if they are not perfectly correlated with each other; in other words, if they measure different things (Fiedler & Krueger, 2012).
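
The regression account lends itself to a quick simulation. The sketch below is illustrative only: the correlation of .5 and the uniform 10-point self-enhancement shift are assumptions chosen for the example, not parameters estimated from any of the studies discussed. With perfectly reliable but imperfectly correlated measures, grouping misestimation by quartile of true performance reproduces the asymmetry: heavy overestimation at the bottom, mild underestimation at the top.

```python
import numpy as np

# A minimal sketch of the regression account. The parameters (r = .5,
# a uniform 10-point self-enhancement shift) are assumptions chosen for
# illustration, not estimates from any published data set.
rng = np.random.default_rng(1)
n = 100_000
r, bias = 0.5, 10.0

z_true = rng.standard_normal(n)  # perfectly reliable true ability
z_est = r * z_true + np.sqrt(1 - r**2) * rng.standard_normal(n)

def percentile_rank(z):
    """Percentile rank (0-100) of each value within the sample."""
    return 100.0 * z.argsort().argsort() / (len(z) - 1)

true_pct = percentile_rank(z_true)
est_pct = np.clip(percentile_rank(z_est) + bias, 0, 100)

# Mean misestimation (estimated minus true percentile) by quartile of
# true performance: the bottom quartile overestimates heavily, the top
# quartile underestimates mildly -- with no meta-stupidity built in.
quartile = np.minimum(true_pct // 25, 3).astype(int)
errors = est_pct - true_pct
for q in range(4):
    print(f"quartile {q + 1}: mean error = {errors[quartile == q].mean():+.1f}")
```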

The regression account of the pattern makes no psychological distinction between stupid and smart people other than that the former are stupider than the latter. Both tend to self-enhance, and neither achieves perfect accuracy. By contrast, Dunning and colleagues maintain that there is something special about stupid people. These people are doubly stupid because they get low scores without realizing that they do. Ross Mueller and I suggested that a hypothesis referring to two separate phenomena (type I stupidity and type II stupidity) requires separate measures. Claiming two layers of stupidity is not parsimonious (not smart) if a single linear model can explain the data. The idea that stupid people are also plagued by meta-stupidity could be corroborated if, for example, their estimated percentiles (i.e., the percentages of others they think they outperformed) were even higher than the estimates provided by intermediate performers. Williams, Dunning, and Kruger (2013) now present data with just this sort of pattern. Plotting estimates of own performance against measured performance, they find a U-shaped curve, which amounts to a quadratic trend in statistical analysis. What changed in the last 14 years?

The difference lies – of course – in the method. In the classic study, the lowest scores were obtained by those who were guessing or by those very few who by dumb luck did even worse than guessing. At the time, the test questions met the psychometric principle of independence. Answering one question correctly should not be affected by success or failure on the preceding items. In the new studies, the principle of independence is defenestrated (thrown out of the window). Instead, participants are allowed to have an aha! experience and to respond consistently thereafter. If the aha! experience follows the detection of the correct rule underlying all the individual problems, consistent responding leads to a very high score; if, however, the aha! experience follows the contemplation of an incorrect rule, the final score will be even lower than it would be by guessing alone. Surrendering the principle of independence widens the range of scores. Unfortunately, scores in this extended range can no longer be modeled linearly. The consistency (i.e., non-independence) of responding after the Archimedean moment amplifies and distorts individual differences in ability; it does not provide a more sensitive measure.

Consider the method of choice, namely Wason’s (1966) legendary rule testing task. Four cards are on the table with one showing a vowel, one a consonant, one an even number, and one an odd number. The rule is: If there is a vowel on one side, there is an even number on the other. Wason found that most people turn over the vowel card and the even number card. This has become known as verification bias. Few people understand that one can efficiently ask if the rule is false by turning over the odd number card.
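
The logic of the task is compact enough to spell out in code. The following sketch is merely an illustration of that logic (the card labels are the standard textbook example, not stimuli from any particular study): only the vowel card and the odd-number card can possibly falsify the rule, which is why they are the informative ones to turn over.

```python
# Which of the four Wason cards can falsify "if vowel, then even number"?
# Card labels are the standard textbook example, not stimuli from a study.
cards = {
    "E": "vowel",      # hidden side: some number
    "K": "consonant",  # hidden side: some number
    "4": "even",       # hidden side: some letter
    "7": "odd",        # hidden side: some letter
}

def worth_turning(face):
    """A card is informative only if its hidden side could complete a
    vowel/odd pair, the one combination that falsifies the rule."""
    # Vowel card: the hidden number might be odd -> rule broken.
    # Odd card: the hidden letter might be a vowel -> rule broken.
    # Consonant and even cards cannot violate "if vowel, then even",
    # no matter what is on the other side.
    return face in ("vowel", "odd")

for label, face in cards.items():
    verdict = "worth turning" if worth_turning(face) else "uninformative"
    print(f"{label} ({face}): {verdict}")
```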

Williams et al. provided participants with 10 tasks of this type, and so, true performance scores could range from 0 to 10. The minimum and the maximum scores have a very low probability of occurring by guessing. They are most likely to occur among participants who have an aha! experience. They discover the correct rule (or one that turns out to be incorrect), and they respond consistently after that. After the insight has occurred, responses are no longer independent.
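
A short simulation makes plain how abandoning independence stretches the score distribution. Assuming, purely for illustration, that a guesser picks one of the six possible card pairs at random on each item (a simplification taken up again below) and that some arbitrary share of insights land on the correct rule, independent guessers cluster near the chance expectation of about 1.7 correct, while consistent responders pile up at 0 and 10.

```python
import numpy as np

# How consistency (non-independence) widens the score range on a
# 10-item Wason-style test. Simplifying assumptions: a guesser picks one
# of the 6 possible card pairs at random on each item (P(correct) = 1/6),
# and 40% of insights -- an arbitrary figure -- hit the correct rule.
rng = np.random.default_rng(2)
n_items, n_people = 10, 100_000

# Independent guessers: item outcomes are independent Bernoulli trials.
guess = rng.binomial(n_items, 1 / 6, size=n_people)

# Consistent responders: one aha! moment decides all 10 items at once.
correct_rule = rng.random(n_people) < 0.4
consistent = np.where(correct_rule, n_items, 0)

for label, scores in (("guessers", guess), ("consistent", consistent)):
    print(f"{label:>10}: mean {scores.mean():.2f}, "
          f"P(0) = {(scores == 0).mean():.3f}, "
          f"P(10) = {(scores == 10).mean():.3f}")
```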

To Williams et al., participants scoring 0 are stupider than participants scoring 1 or 2. Type I stupidity is now the product of type II stupidity. Individuals whose insight takes them to an incorrect rule should, on this view, know that the rule is incorrect. Since they do not, their extremely low performance scores are held against them. The implications are strange. To be consistent, one would have to require everyone to perform a second test on the primary aha! experience. This is uncharted territory. Qualitative insights are self-limiting. They bring a cognitive task to closure. Having an insight in a context that demands insight opens the door to implementation. Consistency allows the exploitation of the insight. Further testing is costly.

A performance test that is itself a task of rule detection over items confronts the test taker with a decision problem. What is the expected value of responding consistently after an aha! experience, and how does this strategy compare with its alternatives? Consider a simplified version of the Wason task. The four possible events are P, ~P, Q, and ~Q. The rule to be tested is: If P, then Q. Suppose the tester is asked to select two events. There are six possible pairs, so a tester who picks randomly will get it right with a probability of 1/6. For a tester who has had an insight and wants to do better than chance, it is sufficient to believe that the probability of the rule being correct is greater than 1/6. Such a belief is built into the insight itself. For smart and stupid people alike, insight makes itself felt as an advance over chance.
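
The arithmetic behind this claim is easy to make explicit. In the worked example below, the probability p that a discovered rule is correct is a free assumption; under the all-or-nothing simplification, consistent responding yields an expected score of 10p, which exceeds the guessing expectation of 10/6 (about 1.67) exactly when p > 1/6.

```python
from fractions import Fraction

# Worked example under the simplified model: 10 items, 6 possible card
# pairs per item, exactly one pair correct. The probability p that a
# discovered rule is correct is a free assumption, varied below.
n_items = 10
e_guess = n_items * Fraction(1, 6)  # expected score of random pair-picking

def e_consistent(p):
    """Expected score of consistent responding: all items right with
    probability p, none right otherwise (all-or-nothing simplification)."""
    return n_items * p

print(f"guessing: {float(e_guess):.2f}")
for p in (Fraction(1, 10), Fraction(1, 6), Fraction(1, 3), Fraction(1, 2)):
    better = "beats" if e_consistent(p) > e_guess else "does not beat"
    print(f"consistent with p = {p}: {float(e_consistent(p)):.2f} ({better} guessing)")
```

The tie at p = 1/6 marks the threshold: an insight believed to beat chance even slightly is enough to rationally license consistent responding.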

Williams et al. find that participants with scores of 0 show somewhat less consistency than participants with a perfect 10. This could be so because consistency is necessary only for the maximum score. Perhaps some minimum scorers scored below chance by chance, or they did in fact fail to consistently apply the rule they thought they had discovered. This failure, however, makes them stupider, not smarter. To infer that those who scored 0 because they consistently applied an incorrect rule are stupider than those who had a low score but were inconsistent is to fall prey to an outcome bias, a well-known failure of rational reasoning (Baron & Hershey, 1988; Krueger & Acevedo, 2007).

Baron, J., & Hershey, J. C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology, 54, 569–579. doi:10.1037/0022-3514.54.4.569

Fiedler, K., & Krueger, J. I. (2012). More than an artifact: Regression as a theoretical construct. In J. I. Krueger (Ed.), Social judgment and decision-making (pp. 171–189). New York, NY: Psychology Press.

Krueger, J. I., & Acevedo, M. (2007). Perceptions of self and other in the prisoner’s dilemma: Outcome bias and evidential reasoning. American Journal of Psychology, 120, 593–618.

Krueger, J., & Mueller, R. A. (2002). Unskilled, unaware, or both? The contribution of social-perceptual skills and statistical regression to self-enhancement biases. Journal of Personality and Social Psychology, 82, 180–188. doi:10.1037/0022-3514.82.2.180

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121–1134. doi:10.1037/0022-3514.77.6.1121

Wason, P. (1966). Reasoning. In B. M. Foss (Ed.), New horizons in psychology (pp. 135–151). Baltimore, MD: Penguin Press.

Williams, E. F., Dunning, D., & Kruger, J. (2013). The hobgoblin of consistency: Algorithmic judgment strategies underlie inflated self-assessments of performance. Journal of Personality and Social Psychology, 104, 976–994. doi:10.1037/a0032416