An important study by Justin Arneson, Paul Sackett, and Adam Beatty (2011) recently examined the relationship between ability, as measured by standardized tests, and performance, as measured by what people actually accomplish.
Lots of folks, including me just last week in a lecture, have proposed that talent is overrated as an ingredient in accomplishment. Sure, a certain degree of talent and ability must be present, but once some "good enough" threshold is reached, more is not necessarily better. Rather, actual performance then depends on factors like opportunity, instruction, passion, perseverance, and practice, especially practice. These sorts of arguments lie at the core of popular books like Malcolm Gladwell's (2008) Outliers and Daniel Coyle's (2009) The Talent Code.
If ability is only important up to a certain point, then a graph of performance as a function of ability should not be a straight line; it should level off once the pertinent threshold is reached (see the "good enough" prediction in the graph below). However, if ability matters at all levels, then the graph should be a straight line (see the "more is better" prediction in the graph below).
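To make the two predictions concrete, here is a minimal sketch in Python. It is purely illustrative, not the authors' data or analysis; the threshold value and the 0-to-1 ability scale are arbitrary choices for the example.

```python
# Illustrative sketch of the two hypotheses about ability and performance.
# THRESHOLD and the 0-to-1 ability scale are arbitrary, chosen for the example.
THRESHOLD = 0.5  # hypothetical "good enough" cutoff

def more_is_better(ability):
    """Performance keeps rising with ability at every level (a straight line)."""
    return ability

def good_enough(ability):
    """Performance rises with ability only up to the threshold, then levels off."""
    return min(ability, THRESHOLD)

abilities = [i / 100 for i in range(101)]  # ability scores from 0.0 to 1.0

linear = [more_is_better(a) for a in abilities]
plateau = [good_enough(a) for a in abilities]

# Below the threshold, the two hypotheses make identical predictions...
assert linear[:51] == plateau[:51]

# ...but above it they diverge: the plateau curve goes flat,
# while the linear curve keeps separating the top performers.
print(plateau[60], plateau[90])  # → 0.5 0.5  (flat above the cutoff)
print(linear[60], linear[90])    # → 0.6 0.9  (still rising)
```

The study's test, in effect, was which of these two shapes the real ability-performance data traced out.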
Arneson and colleagues analyzed four large longitudinal data sets, three pertaining to education and one to employment. Cognitive ability was measured at an earlier point in time with standardized tests like the SAT, and performance was measured at a later point in time with grade point average in the educational datasets and with work samples and supervisor ratings in the employment dataset. All sorts of statistical controls were applied.
Results were clear: The "more is better" prediction was supported in all analyses. In other words, the lines were straight. If anything, ability mattered more in predicting performance at the higher end, meaning that the lines turned slightly up, and certainly not down, as the "good enough" prediction would have it.
We can quarrel with the research. Maybe tests like the SAT do not measure talent. This is a fair point, but it raises a different concern (how best to measure talent) and, in any case, does not explain why the tests do predict performance.
Or we can argue that the criteria of successful performance are not ideal ones. Again, this is a good point, especially in the case of school grades, but these were the criteria available to the researchers.
Let me be clear what the data do not show.
They do not show that talent is immutable or inherent.
And they certainly do not show that factors like opportunity or perseverance or practice do not matter. Of course they do.
But if we take the data as showing what the data show, the conclusion is still important because it is at odds with conventional wisdom, at least as it exists in some quarters, including my own.
Implications for selection, whether for schools or jobs, are obvious. Ability always matters, and the well-intended strategy of specifying a "good enough" cutoff is not consistent with the data. That said, we do not want to focus only on ability when selecting individuals for schools or jobs. We should put as much effort into measuring the other likely ingredients of success and seeing how they relate to performance.
Arneson, J. J., Sackett, P. R., & Beatty, A. S. (2011). Ability-performance relationships in education and employment settings: Critical tests of the more-is-better and the good-enough hypotheses. Psychological Science, 22, 1336-1342.
Coyle, D. (2009). The talent code. New York: Bantam Dell.
Gladwell, M. (2008). Outliers: The story of success. New York: Little, Brown.