Karen Yu, Ph.D. and Warren Craft, Ph.D., MSSW

Choice Matters

Considering College Admissions

How we process information can pose challenges for deciding among applicants

Posted Apr 04, 2019

Source: Nikolay Georgiev/Pixabay

‘Tis the season for college admissions decisions, and the process is likely on the minds of more than prospective students and families this year given recent reports of parents paying large sums to influence decisions in their children’s favor. Not surprisingly, details including doctoring of test scores, fabrication of athletic records, and bribery of coaches have re-ignited calls for admissions reform, particularly at highly selective schools. Yet there are more pervasive and often unrecognized factors related to the availability of information and how we process it that may pose more serious challenges for accurately distinguishing among applicants. We’ll consider a few of those here, with the hope that discussion of these factors that often operate outside of our conscious awareness offers useful insights not only for admissions but for other domains as well.

What Matters? Distinguishing Useful Information

College applicants supply a range of information about themselves, often including demographic information, grades, test scores, essays, recommendation letters, and more. Applicants’ actions and interactions—whether and how often they have visited a school, spoken with an admissions counselor, and accessed a school’s website—also offer information, as do web searches and social media.

Good decisions depend on recognizing which information is actually useful—both reasonably accurate and relevant to one’s goals—and on weighting that information appropriately. When a vast amount of information is available, this can be especially challenging for various reasons, including:

  • Most measures are at best imperfect indicators of the quality of an applicant. Test scores and high school grades are reliably associated with certain aspects of college success, but imperfectly so. Indeed, many factors unrelated to college success can influence these scores and grades—poor sleep before the test, idiosyncrasies of particular teachers, and—as the recent scandal reminds us—money and deception. Similarly, while a campus visit may reflect a student’s interest, various factors might prevent a highly interested student from visiting or lead a less interested student to visit. Our decisions can be compromised if we treat these measures as perfect or near-perfect predictors.
  • The information we have can distort our thinking and blind us to what we don’t have. With limited cognitive capacity, the more information we have available, the more likely we are to lose sight of what is most important. Well-documented cognitive tendencies that we’re all prone to can lead us to overweight information that is more recent, more unusual, or otherwise more distinctive, potentially distorting our sense of an applicant’s qualifications.

The vast amount of information we do have about applicants can blind us to what we don’t have—that compelling example of resilience that a student doesn’t recognize or want to share, the family emergency that prevented a campus visit, the disorganized teacher with unusual grading policies. Because attention is more readily drawn to the presence of something than its absence, we may fail to seek out potentially valuable information because we do not recognize its absence in the first place.

  • Knowing we lack information can lead us to give it more weight. Sometimes, though, we do become aware of the information we lack. Research suggests we’ll weight that information more heavily once we obtain it, regardless of its actual importance to us or to the decision. Bastardi and Shafir1 found this to be particularly likely for decisions that influence the fate of others (hello, admissions decisions!). Indeed, an admissions scenario was among those they presented to participants.

Amidst an array of information about a hypothetical applicant, some participants were told that the applicant had a B average. Others were told of conflicting reports from the school, with the average being either an A or a B. For participants given the B average from the start, 57% chose to accept the applicant. Of those told the grade was uncertain, 74% wanted to await clarification before deciding, even though the worst possible grade was the B known to the other group. When ultimately given that same information (that the average was a B), only 25% chose to accept the applicant. The same information was weighted more heavily in the decision when it had initially been unavailable and the alternative values had been articulated.

  • Things aren’t necessarily what they seem. Perhaps more problematic, we may confidently perceive a relationship between characteristics even when no such relationship exists. Such illusory correlations are particularly likely when the characteristics involved are relatively rare or when people have a prior belief that the characteristics are related. If we believe a particular type of essay response is related to applicant quality, we will perceive such a relationship in the materials we encounter even if there is no relationship whatsoever. Backed by a compelling perception, it would not be surprising for us to incorporate that essay characteristic into our decisions. On the flip side, we may fail to consider and capitalize on relationships that seem unlikely but actually exist. Yet wouldn’t we eventually realize our errors?

Alas, not necessarily. We’re more likely to seek out, attend to, remember, and interpret information in ways that support our beliefs—a multi-faceted phenomenon known as confirmation bias. So if we believe certain types of essay responses are correlated with applicant quality, we are more likely to attend to and remember instances of stellar alumni who wrote such essays, and to forget or explain away those cases where the writers of such essays were not so successful. We might even unknowingly alter our assessment of the essays or of success in a way that makes a case more consistent with our belief. Without realizing our bias, we create and curate an array of evidence that erroneously supports our use of that essay characteristic as an indicator of applicant quality. That we cannot know how things would have transpired had we admitted a different group of applicants further shields us from disconfirming evidence.

And That’s Not All

As if that weren’t enough, other biases and extraneous factors that distort or are unrelated to applicant quality can influence our decisions. Consider the fundamental attribution error (FAE)—the tendency, more common in Western societies, to explain behavior in terms of an individual’s inherent character rather than situational factors. Thus we more readily attribute an athletic award to an individual’s athletic ability or hard work than to situational factors (e.g., attending a school that can attract a better coach, not needing a job and thus having more time to practice, …or having parents willing to pay for a fabricated athletic record!). Lack of such accomplishments may be attributed to a corresponding lack of ability or dedication, again discounting situational factors. Overall, we’re likely to attribute more to an applicant’s merit and character (or lack thereof) than is warranted.

Timing and other contextual factors can also influence our choices. Decision fatigue—the depletion of mental resources following multiple sequential decisions—can lead individuals to simplify decision-making by maintaining the status quo, selecting the default option, or letting someone else decide.2 In a study of real-world parole decisions, the likelihood of granting parole declined as a decision session went on, rebounding after each meal break.3 Although the magnitude and explanation of this finding have been debated,4 it suggests that an admissions decision could be influenced by an applicant’s position within a decision session. And framing the decision as whether-to-admit or whether-to-deny could influence the decision by changing the implicit default.

Putting It All Together

An evaluation of college admissions, whether as a whole or at a specific institution, should consider details of the process and the extent to which factors such as those above might come into play. Perhaps appreciating some of these challenges, many schools have moved to test-optional admissions and to a holistic evaluation of applicants that aims to consider the whole person. Yet designating some information optional and expanding the information considered won’t necessarily yield a more accurate assessment of an applicant, and could introduce additional challenges into the process. Broad notions of “fit” or “potential to contribute significantly to the University community” make determining whether we are considering appropriate information and weighing it reasonably even more difficult.

A Guise of Meritocracy? Envisioning Alternatives

The breadth of information supplied and considered can nevertheless lead us to believe that admissions decisions reflect meaningful, accurate distinctions among applicants, with those who are admitted being in some way better or more qualified than those who are not. Psychological research on how we process and interpret information reveals both challenges to achieving a meritocracy and reasons why we might nevertheless believe we’re operating in one; it may also encourage us to re-examine our goals and values and craft a system that more effectively serves them.

Might there be more efficient, more fair, and/or more honest approaches to college admissions? What might those look like? In 2005, Swarthmore psychologist Barry Schwartz provocatively suggested that "top colleges should select randomly from a pool of ‘good enough.’"5 It's an idea that has gained renewed attention in recent weeks, and we’ll take it up, along with other alternatives and related issues, in future posts.

References

1 Bastardi, A., & Shafir, E. (1998). On the pursuit and misuse of useless information. Journal of Personality and Social Psychology, 75, 19-32. doi: 10.1037/0022-3514.75.1.19

2 Levav, J., Heitmann, M., Herrmann, A., & Iyengar, S. S. (2010). Order in product customization decisions: Evidence from field experiments. Journal of Political Economy, 118, 274-299.

3 Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108, 6889-6892. doi: 10.1073/pnas.1018033108

4 Glöckner, A. (2016). The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated. Judgment and Decision Making, 11, 601-610.

5 Schwartz, B. (2005). Top colleges should select randomly from a pool of "good enough". Chronicle of Higher Education, 51, B20-B25.