What Happened to the SAT?

Selection bias and the end of Gen Ed.

The Scholastic Aptitude Test was once hailed as a way to open colleges to bright poor kids. It works pretty well to predict performance in college, and even afterwards, at least it did as of 2011. But it has been increasingly criticized on several grounds: it doesn’t predict; it doesn’t predict as well as high school grades; it’s unfair because it’s correlated with parents’ socioeconomic status (SES); it’s unfair because different racial/ethnic groups don’t score the same; rich kids can prepare for the SAT, poor ones can’t; high schools waste time on test prep when they should be teaching substance. There are probably other objections; this is a tangled issue, not to be settled in a short blog post.

I just want to draw attention to a simple problem. A flaw in all the “does it predict?” studies is something called selection bias. While well known to statisticians, it is nevertheless widely ignored by journalists.

Some studies find that the SAT does predict college performance; others are not sure or think that high school grades predict better.

Why are these results so conflicting? Perhaps for the same reason that the height of NBA players doesn’t predict their game performance. Players are selected for height, weight, and talent; they are taller and heavier than the population at large, with a much narrower range of heights and weights. Hence the real correlation between height and competence at basketball that exists in the general population is lost once the players have been selected.

Same for SAT. In the population at large, there is a correlation between SAT and college performance; but if admission to an elite school is based on SAT score, or something highly correlated with SAT score, like grades, this correlation must diminish, perhaps vanish.
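
To see how much selection can matter, here is a minimal simulation, a sketch with made-up numbers rather than real SAT data: in the full applicant pool the admissions score and later performance are built to correlate at about 0.6, but among the students admitted on the strength of that score the observed correlation is much weaker.

```python
# Sketch of range restriction with hypothetical numbers (not real SAT data).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

score = rng.normal(0, 1, n)                             # standardized admissions score
performance = 0.6 * score + 0.8 * rng.normal(0, 1, n)   # built-in correlation of about 0.6

# Admit only the top 10% on the admissions score.
admitted = score > np.quantile(score, 0.90)

print(f"full applicant pool: r = {np.corrcoef(score, performance)[0, 1]:.2f}")
print(f"admitted students:   r = {np.corrcoef(score[admitted], performance[admitted])[0, 1]:.2f}")
```

The second number comes out far below the first even though nothing about the test or the students has changed; only the selection has.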

A related objection applies to the comments of the distinguished cognitive psychologist and President (1995-2003) of the University of California, Richard Atkinson, who complained in 2005 that the SAT I, which is basically an IQ test, was worse than both the subject-matter SAT II and high school grades: “When the SAT I is added to the combination of high school grades and the SAT IIs, the explained variance increases from 22.2% to 22.3%, a trivial increment.”

This claim is surely incomplete, for the following reason. Suppose two variables, A and B (like high school grades and SAT score), are highly correlated, and further suppose that A predicts a third variable, C (college performance). Now, how much predictive power will be added by the second variable, B, given that A and B are highly correlated? Answer: little or none. But if the comparison had been made in the reverse order, correlating B with C first, the result would have been the same: adding A would also not have increased the total correlation.
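
The symmetry is easy to check with a small simulation (illustrative numbers only, not Atkinson's data): when A and B are highly correlated, whichever one is entered first soaks up most of the explainable variance, and the other adds only a trivial increment, in either order.

```python
# Two highly correlated predictors of the same outcome: the second one entered
# adds little explained variance, whichever one comes first. Hypothetical numbers.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

ability = rng.normal(0, 1, n)                  # shared factor behind both predictors
A = ability + 0.3 * rng.normal(0, 1, n)        # say, high school grades
B = ability + 0.3 * rng.normal(0, 1, n)        # say, SAT score (A and B correlate ~0.9)
C = ability + 1.0 * rng.normal(0, 1, n)        # say, college performance

def r_squared(predictors, y):
    """Proportion of variance in y explained by an OLS fit on the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print(f"A alone:   {r_squared([A], C):.3f}")
print(f"A and B:   {r_squared([A, B], C):.3f}")   # small increment over A alone
print(f"B alone:   {r_squared([B], C):.3f}")
print(f"B and A:   {r_squared([B, A], C):.3f}")   # same total, same small increment
```

So a "trivial increment" from adding the SAT I says as much about the overlap between the predictors as it does about the test itself.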

A better way to present data like these is a picture (see here) from a study by Matthew Chingos in 2018, which shows the percentage of students with different combinations of SAT or ACT score and high school GPA who graduated within 10 years. In one corner, 35% of the students with poor grades but the highest SAT or ACT scores graduated; in the opposite corner, 47% of the students with very low SAT scores but the highest grade-point averages did. The point is that in this study the high-SAT students do worse than the high-GPA students, and the superiority of high school grades over test scores is reflected at all the other points in the table.

Data like these are the basis for the claim that GPA is a better predictor of college success than the SAT. But all these results are questionable because of selection bias. Perhaps this sample is better because the students are from “a group of less selective four-year public colleges and universities.” But even here, the students are not randomly selected. Unless they are, all these correlations are questionable[1].

Selection bias is a longstanding problem, so why have predictions of college performance from SAT scores and high school GPA apparently gotten worse over the years? An obvious reason is this: if you are using score A to predict score C, say SAT score to predict college grades, prediction works best when C is derived from a uniform experience, with all students taking the same or similar courses.

In recent years, general education courses, required of all first-year students, have fallen out of favor, to be replaced by electives. In the past, first-year performance mostly reflected performance in the same courses; now it likely reflects performance in different courses, mostly chosen for the student’s ability to succeed in them. The range of GPA variation will be reduced, and so will the correlation with any prior measure, be it SAT score or high school GPA.
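
A toy simulation of this idea, again with invented numbers, shows the effect: when first-year GPA comes from a common curriculum it tracks underlying ability closely, but when grades come from self-selected courses the GPA signal is compressed and its correlation with any prior score drops.

```python
# Hypothetical illustration: self-selected electives compress the GPA signal,
# weakening its correlation with any prior measure.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

ability = rng.normal(0, 1, n)
prior_score = ability + 0.6 * rng.normal(0, 1, n)         # SAT or high school GPA

gen_ed_gpa = ability + 0.6 * rng.normal(0, 1, n)          # everyone takes the same courses
elective_gpa = 0.3 * ability + 0.6 * rng.normal(0, 1, n)  # courses chosen to suit the student,
                                                          # so grades depend less on ability

print(f"prior score vs gen-ed GPA:   r = {np.corrcoef(prior_score, gen_ed_gpa)[0, 1]:.2f}")
print(f"prior score vs elective GPA: r = {np.corrcoef(prior_score, elective_gpa)[0, 1]:.2f}")
```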

Conclusion: Forget predictiveness. Look at the effect of college-admissions criteria on teaching in high schools. Do we want kids learning test-prep tricks or learning to write and do math? Perhaps college-admissions criteria should represent what colleges would like kids to learn, like the SAT II, rather than try to guess what they already are, like the SAT I? Or just give applicants an exam based on the first-year gen ed courses they will have to take? Discuss.

[1] Indeed, a reasonable practice for conscientious college admissions officials would be this: admit a fraction of applicants each year on an entirely random basis. By looking at the college performance of this random sample, they could get an unbiased look at how well things like high school grades, SAT scores, and extracurriculars actually predict college performance. That information could then be used to modify the existing criteria.
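
A quick sketch of the idea, with invented numbers as before: the score-performance correlation among students admitted on their scores is badly attenuated, while even a small randomly admitted group recovers something close to the applicant-pool value.

```python
# Hypothetical illustration of the admissions-lottery idea: a small random sample
# of admits gives an unbiased estimate of how well the score predicts performance.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

score = rng.normal(0, 1, n)
performance = 0.6 * score + 0.8 * rng.normal(0, 1, n)     # true correlation about 0.6

by_score = score > np.quantile(score, 0.90)               # usual admits: top 10% on score
lottery = rng.random(n) < 0.01                            # 1% of applicants admitted at random

print(f"admitted on score: r = {np.corrcoef(score[by_score], performance[by_score])[0, 1]:.2f}")
print(f"random lottery:    r = {np.corrcoef(score[lottery], performance[lottery])[0, 1]:.2f}")
```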
