This is Year 13 of flipping through dozens of applications to determine the small number of people to be interviewed for a slot in the Clinical Psychology Ph.D. program at George Mason University. At this moment, faculty all over the United States are doing the same thing for social, developmental, industrial/organizational, and counseling psychology programs, as well as human factors programs (among others).

A few notable trends have emerged over the past few years.

First, applicants are spending multiple years in multiple research laboratories to gain research experience. Fantastic. I have been advocating this for years.

Second, there has been an explosion of publications and presentations at scientific conferences. This year, I had one student with 9 publications (insane) and three students with over 15 conference presentations (holy bejeezus!). 

Third, there has been a rise in the number of applicants who developed their own research project, from formulating the questions to disseminating the findings in a conference paper or manuscript. There has also been a rise in the number of applicants who served as project coordinator for a grant-funded project run by a faculty mentor. This is huge. These applicants know the nuances of how to conduct research and how to deal with the anal warts of no-shows, video recording malfunctions, careless survey responses, and participants reporting suicidal ideation. An apprenticeship ensures that an applicant knows what they are getting into as a scientist.

Source: Howard Kalin, used with permission

Despite these trends, I have found that faculty (whom I refuse to name) rely on decades-old strategies to sift out applicants. Namely, does someone have a sufficient GPA and GRE score? My own students are regularly rejected by graduate programs despite serving as my project coordinator on sophisticated grant-funded studies, presenting first-author talks and posters at scientific conferences (which they earned), and being co-authors on manuscripts submitted to top-tier journals, studies that involved experience sampling and hierarchical linear modeling (skills that they learned over months of training).

I write this blog post to point out 20-year-old, largely forgotten research.

Do you know what GRE scores predict? First-year grades in required courses in a Ph.D. program. Do you know what nobody training independent researchers, teachers, and clinicians cares about? First-year grades, which are inflated and show almost no variability. And if you want to explore meta-analytic findings, know that GRE scores predict an average of 6.3% of the variance in graduate-level course grades. GRE scores do not predict anything that is useful for graduate school.
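For readers who want the arithmetic behind that 6.3% figure (this is my own back-of-the-envelope conversion, not a number reported in the meta-analysis): variance explained is simply the squared correlation, so

$$ r = \sqrt{R^2} = \sqrt{0.063} \approx 0.25 $$

In other words, the relationship between GRE scores and graduate course grades amounts to a correlation of roughly .25, and that is for the one outcome nobody cares about.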

The rules switch from undergraduate to graduate school. Students move from convergent thinking and test-taking to divergent thinking, where they must devise their own research interests and questions, and acquire the methodological, research design, program evaluation, and statistical skills to test these questions and interpret the data. GRE scores do not predict any of these skills.

Now, if you care about the GPA of graduate students in specific courses required for a Ph.D., and are less interested in their research skills, clinical acumen, and leadership capacity, by all means, continue to judge students by the quantitative scores on their transcript. But I am speaking to you, the faculty holding the fate of students' lives in your hands. If you do not care about graduate school grades, then do not be a hypocrite and let those scores guide you anyway.

Be brave.

Read their CV. If they nabbed a GPA over 3.0, move on to the questions that matter. What is the quality of their research experience? Do they know how to hit the ground running and contribute to your research lab? Will they be a good colleague to you and your other graduate students? Is there at least one persuasive letter attesting to character strengths that set them apart from the pack?

Be a maverick.

Do not rule out great potential psychologists because they do not know the antonym of bailiwick or the cosine of 30°. It just doesn't matter. Do not rule out students because their GPA sucked in their first year of college, before they developed an interest in psychology.

There is unlikely to be a national march against the admissions process for Ph.D. programs. So this is it. Resist the temptation to let arbitrary scores overly influence you. I suspect you will be rewarded with students who are more thoughtful, imaginative, and impactful.

Morrison, T., & Morrison, M. (1995). A meta-analytic assessment of the predictive validity of the quantitative and verbal components of the Graduate Record Examination with graduate grade point average representing the criterion of graduate success. Educational and Psychological Measurement, 55(2), 309-316.

Sternberg, R. J., & Williams, W. M. (1997). Does the Graduate Record Examination predict meaningful success in the graduate training of psychology? A case study. American Psychologist, 52(6), 630-641.

New Additions as of 2/9/16: Some readers complained about 20-year-old data being irrelevant in the modern era. I remain skeptical about this criticism, but here is the latest research on the topic, which continues to show that the GRE is a poor predictor of everything that matters except grades.

Source: open source, used with permission

Moneta-Koehler, L., Brown, A. M., Petrie, K. A., Evans, B. J., & Chalkley, R. (2017). The limitations of the GRE in predicting success in biomedical graduate school. PLoS ONE, 12(1), e0166742.

Other readers have been arguing for the primacy of GRE scores over any other indicator in the admissions process. One professor suggested that I cite his work on the topic.

In one of his studies (Kuncel & Hezlett, 2007), he finds a .20 correlation between standardized test scores and research productivity during graduate school. I view this as evidence of the problem with the primacy of GRE scores. Clearly, something else is necessary beyond GRE scores. This goes back to my suggestion of a necessary starting point, such as a 3.0 GPA or a 60th-percentile score on the GRE subtests, before moving on to the skills an applicant has acquired to hit the ground running in a research lab. Interestingly, the stronger predictor of research productivity was the GRE psychology subject score, a test that is rarely required and rarely considered in admissions. I will be honest: reading about these data has changed my view of that test, and I will be viewing it more carefully in the future. It is a snapshot of whether someone has acquired and retained knowledge about the field. It is harder to ask useful research questions if you are ignorant of what has been explored in the past. A basic foundation offers a head start.
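To put that .20 correlation in perspective (again, my own arithmetic, not a figure from the paper), square it to get the share of variance in research productivity that the test scores account for:

$$ r^2 = (0.20)^2 = 0.04 $$

That is about 4%, which leaves roughly 96% of the differences in research productivity to be accounted for by everything else in an application.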

Perhaps it is time for psychology departments, and individual faculty members, to be transparent about their filtering process. What are the exact cutoffs used to reduce the number of applications to a reasonable level? GPAs over 3.5? GRE scores above the 80th percentile? If you have fewer than two research presentations, are you out of the mix? Can one first-author publication offset a GPA of 3.1? How are the demographics of race, sex, sexual orientation, age, and socioeconomic status taken into consideration? On the record, faculty members claim adherence to the true nature of affirmative action: all else being equal, non-white, non-heterosexual students are given the edge. Off the record, there is often a push to admit non-white, non-heterosexual students each year, irrespective of (and often at the expense of) research experience, research productivity, clinical experience, GPA, GRE, etc. Diversity that is not visible, such as socioeconomic status, is viewed as far less important. Transparency ensures a fair process. It exposes sexism, racism, and classism for both minority and majority groups (and no applicant should be held as a walking representative of any group).

In the end, I have asked dozens of people how they choose stellar graduate students. The most common answer: I have no freaking clue. That is why I wrote this blog post. Nobody knows what makes great researchers and practitioners, and yet every program rules out people prematurely with an algorithm that leans heavily on standardized test scores and grades.

It is worthwhile to pause, collect internal data, ask questions, and be transparent with applicants. If an applicant's 5 conference presentations will never override their 3.1 GPA and 70th-percentile quantitative GRE score, just tell them. No need to lead them astray. Be student-centric.

Kuncel, N. R., & Hezlett, S. A. (2007). Assessment. Standardized tests predict graduate students' success. Science, 315(5815), 1080.

Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2001). A comprehensive meta-analysis of the predictive validity of the Graduate Record Examinations: Implications for graduate student selection and performance. Psychological Bulletin, 127(1), 162-181.

Dr. Todd B. Kashdan is a public speaker, psychologist, professor of psychology, and senior scientist at the Center for the Advancement of Well-Being at George Mason University. His latest book is The Upside of Your Dark Side: Why Being Your Whole Self—Not Just Your “Good” Self—Drives Success and Fulfillment. If you're interested in arranging a speaking engagement or workshop, visit toddkashdan.com.
