The Limits of Neurocognitive Testing in Studying Long Covid
A new study highlights pitfalls in interpreting neurocognitive test results.
Posted May 26, 2022 | Reviewed by Kaja Perina
A study published this week in the Annals of Internal Medicine by Sneller and colleagues compared 189 people who had recovered from Covid-19—many of whom reported symptoms of Long Covid—to a control group of 120 participants who never had Covid-19.
The study authors found no difference between the groups on neurocognitive tests of processing speed, executive functioning, and memory. Interestingly, the authors also did not find differences between groups in many physical and immunologic tests, but I’ll focus on the neurocognitive testing in this post.
At first glance, the lack of difference between the groups on neurocognitive testing seems surprising. A growing body of research on Long Covid indicates that a percentage of individuals have persistent cognitive difficulties. Why was that not found in this study?
The Scope of Neurocognitive Tests
To answer this question, it’s helpful to first understand what neurocognitive tests are. Neurocognitive (or neuropsychological) tests require a patient or research participant to “perform” or “demonstrate” cognitive skills, such as solving problems, learning and remembering a list of words, or responding to certain targets and not others as fast as possible. These tests have been administered to large groups of healthy individuals to establish norms. Whether an individual patient or research participant’s test score is judged “normal” or “abnormal” is then defined relative to the average performance of that large normative group.
Unfortunately, this approach can result in a “one size fits all” definition of (ab)normality that does not account for each patient or research participant’s pre-illness cognitive function. If an individual person’s pre-illness functioning is higher than average, then what looks like a normal or average test score after Covid-19 might actually represent a significant decline for that person.
The importance of considering pre-illness functioning is underscored by another study published in the European Journal of Neurology. The researchers looked at a cohort of individuals in Ecuador who were undergoing annual neurocognitive testing, starting prior to the Covid-19 pandemic. The researchers therefore had individual, pre-Covid measurements of cognitive function for comparison to test scores after Covid-19.
The results showed that participants who developed and recovered from Covid-19 had a mild but significant decline in cognitive test scores from pre-illness to six months after the illness. In contrast, participants who never had Covid-19 showed no such change from their own pre-illness scores. (In a bit of positive news, the authors conducted a follow-up cognitive assessment one year post-illness and found that the decline seen at six months had largely reversed.)
The Sneller study—like most studies, and unlike the Ecuadorian cohort—did not have pre-illness cognitive testing for its participants. But even in the absence of baseline testing, pre-illness cognitive function can be estimated by proxy methods, typically tests of vocabulary, word reading, or “crystallized” knowledge. Performance on these tests tends to be resistant to illness or disease and can therefore serve as a useful estimate of a person’s pre-illness cognitive function. The Sneller et al. group missed an opportunity to use such tests, which could have provided a more individualized benchmark against which to compare participants’ neurocognitive test performance.
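To make the logic of an individualized benchmark concrete, here is a minimal, purely illustrative sketch (the numbers, function names, and cutoff are hypothetical, not drawn from either study). Test scores are expressed as z-scores relative to a healthy normative group (mean 0, standard deviation 1):

```python
# Illustrative only: scores are z-scores against a healthy normative group.
ABNORMAL_CUTOFF = -1.0  # a common, but here purely illustrative, threshold

def flagged_by_group_norm(score_z: float) -> bool:
    """Flag a score as abnormal relative to the group average."""
    return score_z < ABNORMAL_CUTOFF

def decline_vs_own_baseline(score_z: float, premorbid_z: float) -> float:
    """Compare a score to the person's estimated pre-illness level,
    e.g., a premorbid estimate from a word-reading ("hold") test."""
    return score_z - premorbid_z

# A participant whose word-reading score suggests pre-illness ability
# about one standard deviation above average...
premorbid_z = 1.0
# ...scores "within normal limits" on a memory test after Covid-19:
memory_z = -0.3

print(flagged_by_group_norm(memory_z))                 # False: looks "normal"
print(decline_vs_own_baseline(memory_z, premorbid_z))  # -1.3: a real drop
```

Against the group norm alone, this participant raises no flag; against their own estimated baseline, the same score represents a drop of more than one standard deviation, which is exactly the kind of decline a one-size-fits-all cutoff can hide.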
The Sneller study also highlights a bias inherent in the design and interpretation of cognitive research: performance-based neurocognitive tests tend to be prioritized and perceived as more “objective” or “real” than the participant’s own report of their experience. The latter type of assessment, referred to as patient-reported outcomes (PROs), is rigorously validated and provides important, complementary information about cognitive function outside the constraints of the laboratory setting.
The Importance of Study Design
Neurocognitive tests are typically administered in quiet, distraction-free research rooms or labs, whereas day-to-day life contains far more information to keep track of and far more distractions. Subtle weaknesses observed on tests administered in optimal, laboratory-based environments can have a much greater impact in real-world settings.
The Sneller study could have improved its characterization of cognitive function with PROs, which were largely absent from the study. A particularly notable omission was any patient report of cognitive fatigue, a highly prevalent symptom of Long Covid.
Physiological measurements of brain function (for example, functional MRI or electroencephalography) are also important tools in a comprehensive assessment and can detect subtle changes in the brain and cognition.
The findings of Sneller et al. are gaining some traction in the popular press and on social media. But without considering the limits of how cognition is measured, I worry the study may perpetuate a tendency to dismiss the experiences of people suffering from Long Covid (“see, there’s nothing really wrong on ‘objective’ tests!”).
In sum, we should always consider the possibility that relying on a single tool or method to measure cognitive function gives us only part of the picture of what it means to have Long Covid.