Why Loneliness Statistics Are Less Reliable Than You Think
The hidden measurement problems undermining our understanding of connection.
Updated April 16, 2025 | Reviewed by Monica Vilhauer, Ph.D.
Key points
- Different loneliness measures can share as little as 7% of their variance, calling prevalence rates into question.
- Many psychological measurement tools lack proper validation, unlike physical measures refined over centuries.
- Cultural differences and personal interpretations affect how people respond to loneliness surveys.
- Evidence-based policy requires better measurement tools that capture loneliness in all its complexity.
"15% of French citizens are lonely."
As a social connection researcher, I encounter statements like this regularly in policy documents, media reports, and academic papers. They seem straightforward and authoritative. However, studying psychometrics has taught me to be cautious about such definitive statements (as I've also discussed in my recent post about claims regarding social media and loneliness).
Let me share a puzzling situation: In 2022, two major studies—the JRC EU-wide loneliness measurement and the Meta-Gallup State of Social Connection study—examined loneliness rates across countries. Both studies were conducted in the same year, drew on similarly representative samples, and aimed to measure the same phenomenon. Yet their reported rates of loneliness differed by as much as 8 percentage points in some countries.
How could this happen? And which number should we trust?
When Different Tools Measure "The Same Thing" Differently
The problem goes deeper than different studies producing different results. Even within the same study, different measurement approaches often tell conflicting stories. When researchers use both single-item measures ("How often do you feel lonely?") and multi-item scales (like the UCLA Loneliness Scale) on the same people, the correlation between them can be as low as 0.27—translating to only about 7% shared variance.
In other words, two tools supposedly measuring the same psychological state are actually capturing largely different experiences. Imagine if one thermometer said the temperature was 75°F while another read 45°F. We'd immediately question the instruments. Yet with psychological measures, we often accept such discrepancies without questioning what they mean about our understanding of the phenomenon itself.
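For readers who want the arithmetic behind that figure: squaring a correlation gives the proportion of variance two measures share, so

r = 0.27 → r² = 0.27 × 0.27 ≈ 0.073, or roughly 7% shared variance.

The remaining 93% of the variation in one measure is left unexplained by the other, hardly what we would expect of two instruments measuring the same thing.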
The Centuries-Long Journey to Measurement Precision
I'm a big fan of Hasok Chang's book Inventing Temperature, which I've written about before. It brilliantly illustrates just how complex it is to develop measurements, even for something directly perceptible like temperature, let alone a fuzzy concept like loneliness!
Even for elementary properties like temperature, scientists devoted centuries to perfecting measurement. The modern thermometer evolved from Galileo's crude air thermoscope in the 1600s to Fahrenheit's mercury standard in the 1700s to today's digital precision instruments, with generations of scientists refining calibration methods along the way.
Social science hasn't had this luxury of time. We're still in the early stages of developing tools to measure complex psychological states like loneliness—which, unlike temperature, cannot be directly observed. Loneliness is shaped by subjective experience, cultural norms, individual interpretation, and even by societal discourse about loneliness itself, making it inherently more difficult to quantify.
The Measurement Crisis in Psychology
Loneliness research reflects broader measurement issues in psychology:
- Scale Proliferation: There are at least 280 different scales for measuring depression and 65 different scales for emotions. For social connection, too, dozens of measurement tools exist, each potentially capturing different aspects of the experience (we are currently reviewing tens of thousands of articles).
- Lack of Validation: Many psychological measures lack proper validation. Studies show that 40% to 93% of measures used across educational behavior journals have no validity evidence—meaning we often can't confirm that our tools are actually measuring what we claim they measure.
- Ad Hoc Measures: Researchers frequently create new measures without rigorous validation. One large-scale study found that 79% of scales were created by study authors without supporting validity information.
- Measurement Invariance Problems: Many measures fail tests for measurement invariance, meaning they don't assess constructs consistently across different groups or time periods—rendering comparisons invalid (as in our example last time on social media and loneliness; a simplified illustration follows below).
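To make measurement invariance concrete, here is a minimal sketch in Python on simulated data. It is an informal item-total check rather than the multi-group confirmatory factor analysis used in formal invariance testing, and every number in it is hypothetical. The point it illustrates: if the same three-item questionnaire behaves differently in two groups, comparing their summed scores is not comparing like with like.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000  # hypothetical sample size per group

def simulate_group(loadings):
    """Simulate item responses driven by a single latent 'loneliness' factor."""
    latent = rng.normal(size=n)
    noise = rng.normal(size=(n, len(loadings)))
    return latent[:, None] * np.array(loadings) + noise

# Group A: all three items reflect the latent construct about equally strongly.
# Group B: the third item barely reflects it (imagine it is read differently in
# another language or culture).
group_a = simulate_group([0.8, 0.8, 0.8])
group_b = simulate_group([0.8, 0.8, 0.2])

for name, items in (("Group A", group_a), ("Group B", group_b)):
    total = items.sum(axis=1)
    item_total = [round(float(np.corrcoef(items[:, j], total)[0, 1]), 2)
                  for j in range(items.shape[1])]
    print(name, "item-total correlations:", item_total)

# When these profiles diverge across groups, summed scores no longer mean the
# same thing in each group; the formal check is a multi-group confirmatory
# factor analysis (configural, metric, and scalar invariance).
```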
The Complexity We're Missing
Our research interviews reveal dimensions of loneliness that standard measures rarely capture. Some participants describe loneliness not as missing others, but as missing connection to themselves—a profound inner disconnect that exists regardless of external relationships. Others report rapidly shifting states of connection and disconnection throughout a single day, defying the static nature of most measurement approaches.
These nuances aren't merely academic curiosities—they fundamentally change how we should approach supporting people who feel disconnected. A person experiencing existential loneliness requires different support than someone lacking practical companionship, yet our current measures often lump these experiences together.
Moving Forward With Appropriate Humility
These measurement challenges don't mean we should abandon efforts to understand loneliness. Rather, they call for appropriate humility about what we know and don't know.
Rather than treating survey results as definitive pronouncements about the state of loneliness, we should view them as imperfect indicators that provide partial insights into a complex human experience. This approach allows us to use the information we have while acknowledging its limitations.
As I argued in my blog post about social media and loneliness, this is particularly important when research shapes public policy. When we base interventions on flawed measurements, we risk addressing symptoms while missing the true drivers of social disconnection. Recognizing this need for better measurement is why I'm excited to be part of LONELY-EU, where our Annecy Behavioral Science Lab is collaborating with partners across Europe to develop a comprehensive monitoring framework for social isolation and loneliness in the EU. This initiative aims to create validated, cross-culturally comparable, yet flexible measurement tools that can provide a more reliable foundation for both research and policy.
For researchers, this means continuing to improve our measurement approaches, embracing more sophisticated models that can capture the dynamic, context-dependent nature of loneliness, and being transparent about limitations.
For policymakers, journalists, and the public, it means approaching loneliness statistics with healthy skepticism and recognizing that seemingly straightforward numbers often conceal significant complexity and uncertainty.
By embracing this nuanced view, we can develop more effective ways to understand and address one of humanity's most fundamental experiences—the yearning for meaningful connection.
When you hear that you're part of a demographic group with "high loneliness rates," does that reflect your personal experience? What aspects of your experience with connection and disconnection do you think standardized measures might miss? Share your thoughts.

This post is a summary of a longer, more technical post written with Sharanya Mosalakanti and Ivan Ropovik.