
Balancing Our Data and Our Values

Personal Perspective: On engaging with the twin imperatives of public health.

I have long argued that we do what we do, ultimately, for moral reasons, out of a deep commitment to the health of the public. We aspire to build a healthier world by building a better world, one founded on justice, equity, and compassion. This commitment nudges us beyond a neutral role concerned purely with the generation of knowledge and pushes us into the realm of activism. At the same time, we are scientists. What we do is informed by data. We owe it to the future we hope to create to ensure that these data are as sound as possible and that we listen, always, to what our science is telling us.

This is, I realize, a very particular view of the role of the population health scientist, one that puts our purpose in the foreground even while recognizing that what we do is, and has to be, built on data and truth. This marriage of science and mission-driven purpose may seem like an obviously correct mix to a reader steeped in the goals of public health, as I have long been, and animated by a commitment to the aspirations of the field. However, it is important not to lose sight of how radical this conception of our work is. For one, it diverges from the traditional role of the dispassionate scientist who does their work uninfluenced by feelings about what their data may show. For another, it pushes us to ask how we arrive at clarity on our moral imperatives, and how that clarity balances with what emerges from our data.

Accepting that a balance of moral and empirical imperatives is the right approach for public health does not mean eliding the fact that this balance can pose challenges to what we do. I would argue that the following points reflect areas in which we should take care that this approach does not threaten the integrity of our efforts.

First, it is, of course, worrisome to say that we are driven by a moral imperative, given the concern that doing so may influence our science. To be clear, nothing should influence our science, insofar as we can help it. Acknowledging our biases does not mean surrendering to them, and we should work, as individuals and as a community, to ensure that our science is as free from bias as possible. This includes creating a context in which researchers do not feel they must toe a particular line to get published and advance their careers. Such pressure would constitute, I would argue, a level of bias we should not accept, an institutionalization of something that may well appear in our science from time to time but that we should never regard as welcome or desirable. Avoiding such pitfalls means continuing to be clear and upfront about our moral precommitments, so that we can quickly recognize when they may be unduly influencing our science and work to correct this when it happens.

Second, even when our moral concerns are not influencing our science, it can appear that they are, which can be, in many ways, as problematic as when the perception reflects reality. The perception of bias undermines the public’s trust in what we do and casts doubt on our findings. It is an irony worth noting that, if we wanted to advance a certain narrative, biasing our science would be the worst way to do it. If all our science is skewed in a certain direction, it creates space for some to dismiss it and tune out what our field is saying. Both the perception and the reality of bias diminish our field and its potential impact. This makes it critical for health scientists to take steps, whenever possible, to keep a wall between, for example, their perspective pieces and public writing on one hand and their science on the other, and to make sure that the latter is not seen as influenced by the former.

Third, it is all well and good to aim for a balance between moral and empirical imperatives, but what should we do if and when they conflict? Science, to put it bluntly, is not always politically correct. If we find that our science never complicates or runs counter to our preferred narratives, it is likely that we are doing it wrong. Yet doing it right can be difficult. There are scientific data, and, indeed, whole areas of study, that can complicate, challenge, and even undercut our preferred narratives. For example, much has been written in recent years about the role of neighborhood factors (e.g., neighborhood cohesion) in generating health outcomes, but what if they do not matter as much as we have come to think they do? Or what about even more challenging lines of inquiry, such as the role of genetics in shaping the physical and psychological factors that influence health outcomes? As our understanding of this area of study deepens, it raises the possibility that we will have to reckon with the reality of group differences rooted in genetics, a politically fraught and deeply controversial topic, but one that nevertheless has profound implications for our efforts to shape a healthier world. Is our field mature enough to engage with such topics honestly, while maintaining our commitment, always, to compassion, equity, and the pursuit of justice? The alternative is to leave to others the work of engaging with data that we do not find palatable. This runs a double risk: it makes us appear to be biased, dishonest brokers, and it hands solid data to those who might not share our moral vision, and who may even oppose it, with which to make their case. This poses a challenge that everyone involved in science needs to address. When science is published, it should be seen as a call to revise our understanding of what we think matters for health. For it to carry this weight, it should be conducted with dispassion, as we make every effort to separate our biases from our data, what we believe from what we know.

A version of this piece also appeared on Substack.
