Was There a US Epidemic of Heart Problems After 9/11/2001?
If claims were correct, why did no one else notice?
Posted Sep 09, 2011
If correct, these claims demand our attention. They would require a radical revision of the entrenched idea that development of posttraumatic stress disorder requires direct exposure to, or being directly affected by, a severely stressful event. These claims undercut impressions that despite the many challenges Americans faced in the months after 9/11, they pulled together as a people and showed remarkable resilience. And if these claims are true, they suggest we need to revise our understanding of how resistant to stress people are in general. As the authors of these papers suggest, people could be quite vulnerable to the traumatizing effects of learning of horrific events, even if the events do not directly affect them, and these effects could be worsened by watching the events on TV. Some people, at least, were glued to their televisions on the morning of 9/11 and afterwards. Would they have done better to simply turn their televisions off, and maybe, given the mental health risk they were supposedly facing, immediately go see a professional?
In the first installment, I raised serious doubts about the validity of the claims in the JAMA article. I started by noting that the authors took liberties in the rewording and rescoring of standardized instruments and that these alterations undercut their validity and made comparisons to other samples difficult. In the second installment, I showed that the claims in the JAMA article were strongly contradicted by better quality evidence indicating (1) even among persons directly affected by the events of 9/11, rates of posttraumatic stress disorder were remarkably low and rapidly diminished over time; and (2) among the American people in general, mental health effects were minimal and distress levels stayed within their usual limits. Of course, this is not to deny the profound and lasting effects of the events of that day, but it does raise basic questions about whether the effects registered in mental health or psychiatric terms across the nation.
Did the events of 9/11 affect the heart health of the American people as a whole? Dramatic claims that strong effects occurred were made in an Archives of General Psychiatry article authored by Professor Alison Holman and colleagues, including the author of the JAMA article, Professor Roxanne Silver. The article claimed that "physician-diagnosed cardiovascular ailments increased during the three-year period from 21.5% pre-9/11... to 30.5% at three years post-9/11". This represents a 42% increase. If this is accurate, there are major public health implications.
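As a quick check on that arithmetic, the jump from 21.5% to 30.5% is indeed about a 42% relative increase. A minimal sketch, using only the two percentages quoted from the article:

```python
# Prevalence figures quoted from the Archives article
pre_911 = 21.5   # % reporting physician-diagnosed cardiovascular ailments pre-9/11
post_911 = 30.5  # % at three years post-9/11

# Relative (not absolute) increase: a 9-point rise on a 21.5-point base
relative_increase = (post_911 - pre_911) / pre_911 * 100
print(f"Relative increase: {relative_increase:.0f}%")  # about 42%
```

Note the distinction: the absolute increase is 9 percentage points, but expressed relative to the starting prevalence it is the 42% figure that makes the claim sound so dramatic.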
The claim immediately prompted a reaction from this Skeptical Sleuth: "Wow! If this is true, why didn't anybody else report it? Surely, the Centers for Disease Control and other scientists and public health officials are routinely monitoring the health of the nation, so why didn't they notice?"
Below I'm going to take a critical look at this article, and if you're interested in following along, you can access the original article directly with this free link. I'm going to be raising some provocative, sharply critical points. You're free to take a skeptical view of my criticisms, and see if you really agree with me. Start being skeptical of the skeptical sleuth! After all, the authors made their claims in a prestigious, high-impact, peer-reviewed journal, and if you can't trust such sources, who can you trust?
What people are we talking about? The abstract of the article states that a final sample of 2592 adults was selected to be representative of the national population and completed an Internet survey. This seems straightforward enough, but I previously noted that the JAMA paper was actually based on a small minority of respondents continuing in an ongoing survey after initially being contacted. Less than 16% of the original sample, a potentially highly unrepresentative subset, was retained and approached for participation in the Internet survey. The same criticisms of being based on an unrepresentative sample apply to this paper, and in the Methods section we further learn that only 1760 cases were available for analysis. Bottom line: despite initial impressions, we may be dealing with a highly select, biased sample, and we need to read this article carefully.
How were "physician-diagnosed cardiovascular ailments" determined? Here too we have to read carefully. The research participants responded on the Internet to the question "Has a medical doctor ever diagnosed you as suffering from any of the following ailments?" This was followed by a list of 35 physical and mental health ailments. So, we're dealing with research participants' recollection and self-report, not information gathered from physicians or medical records.
Participants' recall of having been diagnosed by a physician is highly selective. Whether they actually received a diagnosis may depend on their seeking help as much as on the severity or impairment of any conditions they might have. One of my hypotheses, developed in earlier blogs, has been that the minority of participants who stayed in the Internet survey are neurotic or prone to distress. That's why they endorsed "blaming themselves" as one of the main ways they coped with the events of September 11. We know that persons who are distressed seek more health care and also remember what they have been told with less accuracy.
Let's look at how the score for "physician-diagnosed cardiovascular ailments" was constructed. The authors report taking a novel approach to summarizing their health data by coding participants' reports of a physician having diagnosed them with "heart problems," "hypertension," and "stroke" as "circulatory." They renamed this 0-3 circulatory variable "cardiovascular health" for the rest of the paper and renamed stroke a "comorbid heart ailment." Does anyone else find this confusing?
The authors present no reassuring data on the validity of lumping these self-reported conditions into a single measure of cardiovascular response to 9/11. The problems of having to accept self-reports of single health conditions at face value are well known, but in this case problems of validity are compounded by the authors' lumping together of the answers to vague questions in a summary scale. When investigators ask a vague question, they get a vague answer. "Heart problems" can cover everything from true heart attacks to congestive heart failure to anxiety-related problems to detection of minor and non-life-threatening abnormalities. Consider anxious patients who are reassured by a physician that their racing heart has a name, tachycardia. Can they endorse "heart problems" on the basis of tachycardia? The important question is whether they think they can. "Stroke" has somewhat similar vagueness when left to patient self-report because of the range of conditions that might be classified as a stroke.
The article emphasizes changes in cardiovascular health in the time after 9/11. Interpreting change is difficult when we don't know which of these various health conditions might change. The temptation is to assume that we are talking about events like heart attacks or stroke, but that's a risky assumption, and it is made less credible when we note that we are trying to account for a 42% increase in cardiovascular ailments in the three years after 9/11.
A brief crash course in reading statistical tables, for readers who are not familiar with such tables but who are interested in interpreting them for themselves. "Adjusted relative risk ratios" can basically be interpreted as the ratio of events reported by participants with high acute stress to the number reported by those reporting low stress. So, a relative risk of 2.98 means that participants reporting high acute stress also reported 2.98 times as many new cardiac ailments in the first year after 9/11, almost 3 times as many.
Usually, relative risks that are not significantly different from 1.0 are ignored. We would therefore ignore any results whose confidence intervals did not exclude 1.0, because for these it is not statistically unlikely that there was no difference between the two groups. Getting back to the table, the only significant differences were for hypertension in the first year and heart problems in years one and two. There were no differences for comorbid cardiac ailments in any year, and no differences in any condition for the third year.
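For readers who want to see this screening rule in action, here is a minimal sketch of filtering relative risks by whether the 95% confidence interval excludes 1.0. Apart from the 2.98 relative risk quoted above, the numbers are hypothetical and chosen only to illustrate the rule:

```python
# Screening relative risks by whether the 95% CI excludes 1.0.
# Except for the 2.98 relative risk quoted in the text, these numbers
# are hypothetical and serve only to demonstrate the rule.
results = {
    "heart problems, year 1": (2.98, 1.40, 6.30),     # (RR, CI lower, CI upper)
    "comorbid ailments, year 1": (1.20, 0.70, 2.10),  # CI straddles 1.0
}

for ailment, (rr, lower, upper) in results.items():
    excludes_one = lower > 1.0 or upper < 1.0
    verdict = "significant" if excludes_one else "ignore (CI includes 1.0)"
    print(f"{ailment}: RR={rr}, 95% CI [{lower}, {upper}] -> {verdict}")
```

The second entry would be discarded: because its interval contains 1.0, the data are compatible with there being no difference at all between the high-stress and low-stress groups.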
Basically, Table 3 indicates that any overall differences in cardiac ailments were driven by increases in hypertension in the first year and increases in heart problems, whatever heart problems represent, in years one and two. The increase in hypertension is all the more extraordinary because hypertension typically has no symptoms. In order to be diagnosed with hypertension, people have to seek help and get their blood pressure measured as part of a visit to the doctor. In a sample of persons with an average age of 50, as this sample is, we can expect that most people won't even get an annual physical, and so the results are even more extraordinary. The increase in heart problems I simply can't interpret on the basis of the information the authors provide.
Note that the authors tested three variables for three periods, for 3x3 = 9 statistical tests. If we correct for multiple tests having been conducted, the results for hypertension and heart problems might disappear, and so maybe we don't have so much to explain.
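To make the multiple-testing point concrete, here is a sketch of the simplest such correction, Bonferroni: with 9 tests, the per-test significance threshold shrinks from 0.05 to 0.05/9. The p-value used below is hypothetical, since the article does not report exact p-values for each cell:

```python
# Bonferroni correction sketch for 9 tests (3 ailment variables x 3 years).
# The p-value below is hypothetical; the article reports no exact p-values.
alpha = 0.05
n_tests = 3 * 3
bonferroni_alpha = alpha / n_tests
print(f"Per-test threshold after correction: {bonferroni_alpha:.4f}")  # 0.0056

hypothetical_p = 0.03
print("Significant uncorrected:", hypothetical_p < alpha)          # True
print("Survives Bonferroni:", hypothetical_p < bonferroni_alpha)   # False
```

A result that just clears the conventional 0.05 bar, in other words, can easily fail the corrected threshold, which is why uncorrected results from a battery of tests warrant suspicion.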
But Table 3 also suggests that the three components of "heart ailments" don't hold together all that well and maybe should not be lumped together.
What mechanism could explain a causal association between the events of 9/11 and dramatically increased numbers of cardiovascular ailments in the general population? In their introduction (pages 73-74), the authors state "the US public experienced a terrorist event of extraordinary scope and traumatic impact." They go on to suggest that "acute, subacute, and chronic stress can gradually increase cardiovascular risk through neurohormonal arousal." Perhaps, but as I showed in two previous blogs, these authors did not demonstrate lasting psychological effects of the stress of the day of 9/11, and other high-quality studies also failed to show an effect.
Is there cherry picking or selective reporting of data going on here? "Cherry picking" is the deplorable but all too common practice of investigators exploring lots of different ways of analyzing their data and then reporting or emphasizing only the results that are significant. When cherry picking occurs, we have to be more doubtful that the reported results could be replicated in other studies or would generalize to the real world.
Is that going on here? The question is 'Did the authors really set out to test a score combining research participant reports of hypertension, heart problems, and comorbid heart ailments/stroke, or did they decide after looking at the data that these were the strongest results?' We can't tell for sure, but as always, skeptical sleuths are alert for cherry picking and make hypotheses based on the best evidence they can gather, knowing that they can never be certain. In this article I believe there is evidence of cherry picking, and I went to the trouble of asking the authors for additional data that would have settled the question. Unfortunately, they refused to provide it, and I had to go on the basis of the evidence I could assemble.
Survey studies, even when done over the Internet, are expensive, and investigators typically limit the variables they assess to those in which they are interested. So, I was curious why the authors assessed 35 different physical and mental health ailments but report results only for a composite of three of them. Only in passing, in a paragraph on page 78, do they note that the increases in other ailments after 9/11 associated with acute stress were not significant.
Recall that statistical tests were done for three different time points: years 1, 2, and 3 after 9/11/2001. So, potentially 35x3 = 105 statistical tests were performed. Somehow I suspect that if more of them had been significant, we would have been told about it. Regardless, the authors should have been a lot more transparent and forthcoming in presenting their results.
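And 105 tests is enough to expect several "significant" findings from chance alone. A back-of-the-envelope sketch of how many false positives we would expect even if no ailment were truly associated with acute stress:

```python
# Expected chance "hits" if all 105 tests examined true null effects.
# At the conventional 0.05 threshold, each test has a 5% false-positive rate.
n_tests = 35 * 3  # 35 ailments x 3 follow-up years
alpha = 0.05
expected_false_positives = n_tests * alpha
print(f"Expected false positives by chance: {expected_false_positives:.2f}")  # 5.25
```

So roughly five spuriously significant results would be expected from this battery of tests even in the complete absence of any real effect, which is another reason the undisclosed results matter.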
Finally, the measure of "cardiovascular health" that they chose to focus on likely includes many conditions that are unlikely to be caused by acute or chronic stress. Heart attacks may be, but not the discovery of congenital heart defects or valve problems or infections of the heart or pericarditis, all of which would fit "heart problems." I am not sure what choices other investigators would've made, but there are other health conditions that were assessed that are better candidates as markers for responses to stress.
Conclusion. If true, the authors' claims would require that we revise criteria for posttraumatic stress, that we accept their evidence that the events of 9/11 were traumatic for the whole nation, not just the directly affected residents of New York City and Washington, DC, and that we accept, in the absence of other evidence, their methodologically weak finding that "cardiovascular ailments" increased in the three years after 9/11/2001. Strong claims requiring action also require strong evidence and corroboration from other sources. I just don't see it here.
The authors simply don't allay my skepticism that the events of 9/11, which did not have substantial effects on the mental health of the nation, somehow proved stressful enough to cause a nearly threefold increase in hypertension, as well as increases in other measures of "cardiovascular health."