A devastating plague overwhelms the fictional island of St. Hubert in Sinclair Lewis’ Pulitzer Prize-winning novel Arrowsmith. Physician-researcher Martin Arrowsmith, who has been instrumental in isolating a potential treatment called “phage” at his institute, loosely based on the turn-of-the-century Rockefeller Institute (now Rockefeller University) in New York City, is sent to the island to administer his possible cure. Before Arrowsmith leaves, his mentor says, “If I could trust you to use the phage with only half your patients and the others as controls, under normal hygienic conditions but without the phage, then you could make an absolute determination of its value…” Martin swears that “he would not yield to compassion” and will uphold the experimental conditions. When, though, Arrowsmith sees those afflicted “shrieking in delirium” with “sunken bloody eyes,” and particularly after his beloved wife, who has accompanied him to the island, dies of the disease, he says, “Damn experimentation!” and gives the treatment to everyone who asks.
Arrowsmith’s mentor, Dr. Max Gottlieb, was insisting that Arrowsmith conduct a clinical trial—essentially an experiment involving human subjects to assess the efficacy of at least one specific treatment intervention, conducted to advance general scientific knowledge and not necessarily for the benefit of an individual patient. Subjects in clinical research can sometimes have a “therapeutic misconception,” namely the belief that their individualized needs will determine their treatment allocation, and an “unreasonable appraisal” that they will necessarily receive “direct therapeutic benefit” from participating in a research study (i.e., a “misconception regarding the process or goals of the research”). (Swekoski and Barnbaum, Ethics & Human Research, 2013) A clinical trial is based on the concept of equipoise, in which either the researchers themselves “genuinely do not know what is the best way to treat their patients” (Doll, Statistics in Medicine, 1982) or there exists a state of “genuine uncertainty” within the medical community about the therapeutic merits of a particular treatment. (Freedman, NEJM, 1987)
Throughout the centuries, there have been reports of several “clinical trials,” reviewed in detail by both Bhatt (Perspectives in Clinical Research, 2010) and Vandenbroucke (Journal of Chronic Diseases, 1987). Possibly the very first involved comparing diets: in the Old Testament’s Book of Daniel, King Nebuchadnezzar of Babylon, by request, allowed Daniel and some of his men to take only vegetables and water while the others took only meat and wine. (Those who ate only the vegetables apparently fared better.) There was also the famous 1747 study by Scottish ship’s surgeon James Lind, another diet comparison, in which he discovered the anti-scurvy benefits that sailors gained from a diet that included lemons and oranges. (Bhatt, 2010)
In Lewis’s novel, Martin Arrowsmith is confronted with the difference between the experimental conditions within the “security of the laboratory” and what one actually has to contend with in the trenches of human misery.
The clinical decision, though, namely to whom to administer the unproven treatment, was not as difficult in the 1946 British Medical Research Council (MRC) groundbreaking study of streptomycin as a treatment for pulmonary tuberculosis, because the medication was in short supply. This landmark clinical trial was conducted under the auspices of Sir Austin Bradford Hill (1897-1991), then director of the MRC’s Statistical Research Unit. (I have written previously about Sir Austin, known, as well, for his “viewpoints” on causation and his linking of smoking with lung cancer.) This trial is often considered the first strictly controlled and, most importantly, randomized trial, one that “ushered in the new era of medicine.” (Hill, Controlled Clinical Trials, 1990) Initial randomization is crucial because it prevents selection bias—“The aim, then, is to allocate those admitted to the trial in such a way that the two groups—treatment and control—are initially equivalent.” (Hill and Hill, Principles of Medical Statistics, p. 219, 1991) Hill, incidentally, had himself been bedridden years earlier for almost two years with tuberculosis that he had contracted in Greece during World War I, and his illness had precluded his career aspiration of becoming a physician. (Hill, British Medical Journal, 1985)
Streptomycin had been discovered in the U.S. in 1944 and was not then readily available in a Britain impoverished by the War. (D’Arcy, British Medical Journal, 1999) Said Hill (1990), “…the shortage of streptomycin was the dominating feature” and made it ethically possible to consider a clinical trial in which a potentially beneficial (but not clearly proven) treatment was withheld from half the “desperately ill” patient population. (Hill, British Medical Journal, 1963) (The control population received the standard treatment of that era—bed rest.) Significantly, though, the trial was not double-blinded (i.e., both physicians and patients knew which treatment was administered, though the two independent radiologists who read the chest films were “blind” to which group each patient belonged). Nor was it placebo-controlled. (D’Arcy, 1999) The rationale for this protocol (“no need to throw common sense out the window,” said Hill, British Medical Journal, 1963) was that streptomycin administration then required four intramuscular injections a day for four months, and the researchers did not want to subject their control patients to four injections of saline daily for the duration of the experiment. (Doll, Statistics in Medicine, 1982)
Though Hill was always concerned about the ethics of clinical trials (e.g., the use of placebos; the withholding of treatment), he believed the operative question should be, “When is it necessary to ask the patient’s consent to his inclusion in a controlled trial?” (Hill, British Medical Journal, 1963) He believed that giving patients too much information, especially about the uncertainty of a treatment, potentially undermined their trust in their physicians. In effect, he was emphasizing that how physicians frame information can have a detrimental effect on the treatment—what we now call the nocebo effect. That is, giving patients too much information about potential harm may itself be harmful. (For more on the nocebo effect, see my previous blog post.)
Hill noted that ethical standards were different in the 1940s: researchers did not obtain the patient’s or anyone’s permission. Nor did Hill and his researchers even tell patients they were part of a trial. In fact, Hill believed it was “wrong to shift the entire consent-giving responsibility onto the shoulders of patients who cannot really be informed.” (Controlled Clinical Trials, 1990) Hill, incidentally, was criticized for his view on consent in a letter by “a member of the uninitiated public” (Hodgson, British Medical Journal, 1963) and in an editorial in the British Medical Journal that same year.
Bhatt notes that the framework for the protection of human subjects had its origins in the Hippocratic Oath—i.e., do no harm (Bhatt, 2010)—but it was not until the egregious medical experiments conducted in the name of science by the Nazis became known that the Nuremberg Code of 1947 highlighted the importance of “voluntariness” in giving consent. (Bhatt, 2010) Primo Levi, in his moving If This Is A Man, about his experiences as a survivor of Auschwitz, wrote, “We are slaves, deprived of every right, exposed to every insult, condemned to almost certain death, but we still possess one power, and we must defend it with all our strength, for it is the last—the power to refuse our consent.” (p. 37) And it was in 1964 that the Helsinki Declaration by the World Medical Association established “general principles and specific guidelines on use of human subjects in medical research.” (Bhatt, 2010; World Medical Association Declaration of Helsinki, JAMA, 2013)
In 1966, Henry K. Beecher, a professor of anesthesiology at Harvard, reviewed “a variety” of 22 unethical or questionably ethical practices in medicine and noted the “unfortunate separation between the interests of science and the interests of the patient.” Though he believed that an experiment is either ethical or not from the start, Beecher also acknowledged, “Consent in any fully formed sense may not be obtainable. Nevertheless…it remains a goal toward which one must strive…There is no choice in the matter.” (Beecher, NEJM, 1966)
Significantly, in the last edition (the 12th) of his textbook Principles of Medical Statistics published before his death, written with his son, Hill not only had a chapter on ethics but included an Appendix on ethics and human experimentation that incorporated the principles of the Helsinki Declaration. (Hill and Hill, 1991)
Throughout the years, despite international attempts at regulating experiments on human subjects, there have continued to be alarming abuses (some even initiated by U.S. government agencies), particularly against the disenfranchised, such as Blacks (e.g., the untreated syphilis studies in Tuskegee, Alabama, which did not stop until 1972); the mentally impaired (e.g., the hepatitis study at the Willowbrook State School for the Retarded in New York, also until 1972); and prisoners (e.g., the testicular irradiation of state prisoners in Oregon, conducted until 1974).
Bottom Line: Since the exposure of these abuses, the U.S. government has mandated the establishment of Institutional Review Boards (IRBs) to regulate any federally funded research. This “checks and balances” system, as evident in the concept of “advise and consent,” is far from perfect but considerably better than at any previous time in history. For a discussion of IRBs, see the 2015 book The Ethics Police? by Robert Klitzman, and for more details on the history of these decades-long abuses and more recent regulations, as well as concrete suggestions to improve the protection of human research volunteers, see the two-part article by Dr. Marcia Angell in the November and December 2015 issues of The New York Review of Books.