NYTimes Magazine: “Can Big Data Tell Us What Clinical Trials Don’t?”  Journalists like to dichotomize to create tension.  The story can be paraphrased: Dr. Frankovich saw a girl with lupus who presented with an unusual constellation of symptoms: her kidneys were failing; her blood vessels and pancreas were sending distress signals.  Drawing on her clinical experience, Dr. Frankovich felt that patients with lupus and these additional symptoms developed more blood clots.  She went to a database, calculated some averages, and convinced her team to give the patient a blood thinner.  The patient did not develop a blood clot.

The story might have had a different ending if the girl had suffered a serious complication from the blood thinner, such as an intracranial hemorrhage.  Dr. Frankovich might have had to sit in front of a jury.  Luckily this wasn’t the case, and the article reads as a pitch rather than a cautionary tale.

Every intervention carries a risk and a benefit.  Dr. Frankovich did a risk-benefit analysis in her head, and felt that the benefit justified the risk: patients don’t usually bleed into their heads when given a blood thinner.  But she doesn’t know that for sure.  In a simple clinical trial, you test two interventions against each other, with the assumption that each carries approximately the same balance of risk and benefit.  This principle is called equipoise.  In this case, and in many other cases like it, clinicians like Dr. Frankovich would have said that there wasn’t equipoise, so a straightforward randomized clinical trial would not be an appropriate way to discover the right course of action.  But they don’t know for sure.

Is there a way to find out for sure?  Is there a way to calculate the risks and the benefits exactly?

(Apologies for the math in advance; as Stephen Hawking said, “for every equation in the book the readership would be halved…”)

The risks and the benefits can be quantified as probabilities.  Let’s calculate the benefit first.  The benefit here is the (hopefully decreased) probability of having a blood clot, given the clinical presentation, the existing database, and the intervention (the blood thinner).  You can write that in shorthand (also known as a conditional probability):

P(blood clot | symptoms, database, blood thinner)
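
Concretely, the back-of-the-envelope version of this quantity is just counting in a database, which is essentially what “calculated some averages” means.  A minimal sketch in Python, with hypothetical records and field names (not the actual database Dr. Frankovich used):

```python
# Hypothetical records: (has_symptoms, got_blood_thinner, developed_clot).
records = [
    (True, True, False), (True, True, True), (True, True, False),
    (True, False, True), (True, False, True), (True, False, False),
]

# Condition on the evidence: same presentation AND a blood thinner given.
matching = [r for r in records if r[0] and r[1]]

# The estimate is the fraction of matching patients who developed a clot.
p_clot = sum(r[2] for r in matching) / len(matching)
print(f"P(clot | symptoms, thinner) ~ {p_clot:.2f}")  # 0.33 in this toy data
```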

To calculate this more rigorously, we use Bayes’ theorem:

P(blood clot | symptoms, data, blood thinner)

        P(symptoms, data, blood thinner | blood clot) P(blood clot)
    = ----------------------------------------------------------------
                   P(symptoms, data, blood thinner)
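
In code, the theorem itself is one line of arithmetic; the hard part is obtaining the three ingredients on the right-hand side.  A sketch with made-up illustrative numbers (assumptions, not values from any real database):

```python
def bayes_posterior(likelihood: float, prior: float, evidence: float) -> float:
    """P(clot | evidence) = P(evidence | clot) * P(clot) / P(evidence)."""
    return likelihood * prior / evidence

# Illustrative numbers only:
prior = 0.10       # P(blood clot): baseline rate in this population
likelihood = 0.40  # P(symptoms, data, thinner | blood clot)
evidence = 0.20    # P(symptoms, data, thinner): the troublesome denominator

print(bayes_posterior(likelihood, prior, evidence))  # 0.2
```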

These terms have names.  P(blood clot) is called the prior: the overall probability of someone having a blood clot, which is easy to calculate.  P(symptoms, data, blood thinner | blood clot) is called the likelihood, and the ratio of the likelihood to the denominator measures how much this particular evidence (the unique presentation and the clinical intervention) should scale the probability of developing a blood clot.  The likelihood in the numerator can be factored if one assumes that the symptoms, the method of data collection, and the decision to give a blood thinner are conditionally independent of one another given the outcome, i.e.

P(symptoms, data, blood thinner | blood clot)

    = P(symptoms | blood clot) P(data | blood clot) P(blood thinner | blood clot)

The first term on the right-hand side is probably what Dr. Frankovich estimated during rounds: the overall probability of having these symptoms among patients who in fact went on to develop a blood clot.  The second and third terms don’t change with the clinical picture, so they can be safely ignored as constants.
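
To make the “constants” point concrete, here is a sketch of the factored numerator.  All numbers are illustrative assumptions, not estimates from data:

```python
# Factoring the likelihood under the conditional-independence assumption.
p_symptoms_given_clot = 0.50  # what could plausibly be estimated on rounds
p_data_given_clot = 0.80      # does not vary with the clinical picture...
p_thinner_given_clot = 1.00   # ...so these two act as constants

likelihood = p_symptoms_given_clot * p_data_given_clot * p_thinner_given_clot
print(likelihood)  # 0.4, the likelihood value used in the sketch above
```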

The wildcard is the denominator: P(symptoms, data, blood thinner).  This is called a joint probability.  It can only be extracted from the database, often in very complicated ways involving integrals and scary-sounding techniques like Markov Chain Monte Carlo.  If this denominator could be ignored, Dr. Frankovich would be doing exactly the right thing: the benefit would then scale linearly with her back-of-the-envelope calculation.  Unfortunately, the denominator generally cannot be ignored, and her back-of-the-envelope calculation is prone to a number of errors.
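
For a single binary outcome, the denominator can at least be written as a two-term marginalization over “clot” and “no clot”; it is when the model has many interacting variables that this sum turns into a high-dimensional integral.  A sketch of the simple case, again with illustrative numbers:

```python
# The denominator marginalizes over the outcome:
#   P(e) = P(e | clot) P(clot) + P(e | no clot) P(no clot)
p_clot = 0.10
p_e_given_clot = 0.40
p_e_given_no_clot = 0.178  # rarely known directly; this is the hard part

p_evidence = p_e_given_clot * p_clot + p_e_given_no_clot * (1 - p_clot)
print(p_evidence)  # ~0.20; with many variables this sum becomes a
                   # high-dimensional integral, hence MCMC and friends
```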

The risks can be calculated in exactly the same way:

P(bleed into your brain | symptoms, data, blood thinner)

The first point of this little mathematical interlude is that we have the mathematical toolbox to quantify exactly the risks and benefits of a particular intervention given only observational data.  The denominator has baffled people for many years, but newer techniques now allow us to calculate it too.  In theory, Dr. Frankovich could have done better than simple averages.

The second point is to show that there is almost never real equipoise.  Let’s say we want to run a clinical trial to test whether giving a patient a blood thinner is better than NO blood thinner.  Before the trial starts, to ensure equipoise, we should calculate the two probabilities:

P(blood clot | symptoms, database, blood thinner)

and

P(blood clot | symptoms, database, NO blood thinner)

It doesn’t take a mathematician to see that unless there is a miracle, the two probabilities are not going to be very similar.  
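A sketch of that equipoise check, using the same counting estimate as before on a hypothetical toy database:

```python
# Hypothetical records: (has_symptoms, got_blood_thinner, developed_clot).
records = [
    (True, True, False), (True, True, False), (True, True, True),
    (True, False, True), (True, False, True), (True, False, False),
]

def estimate_p_clot(thinner: bool) -> float:
    arm = [r for r in records if r[0] and r[1] == thinner]
    return sum(r[2] for r in arm) / len(arm)

p_with = estimate_p_clot(True)      # P(clot | symptoms, database, thinner)
p_without = estimate_p_clot(False)  # P(clot | symptoms, database, NO thinner)
print(f"with thinner: {p_with:.2f}, without: {p_without:.2f}")  # 0.33 vs 0.67
```

Here the two arms come out at roughly 0.33 versus 0.67: nowhere near equipoise.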

Right now, the entire edifice of clinical trial design depends on a committee of people intuiting these two probabilities and making sure that they are not wildly different.  But randomizing people half-and-half when the probabilities of benefit are unequal is not very efficient.  We want to bet on the horse with the best odds of winning to begin with, which is what Dr. Frankovich did, even if she could not quite articulate it and her results could not be rigorously quantified.  Technically, what she did was start a clinical trial: not a conventional trial, but a Bayesian trial.

Matching the randomization probability to estimates of treatment success is the basic principle behind Bayesian adaptive trial design.
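
One common implementation of that principle is Thompson sampling, sketched below with a Beta-Binomial model.  The arm names and outcome coding are illustrative, not from any particular trial:

```python
import random

# Thompson-sampling sketch of Bayesian adaptive randomization.
# Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
# probability of a good outcome (no clot). The next patient is randomized
# to an arm with probability proportional to its chance of being best.
arms = {"thinner": [1, 1], "no thinner": [1, 1]}  # Beta(alpha, beta) counts

def assign_next_patient() -> str:
    # Draw one plausible success rate per arm from its posterior, then
    # treat the next patient with whichever arm drew the highest rate.
    draws = {name: random.betavariate(a, b) for name, (a, b) in arms.items()}
    return max(draws, key=draws.get)

def record_outcome(arm: str, no_clot: bool) -> None:
    # Bayesian update: bump the success or failure count for that arm.
    arms[arm][0 if no_clot else 1] += 1

arm = assign_next_patient()
record_outcome(arm, no_clot=True)
print(arm, arms)
```

As outcomes accumulate, the better arm draws higher samples more often and receives more patients, which is exactly the “bet on the best horse” behavior described above.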

The NYTimes is stuck again in its lack of depth.  Analyzing observational data can help us design the most efficient clinical trials to test what we genuinely don’t know.  Big Data analysis of observational data and clinical trials are not oppositional.  They are complementary.

About the Author

Sean X. Luo, M.D., Ph.D., is a physician-scientist working at Columbia University and The New York State Psychiatric Institute.
