The Pfizer grant provided for the development and testing of quality indicators, simple criteria that can be easily applied to medical records to evaluate whether particular actions were undertaken.
A quality indicator is a measure that is used to evaluate the quality of care. While indicators should be tied to evidence as to what actions or procedures actually improve patient outcomes, they are often just a matter of professional consensus. Once defined, quality indicators can be used to monitor medical care and even to enforce requirements that certain procedures be performed. Quality indicators can trump best evidence practices and even clinicians’ judgments as to what is best for a particular patient at a particular time.
After specifying the indicators, the Pfizer-funded project involved applying them to medical record data from 1,600 actual cancer patients from across South Florida.
The first quality indicator specifies there should be evidence in the medical record that the patient's current emotional well-being was assessed within 1 month of the patient's first visit with a medical oncologist. The second quality indicator stipulates that, if a problem with emotional well-being was identified, there is evidence in the patient's medical record that action was taken to address the problem or an explanation is provided for why no action was taken. Measurement of these indicators is operationalized by formulating questions that can be answered yes or no on the basis of the review of an individual patient's medical record.
The first indicator could be satisfied by any of the following: a copy of a screening instrument such as a questionnaire, electronic results from screening with a computer touchscreen, or simply a note that includes the words "coping," "adjustment," "distress," "emotional," "depression," or "anxiety."
If any of this evidence appeared in the record, the second quality indicator would be applied and could be satisfied by evidence that the problem was addressed or a statement why it was not. Evidence that the problem was addressed might include a note indicating the oncology team provided treatment or a referral was made to a mental health or other professional.
Professional organizations have strongly promoted screening, particularly organizations whose members would provide the services to patients who screen positive for distress. However, screening for emotional distress is not yet part of routine cancer care. There is little evidence that routine screening of all cancer patients for distress has had sustained implementation in even the most well-resourced comprehensive cancer centers. Adding a quality indicator that can be met by administering a screening tool to detect distress would seem like one way to force the hand of cancer centers to make sure that patients are screened. Yet there is no strong evidence that implementing screening has any advantage over simply making services readily available and advising cancer patients how they could access them.
Routine screening involves more than passing out screening questionnaires or offering patients computer touchscreen assessments. For screening to have any value, a professional has to follow up with an interview of patients who screen positive to determine the source of their distress and any needs to which their distress may be tied. Studies find that most patients who are distressed don't want services from the medical center where they are being treated for cancer, whether because they consider biomedical treatment a higher priority, believe they can manage their distress themselves, are already getting help elsewhere, or simply don't want to have to go back to the cancer center for an appointment, a place that might be many miles from their home. Patients who are distressed express more interest in services than patients who are not, but on balance, about equal numbers of distressed and nondistressed patients actually seek services. Think of it: cancer patients might want a chance to get better information about their condition or about wound healing after surgery, but these concerns would not be registered in a quick assessment of distress with a touchscreen.
About a third of patients who screen positive for distress are clinically depressed, but oncologists typically don't know the diagnostic criteria, nor are they interested in taking the time to conduct a structured clinical interview to formally diagnose patients. Depression cannot be diagnosed with a questionnaire, which can only establish that patients are at risk for depression and need further evaluation. Furthermore, if patients are going to be evaluated for depression, a safety plan needs to be in place so that there can be a prompt intervention if a patient indicates in the course of such an interview that they don't want to live or that they would be better off dead. Most such complaints turn out to be false positives: they are not meant literally but are merely an expression of the patient's struggle with cancer. But determining the level of threat of self-harm requires the time of a professional.
So it is not surprising that routine screening of all cancer patients is not being implemented as recommended in many settings in the United States. But with quality indicators providing a ready means of monitoring whether particular clinicians are screening their patients for distress, the focus can shift from organizations merely recommending routine screening to enforcing its implementation. If an action can be easily monitored, feedback can be provided to clinicians, and accrediting agencies can monitor and enforce compliance. It would easily be possible to block an oncologist from closing a patient's electronic medical record and going on to the next patient if the oncologist doesn't satisfy quality indicators requiring asking about distress and taking action if the patient indicates distress.
Undoubtedly, many advocates of screening will welcome the establishment of quality indicators, but they may be getting something different from what they expect. Oncologists are paid by the procedure, not by the length of consultation, and many will become frustrated by having to ask about distress when so often the inquiry only precipitates a discussion that does not lead to treatment or referral. And referrals of cancer patients to mental health professionals are notoriously hard to get completed. In some settings, attempts to make such referrals are labeled as sending the patient to "the black hole."
So what may happen is that oncologists will satisfy the quality indicators by asking a yes/no question about distress and prescribing antidepressants to patients who indicate they are distressed. The oncologists will do so without a formal diagnosis and without follow-up. Eli Lilly has already investigated whether antidepressants can have a general effect on the quality of life of cancer patients who do not meet criteria for depression. An ambitious trial retained only 17% of the patients who were given an antidepressant without being depressed, and so the results were not valid, other than suggesting such interventions were not acceptable to patients.
A study that Steven C. Palmer, colleagues, and I conducted with over 400 breast cancer patients interviewed shortly after diagnosis found a high rate of prescription of antidepressants. Most patients who were clinically depressed had been prescribed an antidepressant, but so had many who had never in their lives been depressed for even two weeks straight.
Enforcement of adherence to the quality indicators may simply generate a lot more prescriptions for antidepressants rather than any improvement in patient outcomes. This possibility was not noted in an article describing how the quality indicators could easily be applied to the medical records of actual cancer patients, authored by Paul Jacobsen, the psychologist to whom Pfizer awarded the more than $10 million. That article at least acknowledged Pfizer as the funding source, but subsequent articles by Jacobsen praising the quality indicators carry no such indication of an obvious conflict of interest.
The psychologist getting the money from Pfizer is not in a position to prescribe antidepressants himself, but he is a member of numerous professional committees promoting screening, and as a member of these committees, he has privileged access to publishing in high-impact journals like the Journal of Clinical Oncology with lessened oversight by peer review. He has exercised that privilege and is now even an associate editor of that journal. Arguably the conflict of interest extends to his activities as editor, making judgments about which articles are accepted or rejected.

Regardless of whether it was the intention of the $10 million, his promotion of quality indicators is likely to greatly benefit Pfizer and other pharmaceutical companies. There is thus a payback to Pfizer for a grant to a psychologist, but the benefits for improving patient quality of life and well-being are less clear. If anything, implementation of these particular quality indicators may mean that patients get less of a chance to talk about pressing needs that could be met within the context of cancer care than they would in the absence of screening. Perhaps quality indicators should register whether every cancer patient gets an unhurried chance to meet with a professional who has time to talk with them, not necessarily a busy, overcommitted oncologist or oncology nurse specialist. Somehow I think that would be of less interest to Pfizer than the current initiative.