Doctors Play the Numbers Too

In the US, some health insurance companies are posting report cards on individual doctors online to encourage patients to look for the “best” doctors. The rationale is to “incentivize” physicians to practice “quality” medicine by concentrating on particular evidence-based measures of efficient health care. Sound familiar?

Just as in the case of schools and teachers, report cards on doctors create a climate of failure and inadequacy and serve to demoralize rather than “incentivize”. As one doctor pointed out last month in the prestigious New England Journal of Medicine:

The quarterly “report card” sits on my desk. Only 33% of my patients with diabetes have glycated hemoglobin levels that are at goal. Only 44% have cholesterol levels at goal. A measly 26% have blood pressure at goal. All my grades are well below my institution’s targets.
It’s hard not to feel like a failure when the numbers are so abysmal. We’ve been getting these reports for more than 2 years now, and my numbers never budge. It’s wholly dispiriting….

The quarterly report card stokes a perennial fear: maybe I really am a substandard doctor, and these statistics simply shed light on what I’ve refused to accept. If I’m doing my patients a disservice, then I’m morally obliged to vacate my office to make room for a more competent practitioner. [Ofri 2010: 606]

Such comments mirror those of many teachers and schools about being evaluated on the basis of test scores that are influenced by many factors outside their control. Just as teachers cannot control the influence of poverty, poor nutrition, family support and a host of other factors on student learning, doctors cannot control many of the factors that affect the treatment of chronic conditions, such as whether patients stop smoking, eat better, exercise more or can afford their medication:

We all want our patients to achieve the best health possible, but most doctors don’t actually have control over the challenges of a complicated disease like diabetes – which is probably why my numbers haven’t budged in 2 years. [Ofri 2010: 607]

Just as in the case of teaching, these performance measures overlook the complexities patients bring to the surgery.

….although these quality measures focus on diabetes in pristine isolation, my patients inconveniently carry at least five other diagnoses and routinely have medication lists in the double digits. Practicing clinicians know from experience that micromanagement of one condition frequently leads to fallout in another. [Ofri 2010: 606]

Moreover, performance measures ignore important qualities of effective doctors, just as they ignore important qualities of effective teachers and schools:

I’ve always wanted to ask these analysts how they choose a physician for their sick child or ailing parent. Do they go online and look up doctors’ glycated hemoglobin stats? Do they consult a magazine’s Best Doctor listing? Or do they ask friends and family to recommend a doctor they trust? That trust relies on a host of variables — experience, judgment, thoughtfulness, ethics, intelligence, diligence, compassion, perspective – that are entirely lost in current quality measures. These difficult-to-measure traits generally turn out to be the critical components in patient care….

Doctors who actually practice medicine – as opposed to those who develop many of these benchmarks – know that these statistics cannot possibly capture the totality of what it means to take good care of your patients. They merely measure what is easy to measure. [Ofri 2010: 607]

Just as in the case of schools, the measures are inflexible as to what constitutes success or failure:

Success and failure in these measures tend to be presented as a binary function, although clinical risk is almost always a variable function. My patients whose blood pressure is 140/85 (quite near the 130/80 goal) are counted as failures equivalent to patients with a blood pressure of 210/110, even though their risks for adverse cardiovascular outcomes are vastly different. [Ofri 2010: 606]
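The point about binary scoring can be made concrete. The sketch below is purely illustrative and is not taken from Ofri’s article or from any actual report card system; it uses Python and the 130/80 goal from her example (the readings are assumed for illustration) to show how a pass/fail metric records a near-miss and an extreme reading as identical failures, discarding the risk gradient she describes.

```python
# Illustrative sketch only: a binary "at goal" metric of the kind Ofri criticises.
# The 130/80 goal comes from her example; the sample readings are assumed.

GOAL_SYSTOLIC = 130
GOAL_DIASTOLIC = 80

def at_goal(systolic: int, diastolic: int) -> bool:
    """Binary report-card measure: pass only if both readings meet the goal."""
    return systolic <= GOAL_SYSTOLIC and diastolic <= GOAL_DIASTOLIC

for systolic, diastolic in [(128, 78), (140, 85), (210, 110)]:
    status = "at goal" if at_goal(systolic, diastolic) else "not at goal"
    print(f"{systolic}/{diastolic}: {status}")

# 140/85 and 210/110 are both recorded simply as "not at goal", even though
# the cardiovascular risk they carry is vastly different.
```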

Just as in the case of schools, doctors have a strong incentive to “play the numbers” to look good. One way is to “cherry pick” patients just as schools “cherry pick” students. As one anonymous doctor wrote recently on the popular health care blog KevinMD.com:

Doctors have two major ways of responding to those report cards. We can change the ways we practice, such that our patients will have better cholesterols and cost our hospitals less. Or we learn from insurance companies. Cherry pick compliant, uncomplicated, generally healthy patients, and gently encourage … the complicated patients to seek care elsewhere. [9 September 2010]

The latter response means that report cards can lead to worse health outcomes, just as they can in education. There is extensive evidence that report cards have led doctors to turn away the sickest and most severely ill patients so as to avoid poor outcomes and lower public ratings, just as schools with control over their enrolments turn away low-achieving students [Werner & Asch 2005; Rothstein et al. 2008].

A major study of report cards on physician and hospital mortality rates for coronary artery bypass graft (CABG) surgery in New York State and Pennsylvania found that they led health care providers to shift surgical treatment for cardiac illness toward healthier patients and away from sicker ones.

….report cards led to increased expenditures for both healthy and sick patients, marginal health benefits for healthy patients, and major adverse health consequences for sicker patients.
Report cards led to a decline in the illness severity of patients receiving CABG in New York and Pennsylvania relative to patients in states without report cards….[and] to significant declines in other intensive cardiac procedures for relatively sick AMI (acute myocardial infarction) patients…[who] experienced dramatically worsened health outcomes. Among more severely ill patients, report cards led to substantial increases in the rate of heart failure and recurrent AMI and, in some specifications, to greater mortality. [Dranove et al. 2003: 577, 583]

Another study concluded:

….the value of publicly reporting quality information is largely undemonstrated and public reporting may have unintended and negative consequences on health care. These unintended consequences include causing physicians to avoid sick patients in an attempt to improve their quality ranking, encouraging physicians to achieve ‘target rates’ for health care interventions even when it may be inappropriate among some patients, and discounting patient preferences and clinical judgment. [Werner & Asch 2005: 1239]

A recent review of research studies noted that several found unintended consequences of public report cards, such as a reluctance to care for high-risk patients [Fung et al. 2008].

All this means that, just as in the case of schools and teachers, a doctor’s report card can be highly misleading about the quality of a practice:

It purports to make a statement about comparative quality whose objectivity is a fallacy…..It offers patients a seductively scientific metric of doctors’ performance – but can easily lead them astray. [Ofri 2010: 607]

Finally, as in education, the evidence that report cards in health care will improve patient outcomes is slim; at best, it is mixed. One study concluded:

Public reporting of quality information promotes a spirit of openness that may be valuable for enhancing trust of the health professions, but its ability to improve health remains undemonstrated, and public reporting may inadvertently reduce, rather than improve, quality. Given these limitations, it may be necessary to reassess the role of public quality reporting in quality improvement. [Werner & Asch 2005: 1239]

Despite strong advocacy and the increasing use of report cards on health treatments, there seems to be a surprising dearth of substantive evidence about their impact on the quality of health care. This is also the case in education, where there is little evidence that reporting the results of schools and individual teachers leads to higher student achievement.

A recent review of academic studies on the impact of report cards for health care plans, hospitals and doctors found little substantiated effect on clinical outcomes [Fung et al. 2008: 111-123]. It noted that evidence on health outcomes is scant, particularly about individual providers and practices, and that rigorous evaluation of many major public reporting systems is lacking.

The review found some evidence that publicly releasing performance data stimulates quality improvement activities in hospitals, although the studies were mostly descriptive. Overall, the review concluded that the effect of public reporting on effectiveness, safety, and patient-centredness remains uncertain.

The findings led one doctor to comment:

Before devoting more resources to further refine these techniques, we must question whether quality reporting (as currently conceived) is truly a sufficient tool….the analysis by Fung and colleagues does not inspire a call for better report card techniques – it calls instead for study of the social facts that render the reports, to date, so inefficacious. [Graham 2008: 883]

Ditto again for education.

Trevor Cobbold

References
Dranove, D.; Kessler, D.; McClellan, M. & Satterthwaite, M. 2003. Is More Information Better? The Effects of Report Cards on Health Care Providers. Journal of Political Economy, 111 (3): 555-588.

Fung, C.; Lim, Y-W; Mattke, S.; Damberg, C. & Shekelle, P. 2008. Systematic Review: The Evidence That Publishing Patient Care Performance Data Improves Quality of Care. Annals of Internal Medicine, 148 (2): 111-123.

Graham, J. 2008. Is Reporting of Quality Scores Worth Refining? Annals of Internal Medicine, 148 (11): 883.

Ofri, D. 2010. Quality Measures and the Individual Physician. New England Journal of Medicine, 363 (7): 606-607.

Rothstein, R.; Jacobsen, R. & Wilder, T. 2008. Grading Education: Getting Accountability Right. Economic Policy Institute, Washington DC.

Werner, R. & Asch, D. 2005. The Unintended Consequences of Publicly Reporting Quality Information. Journal of the American Medical Association, 293 (10): 1239-1244.
