WTBS 6 Measuring Quality – The Value of Health Care Metrics

Dr. Howard Ovens, February 2016

 

Three Pitfalls in Measuring Quality – And How to Fix Them

“If you cannot measure it, you cannot manage it.”

—Peter Drucker, the “founder” of modern management

Recently, the issue of measurement in health care has received a great deal of attention, with the New England Journal of Medicine and the New York Times each publishing articles on this subject.1,2 In both cases, the authors lament the way we have come to measure the quality of health care being delivered to patients, specifically the underlying ideology (i.e., “Taylorism,” which suggests everything that can be measured should be measured) and a frequently used method (LEAN). While the authors’ eloquent critiques of our current measurement paradigm are compelling, they miss the point. The failures we have experienced in the area of measurement are not due to faulty principles (measurement is necessary) or theories (LEAN should not be discounted), but rather because of misguided applications by people who have lost sight of the underlying goals.

Medicine is one of the oldest professions, and for centuries physicians operated with a quasi-sacrosanct freedom, with little transparency and almost no accountability. On the heels of successful revolutions in the automotive and airline industries, health-care leaders posited that measuring processes and outcomes in medicine would also lead to better patient outcomes. The pendulum swung from a place of complete physician autonomy to one where measurements encroached on clinical care (“did you spend longer than 15 minutes with any patient today?”) and personal freedoms (“you were timed as having taken a long bathroom break today”). This has, unsurprisingly, created tensions between administrators and front-line providers, who are pushing back hard against the use of these measurements.

But before the pendulum swings back to an era in which transparency and accountability are once again trivialized, I would like to offer a perspective on where we may have gone wrong. I will focus here on three conceptual issues that have plagued the measurement of quality in health care: the sheer number of metrics, the way they are collected, and how they are interpreted.

 

Number of metrics: Drinking from a fire hose

A 2001 report from the Institute of Medicine, Crossing the Quality Chasm,3 has done wonders for the fields of quality improvement and patient safety. It brought mainstream attention to these topics and forced clinicians and administrators to take a hard look at their individual and collective practices. However, although the report had undeniably positive repercussions, it also had unforeseen consequences. In the report, the discussion of “quality” encompassed issues pertaining to safety, timeliness, effectiveness, efficiency, equitability, and patient-centeredness. It cast such a wide net over the concept of “quality” that virtually everything fell under that umbrella. This led health-care administrators to create and collect countless new metrics, and instead of sipping from a fountain of knowledge it seemed we were left to attempt to drink from a fire hose.

What started as a useful endeavour picked up dizzying speed to the point where hundreds of scores and thousands of metrics are now being collected daily in many institutions. Instead of providing a clearer picture of the strengths and weaknesses of the system, this over-abundance of data creates a confusing and often conflicting picture of reality. Identifying a meaningful signal from the underlying noise becomes a guessing game, drawing conclusions becomes challenging, and identifying priorities is almost impossible.

The myth that increased measurement and data lead to improved outcomes has been challenged by the experience of hospitals participating in the National Surgical Quality Improvement Program (NSQIP). Sponsored by the American College of Surgeons, NSQIP is a complex and resource-intensive program in the United States. Using diligent and extensive collection of patient-, provider-, and system-level data, as well as the delivery of support and feedback to all parties, participating hospitals aim to reduce complications, deaths, and hospital costs. Despite early and sporadic success stories, comprehensive reviews of the program demonstrated that NSQIP hospitals have not seen greater improvements in their outcomes than non-NSQIP hospitals over time.4,5 That is not to say that these efforts are futile, as some hospitals did see meaningful improvements in their processes and outcomes. Rather, it shows that the detailed measurements and analyses of the NSQIP network did not necessarily lead to improved outcomes.

Instead of measuring more metrics, maybe we should go back to the drawing board to pick and choose those metrics that are truly relevant.

 

Data collection: Whose job is it, anyway?

One of the many reasons health care providers chose their field of work is the desire to interact with others, and the joy and pride they feel when they contribute to the betterment of someone’s health. That is why many providers resent tasks that distract them from precious moments at the bedside—whether the tasks are medico-legal, administrative, or logistical.

While the complete and legible documentation of patient encounters remains crucial to providing excellent care, administrative duties have increased over time and now intrude on the time providers can devote to patient care. The ever-increasing number of quality metrics being collected has only exacerbated this phenomenon. Many providers have complained that routine visits or discharge planning now consist of such interminable box checking that there is no time to answer patients’ questions. At best, the collection of innumerable metrics has become a mechanical and meaningless exercise. At worst, it now alienates patients and distracts providers from engaging in patient-oriented conversations.

In pursuing their goal of improving the system through measurement and standardization, administrators have downloaded their agenda onto front-line providers. The more time clinicians spend collecting data, the less patient care they provide. We need to shift this paradigm. Instead of having clinicians support administrators in their insatiable appetite for data collection, administrators should build systems to support the clinician while seamlessly collecting quality metrics. Clinicians should not only be surveyed on the selection of the metrics, but also be intimately involved in the design of the collection systems.

One example of data collection that does support clinicians is recording the time from arrival to physician initial assessment. It is minimally burdensome to front-line providers but supports fair compensation for institutions and physicians, ensures that standards of care are maintained, and safeguards against some medico-legal concerns.

In another example, electronic medical records (EMRs) and health informatics in general hold great promise in the field of measurement. However, significant challenges remain. Those who buy—and therefore drive the design of—EMRs are the administrators, who may have a tendency to focus on metrics and excessive measurement at the expense of patient-related priorities.

 

Data interpretation: The squeaky wheel doesn’t necessarily need the grease

For providers and administrators alike, the most rewarding aspect of quality measurement, other than seeing patient outcomes improve, is to be able to use the data to implement positive changes. The problem is, the conclusions reached by administrators can often be incorrect.

Let’s take the example of emergency physicians Dr. A and Dr. B, who work in the same busy community hospital emergency department. Dr. A sees twice as many patients per hour as Dr. B. In and of itself, this doesn’t necessarily point to Dr. A providing better care than Dr. B (and quite possibly could mean the opposite). But what if Dr. A’s CT/MRI scan utilization, length-of-stay times, consultation rates, and admission rates are all lower, and her return visit rate is half that of Dr. B’s? Patient outcomes are hard to come by, but on paper this all points to Dr. A providing better and faster care than Dr. B. At face value, Dr. B appears to be slow, insecure, and possibly unsafe. But what if Dr. B is the one who “cleans up” after the other physicians? What if he routinely attends to the elderly and complex patients to provide exemplary and patient-centred care, instead of picking up the young and “easier to treat” patients, as Dr. A does? Which physician would you want taking care of your elderly mother?

This exemplifies the inherent limitations of using data without fully understanding the perspectives of front-line providers. In this case, clinicians would have recognized that major differences in case-mix between these two physicians render their comparison meaningless. The same concept can be applied to broader analyses of hospital systems at the regional level. Worse patient outcomes in certain hospitals are sometimes due to the lower socio-economic status of the population they serve rather than the actual care they provide. It is not that the measurements are wrong or biased; it is that the interpretation of the data is more complex than a simple spreadsheet analysis.

Front-line providers may not necessarily be the best people to interpret the data. In fact, many become defensive and attack the validity of any metric that portrays them in ways that are inconsistent with their own overly positive self-assessment. However, front-line providers must be part of the discussion. Having everyone at the same table ensures not only that metrics are understood and interpreted in a way that matches reality, but also that metrics are utilized in a productive and patient-centred fashion.

At the end of the day, transparency is good, accountability is necessary, and measurements are here to stay. But to reap the greatest benefits from measuring quality, we must ensure we do it well. We must abandon the “everything can and must be analyzed” mantra and instead strive to measure only what makes sense. The leaders at the helm of hospitals and health-care systems must ensure that clerical and administrative supports are in place to help the front-line providers in the collection of key data. And, importantly, the interpretation of data should remain focused on patient care.

Health care has enjoyed tremendous successes as a result of the diligent and thorough measurements undertaken in the past decade. Let’s all work together to devise an improved system of measurements, which includes learning from our past errors. Our patients deserve it.

 

—Dr. Lucas Chartier is an emergency physician at the University Health Network and North York General Hospital’s emergency departments in Toronto, Ontario. He is also the Director of Quality and Innovation at the University Health Network Emergency Department and has a Master of Public Health, with a focus on quality improvement, from Harvard University.

 

Dr. Ovens and Dr. Chartier have no conflicts of interest to declare.

 

References

  1. Wachter RM. How Measurement Fails Doctors and Teachers. New York Times. Jan. 16, 2016. Accessed Feb. 28, 2016.
  2. Hartzband P, Groopman J. Medical Taylorism. N Engl J Med. 2016;374:106-108.
  3. Institute of Medicine (IOM). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
  4. Osborne NH, Nicholas LH, Ryan AM, et al. Association of Hospital Participation in a Quality Reporting Program With Surgical Outcomes and Expenditures for Medicare Beneficiaries. JAMA. 2015;313(5):496-504.
  5. Etzioni DA, Wasif N, Dueck AC, et al. Association of Hospital Participation in a Surgical Outcomes Monitoring Program With Inpatient Complications and Mortality. JAMA. 2015;313(5):505-511.


About the Author:

Howard Ovens is the Director of the Dept. of Emergency Medicine for the Sinai Health System in Toronto, Canada. He is a Professor in the Department of Family and Community Medicine at the University of Toronto and a member of the CAEP Public Affairs Committee. He is also the Lead for EM for the Toronto Central Local Health Integration Network (LHIN) and the Ontario Government Expert Lead for EM. He tweets on issues of public policy and administration related to EM and is the lead author of the EM Cases ‘Waiting to Be Seen – Where EM Policy Meets Practice’ blog series.
