POINT: YES. PUBLIC REPORTING ON THE QUALITY OF ALL HEALTH CARE SERVICES is key to the nation’s efforts to develop a value-driven health care system, says Carolyn M. Clancy, a general internist and the director of the Agency for Healthcare Research and Quality, one of twelve agencies within the U.S. Department of Health and Human Services.

A value-driven system is one in which substantial investments made in health care result in improved outcomes for patients. For such a system to work, patients, providers, policymakers and other stakeholders need reliable, comparable information to make informed decisions.

We are seeing escalating health care costs and persistent evidence of a substantial gap between the best possible care and that which is routinely delivered. These observations have helped spur interest in the use of performance measurement to drive clinical improvements and inform patient choices.

Although report cards and the performance measures that populate those cards are in a relatively early stage of development, we know that public reporting is associated with significant improvements in care. The 2006 National Healthcare Quality Report, produced by the Agency for Healthcare Research and Quality, found that hospital measures of quality improved at a median annual rate of 7.8%.

We have also learned that keeping data private is not as effective at driving quality improvement as public reporting. In Wisconsin, hospitals strongly encouraged by local employers to submit data on performance were split into two groups: those that received their results privately and those whose results were posted publicly. One year later, the latter group had undertaken many more quality improvement efforts than the hospitals with privately reported data.

In a follow-up study conducted two years after the Wisconsin data was made public, one-third of hospitals with public reporting had significantly improved their performance for obstetric conditions and only 5% saw a decline. For those whose data was reported privately, only one-fourth saw improvements in obstetric performance and about 14% saw a decline.

A crucial next step is to engage consumers. Although the public expresses strong interest in knowing the quality of hospitals and physicians, evidence that people actually pay attention to and use the data is limited. However, I am encouraged that the percentage of people who do so rose from 4% in 2000 to 12% in 2005.

For reporting on physician performance to be successful, measures must be valid, data must be risk-adjusted and samples must be of sufficient size. Given that high-quality care is often a team effort, an important but as-yet-unanswered question is: For which clinical situations should quality be measured at the individual level and for which should it be assessed at the team level?

To get this right, physician leadership is crucial. Already, the nation’s physician organizations have led efforts to develop effective, fair physician performance measures and report cards. They are collaborating with consumers, policymakers, insurers and others to ensure that report cards are used to improve care rather than to deny coverage.

Transparency regarding quality is a powerful tool if used wisely. The ultimate goal is not to create better report cards, but to add value to a clinician’s work by making it easier to provide the best care and by keeping the focus on what physicians believe to be in the best interest of their patients.

COUNTERPOINT: No, because the wrong kind of performance measurements can do more harm than good, says David F. Torchiana, a cardiac surgeon and the head of the Massachusetts General Physicians Organization.

Proponents of publicly reporting physician outcomes give two reasons for doing so: to help patients make better choices and to use public opinion as a catalyst for improvement. Both objectives are worthy, but for public reporting to be justified, its benefits must outweigh its shortcomings.

The theoretical ideal might be a definitive physician rank order for every condition. It doesn’t exist. The primary data sources used for ranking are claims data, which are gathered for billing and payment. They are a poor surrogate for real clinical detail, which is more challenging and costly to collect. Moreover, only one operation (coronary bypass graft surgery) is performed often enough to meaningfully differentiate mortality rates among hospitals, and it takes three years of data to make that distinction. Comparing individual practitioners is harder still, because the caseload per physician is even smaller. Attributing an outcome to a single physician is also of questionable validity, since most care is the product of a team of providers. Finally, outcomes are labile: This year’s outstanding performer may be in the middle of the pack next year, or worse, and then bounce back the year after that. Instead of an authoritative rank order we have an unstable, imprecise process based on soft data that is often invalid at the physician level and shouldn’t be attributed at that level anyway.
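
To see why caseload matters so much, consider a rough power calculation; the specific figures are illustrative assumptions, not numbers drawn from any report card. Suppose baseline mortality for a procedure is about 2% and we want to reliably distinguish a provider whose true rate is twice that, 4%, at the conventional 5% significance level with 80% power. The standard two-proportion approximation gives

  n ≈ (1.96 + 0.84)^2 × [0.02(0.98) + 0.04(0.96)] / (0.04 - 0.02)^2 ≈ 7.84 × 0.058 / 0.0004 ≈ 1,100 cases per group.

A busy cardiac surgery program can accumulate that many coronary bypass cases over a few years; an individual surgeon, or a hospital performing a less common operation, generally cannot.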

What about public reporting drawing greater attention and therefore stimulating a stronger response? Maybe, but it can be as much of a danger as a benefit. The experience with cardiac surgery report cards demonstrates that there are three ways to improve outcomes in response to a public report card. One is good: better operations and better patient care. The other two are less positive. “Up-coding,” or reporting more severe patient co-morbidities so that the expected risk goes up, improves risk-adjusted outcomes without actually improving the care at all. In New York, where outcomes were first reported publicly almost 20 years ago, risk-adjusted cardiac surgical mortality dropped by 36%, from 4.2% to 2.7%, in the first few years. To produce this number, actual mortality fell by 11% while predicted mortality rose by 37%, driven by coding changes that included a sevenfold increase in the recorded incidence of renal failure as a risk factor and a greater than fourfold increase in congestive heart failure.
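
The arithmetic behind that drop is worth spelling out. Under the usual observed-to-expected construction of risk-adjusted mortality (a standard formulation, assumed here to describe how these figures combine rather than quoted from the New York reports), the risk-adjusted rate moves in proportion to observed mortality divided by expected mortality:

  change factor ≈ 0.89 / 1.37 ≈ 0.65,

a decline of roughly 35%, which is essentially the reported fall from 4.2% to 2.7% (2.7 / 4.2 ≈ 0.64). On this accounting, the coding-driven rise in expected risk contributes considerably more of the apparent improvement than the modest fall in actual mortality.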

The third way to “improve” is clearly the most distressing: Stop caring for the sickest patients, the very same group of patients who typically stand to gain the most from treatment. The most important evidence that this happens comes from public data on coronary angioplasty. In New York, after years of reporting, the mortality rate was 0.6% in 2005. In Massachusetts’ first report that same year, mortality was 1.7%. Is angioplasty really three times safer in New York? In fact, angioplasty for equivalent cases had about the same outcome in each state. The difference, rather, was in the mix of patients selected for the procedure: In Massachusetts there were about six times more angioplasty patients considered highest-risk (suffering from shock because of extensive acute cardiac damage) than in New York. For such a patient, angioplasty can be truly life-saving: The odds of survival at five years are 66% better with angioplasty, but the risk of mortality is much higher than in an elective setting. This data suggests that a lower report-card mortality rate for angioplasty has been bought at the price of potentially poorer survival for five out of six patients with a heart attack and cardiogenic shock, a trade no one would logically make.
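
A simple case-mix calculation shows how patient selection alone can produce a gap of this size. The numbers that follow are illustrative assumptions chosen to make the arithmetic concrete, not figures from either state’s report. Suppose mortality is about 0.4% for non-shock angioplasty and about 40% for patients in cardiogenic shock, and suppose shock patients make up 0.5% of cases in New York versus 3% in Massachusetts (the sixfold difference cited above). Then:

  New York: 0.995 × 0.4% + 0.005 × 40% ≈ 0.6%
  Massachusetts: 0.97 × 0.4% + 0.03 × 40% ≈ 1.6%

With identical outcomes for identical patients, the state that treats more of its sickest patients simply reports the higher mortality rate.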

A negative public report card threatens the reputation and livelihood of the physician portrayed; that is what makes reporting such a powerful stimulus for practice change, for better or worse. The benefits of public reporting have been oversold and the downside ignored. Most attempts at physician-level outcomes reporting are inaccurate and misleading. Even when the data is valid, much of the resulting improvement comes from coding changes or negative changes in practice. Public reporting at the individual physician level is a two-edged sword. Let’s wield it carefully.