Recently, Dr. Ashish K. Jha, a Harvard internist and health policy researcher, published an opinion piece in JAMA advocating public reporting of individual surgeon outcomes. I have followed Dr. Jha for many years on Twitter and have enjoyed his blog posts and papers. This time, however, I must respectfully disagree with much of what he wrote. He tries, but fails, to refute the arguments that critics of individual surgeon reporting have put forward.

For example, Jha says the way to solve the problem of small sample sizes is to aggregate cases over several years. For most operations, aggregating over 3 to 4 years would still not yield enough volume for proper analysis. He also feels that combining the outcomes of similar operations could make it easier to assess a given one. To illustrate the point, he wrote, “A surgeon’s performance on esophagectomy improves with the number of other similar surgeries she performs.” I can’t think of a single operation that is like an esophagectomy, because the esophagus differs anatomically from any other organ.

Jha says publicly reported data would be enhanced by including confidence intervals “to highlight the level of imprecision so that those reading the report are aware of the statistical limitations.” I chuckled at that one. Many physicians don’t understand confidence intervals; to expect the public to do so is wishful thinking.

The notion that surgeons would avoid difficult cases so as not to tarnish their records is dismissed by Jha, who says, while citing no references, “the evidence on the extent to which this occurs is weak and anecdotal.” I will mention a couple of 2005 studies he may have missed.

A survey by investigators from the University of Rochester found that “79 percent of interventional cardiologists agreed or strongly agreed that the publication of mortality statistics has, in certain instances, influenced their decision regarding whether to perform angioplasty on individual patients.” A similar percentage felt that some patients who might have improved with angioplasty might not have had it done because mortality rates for individual physicians were being reported. Of the 186 cardiologists who received surveys, 120 (65 percent) responded. Over 88 percent said that physicians might report higher-risk comorbidities to enhance their risk-adjusted mortality statistics. The paper appeared in Archives of Internal Medicine.

The second paper, in the Journal of the American College of Cardiology, compared the patient characteristics, indications, and outcomes of over 11,000 patients from an eight-hospital percutaneous coronary intervention (PCI) database in Michigan, which did not have public reporting, with a statewide database of 69,000 patients from 34 hospitals in New York, a public reporting state. The authors found that New York patients with acute myocardial infarction and cardiogenic shock underwent PCI significantly less often than Michigan patients, who had more associated congestive heart failure and noncardiac vascular disease. Michigan patients had a significantly higher in-hospital mortality rate before adjustment for comorbidities. The authors concluded that the case mix of patients was significantly different in the two states. It appeared that cardiologists in New York tended not to intervene on higher-risk patients, and the authors speculated that public reporting might be the reason.
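To make the small-numbers point concrete, here is a minimal sketch, using Python with scipy and purely hypothetical case and death counts, of the confidence interval around a single surgeon’s mortality rate. Even after aggregating several years of a low-volume operation, the exact (Clopper-Pearson) interval is far too wide to tell a good surgeon from a poor one.

```python
# Sketch only: the volumes and death counts below are hypothetical,
# chosen to illustrate how wide an exact confidence interval is
# when a surgeon's case numbers are small.
from scipy.stats import beta

def exact_ci(deaths, cases, alpha=0.05):
    """Clopper-Pearson (exact binomial) confidence interval for a mortality proportion."""
    lower = 0.0 if deaths == 0 else beta.ppf(alpha / 2, deaths, cases - deaths + 1)
    upper = 1.0 if deaths == cases else beta.ppf(1 - alpha / 2, deaths + 1, cases - deaths)
    return lower, upper

# Hypothetical surgeon: roughly 15 esophagectomies per year, aggregated over 3 years.
deaths, cases = 2, 45
lo, hi = exact_ci(deaths, cases)
print(f"Observed mortality: {deaths / cases:.1%}")
print(f"95% CI: {lo:.1%} to {hi:.1%}")  # the interval spans an order of magnitude
```

Run with these assumed numbers, the observed rate is about 4 percent, but the 95 percent interval stretches from well under 1 percent to the mid-teens, which is exactly the imprecision Jha proposes to disclose in a footnote.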
Because a randomized, prospective, double-blind study of public reporting of outcomes is impossible to do, the evidence will remain “weak and anecdotal.”

In a blog post that appeared after Jha’s piece was submitted, cardiologist Anish Koka described a 54-year-old man with end-stage renal disease, cirrhosis secondary to hepatitis C, and a prior aortic valve replacement. Because he was suffering from endocarditis, he needed another aortic valve replacement, an operation that would have given him a 50 percent chance of survival versus 100 percent mortality without it. No surgeon in Philadelphia would operate. He was turned down by Johns Hopkins too. The reason: public reporting of outcomes. Koka cited a paper that found “New York patients with acute myocardial infarction and cardiogenic shock were less likely to undergo coronary angiography and PCI and waited significantly longer to receive coronary artery bypass grafting than their non-New York counterparts.” The patient died without having surgery.

Jha believes the most important reason to report data on individual surgeons publicly is that “it’s information the patients want.” Would they still want it if they knew it might someday result in the denial of a life-saving operation for them or a loved one?

You may think that because I am a surgeon, I am simply trying to protect my reputation. Not so. I am retired from the practice of general surgery; public reporting of outcomes will not affect me personally. And I’m not the only one who feels public reporting is unfair. A group from Imperial College London published a paper in Health Affairs about the strengths and weaknesses of public reporting of surgeon-specific outcome data and concluded, “We would argue that given the small number of procedures performed by an individual surgeon for many specialties, the reporting of mortality data is not valid. It would be more appropriate to publish risk-adjusted hospital mortality data, which are statistically more robust and reflect a team-based approach to health care and the resource allocations within a hospital.”

Commenting on a story entitled “Calif. hits nerve by singling out cardiac surgeons with higher patient death rates,” respected Yale cardiologist and health care researcher Harlan Krumholz tweeted, “We shouldn’t publicly single out surgeons for poor performance; should focus on teams. Airlines, not pilots.” Noted surgeon, author, and researcher Atul Gawande weighed in on the Krumholz tweet as well.