My alarm is set to let loose the soothing tones of NPR at 6:15 each weekday morning, and I spend the first seven of those minutes in the pleasant haze before actually rising (which is less pleasant). This usually coincides with the final story before the news headlines, and this morning's piece brought news that the Massachusetts state government is planning to "stop paying hospitals where the re-admission rate is higher than the statewide average." This is expected to save the Commonwealth roughly $8 million annually in a budget that, like those of most states in the Union, is strapped for cash.
Let's leave aside the most egregious part of this policy: namely, that a sizable share of hospitals will be made to take a financial hit even if every hospital in Massachusetts improves its readmission rates, since some hospitals must always sit above the average (strictly speaking, half of any group sits above the median, and usually something close to half sits above the mean). There's a cruel logic at work there, or rather more likely, none at all. But for the sake of argument let's assume that some clever legislator thought of this and worked out some model to adjust for this problem, measuring hospitals against some baseline of expected performance rather than against each other. Is it still a good idea?
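The arithmetic trap is easy to demonstrate with toy numbers (everything below is invented for illustration): under a penalize-the-above-average rule, the same hospitals get hit even after every single one of them improves.

```python
# Hypothetical readmission rates (percent) for five hospitals,
# before and after a statewide improvement effort.
before = [14.0, 16.0, 18.0, 20.0, 22.0]
after = [r - 3.0 for r in before]  # every hospital improves by 3 points

def penalized(rates):
    """Return the hospitals whose rate exceeds the group average."""
    avg = sum(rates) / len(rates)
    return [r for r in rates if r > avg]

print(penalized(before))  # [20.0, 22.0]
print(penalized(after))   # [17.0, 19.0] -- the same two hospitals, despite improving
```

Unless the target is pegged to something external (an expected baseline, a fixed threshold), the penalty pool can never empty itself.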
Maybe, but to expect politicians and/or bureaucrats to get this right just now is...well, color me skeptical.
Measuring a given hospital's performance isn't very difficult: you just collect data on the number of admissions, the kind of admissions, the length of those admissions, how many people die in the hospital, how many have surgical complications, and the like. With computerized databases this takes only the amount of time that one wants to spend querying the data, and it's equally easy to cross-check the admissions to see how many patients are re-admitted to the hospital within one month (the typical measuring stick) with the same problem. What is difficult is knowing the standard to which that hospital's performance should be compared.
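As a rough sketch of the kind of cross-check described above (the records, the same-diagnosis rule, and the counting conventions here are invented for illustration, not any official methodology), a 30-day readmission rate can be computed from admission records like this:

```python
from datetime import date

# Toy admission records: (patient_id, admit_date, discharge_date, diagnosis).
# All patients and dates are hypothetical.
admissions = [
    ("p1", date(2011, 1, 3), date(2011, 1, 7), "pneumonia"),
    ("p1", date(2011, 1, 20), date(2011, 1, 25), "pneumonia"),  # 13 days after discharge: a readmission
    ("p2", date(2011, 1, 5), date(2011, 1, 9), "pneumonia"),
    ("p3", date(2011, 2, 1), date(2011, 2, 4), "pneumonia"),
    ("p3", date(2011, 3, 20), date(2011, 3, 22), "pneumonia"),  # 44 days later: not a readmission
]

def readmission_rate(records, diagnosis, window_days=30):
    """Fraction of discharges followed by a same-diagnosis
    admission within `window_days` of discharge."""
    recs = sorted((r for r in records if r[3] == diagnosis),
                  key=lambda r: (r[0], r[1]))
    by_patient = {}
    for pid, admit, discharge, dx in recs:
        by_patient.setdefault(pid, []).append((admit, discharge))

    index_count = 0
    readmit_count = 0
    for stays in by_patient.values():
        for i, (admit, discharge) in enumerate(stays[:-1]):
            index_count += 1
            next_admit = stays[i + 1][0]
            if (next_admit - discharge).days <= window_days:
                readmit_count += 1
        index_count += 1  # a patient's final stay is also an eligible discharge
    return readmit_count / index_count

print(readmission_rate(admissions, "pneumonia"))  # 0.2 (1 readmission out of 5 discharges)
```

The query itself is trivial; as the post argues, the hard part is not computing this number but knowing what to compare it against.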
Wait, you say--why not compare all the hospitals against each other? You can actually do this via this news piece from USA Today; the slightly more tedious, and less user-friendly, version put out by the federal government is here. What if the re-admission rate for, say, pneumonia at Hospital A is 15.9 percent, while at Hospital B it is a stunning 22.5 percent? (The national average, as illustrated here, is 18.3 percent.) Should we punish Hospital B, deprive its operating budget of potentially hundreds of thousands of dollars, and maybe send some of that cash over to Hospital A, a gleaming example of the finest medicine practiced in the US?
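One more wrinkle with raw comparisons like this: a rate from a small hospital carries a lot of sampling noise. The 15.9, 22.5, and 18.3 percent figures come from the discussion above, but the discharge counts below are purely hypothetical; this is a minimal sketch using the normal approximation to the binomial, not a claim about the real hospitals' volumes.

```python
import math

def rate_ci(readmits, discharges, z=1.96):
    """Approximate 95% confidence interval for a readmission rate
    (normal approximation to the binomial proportion)."""
    p = readmits / discharges
    se = math.sqrt(p * (1 - p) / discharges)
    return p - z * se, p + z * se

# Denominators invented purely for illustration.
lo_a, hi_a = rate_ci(32, 201)    # small regional hospital, ~15.9%
lo_b, hi_b = rate_ci(270, 1200)  # large urban center, ~22.5%
print(f"Hospital A: {lo_a:.3f} to {hi_a:.3f}")
print(f"Hospital B: {lo_b:.3f} to {hi_b:.3f}")
```

With these invented volumes, Hospital A's interval comfortably contains the 18.3 percent national average, so even before we get to case mix, the raw numbers say less than they appear to.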
It's not such a hypothetical: I took the data from two actual hospitals here in Massachusetts. Hospital A is a small, regional center about an hour or so outside of Boston, while Hospital B is an academic medical center in the heart of the city. I'm sure "A" is a fine hospital with good doctors, but for my money, send me to "B" any day of the week! But why would I think such a thing given those stats (which, I'm confident, would be similar for virtually any medical condition such as heart attacks and asthma)?
The answer is that "B" is a large urban, tertiary-care center. Why is this relevant? Because of its size, it has many different ethnic groups passing through its doors, including at least two major immigrant populations: more opportunities for misunderstandings--both cultural and linguistic--that can lead to readmission. Because "B" is urban, it has the kinds of patients that "A" rarely sees, who happen to be the kinds of patients at highest risk for readmission: single mothers working two jobs who can't find time for follow-up appointments, working poor who can't afford meds, semi-literate patients who only partially understand the bizarre language of doctors and nurses, drug addicts. Because it is a tertiary-care center, it takes referrals of the sickest patients in town--precisely the kind of patients that make many nurses and doctors from Hospital A pee in their pants when so confronted. For all of these reasons, Hospital B is very far from being on a level playing field, and while I'd probably be fine being taken care of at either place for routine stuff, I would very much rather be at Hospital B for even the slightest setback.
With all this fancy computerization has come significantly increased access to data, and the arguments and counterarguments about how to use hospital outcomes data have been circulating for a few years. Take, for instance, the discussion about hospital mortality statistics, as evidenced by this editorial in the British Medical Journal last April, or this very recent news item from Harvard University here. Contrast this with a warm account in 2009 of Baylor University's lower readmission rates for heart failure, and its emphasis on defining those rates relative to the national average. Baylor may well be a model for how all hospitals should construct their programs; I'm not a heart failure specialist so I can't comment. The story, however, plants the idea that everything better than the mean signifies a better hospital, and everything worse, worse--an idea that can be misinterpreted with potentially disastrous consequences for certain patients.
My fear about how this is going to play out in the years to come is that hospitals will continue to feel budgetary pressures from government agencies and insurance companies alike, and those that care for the sickest and most vulnerable patients (read: often not white, frequently the poorest, sometimes immigrants who do not speak English well or at all, just to name a few attributes) are going to suffer the brunt of this well-meaning but so far not-ready-for-prime-time approach to measuring a given hospital's quality.