There’s no disputing that 30-day readmissions for conditions targeted by a federal initiative to improve quality of care are on the decline.
The Centers for Medicare & Medicaid Services’ Hospital Readmissions Reduction Program is getting a lot of credit for the decline. Since 2010, the program has dinged hospitals’ Medicare reimbursement for a range of preventable readmissions for conditions such as pneumonia and heart failure.
However, in a recent study in Health Affairs, researchers at Harvard Medical School offer an alternative explanation: the drop in readmissions is being driven by an overall decline in hospital admissions.
“The decline in readmission rates looked like the silver lining of pay-for-performance, but it seems to have lost its luster,” said study lead author J. Michael McWilliams, the Warren Alpert Foundation Professor of Health Care Policy in the Blavatnik Institute at Harvard.
“Our study makes a strong case that what looked like achievements of the program may have been a byproduct of factors driving a broader decrease in hospitalizations across the board,” McWilliams said.
McWilliams discussed the study findings and the use of readmissions as a quality metric. The following transcript has been edited for length and clarity.
What prompted this study?
McWilliams: This decline in admission rates had gone largely unnoticed in the literature on the HRRP. So that prompted us to do the study, particularly in the wake of other studies interpreting the decline of readmissions as a causal effect of the program. It seemed worth pointing out that there was this other broader trend going on nationwide.
So, the simplest explanation is the correct one?
McWilliams: Yeah. Occam’s Razor. As a physician and health policy researcher, I’m not sure that’s always true. It seems like things can get really complicated sometimes. But in this case, the falling rate of admissions is a pretty clear explanation for at least much of the decline in readmissions.
It comes down to a simple statistical relationship between the two. If readmissions are largely independent events, simply other admissions that happen to fall within 30 days of a previous one, then as the number of admissions per patient falls, the probability that one admission falls close to another falls as well.
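The mechanical effect McWilliams describes can be illustrated with a minimal simulation. This sketch is not from the study; the patient count, admission rates, and function names are illustrative assumptions. It models admissions as independent events spread over a year and counts the fraction that happen to land within 30 days of a prior admission:

```python
import math
import random

def poisson(rng, lam):
    # Draw a Poisson-distributed count (Knuth's algorithm, fine for small lam).
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def readmission_rate(mean_admissions_per_year, n_patients=20000, seed=0):
    """Simulate independent admissions with uniform dates over one year,
    then report the fraction of admissions falling within 30 days of the
    patient's previous admission -- a naive '30-day readmission rate'."""
    rng = random.Random(seed)
    within_30 = 0
    total = 0
    for _ in range(n_patients):
        n = poisson(rng, mean_admissions_per_year)
        days = sorted(rng.uniform(0, 365) for _ in range(n))
        total += n
        # Count gaps between consecutive admissions that are 30 days or less.
        within_30 += sum(1 for a, b in zip(days, days[1:]) if b - a <= 30)
    return within_30 / total if total else 0.0
```

Running `readmission_rate` with a lower mean admission rate yields a clearly lower measured "readmission" rate, even though nothing about care quality changed in the model; only the overall admission rate did.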
Does this mean that efforts to reduce 30-day readmissions are a waste of time?
McWilliams: I don’t think we can say it’s a waste of time. We can certainly say that, whatever response the program has elicited, efforts to reduce readmissions either have not been very effective, or it’s possible that those efforts did prevent some readmissions but, at the same time, efforts that involved outreach to patients may have also increased readmissions.
One interpretation, although this is speculative because it’s very hard to sort out which readmissions were prevented and which were increased, might be that the quality of care got somewhat better. It’s just not reflected in the measure.
What do your findings suggest about using readmissions as a quality metric?
McWilliams: Any utilization-based quality measure is really problematic because it raises the question: what’s the right level of utilization? This is true of so-called preventable admissions as well as hospitalizations for ambulatory care-sensitive conditions. Obviously, the right number of admissions and readmissions is not zero. So it’s very hard to know what proportion of patients would be admitted or readmitted if we provided optimal care.
Based on your findings, should Medicare eliminate the financial penalties for 30-day readmissions?
McWilliams: For any given hospital, it’s hard to know whether the penalty is merited or not. A lot of research in this area has demonstrated that, while it’s not clear the program has reduced readmissions much, if at all, what it has certainly done is transfer resources away from providers serving sicker, poorer patients to hospitals serving healthier, wealthier patients, and in ways that are not merited: they are driven not by differences in quality but by differences in the populations served.
We’d be better off without the program for that reason, and there is ongoing debate about whether the program should be scrapped altogether or whether it can be refined in a way that achieves its objective.
I tend to be quite skeptical of programs like this that fall into the category of pay for performance, because there are a lot of intractable problems with trying to bake quality improvement into the payment system.
What should be done with your study findings?
McWilliams: A good use would be to take a step back and reassess the merits of this program and other programs like it. This is not the first pay-for-performance program found to have minimal benefits and lots of unintended consequences. The research on the Value-Based Payment Modifier, the precursor to MIPS, is very similar, as is the research on the Hospital Value-Based Purchasing Program.
The best use of the findings is to take a step back and have a new conversation about ways to improve quality that don’t rely on linking incentives to performance measures, and instead think about what interventions and strategies actually help improve quality.
Sometimes, in focusing on measures and how they should fit into the payment system, we forget that the ultimate goal is quality improvement. Once we figure out how to improve quality, patients will demand it, and providers are interested in delivering better-quality care.
This report is brought to you by HealthLeaders Media.