Volume 51 | Number 6 | December 2016


G. Greg Peterson Ph.D., M.P.A., Jelena Zurovac Ph.D., M.S., Randall S. Brown Ph.D., Kenneth D. Coburn M.D., Ph.D., F.A.C.P., Patricia A. Markovich Ph.D., M.P.P., Sherry A. Marcantonio M.S.W., A.C.S.W., William D. Clark M.S., Anne Mutti M.P.A., Cara Stepanczuk M.P.A.


Objectives

To test whether a care management program could replicate its success from an earlier trial and to determine likely explanations for why it did not.


Data Sources/Setting

Medicare claims and nurse contact data for Medicare fee‐for‐service beneficiaries with chronic illnesses enrolled in the trial in eastern Pennsylvania (N = 483).


Study Design

A randomized trial with half of enrollees receiving intensive care management services and half receiving usual care. We developed and tested hypotheses for why impacts declined.


Data Extraction

All outcomes and covariates were derived from claims and the nurse contact data.


Principal Findings

From 2010 to 2014, the program did not reduce hospitalizations or generate Medicare savings to offset program fees that averaged $260 per beneficiary per month. These estimates are statistically different (p < .05) from the large reductions in hospitalizations and spending in the first trial (2002–2010). The treatment–control differences in the second trial disappeared because the control group's risk‐adjusted hospitalization rate improved, not because the treatment group's outcomes worsened.


Conclusion

Successful results from one test may not replicate in other settings or time periods, even when demonstrated in a randomized trial. Assessing whether other settings have the same gaps in care that the original program filled can help identify where earlier success is likely to replicate.