Volume 40 | Number 6p2 | December 2005

Abstract List

A. James O'Malley Ph.D., Alan M. Zaslavsky Ph.D., Ron D. Hays, Kimberly A. Hepner, San Keller, Paul D. Cleary Ph.D.


Objectives

To estimate the associations among hospital‐level scores from the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Hospital pilot survey within and across different services (surgery, obstetrics, medical), and to evaluate differences between hospital‐ and patient‐level analyses.


Data Source

CAHPS Hospital pilot survey data provided by the Centers for Medicare and Medicaid Services.


Study Design

Responses to 33 questionnaire items were analyzed using patient‐ and hospital‐level exploratory factor analysis (EFA) to identify patient‐level and hospital‐level composite structures for the CAHPS Hospital survey. The hospital‐level EFA was corrected for patient‐level sampling variability using a hierarchical model. We compared the results of these analyses with each other and with separate EFAs conducted at the service level. To quantify the similarity of assessments across services, we compared correlations of different composites within the same service with correlations of the same composite across different services.
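The two‐level EFA design described above can be sketched in outline. This is an illustrative simulation only, not the authors' method or data: the sample sizes, the number of hospitals, the random data‐generating process, and the use of simple hospital means (rather than the hierarchical correction for patient‐level sampling variability) are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration: 2,000 patients answering
# 33 items, nested within 50 hospitals.
n_patients, n_items, n_hospitals = 2000, 33, 50
hospital = rng.integers(0, n_hospitals, n_patients)

# Simulate responses with a hospital-level component plus patient noise.
hospital_effect = rng.normal(0.0, 0.5, (n_hospitals, n_items))
responses = hospital_effect[hospital] + rng.normal(0.0, 1.0, (n_patients, n_items))

# Patient-level EFA (the abstract reports six factors at this level).
fa_patient = FactorAnalysis(n_components=6, rotation="varimax").fit(responses)

# Naive hospital-level EFA on hospital mean scores (the paper instead
# corrects for patient-level sampling variability with a hierarchical
# model; means are used here only to keep the sketch short).
means = np.vstack([responses[hospital == h].mean(axis=0)
                   for h in range(n_hospitals)])
fa_hospital = FactorAnalysis(n_components=3, rotation="varimax").fit(means)

# Loadings: rows are factors, columns are the 33 items.
print(fa_patient.components_.shape)   # (6, 33)
print(fa_hospital.components_.shape)  # (3, 33)
```

Comparing the varimax‐rotated loading matrices from the two fits mirrors the paper's contrast between patient‐level and hospital‐level composite structures.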


Data Collection

Cross‐sectional data were collected during the summer of 2003 via mail and telephone from 19,720 patients discharged from November 2002 through January 2003 from 132 hospitals in three states.


Principal Findings

Six factors provided the best description of inter‐item covariation at the patient level. Analyses that assessed variability across both services and hospitals suggested that three dimensions provide a parsimonious summary of inter‐item covariation at the hospital level. Hospital‐level factor structures also differed across services; as much variation in quality reports was explained by service as by composite.


Conclusions

Variability of CAHPS scores across hospitals can be reported parsimoniously using a limited number of composites. There is at least as much distinct information in composite scores from different services as in different composite scores within each service. Because items cluster slightly differently across services, service‐specific composites may be more informative when comparing patients in a given service across hospitals. For studies of individual‐level variability, a more differentiated structure is probably more appropriate.