Volume 49 | Number 4 | August 2014

Abstract List

William Gardner Ph.D., Suzanne Morton M.P.H., M.B.A., Sepheen C. Byron M.H.S., Aldo Tinoco M.D., Ph.D., Benjamin D. Canan M.P.H., Karen Leonhart B.S., Vivian Kong M.P.H., Sarah Hudson Scholle M.P.H., Dr.P.H.


Objective

To determine whether quality measures based on computer‐extracted data can reproduce findings based on data manually extracted by reviewers.

Data Sources

We studied 12 measures of care indicated for adolescent well‐care visits for 597 patients in three pediatric health systems.

Study Design

Observational study.

Data Collection/Extraction Methods

Manual reviewers collected quality data from the electronic health record (EHR). Site personnel programmed their systems to extract the same data from structured fields in the EHR according to national health standards.

Principal Findings

Overall performance measured via computer‐extracted data was 21.9 percent, compared with 53.2 percent for manual data. Agreement measures were high for immunizations. Otherwise, agreement between computer extraction and manual review was modest (Kappa = 0.36) because computer‐extracted data frequently missed care events (sensitivity = 39.5 percent). Measure validity varied by health care domain and setting. A limitation of our findings is that we studied only three domains and three sites.
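The agreement statistics reported above can be illustrated with a short sketch. The function below computes Cohen's Kappa and sensitivity from a 2x2 table comparing computer extraction against manual review; the counts shown are hypothetical and are not the study's data.

```python
def agreement_stats(tp, fp, fn, tn):
    """Cohen's kappa and sensitivity for a 2x2 table comparing
    computer extraction (test) against manual review (reference)."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    # Expected agreement by chance, computed from the marginal totals.
    p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    # Sensitivity: share of manually documented care events that the
    # computer extraction also found.
    sensitivity = tp / (tp + fn)
    return kappa, sensitivity

# Hypothetical counts for illustration only (not from the study):
k, sens = agreement_stats(tp=40, fp=5, fn=10, tn=45)
# k ≈ 0.70, sens = 0.80
```

Low sensitivity, as reported in the study, drives Kappa down even when overall percent agreement looks reasonable, because missed care events count against both statistics.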


Conclusions

The accuracy of computer‐extracted quality reporting depends on the use of structured data fields, with the highest agreement found for the measures and in the setting that had the greatest concentration of structured fields. Improvements are needed in the documentation of care, data extraction, and the adaptation of systems to practice workflow.