Volume 37 | Number 1 | February 2002


Objective

To assess the reliability, applicability, and validity of a refined system (taxonomy of requests by patients [TORP]) for characterizing patient requests and physician responses in office practice.


Study Settings

Data were obtained from visits to six general internists practicing in North‐Central California in 1994 and eight cardiologists practicing in the same region in 1998.


Study Design

This was an observational study of patient requests and physician responses in two practice settings. Patients were surveyed before and after the visit. Physicians were surveyed immediately after the visit, and all visits were audio-recorded for future study.


Data Collection/Extraction Methods

TORP was refined using input from a multidisciplinary panel. Audiotape recordings of 131 visits (71 in internal medicine and 60 in cardiology) were rated independently by two coders. Estimates of classification reliability (intercoder agreement on the sorting of requests into categories) and unitizing reliability (intercoder agreement on the labeling of elements of discourse as "requests" and subsequent classification into categories) were calculated. Validity was assessed by testing three specific hypotheses concerning the antecedents and consequences of patient requests and request fulfillment.
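For two coders, the standard chance-corrected agreement statistic is Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is agreement expected by chance. As a minimal illustration only, with hypothetical coder labels and no claim about the authors' actual software or procedure, the following Python sketch computes a classification kappa for two coders:

    # Minimal illustration only: the labels are hypothetical, and the abstract
    # does not specify the authors' software or exact computation.
    from collections import Counter

    def cohens_kappa(a, b):
        """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
        n = len(a)
        p_o = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
        ca, cb = Counter(a), Counter(b)
        p_e = sum(ca[k] * cb[k] for k in ca) / n ** 2      # chance agreement
        return (p_o - p_e) / (1 - p_e)

    coder_a = ["information", "action", "information", "paperwork", "information"]
    coder_b = ["information", "action", "information", "information", "information"]
    print(f"classification kappa = {cohens_kappa(coder_a, coder_b):.2f}")

By the conventional Landis and Koch benchmarks, kappa values between 0.61 and 0.80 indicate substantial agreement, consistent with the interpretation of the estimates reported below.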


Principal Findings

The overall unitizing kappa for identifying patients' requests was 0.64, and the classification kappa was 0.73, indicating substantial agreement beyond chance. The average patient made 4.19 requests for information and 0.88 requests for physician action; there were few differences in the spectrum of requests between internal medicine and cardiology. Approximately 15 percent of visits included a direct request for completion of paperwork. Patients who were very or extremely worried about their health made more requests than those who were not (6.06 vs. 3.89, p < 0.05). Visits involving more patient requests took longer (p < 0.05) and were perceived as more demanding by the treating physician (p = 0.025). The vast majority of requests were fulfilled.
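The worry-related finding is a comparison of mean per-visit request counts between two groups of patients. The abstract does not name the significance test used; as one conventional possibility only, the sketch below applies Welch's two-sample t-test to simulated count data (the group sizes and distributions are assumptions for illustration, not study data):

    # Simulated data for illustration; not the study's data, and not
    # necessarily the test the authors used.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    worried = rng.poisson(lam=6.06, size=40)      # hypothetical per-visit request counts
    not_worried = rng.poisson(lam=3.89, size=91)  # assumed split of the 131 coded visits

    t, p = stats.ttest_ind(worried, not_worried, equal_var=False)  # Welch's t-test
    print(f"t = {t:.2f}, p = {p:.4f}")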


Conclusions

The refined TORP shows evidence of both unitizing and classification reliability and should be a useful tool for understanding the clinical negotiation. In addition, the system appears applicable to both generalist and specialist practices. More experience with the system is necessary to appraise TORP's ability to predict important clinical outcomes.