Volume 38 | Number 1p1 | February 2003


T. Michael Kashner Ph.D., J.D., Thomas J. Carmody, Trisha Suppes, A. John Rush, M. Lynn Crismon, Alexander L. Miller, Marcia Toprac, Madhukar Trivedi


Objective

To develop a statistic that measures the impact of algorithm‐driven disease management programs on outcomes for patients with chronic mental illness, while allowing treatment‐as‐usual controls to “catch up” to the early gains of treated patients.


Data Sources/Study Setting

Statistical power was estimated from simulated samples representing effect sizes that grew, remained constant, or declined following an initial improvement. Estimates were based on data from the Texas Medication Algorithm Project for adult patients (age ≥18) with bipolar disorder (n=267) who received care between 1998 and 2000 at 1 of 11 clinics across Texas.
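A minimal Monte Carlo sketch of this type of power estimation is given below. The per‐arm sample size, alpha level, number of replicates, quarterly assessment schedule, and the specific growing, constant, and declining effect‐size trajectories are illustrative assumptions, not values reported by the study.

# Illustrative power simulation for growing, constant, and declining
# standardized effect-size trajectories (sketch only; assumed parameters).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm, alpha, n_sims = 130, 0.05, 2000   # assumed, not from the study
quarters = np.arange(1, 5)                   # assessments at 3, 6, 9, 12 months

trajectories = {
    "growing":   0.2 * quarters,     # 0.2, 0.4, 0.6, 0.8 SD
    "constant":  np.full(4, 0.5),    # 0.5 SD at every quarter
    "declining": 0.8 / quarters,     # 0.8, 0.4, 0.27, 0.2 SD ("catch-up")
}

for name, effects in trajectories.items():
    power_by_quarter = []
    for d in effects:
        rejections = 0
        for _ in range(n_sims):
            # Draw outcomes for control and treated arms at this quarter.
            control = rng.normal(0.0, 1.0, n_per_arm)
            treated = rng.normal(d, 1.0, n_per_arm)
            _, p = stats.ttest_ind(treated, control)
            rejections += p < alpha
        power_by_quarter.append(rejections / n_sims)
    print(name, [round(p, 2) for p in power_by_quarter])

Under a declining trajectory, power at a single late time point erodes even though the early program difference is large, which is the situation the declining‐effect statistic is designed to handle.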


Study Design

Study patients were assessed at baseline and at three‐month follow‐up intervals for a minimum of one year. Program tracks were assigned by clinic.


Data Collection/Extraction Methods

Hierarchical linear modeling was modified to account for declining effects. Outcomes were based on the 30‐item Inventory for Depression Symptomatology—Clinician Version.
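One way to write such a declining‐effect model, sketched under the assumption of an exponentially decaying treatment contrast rather than the study's exact specification, is

Y_{ti} = \beta_0 + \beta_1 t + \beta_2 \,\mathrm{TX}_i \, e^{-\lambda t} + u_i + e_{ti},

where Y_{ti} is the symptom score for patient i at quarter t, \mathrm{TX}_i indicates assignment to the algorithm‐driven program, u_i is a patient‐level random effect, and e_{ti} is residual error. The treatment contrast \beta_2 e^{-\lambda t} shrinks toward zero as t grows, with \lambda governing how quickly treatment‐as‐usual controls "catch up." A traditional growth model would instead use a linear treatment‐by‐time interaction, \beta_2 \,\mathrm{TX}_i\, t, implying group differences that widen over time.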


Principal Findings

Declining‐effect analyses had significantly greater power to detect program differences than traditional growth models in the constant and declining‐effect cases. Bipolar patients with severe depressive symptoms in an algorithm‐driven disease management program reported fewer symptoms after three months, with treatment‐as‐usual controls “catching up” within one year.


Conclusions

In addition to psychometric properties, data collection design, and power, investigators should consider how outcomes unfold over time when selecting an appropriate statistic to evaluate service interventions. Declining‐effect analyses may be applicable to a wide range of treatment and intervention trials.