Design decisions and lessons learned in the Improving Chronic Illness Care Evaluation


Page 1

Design decisions and lessons learned in the Improving Chronic Illness Care Evaluation

Emmett Keeler

September 2005

[email protected]

Page 2

Evaluation questions

• Did the Collaboratives induce positive changes?

• Did implementing the Chronic Care Model (CCM) improve processes of care and patient health?

• What did participation and implementation cost?

• What factors were associated with success?

www.improvingchroniccare.org is a great website for information about the Chronic Care Model.

Page 3

ICICE results in 1 slide

• Collaborative sites made more than 30 systemic changes over the year, on average

• These changes let us test whether moving towards the CCM is good for patients

• Process, self-management, and some outcomes improved more for intervention patients than for control patients, improving most in emphasized areas such as patient goal-setting

http://www.rand.org/health/ICICE presents findings from the 15 accepted papers and other information about the study.

Page 4

Outline

Design considerations and what we did

Improving science base for QI evaluations

Dealing with challenges to validity

Reducing per subject costs

Page 5

Drug RCT paradigm

Aims at internal validity of treatment estimates

• Reduce unwanted variation by: many subjects; tight criteria for enrollment; tight protocols for treatments

• Eliminate potential bias by: randomization; blinding

How much of this paradigm is possible in evaluating systemic change in many organizations?

Page 6

Is Randomization Feasible?

Organizations want to improve care, not do research

• They are afraid patients won't like being subjects

Many changes are at site level: new appointment system; staff training; new information system.

• Can’t be applied to half the patients at the site

• Sites might be randomized in system QI trials, but not patients

So patient RCTs are not feasible in these evaluations, but many reviewers for medical journals believe non-RCTs have little value.

Page 7

Components of alternative strong studies

Before and after with a matched control group

Multiple sources of data and an evaluation logic model

Planning for and testing potential biases

Page 8

[Diagram: Evaluation design. The pilot group and the control group each get baseline and post measures. Between measurements, the pilot group experiments with care and implements improved care, while the control group reflects only secular trends. The intervention effect is the difference in changes: changes in pilot measures minus changes in control measures.]

"An Evaluation of Collaborative Interventions to Improve Chronic Illness Care: Framework and Study Design," Cretin S et al., Evaluation Review, 2004
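The "difference in changes" in the diagram is a difference-in-differences estimate. A minimal sketch in Python, with made-up group means standing in for the study's measures:

```python
# Difference in changes: subtracting the control group's change removes
# the secular trends that affect both groups between baseline and post.

def difference_in_changes(pilot_pre, pilot_post, control_pre, control_post):
    return (pilot_post - pilot_pre) - (control_post - control_pre)

# Hypothetical mean process-of-care scores on a 0-100 scale:
effect = difference_in_changes(pilot_pre=55.0, pilot_post=70.0,
                               control_pre=54.0, control_post=60.0)
print(effect)  # (70 - 55) - (60 - 54) = 9.0
```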

Page 9

Picking a control group

External control sites

• No contamination from intervention

• But a different set of shocks

• Less likely to cooperate, more expensive to include

We tried to pick internal sites in the same organization

• Asked for a similar site that was not likely to get the CCM that year.

Page 10

Patient Sampling

Use sampling frame from site registry of patients with disease of interest

• Registries are needed for the CCM interventions

• A few sites had to develop them, and we helped

We usually took everyone in the registry at our sites.

• Not a “tight” design, but better for external validity

We later discarded patients who said they did not have the disease or did not get care at the sites.
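A minimal sketch of that eligibility screen, with hypothetical field names; in the study the exclusions came from patients' own survey responses:

```python
# Keep only registry patients who confirmed having the disease and
# getting care at the site; the field names here are illustrative.
patients = [
    {"id": 1, "has_disease": True,  "care_at_site": True},
    {"id": 2, "has_disease": False, "care_at_site": True},   # discarded
    {"id": 3, "has_disease": True,  "care_at_site": False},  # discarded
]

eligible = [p for p in patients if p["has_disease"] and p["care_at_site"]]
print([p["id"] for p in eligible])  # [1]
```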

Page 11

Evaluation Data Sources

• Patient telephone surveys

• Medical record abstraction

• Monthly progress reports (to IHI)

• Clinical & administrative staff surveys

• Final calls with site leaders

Page 12

Telephone Survey

The patient is the best source for what the care provider does:

• Education, knowledge, adherence

• Communication, satisfaction

• General and disease-specific health, limitations

• Utilization, demographics, and insurance

Can be used to check whether improvements in charts are real or just better documentation.

Cost ~$100 each for the ~4000 patients phoned.

Page 13

Advantages of charts

IRB and consent difficulties delayed recruitment

• Phone surveys came at the end of the collaborative, but

• Charts still provide a true before and after

• We can add baseline variables from charts to analyses of measures from the phone surveys

• Cost ~$300 each, but give before-and-after data

Page 14

Monthly progress reports

The collaborative asked the team at each intervention site to fill out a brief report each month for their leaders and for IHI:

• describing what they had done that month,

• tracking their most important statistics.

Very helpful in finding out what sites did.

• We needed to develop a method to code change activities

• Used changes as both dependent and independent variables.
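One way such a coding step might look, sketched with illustrative keyword lists for four of the six CCM elements; the study team developed its own, more careful scheme:

```python
# A sketch of coding free-text monthly-report entries into CCM change
# categories; the keyword lists are illustrative, not the study's scheme.
from collections import Counter

CCM_KEYWORDS = {
    "self-management support":      ["goal-setting", "action plan", "education"],
    "delivery system design":       ["planned visit", "group visit", "team role"],
    "decision support":             ["guideline", "protocol", "specialist"],
    "clinical information systems": ["registry", "reminder", "feedback"],
}

def code_entry(text):
    """Return the set of CCM elements a report entry touches."""
    text = text.lower()
    return {element for element, words in CCM_KEYWORDS.items()
            if any(w in text for w in words)}

reports = ["Started planned visits and updated the diabetes registry",
           "Trained nurses in patient goal-setting"]
print(Counter(e for entry in reports for e in code_entry(entry)))
```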

Lack of standardization reduced the value of their statistics to us.

• e.g., for depression, sites used 3 different time periods for "in treatment," 3 different windows for a follow-up, and 2 definitions of big improvement at N months.


Page 15

Outline

Design considerations and what we did

Improving science base for QI evaluations

Dealing with challenges to validity

Reducing per subject costs

Page 16

Longer-run effects?

We mainly looked at care before and during the collaborative to shorten the study, saving time-to-results and resources.

How can one get longer run effects?

• Use risk factors as a proxy for future health? Cholesterol, HbA1c, and blood pressure in people with diabetes.

• Check in a year later with key staff for: reflections on successes and barriers; what happened next (maintenance, spread of quality improvements).

Page 17

Evaluators need to know what control sites are doing

In the first collaborative, control sites made big improvements, leaving intervention sites only insignificantly better. What was up?

Bleed of collaborative quality improvement to controls:

• We measured how "close" the control sites were, using a scale including geography, overlap of staff, and overlap of organizations (only 6 control sites were truly external).

• This scale was a mild predictor of control sites doing better.
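A minimal sketch of that closeness check, fitting ordinary least squares to hypothetical scores and improvements; the study's actual data and analysis were richer:

```python
import numpy as np

# Hypothetical closeness scores and control-site improvements:
closeness   = np.array([0.0, 1.0, 1.0, 2.0, 3.0, 3.0, 4.0, 5.0])
improvement = np.array([2.0, 1.0, 4.0, 5.0, 6.0, 4.0, 8.0, 7.0])

# Ordinary least squares with an intercept column:
X = np.column_stack([np.ones_like(closeness), closeness])
intercept, slope = np.linalg.lstsq(X, improvement, rcond=None)[0]
print(f"extra improvement per unit of closeness: {slope:.2f}")
```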

Other quality improvement activities: QI activities unique to a control site reduce the estimates of intervention success.

• Asking about QI was a big part of the control exit phone call.

Page 18

Bias from volunteer organizations and teams?

External validity: Organizations volunteered to improve their care

• We do not know how effective the intervention would be on organizations that do not care to improve quality.

Internal validity: Site staff in the organization often volunteered to go first.

• We compared staff attitudes of the chronic care delivery teams in intervention and control sites.

They had similar attitudes towards quality improvement.

Page 19

Selection Bias from organizations agreeing to be evaluated?

Compare organizations in the collaborative that participated in the evaluation with those that did not.

IHI faculty gave 1-5 ratings of success at the end.

• Participating organizations averaged 4.1 and non-participating 3.9, a non-significant difference (see the sketch below)

• None of the 7 organizations that dropped out of the collaboratives had signed up to participate.
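A minimal sketch of that comparison as a two-sample (Welch) t-test, on made-up ratings whose means match the reported values:

```python
from scipy.stats import ttest_ind

# Made-up faculty ratings whose means match the reported 4.1 vs. 3.9:
participating     = [4, 5, 4, 4, 4, 4, 5, 3, 4, 4]   # mean 4.1
non_participating = [4, 4, 3, 4, 5, 4, 4, 3, 4, 4]   # mean 3.9

t, p = ttest_ind(participating, non_participating, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2f}")  # not significant, as reported
```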

Page 20

Bias from Funder’s pressure

• A real and perceived problem that increases with less rigid designs.

• Separate evaluation team from the intervention team

But we need their help to enhance cooperation and to understand the intervention process.

• Try to ensure independence up-front in contract

Funders can review but not censor results

Evaluators share "unpleasant" results ASAP and

• work with the funder to understand and present them

• But researchers and funders are in a long-term relationship

Page 21

Lowering cost of evaluations

Reducing multi-site IRB and consent costs:

• Study QI in organizations with many sites, with prior patient consent for quality improvement and QI research activities.

Reducing data collection costs

• Electronic medical record

• Clever use of existing data, like claims

• Web-based and other unconventional surveys