Transcript of "Developing, Measuring, and Improving Program Fidelity: Achieving positive outcomes through high-fidelity implementation"

Page 1

Developing, Measuring, and Improving Program Fidelity:

Achieving positive outcomes through high-fidelity implementation

SPDG National Conference Washington, DC

March 5, 2012

Allison Metz, PhD, Associate Director, NIRN Frank Porter Graham Child Development Institute

University of North Carolina

Page 2

Program Fidelity: 6 Questions

• What is it?
• Why is it important?
• When are we ready to assess fidelity?
• How do we measure fidelity?
• How can we produce high-fidelity use of interventions in practice?
• How can we use fidelity data for program improvement?

Page 3

“PROGRAM FIDELITY”

“The degree to which the program or practice is implemented ‘as intended’ by the program developers and researchers.”

“Fidelity measures detect the presence and strength of an intervention in practice.”

Page 4

What is fidelity?

• Three components:
  – Context: structural aspects that encompass the framework for service delivery
  – Compliance: the extent to which the practitioner uses the core program components
  – Competence: process aspects that encompass the level of skill shown by the practitioner and the “way in which the service is delivered”

Question 1

Page 5

Why is fidelity important?

• Interpret outcomes: is this an implementation challenge or an intervention challenge?
• Detect variations in implementation
• Replicate consistently
• Ensure compliance and competence
• Develop and refine interventions in the context of practice
• Identify “active ingredients” of the program

Question 2

Page 6

Why is fidelity important?

Question 2

Effective Interventions (the “WHAT”) × Effective Implementation (the “HOW”) = Positive Outcomes for Children

Page 7

Implementation Science

Outcomes depend on both the intervention and its implementation:

• Effective intervention, effective implementation → Actual benefits
• Effective intervention, NOT effective implementation → Inconsistent; not sustainable; poor outcomes
• NOT effective intervention, effective implementation → Poor outcomes; sometimes harmful
• NOT effective intervention, NOT effective implementation → Unpredictable or poor outcomes

(Institute of Medicine, 2000; 2001; 2009; New Freedom Commission on Mental Health, 2003; National Commission on Excellence in Education, 1983; Department of Health and Human Services, 1999)

From Mark Lipsey’s 2009 meta-analytic overview of the primary factors that characterize effective juvenile offender interventions: “. . . in some analyses, the quality with which the intervention is implemented has been as strongly related to recidivism effects as the type of program, so much so that a well-implemented intervention of an inherently less efficacious type can outperform a more efficacious one that is poorly implemented.”

Page 8

When are we ready to assess fidelity?

• Operationalize (verb): to define a concept or variable so that it can be measured or expressed quantitatively
  (Webster’s New Millennium™ Dictionary of English, Preview Edition (v 0.9.7), © 2003–2008 Lexico Publishing Group, LLC)

• The “it” must be operationalized, whether it is:
  – An Evidence-Based Practice or Program
  – A Best Practice Initiative or New Framework
  – A Systems Change Initiative or Element

Question 3

Page 9

How developed is your WHAT?

• Does this approach involve the implementation of an evidence-based program or practice that has been effectively implemented in other locations?
• Does this approach involve purveyor or other “expert” support?
• How well-defined are the critical components of the approach?
• Does this approach involve the implementation of an evidence-informed approach that hasn’t been implemented often or ever?
• To what extent is the approach still being developed or fine-tuned?
• How clearly defined are the critical components of the approach?

Page 10

Developing Practice Profiles

• Each critical component is a heading
• For each critical component, identify:
  – “gold standard” practice (“expected”)
  – developmental variations in practice
  – ineffective practices and undesirable practices

Adapted from work of the Heartland Area Education Agency 11, Iowa

Page 11

Developing Practice Profiles

Adapted from work of the Heartland Area Education Agency 11, Iowa

Page 12

How do we measure fidelity?

Establish fidelity criteria if not yet developed:

1. Identify critical components, operationalize them, and determine indicators
   a. Describe data sources
   b. Make indicators as objective as possible (e.g., anchor points for rating scales)
2. Collect data to measure these indicators (“preferably through a multi-method, multi-informant approach”; Mowbray, 2003)
3. Examine the measures in terms of reliability and validity

Question 4
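Step 1b above, objective indicators with anchor points for rating scales, can be sketched in code. This is a minimal illustration, not any published fidelity instrument: the component names, anchor wording, and percentage scoring rule are all assumptions.

```python
# Sketch: scoring fidelity indicators on an anchored rating scale.
# Component names, anchors, and the scoring rule are hypothetical examples.

ANCHORS = {
    0: "Ineffective: core component not observed",
    1: "Developmental: partial use of core component",
    2: "Expected/Proficient: full, skilled use of core component",
}

def fidelity_score(ratings):
    """Percent of the maximum possible rating across all indicators."""
    max_rating = max(ANCHORS)  # 2 on this scale
    total = sum(ratings.values())
    return 100.0 * total / (max_rating * len(ratings))

# One practitioner's ratings on three (hypothetical) critical components
ratings = {"partnering": 2, "goal_setting": 1, "progress_monitoring": 2}
score = fidelity_score(ratings)
print(f"Fidelity: {score:.0f}%")  # 5 of 6 possible points -> "Fidelity: 83%"
```

Writing out the anchors first keeps raters working from the same definitions; the numeric score then falls out mechanically.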

Page 13

How do we measure fidelity?

• Staff performance assessments serve as a mechanism to begin to identify “process” aspects of fidelity for newly operationalized programs

• Contextual (or structural) aspects of fidelity are “in service to” adherence and competence:
  – length, intensity, and duration of service (dosage); roles and qualifications of staff
  – training and coaching procedures
  – case protocols and procedures
  – administrative policies
  – data collection requirements
  – inclusion/exclusion criteria for the target population

Question 4

Page 14

Performance Assessment

• Start with the Expected/Proficient column

• Develop an indicator for each Expected/Proficient Activity

• Identify “evidence” that this activity has taken place

• Identify “evidence” that this activity has taken place with high quality

• Identify potential data source(s)

Page 15

Fidelity Criteria: Parent Involvement and Leadership

Practice Profile: Partnering

Expected/Proficient practice:
  Encourage and include parent involvement in educational decision-making

Indicator that the activity is happening (Adherence):
  Parent/Teacher meetings take place to develop goals and plans for child progress
  Potential data sources: Observation; Documentation

Indicator that the activity is happening well (Competence):
  Parent feels included and respected
  Potential data source: Parent Partnering Survey
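The fidelity criteria above map naturally onto a structured record; a minimal sketch, assuming nothing beyond the table's own headings (the `FidelityCriterion` class and its field names are hypothetical):

```python
# Sketch: one row of the fidelity criteria table as a structured record.
# Field names mirror the table headings; the dataclass itself is an assumption.
from dataclasses import dataclass

@dataclass
class FidelityCriterion:
    expected_activity: str        # Expected/Proficient practice
    adherence_indicator: str      # evidence the activity happened
    adherence_sources: list       # potential data sources (adherence)
    competence_indicator: str     # evidence it happened well
    competence_sources: list      # potential data sources (competence)

partnering = FidelityCriterion(
    expected_activity="Encourage and include parent involvement in "
                      "educational decision-making",
    adherence_indicator="Parent/Teacher meetings take place to develop "
                        "goals and plans for child progress",
    adherence_sources=["Observation", "Documentation"],
    competence_indicator="Parent feels included and respected",
    competence_sources=["Parent Partnering Survey"],
)
print(partnering.adherence_sources)  # ['Observation', 'Documentation']
```

Keeping adherence and competence as separate fields preserves the slide's distinction between "is it happening?" and "is it happening well?".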

Page 16

How do we measure fidelity?

If fidelity criteria are already developed:

1. Understand the reliability and validity of the instruments
   a. Are we measuring what we thought we were?
   b. Is fidelity predictive of outcomes?
   c. Does the fidelity assessment discriminate between programs?
2. Work with program developers or purveyors to understand the detailed protocols for data collection
   a. Who collects the data (expert raters, teachers)?
   b. How often are data collected?
   c. How are data scored and analyzed?
3. Understand the issues (reliability, feasibility, cost) in collecting different kinds of fidelity data
   a. Process data vs. structural data

Question 4
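Step 1 above (reliability of instruments) often includes inter-rater agreement. As a sketch, the ratings below are made up and real collection protocols should come from the program developer or purveyor, but percent agreement and Cohen's kappa can be computed in a few lines:

```python
# Sketch: inter-rater reliability for two fidelity raters.
# The ratings are invented; kappa corrects raw agreement for chance.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category at random
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = [2, 2, 1, 0, 2, 1, 2, 2]  # expert rater's scores on 8 observations
b = [2, 2, 1, 1, 2, 1, 2, 0]  # second rater's scores
print(round(cohens_kappa(a, b), 2))  # 0.57
```

A kappa well below raw agreement (here 0.57 vs. 75% matching scores) is a signal to revisit anchor definitions or retrain raters before trusting the fidelity data.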

Page 17

How do we measure fidelity?

If adapting an approach…
• How well “developed” is the program or practice being adapted? (Winter & Szulanski, 2001)
• Have core program components been identified?
• Do adaptations change function or form?
• How will adaptation affect fidelity criteria and assessments?

Question 4

Page 18

How do we measure fidelity?

• Steps to measuring fidelity (new or established criteria):

1. Assure fidelity assessors are available, understand the program or innovation, and are well versed in the education setting
2. Develop a schedule for conducting fidelity assessments
3. Assure adequate preparation for the teachers/practitioners being assessed
4. Report results of the fidelity assessment promptly
5. Enter results into a decision-support data system

Question 4

Page 19

How can we produce high-fidelity implementation in practice?

• Build, improve, and sustain practitioner competency
• Create hospitable organizational and systems environments
• Apply appropriate leadership strategies

Page 20

“IMPLEMENTATION DRIVERS”

Common features of successful supports that help make full and effective use of a wide variety of innovations

Page 21

The Implementation Drivers (© Fixsen & Blase, 2008) are integrated and compensatory:

• Competency Drivers: Selection, Training, Coaching, Performance Assessment (Fidelity)
• Organization Drivers: Decision Support Data System, Facilitative Administration, Systems Intervention
• Leadership: Technical and Adaptive

Together, the drivers support Effective Education Strategies and Improved Outcomes for Children and Youth.

Page 22

Produce high-fidelity implementation?

• Fidelity is an implementation outcome
  ◦ Implementation Drivers influence how well or how poorly a program is implemented
  ◦ The full and integrated use of the Implementation Drivers supports practitioners in consistent, high-fidelity implementation of a program
  ◦ Staff performance assessments are designed to assess the use and outcomes of the skills required for high-fidelity implementation of a new program or practice

Question 5

Page 23

Produce high-fidelity implementation?

• Competency Drivers
  – Demonstrate knowledge, skills, and abilities
  – Practice to criteria
  – Coach for competence and confidence

• Organizational Drivers
  – Use data to assess fidelity and improve program operations
  – Administer policies and procedures that support high-fidelity implementation
  – Implement needed systems interventions

• Leadership Drivers
  – Use appropriate leadership strategies to identify and solve challenges to effective implementation

Question 5

Page 24

How can we use fidelity data for program improvement?

• Program Review Process to create a sustainable improvement cycle for the program
  – Process and Outcome Data: measures, data sources, data collection plan
  – Detection Systems for Barriers: roles and responsibilities
  – Communication Protocols: accountable, moving information up and down the system

• Questions to Ask
  – What formal and informal data have we reviewed?
  – What are the data telling us?
  – What barriers have we encountered?
  – Would improving the functioning of any Implementation Driver help address a barrier?

Question 6
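The review questions above can be supported by a simple report from the decision-support data system. A minimal sketch, assuming hypothetical component names and an illustrative 80% benchmark (no fixed cut-off appears in the slides):

```python
# Sketch: flagging program components for the review cycle.
# The benchmark and component names are illustrative assumptions.

FIDELITY_BENCHMARK = 80.0  # hypothetical "high fidelity" cut-off

def flag_for_review(component_scores):
    """Return components whose average fidelity falls below the benchmark,
    worst first, so the review team can ask which Implementation Driver
    (training, coaching, data systems, ...) might address each barrier."""
    flagged = {name: score for name, score in component_scores.items()
               if score < FIDELITY_BENCHMARK}
    return sorted(flagged, key=flagged.get)

scores = {"partnering": 91.0, "goal_setting": 62.5, "progress_monitoring": 75.0}
print(flag_for_review(scores))  # ['goal_setting', 'progress_monitoring']
```

Sorting worst-first is a design choice: it turns raw fidelity data into an agenda, which is the point of the review process described above.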

Page 25

Program Fidelity

Summary

• Fidelity has multiple facets and is critical to achieving outcomes
• Fully operationalized programs are prerequisites for developing fidelity criteria
• Fidelity data need to be collected carefully, against valid and reliable criteria, with guidance from program developers or purveyors
• Fidelity is an implementation outcome; effective use of the Implementation Drivers can increase our chances of high-fidelity implementation
• Fidelity data can and should be used for program improvement

Page 26

Program Fidelity

Resources

Examples of fidelity instruments
• Teaching Pyramid Observation Tool for Preschool Classrooms (TPOT), Research Edition, Mary Louise Hemmeter and Lise Fox
• The PBIS fidelity measure (the SET), described at http://www.pbis.org/pbis_resource_detail_page.aspx?Type=4&PBIS_ResourceID=222

Articles
• Sanetti, L., & Kratochwill, T. (2009). Toward developing a science of treatment integrity: Introduction to the special series. School Psychology Review, 38(4), 445–459.
• Mowbray, C. T., Holter, M. C., Teague, G. B., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation, 24(3), 315–340.
• Hall, G. E., & Hord, S. M. (2011). Implementing change: Patterns, principles, and potholes (3rd ed.). Boston: Allyn and Bacon.

Page 27

Stay Connected!

nirn.fpg.unc.edu www.scalingup.org

www.implementationconference.org

[email protected]@unc.edu