
Transcript of: Evaluating complex change across projects and contexts: Methodological lessons from a macro evaluation of DFID's social accountability portfolio

Page 1: Evaluating complex change across projects and contexts: Methodological lessons from a macro evaluation of DFID's social accountability portfolio

Presentation at UKES and CDI, 27 and 9 April 2016

Jeremy Holland and Florian Schatz


Page 2

Background

Part of a wider macro evaluation of DFID’s Policy Frame for Empowerment and Accountability (E&A)

Objectives:
– To understand what works, for whom, in what contexts and why, by conducting cross-case analysis of DFID's E&A project portfolio since 2011; and
– To generate new evidence that informs policy and practice in DFID and other development organisations.

DFID’s E&A portfolio: 361 diverse projects (different focus areas, different countries and regions, different budgets and timeframes, different modalities and approaches, etc.)

Page 3

Methodology

Project selection → Qualitative Comparative Analysis (QCA) → Narrative Analysis → Theory development

Page 4

QCA and Narrative Analysis

QCA (50 cases)
• Social science research method that applies a systematic comparison to case study research
• Helps to identify the determinants of outcomes by looking at the similarities and differences of cases in terms of the causal factors and the outcomes obtained (Cress and Snow 2000)
• Situated between qualitative and quantitative research approaches (a simple illustration of the comparison logic follows at the end of this slide)

Narrative analysis (13 case studies)
• In-depth qualitative comparative analysis to interpret and explain the patterns identified through QCA
• Focus on identifying explanatory models that explain differences between True Positive cases and False Positive cases (i.e. cases with the same configurations of conditions but a different outcome)
• Based on secondary evidence (project documentation) supplemented by primary evidence (Key Informant Interviews)
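For readers less familiar with the mechanics, below is a minimal Python sketch of the crisp-set comparison logic: each case is scored 0/1 on a set of conditions and an outcome, and a candidate configuration is assessed by its consistency (share of configuration-matching cases that show the outcome) and coverage (share of outcome-positive cases that the configuration accounts for). The cases, condition names (C1, C2) and scores are invented for illustration; this is not the evaluation's own code or data.

```python
# Minimal crisp-set QCA sketch (illustrative only; cases and condition
# names are hypothetical, not drawn from the DFID portfolio).

# Each case is scored 0/1 on causal conditions (C1, C2) and an outcome (O).
cases = [
    {"id": "case_a", "C1": 1, "C2": 1, "O": 1},
    {"id": "case_b", "C1": 1, "C2": 1, "O": 0},
    {"id": "case_c", "C1": 1, "C2": 0, "O": 1},
    {"id": "case_d", "C1": 0, "C2": 1, "O": 0},
    {"id": "case_e", "C1": 0, "C2": 0, "O": 0},
]

def consistency_and_coverage(cases, configuration, outcome="O"):
    """Consistency: share of cases matching the configuration that show the outcome.
    Coverage: share of outcome-positive cases that match the configuration."""
    matching = [c for c in cases if all(c[k] == v for k, v in configuration.items())]
    outcome_cases = [c for c in cases if c[outcome] == 1]
    both = [c for c in matching if c[outcome] == 1]
    consistency = len(both) / len(matching) if matching else float("nan")
    coverage = len(both) / len(outcome_cases) if outcome_cases else float("nan")
    return consistency, coverage

# Test the configuration "C1 present AND C2 present" against the outcome.
cons, cov = consistency_and_coverage(cases, {"C1": 1, "C2": 1})
print(f"consistency={cons:.2f}, coverage={cov:.2f}")  # consistency=0.50, coverage=0.50
```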

Page 5

10 Step Approach

1. Database of projects meeting inclusion/exclusion criteria (produced 180 SAcc projects)
2. Screen for quality of outcome contribution analysis (produced 50 SAcc projects)
3. Representativeness analysis (population of 2,379 DFID projects screened)
4. Literature review and DFID consultations
5. Identifying and coding project ‘conditions’ (QCA contexts, mechanisms and outcomes (CMOs))
6. Extracting data and scoring conditions (QCA binary scoring)
7. Developing testable hypotheses (hypotheses expressed as CMO configurations)
8. QCA of hypothesis configurations
9. Selecting hypotheses and project cases for narrative analysis (sample included ‘true positive’ and ‘false positive’ cases selected via a Hamming distance measure; illustrated below)
10. Narrative analysis of 13 projects (analysis supplemented with key informant interviews)
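Step 9 refers to selecting ‘true positive’ and ‘false positive’ cases via a Hamming distance measure. The sketch below illustrates the underlying idea with invented binary case profiles: for each true positive, pick the false positive whose coded conditions differ in the fewest positions, so the paired cases are as similar as possible despite their different outcomes. This is an illustration of the idea only, not the evaluation's actual selection procedure.

```python
# Illustrative sketch of Hamming-distance case matching (hypothetical data).
# True positives: the configuration was present and the outcome occurred.
# False positives: a similar configuration was present, but the outcome did not occur.

def hamming(profile_a, profile_b):
    """Number of conditions on which two binary case profiles differ."""
    return sum(a != b for a, b in zip(profile_a, profile_b))

# Binary profiles over, say, five coded conditions (mechanisms, context flags, ...).
true_positives = {"tp_1": [1, 1, 0, 1, 0], "tp_2": [1, 1, 1, 0, 0]}
false_positives = {"fp_1": [1, 1, 0, 0, 0], "fp_2": [0, 1, 1, 0, 1]}

# For each true positive, find the most similar false positive: the pair with the
# smallest Hamming distance is the most informative comparison, because the two
# cases differ on the fewest conditions yet show different outcomes.
for tp_id, tp_profile in true_positives.items():
    fp_id, dist = min(
        ((fp_id, hamming(tp_profile, fp)) for fp_id, fp in false_positives.items()),
        key=lambda pair: pair[1],
    )
    print(f"{tp_id} best matched with {fp_id} (distance {dist})")
```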

Page 6

Challenges and Lessons

1. The data quality challenge

…using QCA:
• Despite selecting the projects with the best available data and filling data gaps through Key Informant Interviews, 104 of our 1,200 data points were missing
• This required the manual construction of different sub-datasets for each hypothesis and limited our ability to perform more inductive analysis (see the sketch at the end of this slide)

…using narrative analysis:
• The quality of data varied considerably in terms of coverage and analytical depth
• Key Informant Interviews were effective in deepening our understanding, but a tight timeline prevented us from reaching more than 20 stakeholders, leaving a mixed evidence base
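A minimal sketch of the sub-dataset construction described above, assuming binary scores with missing values recorded as None. The case identifiers, condition names and hypotheses are hypothetical; the evaluation carried out this step manually rather than in code.

```python
# Sketch of building per-hypothesis sub-datasets when some data points are
# missing (None). Condition names and values are hypothetical.

dataset = {
    "case_a": {"M1": 1, "M7": 1, "C3": None, "O2": 1},
    "case_b": {"M1": 0, "M7": None, "C3": 1, "O2": 0},
    "case_c": {"M1": 1, "M7": 0, "C3": 0, "O2": 0},
}

# Each hypothesis only uses a subset of conditions plus an outcome.
hypotheses = {
    "H1": ["M1", "M7", "O2"],   # e.g. M1 AND M7 -> O2
    "H2": ["C3", "O2"],         # e.g. C3 -> O2
}

# Keep only the cases with complete data on the conditions a hypothesis uses.
for name, columns in hypotheses.items():
    subset = {
        case: {col: scores[col] for col in columns}
        for case, scores in dataset.items()
        if all(scores[col] is not None for col in columns)
    }
    print(name, "->", sorted(subset))  # H1 -> ['case_a', 'case_c']; H2 -> ['case_b', 'case_c']
```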

Page 7

Challenges and Lessons

2. The challenge of unpacking ‘context’

…using QCA:
• Despite including context conditions in our hypotheses, we were not able to generate interesting or strong associations involving project contexts
• Our context conditions were based on nationally comparable global indices and were possibly too broad (see the hypothetical coding sketch at the end of this slide)

…using narrative analysis:
• At the level of individual case studies, context factors were important but so specific that it was difficult to generalise across cases while also retaining ‘granularity’
• This limited our ability to generate evidence on ‘what works in what contexts’
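One way to read this limitation: coding context from a national index collapses a country-level score into a single binary condition, so every project in the same country gets the same context value regardless of local variation. The sketch below shows that coding step in hypothetical terms; the index, scores, threshold and project identifiers are all invented and are not the indices the evaluation actually used.

```python
# Hypothetical sketch: turning a national-level index score into a binary
# QCA context condition. All index names, scores and thresholds are invented.

country_index_scores = {
    "country_a": 0.62,   # e.g. a national governance-style index score
    "country_b": 0.38,
    "country_c": 0.55,
}

THRESHOLD = 0.5  # arbitrary cut-off for an 'enabling context' condition (illustrative)

# Every project in the same country receives the same binary context condition,
# regardless of sub-national or sector-specific differences between projects.
projects = [
    {"id": "proj_1", "country": "country_a"},
    {"id": "proj_2", "country": "country_a"},
    {"id": "proj_3", "country": "country_b"},
]

for project in projects:
    score = country_index_scores[project["country"]]
    project["enabling_context"] = 1 if score >= THRESHOLD else 0

print(projects)
```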

Page 8

Challenges and Lessons

3. The challenge of sequencing and iteration

Combining QCA with narrative analysis required sequencing each evaluation step carefully, which resulted in a long timeline:
– Finalising hypotheses before data extraction/coding
– Finalising QCA before narrative analysis

But subsequent steps threw up additional factors, hypotheses and data points that would have required revising preceding steps, i.e. iteration, for which there was insufficient time.

Page 9

Challenges and Lessons

4. The utility of mixing methods

Combining QCA with narrative analysis proved useful. The narrative analysis helped interpret and explain QCA findings, while QCA provided numerical evidence based on a relatively large number of cases to back up our findings.

The next slide presents an example finding from the macro evaluation:

Page 10

Hypothesis:
Higher-level (at-scale) service delivery (O2) is achieved only when SAcc mechanisms include support for feeding evidence and learning into higher-level discussions (M7) and higher-level legislative and policy change (M1).

QCA finding:
Hypothesis rejected. The combination of feeding evidence upwards (M7) and directly supporting policy change (M1) is neither necessary nor sufficient for improved service delivery at scale. However, it improves the likelihood of success. In the cluster of 24 cases where both mechanisms were present, there was evidence of improved higher-level service delivery in 7 (29%) of cases. In the cluster of 5 cases where both mechanisms were absent, there was no case of improved higher-level service delivery. With coverage of 53% and consistency of 29%, the combination of M1 and M7 shows the strongest association with the outcome. No single condition was necessary, sufficient, or strongly associated with the outcome.

Narrative analysis finding:
The narrative analysis confirmed that linking SAcc to higher-level policy advocacy (M1) through the upward feeding of evidence (M7) could help improve service delivery at scale. The contribution of these mechanisms to improved outcomes was explained by the following factors:
• SAcc processes worked better when embedded in policy or programme frameworks
• Upward feeding of evidence was effective when channelled directly into these policy and programme processes
• Upward feeding of evidence was effective when combined with citizen participation in key decision-making processes.
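As a quick sanity check on the figures quoted above: consistency is the share of configuration-positive cases that also show the outcome, so 7 of 24 cases gives roughly 29%; coverage (the share of all outcome-positive cases captured by the configuration) is reported as 53% but cannot be recomputed here because the slide does not give the total number of outcome-positive cases.

```python
# Quick check of the reported consistency figure (numbers taken from the slide;
# the coverage figure depends on the total count of outcome-positive cases,
# which the slide does not report, so it is not recomputed here).

cases_with_m1_and_m7 = 24   # cases where both mechanisms were present
of_which_outcome = 7        # of those, cases with improved higher-level service delivery

consistency = of_which_outcome / cases_with_m1_and_m7
print(f"consistency = {consistency:.0%}")  # consistency = 29%
```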

Page 11

Thank you

For more information: http://www.itad.com/knowledge-and-resources/dfids-macro-evaluations/