DRAFT report

Measuring ‘Real’ Impact

Report of a workshop held on 25 June 2012 at the UK Collaborative on Development Sciences (UKCDS)


Acknowledgements

We are grateful to the UK Government Office for Science (GO-Science) for funding this initiative, to the UK Collaborative on Development Sciences for hosting the event and to the Natural Environment Research Council for supporting the Knowledge Exchange Fellowship with which this activity was linked. We would also like to thank all partners who collaborated on this event, including Enhancing Learning and Research for Humanitarian Assistance, the Humanitarian Futures Programme, King’s College London, Save the Children UK, the Consortium of British Humanitarian Agencies, the Development Studies Association and HelpAge International.

Acronyms and abbreviations

ALNAP   Active Learning Network for Accountability and Performance in Humanitarian Action
AR      Action research
CBHA    Consortium of British Humanitarian Agencies
CDAC    Communicating with Disaster Affected Communities
DFID    UK Department for International Development
DRR     Disaster Risk Reduction
DRM     Disaster Risk Management
DSA     Development Studies Association of the UK and Ireland
ECB     Emergency Capacity Building
ELRHA   Enhancing Learning and Research for Humanitarian Assistance
HAI     HelpAge International
HEFCE   Higher Education Funding Council for England
HERR    Humanitarian Emergency Response Review
HFP     Humanitarian Futures Programme
IDRC    International Development Research Centre
INTRAC  International NGO Training and Research Centre
KCL     King’s College London
NERC    Natural Environment Research Council
NGO     Non-Governmental Organisation
ODI     Overseas Development Institute
REF     Research Excellence Framework
RELU    Rural Economy and Land Use Programme
RIMES   Regional Integrated Multi-Hazard Early Warning System
SIAM    Stakeholder Impact Analysis Matrix
UKCDS   UK Collaborative on Development Sciences


Contents

Acknowledgements.................................................................................................................................. 2

Acronyms and abbreviations .................................................................................................................... 2

EXECUTIVE SUMMARY ............................................................................................................................. 4

Challenges ....................................................................................................................................................... 4

Methodologies to measure impact ................................................................................................................. 6

Conclusions ..................................................................................................................................................... 6

WORKSHOP REPORT ................................................................................................................................ 9

Introduction .................................................................................................................................................... 9

Background to the workshop ........................................................................................................................ 10

A selection of approaches and methodologies ............................................................................................. 10

Theory of Change .......................................................................................................................................... 11

Outcome mapping......................................................................................................................................... 12

Action research ............................................................................................................................................. 13

HEFCE’s Impact Criterion for the REF 2014 ................................................................................................... 13

Group and plenary discussion of methodologies ......................................................................................... 15

Creative conversation exercise ..................................................................................................................... 16

Case studies ................................................................................................................................................... 18

Conclusions ................................................................................................................................................... 21

ANNEX 1: AGENDA AND PARTICIPANTS .................................................................................................. 24

Figures

Figure 1. Theory of Change cycle .......................................................... 11
Figure 2. Outcome mapping stages .......................................................... 12
Figure 3. Action research cycle ........................................................... 13
Figure 4. REF 2014 examples of impact ..................................................... 14
Figure 5. RELU knowledge exchange linkages ................................................ 19
Figure 6. Risk assessment framework ....................................................... 20


EXECUTIVE SUMMARY

When the workshop examined the issues surrounding measuring ‘real’ impact, several challenges were raised. These fell into a series of clear groupings around measurement itself: how to measure, what to measure and when to measure. These issues were closely related to who was setting the agenda for demonstrating impact and from whose perspective the measurement was taking place. This in turn fed into what incentivises people to act in a manner that will have a particular type of impact. A further major consideration was how impact measurement affects short- and long-term decision-making around activities, and whether impact measurement is a disincentive to true learning unless the process explicitly recognises the value of learning from failure.

Challenges

Measurement

Measuring impact is very difficult in emergency environments. Few humanitarian agencies, whether local or international, are in a position to conduct rigorous impact evaluations or trials of the sort that have been beneficial in health and other sectors. Field tests to date have highlighted both the challenge and the importance of simplicity and feasibility: users should not be overburdened with large sample sizes or overly sophisticated tools and analysis.

A significant set of issues revolves around time and attribution. In the case of an activity or series of activities to which scientific contributions are made, the farther away from the initial ‘input’ a contribution occurs, the more difficult it becomes to attribute its impact. Impact happens over a continuum, and capturing this kind of information (in essence a moving target) requires more than the standard, quantifiable and linear measurement approaches.

It was also generally acknowledged that measuring ‘real’ impact is a complex and resource-intensive process, a fact not fully acknowledged by funders. More time and resources are needed within the overall programme/project structure to build these measures in at the start, establish baselines and then allow sufficient time to measure change throughout and after a programme or project.

Whose perspective

There was much discussion around the question of whose perspective counts, or rather the differences in agenda and motivation between institutions, disciplines, research programmes and individual researchers when measuring the impact of academic work and research programmes. The goals of impact measurement shift in relation to the perspective, incentives and power dynamics in play, all of which directly influence how impact is characterised and reported.

The voices of ‘beneficiaries’ or communities at risk are rarely heard, and it is rarely they who set the agenda, despite the emphasis on participatory approaches in humanitarian and development practice. The development sector and longer-term-focused initiatives are generally more advanced in their use of participatory approaches, but their practice is still far from perfect. The humanitarian sector would benefit from greater engagement with these approaches, particularly when seeking to identify impact at community level. However, capacity and motivation to measure impact are often linked to incentives, which highlights the importance of identifying for whom the impact measurements are being undertaken. The impact that a donor is looking for may well be different from that which at-risk communities are seeking, and this will affect how the impact is measured. It is important to establish the ‘why’ and ‘for whom’ in order to inform the process or method used.

Incentives

The humanitarian and development sectors are informed by a range of disciplines and have made efforts to develop integrated approaches, yet compartmentalisation remains very strong. Humanitarian and development efforts are often disjointed, despite significant overlap in the areas and communities in which their respective activities are undertaken. Disaster Risk Reduction (DRR) and Disaster Risk Management (DRM) approaches and development activities are broadly similar in operational terms, yet separate communities and languages have evolved in parallel. There are also silos across disciplines: economists, for example, find it hard to talk to anthropologists. Much of this silo-isation comes down to how each silo is incentivised, rewarded and funded.

Incentives and rewards do not currently encourage more ‘user’-focused research, although there is movement by the Research Councils, such as the Natural Environment Research Council’s (NERC) Pathways to Impact, and the inclusion by the Higher Education Funding Council for England (HEFCE) of impact as a criterion for the UK’s Research Excellence Framework (REF) in 2014. The current paradigm incentivises scientists to provide information, but very few are incentivised to integrate that information in a form relevant to the users of science. However, the incorporation of impact into the UK academic REF could bring about changes in the future. Among the methodologies showcased at this workshop, the Rural Economy and Land Use (RELU) programme and Action Research offer more user-led approaches.

Learning from those approaches which do support durable impact highlights the importance of greater collaboration between the physical and social sciences to contextualise and embed messages that will ultimately lead to behaviour change.

Long-term/short-term tensions

The process of demonstrating short-term impact can undermine longer-term goals (such as strengthening democracy) if done in the wrong way: you might get ‘more’ impact through private advocacy via personal contacts, but this does not strengthen longer-term governance. Humanitarian intervention demands short-term impact (a short funding cycle), which might undermine longer-term development impact. This is particularly a problem where humanitarian aid is used to cover chronic social-service gaps rather than to respond to sudden-onset disaster. In general, humanitarian aid supports communities to return to what ‘was’, the pre-existing situation of vulnerability, rather than to build longer-term resilience or transformative change. Development work is more focused on lasting social and political change and lends itself to more sustainable impact. However, the ‘Build Back Better’ agenda has the potential to bridge this divide.

True learning

Too strong a focus on impact can hijack true reporting to donors. The desire constantly to demonstrate positive impact, rather than acknowledge learning from failure, can lead to a lack of integrity in reporting. There might also be a reluctance to report unintended consequences or impacts if the donor requires too rigid a reporting format based solely on anticipated impact, rather than allowing learning as the project/programme proceeds and adapting accordingly, a process which may require a facility to reallocate resources.

Workshop discussions raised a key question: if the evidence base is increased, will that make the humanitarian sector more reliant on certainty in order to act, rather than strengthen its flexibility to act on uncertainty? If ‘good’ evidence is always required, this could lead to stasis.


Methodologies to measure impact

The academic community is now required to demonstrate the impact of its research, as impact is one of three criteria by which the REF assesses research excellence. This is the first REF to include impact, and it will be interesting to see whether it has a positive effect on research uptake and leads to research being communicated in a manner more accessible to directly affected communities. Three other methodologies were outlined which could be considered more ‘proactive’ and applied:

The Theory of Change has gained much traction within donor organisations such as the UK’s Department for International Development (DFID) and provides a collaborative, participatory framework combined with a systemic view that can accommodate complexity and non-linear, iterative change processes more suited to a humanitarian environment.

Outcome mapping was originally designed for measuring small-scale socio-political change but has since been scaled up to accommodate macro policy-level change as well. Again, it is a flexible tool that allows for complexity and works well with other processes, including Theories of Change and the Logframe.

Action research can be used for a wide range of purposes, including research, individual and organisational learning, monitoring and evaluation, and impact evaluation. It is again highly participatory, embraces complexity and is very much practitioner-focused. It recognises different knowledge sources, encouraging ongoing input from all stakeholders.

While all of these methodologies embrace complexity and are highly participatory, they are resource-intensive in both time and money. Additionally, because they are qualitative rather than linear quantitative methods, some in the academic community consider that they lack rigour. There is thus a need for compromise, negotiation and astute methodology selection. The three case studies1 in this report provide a rich demonstration of how three very different research programmes have grappled with these challenges and found effective pathways for impact measurement that can be developed and applied in different contexts.

Conclusions

Suggested approaches to understanding what works and what doesn’t

Bottom-up approach

Donors should let vulnerable communities lead the impact agenda, as ultimately they should be the beneficiaries. Allocating donor funding directly to affected communities would allow them to undertake their own impact assessments of development and humanitarian interventions.

Impact should be about social change, which is complex but should not be shied away from. Impact measurement should focus on both intended and unintended consequences.

Governance and monitoring

There was concern that the current approaches to measuring impact – through a process of self-evaluation to satisfy criteria that differ between donors and research funders – lack independence and rigour. Consequently, debate is required concerning the benefits and risks associated with establishing an independent, external body to evaluate the impact of the development and humanitarian sectors.

1 a. Amy Proctor, Rural Economy and Land Use (RELU) Programme: Stakeholder Impact Analysis Matrix
b. Daniel McAvoy, University of East Anglia (UEA): Contributions to Change: A Guide for Evaluating Change After Rapid Onset Emergencies
c. S H M Fakhruddin, RIMES, Bangkok: Presentation; background paper: Forecasts for Community Applications

The meaning of ‘impact’ differs between donors and research funders. There is a need to map the various methodologies, including the monitoring, planning and evaluation approaches. This would provide the information with which to coalesce impact definitions and methodologies and begin to develop a common framework bringing together the types of impact sought across actors and sectors.

Consideration is needed of whether criteria for the impact assessment of humanitarian and development interventions should be separate. Insights could be drawn from the impact assessments of other sectors that incorporate protection and assistance functions, such as the police or fire services.

The practicality, feasibility and affordability of impact assessment must be considered.

Practical approaches and case studies regarding how to measure impact are needed. However, within the context of natural disasters, preparedness can provide a useful framework with which to evaluate the impact of earlier interventions.

In order to understand and measure impact, coherent baseline studies are as important as evaluations. Often funding is only available for evaluations (if at all), whereas studies that assess the status quo (to provide a baseline against which to assess impact) are often overlooked or assumed to have been undertaken. One workshop participant described this as the need for ‘pre-impact evaluations’.

Are measures complementary?

The emphasis on impact is based on a desire to understand what works and to encourage greater accountability within the development and humanitarian sectors. It was noted, however, that an over-emphasis on the attribution of impact could lead to the delivery of only those interventions whose effects can be readily measured.

Impact measurement is also driven by a need to justify investments, and this should be challenged, because it is difficult (if not impossible) to distinguish the impacts of a multitude of different funding sources operating in one context.

Across the various scientific and professional disciplines the understanding of what constitutes positive (and negative) impact varies. Efforts should be made to distinguish where the areas of commonality lie, as focusing on achieving impact in these areas could provide the stimulus for more joined-up and holistic international engagement.

What is the role of science in all this?

Research and evidence needs

Research and evidence are needed to help establish a rigorous and transparent framework for measuring impact. Most pressing is the need to develop a sophisticated understanding of intended and unintended consequences.

The ethics of measuring impact must also be considered, with the objective of understanding what processes and principles are appropriate. The involvement of at-risk communities should be central to this analysis to ensure the legitimacy of the findings and recommendations.

Science-actors should also think about how they can collaborate with communities directly, so that communities can better understand how they can use and benefit from scientific information and outside support.

Interdisciplinary approaches should be encouraged for understanding and measuring impact.

Enhancing research impact within the humanitarian and development context

The impact agenda has increased the need for scientific research to be socially relevant and useful. While the need to support more blue-skies thinking cannot be denied, the model of ‘predict and provide’ is no longer fit for purpose. Instead, the development of knowledge must become a two-way process in which potential users and beneficiaries (at-risk communities) are, where possible, brought into the entire research process.

Tools

The development and integration of quantitative and qualitative approaches to impact measurement are needed in order to provide balanced assessment.

Prevailing approaches to impact measurement have operated at the project, programme or institutional level. If the impact agenda is to drive future donor investments, there is a need to draw together impact assessments across disciplines in order to provide a more holistic understanding of the impact landscape and the interrelationships within it.

Additional points

The Foresight report should:

navigate around contested incentives of impact assessment while identifying areas of shared interest across stakeholders

highlight consideration of the differences between what the humanitarian and development communities deliver, what the research community produces, and what communities identify as their needs.


WORKSHOP REPORT

Introduction

“Not everything that can be counted counts, and not everything that counts can be counted” Albert Einstein

The role of impact is in the ascendant for both research and practice within the humanitarian and development communities, as well as within economic, social and natural-science research. The desire to demonstrate value for money has further underpinned this process. However, it is a contentious area for a number of reasons and raises concerns about how impact should be measured.

In the highly interdisciplinary environment in which humanitarian and development communities work, quantified measurement is not always appropriate, and a number of methodologies have arisen over the years to capture differences achieved through employing more qualitative approaches. This is particularly important in attempts to demonstrate changes in people’s wellbeing or, more importantly, their own perception of their wellbeing.

A number of initiatives in the UK have looked at how science, learning and research feed into humanitarian and development work, as well as exploring the difficulties of working with emerging, uncertain scientific information. In the humanitarian context, working with uncertainty is all part of the terrain. The previous workshop in this programme2 highlighted the importance of furthering understanding of how attitude, perception and subjective norms influence the way in which scientifically-based information permeates through to decision making within organisations and communities. A track record of being able to demonstrate impact is one way to underpin these messages.

This report provides a synthesis of the discussion at a one-day workshop held on 25 June 2012, drawing on the two commissioned background papers,3 the wealth of examples discussed at the workshop, and further methodologies and approaches to measuring impact identified during preparations for the workshop. The fundamental question we sought to answer at the workshop was: how do you measure impact?

With impact meaning different things to different people, some felt it more useful to use the words ‘change’ or ‘contribution to change’ rather than impact, to reflect the more outcome-related discourse of the social sciences that is better suited to describing behavioural changes within at-risk communities. Terry Smutylo from the Outcome Mapping Learning Community summed this up as follows:

Impact implies:                               The reality is:
cause and effect                              an open system
positive, intended results                    unexpected positive and negative results occur
a focus on ultimate effects                   upstream effects are important
credit goes to a single contributor           multiple actors create results and need credit
the story ends when the programme succeeds    the change process never ends

A significant set of issues also revolves around time and attribution. For an activity or series of activities to which scientific contributions are made (on the basis of either physical science or social science), the farther away from the initial ‘input’ a contribution is, the more difficult it becomes to attribute its impact. Impact occurs over a continuum, and capturing this kind of information (in essence a moving target) requires more than the standard, quantifiable and linear measurement approaches.

2 Tolerating the Right Kinds of Uncertainty, 28 May 2012, at UKCDS: http://www.devstud.org.uk/tolerating_the_right_kinds_of_uncertainty_workshop_28th_may_2012-104.html
3 The two papers are: ‘Action Research’, Rowan Popplewell, INTRAC, UK; ‘Ensembles Probabilistic Forecasts for Community Level Applications’, S H M Fakhruddin, RIMES, Bangkok


The discussions were set in the context of enabling complex scientific messages to be understood and appropriately applied by directly affected communities. The science might be brilliant, but if it is not understood then the potential of the scientific learning to support directly affected communities is lost. In particular, discussion highlighted the role of social science in supporting appropriate communication and application, as well as contextualising impact.

The Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) study ‘Improving humanitarian impact assessment: Bridging theory and practice’4 states that “the development sector includes several system-wide initiatives to strengthen capacity in impact assessment, but there is no equivalent in the humanitarian sector”. This workshop aimed to:

1. showcase some of the current methodologies and approaches being used by the development sector and gaining traction in the humanitarian sector since the ALNAP study in 2009

2. provide some case studies illustrating the realities of measurement approaches
3. identify how best to develop indicators of shared relevance and consider the relative opportunities and challenges for incentivising cross-disciplinary approaches.

Background to the workshop

Commissioned by the UK Government Office for Science (GO-Science) Foresight project ‘Improving Future Disaster Anticipation and Resilience’ – initiated in response to issues raised in DFID’s 2011 Humanitarian Emergency Response Review (HERR) – the workshop built on existing efforts supported by NERC5 and the UK Collaborative on Development Sciences (UKCDS) to make space for more systematic dialogue between scientists, humanitarian and development policy-makers and the communities and partners with which they work.

The initiative is a collaboration between Enhancing Learning and Research for Humanitarian Assistance (ELRHA), the Humanitarian Futures Programme (HFP), King’s College London (KCL), Save the Children UK, the Consortium of British Humanitarian Agencies (CBHA), the Development Studies Association (DSA) and HelpAge International (HAI).6

A selection of approaches and methodologies

Four methodologies were presented at the workshop, all recognised as credible tools for capturing impact at different stages of the humanitarian/development/DRR cycle, for different actors, groups and operational and decision-making levels, as well as providing an audit trail for attribution.7 These four methodologies were:

1. the Theory of Change
2. outcome mapping
3. action research
4. the Higher Education Funding Council’s Impact criterion for the REF 2014.

4 Proudlock, K, Ramalingam, B and Sandison, P, www.alnap.org/pool/files/8rhach2.pdf, part of the ALNAP Review of Humanitarian Action 2009
5 NERC is supporting a two-year Knowledge Exchange Fellowship within the Humanitarian Futures Programme, King’s College London, focused on strengthening dialogue between scientists and those with humanitarian responsibilities on issues of future vulnerability.
6 Information on the climate science–humanitarian policy exchange is available at http://www.humanitarianfutures.org/about/futuresgroup/exchange
7 While attribution might appear to be self-interested, it is all-important when it comes to incentives and for organisations, scientists and agencies to demonstrate that it is their contributions that have had an impact. The prevailing climate of scarce resources and the need to demonstrate value for money, combined with an ever more crowded market place and a desire to ‘brand’ one’s activities, means that an ability to demonstrate impact will lead to further funding, esteem and credibility.


Two further resources were not presented but were considered highly relevant:
1. the Humanitarian Charter and Minimum Standards of the Sphere Project8
2. the ALNAP study ‘Improving humanitarian impact assessment: Bridging theory and practice’.9

Theory of Change

Theory of Change is a structured approach to thinking systematically through a change process, from intervention to longer-term impact: “the description of a sequence of events that is expected to lead to a particular desired outcome” (Rick Davies10).

This approach is a tool for testing and refining a programme/project model/road map, challenging preconceptions, assisting reflection and catalysing staff to frequently ask themselves, “Are we going in the right direction? Are we doing the right thing to achieve the changes we want to see? What else needs to be happening to support the changes we wish to see?”

Theory of Change is an iterative methodology that can track changes backwards from (1) desired impact [long-term shift] to (2) sphere of indirect influence [medium-term] to (3) direct influence [short-term changes] to (4) the devised programme strategy (see Figure 1 below). Figure 1. Theory of Change cycle

[Figure 1 depicts the Theory of Change cycle. A programme strategy (sphere of control: activities, stakeholder engagement, outputs) produces short-term changes in the sphere of direct influence (partners, collaborators, stakeholders and immediate programme target groups); these lead to medium-term changes in the sphere of indirect influence (policy shapers, knowledge networks, planners, practitioners and stakeholder groups); these in turn lead to long-term, lasting changes. Outputs (products, communications, incentives and networks) are used by the main actor/stakeholder groups, producing changes in, for example, knowledge, attitudes, skills and relationships, then in practices, policies and allocations, which are scaled up and out through actor networks. Contextual drivers shape the whole cycle: socio-economic, political and technological factors; existing policies, practices and beliefs; actors and networks in research, policy and practice; the capacity of target groups to respond; the receptiveness of the context; and organisations, resources, systems and skills. At each stage the questions apply: Impact for whom? Defined by whom? Significant for whom? What needs to be happening to support this change? Source: Isabel Vogel, 25 June 2012.]
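The backward chain and the recurring reflective questions can be sketched as a small data structure. This is an illustrative sketch only: the level names, spheres, example changes and assumptions below are invented for the sketch and are not part of any workshop output.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeLevel:
    """One level in a Theory of Change chain (all content is illustrative)."""
    name: str                 # e.g. "short-term changes"
    sphere: str               # e.g. "direct influence"
    changes: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)  # to be tested and revisited

# Working backwards from the desired impact, as the methodology describes:
chain = [
    ChangeLevel("long-term impact", "beyond influence",
                ["lasting reduction in disaster losses"]),
    ChangeLevel("medium-term changes", "indirect influence",
                ["policy shapers adopt risk information"],
                ["planners trust the evidence"]),
    ChangeLevel("short-term changes", "direct influence",
                ["partners use the outputs"],
                ["outputs reach target groups"]),
    ChangeLevel("programme strategy", "control",
                ["activities, stakeholder engagement, outputs"]),
]

# The reflective questions become checks applied at every level:
for level in chain:
    for a in level.assumptions:
        print(f"{level.name}: is it still true that {a!r}?")
```

Revisiting the chain and re-asking the questions at each level mirrors the iterative cycle described above: the strategy is refined whenever an assumption no longer holds.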

8 http://www.sphereproject.org

9 Proudlock, K., Ramalingam, B. and Sandison, P., 'Improving humanitarian impact assessment: Bridging theory and practice', www.alnap.org/pool/files/8rhach2.pdf, part of the ALNAP Review of Humanitarian Action 2009

10 http://mande.co.uk/blog/wp-content/uploads/2012/04/Evaluablity-of-TOC-criteria3.pdf


Outcome mapping

Application
Outcome mapping was originally designed by the International Development Research Centre (IDRC) in Canada for measuring socio-political change at the micro level, but has since been adapted to deal with more macro-level change.

Approach
Outcome mapping looks at change within and outside an organisation, considering activities, behaviour change and competencies. To monitor change/impact it uses journals and observations, and then analyses the reasons for the change to justify the 'changes' observed. Journals are important, as attribution can be documented.

Figure 2. Outcome mapping stages

Terry Smutylo, Outcome Mapping Learning Community11

11 http://www.outcomemapping.ca


Action research

Figure 3. Action research cycle

Application
Action research (AR) can be used for a wide range of purposes, including research, individual and organisational learning, monitoring and evaluation, and impact evaluation.

Approach
AR is a label that covers a broad family of approaches sharing similar characteristics: they are typically values-based, action-oriented and participatory. AR approaches have not emerged from a single academic discipline; rather, they have developed gradually within a wide range of disciplines and professions, with new approaches continually emerging and existing ones being refined.

HEFCE's Impact Criterion for the REF 2014

Impact is a new criterion for the REF and will carry a weighting of 20%. Under the previous Research Assessment Exercises (precursors of the REF), impact was not recognised. Many welcome the inclusion of impact in the assessment framework as an important step forward, but it has caused some concern among academics, particularly in the social sciences, where attribution can be difficult to demonstrate. Many of the monitoring and impact tools currently being developed within the humanitarian and development sectors might therefore offer support to these communities and could be adapted for use by social and natural science researchers.

The REF assesses the quality and impact of research in all disciplines across all UK universities; it is not academic impact that will be measured but impact outside academia. The primary threshold is academic excellence and the quality of the research; impact is assessed second. Universities will submit four-page case studies, and a peer review process will judge them on a five-point scale, from 'outstanding' to 'no impact'. A pilot exercise in 2010 produced positive results. The case studies will demonstrate:

• impacts that have already taken place, underpinned by high-quality research
• answers to the questions: What research? How did it contribute? What was the impact?
• appropriate evidence and sources of corroboration.

For HEFCE's FAQs on impact, visit http://www.ref.ac.uk/faq/impactcasestudiesref3b


Figure 4. REF 2014 examples of impact

Some examples of impact:
• Public debate has been shaped or informed by research
• A social enterprise initiative has been created
• Policy debate or decisions have been influenced or shaped by research
• A new product has been commercialised
• Enhanced professional standards, ethics, guidelines or training
• Jobs have been created or protected
• Improved business performance
• Changes to the design or delivery of the school curriculum
• The policies or activities of NGOs or charities have been informed by research
• Improved management or conservation of natural resources
• Improved forensic methods or expert systems
• Production costs have reduced
• Levels of waste have reduced
• Improved quality, accessibility or efficiency of a public service
• Enhanced preservation, conservation or presentation of cultural heritage
• Organisations have adapted to changing cultural values
• New forms of artistic expression or changes to creative practice
• More effective management or workplace practices
• Changes to legislation or regulations
• Enhanced corporate social responsibility policies
• Research has informed public understanding, values, attitudes or behaviours
• Improved access to justice, employment or education
• Enhanced technical standards or protocols
• Improved risk management
• Improved health or welfare outcomes
• Research has enabled stakeholders to challenge conventional wisdom
• Changes in professional practice

Summary: advantages and challenges of methodologies

Theory of Change

Advantages:
• Collaborative and participatory; stakeholders can be involved in identifying changes. It also meshes well with other participatory approaches, such as outcome mapping.
• Encourages a systemic view that can accommodate complexity and non-linear, iterative change processes, and so is more contextualised and representative of the processes involved.
• Strategic thinking about outcomes, interventions, implementation and evaluation is integrated.

Challenges:
• Requires some investment of time: the process can generate a lot of material and feel overwhelming.
• Assumptions and hypotheses can be difficult to access and explore in critical depth; a facilitator is often required.
• For researchers, it can be challenging or inappropriate to think about influence and impact before knowing the findings.

Outcome mapping

Advantages:
• Best for measuring social and political change.
• Works with complexity but does not control it.
• Works with many Theories of Change and the logframe.

Challenges:
• As scaling up occurs it becomes harder to map change; however, if there is good evidence for change that can be documented and demonstrated, donors are more likely to accept this as a methodology for measuring impact.
• If donors are keen on 'results, results' and highly quantified information, this may not be the approach for them.
• Does not replace political economy/social research as a way to understand the context and other players.
• Flexibility demands more, rather than less, effort.
• There is no single 'impact', and the approach tends to demand participation and extensive ex-ante discussion.


Action research

Advantages:
• Highly participatory, with researchers and stakeholders collaborating in diagnosing a problem and in designing, implementing and evaluating interventions to resolve it.
• Participation generates ownership, making interventions more targeted and relevant and increasing the 'agency' of stakeholders, who help 'cause' successful outcomes through their own actions and decisions.
• AR approaches facilitate and promote organisational and individual learning, reflexivity and downward accountability (ie, to stakeholders, participants and beneficiaries).
• Contributes to impact evaluation by providing in-depth understanding of local contexts and communities, thus improving the construct validity (whether you are actually measuring what you think you are measuring) of the evaluation.
• Particularly appropriate for evaluating complex programmes.

Challenges:
• Criticised for lacking quality and rigour, particularly by positivist social science.
• Action researchers have sought to counter these critiques in a variety of ways, from devising alternative quality assurance mechanisms to rejecting the critiques outright on the grounds that they judge AR against inappropriate criteria.
• Participatory approaches have been criticised for lacking rigour, as their approach to establishing causal inference is very different to that of experimental or statistical methods, which use counterfactuals or statistical correlations to demonstrate causality.
• As with all approaches, there are some situations where AR is appropriate and others where it is not.
• AR approaches need not be used in isolation; other designs and methods can be used to triangulate or complement findings established through AR, or vice versa, thus improving 'rigour'.

Group and plenary discussion of methodologies

The ensuing discussion at the workshop related impact to social change, with many participants considering that verified social change is what constitutes 'real' impact. Such an understanding underscores the complexity of measuring impact, especially in the humanitarian sector, as the observed social change might not have been intended (or planned for) and can look different in the eyes of different people. Participants also questioned who was behind the impact agenda and what its motives were. Impact is often translated as output measurement, whereas 'real' impact often cannot be 'counted', as it only becomes apparent after a long period of time.

It was generally acknowledged that measuring 'real' impact is a resource-intensive process that is not fully recognised by funders. More time and resources are needed within the overall programme/project structure to build these measures in at the start, establish baselines and then allow sufficient time to measure change throughout and after the programme or project. It was felt that there was some scope for advocacy on this point, given the current climate in which the question of impact and its measurement is gaining such traction.

Interconnectedness and silos

The interconnectedness of the huge range of actors engaged in the humanitarian, DRR and development sectors raises questions concerning the attribution of impact and the different roles of those actors. The impact a donor is looking for may well differ from that which at-risk communities are seeking, and this will affect how impact is measured and assessed. The humanitarian and development sectors draw on a number of disciplines, offering the possibility of approaching situations more holistically. Yet while there are significant overlaps, the humanitarian and development communities often continue to operate in their own silos. While DRR and DRM activities are broadly similar to longer-term development activities, a whole body of parallel expertise and language has emerged here too. There are also silos within and between scientific disciplines: economists find it hard to talk to anthropologists, for example.


The role of connectors and brokers was discussed; it was acknowledged that they may be better positioned to report success, failure and lessons learned. However, they also have strong incentives to create demand for their services, which could deter investment in sustainable institutional capacity building. There is also a need to be wary of too many intermediaries, since messages can become diluted and distorted as more agendas are added to the mix.

How to address long-term/short-term tensions?

Impact can be demonstrated at different points of the crisis cycle. It is also important to take into account work that might not have been put in place by a particular intervention or activity, but without which the intervention would not have had the desired impact. This is especially the case for DRR and DRM activities that fall more within the development sphere and may not be acknowledged by humanitarians in a rapid-onset disaster, such as social safety nets: they were not part of the rapid response activity, but they enabled households to cope better when the crisis happened.

The negative aspects of attachment to impact and its measurement

• The process of demonstrating impact can undermine longer-term goals (such as strengthening democracy) if done in the wrong way: you might get 'more' impact through private advocacy via personal contacts, but this does not strengthen longer-term governance.

• Humanitarian intervention demands short-term impact (short funding cycles), which might undermine longer-term impact; this is particularly a problem where humanitarian aid is used to fill chronic social service gaps rather than to respond to sudden-onset disaster. In general, humanitarian aid supports communities to return to what 'was' – the pre-existing situation of vulnerability – rather than supporting longer-term resilience and transformative change. Development work is more focused on lasting social/political change and lends itself more to 'demonstrating impact'.

• Impact can hijack honest reporting to donors: the desire to constantly demonstrate positive impact, rather than acknowledge learning from failure, can lead to a lack of integrity in reporting. There might also be a reluctance to report unintended consequences or impacts if the donor requires too rigid a reporting format based solely on anticipated impact, rather than learning as the project/programme proceeds and adapting accordingly, which may mean changing resource allocation.

• Managerialism, or "competence without understanding", was cited as a danger to learning and to creating useful feedback on impact.

• Influencing policy solely to demonstrate impact can be undermining. The maxim "If it ain't broke, why fix it?" does not appear to be an option in the quest to constantly demonstrate impact, yet innovation and constant change are not necessarily beneficial for some communities.

• Despite the emphasis on participatory approaches, the voices of 'beneficiaries' or communities at risk are rarely heard and rarely permitted to directly inform the agenda. The development sector and longer-term-focused initiatives are more advanced in their use of participatory approaches, but are still far from perfect. The humanitarian sector would benefit from greater engagement with these approaches, particularly when seeking to identify impact at the community level.

• It is almost impossible to isolate the impact of a single project among the huge range of factors which influence potential impact.

Creative conversation exercise

All workshop participants were divided into three groups: academics, humanitarian practitioners/policy-makers and brokers/connectors. A small number of people from each group were then selected to sit on a central group of chairs (termed the 'fishbowl') and discuss a scenario question, while all other participants sat in a surrounding larger circle, listening to and observing the central discussion.


The scenario for the fishbowl was the 2011 Horn of Africa drought. Information was available, but uncertainty prevailed: why did the system not act on this level of information? The specific question for the fishbowl was: What should your group be doing to demonstrate impact in a context like the 2011 Horn of Africa drought?

Fishbowl 1: humanitarian practitioners/policy-makers

This group discussed measuring impact in the Horn of Africa from an operational agency point of view. Members felt that the most straightforward way to assess impact would have been to compare two communities, one with agency support and one without, and/or to contrast the preparedness of different agencies and how this related to the effectiveness of the support they delivered (although it was noted that context must be considered in such an evaluation). There is also a need to consider the role of DRR work, for example safety net programmes, and its impact when the crisis occurred. It is rare, however, to have a control group available for impact evaluation.

Measuring impact is very expensive and time-consuming. Capacity to measure is linked to incentives and to the entity for which these measurements or demonstrations of impact are undertaken. Moreover, people are absorbed in operations and have little time to dedicate to impact measurement, and information and activities are patchy. It is easy enough to identify the state of a programme at any one time, but demonstrating impact is complicated by the fact that a range of diverse activities are often being undertaken simultaneously. One suggestion for circumventing this was to have a dedicated person in the operations team to critically challenge and question response actions and to evaluate the response process in real time.12 Additionally, some aspects of response work relate to concepts that are difficult to measure, such as 'dignity' and 'alleviation of suffering'. Multi-country responses add further complications, given the difficulties of comparing different situations. Discussants were also acutely aware that humanitarian support represents just a very small part of the picture.

Fishbowl 2: academics

This group began by discussing the differences in agenda and motivation between the institution, the discipline, the research programme and the individual researcher when measuring the impact of academic work and research programmes. Goals shift with perspective, which directly influences how impact is characterised. For example, a researcher might be motivated by the accuracy of earthquake prediction and perceive this to be a measure of impact. However, the communication and effective uptake of the information, as well as whether the findings trigger a response, are equally if not more important, and should also be considered in the impact assessment. This raises the question of whether these additional actions are the responsibility of the researcher when their principal current incentive is peer-reviewed publication. The current paradigm sees scientists as providers of information, with no one integrating that information into a form relevant to users. Impact could also be reconfigured by developing ways to make research questions more directly informed by affected users and decision-makers. Greater collaboration between the physical and social sciences is required to contextualise and embed messages that will ultimately lead to behaviour change.

Fishbowl 3: operational (second round)

The group began by discussing how interventions can have significant unintended consequences. Examples included an intervention meant to prevent livestock from dying which led to a water crisis, and a project supporting enhanced access to resources for some livestock in Kenya which led to conflicts and

12 A number of humanitarian organisations already undertake 'real-time evaluations', but rarely are these processes sufficiently embedded or timely to enable accrued learning to inform the development of ongoing humanitarian operations.


increased raiding between cattle- and camel-herders, with the result that livelihood-related mortality went down while conflict-related mortality went up. The evidence base for impact measurement is important, but the degree to which it is relied upon in situations of uncertainty can be problematic: "Good evidence for everything misses the point sometimes." If 'good' evidence were always required, this could lead to stasis.

Fishbowl 4: intermediaries

The group started by noting the lack of linkages between the operational and academic groups, raising questions as to who is the messenger and who is the intermediary. Some parties seen as intermediaries actually have their own agendas: humanitarian agency advocacy staff, for example, might see an increase in funding after the Horn of Africa drought as an impact of their work, whereas the focus should be on understanding the impact on support for those affected by the crisis. One way to approach the question of intermediaries and where they are needed would be to look at the system and all its networks and identify where their engagement could best add value. The group concluded that intermediaries might be more objective, because they do not have to grapple with positive and negative impact, and are better placed to report failure. On the other hand, being further removed from the 'impact' itself might mean that their responsibility is limited, their only obligation being to 'connect'. Lastly, this group considered that impact should be understood as lasting change, such as a 'culture shift' or system change, which enhances the capacity of the system.

Fishbowl 5: mix of all three groups

It was noted that the differences between the groups' agendas are very pronounced. New institutions are not the answer: it is the approach that needs to change. One group member suggested that simply being aware of our different agendas might not be good enough: "Sometimes you need to stop and completely change." Another asked whether an independent body could track impact, or whether academic and community evaluators were sufficient; the independence of the evaluating party is important. More attention should be given to interaction: where does research filter through, where does it not, and can we learn from those experiences? People rarely talk beyond their institutional or organisational boundaries and fail to overcome existing silos. There is a need to create space for such linkages, yet resources for this are currently limited and it remains unclear where responsibility for creating them lies.

Case studies

Rural Economy and Land Use Programme

The Rural Economy and Land Use (RELU) programme – www.relu.ac.uk – takes an innovative approach to research. The programme involves 4,000 stakeholders, 90 research projects across 40 academic areas, and 500 researchers, fostering a culture of engaged research and a practical philosophy of knowledge exchange. It promotes interdisciplinary research on the social, economic, environmental and technological challenges facing agriculture and rural areas, with a strong focus on building the capacity of physical and social scientists to engage with each other. There is also a conscious effort to include people outside the public sector and academic constituencies (see Figure 5 for knowledge exchange linkages). This user-focused approach was initially challenging for the academics, who were nervous about participating in such a practice-oriented way. However, the programme has devised a methodology that provides a suitable environment for academics to carry out their research without losing rigour, while bringing the science to application. This has had an added benefit, providing academics with case studies they can take to the REF and to publication, as well as providing useful, coherent science for directly affected communities.


Stakeholder Impact Analysis Matrix

A key tool for measuring impact within the RELU programme is the Stakeholder Impact Analysis Matrix (SIAM), which seeks to understand impact mechanisms rather than simply to assess. It keeps an annual track of what stakeholders bring to, and get out of, the programme, tracking stakeholder engagement from the outset, and has a research/knowledge exchange bias in that it tracks the impact of research on policy and practice. The audit trail also helps with attribution. It makes it possible to map the impact of users on science (the quality and relevance of the science) and the impact of science on policy and decision-takers (in their knowledge and practices). Mapping the stakeholder connections allows identification of where 'rewiring' is required and allows for adaptation through the programme. A briefing paper is available at: http://www.relu.ac.uk/news/Consultations/Relu%20response%20to%20REF%20Consultation.doc

Figure 5. RELU knowledge exchange linkages

[Figure 5 maps RELU's knowledge exchange linkages, including visitor/host arrangements, advisory groups, dissemination, feedback and accounting for knowledge exchange.]
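As a rough illustration of how a SIAM-style audit trail might be kept, the sketch below records, per stakeholder and year, what they bring to and get out of a programme. The field names, function names and example entries are hypothetical, invented for this sketch; they are not RELU's actual instrument.

```python
from collections import defaultdict

# Hypothetical SIAM-style record: per stakeholder and year, note what they
# contributed to the programme and what they took from it.
siam = defaultdict(dict)

def record(stakeholder, year, brings, gets):
    """Log one annual entry for a stakeholder."""
    siam[stakeholder][year] = {"brings": brings, "gets": gets}

record("farming association", 2010,
       brings=["local land-use knowledge"],
       gets=["advice on diffuse pollution"])
record("farming association", 2011,
       brings=["trial plots for field experiments"],
       gets=["evidence used in management plans"])

def audit_trail(stakeholder):
    """Return the year-on-year record of what a stakeholder received.

    This trail supports attribution: each item a stakeholder got out of the
    programme can be traced back to the year it was logged.
    """
    return [(year, entry["gets"]) for year, entry in sorted(siam[stakeholder].items())]

print(audit_trail("farming association"))
```

The point of the sketch is the year-on-year trail: because entries are logged as engagement happens, the attribution question ("did the programme contribute to this change?") can be answered from records rather than reconstructed after the fact.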

Emergency Capacity Building Project

The Emergency Capacity Building (ECB) Project is devising a Guide to Evaluating Change after Rapid Onset Emergencies (www.ecbproject.org). The Guide seeks to recognise the complexity of the context in which humanitarian work is undertaken, the tensions between the need to demonstrate impact within projects of short duration and the need for longer-term social change, and the fact that it is often extremely difficult to measure impact in an emergency situation. There are seldom any comparators available to gauge whether an intervention works better for some than for others, for example. This also raises the issue of ethics: it is not possible to deny one community an intervention in order to set up a control situation. The Guide has taken the household and changes to livelihoods as its area of measurement and therefore covers a range of areas, including water, shelter, health, sanitation and food security, enabling a more holistic measurement that cuts across mandates, clusters and sectors. The approach is designed to be robust yet simple enough for field staff who are not necessarily experienced researchers, while enabling the collection of credible evidence on 'contributions to change'. Another important attribute is the methodology's ability to establish a retrospective baseline, as in most emergencies baseline information from before the crisis is not easily available.

Regional Integrated Multi-Hazard Early Warning System

The Regional Integrated Multi-Hazard Early Warning System (RIMES) – www.rimes.int – works across countries in South and South-East Asia and East Africa. Early warning is a key element of DRR. However, advances in generating hazard risk information have not yet been incorporated into operational forecast systems, and consequently operational forecasts have not been integrated into decision-making processes to reduce disaster risks. RIMES has identified three main reasons for the failure to make better use of science to support at-risk communities:
• Science and technology are not understood.
• Science and technology are understood but ignored – "You can't eat information" – and so are not prioritised.
• Science and technology are understood and the desire to act is there, but there are no resources to do so.
Yet RIMES' work highlights that, contrary to popular opinion, communities are very able to take on probabilistic information if it is provided in accessible and relevant ways. The presentation of RIMES' work showcased a project in Bangladesh to develop 24/7 early warning information with communities at risk of flooding. The approach underlines the need to design location-specific forecasts tailored to specific user needs.
The Decision Support System, which integrates scientific and community information with community vulnerabilities, was applied to a pilot area in the Brahmaputra river basin in Bangladesh for the agricultural sector. In this project, Ensembles Probabilistic Forecasts for Community Level Applications, the approach ensures that relevant early warning information products are easily accessible to directly affected communities. For example, houses were marked with different colours to denote different levels of risk. RIMES has been working with farmers to produce a 10- to 15-day early warning and, as with the ECB Project, the household was the focus for impact analysis. To support credibility, the information was delivered by a well-known personality.
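The house-marking idea can be illustrated with a minimal mapping from a probabilistic forecast to a colour. The thresholds and colour names below are assumptions made for this sketch; RIMES' actual scheme is not described in the report.

```python
# Illustrative only: mapping a probabilistic flood forecast to a house-marking
# colour, in the spirit of the RIMES Bangladesh pilot. Thresholds and colours
# are invented for the sketch.
RISK_COLOURS = [
    (0.75, "red"),     # high likelihood of flooding at this house
    (0.40, "yellow"),  # moderate likelihood
    (0.0,  "green"),   # low likelihood
]

def house_colour(flood_probability: float) -> str:
    """Return the marker colour for a house given a forecast probability."""
    for threshold, colour in RISK_COLOURS:
        if flood_probability >= threshold:
            return colour
    return "green"

print(house_colour(0.9))  # a hypothetical 90% flood probability
```

The design point echoes the report: probabilistic information becomes usable at community level once it is translated into a concrete, location-specific signal such as a colour on a house.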

Figure 6. Risk assessment framework

[The framework moves through four stages, with a feedback loop from management back to scoping:
• Risk scoping: establish targets and criteria through consultation with stakeholders; identify the possible risk event, sources of stress, stress receptors, and the relationships between sources and receptors.
• Risk characterisation: for each risk event and receptor, estimate the likelihood of exposure to stressors and the consequences of that exposure; develop a risk profile.
• Risk evaluation: compare event and total risks with the targets and criteria; assess existing risk management practices against the risk profile.
• Risk management: evaluate treatment options and develop a strategy based on the chosen option.]

Conclusions

Suggested approaches to understanding what works and what doesn’t

Bottom-up approach

Donors should let vulnerable communities lead the impact agenda, as ultimately they should be the beneficiaries. Allocating donor funding directly to affected communities would allow them to undertake their own impact assessments of development and humanitarian interventions. Such an approach would ensure the legitimacy of impact assessment, and the incentive to achieve positive impact would come directly from the communities themselves. However, it was noted that such an approach would be challenging and would be likely to require support over multiple years and measures to develop approaches appropriate for all partners.

Workshop participants felt that impact should be about social change, which is complex, but we should not shy away from that subject. Impact measurement should focus on both intended and unintended consequences.

Governance and monitoring

There is a concern that the current approaches to measuring impact – through a process of self-evaluation to satisfy criteria that differ between donors and research funders – lack independence and rigour. Consequently, debate is required concerning the benefits and risks associated with establishing an independent, external body to evaluate impact in the development and humanitarian sectors.

The meaning of ‘impact’ varies between donors and research funders. In the UK context, for example, HEFCE assesses impact retrospectively, whereas the Research Councils look for the future impact of research. Similarly, the suggested methodologies (both quantitative and qualitative) for measuring impact differ between donors and funders. There is a need to map these various methodologies, including the associated monitoring, planning and evaluation approaches. This would provide the information needed to consolidate impact definitions and methodologies and to begin developing a common framework that brings together the types of impact sought across actors and sectors.

There is a need to consider whether the criteria for impact assessment of humanitarian and development interventions should be separate, or whether a single framework requires the flexibility to incorporate preventative (i.e., greater resilience from enhanced socio-economic development), transformative and responsive roles. Insights could be drawn from impact assessments in other sectors that combine these functions, such as the police and fire services, or the European Civil Protection Mechanism responding to a security breach.

Similarly, the practicality, feasibility and affordability of impact assessment must be considered. While workshop participants were not in agreement on this issue, it was suggested that the development and humanitarian sectors – which are broad and diverse – should, for the purposes of impact assessment, be split into their component parts. This was challenged, however, as it was thought it could promote fragmentation within the sector. A possible solution voiced was to assess ‘clusters’ that worked together in the field.

Practical approaches and case studies concerned with how to measure impact are needed. Within the context of natural disasters, for example, preparedness can provide a useful framework for evaluating the impact of interventions. Measures to improve the preparedness of at-risk communities can be evaluated through a ‘post-mortem’ exercise following an event, which would not only assess the impact of interventions but also establish which measures were effective and which were not.

Coherent baseline studies are as important as evaluations in aiding the understanding and measurement of impact. Funding is often available only for evaluations (if at all), whereas studies that establish the status quo, against which an impact assessment could later compare, rarely take place. One workshop participant described this as the need for “pre-impact evaluations”.


Forms of impact measurement which recognise and support sustainability and cross-disciplinary approaches

The emphasis on impact is based on a desire to understand what works and to encourage greater accountability within the development and humanitarian sectors. However, an excessive focus on the attribution of impact could lead to the delivery of only those interventions whose effects can be readily measured. Moreover, to ensure the longer-term sustainability of development and humanitarian interventions, the degree to which in-country capacity and skills are enhanced will need to be assessed as a complementary measure.

Impact measurement is also driven by a need to justify investments, and this should be challenged, because it is difficult (if not impossible) to distinguish the impact of an individual funding source in a context with multiple funders.

Across the various scientific and professional disciplines the understanding of what constitutes positive (and negative) impact varies. Efforts should be made to distinguish where the areas of commonality lie, as focusing on achieving impact in these areas could provide the stimulus for more joined-up and holistic international engagement. This exercise would also establish areas of conflicting objectives, awareness of which would be equally valuable in efforts to understand and interpret impact.

What is the role of science in all this?

Research and evidence needs

Research and evidence are needed to help establish a rigorous and transparent framework for measuring impact. Most pressing is the need to develop a sophisticated understanding of intended and unintended consequences – over the short, medium and long term – of impact assessment for all stakeholders.

The ethics of measuring impact must also be considered with the objective of understanding what processes and principles are appropriate. The involvement of at-risk communities should be central to this analysis in order to ensure the legitimacy of the findings and recommendations.

Scientific actors should also consider how they can collaborate with communities directly, so that communities can better understand how to use and benefit from scientific information and outside support.

Interdisciplinary approaches to understanding and measuring impact should be encouraged.

Enhancing research impact within the humanitarian and development context

The impact agenda has increased the need for scientific research to be socially relevant and useful. The ‘predict and provide’ model is no longer fit for purpose. Instead, the development of knowledge must become a two-way process in which potential users (humanitarian and development actors) and beneficiaries (at-risk communities) are, where possible, brought into the entire research process (concept, data collection, analysis and dissemination). To complement this approach, more extensive outreach is required to ensure that non-scientific stakeholders are aware of the opportunities and limitations of greater engagement with the scientific community.

Tools

The development and integration of quantitative and qualitative approaches to measuring impact is needed to provide balanced assessment. To make this possible, it will be crucial for academic communities to come together and for a common language between disciplines to be developed.

Prevailing approaches to measuring impact have operated at project, programme or institutional level. If the impact agenda is to drive future donor investments, there is a need to draw together impact assessments across disciplines in order to provide a more holistic understanding of the impact landscape and the interrelationships within it. As above, contributions from multiple disciplines will be needed to make this effective.


Additional points

The Foresight report should:

• navigate around the contested incentives of impact assessment while identifying areas of shared interest among stakeholders
• highlight the differences between what the humanitarian and development communities deliver, what the research community produces and what communities identify as their needs.


ANNEX 1: AGENDA AND PARTICIPANTS

Download presentations from http://www.devstud.org.uk/measuring_real_impact-105.html. Commissioned by the Government Office for Science’s Foresight project, ‘Improving Future Disaster Anticipation and Resilience’ – which was initiated in response to issues raised in DFID’s 2011 Humanitarian Emergency Response Review (HERR) – this workshop builds on existing efforts supported by the Natural Environment Research Council (NERC) and the UK Collaborative on Development Sciences (UKCDS) to make space for more systematic dialogue between scientists and humanitarian and development policy-makers and the communities and partners with which they work.

The current initiative is a collaboration between Enhancing Learning and Research for Humanitarian Assistance (ELRHA), the Humanitarian Futures Programme (HFP), King’s College London, Save the Children UK, the Consortium of British Humanitarian Agencies (CBHA), the Development Studies Association (DSA) and HelpAge International (HAI). Climate scientists from the End to End Quantification of Uncertainty for Impacts Prediction (EQUIP) and those who have been participating in a climate science–humanitarian policy exchange coordinated by HFP are also partnering in the first workshop stemming from this collaboration.

Objective

This workshop will consider the different types of impact required in social and natural science research, and in humanitarian and development policy and practice, when engaging with directly affected communities. The forum will share views from those engaged in developing monitoring and evaluation of impact in these different communities, reflect on what each considers ‘impact’ to be, and seek to identify how best to develop indicators of shared relevance.

Part of the discussion will focus on the current divides across the humanitarian, disaster risk reduction and development communities, and the opportunity for linking these through an integrated resilience lens. The afternoon will showcase and explore a number of approaches which have been effective in creating cross-disciplinary, cross-sectoral frameworks which have resulted in demonstrable impact in strengthening community resilience.

The workshop will also consider the relative opportunities and challenges for incentivising cross-disciplinary approaches.

9.30 Registration and tea/coffee

10.00 Introduction to the initiative and overview of the agenda from project partners Frances Hill, ELRHA/DSA; Daniel Leary, GO-Science; Sean Lowrie, CBHA

10.30 Smorgasbord of selected impact measurement approaches and methodologies
Chair: Sean Lowrie, CBHA

1. Theory of Change – Isabel Vogel
2. Outcome mapping – Enrique Mendizabal
3. Action research – Rowan Popplewell, INTRAC
4. HEFCE’s Impact Criterion for REF 2014 – Graeme Rosenberg, HEFCE

11.30 Coffee to take back to groups

11.45 Panel Q & A/discussion
Facilitator: Sean Lowrie, CBHA

12.30 Lunch

13.00 Creative conversation exercise
Facilitator: Sean Lowrie, CBHA

14.00 Panel/case studies: application and relevance for cross-disciplinary and applied research

Chair: Frances Hill, ELRHA/DSA
a. Amy Proctor, Rural Economy and Land Use (RELU) Programme: Stakeholder Impact Analysis Matrix
b. Daniel McAvoy, UEA: Contributions to Change: A Guide for Evaluating Change After Rapid Onset Emergencies
c. S H M Fakhruddin, RIMES, Bangkok: Forecasts for Community Application


15.00 Tea break

15.30 Group discussions to consider recommendations for GO-Science
Facilitator: Frances Hill, ELRHA/DSA

16.40 Concluding remarks

17.00 Close

Participants

Last name First name Organisation

Aggett Sian Wellcome Trust

Allen Ben Action Against Hunger

Bartimeus Ama Care International

Camburn Jess ELRHA

Dalrymple Charlie RedR

Datta Ajoy ODI

Dempsey Ben Save the Children

Evans Sophie Film crew

Fakhruddin S H M RIMES

Gordon Jessica International Rescue Committee UK

Hanley Teresa Independent consultant

Harris Clare HelpAge International

Hawkes Sarah Institute of Global Health

Hill Frances ELRHA

Houghton Rachel CDAC

Hounjet Marieke CBHA

Knox-Clarke Paul ALNAP

Leary Dan GO-Science Foresight Programme

Lowrie Sean CBHA

McAvoy Danny University of East Anglia

McCaughey Declan Film crew

McLaren Charlie UKCDS

Mendizabal Enrique Mendizabal Ltd

Popplewell Rowan INTRAC

Proctor Amy RELU

Rosenberg Graeme HEFCE

Selfridge Garry KCL

Shaw Andrew DFID

Stevenson Frances HelpAge International

Taylor Stephen World Vision

Taylor Airlie ActionAid

Visman Emma Humanitarian Futures Programme

Vogel Isabel Independent consultant

Warrell Sioned ELRHA