
Independent Development Evaluation
African Development Bank

Made in Africa Evaluations

Fourth Quarter 2019

A Quarterly Knowledge Publication on Development Evaluation

eVALUation Matters

Volume 2: Practical Applications

Page 2: A Quarterly Knowledge Publication on Development Evaluationidev.afdb.org/sites/default/files/documents/files... · Evaluation Association (AfrEA) made it central to the Association’s

© 2019 – African Development Bank (AfDB)

From experience to knowledge... From knowledge to action... From action to impact

eVALUation Matters is a quarterly publication from Independent Development Evaluation at the African Development Bank Group. It provides different perspectives and insights on evaluation and development issues.

Editor-in-Chief:

Jayne Musumba, Principal Knowledge Management Officer

Acknowledgments:

IDEV is grateful to all contributors, reviewers, editors, and proofreaders who worked on this issue, in particular: Karen Rot-Münstermann; Dieter Gijsbrechts; Aminata Kouma; Tomas Zak; Kobena Hanson; Stephanie Yoboue

Editing and translation: Dieudonné Toukam; Melora Palmer

Design & Layout:

Créon (www.creondesign.net)

Photo credits:

❙ Yaayaa Diallo from Pixabay ❙ Ollivier Girard/CIFOR ❙ Josephine Watera ❙ Sharon Ang from Pixabay ❙ Pitu Alquezar from Pixabay ❙ Jeanette Wittstein from Pinterest ❙ Aga2rk from Pixabay ❙ Denzel Denys from Pixabay ❙ Bruno Gandon from Pixabay

About Independent Development Evaluation

The mission of Independent Development Evaluation at the AfDB is to enhance the development effectiveness of the institution in its regional member countries through independent and instrumental evaluations and partnerships for sharing knowledge.

Disclaimer:

The views expressed in this publication belong solely to the authors, and not necessarily to the authors’ employer, organization or other group or individual they might be affiliated with.

Evaluator General:

Roland Michelitsch [email protected]

Managers:

Rufael Ebrahim Fassil [email protected]
[email protected]
Karen Rot-Münstermann [email protected]

Talk to us:

Phone (IDEV) +225 2026 2841 Phone (AfDB standard) +225 2026 4444

Write to us: 01 BP 1387 Avenue Joseph Anoma, Abidjan 01, Côte d’Ivoire

E-mail us: [email protected]

Find us online: idev.afdb.org afdb.org

Connect with us on: @evaluationafdb IDEV AfDB

Page 3: A Quarterly Knowledge Publication on Development Evaluationidev.afdb.org/sites/default/files/documents/files... · Evaluation Association (AfrEA) made it central to the Association’s

Fourth Quarter 2019

In the call for contributions for this edition of Made in Africa Evaluations (MAE), we received many articles, thus warranting two volumes. The first volume focused on the theoretical approaches of MAE. This volume highlights some of the practical applications.

This edition of Made in Africa Evaluations explores indigenous approaches and how they could fast-track the achievement of the continental development agendas – the United Nations Sustainable Development Goals (Agenda 2030) and the African Union Agenda 2063. We asked contributors to respond to one or all of the three following questions: What is meant by “Made in Africa” evaluation and how does it differ from other approaches? What unique insights could an African cognitive lens bring to evaluation? How should countries go about creating indigenous evaluation practices?

Much of the discourse to date on MAE has focused on the conceptual aspects. This volume therefore aims to take the discussion a little further by providing practical experiences of what the concept looks like in practice.

In this volume, the contributions acknowledge the diversity of the continent and the need to understand the cultural context, to appreciate the local knowledge base, to learn from failures, and to use local evaluation tools such as orality and a participatory approach, national evaluation systems and voluntary national reviews, and community-based advocacy platforms called Barazas.

Page 4: A Quarterly Knowledge Publication on Development Evaluationidev.afdb.org/sites/default/files/documents/files... · Evaluation Association (AfrEA) made it central to the Association’s

[Report cover: Synthesis Report on the Validation of the 2016 Project Completion Reports (An IDEV PCR Validation Synthesis), July 2019]

Table of contents

4 From the Evaluator General’s Desk
Roland Michelitsch, IDEV, African Development Bank
This second volume explores the application of the Made in Africa Evaluation (MAE) concept and what MAE evaluations would look like in practice. Given the diversity of Africa, it is important to recognize that, in practical terms, on a continent with one billion people speaking more than 2,000 languages, it would be difficult to establish a one-size-fits-all model of evaluation for Africa that can take account of all the complexities found on the continent. That is why the approaches we explore will likely only provide partial answers.

8 Indigenizing citizen-based M&E mechanisms in Africa: Lessons from Uganda's Barazas
Josephine Watera, Parliament of Uganda
In 2009, the Government of Uganda introduced Baraza, a community-based advocacy platform for technical officers and political leaders to provide evaluative information on service delivery. This article documents the experiences of Uganda in implementing Baraza, as a step towards “Made in Africa Evaluations”, highlighting the platform concept, the history of the decentralization policy framework in Uganda, the CLEAR model and its application to findings, emerging lessons and conclusions.

22 Application of contextual analysis to evaluation methodologies in Africa: The case of contribution analysis
Andrew Anguko, IDEV, African Development Bank
In Africa, where unique contextual factors influence implementation and outcomes, we risk not identifying the right questions for evaluation, ignoring key stakeholders and misinterpreting stakeholder priorities. This article explains step-by-step how evaluators can conduct a contextual analysis in the various phases of the Contribution Analysis Evaluation Methodology, to bring validity to the “Made in Africa Evaluation” practice.

32 Orality and participatory approach for a “Made in Africa” evaluation
Yao Roger Modeste Apahou, Ivorian Network of Emerging Evaluators
Despite having the majority of its historical literature essentially recorded orally, Africa has much to offer to evaluation theory and practice, currently dominated by the Western worldview. This article demonstrates that, far from being anachronistic, the oral history of the continent could provide a solid foundation for promoting an evaluation approach Made in Africa.

[Report cover: Synthesis Report on the Validation of the 2017 Project Completion Reports (An IDEV PCR Validation Synthesis), July 2019]

[Report cover: Evaluation of the AfDB's Integrated Safeguards System: Summary Report (An IDEV Corporate Evaluation), September 2019]

News in pictures, page 72


eVALUation Matters Editorial Calendar 2020:

The editorial calendar for 2020 has been published. Please find the calendar plus guidelines for contributions here: http://idev.afdb.org/en/document/editorial-calendar-2020


Q1 2020 Promoting an evaluation culture in 2020 and beyond

Q2 2020 Preparing evaluation of the future: Big Data, modern technologies and shifts in global development priorities


[Cover: eVALUation Matters, Made in Africa Evaluations, Volume 1: Theoretical Approaches, Third Quarter 2019. Independent Development Evaluation, African Development Bank. idev.afdb.org]

Published December 2019

42 Evaluation of development aid: What are the most appropriate approaches for Africa?
Salah Eddine Bouyousfi, Independent Consultant
Between the demands of donors and the need for accountability, Africa is looking for tools to better conduct evaluations of development aid. Through a review of related literature, this article proposes three approaches – the realistic approach, the empowerment approach, and developmental evaluation – which, although developed in the West, the author deems to be best adapted to the African context.

54 National Evaluation Systems and Voluntary National Reviews: An African approach
Laila Smith and Angelita Kithatu-Kiwekete, CLEAR-AA at Wits University, South Africa
Use of home-grown methods is constrained when the commissioners of evaluations are often sitting in the North. Establishing a National Evaluation System is a critical step in empowering a country to set the rules in order to be able to independently conduct evaluations. This article explores the need for the Voluntary National Review (VNR) reporting that is recommended in the United Nations Sustainable Development Goals. It also highlights the experience of countries implementing their national evaluation systems.

62 Made in Africa evaluation failures: Case studies from the continent
Edited by Nicola Theunissen, with contributions from Jennifer Bisgard, Felix Muramutsa, Zakariaou Njoumemi and Ahmed Ag Aboubacrine
The reasons for evaluation failure in the Global South, specifically in Africa, are vastly different from those found in the Global North. Yet, African stories of evaluation failure remain largely untold and undocumented, driven in part by a reluctance to expose mistakes to the commissioners of evaluations such as donors. This article highlights the many unique evaluation challenges that the African continent is facing, including data transparency, the relationship with government stakeholders, resource constraints and the need to understand cultural context.

72 News in pictures

76 Hot off the press

Find eVALUation Matters at: http://idev.afdb.org/en/page/evaluation-matters-magazine

"The African continent is home to countries with mostly unfavourable contextual variables. Some of these contextual variables, such as political instability, are generally unique to Africa and have the potential to impede the implementation of interventions and negatively influence development outcomes".Andrew Anguko, IDEV, African Development Bank


From the Evaluator General’s Desk

Made in Africa Evaluation (MAE) as a concept and movement is not new. The African evaluation community had already been discussing African indigenous evaluation before the African Evaluation Association (AfrEA) made it central to the Association’s aspiration at its foundation in 1999, culminating in MAE as a pillar of its 2018-2021 strategy. However, in light of the recent review of the international evaluation criteria, MAE has resurfaced as an important part of the discourse on evaluation practice in Africa and the contribution to global evaluation practice by the African evaluation community.

In this context, the Independent Development Evaluation (IDEV) of the African Development Bank is publishing a two-volume edition of eVALUation Matters on MAE. The first volume, published in the previous quarter, focused on some of the arguments for and theoretical approaches to “Made in Africa” evaluation, exploring indigenous approaches and how they could contribute to fast-tracking the achievement of the continental development agenda. It proposed that to gain relevance, accuracy and valuable lessons to improve the development outcomes of initiatives conducted in Africa, these initiatives should be evaluated through an appropriate lens that takes into account the particular realities seen in a vast and varied continent: Made in Africa Evaluation.

This second volume explores the application of the MAE concept and what MAE evaluations would look like in practice. Given the diversity of Africa, it is important to recognize that, in practical terms, on a continent with one billion people speaking more than 2,000 languages, it would be difficult to establish a one-size-fits-all model of evaluation for Africa that can take into account all the complexities found on the continent. That is why the approaches we explore will likely only provide partial answers. We have included in this edition three articles that propose specific approaches for evaluation on the continent, to provide an informative albeit limited picture of MAE, from which perhaps some commonalities can be drawn.

A further article conducts a literature review to propose three distinct methods, while another looks at the existing National Evaluation Systems and Voluntary National Reviews, recommended by the UN Sustainable Development Goals, as a tool for African countries to take control and independently conduct evaluations. Finally, we have included a contribution to the discourse that looks at African stories of evaluation failure to showcase the need for understanding cultural contexts.

A frustration experienced by many practitioners in the African evaluation community is that there seem to be many conversations taking place about what MAE could be, should be or should not be, but no one is able to come to a final agreement, with the varied nature of the continent playing a role here. It would be too much to assume that this magazine is in any position to determine what MAE is or is not. However, by bringing together various perspectives on what MAE could encompass (conceptually in Volume 1, and what it could look like in practical terms in Volume 2), we hope to bring the conversation a step closer to a common understanding.

Happy reading!



About the Evaluator General
Roland Michelitsch is the Evaluator General of the African Development Bank (AfDB). Prior to joining the AfDB, he spent many years with other Multilateral Development Banks. At the Inter-American Development Bank (IDB) Group, he led evaluations of private and public sector activities. With the International Finance Corporation (IFC), he managed the investment unit of the Development Impact Department. He also led IFC’s project evaluation system and framework, and evaluations on various topics. At the World Bank, he worked on corporate governance in Central and Eastern Europe (and in sub-Saharan Africa on population, agriculture and the environment). Roland holds a PhD and MA in Economics from the University of Arizona and an MBA from the University of Graz.
Full bio at: http://idev.afdb.org/en/page/mr-roland-michelitsch-evaluator-general


Indigenizing Citizen-Based M&E Mechanisms in Africa: Lessons from Uganda's Barazas

Finding a path towards sustainable development will require the pooling of diverse perspectives, knowledge and resources, but more importantly a citizen-based approach. As countries take greater ownership of, and leadership in, their development processes, they have increasingly developed their systems to lead, manage and account for the resources invested in these processes and the results produced by them. As part of its efforts to strengthen accountability in public service delivery and improve the performance monitoring of local governments, the Government of Uganda introduced Community-Based Advocacy Fora, called Baraza, in 2009 under a presidential directive. Baraza creates a platform for technical officers and political leaders to provide evaluative information about the status of service delivery to citizens, in turn paving the way for citizens to participate in the development cycle by monitoring the use of public funds and other resources. This article documents the experiences of Uganda in implementing Baraza platforms as a step towards “Made in Africa Evaluations”, highlighting the history of the decentralization policy framework in Uganda, the Baraza concept, the CLEAR model and its application to findings, emerging lessons and conclusions.


Josephine Watera, Parliament of Uganda

Introduction

In an increasingly interconnected world, marked by an international movement towards widely shared information and greater group and individual engagement and solidarity, citizen participation offers renewed opportunities to strengthen democracy, accountability and the rule of law (Mindzie, 2015). In Africa, this renewed participation is made possible by a relatively conducive normative and institutional environment. As a result, citizens have been able to counter poor governance practices perpetuated by the monopolization of power, control over national resources by ruling elites, and the marginalization of groups, including women and youth, who still constitute the largest component of Africa’s population.

The United Nations Agenda 2030 for Sustainable Development sets out 17 goals. At the core of this discussion is Sustainable Development Goal 16, whose promotion of peaceful and inclusive societies, provision of access to justice for all, and building of effective, accountable and inclusive institutions offer additional prospects for strong citizen-based monitoring and for holding African governments accountable.

Similarly, the African Union Agenda 2063 pledges to mobilize people and their ownership of continental programs, and to promote the principle of self-reliance and the importance of capable, inclusive and accountable states and institutions at all levels and in all spheres (African Union Commission, 2015:1).

Key Messages

❚ The global agenda of the Sustainable Development Goals calls for “Leave no one behind” while the African Union Agenda 2063 pledges to mobilize people and their ownership of continental programs.

❚ Evaluation plays a critical role in accomplishing these great aspirations. Evaluations examine actions and results and ask the questions: Are we doing the right thing? Are we doing things right? These questions can only be relevant if asked in the right context, hence the debate on Made in Africa Evaluations.

❚ Barazas are public fora conducted at sub-county level for the local leaders to justify to the people how public funds received for a specific financial year were being utilized and what results were achieved.

❚ Citizen participation in monitoring and evaluation offers renewed opportunities to strengthen democracy, accountability and the rule of law.


Evaluation plays a critical role in accomplishing this pledge. It examines actions and results and asks the questions: Are we doing the right thing? Are we doing things right? Are we getting results that make a difference? Are these the right results, and what is the impact and value? (Sukai, 2013:77). The Baraza platform asks similar questions through an African lens of community involvement.

Uganda’s decentralization policy framework

In 1992, Uganda introduced a decentralization policy in which the central government cedes some of its power to local governments to carry out part of its mandates on its behalf. The policy was strengthened by its inclusion in the 1995 Constitution of the Republic of Uganda and further consolidated in the Local Government Act (1997). Decentralization is both a technical and political process as illustrated in figure 1.

Uganda's decentralization policy was designed to: improve service delivery in local governments and lower levels; strengthen people’s participation in initiating, planning, implementing and controlling their socio-political and economic development; strengthen transparency and accountability in the management of local governments; and promote people’s ownership of development policies.

In 2008, H.E. President Yoweri Museveni of the Republic of Uganda directed that meetings be held at sub-county level across the country as community dialogue platforms that engage the local population and their leaders on matters of service delivery. Since 2009, the Office of the Prime Minister (OPM) has been implementing this directive under a community-based monitoring and engagement mechanism: Barazas.

Figure 1: Technical and political processes of the decentralization policy in Uganda
Technical process: administration; planning; budgeting; financial management; human resources management; monitoring and support; supervision and mentoring.
Political process: leadership; participation; inclusion; representation; and decision-making relations between central and local governments.

“A prosperous and peaceful Africa, driven by its own citizens and representing a dynamic force in the international arena”
Pan African Vision, Para 4, Agenda 2063

“All power belongs to the people...”
Article 1 of the 1995 Constitution of the Republic of Uganda

Setting the Made in Africa Evaluation agenda: The concept of Barazas in Uganda


There is a growing concern across the globe that a one-size-fits-all program evaluation approach according to Western evaluation models is not always appropriate in the cultural and developmental contexts of Africa (Cloete, 2016:55). The concept of a more appropriate Africa-rooted program evaluation management model has now been explicitly placed on the evaluation agenda in Africa. Barazas are public fora conducted at sub-county level for the local leaders to justify to the people how public funds received for a specific financial year were being utilized. In these fora, the local government leadership is expected to demonstrate what resources they have received, and what results have been achieved, in five key priority sectors, namely health, education, water, agriculture, and roads.

Moreover, Building the Field of Community Engagement partners and Babler (2015) argue that every context is different, so evaluation has to be attentive to what people care about and are experiencing in their community. The ultimate purpose of the Baraza platform is to bring together stakeholders to share public information, and to generate debate and dialogue on how to develop collective strategies to improve service delivery at the community level. The uniqueness of each community informs decision-making and defeats the one-size-fits-all evaluation approaches typical of Western evaluation practices and models.

Exercising evaluation in an independent, credible and useful way is essential to realizing the contribution it can make to good governance, including accountability from governments to their citizens, transparency in the use of resources and their results, and learning from experience (Segone et al., 2013:8). The results chain of the Baraza platform, as illustrated in figure 2, strongly reflects the observations made by Segone et al. (2013).

These fora are among the measures instituted by government to stamp out corruption, increase transparency in the management of public funds, improve accountability and enhance the public’s involvement in holding the government to account for service delivery as illustrated in figure 3.

Theoretical framework underpinning Barazas: the CLEAR model

In order to facilitate a deeper reflection on what has worked about the Baraza, this paper employed the “CLEAR” model for citizen participation at the local level (Lowndes and Pratchett, 2010).

Figure 2: The results chain of the Baraza platform. Input: grassroots participation in demanding services. Process: feedback and accountability by district leaders; follow-up on concerns and provision of feedback to citizens. Outcome: citizens empowered and government encouraged by utilizing Barazas as a monitoring tool.


The CLEAR model has been operationalized for international use since 2006 at the request of the Council of Europe’s Steering Committee on Local and Regional Democracy. It presents a framework for understanding public participation and argues that participation is most successful or effective where citizens Can do, Like to, are Enabled to, are Asked to and are Responded to, as illustrated in figure 4. This paper places the model in the context of Barazas, based on the financing, governance, organization, documentation and follow-up of outcomes.

Findings on Barazas against the five factors of the CLEAR model

Can do

“Can do” refers largely to arguments about socio-economic status, in that when people have the appropriate skills and resources, they are more able to participate (Lowndes and Pratchett, 2010). These skills range from the ability and confidence to speak in public or write letters, to the capacity to organize events and encourage others of similar mind to support initiatives.

Barazas are initiated, coordinated and logistically supported by the OPM, but their implementation has been decentralized, with the office of the Resident District Commissioner (RDC) in each respective district taking the lead in local coordination and mobilization, hence reinforcing a sense of attachment and participation. In some instances, however, there have been reports of delayed payment of facilitation funds to the Resident District Commissioners and the coordination teams at the local level, causing delays in the mobilization process and awareness campaigns within the district, and hence posing a great threat to the success of this initiative (Office of the Prime Minister, 2017).

The RDCs and selected moderators have been equipped with additional skills on how to facilitate Barazas and report in a timely way to the relevant authorities (Office of the Prime Minister, 2017). Additionally, as part of the steps towards standardized and formalized procedures for conducting Barazas, a manual was developed in 2013 to guide the implementation of the Baraza program.

Figure 3: Implementation mechanisms and key players in the Baraza platform. Policy makers (government), public service providers (technocrats) and public service users (citizens) share relevant public information and develop corrective strategies to address outstanding challenges and issues that affect the livelihood of communities.

Despite these milestones, a number of studies have pointed out low literacy levels as an impeding factor for the success of Barazas (Initiative for Social and Economic Rights, 2018). Literacy is one of the outcomes of basic education, and is defined as the ability to read with understanding and to write meaningfully in any language. Whereas Barazas are supposed to be conducted in local languages, some local leaders and technocrats cannot easily make presentations or respond to issues in local languages. The 2017 National Governance, Peace and Security Survey showed that, nationally, 66% of the adult population was literate, with males (76%) more literate than females (58%). The literacy rate was lower among rural residents (61%) than urban residents (79%), yet the biggest Baraza participation is in rural areas. One of the objectives of the manual is to support training of trainers and other capacity-building initiatives under the Baraza program (OPM, 2013).

Like to

“Like to” rests on the idea that people’s felt sense of being part of something encourages them to engage. The argument is that if one feels excluded or senses a lack of belonging, then the chances of participation are low (Lowndes and Pratchett, 2010). A sense of trust, connection and linked networks can, according to the social capital argument, enable people to work together and co-operate for participation.

Unlike the former centralized government structure, in which public service officials at the lower local level (sub-county) would implement development plans formulated by the central government at the district level and report back again, the decentralized system, and more importantly the Baraza approach, has given technocrats the demanding task of being directly accountable and responsive to the citizens within their purview (Campenhout et al., 2017). This system has been essential in creating a sense of belonging for programs at the local government level.

Barazas were found to be not only a means for evaluating project implementation, but also a mechanism for identifying priority areas that require further or future action. Citizens are far more likely to attend Barazas if information about them is made available in time on different platforms and citizens are mobilized through multiple means (Initiative for Social and Economic Rights, 2018).

Figure 4: The CLEAR model of citizen participation in local governments
Can do: citizens have the resources and knowledge to participate.
Like to: citizens have a sense of attachment that reinforces participation.
Enabled to: citizens are provided with the opportunity for participation.
Asked to: citizens are mobilized through public agencies and civic channels.
Responded to: citizens see evidence that their views have been considered.


However, reports from districts have pointed out that the Baraza concept is still misconstrued by several people as a political forum at which grievances and sentiments between varying political factions are aired (Office of the Prime Minister, 2017). The government has already responded to this challenge with a revised manual for conducting Barazas.

Enabled to

“Enabled to”, as a factor in participation, is premised on the research observation that most participation is facilitated through groups or organizations (Lowndes and Pratchett, 2010). Collective participation provides continuous reassurance and feedback that the cause of engagement is relevant and that participation is having some value.

Article 38 of the Constitution of the Republic of Uganda provides for citizen participation; the Baraza initiative is thus one of the mechanisms that affords citizens an opportunity to participate in the government service delivery process.

At a given Baraza event, the three stakeholder groups are represented by both district-level and sub-county-level equivalents. The political heads (principals) constitute committees that initiate projects, approve budgets and monitor government programs and service delivery. The technical side is led by the Chief Administrative Officer (CAO), who is head of the civil service at the district and is mandated to oversee the various sectors and each of the sector heads (agriculture, education, health, water and roads). Based on this set-up, community members are enabled to address their matters directly with the principals and technocrats, with a reassurance of positive results.

However, people living in remote, hard-to-reach areas had low participation. Such factors, if not properly addressed, can undermine the success of the initiative and cause it to miss key issues that could be of significance (Office of the Prime Minister, 2017).

Asked to

“Asked to” builds on the finding that mobilization matters. People tend to become engaged more often and more regularly when they are asked to engage. People’s readiness to participate often depends upon whether or not they are approached, and how they are approached (Lowndes and Pratchett, 2010). Mobilization can come from a range of sources, but the most powerful form is when those responsible for a decision ask others to engage with them in making the decisions. Lowndes et al. (2006) observe that the degree of openness of political and managerial systems has a significant effect, with participation increasing where there are a variety of invitations and opportunities.

Barazas are preceded by posters, placed at strategic locations across the sub-county where the Barazas will take place, relaying information about the service delivery engagement and calling upon community members to participate (Campenhout et al., 2017). In order to attract good attendance, they are held in or near public places such as schools, and during community events such as market days.

The agenda of a Baraza event starts with opening remarks by the Resident District Commissioner of the host district, who explains the objectives and process of the engagement, followed by speeches by district and sub-county political heads and by a representative from the Office of the Prime Minister, and, at the core of it, a presentation by the CAO on the performance of the previous financial year. Where necessary, that presentation is further reinforced by submissions from the respective heads of departments. The question-and-answer session constitutes the largest part of the interactive meeting, where citizens are asked to make submissions in response to the presentations, in terms of additional information, questions or complaints.

Largely, the participants raise their issues or contribute to the proceedings through verbal communication and, to some extent, written anonymous notes, not only to cater for individual communication, but also to ensure maximum participation where there are time constraints. The Initiative for Social and Economic Rights (2018) observed that poor and marginalized groups, including youth and women, participated reasonably in the Barazas; indeed, in some cases, women were found to have participated more than men.


Information is power only if you can take action with it. Then, and only then, does it represent knowledge and, consequently, power.

Daniel Burrus

Responded to

“Responded to” captures the idea that for people to participate on a sustainable basis, they have to believe that their involvement is making a difference, that is, achieving positive results. For people to participate, they have to believe that they are going to be listened to and, if not always agreed with, at least be in a position to see their views taken into account (Lowndes and Pratchett, 2010). Responsiveness is about ensuring feedback, which involves explaining how the decision was made and the role of participation in that.

In a number of assessments, service users/community members felt that they were being responded to (Initiative for Social and Economic Rights, 2018). According to Campenhout et al. (2017), stakeholders thought that Barazas are useful for improving service delivery across all sectors and had no difficulty providing examples of changes they felt were a direct result of the Barazas being held. These included projects that were previously dragging being finished or taken up afresh, sub-standard work being redone and, in some instances, priorities being changed to better align with citizens’ needs. A substantial part of these outcomes seemed to derive from the Barazas’ potential to simply fix information asymmetries.

Emerging lessons and recommendations

As Africa, through its Agenda 2063, aspires to development that is people-driven (African Union Commission, 2015), a number of lessons are emerging from such initiatives, Barazas in Uganda being no exception.

Capacity to engage - Barazas have been instrumental in providing accurate information to the citizenry on how government operates. However, low literacy rates remain a big challenge for the effective implementation of the Baraza program.

Participatory planning - This promotes ownership of decisions, effective implementation of actions and sustainability of results. Barazas are premised on the principle of participatory planning right from the village level. This has enabled the central government and local governments to better understand the local needs of the people.

Timing/periodicity - Barazas are planned for only once a year, yet the original directive by the president was for twice a year. He may have envisaged the first session for planning and the second for reporting results or giving feedback. There is a need to move beyond traditional models of governance where citizen input is received just once per election cycle, or sometimes not at all.

Feedback mechanisms - With only one annual opportunity to hold the Baraza in a district, the process of providing feedback remains weak. Enhancing the central government’s responsiveness to citizens’ development demands and public service delivery concerns is critical for the success of the initiative. There is a need to build institutional frameworks that incorporate citizen voices in decision-making processes. There is also a need to develop a corrective strategy aimed at enhancing public accountability, through which the central government’s quick responsiveness can rebuild the government’s standing with its citizens.

Funding - Whereas the original presidential directive was to conduct Barazas at sub-county level, with about 1,400 sub-counties in Uganda and limited resources, the Office of the Prime Minister remains constrained in delivering on this mandate. There is a need to devote more resources to this initiative, which has been key in increasing a sense of citizenship amongst Ugandans. Financial support from development partners could also make a great difference to the effective implementation of Barazas in Uganda.

Institutionalization of downward accountability - Have each sub-county and district plan its own Baraza within a financial year. This is critical in bringing about improvements in public service delivery and transparency in the use of public resources. It will instil a home-grown culture of independent citizen monitoring and constructive criticism that sustains the wellbeing of the people.

Assessment of the Baraza initiative - The Baraza initiative has now been implemented for almost a decade, but so far only one comprehensive assessment has been conducted. It is important that the Office of the Prime Minister and the districts themselves engage in continuous assessment of the initiative, to take stock of what has worked and what has not and to make the necessary adjustments to the conceptualization and implementation of the initiative. Even more important is an extended study assessing the Baraza process in terms of its effectiveness in influencing decision-making processes.

Conclusion

Evaluation is a judgement of value or worth and provides information to support decision-making (Sukai 2013:77). In development evaluation, it supports accountability for the effective use of resources, lessons for improvement, knowledge sharing, and the distillation of this knowledge for use.

Barazas are good accountability platforms or mechanisms and can thus be very instrumental in enhancing citizen-based monitoring and improving local public service delivery systems.

Barazas have been at the centre of sharing information and educating the masses about their role in holding the government accountable, and ultimately of tapping their knowledge of community needs-based planning and service delivery, which is core to advancing the learning function of evaluations.

Barazas are instrumental in contributing to the overall aspirations of Agenda 2063, “the Africa We Want”, and other African countries can learn from them despite the gaps identified. It is noted that SDG 16 addresses three interrelated topics, namely “peace”, “inclusion” and “institutions”. “Inclusion” and “institutions” are also highly relevant for the achievement of the other SDGs. These two topics are the core drivers of Baraza platforms in Uganda and, more importantly, are at the centre of advancing Made in Africa Evaluation approaches.

Knowledge is power. Information is liberating. Education is the premise of progress, in every society, in every family

Kofi Annan


References

African Union Commission (2015), “Agenda 2063: The Africa We Want”. Final Edition, Popular Version, April 2015. ISBN: 978-92-95104-23-5.

Building the Field of Community Engagement partners and Babler, T. (2015), “Evaluation and Community Engagement”.

Campenhout, B. V., Bizimungu, E., Smart, J. and Kabunga, N. (2017), “Impact pathways of a participatory local governance initiative in Uganda: A qualitative exploration”. Development Strategy and Governance Division, International Food Policy Research Institute. IFPRI Discussion Paper 01688.


Cloete, F. (2016), “Developing an Africa-rooted programme evaluation approach”. African Journal of Public Affairs, Vol. 9, No. 4, December 2016, pp. 55-70.

Initiative for Social and Economic Rights (2018), “An assessment of the role and effectiveness of Barazas in the decision-making process”. Kampala, Uganda.

Lowndes, V. & Pratchett, L. (2010), “CLEAR: Understanding citizen participation in local government-And how to make it work better. Local Governance Research Unit, De Montfort University, Leicester, United Kingdom.

Mindzie, M. A. (2015), “Citizen participation and the promotion of democratic governance in Africa”. GREAT Insights Magazine, Volume 4, Issue 3. April/May 2015.

Office of the Prime Minister (2013), “The Baraza Implementation Manual”. Kampala, Uganda.

Office of the Prime Minister (2017), “Kabale District Community Based Monitoring Forum (Baraza Initiative) for Financial Year 2016/17: Final Report”. Kampala, Uganda.

Office of the Prime Minister (2017), “Namayingo District Community Based Monitoring Forum (Baraza Initiative) for Financial Year 2016/17: Final Report”. Kampala, Uganda.

Pattie, C., Seyd, P. and Whiteley, P. (2004), “Citizenship in Britain”. Cambridge: Cambridge University Press.

Segone, M., Heider, C., Oksanen, R., Silva, D. S. and Sanz, B. (2013), “Towards a Shared Framework for National Evaluation Capacity Development”. eVALUation Matters, A quarterly knowledge publication of the Operations Evaluation Department of the African Development Bank Group, September 2013, Volume 2, Number 3.

Sukai, P. J. (2013), “Rebirth, Restoration, Reclamation, and Responsibilities of the Evaluation Function of Africa”. eVALUation Matters, A quarterly knowledge publication of the Operations Evaluation Department of the African Development Bank Group, September 2013, Volume 2, Number 3.

The Constitution of the Republic of Uganda (1995), Kampala, Uganda.

Uganda Bureau of Statistics (2017), “National Governance, Peace and Security Survey”. Kampala, Uganda.



Author’s profile

A social worker by profession, Josephine Watera is the Head of the Monitoring and Evaluation Division in the Parliament of Uganda, with over twelve years of working experience in program design, implementation, performance management, community-driven development, and action learning, including planning, monitoring and evaluation, research and advancing evidence use in decision-making.

Josephine holds three master’s degrees: in Monitoring and Evaluation; Business Administration; and Social Sector Planning and Management.

Josephine is the Sub-Saharan Africa representative on the Board of the International Development Evaluation Association (IDEAS), an Advisory Committee Member of the Africa Center for Evidence (ACE) at the University of Johannesburg, and the Secretary-General of the Uganda Evaluation Association (UEA). She is part of the team currently reviewing the African Evaluation Guidelines, led by the African Evaluation Association.

As an author, presenter and speaker, she has made great contributions to the field of monitoring and evaluation. She is heavily involved in delivering training in monitoring and evaluation; in the recent past, she has engaged as a mentor in the piloted EvalYouth International Youth Mentorship Program by EvalPartners, and in the Development Monitoring and Evaluation for Peace (DPME) Mentorship Program 2018 by Search for Common Ground, both aimed at developing the monitoring and evaluation capacity of young and emerging evaluators.

Josephine’s work has been internationally recognized: she was named the most promising evaluator from developing countries at the 2017 American Evaluation Association Conference.


Application of Contextual Analysis to Evaluation Methodologies in Africa: The Case of Contribution Analysis

Context refers to the combination of factors accompanying the design, implementation and evaluation of an intervention that might influence its results. The challenge evaluators have is that interventions implemented in an identical way in different locations will often have different development outcomes. Failure to consider context in evaluations may result in wrong estimations of intervention impact. Conducting contextual analysis increases the quality and strength of evidence and thus provides useful evaluative information to policy makers in Africa. This article explains how evaluators can conduct contextual analysis in the various steps of the Contribution Analysis Evaluation Methodology to bring validity to the “Made in Africa Evaluation” practice. Theory-based approaches seek to identify and assess any significant influencing factors (i.e., contextual factors) that may also play a role in the causal chain and thus affect the contribution claim. The article posits that evaluators need to recognize and negotiate these contextual challenges effectively to ensure quality evaluation results. Analysis of contextual challenges will act as a practical solution to understanding evaluation findings in Africa and thus help in generating useful evaluative information.


Andrew Anguko, IDEV, African Development Bank

Introduction

There are a number of contextual factors (economic, political, institutional, environmental, security and socio-psychological factors, and the socio-cultural environment) that affect the implementation and the outcomes of an intervention. It is important to consider the influence of contextual variables in African evaluation practice in order to increase evaluation use, give voice to local issues, and explain program effects. Context explains what works best for whom and under what conditions. In addition, it can help to identify which evaluation approach provides the highest quality and most actionable evidence, and gives more direction on the replication and generalizability of findings (Mark, 2001).

If we fail to consider context in Africa, where unique contextual factors influence implementation and outcomes, we risk not identifying the right questions to frame the evaluation, ignoring key stakeholders who are potentially strong users of evaluation, and misinterpreting stakeholder priorities or even program goals. There is also a possibility of failing to describe the evaluand appropriately, failing to understand the evaluand’s outcomes because the evaluator is unable to notice nuances or subtleties of the culture, and reporting results using means that are only accessible to the dominant culture or those in positions of power (Rog, 2012).

Theory-based evaluation methodologies such as contribution analysis are particularly useful for understanding the influence of context on the implementation and achievement of outcomes (Mayne, 2008). This is because contextual variables exert their influence at various stages of the Theory of Change.

The peculiarity of the African context

The African continent is home to countries with mostly unfavourable contextual variables. Some of these contextual variables, such as political instability, are generally unique to Africa and have the potential to impede the implementation of interventions and negatively influence development outcomes.

Gender inequalities persist in most African countries, and data on gender outcomes show a complex canvas of gains and missed opportunities in women’s social and economic development and in their political and legal status (World Development Report, 2012).

Key Messages

❚ Context explains what works best for whom and under what conditions.

❚ In Africa, where unique contextual factors influence implementation and outcomes, we risk not identifying the right questions for the evaluation, ignoring key stakeholders and misinterpreting stakeholder priorities.

❚ Contextual analysis may explain why two identical interventions may have very different development outcomes.


Disparities in voice and agency persist; for example, fewer than one in five members of parliament are women, and in many countries women suffer pervasive legal discrimination (World Economic Forum, 2018).

In Africa, which contributes relatively little to greenhouse gas emissions and where rain-fed agriculture is the backbone of most economies, the consequences of frequent, severe droughts and flooding, unlike in other continents, are serious enough to jeopardize development efforts and to undermine the successful implementation of interventions (Agbola and Fayiga, 2016).

Africa has tried to pursue economic, social, spatial and political inclusion, but cannot compare to Europe and America in these regards. Despite increases in GDP, African growth has been narrowly concentrated in a few sectors and geographic areas (World Bank, 2013a). These inequalities may potentially influence the success of development programs.

Although fragility varies across countries in Africa, the major features remain the same: a legacy of conflict, violence and insecurity, weak institutions, poor economic and administrative governance, and an inability to deliver public goods adequately, efficiently or equitably (AfDB, 2009). Drivers of fragility include economic, social, political and environmental dimensions, but all too often, demands for inclusion and equity underlie these drivers. As mentioned earlier, the potential threat of these factors to the successful implementation of interventions and subsequent outcomes is generally higher in Africa than in Europe and America.

Analysis of contextual variables

An analysis of contextual factors can often help explain why two identical interventions may have very different development outcomes in Africa as compared to Europe or America, and shed more light on the success or otherwise of interventions.

Contextual analysis may be conducted through a review of the literature, the opinions of stakeholders, a rapid assessment study, or a needs assessment and content analysis during which the target group is consulted (Conner, Fitzpatrick & Rog, 2012). It involves asking the broad question: “What demographic, economic, community, historical, cultural, political or other sociocultural factors could have influenced the effort’s implementation and outcomes?” Contextual analysis is achieved by asking specific questions in the areas indicated below (a minimal data sketch follows the list):

The economic climate: Are economic conditions getting better, remaining constant or getting worse? This will, for example, influence families’ decisions about whether to participate in an intervention that requires payments or promotes present or future income-generating activities.

The political climate: Is the local political climate likely to support or undermine the implementation of the intervention?

Organizational and institutional factors: To what extent do local organizations (government, NGOs, and private sector) support or hinder the intervention?

Natural environmental factors: In what ways do environmental factors influence the intervention?

The characteristics of the communities affected by the intervention: How do social, cultural, economic and other characteristics influence how different groups respond to the intervention or are affected by it? How might the needs, problems, constraints, and expectations of the different groups affect the intervention?
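One way to keep such an analysis organized is to capture it as structured data. The following minimal Python sketch is purely illustrative: the domain names follow the question areas above, while the intervention name and the example finding are invented assumptions, not drawn from the article.

```python
# Hypothetical sketch: recording a contextual analysis as structured data.
# Domain names follow the question areas above; the example entry is invented.

CONTEXT_DOMAINS = {
    "economic_climate": "Are conditions improving, constant or worsening?",
    "political_climate": "Will the political climate support or undermine implementation?",
    "organizational_factors": "Do local organizations support or hinder the intervention?",
    "environmental_factors": "How do environmental factors influence the intervention?",
    "community_characteristics": "How do group characteristics shape responses and needs?",
}

def new_context_record(intervention: str) -> dict:
    """Create an empty contextual-analysis record for one intervention."""
    return {"intervention": intervention,
            "findings": {domain: [] for domain in CONTEXT_DOMAINS}}

record = new_context_record("rural water supply project")  # hypothetical intervention
record["findings"]["economic_climate"].append(
    "Household incomes falling; user fees may deter participation.")
print(record["findings"]["economic_climate"])
```

A record of this kind keeps each finding tied to the domain and guiding question that produced it, which makes the later analysis steps easier to audit.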

Application of contextual analysis to contribution analysis evaluation methodology in Africa

The notion of a ‘contributory’ cause recognizes that effects/outcomes are produced by several causes at the same time, none of which may be either necessary or sufficient for impact. In addition, and particularly in Africa, contextual variables affect the implementation and outcomes of interventions, which makes contextual analysis a prerequisite in African evaluation practice. Rather than trying to prove attribution (that A caused B), contribution analysis seeks to identify the contribution that A made to B, while also giving credit to other influencing factors. The credibility of its findings emerges from the care with which a theory of change is described, tested and revised over multiple iterations, and the rigour with which the evaluation team identifies, tests and validates contribution claims. This makes it a good fit for complex policy change initiatives, such as those evaluated by Independent Development Evaluation at the African Development Bank. It is suitable where the policy initiative being evaluated has multiple components and multiple partners, is implemented within a dynamic setting, and is expected to produce different outcomes. The following are the proposed six steps for conducting contribution analysis, and how contextual analysis may be integrated at each step.

Step 1. Set out the cause-effect issue to be addressed

❚ What is the outcome and what are the possible contributory factors?

❚ Acknowledge the attribution problem. At the outset, it should be acknowledged that there are legitimate questions about the extent to which the program has brought about the results observed.

❚ Determine the specific cause–effect question being addressed.

Application of Contextual Analysis to Evaluation Methodologies in Africa: The Case of Contribution Analysis 25

Page 28: A Quarterly Knowledge Publication on Development Evaluationidev.afdb.org/sites/default/files/documents/files... · Evaluation Association (AfrEA) made it central to the Association’s

eVALUation Matters Fourth Quarter 2019

Page 29: A Quarterly Knowledge Publication on Development Evaluationidev.afdb.org/sites/default/files/documents/files... · Evaluation Association (AfrEA) made it central to the Association’s

eVALUation Matters Fourth Quarter 2019

❚ What level of confidence is needed? What is to be done with the findings? What kinds of decisions will be based on the findings? The evidence sought needs to fit the purpose.

❚ Explore the type of contribution expected. It is worth exploring the nature and extent of the contribution expected from the program. This means asking questions such as: what do we know about the nature and extent of the contribution expected? What would show that the program made an important contribution? What would show that the program 'made a difference'? What kind of evidence would we (or the funders or other stakeholders) accept?

❚ Assess the plausibility of the expected contribution in relation to the size of the program. Is the expected contribution of the program plausible? Assessing this means asking questions such as: Is the problem being addressed well understood? Are there baseline data?

Contextual analysis: What are the key contextual factors influencing the outcome, and what are the explanatory variables? In determining the nature of the expected contribution from the program, the other factors that will influence the outcomes also need to be identified and explored, and their significance judged. Determine through contextual analysis the contextual variables likely to influence implementation and outcomes, and try to understand the pathways by which they could do so. Then collect relevant data on these variables and analyze them to establish the extent to which they affected the outcome.
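As a toy illustration of that analysis step, the following Python sketch checks how strongly one contextual variable moves with one outcome; the district-level figures are entirely hypothetical, and a real evaluation would model several variables and confounders together.

```python
# Toy analysis of one contextual variable against one outcome, with invented
# district-level data. Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

rainfall_shortfall = [0.10, 0.35, 0.20, 0.50, 0.05]    # hypothetical shares
crop_yield_change = [0.02, -0.15, -0.04, -0.30, 0.05]  # hypothetical changes

r = correlation(rainfall_shortfall, crop_yield_change)
print(f"Correlation between contextual variable and outcome: {r:.2f}")
```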

Step 2. Develop the postulated theory of change and the inherent risks

The key tools of contribution analysis are theories of change and results chains. With these tools, the contribution story can be built. Theories of change (Weiss, 1997) explain how the program is expected to bring about the desired results: the outputs and the subsequent chain of outcomes and impacts (the impact pathways of Douthwaite et al., 2007). But a theory of change needs to spell out its underlying assumptions, for example to explain what conditions must exist for A to lead to B, and what key risks threaten those conditions. Leeuw (2003) discusses different ways of eliciting and illustrating these behind-the-scenes assumptions.

❚ What is the postulated theory of change for the intervention strategy?

❚ List the assumptions underlying the theory of change.

❚ Include consideration of other factors that may influence outcomes.

❚ What are the expected contributions of the different players, and are these understood? Having established the range of measures required, this helps to identify the different players and their expected contributions to the strategy.

❚ Are these expected contributions plausible and feasible to deliver in practice?

❚ What are the main strengths and weaknesses in the postulated theory of change?

❚ Set out the initial ‘contribution story’: produce a short written version of the postulated and agreed theory of change, setting out partner contributions.

Contextual analysis: What are the structure, complexity and dynamics of the intervention? How dynamic and evolving an intervention is, how complex its theory of change is, and the extent to which it blurs with its setting all have implications for how you design the evaluation. Interventions that blur with their broader environment make it difficult to attribute change to the intervention, because of the number of confounding externalities; it is hard to trace the exact causal mechanism (Rog, 2012).

What are the contextual variables likely to influence the theory of change? At what stages of the theory does each identified contextual variable have an effect? What are the possible magnitudes of these effects? It is important to gather and analyze data to check the extent to which contextual factors could affect the activities, the generation of relevant outputs, intermediate outcomes and final outcomes. You also need to check whether contextual variables could affect the mechanisms at each stage of the theory of change.
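One could record such a postulated theory of change as structured data so that assumptions, risks and contextual variables stay attached to the link they concern. The Python sketch below is illustrative only: the links, assumptions and contextual variables are hypothetical examples, not taken from any actual evaluation.

```python
# Hypothetical sketch of a postulated theory of change as data: each link
# carries its assumptions, risks and the contextual variables acting on it.
from dataclasses import dataclass, field

@dataclass
class Link:
    cause: str
    effect: str
    assumptions: list = field(default_factory=list)
    risks: list = field(default_factory=list)
    contextual_variables: list = field(default_factory=list)

theory_of_change = [
    Link("training delivered", "farmers adopt practice",
         assumptions=["training is relevant and timely"],
         risks=["low attendance during planting season"],
         contextual_variables=["insecurity in the district", "access to credit"]),
    Link("farmers adopt practice", "yields increase",
         assumptions=["inputs are available"],
         contextual_variables=["rainfall variability"]),
]

for link in theory_of_change:
    print(f"{link.cause} -> {link.effect} | context: {link.contextual_variables}")
```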

Step 3: Gather the existing evidence on the theory of change:

Evidence to validate the theory of change is needed in three areas: observed results, assumptions about the theory of change, and other influencing factors. Gathering evidence can be an iterative process, first gathering and assembling all readily available material, leaving more exhaustive investigation until later.

Evidence on the occurrence or not of key results (outputs, and immediate, intermediate and final outcomes) is a first step for analysing the contribution the program made to those results. Additionally, there must be evidence that the program was implemented as planned.

Evidence is also needed to demonstrate that the various assumptions in the theory of change are valid, or at least reasonably so. Are there research findings that support the assumptions?

Finally, there is a need to examine other significant factors that may have an influence. Possible sources of information on these are other evaluations, research, and commentary. What is needed is some idea of how influential these other factors may be.

Contextual analysis: The evaluator needs to look at the setting of the intervention; for example, a community change initiative aims to change the community in which it sits. This has implications for the evaluation: if an intervention is being rolled out in different communities, the evaluation can examine how it is adapted in those communities, whether the original theory of change remains intact, and what factors in the context influenced implementation and outcomes. The evaluator needs to understand the ways in which the broader environment affects the ability of an intervention to achieve outcomes; this is critical to understanding the generalizability of the evaluation findings to other contexts or situations.


Step 4: Assemble and assess the contribution story, and challenges to it:

The contribution story, as developed so far, can now be assembled and assessed critically. Questions to ask at this stage are:

❚ Which links in the results chain are strong (good evidence available, strong logic, low risk, and/or wide acceptance) and which are weak (little evidence available, weak logic, high risk, and/or little agreement among stakeholders)?

❚ How credible is the story overall? Does the pattern of results and links validate the results chain?

❚ Do stakeholders agree with the story—given the available evidence, do they agree that the program has made an important contribution (or not) to the observed results?

❚ Where are the main weaknesses in the story? For example: Is it clear what results have been achieved? Are key assumptions validated? Are the impacts of other influencing factors clearly understood? Any weaknesses point to where additional data or information would be useful.

Contextual analysis: The evaluator needs to look at each link in the theory of change, identify which contextual variables may affect that link and to what extent, and collect data to validate the possible magnitude of each variable’s effect on the links in the theory.
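As a minimal illustration of this assessment, the following Python sketch scores each link on the four criteria named above and flags weak links where additional data would be useful; all link names and scores are hypothetical.

```python
# Hypothetical sketch of the Step 4 assessment: score each link on the four
# criteria named above (1 = weak, 5 = strong) and flag the weak links.

CRITERIA = ("evidence", "logic", "low_risk", "agreement")

links = {  # scores are invented for illustration
    "training delivered -> farmers adopt practice":
        {"evidence": 4, "logic": 5, "low_risk": 3, "agreement": 4},
    "farmers adopt practice -> yields increase":
        {"evidence": 2, "logic": 4, "low_risk": 2, "agreement": 3},
}

def weak_links(links: dict, threshold: float = 3.0) -> list:
    """Return the links whose average criterion score falls below threshold."""
    return [name for name, scores in links.items()
            if sum(scores[c] for c in CRITERIA) / len(CRITERIA) < threshold]

print("Links where additional data would be useful:", weak_links(links))
```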

Step 5: Seek out additional evidence:

Identify what new data are needed. Based on the assessment of the robustness of the contribution story in Step 4, the information needed to address challenges to its credibility can now be identified: for example, evidence regarding observed results, the strength of certain assumptions, and/or the roles of other influencing factors.

❚ Adjust the theory of change. It may be useful at this point to review and update the theory of change, or to examine more closely certain elements of the theory. To do this, the elements of the theory may need to be disaggregated to understand them in detail.

❚ Gather more evidence. Having identified where more evidence is needed, it can then be gathered. Multiple approaches to assessing performance, such as triangulation, are now generally recognized as useful and important in building credibility.

Some standard approaches to gathering additional evidence for contribution analysis (Mayne, 2001) include surveys, syntheses, case studies, etc.

Contextual analysis: The methods the evaluator uses to gather evidence depend on the available budget, time and data, since evaluations often come with constrained budgets, timelines and data. The evaluator has to decide which of the many variables that could be measured will actually be measured. If the timeline for the evaluation is shorter than the period over which outcomes can reasonably be expected to occur, you may need to design an evaluation that checks whether you are on track to achieve those outcomes down the road.
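As a toy illustration of this prioritization under constraints, the following Python sketch makes a simple greedy selection of variables to measure within a fixed budget; the variable names, costs and importance weights are all invented.

```python
# Hypothetical sketch of prioritizing measurement under a fixed budget:
# a greedy choice by importance per unit cost. All entries are invented.

candidates = [  # (variable, judged importance 0-1, measurement cost)
    ("rainfall records", 0.9, 200),
    ("household income survey", 0.8, 900),
    ("local conflict incidents", 0.6, 300),
    ("market price series", 0.4, 150),
]

def select_variables(candidates: list, budget: float) -> list:
    """Pick variables in order of importance per unit cost until budget runs out."""
    chosen, spent = [], 0.0
    for name, importance, cost in sorted(
            candidates, key=lambda c: c[1] / c[2], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(select_variables(candidates, budget=700))
```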

Step 6: Revise and strengthen the contribution story:

New evidence will build a more credible contribution story, buttressing the weaker parts of the earlier version or suggesting modifications to the theory of change. It is unlikely that the revised story will be foolproof, but it will be stronger and more credible.

Contextual analysis: The evaluator needs to understand who the decision-makers are, the types of decisions they need to make, the standards of rigour they expect, the level of confidence needed to make those decisions, and the other structural and cultural factors that influence their behaviour. Understanding the decision-making context allows the evaluator to design an evaluation that is more likely to be used.


References

AfDB (African Development Bank) (2009), African Development Report 2008/2009. Tunis: AfDB.

Agbola, P. and Fayiga, A.O. (2016), Effects of climate change on agricultural production and rural livelihood in Nigeria. Journal of Agricultural Research and Development 15(1): 71-82.

Conner, R.F., Fitzpatrick, J.L., & Rog, D.J. (2012), A first step forward: Context assessment. New Directions for Evaluation 135: 89-105.

Douthwaite, B., Schulz, S., Olanrewaju, A.S. and Ellis-Jones, J. (2007), Impact pathway evaluation of an integrated Striga hermonthica control project in Northern Nigeria. Agricultural Systems 92: 201-222.

Leeuw, F.L. (2003), Reconstructing program theories: Methods available and problems to be solved. American Journal of Evaluation 24(1): 5-20.

Mayne, J. (2001), Addressing attribution through contribution analysis: using performance measures sensibly. Canadian Journal of Program Evaluation 16: 1-24.

Mayne, J. (2008), Contribution analysis: An approach to exploring cause and effect. ILAC Brief 16.

Rog, D.J. (2012), When background becomes foreground: Toward context-sensitive evaluation practice. New Directions for Evaluation. 135: 25-40.

Weiss, C.H. (1997), Theory-based evaluation: past, present, and future. New Directions for Evaluation 76 (Winter): 41-55.

World Economic Forum (2018), The Global Gender Gap Report 2018. Geneva: World Economic Forum.

World Bank (2012), World Development Report 2012: Gender Equality and Development. Washington DC: World Bank.

World Bank (2013a), Inclusion Matters – The Foundation for Shared Prosperity. Washington DC: World Bank.

World Bank and Economic Commission for Africa (1994), Sub–Saharan Africa Transport Policy Program.



Conclusion

Africa has a unique context and is beset by variables such as a fragile political environment, climate change, weak institutions of governance and insecurity, which significantly affect the application of methodological approaches to evaluation and the outcomes obtained. Attention to these unique variables in evaluation in Africa generates useful evaluative information. Theory-based evaluation approaches offer an opportunity for evaluators to conduct contextual analysis at the beginning of an evaluation and at the various stages of the theory of change. Ultimately, contextual analysis helps evaluators to understand where, how, why and for whom interventions worked or did not work, and thus improves the quality of evaluations and the learning distilled from them, particularly in African settings.


Author’s profile

Andrew Anguko is currently Chief Quality and Methods Advisor with the African Development Bank, where he works with colleagues on methodological approaches and quality assurance of tools and evaluation products in the Bank’s Independent Development Evaluation. Andrew previously worked as Monitoring and Evaluation (M&E) Manager for the USAID-funded Kenya water, sanitation and hygiene project in Nairobi. He also worked as Global Impact Evaluation Adviser at Oxfam GB, where he provided specialist advice on tools, methods and processes for undertaking rigorous impact evaluation of Oxfam’s projects. Andrew formerly served as Senior Monitoring and Evaluation Adviser at Danya International in Kenya, as Monitoring and Evaluation Specialist at Malaria Consortium in Kampala, Uganda, and as Senior Data Analyst at the U.S. Centers for Disease Control and Prevention (CDC). Andrew graduated with an M.Sc. in Biostatistics and Epidemiology from the University of the Witwatersrand in Johannesburg, South Africa, and an M.Sc. in Medical Statistics from the University of Nairobi, Kenya. He also holds a B.Sc. in Forestry and a Postgraduate Diploma in Education from Moi University in Kenya.


Orality and Participatory Approach for a “Made in Africa” Evaluation

Over the past two decades, Africa has begun a process of development with the objective of creating better living conditions for its people. The inclusive nature of development policies or programs is illustrated by the fact that they take into account stakeholders at all levels, including populations (potential beneficiaries), in their design, implementation and evaluation. For the specific case of evaluation, it seems that Africa, despite having its historical literature essentially recorded orally, has much to offer in adapting an evaluation theory and practice currently dominated by the Western worldview to its own context and needs. Far from being anachronistic, the oral history of the continent could provide a solid foundation for promoting an evaluation approach Made in Africa.


Yao Roger Modeste Apahou, Ivorian Network of Emerging Evaluators

Key Messages

❚ Orality is a window of opportunity to explore for the promotion of a Made in Africa Evaluation.

❚ The participatory approach is a vector of democracy which enables the freedom and full participation of stakeholders in development projects.

❚ An evaluation method that matches the experience of African populations is the substrate for lasting change in the lives of those populations, and hence for the development of African countries.

Introduction

For many decades, Africa has been an experimental field for a multitude of development policies, plans and programs. However, their implementation across African countries has so far yielded mixed, if not insignificant, results. These results stem from a lack of ownership of strategic frameworks by project populations, in addition to their very weak participation in the development of programs and in the evaluation processes that accompany them. It is therefore necessary to find and carefully adapt evaluation approaches that are appropriate to the African context and that incorporate genuine participation by the affected populations.

Africa is unique due to its rich oral history, or orality, with more than 3,000 distinct ethnic groups and 1,500 languages spoken in 54 countries. Despite modernism, which is fueled by globalization on a larger scale, the oral conveyance of history still occupies a prominent place in the transmission of values and knowledge in Africa.

This "in forma" basis for a "Made in Africa Evaluation” launched the creation of the African Evaluation Association (Association africaine d’évaluation or,

AfrEA) in 1999 and was reiterated at the Bellagio conference in September 2012. This new concept of "Made in Africa Evaluation" aims to combine international evaluation methods with African political, economic, social and cultural realities and values. This being the case, it is not a question of contrasting a typically African view with the international standards of evaluation or even less of "indigenizing" the practice of evaluation, as pointed out by F. Cloete.

It is worth recalling that evaluation is not new in Africa. In many traditions and cultures, the transition from one stage of life to another is furnished with several activities aimed at assessing the potential of possible candidates. Through initiation rites and esoteric activities, members of traditional African communities are tested for recognition or transition to a higher social level.

Using this customary approach to communication within African populations could help to drive better monitoring and evaluation processes. In this respect, the role of orality is not just marginal or additive; it can be a “royal path” to producing an evaluation model adapted to the unique specificities of Africa.

This article aims to highlight the theoretical aspects underlying a conceptualization of orality, to address its relationship with linguistic diversity, and to present an anthropocentric evaluation approach that combines orality and video.

Orality, a difficult but unifying concept

What do we mean by orality?

According to Élisabeth Lhote, orality is first and foremost the enactment of a long psycho-socio-linguistic-physiological process, accompanied by sound emission and/or reception using the vocal organs and hearing. It is, in essence, the process of conveying thought in a physical or physiological form, which every individual has learned through a language. Putting thought into action presupposes a preparatory phase during which a certain number of factors are solicited.

Joseph Mamboungou, for his part, indicates that orality can only be understood in terms of the relation that the individual maintains with language, him or herself, others and the whole of the outside world.

Jean Derive, in his article entitled "Typology and functions of some oral genres of Manding in terms of the criterion of spatiality", perfectly illustrates these different relationships in the field of orality. According to him, the Mandingo civilization highlights three types of spaces where oral genres occur: a private space, a public space, and a contingent space - that is, a space that defines a particular type of activity exercised in a place to which the oral genre is intrinsically linked.

The private space

This is defined as that of the "family area" (lù), a space where shelter is provided to an extended family whose core is a home with a head of the family.

It is in this space that women traditionally cook; it is also the place for meetings (especially in the dry season) to entertain, address various subjects and practice oral literature through “stories” (nsíirin or ntàlen), “riddles” (ntàlenkɔrɔbɔ), historical accounts, myths and allegories, etc. Friends and neighbors can attend as well.

The public space

For cultural events, including the expression of oral literature, the Manding have a public ceremonial space called fɛrɛ that can be used at both the neighborhood and the community level. This usually entails singing, sometimes accompanied by dance, and the reciting of “epics” (fàsa) and of ceremonial and melodic songs involving not only the family but the entire community.

The contingent space

Some genres of Mandingo oral literature are intrinsically linked to activities that take place in a specific space, with any works resulting from these activities being executed in the same place. So, for example, “agricultural songs” (sɛne donkili) – often punctuated with stories and riddles – are performed in the fields during key seasons (sowing, weeding, etc.). Similarly, several types of “songs or stories of hunters” (dònsodɔnkili, dònsomaana) are performed in the bush or in ritualised places, while others can also be performed on the fɛrɛ.

These examples demonstrate how, for the Mandingo, literary speech is well controlled in space (as in time) and the modes of celebration of oral literature are diversified, yet standardized. Moreover, the places for expression depend on the type of oral literature with different literary genres organized differently.

Using linguistic diversity to develop a participatory approach

The foregoing clearly shows the unifying character of orality in African societies through linguistic diversity.


The linguistic diversity of Africa is a considerable asset that must be taken into account in the design and implementation of development programs. Since languages are essentially the vehicle of cultures, including idiosyncrasies and socio-anthropological singularities, the multiplicity of dialects allows an accumulation of diverse endogenous elements that actors in the evaluation process can exploit for adaptation purposes.

Applauding oral literature, Léopold Sédar Senghor wrote: “It is the chance of Africa to have disdained writing, even when she did not ignore it ... Because writing impoverishes reality, it crystallizes it into rigid categories and fixes it, when reality is itself alive, fluid and without contours.”

The fluidity of oral language opens windows of opportunities for increased participation by populations in the evaluation of projects for which they are the beneficiaries.

The concept of participation is applied in many ways and covers different practical fields, but it can also directly involve a community. Participation is intrinsically linked to the exercise of democracy, to freedom of expression and association, and to opportunities for a community to communicate through explicit signs. Similarly, participation has a strong correlation with accountability: in all projects, including an evaluation project, the responsibility of stakeholders comes through their effective participation, which requires the clarification of their roles and duties as well as of their contributions to the project.

These contributions can come in many forms: i) devoting time to the project; ii) providing services; iii) providing equipment or any other input needed for the project; etc.

These contributions, however modest, provide a sense of ownership over evaluation activities. Without this, the project may always be perceived as the initiative of “others”. Indeed, evaluation is inseparable from a certain degree of democracy. The culture of evaluation must therefore be profound enough to have a lasting impact on the beneficiaries of development projects or programs.

Most Significant Change: an anthropocentric evaluation technique

An evaluation technique little known in Africa but already used on other continents can easily apply to this situation: the Most Significant Change (MSC) technique, developed by Rick Davies and Jess Dart.

MSC provides a form of participatory monitoring and evaluation, as it promotes the participation of a large number of stakeholders in a project - all of whom play a key role in the choice of the changes to be made, as well as in the analysis of the data. In addition, this alternative evaluation method seeks to highlight peculiarities and divergences on points of view rather than to simply synthesize the information. It can be used as an alternative to the formulation of indicators, or as a complementary solution.

This method is based on a “soft”1 systemic approach involving structured interactions between stakeholders. In other words, instead of predefined indicators of progress, it is based on “stories from the ground” to “make sense of practical reality and the effects that follow.”2

The MSC method also enables beneficiaries, including the most vulnerable, to be heard and encourages collective learning.

One of the advantages of the MSC evaluation method is that it provides data on impact, from which the results obtained can subsequently be used to judge the performance of the program as a whole.


Overall, the MSC method is about collecting stories of significant change in the field and then systematically selecting the most significant ones through panels of stakeholders or team members, who are initially involved through the investigation of project impact (a minimal sketch of this selection step follows the list). The MSC method has been judged very useful by many organisations, for several reasons:

1. It is an effective way to identify unexpected changes;

2. It makes for the clear identification of an organization’s values and the determination of the most important ones; in addition, by submitting these values to analysis, it is easy to identify the most significant changes at one level or another of the organization;

3. It provides a participatory form of evaluation that does not require any particular professional competence and, compared with other forms of evaluation, can more readily cross cultures;

4. It encourages the analysis and collection of data in a collegial way, as participants must justify to their colleagues why they think a given change is more important than another;

5. It can help build analytical capacity and conceptualize the expected effects of a project;

6. It can offer a very detailed picture of what is happening, rather than an excessively simplified image in which organizational, social and economic evolutions are reduced to a single figure;

7. It can be used to monitor and evaluate bottom-up initiatives, in the absence of predefined results, without the need for gap analysis.
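As a toy illustration of the selection step, the following Python sketch tallies panel votes for the most significant story while keeping each member’s reason on record; the panel members, stories and reasons are all invented.

```python
# Hypothetical sketch of the MSC selection step: panel members each vote for
# the story they judge most significant and record a reason. All names,
# stories and reasons are invented.
from collections import Counter

votes = {
    "panel member A": ("Cynthia returns to school", "shows lasting change in attitudes"),
    "panel member B": ("village mediation committee", "was unexpected and community-driven"),
    "panel member C": ("Cynthia returns to school", "has a clear link to the program"),
}

tally = Counter(story for story, _reason in votes.values())
selected, count = tally.most_common(1)[0]
print(f"Selected story: {selected} ({count} of {len(votes)} votes)")
for member, (story, reason) in votes.items():
    print(f"  {member} chose '{story}' because it {reason}")
```

Keeping the justifications alongside the tally matters in MSC, since the reasons given for a choice carry as much evaluative information as the choice itself.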

Participatory video and the Most Significant Change

MSC is readily adaptable to media tools, such as video, and can produce wonderfully unexpected effects. Video is a very interesting means of communication due to its versatility and the evocative richness of animated images.

According to Fabio La Rocca, an image must be thought of as a text, or in other words as a mechanism capable of forming meanings whose functioning and effects are describable. It is a means of expression, communication and demonstration, a tool that brings together the three fundamental principles of an analysis: description, research of contexts and interpretation.

Figure 1: Story collection process, Learning for Peace Program (UNICEF), Haut-Sassandra region, Côte d’Ivoire. Stories (here, Cynthia and Zozoro) are rated against five criteria: ability (leader, model, responsible, courageous); competences (mediator, facilitator, sensitizer); establishing relationships (unifying, communicator); dynamics of action (influence, responsible activism); and personality (non-violent, forgiving, humble, kind, tolerant).



The uses and objectives of video can vary widely. For example, it can serve as a “notebook” to describe a situation, report an event, or collect a testimony or statement, without regard to the quality of its audio-visual properties. It can also be used as a more sophisticated and complex information or training tool. Video can be (...) used to exchange testimonies, to report important events or ceremonies, to film musical groups, or to support storytelling or theater performances.

Participatory video (PV) can also provide a technique that allows a human group to shape and create its own story. Making a video is easy and accessible, and is an effective way to encourage people to explore issues together, express their concerns, or simply be creative in the art of telling stories.

This powerful process can enable a group or community to address problems of data collection and the archiving of social, cultural or historical symbols. It also helps a community to communicate its needs and ideas more easily to decision-makers and/or to other groups and communities. As such, participatory video can be a very effective tool for mobilizing and engaging marginalized populations and helping them to implement their own forms of sustainable development based on local needs.

Participatory video can also help to provide more precise information on program performance and achieved results as well as the effects and changes emanating from the program. All can be used to help measure program performance more effectively.


1. "Soft" system thinking postulates that there are multiple perceptions of reality and emphasizes qualitative methods

2. Watson D. "Monitoring and Evaluation Capacities and Capacity Building. Join an innovative practice ", http://www.capacity.org/

Endnotes

MSC is an evaluation approach that, because of its flexibility, adaptability and ease of use, fits well with program evaluation needs in an African context where people are more inclined to use speech as the main communication tool.

Conclusion

While explaining several key aspects of orality and community participation in the African context, this article has attempted to show the benefits of an oral communication-based assessment. Clearly, orality, along with other African socio-cultural specificities, can play a major role in the evaluation process and help develop an evaluation model adapted to Africa, and hence better contribute to its development.

By combining the MSC approach with the use of participatory video, evaluation can better take into account the opinions of African populations, especially those in rural areas, and help them become familiar with an evaluation culture, all with the objective of better judging the effects of development projects and programs on the continent. In addition, the MSC model is a meta-evaluation tool for programs and therefore contributes to the improvement of program life cycles.


Endnotes

1. “Soft” systems thinking postulates that there are multiple perceptions of reality and emphasizes qualitative methods.

2. Watson D., “Monitoring and Evaluation Capacities and Capacity Building. Join an innovative practice”, http://www.capacity.org/

References

Baré J. (2001), L’évaluation des politiques de développement. Approches pluridisciplinaires.

Cloete F. (2016), Developing an Africa-rooted programme evaluation approach.

Cota asbl (July 2007), Fiche 9 - Technique du changement le plus significatif.

Davies R. and Dart J. (2005), La technique du changement le plus significatif (CPS), disponible sur www.mande.co.uk/wp.../2005/.../Franch%20of%MSC%20Guide.pdf

Derive J. (2009), Typologie et fonctions de quelques genres oraux du Manding à l’aune du critère de la spatialité. Journal des africanistes 79-2: L’expression de l’espace dans les langues africaines II, available at https://journals.openedition.org/africanistes/3025

eVALUation Matters (December 2014) Building African states, Independent Development Evaluation of the African Development Bank Group.

Estrella M. (2004), L’évaluation et le suivi participatifs. Apprendre du changement. Karthala, Paris.

Li Z. (1996), The Private Life of Chairman Mao. Arrow Books.

Lhote E. (2001), Pour une didactologie de l’oralité.

La Rocca F. (2007), Introduction à la sociologie visuelle, available at https://www.cairn.info/revue-societes-2007-1-page-33.htm

Mamboungou J. (2014), « Littérature orale et civilisation de l’oralité en Afrique : Quelques barrières à lever pour une approche objective de la culture africaine moderne ».

Lunch N. and Lunch C. (2006), Vidéo Participative : Perspectives et Applications, un manuel pratique. InsightShare.

The Bangkok Declaration (2015), available at web.undp.org.




Author’s profile

Yao Roger Modeste Apahou is a consultant in communications, project management and monitoring & evaluation. He holds a Master’s degree in English from the University of Cocody, a Master’s degree in communication research from the University Félix Houphouët-Boigny, and a certificate in project management from the UNESCO Associated Schools Network (ASPnet).

Roger Apahou also has a strong capacity for analysis and synthesis, and excellent writing skills. These qualities won him the second prize in the AfDB Essay Contest during the AfDB 2016 Evaluation Week, and led to his authoring the Education and Learning Handbook of the Youth Democracy in Côte d’Ivoire for UNESCO.

He also has a capacity for autonomy in the conduct of projects in general, and expertise in training engineering. Lastly, Apahou collaborated with the UK Evaluation Office on the UNICEF Learning for Peace Project, implemented by Search for Common Ground in the Daloa region of Côte d’Ivoire.


Evaluation of Development Aid: What are the Most Appropriate Approaches for Africa?

Considering the diversity of evaluation approaches, it seems appropriate to ask which ones are best suited to the African context. For the past two decades, the African continent has become increasingly interested in the practice of evaluation. Between the demands of donors and the need for accountability, Africa is looking for tools to better conduct evaluations of development aid. However, it is not only a matter of institutionalizing evaluation and importing turnkey solutions, but of choosing the approaches most appropriate to local contexts. Through a review of the related literature, this article proposes three approaches deemed best adapted to the African context. Although they were developed in the West, they strongly integrate the context and stakeholders in the evaluation process; hence, through a practical, empowering and evolving approach, Africa can adapt them to develop more effective evaluation practices.


Salah Eddine Bouyousfi, Independent Consultant

Key Messages

❚ Development assistance programs take place in an open social system. This is the case in the African context, which presents unique complexities in the implementation of programs.

❚ Context-based evaluative approaches and causal mechanisms provide in-depth information on whether goals are being met or not.

❚ Stakeholder integration and the adaptability of evaluative approaches foster ownership of evaluation results and the improvement of development aid programs to more closely address the needs of local populations.

Introduction

In recent decades, international organizations and their financial overseers have paid particular attention to issues of transparency and accountability. Financial and economic crises, and the dominance of control, have accentuated this trend. As a result, an arsenal of tools, methods, standards and laws has been put in place to translate this trend into the field. It is in this context that the practice of evaluating policies and programs takes on an important role.

Faced with a myriad of evaluation methods and approaches, choosing which method to adopt for a specific program remains a central issue in the evaluation process. In their perpetual quest for rigor in evaluation, international development practitioners recognize how delicate it is to apply these approaches in a development context (Ridde, 2016). Given the growing need to evaluate the effectiveness and efficiency of public policies, several African countries have taken a particular interest in the practice of evaluation. This trend has resulted in the proliferation of national evaluation associations and the diversification of adopted methodological approaches (Kobiané, Kouanda & Ridde, 2016). Thus, the first conference of the African Evaluation Association (AfrEA) was held in 1999 (Mathison, 2004), and 2003 saw the creation of the International Organization for Cooperation in Evaluation (IOCE).

Developed in the North, the different evaluation approaches are difficult to transpose to Southern countries. The gaps between what should be done (“good practices”) and what is actually done (“real practices”) are due to several factors: the actors and their logics, evaluation planning, and the role of development aid donors (Ridde, 2016). After assessing the situation in development aid agencies, this article proposes a methodological reflection on evaluation practices in order to determine which are best suited to the African context, namely the realistic approach, the empowerment approach and the developmental approach.

Evaluation within development aid agencies

In the 1960s and 1970s, the practice of evaluation within development agencies was limited to measuring


the profitability of projects. These institutions were not particularly subject to accountability, and as a result their interest in evaluation was, at best, ad hoc and secondary. The 1979 oil shock triggered an international monetary crisis that prompted governments, particularly Anglo-American governments, to develop the evaluation of public policies with an objective of rationalization (Laporte, 2015). This trend resulted in the development of international “good practices” in the evaluation of development aid. It was followed by the adoption, in 1991, of the Principles for Evaluation of Development Assistance, developed by the Development Assistance Committee (DAC) of the Organisation for Economic Co-operation and Development (OECD) to guide evaluation questions.

Although this period was marked by the standardization and institutionalization of evaluation in development agencies, implementation followed two opposing but complementary approaches. The first, serving an objective of making aid more egalitarian, is a qualitative and participatory approach to evaluation, while the second, based on experimental methods, is a quantitative approach with a rationalization objective. This surge of interest in evaluation has resulted in the strong development of evaluation research, and therefore in a diversity of approaches and concepts. It is in this perspective that this article proposes three approaches that seem to us best adapted to the African context. The first is the realistic approach.

The realistic approach

Over the last twenty years, the realistic evaluation proposed by Pawson and Tilley (1997) has attracted particular interest, mainly because of the limitations of randomized experiments in testing the theory of a complex program and, subsequently, in measuring its long-term effects. Realistic evaluation becomes a promising alternative where it is impossible to isolate a context or mechanism in order to randomize it. This complexity of mechanisms stems from the phenomenon of causality: outcomes are the consequence of multiple factors that can interact differently and thus generate results in different ways in different contexts (Fletcher et al., 2016). It is in this perspective that critical realism, to which the realistic approach is related, represents a promising alternative for dissecting the complexity of interventions that are inherently embedded in open social systems.

In line with critical realism, the realistic approach is based on research in natural environments that contain contextual information; it employs both quantitative and qualitative methods (Pawson et al., 2005). The objective of realistic evaluation is to explain socially significant regularities whose underlying mechanisms are generally hidden; these mechanisms are defined as elements of the evaluator’s reasoning in response to an intervention (Astbury & Leeuw, 2010). This is why the realist approach is considered a middle-range theory, insofar as the knowledge produced is partially regular (Ridde et al., 2012). In an iterative process, the approach proceeds through the following steps:

❚ Step 1: The evaluator tries to understand the nature of a program based on literature reviews and interviews with program implementers;

❚ Step 2: The evaluator develops hypotheses on the mechanisms contributing to the achievement of results in a well-defined context;

❚ Step 3: The evaluator tests his/her hypotheses through a survey of the results;

❚ Step 4: Through the analysis of collected data, the evaluator devises a theory of the program, i.e. the context-mechanism-outcome configurations that


inform how the program works and under what circumstances (Blamey & Mackenzie, 2007).

Pawson & Tilley advocate an iterative process. The theory of the program is thus refined throughout successive iterations. The figure below represents the realistic evaluation approach according to its authors.

On the methodological side, most surveys conducted as part of a realistic approach are semi-structured interviews with a predominance of exploratory questions (Manzano, 2016), with the choice of respondents based on the researcher’s assumptions. Indeed, critical realism does not attempt, unlike constructivism, to construct a reality but to test hypotheses. The purpose of the interviews is first to elicit the program theory, then to refine it, before consolidating the knowledge produced.

Although it generates operational results that improve programs (Punton, Vogel & Lloyd, 2016), the realistic approach has methodological limitations and implementation constraints. In addition to the frequent lack of pre-existing data, the approach is time- and resource-consuming (Salter & Kothari, 2014). Its concepts are also still subject to varying interpretations and operational difficulties (Pawson & Manzano-Santaella, 2012), and dissociating the contextual elements from the mechanisms represents a major challenge to its operationalization (Ridde et al., 2012). Lastly, the knowledge produced is difficult to transport across borders (E. De Souza, 2015).

In summary, the realistic approach aims to understand and explain, from a formative perspective, why and how a program achieves its objectives. Given the limitations of experimental evaluations in evaluating development programs, which are usually driven by hidden mechanisms, we strongly advocate the mobilization of critical realism as a conceptual framework. It seems better adapted not only to understanding social reality, but also to demonstrating the methodological




flexibility of the realistic approach that allows tools and methods to be adapted according to the specificity of the program (Pawson, 2013).

Thus, the realistic approach is a relevant tool for informing decision makers about the mechanisms and contextual factors that shape the course of an intervention. It is, therefore, in many ways an appropriate approach for deepening the understanding of development aid programs. But Africa needs much more: its context is complex, and therefore difficult to appreciate through experimental evaluations based essentially on the predominance of numbers. However, while it is an attractive alternative, the realistic approach does not necessarily include stakeholders or, at the very least, does not specify their degree of inclusion. In this context, we propose another, more inclusive approach to evaluation: the empowerment approach.

Collaborative approaches in evaluation: The empowerment approach

Participatory evaluation requires a thorough understanding of culture and context, which requires the adoption of participatory methods and the targeting of specific population needs (Chouinard & Cousins, 2015). Chouinard & Cousins identified three dimensions of stakeholder integration in the evaluation process. First, the degree of their diversity; second, their level of integration into the production of the evaluation; and, finally, the degree of sharing over the control of evaluation techniques.

Over and above controlling these dimensions, it is important that the majority of stakeholders agree on the objectives of the intervention. To do this, a good understanding of the complex underlying phenomena, as well as the adoption of learning as a central goal of the assessment, is needed (Connolly et al., 2015). Several approaches follow this logic; among the most used is the empowerment approach designed by Fetterman.

As part of the empowerment approach, Fetterman has outlined guidelines for a participatory evaluation, built around ten principles (Fetterman, 2005):

❚ Improvement
❚ Community ownership
❚ Inclusion
❚ Democratic participation
❚ Social justice
❚ Knowledge of the community
❚ Evidence-based strategies
❚ Capacity building
❚ Organizational learning
❚ Accountability

Through these principles, empowerment evaluation aims to improve the program as well as the skills of those who will contribute to the evaluation process (identifying evaluation questions, collecting and analyzing data). However, to strengthen evaluation capacity, Fetterman and others advocate a holistic and systemic approach: participatory evaluations also need to provide training, technical assistance and quality improvement of the evaluation (Fetterman & Wandersman, 2007). These are central requirements in the African context, where cultural dimensions are strongly present. At the same time, key stakeholders need to be integrated, in a flexible manner, from the beginning of the assessment to fit the context of the intervention (Fetterman, Wandersman & Kaftarian, 2015). A flexible approach makes it possible, among other things, to mobilize the aforementioned principles; it all depends on the context and the needs of the community affected by the intervention (Fetterman, Wandersman & Kaftarian, 2015).

The empowerment approach is more a way of thinking than a methodology


or a concept. It primarily helps people to evaluate their own programs (Fetterman, Wandersman & Kaftarian, 2015), with the main objective of increasing the probability of achieving results by increasing the capacity of stakeholders to plan, implement and evaluate their own programs (Fetterman & Wandersman, 2007). Of particular interest is the inclusion of all stakeholders, including marginalized populations, without favoring elites. This allows greater ownership of results and increased program efficiency and effectiveness (Fetterman, Wandersman & Kaftarian, 2015). The choice of participants in a participatory evaluation must be based on an analysis of the networks of actors, in order to avoid any instrumentalization of the evaluation and to guarantee a sufficient pluralism of viewpoints. This procedure is especially recommended in conflict situations where the interests of the various stakeholders diverge. The goal is not just to build consensus but to bring out the diversity of stakeholders’ issues and interests (Rey-Valette & Mathé, 2012).

In an empowerment evaluation, ideas, values and practices contribute to building evaluation capacity, particularly in developing countries. It is in this perspective that empowerment evaluation can enable African populations to better evaluate development aid programs, in partnership with donors. While this approach highlights the adaptive nature of evaluation design, developmental evaluation does so even better.

Developmental evaluation

The imperatives of an increasingly dynamic world have pushed evaluators towards a non-standard, evolving evaluation model, one that allows for ongoing adaptation and timely decisions based on changing conditions (Patton, 2016). From this, Patton’s developmental evaluation was born. It is an innovative approach that uses adaptability and feedback to adjust to change. Unlike traditional approaches that rely on a linear model to explain program

Evaluation of Development Aid: What are the Most Appropriate Approaches for Africa?48

Page 51: A Quarterly Knowledge Publication on Development Evaluationidev.afdb.org/sites/default/files/documents/files... · Evaluation Association (AfrEA) made it central to the Association’s

eVALUation Matters Fourth Quarter 2019

outcomes, this approach is based on an adaptive development process in dynamic environments.

In Africa, the process often involves diligently repeating the steps, principles and processes of an imported model in order to perform an assessment. These imported models, originally designed to inform decision-makers about the merit of a program, produce simple evaluation reports at the end of an intervention. In response, Patton proposes a systemic approach to evaluation that specifies the boundaries of a particular approach (Patton, 2016). Thus the developmental evaluation, based on eight principles, came into being:

1. Developmental purpose;
2. Evaluation rigor;
3. Utilization focus;
4. Innovation;
5. Perspective of complexity;
6. Systems thinking;
7. Co-creation; and
8. Instant feedback (Patton, 2016).

An evaluation should be considered developmental if and only if the above principles, interpreted and applied according to the context, are scrupulously respected. In other words, these principles need to be explicitly contextualized in the processes, outcomes, design and use of evaluation recommendations.

Developmental evaluation implies that the innovation displayed by a program or project is evaluated in a dynamic and complex context. This is the case for social and development programs, which are usually embedded in changing systems. Here, implementation managers are always looking for innovative solutions to complex problems. Developmental evaluation can then be used to tailor effective and fast-moving solutions to a specific context, including crisis situations (Patton, 2016). This approach is thus strongly linked to innovation, as all stakeholders learn, deepen their knowledge and progress. These deepened skills and knowledge represent innovation within a particular context. Developmental evaluation thereby becomes a pillar of the intervention, because it influences the intervention through instant feedback (Patton, 2016).


Figure 2: The conceptual framework of developmental evaluation, adapted from Patton. [Figure showing the eight principles of developmental evaluation linking innovation, change and evaluation (conception, methodology, data collection and analysis, recommendations) within the context, with feedback and influence flowing between stakeholders and the intervention.]


At the methodological level, developmental evaluation follows a number of stages: (i) the description of the intervention and, above all, of the presuppositions behind the efforts required to achieve the expected results; (ii) instant notification of deviations from aspirations; and (iii) review, in collaboration with the implementers, of what is working, and of corrective measures to close identified gaps and make necessary changes.
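To make these three stages concrete, the following is a minimal, illustrative sketch (in Python) of how such a feedback cycle might be operationalized. All names, thresholds and figures here are hypothetical illustrations, not part of Patton's published framework.

import random

# Illustrative sketch only: a toy developmental evaluation feedback loop.
# Stage (i): 'expected' encodes the presuppositions about expected results.
# Stage (ii): each cycle flags deviations from aspirations as they occur.
# Stage (iii): flagged gaps trigger a (simulated) review with implementers
# and a corrective adjustment of the intervention model.

def run_developmental_cycle(expected, observe, tolerance=0.1, max_cycles=5):
    for cycle in range(1, max_cycles + 1):
        observed = observe(cycle)
        # Instant feedback: compare aspirations with what is actually observed
        gaps = {k: observed[k] - v for k, v in expected.items()}
        off_track = {k: g for k, g in gaps.items() if abs(g) > tolerance}
        if not off_track:
            print(f"Cycle {cycle}: on track, no correction needed")
            continue
        print(f"Cycle {cycle}: deviations {off_track} -> review with implementers")
        # Corrective measure (simulated): revise the model towards observations
        for k in off_track:
            expected[k] = round((expected[k] + observed[k]) / 2, 2)

# Hypothetical usage: one indicator observed with noise over five cycles
targets = {"households_reached": 0.80}
run_developmental_cycle(
    targets,
    lambda cycle: {"households_reached": 0.80 + random.uniform(-0.3, 0.3)},
)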

Traditional evaluation approaches are based on linear models with measurable results, but they can have limitations in an environment of high turbulence and rapid change. As a result, evaluators most often turn to pre-existing models, not because they are the most appropriate but because they know them (Patton, 2016). It is in this perspective that the developmental approach to evaluation represents a promising avenue in a dynamic and complex environment. Figure 2 represents the conceptual framework of Patton's developmental evaluation, as adapted.

Imported evaluation models attract a number of criticisms. Developmental evaluation represents an interesting avenue because the design of the evaluation itself is evolutionary and can therefore easily integrate with, and adapt to, a local African context.

Conclusion

This article has argued that, by focusing on context and taking stakeholders into account, contextualized, participatory and adaptive approaches appear to be the most appropriate for evaluations of development aid in Africa. In addition, their methodological flexibility gives them the capacity needed to highlight the causal mechanisms within an inclusive evaluation process. These approaches are particularly relevant in the African context, which presents a set of unique complexities and where the inferential links between interventions and program results are not always easily identifiable.

References

Astbury, B. and Leeuw, F. L. (2010) ‘Unpacking Black Boxes: Mechanisms and Theory Building in Evaluation’, American Journal of Evaluation, 31(3), pp. 363–381. doi: 10.1177/1098214010371972.

Blamey, A. and Mackenzie, M. (2007) ‘Theories of Change and Realistic Evaluation’, Evaluation, 13(4), pp. 439–455. doi: 10.1177/1356389007082129.

Chouinard, J. A. and Cousins, J. B. (2015), ‘The journey from rhetoric to reality: participatory evaluation in a development context’, Educational Assessment, Evaluation and Accountability, pp. 5–39. doi: 10.1007/s11092-013-9184-8.

Connolly, J., Reid, G. and Mooney, A. (2015), ‘Facilitating the evaluation of complexity in the public sector: learning from the NHS in Scotland’, Teaching Public Administration, 33(1), pp. 74–92.

De Souza, D. E. (2015), ‘Critical Realism and Realist Review: Analyzing Complexity in Educational Restructuring and the Limits of Generalizing Program Theories Across Borders’, American Journal of Evaluation, pp. 1–22. doi: 10.1177/1098214015605175.

Fetterman, D. M. (2005), Empowerment Evaluation Principles in Practice. New York: Guilford Publications, pp. 42–72.

Fetterman, D. M., Wandersman, A. and Kaftarian, S. (2015), ‘Empowerment evaluation is a systematic way of thinking: A response to Michael Patton’, Evaluation and Program Planning, 52, pp. 10–14. doi: 10.1016/j.evalprogplan.2015.03.004.

Fetterman, D. and Wandersman, A. (2007), ‘Empowerment Evaluation Yesterday, Today, and Tomorrow’, American Journal of Evaluation, 28(2), pp. 179–198. doi: 10.1177/1098214007301350.

Fletcher, A. et al. (2016), ‘Realist complex intervention science: Applying realist principles across all phases of the Medical Research Council framework for developing and evaluating complex interventions’, Evaluation, 22(3), pp. 286–303.

Kobiané, J.-F., Kouanda, S. and Ridde, V. (2016), Pratiques et méthodes d’évaluation en Afrique.

Laporte, C. (2015), L’évaluation, un objet politique : le cas d’étude de l’aide au développement. Institut d’études politiques de Paris.

Manzano, A. (2016), ‘The craft of interviewing in realist evaluation’, Evaluation, 22(3), pp. 342–360.

Mathison, S. (ed.) (2004), Encyclopedia of Evaluation. Thousand Oaks, CA: Sage.

Patton, M. Q. (2016), ‘What is Essential in Developmental Evaluation? On Integrity, Fidelity, Adultery, Abstinence, Impotence, Long-Term Commitment, Integrity, and Sensitivity in Implementing Evaluation Models’, American Journal of Evaluation, pp. 1–16. doi: 10.1177/1098214015626295.

Pawson, R. et al. (2005), ‘Realist review – a new method of systematic review designed for complex policy interventions’, Journal of Health Services Research & Policy, 10(1), pp. 21–34.

Pawson, R. (2013), The Science of Evaluation: A Realist Manifesto, Los Angeles, CA and London: Sage. doi: 10.1177/0020852313497455.

Pawson, R. and Manzano-Santaella, A. (2012), ‘A realist diagnostic workshop’, Evaluation, 18(2), pp. 176–191. doi: 10.1177/1356389012440912.

Punton, M., Vogel, I. and Lloyd, R. (2016), Reflections from a Realist Evaluation in Progress: Scaling Ladders and Stitching Theory.

Rey-Valette, H. and Mathé, S. (2012), ‘L’évaluation de la gouvernance territoriale. Enjeux et propositions méthodologiques’, Revue d’Économie Régionale & Urbaine - RERU, (2012/5), pp. 783–804. doi: 10.3917/reru.125.0783.

Ridde, V. et al. (2012), ‘L’approche réaliste à l’épreuve du réel de l’évaluation des programmes’, The Canadian Journal of Program Evaluation, 26(3), pp. 37–59.

Ridde, V. (2016), ‘“Bonnes”, “vraies” et “quelques meilleures” pratiques d’évaluation de programme de développement en Afrique’, HAL.

Salter, K. L. and Kothari, A. (2014), ‘Using realist evaluation to open the black box of knowledge translation: a state-of-the-art review’, Implementation Science. doi: 10.1186/s13012-014-0115-y.


Author's profile

Salah Eddine Bouyousfi is a project management consultant and researcher in evaluation. He is the Technical Director of a consulting and training firm based in Rabat, Morocco. He has twenty years of project management experience in human resources, logistics and training. He has carried out two program evaluations, on the development of disadvantaged regions in northern Morocco and on the vocational integration of young people in difficulty in Casablanca, as part of his doctoral studies at the Center for Doctoral Studies of the Higher Institute of Commerce and Business Administration Group (ISCAE-Casablanca Group). He holds a State Engineer diploma and a Master of Business Administration from the Institut d'Administration des Entreprises (IAE) de Poitiers, France. He has also been awarded the Project Management Professional (PMP) certification issued by the Project Management Institute.


National Evaluation Systems and Voluntary National Reviews: An African Approach

Several African countries are at various stages of implementing National Evaluation Systems (NES). One of the key steps in enhancing the use of evaluations is increasing culturally appropriate and contextually relevant methods. Establishing a National Evaluation System is a critical step in empowering a country to set the rules under which it can independently conduct evaluations. African nations have also committed to realizing the 2030 global development agenda enshrined in the Sustainable Development Goals (SDGs). This article explores the need for the Voluntary National Review (VNR) reporting recommended under the SDGs. It also argues that VNR reporting and national evaluation systems should complement one another, to ensure that VNR reporting is relevant in demonstrating countries' progress towards achieving their own National Development Plans. Moving towards establishing a National Evaluation System can simultaneously help improve VNR reporting.


Laila Smith and Angelita Kithatu-Kiwekete, CLEAR-AA at Wits University, South Africa

Key Messages

❚ There is a real risk that the drive to achieve international and continental development aspirations through the SDGs and African Union Agenda 2063 becomes a top-down approach that minimizes the importance of national development agendas as articulated through National Development Plans. The complexity involved in developing the monitoring and evaluation frameworks and capacities to report against the SDGs and Agenda 2063 risks distracting attention from building the capacity to develop country evaluation systems.

❚ The Made in Africa Agenda is promising in its ability to circumvent this "top-down" requirement by promoting country-owned approaches through the development of National Evaluation Systems on the demand side, and growing an indigenous curriculum to feed the supply side.

❚ The Voluntary National Review (VNR) requirements are beginning to illustrate the need to prioritize local evaluation capacity development, so that indigenous insights inform the picture of how progress in implementing the SDGs is unfolding.

Introduction: African agenda for evaluations

Evaluation is recognized as an urgent action to support the implementation of the Sustainable Development Goals and AU Agenda 2063 at the national, regional and continental levels. Growing evaluation practice, through conducting evaluations and using their findings, is important for African states to see how well they fare in achieving a communal development agenda (Agenda 2063). It is also critical that governments harness the opportunity to build capable institutions that remain open to scrutiny. The ultimate use of evaluation is to have capable and transparent institutions that provide services to citizens in a manner that is both effective and efficient. Reservations have been raised that the role of evaluation has yet to be meaningfully integrated into the SDG agenda (Simon, et al., 2017; Meyer, et al., 2018). Moreover, there is still room for improvement in building the demand and supply of evaluation capacity on the African continent (Mbecke, 2018; UNDP, 2015).

The African Union (AU) Agenda 2063 "The Africa We Want" Pan-African vision has seven aspirations: a prosperous Africa based on inclusive growth and sustainable development; an integrated continent, politically united and based on the ideals of Pan-Africanism and the vision of Africa's Renaissance; an Africa of good governance, democracy, respect for human rights, justice and the rule of law; a peaceful and secure Africa; an Africa with a strong cultural identity, common heritage, shared values and ethics; an Africa whose development is people-driven, relying on the potential of African people, especially its women and youth, and caring for children; and Africa as a strong, united and influential global player and partner (African Union, 2015: 2). AU member states should align their individual development aspirations with this continental vision. The AU Agenda 2063 acknowledges the need for established evaluation systems that gauge outcomes at the national, regional and continental levels (African Union, 2015: 18).

Concurrently, the global development agenda enshrined in the SDGs also calls for Voluntary National Reviews (VNRs) between peers at the annually hosted United Nations High Level Political Forum, reporting on a set of predetermined SDGs. Table 1 indicates that a growing number of African states reported their VNRs from 2016 to 2019. Although the VNRs' focus is on the SDGs, these states should report on how their national development plans and the AU Agenda 2063 are aligned with the SDGs, and on how the realization of these ideals is evaluated regularly by the states themselves. The significance of this opportunity for maturing country-driven evaluation needs to be harnessed, so that evaluation is more meaningfully embedded into government systems for service delivery.

Yet, three years after the endorsement of Agenda 2030, little progress has been made in using evaluations and evaluative evidence to inform the VNRs. Some of the key assessments of VNR reports done in 2016 and 2017 in the IIED briefings (EvalPartners, UNEG, et al., 2016, 2017) indicate that while M&E systems are becoming embedded within the public administration of reporting countries, there is very little E in the M&E.

Approximately 18 African nations, out of a total of 51 countries globally, registered to present their VNRs, all for the first time except Sierra Leone, in July 2019 at the United Nations High Level Political Forum on the SDGs. This international platform is important for assessing whether governments are embedding the practice of evaluation into their processes. These same 18 countries were selected to receive awareness raising from the African Evaluation Association (AfrEA) and initial training on evaluation from the Centre for Learning on Evaluation and Results (CLEAR-AA) at the AU in Addis Ababa, Ethiopia. A key concern raised at the Addis training was the need for countries to have proactive, country-led processes that also include VNR reporting (CLEAR-AA, 2018). Although some African countries have begun the process of aligning Agenda 2063 to their national development plans, the use of evaluation needs to be much more integrated into VNR reporting processes (Meyer, et al., 2018). The levels of interest among participating countries, and the expressed need for more, underline the importance of evaluation capacity development to support countries in building the foundations of stronger evaluation systems, so that they can increasingly carry out evaluations drawing on domestic capacity. Doing so will help create local capacity as well as a source of evaluation evidence to inform VNRs and, thereby, begin to track progress towards the achievement of the SDGs.

Evaluation practice in Africa

For the past decade, numerous interventions have been put in place to grow the demand and supply of evaluations in Africa, as a first step towards encouraging a culture of evaluation, with an emphasis on teasing out the learning that evaluations offer rather than relying on this form of review primarily as an accountability tool. Sustained political will has been the key driving force in moving away from a donor-driven agenda in which evaluations carried out across many African countries have been difficult for state officials to access.

Table 1: List of African States Reporting VNRs, 2016–2019

African States that have VNR reporting on the SDGs
Year:     2016   2017   2018   2019   Total
States:      6      6     10     18      40

Total African VNRs: 56
Source: Authors' adaptation from United Nations, 2019


There are two key ways of addressing these barriers. The first is building indigenous national evaluation systems that development partners can be persuaded to work through, and with, when carrying out evaluations of the programs they fund in a given country. By setting up systems that establish guidelines and standards on how evaluations should be carried out, a growing number of African countries have been empowered to use evaluations to the benefit of policy, program and project reviews. Furthermore, NES have been helpful in addressing power asymmetries with development partners, ensuring that evaluations carried out in their countries are done on their terms.

The second, building on the first, is developing the local capacity to commission and carry out evaluations. The success of local capacity building efforts, in a manner that can be institutionalized and sustained over time, relies on the strength of a country's culture of evaluation. A shared epistemological framework among country stakeholders (civil society, parliamentarians, Voluntary Organizations for Professional Evaluation (VOPEs), universities) in defining the purpose of a national evaluation system and the value it serves can help ensure that the machinery driving the supply of evaluators (universities and VOPEs) is structured to meet the nature of demand (largely driven by the state).

Benin, Uganda and South Africa, the first generation of NES, have been pioneers on the African continent, and much has been written about their achievements (Goldman and Porter, 2014; Goldman et al., 2018; Goldman et al., 2019). Benin's National Evaluation System was established in 2007, and Uganda's and South Africa's in 2011. The gradual decentralization of these systems from central government agencies to line ministries, and in some instances to provincial or state levels, has over time shifted the discourse within these public administrations towards the importance of sound evidence in informing decision-making around how budgets are allocated and how to steer project and program improvements. There is even a peer learning network amongst these countries, Twende Mbele, a country-driven initiative aimed at improving some of the weak spots in how these systems are run, through collaborative tool development such as rapid evaluation assessments or integrating gender more thoroughly into how evaluations are conducted.

A second generation of African countries is establishing national evaluation systems, among them Ghana, Kenya, Zambia and Niger. Each of these countries has drafted a National Evaluation Policy and has had it approved by cabinet (Niger 2017, Zambia 2019), or is about to (Ghana 2019), as the critical first step towards a country-driven approach to evaluation systems. These countries still have some way to go in building a culture of M&E evidence use at the line ministry level (beyond Health and Education) or at the provincial or state level, let alone in shifting attitudes away from fearing evaluation as punitive rather than embracing it as a learning tool.

CLEAR-AA has carried out research to track how countries are progressing in establishing their evaluation systems. In 2018, CLEAR-AA carried out a series of diagnostics across five countries (Ghana, Zambia, Uganda, Rwanda, and Kenya) to better understand how evaluation systems are functioning and where the entry points might be to strengthen evaluation use. These diagnostics revealed that some countries see the value of putting systems in place to ensure high-quality evaluations are done, but feel no pressing need or desire to have these centrally coordinated, as was the case in Benin, Uganda and South Africa. Given the plethora of development actors engaging in capacity building around National Evaluation Systems, such as UNICEF, the International Initiative for Impact Evaluation (3ie), Independent Development Evaluation (IDEV) of the African Development Bank Group, Twende Mbele, and CLEAR-AA, one wonders whether this is not a normative framework imposed by outside agencies rather than a home-grown aspiration.

This diagnostic evidence fed into the Compass (2019), a bi-annual tool tracking monitoring and evaluation developments in the same five countries noted above, with the addition of South Africa. The four dimensions of the Compass look at: (1) the government-wide monitoring and evaluation system; (2) the functioning of parliament; (3) the professionalization of evaluation; and (4) the existence of an enabling environment. The value of the Compass, for CLEAR-AA and the broader evaluation capacity development community, is to gauge over time how M&E systems are becoming embedded at the country level. For instance, where there is a growth of higher education institutions offering M&E degree programs, it is an indication that the demand for M&E in the country is strong enough to warrant investment in skill building and professionalization pathways. We are starting to see good signs of the institutionalization of local capacity building through accredited postgraduate M&E degree programs in South Africa, Benin, Uganda, Kenya and Ghana.

Building this local capacity of evaluators may take a generation, weaning countries off their reliance on "international experts" flown in from abroad. Nevertheless, if the literature informing the curriculum continues to draw on North American practice and literature, then the reference points for building evaluation capacity in African higher education institutions will continue to miss the point. Efforts to decolonize how evaluation studies are designed are beginning to emerge in exciting ways. For instance, CLEAR-AA's annual offering of the Development Evaluation Training Program in Africa (DEPTA) has piloted a "Made in Africa" curriculum constructed by African evaluation thought leaders such as Zenda Ofir and Bagele Chilisa, both of whom are committed to growing indigenous knowledge systems. Furthermore, CLEAR-AA's partnership with Twende Mbele in building the collaborative curriculum initiative has involved 23 African universities in building a common framework for evaluation studies on the continent. This model aims to ensure that, amidst the tremendous cultural diversity of the continent, African countries row together in building professional pathways for evaluation in academia.


Concluding Remarks

It must be noted that African economies are distinct and at different levels of development. These individual contextual nuances are useful in developing national evaluation systems. Unique country contexts are shaped by national development visions that should address their citizens' concerns, and the country's performance monitoring systems should be set up to track service delivery in a manner that is transparent to the public. Evaluation that is country-driven is capable of increasing the supply of local evaluators and, in doing so, can begin to embed evaluation practice more meaningfully into government systems. Each African country should be able to communicate to its citizens and to the global community its journey towards this ideal, especially in relation to how vulnerable groups are assisted in realizing the same aspirations as their fellow citizens.

References

African Union, 2015. Agenda 2063: The Africa We Want. Available at: https://au.int/sites/default/files/documents/33126-doc-03_popular_version.pdf.

CLEAR-AA, 2019. COMPASS: Tracking Monitoring and Evaluation Developments in Anglophone Africa. Unpublished report, available at https://www.wits.ac.za/clear-aa.

CLEAR-AA, 2018. Report on VNR Training, 10–12 December. Commissioned by UNICEF.

Goldman, I., Byamugisha, A., Gounou, A., Smith, L., Ntakumba, S., Lubanga, T., Sossou, D. & Rot-Münstermann, K., 2018. The emergence of government evaluation systems in Africa: the case of Benin, Uganda and South Africa. African Evaluation Journal.

Mbecke, E.W., 2018. Strategies to improve the supply and demand for evaluation in Africa. eVALUation Matters, First Quarter.

Meyer, W., Naidoo, I., D'Errico, S., Hofer, S., Bajwa, M., Pérez, L.A.T., El-Saddik, K., Lucks, D., Simon, B. & Piergallini, I., 2018. VNR reporting needs evaluation: A call for global guidance and national action. IIED Briefing, January issue.

Ngwabi, N. & Wildschut, L., 2019. 'Scandinavian Donors in Africa: A Reflection on Evaluation Reporting Standards', in Blaser Mapitsa, C., Tirivanhu, P. & Pophiwa, N. (eds), Evaluation Landscape in Africa. Sun Press, Stellenbosch.

Porter, S. & Goldman, I., 2013. 'A Growing Demand for Monitoring and Evaluation in Africa', African Evaluation Journal, 1(1), a25. https://doi.org/10.4102/aej.v1i1.25.

Simon, B., Meyer, W., D'Errico, S., Schwandt, T., Lucks, D., Zhaoying, C., El-Saddik, K., Schneider, E., Taube, L., Anderson, S. & Ofir, Z., 2017. 'Evaluation: A missed opportunity in the SDGs' first set of Voluntary National Reviews', IIED Briefing, May issue.

UNDP, 2015. Insights into national evaluation capacities in 43 countries. Independent Evaluation Office: UNDP. http://www.nec2015.net/sites/default/files/NEC_BaselineStudy.pdf.

United Nations, 2019. Voluntary National Reviews Database. Available at: https://sustainabledevelopment.un.org/vnrs/.

Authors' profile

Laila Smith is the Senior Advisor on Learning in Evaluation at CLEAR-AA at Wits University. She was its Director from 2015, during which time she served as a member of the Management Committee of Twende Mbele. Prior to engaging in the evaluation arena, Laila worked in international development for twenty years, fifteen of which were spent primarily in the water and sanitation sector in Africa. This involved managing a regional program on water and sanitation in rapidly growing small towns and informal settlements, driving stronger regulation in the water sector as a regulator for Johannesburg, as well as developing, piloting and scaling up a model of citizen voice in the regulation of water services. She has published widely on the governance of urban water services.

Angelita Kithatu-Kiwekete's research interests center on public-sector innovations and constraints. She is conversant with gender issues and works to provide strategic, conceptual and practical linkages for vulnerable women in communities; she is also interested in policy and development processes, and has published and presented in this regard. She has conducted research and published on the supranational, national and local political and economic landscape of the continent. She has a PhD in Public Finance from the University of the Witwatersrand, South Africa. She is presently a Senior Research Fellow at the Centre for Learning on Evaluation and Results, CLEAR-AA/Twende Mbele.


Made in Africa Evaluation Failures: Case Studies from the Continent

In the book Evaluation Failures, edited by Kylie Hutchinson, 22 esteemed evaluators share funny, brave and highly insightful accounts of the lessons they have learned by making mistakes. The concept is liberating, but is it one that African evaluators can easily adopt in a context and culture where we have been conditioned to hide our mistakes?


Edited by Nicola Theunissen, with contributions from Jennifer Bisgard; Felix Muramutsa; Zakariaou Njoumemi and Ahmed Ag Aboubacrine

Introduction

The current open debate about failure in Monitoring & Evaluation stems from the book Evaluation Failures, edited by Kylie Hutchinson, in which seasoned evaluators reflect on some of the mistakes they have made and what they have learned from them.

The book originated from the 2015 American Evaluation Association conference, themed Exemplary Evaluations in a Multi-Cultural World: Learning from Evaluation Successes. Noting the event's evident bias towards documenting success, Michael Quinn Patton proposed a session on failure, in order to "bring some balance" to the theme, as he writes in the book's introduction.

The idea of balance is fundamental in moving discussions on Made in Africa evaluation forward. Of the 22 cases from around the world that were featured in the book, only two were from African evaluators.

The reasons for evaluation failure in the Global South, specifically in Africa, are vastly different from those in the Global North. Yet African stories of evaluation failure remain largely untold and undocumented.

This emanates in part from the continent's long tradition of keeping mistakes hidden, a direct result of the imbalance. If donors keep giving precedence to Western evaluators or evaluation consultancies over African evaluators when commissioning large-scale evaluations, it would be counterintuitive for the latter to talk openly about where they went wrong. Yet this is not helpful. It is an iterative cycle that will very likely prevent the continent's evaluators from learning from evaluations that went awry.

To encourage the sharing of stories, Jennifer Bisgard, from African M&E firm Khulisa Management Services, chaired a panel on Evaluation Failures Made in Africa at the 9th AfrEA Conference, collaborating with three senior African evaluators, namely Felix Muramutsa, Zakariaou Njoumemi, and Ahmed Ag Aboubacrine.

Key Messages

❚ Borrowing directly from the Evaluation Failures approach, this article shares four stories of failure by African evaluators (two of them excerpts from the book), to take the sting out of failures that were "Made in Africa" and encourage the confidence to voice them.

❚ Similar to the book, it derives the lessons learned from making mistakes in four different regions of the continent: Southern Africa, East Africa, Central Africa and the Sahel.

❚ The article concludes that the inability to value indigenous knowledge contributes to typical Made in Africa evaluation failures, and therefore encourages African evaluators to admit to, and talk openly about, failures for improved development effectiveness.


This article highlights the many unique evaluation challenges that the African continent is facing, including data transparency, relationships with government stakeholders, resource constraints and the need to understand cultural contexts. It adopts an approach similar to that used in the Evaluation Failures book, presenting case studies of how evaluations went wrong in four African regions (Southern Africa, East Africa, Central Africa and the Sahel), with a view to documenting the stories of African-rooted evaluation failure.

Evaluation failure: Southern Africa

Data sensitivity in corporate entities in South Africa

Across Africa, evaluators and program managers assume that data is being collected routinely and effectively. Is this assumption true? If it is not true, why do we hide it? Is there any crisis of confidence? Are there remedial actions? In this case study from South Africa, Jennifer Bisgard shares some of the learnings around failing to obtain evaluation data.

Both the evaluand (the NGO we were evaluating) and the donor assumed that, in our capacity as evaluators with the required technical skills and knowledge, we would be able to produce outstanding outcomes from the evaluation process.

Perhaps we could have reconstructed this data, but we were hit by another false assumption. Accustomed as we were to evaluating donor projects, we were completely unprepared for a key stakeholder group to refuse to cooperate and share their data with us.

The project was designed to provide opportunities for small businesses to be certified and thus become suppliers to large corporate entities from 2012 to 2015. These firms later joined the NGO as members and sat on its board. The NGO aimed to create "matches" between its 27 corporate members and approximately 400 certified suppliers.

At the inception meeting, the donor stressed that it needed quantitative data on financial results. One reason for the evaluation was that the NGO's routine monitoring had reported low numbers of "matches" between corporate members and certified suppliers.

Thus, the donor required the evaluator to request corporate procurement divisions to share procurement data for the 2012-2015 period (including overall procurement expenditure, and certified supplier expenditure contrasted with non-certified supplier expenditure).

When the NGO itself had made this request, few members had actually provided the information. Nevertheless, both the donor and the NGO assumed that corporate members would share this data more openly with an external evaluator, even though they had been unwilling to share it with the NGO itself. This assumption was not just false; it was incredibly false.

We spent weeks struggling to set up key informant interviews with corporate members, often running into a brick wall. When we finally got around to analyzing the routine data, we realized, to our absolute horror, that it was incomplete, with most members reporting only erratically. The NGO had also suffered a "data crash": the firewall, antivirus and back-up had failed, and the repair of the historic dataset was grossly incomplete. Not only was there an abysmal lack of data, but the little information that had existed was lost in the crash.

Therefore, relying on the corporate members' cooperation to share historic procurement data became paramount. However, we continually ran into confidentiality concerns, indifference, and corporate staff who had never heard of the NGO. Many corporate members were unwilling to divulge the requested information, as they considered it too sensitive and confidential to share for an evaluation report that would end up in the public domain.

Things went wrong because the assumption that corporate entities would share their procurement and finance data was central to the verification portion of the evaluation, leaving us without an approach for attributing (or even tracing contribution) to the NGO's impact. Although the corporate members themselves governed the NGO, even the board could not convince its members to release the information to our evaluation team.

We overestimated the corporate members' commitment to the NGO and underestimated the sensitivity of the procurement and financial information. Although we signed confidentiality agreements, and cajoled and applied pressure, none of it was enough.

In the end, we succeeded in getting only partial datasets from six corporate members. We used this limited procurement data to triangulate with the spotty routine reporting data. We were able to answer most evaluation questions, but there was not sufficient data to document impact.

A critical lesson stemming from this evaluation failure is that an evaluability assessment is essential prior to commissioning any evaluation. Had the donor conducted one in this case, it would have realized early on that an external evaluation could not overcome such an extreme data gap. In other contexts, evaluability assessments can identify when a program has not progressed enough to be evaluated.

* This is an edited version of one of the case studies that appeared in Evaluation Failures: 22 Tales of Mistakes Made and Lessons Learned

Evaluation failure: East Africa

Navigating conflicting stakeholder requirements in Rwanda

During the collection of baseline findings in an African context, the relationship between donors and governments can be particularly complex. Felix Muramutsa shares how he tackled this challenge in an evaluation in Rwanda.

The evaluation was a baseline prevalence survey for a child labor project run by an international NGO (INGO) for which I was working as Senior Monitoring & Evaluation Advisor and Deputy Director. The purpose of the evaluation was to assess the prevalence of child labor in the project's intervention area. The major stakeholders were the donor, the Government of Rwanda (represented by the Ministry of Labor), tea companies, tea cooperatives, district and local authorities, the implementing local NGOs, the consultancy company, and parents and children.

My role was to coordinate the overall baseline survey and ensure that the consultancy company hired to conduct it delivered the goods accordingly, while fully taking into account the expectations of the donor, the INGO and the government. The role of the public officials was to ensure the baseline data reflected the local reality. As the overall baseline coordinator, I was expected to ensure the harmonization of all stakeholders' roles and expectations.

As is often the case in African evaluations, the requirements set forth by the donor and the INGO differed from those of the government; and as an evaluator, my job was to respect them all.


For example, the donor considered that the baseline results were, first and foremost, their business alone. However, the government also required me to keep it informed of every step taken and expected me to validate the findings with it prior to publication. I was instructed by my employer (the INGO) not to share the preliminary findings with the government; they were instead discussed and validated between the consultancy company, the INGO and the donor. Later on, the INGO shared a final version with the government, and I was tasked with following up on dissemination.

The alert came when project stakeholders started challenging our results during meetings with government officials. To my dismay, the government questioned the methodology, sampling and findings, and ultimately did not validate the report. I was personally caught in the middle, with each side requiring me to convince the other of their position.

I believe things went wrong when I first accepted instructions from the donor not to involve the government in all steps of the baseline survey. I should have known there would be complications at the validation stage. What contributed to this mistake was my blind deference to the donor's and the INGO's request to exclude the government at first and only bring it in later, all the while knowing this conflicted with the government's requirement to be involved at all stages of the project, including the baseline.

When things began to go wrong, I became a scapegoat. I was very frustrated to be treated by some government officials like an accomplice of “foreigners who want to spoil the country’s good image by exaggerating a fake child labor prevalence.”

To address the situation, I personally engaged government officials and local project implementers, both formally and informally, in close discussions and some advocacy, to better understand the issues at play beyond the technical ones. We discussed what improvements were needed to the draft report.

As an evaluator, my work was significantly influenced by this situation. It sharpened my understanding that the early involvement and active participation of all stakeholders is key to a successful evaluation.

For multi-stakeholder evaluations of this nature, it is also necessary to set up a joint baseline steering committee. The role of the committee would be to anticipate and discuss any technical, administrative or political issues that might surface at any level, and then lay the foundation for the next steps of the research.

* This is an edited version of one of the case studies that appeared in Evaluation Failures: 22 Tales of Mistakes Made and Lessons Learned

Evaluation failure: Central Africa

Failing to use mixed methods in a water project in northern Cameroon

Resource constraints can hugely detract from the quality of evaluation findings. This case study from northern Cameroon by Zakariaou Njoumemi details why it is critical to factor in both quantitative and qualitative methodologies in long-term social sector projects in Africa. Failing to do so can lead to the wrong answer as to why a project did or did not work.

In many rural communities in northern Cameroon, access to safe drinking water is severely constrained. Many women and girls are used to walking long distances, often 10 to 15 km from their homes, to collect unsafe drinking water from rivers.

In the late 1990s, a five-year drinking water supply project was implemented by an international NGO whose mission was to build several drinking water sources at central points in different rural communities in Cameroon. The objective was to provide safe drinking water to every household, thereby reducing the distances and travel times to drinking water sources.

By early 2000, a final evaluation was commissioned. The implementers were eager to know the proportion of women and girls who were using the newly created drinking water delivery points, and their level of satisfaction.

We were appointed as junior evaluators to conduct a final evaluation of the project. We used only a quantitative approach, because the project management team argued that they had a limited budget for evaluation. I failed to convince the project management team to increase the evaluation budget to allow the use of mixed, or combined quantitative and qualitative, evaluation methods, even though I knew from experience that this was critical for the evaluation.

Our evaluation design and methods therefore excluded a mixed methods approach. The evaluation findings that were released and disseminated to the implementers indicated that about 49% of women and girls were using the newly built drinking water sources located within the communities, while 51% bypassed them and continued to walk long distances in search of drinking water.

The quantitative methods could not shed light on why women and girls did not use the newly built drinking water delivery points situated around or near their houses. Based on these challenges, a qualitative evaluation had to be commissioned to complement the quantitative findings.

The qualitative evaluation discovered that, in the assessed rural communities, women and girls were often confined to their houses within closed fences, which limited their access to recreational activities. It was only when they went out to collect drinking water that they had the opportunity to socialize, meet their friends and other members of the community, or attend informal meetings dedicated to women and girls. Walking long distances to rivers therefore provided them with many social advantages.

Our failure to conduct a comprehensive evaluation taught us some lessons, the key one being that social sector evaluations in Africa should investigate much more than just what happened during project implementation, focusing above all on why and how it happened. The evaluator should always use a mixed methods approach, or a combined quantitative and qualitative design, to determine this.

Managers of development projects need to collect regular routine data through monitoring activities, while ensuring adequate budgets for evaluations, so as to cover both quantitative and qualitative evaluation designs and methods from the outset.
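As a purely illustrative aside, the short sketch below (in Python) shows how a combined design pairs the quantitative "what" (a usage rate) with the qualitative "why" (a tally of themes coded from interviews). All data, field names and themes are hypothetical; this is a sketch of the idea, not of the Cameroon evaluation itself.

from collections import Counter

# Quantitative strand: household survey responses (1 = uses new water point)
survey_responses = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]
usage_rate = sum(survey_responses) / len(survey_responses)
print(f"Share using new water points: {usage_rate:.0%}")  # answers 'what'

# Qualitative strand: themes coded from interviews with non-users
coded_interviews = [
    ["socializing_at_river", "confinement_at_home"],
    ["socializing_at_river"],
    ["confinement_at_home", "womens_meetings_en_route"],
]
theme_counts = Counter(t for interview in coded_interviews for t in interview)
for theme, count in theme_counts.most_common():
    print(f"{theme}: raised in {count} interview(s)")  # answers 'why'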

Meta-analysis of evaluation failure: The Sahel

Evaluation will not succeed without bearing in mind local context and indigenous knowledge

Ahmed Ag Aboubacrine refers to evaluation failures that can occur if evaluators are not able to meet with all the intended beneficiaries of a project or observe the actual changes due to seasonal factors such as migration or rain. He further highlights three contributing factors for why evaluations fail in the Global South, alluding to typical Made in Africa lessons that could be derived from these factors.


In many African countries, farmers usually engage in income-generating activities other than agriculture after the harvest. This includes migrating to neighboring regions or countries to work as laborers. In some countries, such as Guinea, Mali, Mauritania and Niger, which are occasionally stricken by drought and where farms are sometimes infested by worms, locusts or other plant pests and diseases, the migration of the labor force happens earlier in the farming season. That is why evaluators must go to the field to better understand such context-specific and exogenous factors that may bias their findings and conclusions.

What is the specific Made in Africa evaluation lesson that could be learned? Asymmetry of information and knowledge undermines the likelihood of an evaluation's success and usefulness. One major lesson is that, in Africa, the usual evaluation planning is not enough to detect the real issues and lessons. One needs to know the context beforehand and include it in the evaluation design. To be successful, evaluations in Africa ought to include a preliminary field phase for preparation purposes.

Evaluators should not rely on desk work or completion reports from executing agencies to plan their evaluation exercise. They should go to the field and understand the social dynamics and seasonal effects before deciding how to conduct evaluations.

Failed evaluations in the Global South are attributed to several factors, one of the most notable being the Western biases of evaluators (from the West). Even when foreign evaluators have adequate expertise and experience, they often tend to look for the change patterns expected by the design (strategy, theory of change, results framework) of their respective organizations. Such a misconception is particularly pervasive in the United Nations' common system and in bilateral agencies.

A second factor is the perverse use of innovation in development interventions. With the emergence of results-based financing, there is a risk that agencies will create incentives for "cooked" results. Implementing agencies seeking funds may focus on creating outputs for the sake of payments, thereby undermining the real relevance of their operations (whether or not these were the right ones).

A last enabling factor of evaluation failure is the inability to value indigenous knowledge, which is all the more relevant when it comes to typical Made in Africa evaluations. Many evaluations in the Global South miss the assessment of changes in local dynamics and power relations. The undocumented nature of these dynamics and relations makes it difficult for evaluators who have not previously been exposed to the context to grasp the true nature of the changes that occurred.

In conclusion, as African evaluators or evaluators working in Africa, we need to take the sting out of the word "failure" and talk openly about, and invest in understanding, the specific circumstances that make evaluations unsuccessful, so as to set ourselves on a path of learning, improved implementation and development effectiveness.


Authors' profile

Nicola Theunissen is an independent communication specialist with a focus on Monitoring & Evaluation. She has consulted for the African Evaluation Association (AfrEA), Khulisa Management Services, and the South African Monitoring & Evaluation Association (SAMEA), amongst others. She also works in the media industry, including for the fact-checking organization Africa Check. Nicola holds a master's degree in Communication and an honors degree in Community Psychology.

Jennifer Bisgard co-founded Khulisa Management Services in 1993. As an expert in monitoring and evaluation (M&E) and organizational development, she leads evaluations and capacity building assignments in the education, democracy and governance sectors. Prior to establishing Khulisa, Jennifer was a Senior Education Specialist at USAID/Pretoria from 1988 to 1993. She has served as a board member for the African Evaluation Association (AfrEA), the International Organisation for Cooperation in Evaluation (IOCE) and SAMEA. She holds a master's degree in Social Change and Development from Johns Hopkins University.

Felix Muramutsa is a freelance consultant with 23 years of experience acquired mainly from more than 30 assignments in research, monitoring and evaluation. He has worked with UN agencies, international and local NGOs, civil society organizations, government institutions, foreign and national universities, and the private sector. Felix holds a master's degree in Psychology and a Bachelor of Arts in Education.

Abraham Wald’s work on aircraft survivability (1984), Journal of the American Statistical Association.

Zolli, A. & Healy, A. M. (2011), 'When Failure Looks Like Success', Harvard Business Review.

Islamic Development Bank (IsDB) (2018), Synthesis Report of Evaluations of IsDB Interventions in Water and Sanitation Sector.

IsDB (2017a), Synthesis Report of Evaluations of IsDB Interventions in Transport Sector.

IsDB (2017b), Synthesis Report of Evaluations of IsDB Interventions in Health Sector.

IsDB (2016a), Synthesis Report of Evaluations of IsDB Interventions in Energy Sector.

IsDB (2016b), Synthesis Report of Evaluations of IsDB Interventions in Agriculture and Rural Development Sector.

IsDB (2013a), Evaluation of the reconstruction of Small and Medium Scale Irrigation Perimeters Project, Mauritania.

IsDB (2013b), Guinea Country Assistance Evaluation (CAE).

IsDB (2011), Morocco Country Assistance Evaluation (CAE).

IsDB (2010), Mali Country Assistance Evaluation (CAE).

Hutchinson, K. (ed.) (2018), Evaluation Failures: 22 Tales of Mistakes Made and Lessons Learned. Sage.

Senge, P. (2006), The Fifth Discipline: The Art and Practice of the Learning Organisation. Random House LLC.

Manai, M. (2016), Improving the Quality of Lessons Learned in the Evaluation Products. IsDB.


Made in Africa Evaluation Failures: Case Studies from the Continent70

eVALUation Matters Fourth Quarter 2019

Page 73: A Quarterly Knowledge Publication on Development Evaluationidev.afdb.org/sites/default/files/documents/files... · Evaluation Association (AfrEA) made it central to the Association’s


Authors' profile

Zakariaou Njoumemi is a senior health economist specialized in health financing and the monitoring & evaluation of public health programs. He holds a PhD in Public Health, with a specialization in Health Economics, from the School of Public Health and Family Medicine, University of Cape Town, South Africa. He is a senior lecturer/researcher in public health evaluation, health economics, health policy and health systems at the Department of Public Health, Faculty of Medicine and Biomedical Sciences at the University of Yaoundé. He is also the Director of the Health Economics & Policy Research and Evaluation for Development Results Group (HEREG) in Yaoundé, Cameroon.

Ahmed Aboubacrine is a seasoned development evaluator with 17 years of experience, including for CARE International in Mali, Burundi, Sierra Leone, and Liberia, where he played a critical role in evaluation capacity building, program quality and impact measurement. In July 2010, he joined the Islamic Development Bank’s evaluation department based in Jeddah. Ahmed holds an Associate Degree in Mathematics, a bachelor’s degree in Applied Statistics and a master’s degree in Decision-Making. His areas of expertise include the design, monitoring and evaluation of relief and development interventions in the public and private sectors.




News in pictures

African Development Bank Observes World Food Day

The African Development Bank ("the Bank") marked World Food Day on 16 October 2019 with an internal event exploring how the Bank's "Feed Africa" strategy aligns with this year's World Food Day theme: "Healthy Diets for a Zero Hunger World."

The event, organized by the Agriculture, Human and Social Development Complex (AHVP), was held at the Bank's headquarters in Abidjan, Côte d'Ivoire. Attendees included Bank staff and members of senior and executive management.

The program included rich discussions and a video explaining the World Food Day concept and the Bank's Feed Africa strategy. One of the discussions was a "debate club" featuring Pim De Keizer, Bank Senior Advisor (for the United Kingdom, Italy and the Netherlands), and Girma Earo Kumbi, Principal Evaluation Officer at IDEV.

The debate explored the challenges of implementing the Feed Africa strategy, how to better capitalize on value chains to create decent jobs, and how to scale up nutrition-smart investment. The two debaters joined other panelists in taking questions from the audience, as well as putting questions to each other about areas of opportunity and directions to take.

The Bank’s Feed Africa strategy is a renewed effort to transform African agriculture into a globally competitive, inclusive and business-oriented sector that creates wealth, generates gainful employment and improves the quality of life for the people of Africa.

During the event, attendees were treated to popcorn, a value-added product from maize.

Find out more: https://idev.afdb.org/en/news/helping-end-hunger-africa-strengthening-agricultural-value-chains



From learning to action: Leveraging the comparative advantages of the private sector in development cooperation in Africa

Cognizant of the strategic role of the Bank in promoting private sector growth in Africa, IDEV has in recent years sought to mobilize lessons and recommendations for effective private sector development on the continent. To this end, IDEV has conducted several evaluations whose findings have generated the requisite knowledge, lessons and recommendations to inform the design and implementation of subsequent private sector-led development interventions.

As part of its efforts to strengthen the sharing of knowledge and learning from evaluations related to private sector-led development, IDEV organized a series of dialogue forums to disseminate the evaluation findings and provide a platform for knowledge sharing and peer learning for the AfDB, bilateral and multilateral development agencies, as well as other development practitioners and stakeholders involved in private sector development. The latest dialogue forum was held on 30 October 2019 at the Bank's headquarters in Abidjan, Côte d'Ivoire. The half-day knowledge session, co-hosted by IDEV and the AfDB Private Sector, Infrastructure and Industrialization Complex, comprised roundtable discussions covering different aspects of the Bank's approach to private sector-led development. IDEV's private sector-related evaluations, including the evaluation synthesis "Towards Private Sector-Led Growth: Lessons of Experience", as well as the three previous private sector development knowledge events in Oslo, Nairobi and Pretoria, provided background to the discussions.

The discussion focused on the following sub-themes:

❚ How the Bank is leveraging the private sector to achieve its corporate strategy (the AfDB's 2013-2022 Strategy and the High 5s);

❚ The Private Sector, Infrastructure and Industrialization Complex’s strategic priorities;

❚ The Bank's Private Sector Operations team and its related peers across the Bank;

❚ IDEV results and learning (What evaluations have we done? What have the evaluations found? What evaluations are ongoing?);

❚ The implications of the provisions of the Bank's Development and Business Delivery Model for private sector operations in terms of upstream support.

Find out more: http://idev.afdb.org/en/event/private-sector-infrastructure-and-industrialization-complex-and-idev-share-lessons-boost




IDEV Webinars held in November

How does the African Development Bank ensure the quality of the projects it funds? – Insights from recent evaluations

The webinar was delivered by three IDEV evaluators (Akua Arthur-Kissi, Mònica Lomeña-Gelis and Madhusoodhanan Mampuzhasseril). They presented the findings, conclusions and recommendations of the independent evaluations of Quality Assurance (QA) across the Project Cycle, and of the Quality at Entry and Quality of Supervision of African Development Bank Group operations during 2012-2017. They discussed the challenges, as well as the context, for ensuring a robust QA framework for public and private sector development projects, and also highlighted best practices for project QA learned from both the Bank and other development organizations.

Find out more: http://idev.afdb.org/en/document/how-does-african-development-bank-ensure-quality-projects-it-funds

Towards a socially-inclusive and environmentally-sustainable development: How does the African Development Bank do it?

IDEV organized another webinar, on the AfDB's Integrated Safeguards System (ISS). This webinar aimed to disseminate the findings, lessons and recommendations from the evaluation of the African Development Bank's ISS, and the actions the Bank is undertaking to ensure the effectiveness of the ISS in achieving its safeguards objectives. The evaluation report presents an in-depth analysis of the relevance and robustness of the ISS design; the efficiency of the system, process, resourcing and the existing incentives; and its emerging effectiveness in achieving the Bank's safeguards objectives. It also explores success factors and good practices for the implementation of safeguards policy, with examples from the Bank and its peer development organizations. The webinar was delivered by Mònica Lomeña-Gelis and Madhusoodhanan Mampuzhasseril, evaluators from AfDB/IDEV, and Justin Ecaat, an Environmental Safeguards Specialist.

Find out more: http://idev.afdb.org/en/document/towards-socially-inclusive-and-environmentally-sustainable-development-how-does-african



Hot off the Press

Synthesis Report on the Validation of 2016 Project Completion Reports (PCRs)

IDEV reviewed the 49 public sector operations for which AfDB Management undertook a self-assessment (Project Completion Report, PCR) in 2016. IDEV produced an independent Project Completion Report Evaluation Note (PCREN) for each project, as well as a synthesis report on overall PCREN results.

The main objective of the evaluation process was to draw pertinent lessons and make recommendations to improve both the management of current projects and the design and management of future projects. The PCRs reviewed related to multi-sector projects (governance, social, and finance sectors) as well as some in the infrastructure (power, transport, water supply, and sanitation), agriculture and environmental sectors.

The report presents a synthesis of the 49 PCRENs, covering topics such as project performance, PCR quality, project Monitoring and Evaluation (M&E) systems, and gender sensitivity of M&E.

The report includes a set of 14 recommendations, of which the five most important are:

1. Improve project M&E in the gender dimension. It is recommended that the Bank explore designing a standard M&E framework for its projects based on a single "living" document;

2. Ensure the independent evaluation team has the crucial documents when conducting the PCRENs;

3. Pay more attention to ensuring the financial sustainability of project outcomes and impacts;

4. Undertake a major review of Cost Benefit Analysis to enrich the results of PCRs;

5. Undertake a review of late projects to identify generic issues that could be remedied in future projects, since projects that require extension are few.

The Synthesis Report of 2016 PCRs was presented to the Bank’s Board of Directors together with the Synthesis Report of 2017 PCRs, and IDEV also produced a Summary Note drawing together key information from the two reports.

Find out more: http://idev.afdb.org/en/document/synthesis-report-validation-2016-project-completion-reports-pcrs

Report cover: Synthesis Report on the Validation of the 2016 Project Completion Reports (An IDEV PCR Validation Synthesis), July 2019



Synthesis Report on the Validation of 2017 Project Completion Reports (PCRs)

This report synthesizes findings from the review of 88 PCRs completed in 2017.

The objectives of this assignment included: (i) assessing the quality and validating the performance of each of the 88 projects covered in the PCRs; (ii) assisting AfDB management and staff to improve the quality of the PCR system; and (iii) contributing to IDEV's Evaluation Results Database (EVRD) on project performance and PCR quality.

The evaluation found the following:

i) Both the PCRs and the review found the relevance of the development objectives for the projects in the portfolio to be highly satisfactory;

ii) On average, the PCRs rated development effectiveness as satisfactory, whereas the PCRENs found it to be less than satisfactory;

iii) The PCRs on average rated the efficiency criterion as satisfactory, while the review found it to be less than satisfactory;

iv) The sustainability of the reviewed projects was found, on average, to be satisfactory by both the PCRs and the PCRENs;

v) In the case of the Bank's performance, the review rated it lower than the PCRs. In general, the PCRs systematically rated the Bank's performance satisfactory or above, even when the project had major implementation issues;

vi) In most cases, the rating of the borrower's performance in the PCRs was neutral and often evaluated as satisfactory, even in cases where the borrower's performance was obviously poor;

vii) The average performance of other stakeholders was found to be satisfactory by both the PCRs and the PCRENs;

viii) The review found that the M&E results framework was often inadequate, with issues of inadequate baseline data, inappropriate indicators, and weak implementation and utilization of the M&E system; and

ix) The review found the quality of the PCRs to be uneven.

The major recommendations outlined in this evaluation centered on three major points, notably project preparation and appraisal, project supervision/implementation, and improvements to project evaluation.

Find out more:

http://idev.afdb.org/en/document/synthesis-report-validation-2017-project-completion-reports-pcrs

Report cover: Synthesis Report on the Validation of the 2017 Project Completion Reports (An IDEV PCR Validation Synthesis), July 2019




Evaluation of the Bank's Utilization of the Public Private Partnership Mechanism, 2006-2017

Given the emphasis placed on Public-Private Partnerships (PPPs) as a means of closing the continent’s infrastructure gap and promoting social and economic development, Independent Development Evaluation (IDEV) at the African Development Bank (“the Bank”) conducted an evaluation of the Bank’s utilization of its Public-Private Partnership mechanism over the period 2006-2017.

The objectives of the evaluation were to: i) assess the extent to which the Bank's PPP interventions achieved development results; ii) assess the extent to which the Bank's PPP interventions have been well-managed; iii) identify factors that enable and/or hinder the successful implementation and achievement of development results; and iv) harvest lessons from experience to inform the Bank's future use of its PPP mechanism.

The evaluation was based on a “Theory-of-Change” approach and relied on mixed methods for collecting and analyzing data at project, sector, corporate and country levels. It mainly used evidence synthesized from seven background reports, 11 project results assessments, non-lending reviews, five country case studies, sector syntheses, a portfolio review, and a benchmarking study.

The evaluation found that the Bank has neither an overarching, formal strategy nor operational guidelines and directives for PPPs. It has generally addressed PPPs within its corporate and sectoral strategies and country strategy papers.

In terms of project performance, however, the evaluation found that the Bank’s PPP interventions are largely relevant and effective, with the benefits likely to be sustained.

In terms of synergies and coordination of the projects, the report found that inside the Bank, despite the lack of a formal coordination mechanism, most projects were coordinated successfully across all key departments and units, with few instances of inadequate coordination between the public and private sector operations of the Bank.

The evaluation made recommendations for the Bank’s Management to consider at the strategic and operational levels to improve internal efficiency and the effectiveness and impact of PPPs on the continent. Some of them are:

❚ Clearly define a strategic framework for the Bank's participation in the continent-wide PPP agenda to improve internal efficiency, and PPP effectiveness and impact;

❚ Strengthen and improve coordination between upstream and downstream interventions;

❚ Continue strengthening communication with external stakeholders on the Bank's PPP agenda in specific sectors.

Find out more:

http://idev.afdb.org/en/document/evaluation-bank%E2%80%99s-utilization-public-private-partnership-mechanism-2006-2017

Report cover: Evaluation of the AfDB's Utilization of its Public Private Partnership Mechanism (2006-2017), Summary Report (An IDEV Thematic Evaluation), September 2019




Evaluation of the Bank's Integrated Safeguards System

To ensure that its projects do not have negative impacts on people or the environment, the African Development Bank (AfDB or the Bank) adopted an Integrated Safeguards System (ISS) in December 2013 to promote socially inclusive and environmentally sustainable growth.

IDEV’s evaluation of the effectiveness of the Bank’s ISS assesses the relevance and robustness of its design; the efficiency of the system, process, resourcing and incentives in place; and its emerging effectiveness in achieving the Bank's safeguards objectives.

Key findings of the evaluation include:

❚ The Bank's ISS architecture is on a par with international best practices, with some areas for improvement.

❚ The ISS was found to be well aligned with the Bank's Ten-Year Strategy and to contribute to corporate objectives.

❚ In terms of the compliance of Bank projects with the ISS, the quality of the Bank's environmental and social (E&S) safeguards work was found to be strong.

❚ Although the Bank was generally found to be compliant with its disclosure requirements before Board approval, there were limitations in the use of E&S documents by stakeholders and deficiencies in their archiving.

❚ Reporting on E&S covenants and mitigation measures was found to be poor and inconsistent.

❚ The evaluation found mixed E&S performance for Bank operations through Financial Intermediaries (FIs).

The evaluation made six major recommendations to Bank Management, including:

❚ Increase the Bank's E&S safeguards resources to better support borrowers and clients to manage E&S impacts and risks across the project cycle.

❚ Support borrowers and clients to manage E&S impacts and risks and establish cross-support linkages between the teams dealing with E&S safeguards, climate change, and gender.

❚ Develop an integrated and automated management information system and resume the Management-led safeguards compliance reviews/E&S audits.

❚ Strengthen the content and guidance of certain selected safeguards components to align them with international best practice.

❚ Reinforce the Readiness Review process; ensure compliance with the E&S sub-categorization of FI operations; and standardize the loan covenants regarding E&S reporting for FIs.

❚ Strengthen safeguards reporting.

❚ Reinforce the knowledge and awareness of internal and external stakeholders on the ISS requirements.

Find out more:

http://idev.afdb.org/en/document/evaluation-banks-safeguard-systems

Report cover: Evaluation of the AfDB's Integrated Safeguards System, Summary Report (An IDEV Corporate Evaluation), September 2019




Past issues

Second Quarter 2019: Best Practices and Innovation in Evaluation

The field of evaluation is on the move – in tracking progress on Agenda 2030, dealing with increasingly complex development interventions, new technologies and sources of data, and more sophisticated evaluation methods. Sharing good practices and innovations in evaluation can help evaluators to learn from each other, to tackle challenges and continually strengthen the profession. This edition of eVALUation Matters aims to showcase selected good, new or innovative evaluation methods that have contributed to better evaluations of development interventions, with a view to improving project/program planning, design and implementation.

Find out more: http://idev.afdb.org/en/document/best-practices-and-innovation-evaluation

Fourth Quarter 2018: Gender in Evaluation Volume 1

This edition of eVALUation Matters seeks to contribute to the debate around some of these questions, including: what types of approaches and methods that meaningfully include gender in evaluation have shown promising results? What type of information should an evaluation seek in order to assess the different impacts of development interventions on women and men at all levels? How could evaluation approaches support the change in mindset required to achieve wider societal impacts (transformative gender equality and women's empowerment practices)?

Find out more: http://idev.afdb.org/en/document/gender-evaluation-volume-1


First Quarter 2019: Gender in Evaluation Volume 2

Women continue to suffer significant economic, political, legal, social and cultural disadvantages in almost all societies. Evaluations of projects, programs and policies must take into account these disadvantages and provide stakeholders with sound and compelling evidence to better inform the planning and implementation of future development interventions. This edition complements eVALUation Matters Fourth Quarter 2018 by providing examples of how selected individuals and institutions have been able to concretely integrate Gender Responsive Evaluation approaches into their work.

Find out more: http://idev.afdb.org/en/document/gender-evaluation-volume-2

Third Quarter 2019: Made in Africa Evaluations Volume 1

Development evaluation approaches have grown into a largely uniform global practice, in particular among development organizations ascribing to internationally agreed norms such as the OECD-DAC evaluation criteria. Some evaluation practitioners have called into question the relevance and effectiveness of current evaluation approaches in the African context, calling for a "Made in Africa" evaluation that takes into account local values, assumptions, and practices. This edition takes stock of some of the theoretical approaches towards a "Made in Africa" evaluation, exploring indigenous approaches and how they could fast-track the achievement of the continental development agenda.

Find out more: http://idev.afdb.org/en/document/made-africa-evaluations-volume-1-theoretical-approaches






African Development Bank Group

Avenue Joseph Anoma, 01 BP 1387, Abidjan 01, Côte d’Ivoire

Phone: +225 20 26 28 41

E-mail: [email protected]

Independent Development Evaluation
African Development Bank

idev.afdb.org