
  • Regional Workshop on the UNDP Evaluation Policy

    Asia and the Pacific

    16th to 19th October 2006, Bangkok, Thailand

    WORKSHOP REPORT

    November, 2006

  • Summary

    The Asia and the Pacific workshop on the UNDP Evaluation Policy was the first in a series of regional workshops to launch the policy and discuss its implications at the country level. Held over three-and-a-half days in Bangkok, Thailand between 16th and 19th October 2006, the workshop brought together over sixty representatives from 21 country offices and 15 national governments, in addition to five UNDP headquarters bureaux and units.

    The workshop revealed that the nature and presence of evaluation is shifting in the region. The demand for greater understanding of development effectiveness, what is working, what is not and why, is placing a premium on evaluation at the level of policy and in terms of the effect of actions on people's lives. Presentations from several national governments revealed the extent and depth to which many countries are investing in monitoring and evaluation systems and practices, reflecting internal pressures for greater understanding and accountability, and external pressures where donor investment is significant. The call to democratize evaluation was also evident, drawing on examples of empowering citizens to evaluate government service delivery.

    UNDP at the country level is being challenged by the Evaluation Policy to respond to increasing internal demands to enhance the quality and use of evaluation for better managing for results, to harmonize with other UN agencies, and to align with and support national systems. The response to these challenges has varied, with some offices actively developing approaches and systems for monitoring and evaluation, while others have yet to identify and resource appropriate institutional arrangements. The workshop provided an opportunity to address a number of these issues, make commitments to action, and recommend areas for support. The following main recommendations were made.

    To the Evaluation Office:

    • Update existing guidance (the Handbook on Monitoring and Evaluating for Results) on approaches and methodologies for addressing multiple perspectives and the use of mixed methods for evaluating UNDP's contribution to human development.
    • Strengthen in-house capacity and systems for monitoring and evaluation through developing an evaluation toolkit and on-line resources.
    • Develop guidelines for enhancing the dissemination and use of evaluation, including clarifying the roles and responsibilities for management response.
    • Review whether it is appropriate and feasible for the United Nations Evaluation Group to manage UNDAF evaluations.
    • Work with the Regional Centres to enhance a vetted roster of consultants and to develop a regional community of practice in evaluation.
    • Work with the Bureau for Development Policy to establish a corporate database of thematic indicators that can be drawn upon and adapted by UNDP at the country, regional and global levels.

    To the Regional Bureau for Asia and the Pacific:

    • Enhance regional capacity in monitoring and evaluation to support Country Offices.
    • Actively strengthen the quality assurance of evaluation, support its systematic use, and provide oversight.
    • Work with the Evaluation Office and Bureau of Management to identify a typology (size of office, context, etc.) of institutional arrangements and funding modalities that will service the monitoring, evaluation and knowledge management functions of Country Offices.

    To other Headquarters Bureaux and units:

    • Support the continual improvement in the clarity of results and contribution of UNDP in planning documents (MYFF, UNDAF, CPD). (Regional Bureau, Operations Support Group and Bureau for Development Policy/Capacity Development Group)
    • Establish a clear and consistent approach to results management, supported by standardization of appropriate systems and drawing upon positive experiences in the region and from other regions. (Operations Support Group and Bureau of Management)
    • Establish a platform for sharing experiences in building partnerships between UNDP and national governments in the provision of technical assistance in evaluation. (Country Offices, supported by the Special Unit for South-South Cooperation and the Evaluation Office)

    To Country Offices together with Government counterparts:

    • Sensitize management and staff on the Evaluation Policy.
    • Strengthen government ownership of evaluation through greater involvement in the process.
    • Strengthen in-house capacity and systems through developing the institutional framework for monitoring and evaluation.
    • Strengthen capacity to support government in evaluation through establishing partnerships and building on existing best practice.
    • Enhance the utility and use of evaluations internally and amongst partners through more inclusion, greater systematic use of findings and recommendations, and effective dissemination.


    1. Introduction

    In June 2006, the Executive Board endorsed a new Evaluation Policy for UNDP. The policy reaffirms the central tenets of evaluation in UNDP, the guiding principles and norms, the requirements, and the roles and responsibilities at all levels of the organization. To launch the implementation of the policy, and to discuss its implications with national counterparts, a series of regional workshops is planned by the Evaluation Office in collaboration with the regional bureaux. The workshops were intended for Country Office and national government staff responsible for planning or making decisions on evaluation. The first such workshop focused on the Asia-Pacific region and was held in Bangkok from 16th to 19th October 2006.

    Objectives of the Workshop

    The overall objective of the workshop was to establish a basis for the implementation of the policy in the context of regional and national priorities and challenges. The specific objectives were to:

    1. Develop a comprehensive understanding of the UNDP evaluation policy and its implications for implementation.

    2. Enhance understanding of how the UNDP evaluation function provides a basis for accountability, performance improvement, learning, and knowledge development.

    3. Seek to enhance an effective and rational approach in carrying out roles and responsibilities in evaluation in partnership with government and other key stakeholders.

    The workshop also addressed a number of cross-cutting issues, including: the alignment of the policy with national evaluation systems and institutions, ways of enhancing collaboration in evaluation with national partners, and UN reform. It also addressed evaluations in areas of emerging importance such as conflict and new aid modalities.

    Participation

    The workshop was attended by 62 persons, representing twenty country offices and fifteen national governments. Representatives were also present from UNIFEM, the UNDP Regional Centres in Bangkok and Colombo, and from the headquarters units of the Regional Bureau for Asia and the Pacific, the Operations Support Group, the Bureau for Development Policy, the Office of Audit and Performance Review and the Evaluation Office (see Annex 2 for the list of participants).

    Overview of sessions

    The workshop was divided into five sessions running over three-and-a-half days (see Annex 1 for the full agenda). It was delivered via plenary presentations and discussions, country case presentations, group analyses and practical exercises. The first session, "Evaluation for Development Effectiveness", sought to set the context of evaluation, address the implications of emerging development issues in the region and of alignment and harmonization for the priorities and use of evaluation, and identify key issues, challenges and emerging recommendations.


    The second session, "Evaluation and Managing for Development Results", outlined the principles for results management and shared experiences on national and UNDP monitoring and evaluation practices. The third session, "Evaluation in Context", examined evaluation in two specific contexts, evaluation in crisis situations (natural and man-made) and the use of evaluation to promote public accountability and citizen engagement, and shared experiences on challenges pertinent to the region. The fourth session, "Evaluation Policy at the Country Level", sought to clarify the issues, roles and responsibilities for evaluation at the country level in line with the Evaluation Policy. The final session, "The Way Forward", provided a basis for UNDP to work with national counterparts in identifying measures and actions for implementing the policy.

    Evaluation of the workshop by participants

    The workshop was evaluated by participants against the achievement of the stated objectives overall, for each session, and in terms of the modalities used (presentation and group work). In aggregate, eighty-four percent of participants rated the workshop as having largely or completely achieved its objectives. Of the sessions, those addressing the evaluation policy (session four) and moving forward (session five) were considered the most effective by participants. The modalities of the workshop were also well received, with eighty-six percent of participants feeling that the use of plenary presentation, plenary discussion, group work and group feedback was largely or completely effective in helping achieve the workshop objectives. The group work was well rated, although it was noted that more time should be given for it.

    Qualitative feedback recognized the importance of the strong and active participation of national government representatives, which provided lively and open discussion and exchange. It was recognized that a wide range of issues was covered, proving interesting and useful in terms of implementing the policy at the national level. However, it was also noted that some sessions were too content-loaded, and it was suggested that future workshops could be shortened, more focused and tightened in delivery. Participants' comments included:

    "We should have a workshop like this every year in the region." (workshop participant)

    "[The workshop] has strengthened my commitment to get the M&E (particularly evaluation) function up and running, but more so the government participant has become extremely sensitized and motivated." (workshop participant)

    Annex 3 provides the overall findings of the workshop evaluation.


    2. Implementing the Policy – Issues and Recommendations

    The following section summarizes the key issues, their implications, and the recommended actions made during the workshop, drawing on case examples. Where appropriate, this is prefaced by relevant sections from the Evaluation Policy. Recommended actions are supported by an indication of who may act upon the recommendation. Annex 4 provides the full case material presented at the workshop.

    Ownership of Evaluation

    Relevant sections of the Evaluation Policy

    Evaluation should be guided by national priorities and concerns. It should be inclusive and take into account diverse national interests and values. Evaluation should strengthen partnerships with governments and key stakeholders. It should build the capacity of national institutions to implement, monitor and evaluate. Senior management of country offices, regional bureaux, practice and policy bureaux and the associated funds and programmes will identify, with partner governments and key stakeholders, priority areas for evaluation when preparing the programme, and designing and implementing a strategic evaluation plan.

    Key issues and implications

    Participants highlighted that development evaluation has historically been driven by donor demands, skewing participation and accountability away from the real clients, the public. There needs to be greater democratization of evaluation, including multiple perspectives and the use of mixed methods to enrich public discourse. Greater ownership can be engendered by making evaluation more relevant to public interests and needs.

    The level of national ownership of evaluations commissioned by UNDP has often been weak. Government and public involvement in the strategic selection of evaluation topics and in the design of evaluations has often been absent, reducing the utility and use of evaluations outside the organization. This reduces their value in understanding development effectiveness, and the transparency and thus accountability of UNDP's actions.

    Recommended actions

    Evaluation in UNDP should be based on the normative underpinnings outlined in the Evaluation Policy. It should be focused on human development ends, and include multiple perspectives and the use of mixed methods. New approaches and methods are required to help guide this, and will be incorporated in the revised handbook on Monitoring and Evaluating for Results. (Evaluation Office)

    There should be broader involvement in and ownership of evaluation. At the country level, national steering committees should be established, with representation from government, civil society, donors and the UN, tasked to strategically select the topics for evaluation, inform the preparation process, and periodically review progress. (Country Offices, Regional Bureau, Evaluation Office)


    Evaluation plans should be prepared at the outset of programme formulation (global, regional, country) and included in the programme document submitted to the Executive Board. They should be reviewed annually as part of programme reviews. (Global Programme, Regional Programme, Country Offices)

    National Alignment

    Key issues and implications

    It was noted that alignment with national partners' development goals may raise ambiguities about the clarity of the results to which UNDP works. The intended results must be nationally derived and owned, but it was highlighted that UNDP's contribution can vary enormously depending on, inter alia, the size of the country, the nature of the issues, and the programmatic areas. The example of China was given, where UNDP is economically insignificant but whose programmes seek to contribute in terms of knowledge and ideas. By contrast, UNDP in other, often smaller, states was found to play a more significant role. Consequently, it was agreed that there needs to be a clear theory of change as to what UNDP contributes to results in each case.

    Harmonization under the Paris Declaration and associated aid modalities, e.g. Direct Budget Support (DBS) and Sector Wide Approaches (SWAPs), is placing greater demands on transparency and accountability for outcomes, on the need for improved systems and procedures, and on national capacities, including in M&E. Harmonization also provides opportunities for greater national ownership and leadership, reducing transaction costs and the burden of donor coordination. In Vietnam, for example (see Annex 4.1), it was noted that there is a strong commitment of Government, donors and the UN to the Paris Declaration and the Hanoi Core Statement. There is harmonization under the SWAPs in forestry and education, and budget support to the national programme for poverty reduction in ethnic minority areas. However, harmonization does provide a particular challenge for the UN, as it is often small, fragmented, and without a clear relationship to these new modalities.

    These new aid modalities are also placing greater technical demands on UNDP country offices to build national capacities, including for monitoring and evaluation. This requires a shift in UNDP from running projects to national capacity development and supporting the evaluation of national policies.

    Recommended actions

    UNDP planning documents (MYFF, UNDAF, CPD) must be clear not only about what global, regional and national results UNDP is working to achieve, but also about the nature and focus of UNDP's contribution (see the later section on Evaluation for Management of Results). (Country Offices, Regional Bureau, Operations Support Group and Bureau for Development Policy/Capacity Development Group)

    There is a need to review the UNDP Evaluation Policy within the context of national policies, new aid modalities and UN reform. The policy should serve as a starting point for discussions on how UNDP Country Offices can better work with government, donors and other agencies in the UN system. (Country Offices)

    There is a need to review and consider UNDP's internal capacity to carry out the new roles in national monitoring and evaluation capacity development (see the later section on Capacity Development).


    Evaluation for Management of Results

    Relevant section of the Evaluation Policy

    Senior management of country offices, regional bureaux, practice and policy bureaux and the associated funds and programmes will ensure the evaluability of programmes by identifying clear results, developing measurable indicators, and establishing performance targets and baseline information. In collaboration with national stakeholders and partners, [they will] ensure the effective monitoring of implementation and performance of programmes to generate relevant, timely information for management for results and evaluation.

    Key issues and implications

    A presentation by the UNDP Operations Support Group stated that the corporate multi-year funding framework (MYFF) for results management may be replaced by a Medium Term Strategic Plan (MTSP) covering 2008-2011, which will aim to set clear parameters of measurement at different levels. In response, concerns were raised by Country Offices and national counterparts about the many new changes proposed and about country office capacity to adapt to them.

    It was argued that successful evaluations are built on and support strong, coherent monitoring systems. This was illustrated by cases of national systems in Sri Lanka and India (Annex 4). The strategy to mainstream monitoring and make it a core responsibility of programme staff was not felt to have worked. It was observed that in many cases UNDP has done away with the project management or project support units that supported this function. Further, ATLAS in its current form was not considered a strong enough tool to provide the knowledge management support required to ensure good monitoring. A number of offices have begun reviewing organizational structures, including the importance of units providing support on monitoring and project management, boosted in part by the introduction of PRINCE2 training and the Results Management Guide. As in the case of Indonesia (see Annex 4), country offices are exploring alternative avenues in this regard, including the development of their own performance monitoring systems.

    It was noted that the responsibility and support for programme management and monitoring has not been consistently addressed in the organization. It was felt that there is a need for better guidance on the establishment of simple, high-quality monitoring systems with performance indicators, and on systems to establish baselines and bases for data analyses. It was suggested that the Regional Centres might consider having staff with adequate skills to backstop Country Offices on technical matters in monitoring and evaluation.

    The importance of robust indicators and a stock of data on these indicators was recognized as central to better performance management and evaluation. It was noted that UNDP lacks a stock of core disaggregable data that can be adapted and used to track short, medium and long term progress towards development results in the thematic areas in which UNDP works.

    Recommended actions

    A clear and consistent approach to results management is required, supported by standardization of appropriate systems. This should draw upon the positive experiences in the region and from other regions, and will avoid the possibility of multiple results management systems at the country level. (Operations Support Group and Bureau of Management)


    Regional capacity in M&E should be established to backstop Country Offices on technical issues (such as indicator selection and formulation, templates for monitoring systems, etc.). This should include building partnerships with regional networks and associations. This capacity may be provided through an Evaluation Advisor position located in a Regional Centre, as is the case in the Regional Bureau for Europe and the CIS. (Regional Bureau)

    Country Office senior management should be encouraged to strongly and explicitly support the improvement of monitoring. Otherwise, as examples in the region have shown, all efforts in this direction will remain ineffective. (Country Offices)

    Establish a corporate database of core sets of indicators that can be drawn upon and adapted by UNDP at the country, regional and global levels. Indicators should be disaggregable by gender, geography, etc. (Bureau for Development Policy with Evaluation Office)

    Quality controls should be enhanced to ensure that indicators and CO monitoring systems in general meet minimum quality standards. (Regional Bureau with the Operations Support Group and Evaluation Office)

    Institutional Arrangements

    Relevant section of the Evaluation Policy

    Senior management of country offices, regional bureaux, practice and policy bureaux and the associated funds and programmes will establish an appropriate institutional arrangement to manage evaluation.

    Key issues and implications

    The system of having a focal point for evaluation within Country Offices has not been effective. Many offices were unaware that the responsibility existed, and it was noted that this approach had not led to the best management, quality, dissemination and use of evaluative evidence.

    While full monitoring and evaluation units at the Country Office level are ideal, it was considered that only a few, larger Country Offices can afford such a unit. While the arrangements for small offices have to be different from those of large offices, there was some consensus that, ideally, there should be a minimum of one full-time professional dedicated to M&E and learning. Where offices are small, this may be a shared position across UN agencies. To ensure the independence of evaluation, and in recognition of the programmatic nature of monitoring, it was suggested that the M&E professional should manage the evaluation commissioning process and provide guidance and quality assurance on monitoring to programme officers, but should not get directly involved in the conduct of evaluations or carry out the monitoring per se.

    Recommended actions

    A typology of sustainable institutional arrangements that will service the monitoring, evaluation and knowledge management functions of country offices should be developed based on lessons from experience. This should provide sufficient flexibility to be used and adapted by offices of different sizes in different contexts. (Evaluation Office)

    For large country offices, the establishment of monitoring, evaluation and knowledge management units should be considered. The Indonesia CO provides a strong case example. (Country Offices)

    For small Country Offices, at least one person should have the explicit task of being responsible for evaluations, providing technical backstopping and a quality control function for evaluations. This should be a core task of a staff member which is reflected in his/her TORs and RCA process. (Country Offices)

    All efforts should be made to ensure that the M&E function is harmonized with other UN agencies (Regional Bureau and Country Offices), and with government and national M&E systems. (Country Offices)

    Resources

    Relevant section of the Evaluation Policy

    Senior management of country offices, regional bureaux, practice and policy bureaux and the associated funds and programmes will ensure adequate resources for evaluation. Country offices, regional bureaux, and practice and policy bureaux will be required to prepare an evaluation plan, cost this plan, and allocate the requisite funds from appropriate project and programme budgets.

    Key issues and implications

    Existing budget practice for evaluation was found to vary considerably from country to country. Resourcing project evaluations, where required, was largely practiced through percentage allocation. For outcome and other evaluations, a variety of approaches were found to be practiced, including earmarked project funds, a central resource pool (e.g. Afghanistan), the use of general management funds, or developing specific M&E projects (e.g. Indonesia). It was suggested that, wherever possible, opportunities should be sought to cost-share evaluation with government, other UN agencies and donors. In particular, this should be sought when evaluating contributions to common outcomes, as in the UNDAF.

    Recommended actions

    Consistent planning for evaluations and earmarking of funds must become standard practice in line with the Evaluation Policy. (Country Offices)

    Different models, responding to different sized offices and opportunities, should be researched and presented as cases which Country Offices can consider, adapt and use. (Bureau of Management with Regional Bureau and Evaluation Office)


    Capacity development

    Relevant sections of the Evaluation Policy

    Directors of Regional Bureaux, in their exercise of line oversight, should support and guide country office capacity in evaluation, including establishing regional expertise and evaluation support systems. Country Offices, regional bureaux and practice and policy bureaux require technical and managerial expertise for commissioning and using evaluation for their programmes. Evaluation should strengthen partnerships with governments and key stakeholders. It should build the capacity of national institutions to implement, monitor and evaluate.

    Key issues and implications

    The capacity to evaluate better within country offices, amongst the consultants hired, and with partners was a recurring theme during the workshop. It was noted that capacity in monitoring and evaluation is generally weak and needs to be strengthened across the organization. This was seen as applying not solely to focal points, but to all programme staff. A counter view was expressed that the variability in the conduct, quality and use of evaluations by UNDP related more to a lack of incentives and political will – recognizing the value of strong evaluation – than to a lack of capacity to do it.

    The demand for capacity development in monitoring and evaluation was also expressed by national government participants. In particular, new modalities such as SWAPs were found to place greater technical demands on UNDP country offices to build national capacities, including for monitoring and evaluation. This demand has been met by a number of country offices, including Vietnam, Bhutan and Indonesia. In Vietnam, for example, UNDP has supported M&E capacity development within the national programme for poverty reduction in ethnic minority areas.

    The shift upstream in evaluation, towards evaluating outcomes and the contribution to development effectiveness, was also found to require a different set, or skill set, of evaluators. It was noted that the stock of competent and experienced outcome evaluators is small. To address this, it was suggested that the search for evaluators needs to be broadened, and that greater investment is needed in orienting and building the capacity of existing evaluators in, inter alia, outcome evaluation.

    In each of these areas, it was noted that the Evaluation Policy should provide the basis to guide UNDP discussions at country level on building internal capacity to meet growing demand for the evaluation of government policy.

    Recommended actions

    Strengthen the in-house capacity and systems on monitoring and evaluation through developing an evaluation toolkit (based on the forthcoming revised Evaluation Handbook) and on-line training modules. (Evaluation Office and Learning Resource Centre)

    Consider establishing a community of practice in monitoring and evaluation in the region. (Evaluation Office with Regional Centres)


    Share experiences in establishing partnerships between UNDP and national governments in the provision of technical assistance in evaluation. (Country Offices, supported by the Special Unit for South-South Cooperation and the Evaluation Office)

    Quality Enhancement and Assurance

    Relevant sections of the Evaluation Policy

    All evaluations should meet minimum quality standards defined by the Evaluation Office. To ensure that the information generated is accurate and reliable, evaluation design, data collection and analysis should reflect professional standards, with due regard for any special circumstances or limitations reflecting the context of the evaluation. To ensure this, the professionalism of evaluators and their intellectual integrity in applying standard evaluation methods is critical. The Evaluation Office is responsible for setting evaluation standards, developing and disseminating methodology and establishing the institutional mechanisms for applying the standards; and for assuring the quality of mandatory decentralized evaluations and supporting the quality assurance of the evaluations conducted by the associated funds and programmes.

    Key issues and implications

    It was noted that norms and standards, such as the UN Evaluation Group Norms and Standards for evaluation in the UN system, have not been consistently applied and used across UNDP evaluation and with its partners. New standards being developed by various countries were also found to challenge evaluation in positive new ways. The Chinese Government, for example, has evaluation standards for innovation and equity, both of which could be usefully applied in the UNDP context.

    Human development is multidimensional, and evaluation therefore requires a broad set of criteria to assess it properly. The body of methodologies for evaluation in UNDP is not sufficient, and consequently consultants conducting evaluations draw upon their own criteria and tools. These are often not focused on human development ends, and the variety prevents consistency in quality, comparability and aggregate reporting.

    The quality of outcome evaluations was often found to be poor. In particular, it was noted that they often fail to answer pertinent questions, lack a rigorous assessment of the contribution to the outcome(s) in question, and contain regular factual errors. It was felt that more time and resources should be invested in planning and managing outcome evaluations, including the selection of outcome evaluators. Evaluators from the country and the region should be given priority, in line with efforts to build national evaluation capacity. This should also include closer ties with local, national and regional professional evaluation societies and with institutions involved in analytical work and knowledge production and dissemination.

    The respective roles of headquarters units and the Regional Centres in supporting Country Offices were not always clear. It was noted, particularly within the broader context of results management and evaluation, that several units seemed to have some role, but that the division of responsibility, and whom Country Offices should communicate with, were not always clear.


    Recommended actions

    Better use should be made of the evaluation standards through provision of appropriate guidance and establishment of a quality assurance system. This is already underway, with guidelines and quality criteria developed by the Evaluation Office. (Evaluation Office, Country Office, Regional Bureau)

    An expanded and enhanced vetted roster of evaluators who can support the new generation of evaluations is required. The current rosters need to be made more accessible, and partnerships with regional associations and similar bodies need to be developed to draw in relevant expertise. (Evaluation Office with Regional Centres)

    New approaches and methods are required to evaluate for human development. (Evaluation Office – previously cited)

    Provide incentives and compliance criteria for senior management in establishing the conditions to ensure the integrity of project and outcome evaluation. This may include strengthening the focus on compliance against the criteria outlined in the Results Management Guide. (Regional Bureau)

    Clarify and communicate the roles and responsibilities of the Regional Bureau programme managers and evaluation focal point, the Regional Centre, the Evaluation Office and other HQ units in terms of support and oversight to country offices. (all units)

    Use of Evaluation

    Relevant section of the Evaluation Policy

    Senior management of country offices, regional bureaux, practice and policy bureaux and the associated funds and programmes will prepare management responses to all evaluations, and ensure and track appropriate, timely implementation of the agreed evaluation recommendations; draw on evaluation findings to improve the quality of programmes, guide strategic decision-making on future programming and positioning, and share knowledge on development experience; and ensure the transparency of, and public access to, all evaluation reports.

    Key issues and implications

    It was noted that there is typically a broad range of users for evaluation, ranging from government planning commissions, line ministries, the wider public, and direct beneficiaries, in addition to UNDP and donors. While many evaluations serve to enhance the internal direction of programmes, many cited the lack of clear involvement and ownership by partners, reducing broader relevance and use. More joint evaluation was advocated.

    The poor timing of evaluations, either through inappropriate planning so that they do not feed into policy or programme processes, or through delays in the evaluation process, can render the findings useless. Similarly, weak dissemination of evaluation findings, lack of translation into local languages, lengthy reports and the use of UN terminology are all deterrents to the use of evaluation.


    The follow-up to evaluation is not always consistent. The new management response modality was broadly welcomed, but it was stated that it needs to be clear, among UNDP, the government and partners, who should provide and own the response. It was also warned that there are pitfalls in establishing a management response system, potentially raising expectations and questions regarding the integrity of the information provided. An effective management response system is only possible on the basis of good quality evaluations.

    Recommended actions

    Establish guidelines on the dissemination of evaluation, including best practices and case studies on use. (Evaluation Office)

    The management response system needs to be clarified in terms of roles, responsibilities and follow-up. (Evaluation Office)

    The systematic and effective use of evaluation findings and recommendations needs to be institutionalized. (Country Offices)

    The dissemination of evaluation reports needs to be enhanced. As a minimum, every stakeholder involved in the evaluation process should receive a copy of the report and a short description of how the evaluation is going to be used. (Country Offices, Evaluation Office)

    The disclosure policy for evaluation reports should be clarified. (Evaluation Office)

    Independent Evaluations

    Relevant section of the Evaluation Policy

    The Evaluation Office is mandated to conduct strategic and thematic evaluations, programme evaluations such as the Assessment of Development Results (ADRs) at the country level, evaluations of global, regional, and South-South programmes, and other evaluations as required.

    Key issues and implications

    In discussion and presentation on the relevance and use of evaluations conducted by the Evaluation Office, some government and UNDP Country Office participants stated that they were generally useful, providing a fresh perspective, objective views, and interesting learning from multiple countries. The quality and utility of these evaluations could be improved through the involvement of country offices and national counterparts. Challenges were also recognized in terms of developing flexible and robust methodologies for evaluating across major themes or countries.

    The case of the China ADR was cited as an example of weaknesses in terms of the lack of engagement, and thus ownership, of government, and the lack of a thorough evaluation of UNDP's contribution in terms of knowledge and sharing. However, the findings were used to focus the Country Programme Document (2006-10), restructure the Country Office around outcomes, develop a more focused project portfolio and establish more linkages with UN efforts.


    The importance of independent evaluation was cited, and it was suggested that it might be more appropriate for the Evaluation Office to conduct the UNDAF evaluations, rather than the UN Country Teams.

    Recommended actions

    The timing of ADRs must be aligned with the Country Programme cycle to be useful. The Evaluation Office has indicated that all ADRs are now timed to do so. (Evaluation Office)

    ADRs require the active support of national government, and need to be more adequately discussed with government at the TOR stage. (Evaluation Office)

    Develop appropriate methodologies for evaluating in different contexts, e.g. crisis countries. (Evaluation Office)

    Review whether it is appropriate and feasible for the United Nations Evaluation Group to manage UNDAF evaluations.

    3. Way forward on Implementing the Policy

    The final session of the workshop provided the basis for country groups (UNDP with national government counterparts) to consider what actions they will take in implementing the Evaluation Policy, based on the issues and recommendations of the workshop, and what recommendations they may have for other units to support this process. The main issues raised in the session have been drawn together with the key recommendations and actions made during the previous three days and are summarized below.

    Way forward for Country Offices with Government

    Sensitize management and staff within UNDP, government and amongst partners on the Evaluation Policy

    Strengthen ownership of government in evaluation through partnership

    • Principle: involvement and agreement on all aspects of the evaluation process.
    • Develop and review the evaluation plan jointly with national counterparts and in consultation with other development partners.
    • Disseminate draft evaluation reports to all key stakeholders for feedback to inform final reports.

    Strengthen in-house capacity and systems

    • Develop institutional arrangements that will service the monitoring, evaluation and knowledge management functions of country offices.
    • Enhance use of existing (and new) resources and training materials and events.

    Strengthen capacity to support government in evaluation

    • Explore possibilities of developing partnership frameworks with the government to provide technical assistance for conducting evaluations of government.
    • Support south-south knowledge transfer.
    • Link capacity as a programmatic activity with internal requirements.


    Enhance utility of evaluations

    • Engage with government and partners in the selection and design of evaluations, thus improving the likelihood of relevance and use.

    • Ensure that evaluation reports are objective, strategic and “understandable” to the wide range of stakeholders.

    • Make more systematic and effective use of evaluation findings and recommendations.
    • Enhance the dissemination process.

    Recommended Actions to Support Country Offices

    Recommended actions to be taken by the Regional Bureau with the Regional Centre

    • Develop regional capacity in monitoring and evaluation to support Country Offices.
    • Actively strengthen the quality assurance of evaluation, support its systematic use, and provide oversight.
    • Work with the Evaluation Office and Bureau of Management to identify a typology (size of office, context, etc.) of institutional arrangements and funding modalities that will service the monitoring, evaluation and knowledge management functions of Country Offices.

    Recommended action to be taken by the Evaluation Office

    • Update guidance (the Handbook) on approaches and methodologies for addressing multiple perspectives and the use of mixed methods for evaluating UNDP’s contribution to human development.

    • Strengthen the in-house capacity and systems on monitoring and evaluation through developing an evaluation toolkit and on-line resources.

    • Develop guidelines for enhancing the dissemination and use of evaluation, including clarifying the roles and responsibilities for management response.

    • Review whether it is appropriate and feasible for the United Nations Evaluation Group to manage UNDAF evaluations.

    • Work with the Regional Centres in enhancing a vetted roster of consultants and in developing a regional community of practice in evaluation.

    • Work with the Bureau for Development Policy to establish a corporate database of thematic indicators that can be drawn upon and adapted by UNDP at the country, regional and global levels.

    Recommended action to be taken by other Bureaux and Units

    • Support the continual improvement in the clarity of results and contribution of UNDP in planning documents (MYFF, UNDAF, CPD). (Regional Bureau, Operations Support Group and Bureau for Development Policy/ Capacity Development Group)

    • Establish a clear and consistent approach to results management, supported by standardization of appropriate systems. This should draw upon the positive experiences in the region and from other regions. (Operations Support Group and Bureau of Management)

    • Different models, responding to different sized offices and opportunities should be researched and presented as cases which Country Offices can consider, adapt and use. (Bureau of Management with Regional Bureau and Evaluation Office)

    • Establish a platform for sharing experiences in building partnerships between UNDP and national governments in the provision of technical assistance in evaluation. (Country Offices, supported by the Special Unit for South-South Cooperation and the Evaluation Office)


    Annex 1. Agenda

    DAY 1 Monday, 16th October

    7:45 - 8:30 Registration

    Welcome and Opening

    8:30 - 8:45 Welcome and opening remarks
    Elizabeth Fong, Regional Director, Regional Centre in Bangkok
    Hafiz Pasha, Director, Regional Bureau for Asia and the Pacific (recorded)

    8:45 - 9:00 Presentation of the workshop agenda Sukai Prom-Jackson and David Rider Smith, Evaluation Office

    Session 1: Evaluation for Development Effectiveness
    Chair: Ligia Elizondo, Director, Operations Support Group
    Session Objective: To address the implications of emerging development issues in the region, alignment and harmonization for the priorities and use of evaluation, and identify key issues, challenges and emerging recommendations.

    1. Development priorities and the potential of evaluation in the region A.K. Shiva Kumar, International Development Evaluation Association (IDEAS)

    2. Directions for development evaluation in UNDP Saraswathi Menon

    9:00 - 10:00

    3. Plenary discussion

    10:00 - 10:30

    Break

    10:30 – 12:30

    4. Evaluation priorities and use
    Government of Vietnam and UNDP Vietnam
    UNDP China and UNDP Pakistan
    Q&A
    Evaluation Office and UNDP India
    Plenary discussion

    12:30 - 2:00 Lunch

    2:00 - 3:30 Evaluation priorities and use (continued) Small group work

    3:30 - 4:00 Break

    4:00 - 5:00 Reporting back and discussion in plenary

    Rapporteurs

    Sergelen Dambadarjaa, Programme Specialist, Regional Centre in Bangkok
    Asoka Kasturiarachchi, Policy Specialist, Bureau for Development Policy
    Suresh Balakrishnan, Chief Technical Advisor, UNDP, Lao PDR


    DAY 2 Tuesday, 17th October

    Session 2. Evaluation and Managing for Development Results
    Chair: Mr. Nurul Alam, Deputy Director, Evaluation Office
    Session Objective: To outline the principles for results management and discuss different evaluation and monitoring practices, identifying key issues and challenges for harmonization in the context of UN reform and alignment in a national framework.

    8:30 - 10:00 1. Overview of managing for development results, policy and systems in UNDP Ms. Ligia Elizondo, Director, Operations Support Group Q&A

    10:00 - 10:30

    Break

    10:30 - 12:30 2. Monitoring and evaluation for results
    Mr. Upali Dahanayake, Government of Sri Lanka
    Mr. Vinod K. Bhatia, Government of India
    Q&A on Government presentations
    Mr. Toshihiro Nakamura, UNDP Indonesia
    Mr. Toshihiro Tanaka, UNDP Bhutan
    Q&A on UNDP presentations

    12:30 – 2:00 Lunch

    2:00 – 2:05 Plenary

    2:05- 3:30 Monitoring and evaluation for results (continued) Small group work

    3:30 – 4:00 Break

    4:00 - 5:00 Monitoring and evaluation for results (continued) Reporting back Discussion in plenary

    Rapporteurs

    Lhaba Tshering, Programme Officer, Department of Aid and Debt Management, Bhutan
    Shafique Rahman, Policy Specialist, Myanmar
    Thomas Winderl, consultant

    DAY 3 Wednesday, 18th October

    Session 3. Evaluation in Context
    Chair: Mr. Upali Dahanayake, Government of Sri Lanka

    Session Objective: To outline and discuss the role of evaluation in addressing different development challenges pertinent to the region.

    8:30 – 9:15 1. Evaluation in crisis contexts – natural and man-made Saraswathi Menon, Evaluation Office Q&A

    9:15 – 10:00 2. Evaluation to promote public accountability – the case of citizen resource cards Suresh Balakrishnan, Chief Technical Advisor, UNDP, Lao PDR Q&A


    10:00 - 10:30

    Break

    Session 4. Evaluation Policy at the Country Level
    Chair: Vineet Bhatia, UNDP Deputy Resident Representative, Democratic People’s Republic of Korea
    Session Objective: To clarify issues, roles and responsibilities for evaluation at the country level in line with the policy.

    10:30 - 12:30 1. Evaluation Policy at the Country Level
    Evaluation Office
    Q&A
    Razina Bilgrami, Regional Bureau for Asia and the Pacific
    Cheryl-Lynne Kulasingham, Office of Audit and Performance Review
    Q&A

    12:30 – 2:00 Lunch

    2:00 – 3:30 2. Enhancing the quality of evaluation Small group work

    3:30 – 4:00 Break

    4:00 - 5:00 Reporting back and discussion in plenary

    Rapporteurs

    Suraj Kumar, Programme Analyst, UNDP India
    V K Bhatia, Adviser (Evaluation), Government of India
    Rae-Ann Peart, Programme Support/Special Assistant to RR, UNDP Maldives

    DAY 4 Thursday, 19th October

    Session 5. The Way Forward
    Chair: Saraswathi Menon
    Session Objective: To clarify roles, responsibilities and a basis for moving forward in implementing the policy.

    8:30-10:30 1. Key issues emanating from the sessions Reporting back from rapporteurs Q&A

    10:30 - 11:00 Break

    11:00-12:00 2. Addressing issues and developing an office plan Working groups

    12:00 – 1:00 Addressing issues and developing an office plan Reporting back and plenary

    Closing remarks


    Annex 2. List of Participants

    Country | Participant | Title | E-mail Address

    Afghanistan Micheline Baussard M&E Advisor [email protected]

    Bangladesh K.A.M. Morshed Programme Manager, ICT for Development Unit

    [email protected]

    Bangladesh Mohammad Ali Ashraf

    Assistant Resident Representative and Head of Results and Resource Management

    [email protected]

    Bangladesh Rafiqul Islam

    Secretary, Implementation, Monitoring and Evaluation Division, Ministry of Planning, Government of Bangladesh

    [email protected]

    Bhutan Lhaba Tshering

    Programme Officer, Department of Aid and Debt Management (DADM)

    [email protected]

    Bhutan Tenzin Thinley ARR [email protected]

    Bhutan Toshihiro Tanaka DRR [email protected]

    Cambodia Min Muny Programme Manager for Governance Cluster

    [email protected]

    Cambodia RETH Krisna

    Government Officer at the Council for Development of Cambodia (CDC)

    [email protected]

    China Lu Lei

    ARR/Team Leader for Strategic Planning & Management Support , UNDP China

    [email protected]

    China Bai Jing Programme Manager [email protected]

    India Suraj Kumar Programme Analyst [email protected]

    India Vinod Kumar Bhatia Adviser (Evaluation) [email protected]

    Indonesia Sirman Purba M&E officer [email protected]

    Indonesia Toshihiro Nakamura ARR/Head Planning, Monitoring and Evaluation

    [email protected]

    Iran Saeed Mirzamohammadi Government Counterpart [email protected]

    Iran Shahrzad Amoli PSU Associate [email protected]

    Korea DPR Vineet Bhatia DRR [email protected]

    Laos Sisomboun

    Head of UN Division, Department for International Cooperation of the MOFA of Lao PDR

    [email protected]

    Laos Virachit Vongsak Senior NEX Advisor [email protected]

    Maldives Ali Naseer Mohamed

    Assistant Director General; Department of External Resources, Ministry of Foreign Affairs, Maldives (Government Participant)

    [email protected]

    Maldives Nashida Sattar

    Assistant Resident Representative-Governance, UNDP Maldives

    [email protected]

    Maldives Rae-Ann Peart Programme Support/Special Assistant to RR UNDP Maldives

    [email protected]

    Mongolia Tungalag Majig

    Vice Chairman of the Cabinet Secretariat (CS) and the Head of the Monitoring and Evaluation Department of CS

    [email protected]

    Myanmar Aye Lwin Programme Manager [email protected]

    Myanmar Shafique Rahman Policy Specialist [email protected]


    Nepal Dharma Swarnakar Monitoring & Evaluation Analyst [email protected]

    Nepal Theertha Dhakal Programme Director [email protected]

    Pakistan Abdul Qadir Energy and Environment [email protected]

    Pakistan Usman Qazi Crisis Prevention and Recovery [email protected]

    Pakistan Tariq Zaman Khan Joint Secretary, Economic Affairs Division

    [email protected]

    Philippines Jonathan L. Uy Government Counterpart [email protected]

    Philippines Corazon Urquico Programme Management Support Unit

    [email protected]

    Philippines Diana Marie T. Hernandez Government Counterpart

    [email protected]

    Sri Lanka T.G.Wijeratne Banda

    Director General, Department of Foreign Aid and Budget Monitoring, Ministry of Plan Implementation (MPI)

    [email protected]

    Sri Lanka Prashan Malalasekera Monitoring and Reporting Officer [email protected]

    Thailand Nittaya Mek-Aroonreung Programme Management Associate

    [email protected]

    Thailand Suthanone Fungtammasan Chief of the Evaluation Office, TICA

    [email protected]

    Thailand Pichet Khemthong Programme Official, Evaluation Office, TICA

    [email protected]

    Timor Leste Eusebio da Costa Jeronimo

    Director of Planning and External Assistance, Ministry of Planning and Finance

    [email protected]

    Timor Leste Lin Cao

    ARR and Head of Planning, Monitoring and Evaluation, and Programme Support Unit

    [email protected]

    Vietnam Luu Quang Khanh

    Deputy Director, Foreign Economic Relations Department, Ministry of Planning and Investment

    [email protected]

    Vietnam Nguyen Tien Phong Manager/ARR, Poverty and Social Development

    [email protected]

    Vietnam Le Le Lan Programme Associate, Programme Support Unit

    [email protected]

    Iran Embassy of Thailand Esmaeil Tekyeh Sadat DPR to ESCAP [email protected]

    Regional Centre staff

    Regional Centre in Bangkok Sergelen Dambadarjaa Programme Specialist

    [email protected]

    Regional Centre in Bangkok Nescha Teckle Regional Advisor-CPR

    [email protected]

    Regional Centre in Colombo Tam Pham

    Chief, Knowledge Services Team, UNDP

    [email protected]

    UNIFEM Amarsanaa Darisuren Programme Manager [email protected]

    UNIFEM Ryratana Rangsitpol Programme Manager [email protected]

    Resource People

    Sri Lanka Upali Dahanayake

    Secretary – Ministry of Policy Development & Implementation and Additional Secretary – Ministry of Plan Implementation

    [email protected]

    Lao Suresh Balakrishnan Chief Technical Advisor, UNDP, Lao PDR

    [email protected]


    Spain Thomas Winderl Independent Consultant [email protected]

    India A.K. Shiva Kumar International Development Evaluation Association (IDEAS)

    [email protected]

    HQ staff

    OSG Ligia Elizondo Director, Operations Support Group

    [email protected]

    RBAP Razina Bilgrami Programme Advisor and Evaluation Focal Point

    [email protected]

    EO Saraswathi Menon Director, UNDP Evaluation Office

    [email protected]

    EO Nurul Alam Deputy Director, UNDP Evaluation Office

    [email protected]

    EO David Rider Smith Evaluation Specialist [email protected]

    EO Sukai Prom-Jackson Evaluation Adviser [email protected]

    BDP Asoka Kasturiarachchi Policy Specialist [email protected]

    OAPR Cheryl-Lynne Kulasingham

    Audit Specialist, Regional Audit Service Center Malaysia

    [email protected]


    Annex 3. Workshop Evaluation

    1. Did the various sessions help you achieve the workshop objectives?

    Rating scale: 1 = Not at all, 2 = Partially, 3 = Largely, 4 = Completely

    Session 1: Evaluation for Development Effectiveness
    Not at all: 0 (0%), Partially: 9 (30%), Largely: 13 (43%), Completely: 8 (27%). Average: 3.0 (30 respondents)

    Comments:

    - bit general, didn't concretely increase understanding

    - more energy from resource persons required to motivate audience

    Session 2: Evaluation and managing for development results
    Not at all: 0 (0%), Partially: 6 (20%), Largely: 18 (60%), Completely: 6 (20%). Average: 3.0 (30 respondents)

    Comments:

    - good picture of issues and constraints in results management

    - needed more time to discuss OSG's presentation; needed more on issues of actually managing for results

    Session 3: Evaluation in context
    Not at all: 0 (0%), Partially: 4 (13%), Largely: 19 (63%), Completely: 7 (23%). Average: 3.1 (30 respondents)

    Comments:

    - not clear link between this session and the evaluation policy

    - discussion of natural crises good, but needed more on man-made crises

    Session 4: Evaluation policy at the country level
    Not at all: 0 (0%), Partially: 3 (10%), Largely: 19 (63%), Completely: 8 (27%). Average: 3.2 (30 respondents)

    Comments:

    - the session needed to be longer given the number of countries and participants

    - still some confusion

    Session 5: Moving Forward
    Not at all: 0 (0%), Partially: 2 (7%), Largely: 14 (48%), Completely: 13 (45%). Average: 3.4 (29 respondents)

    Comments:

    - valuable recommendations to be followed-up

    Mean 3.1
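    (For reference, the session averages above appear to be simple weighted means of the 1-4 ratings; for Session 1, for example, (0×1 + 9×2 + 13×3 + 8×4) / 30 = 89/30 ≈ 3.0, and the overall mean is the average of the five session scores, (3.0 + 3.0 + 3.1 + 3.2 + 3.4) / 5 ≈ 3.1.)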


    2. Was the way the workshop was structured and delivered useful in achieving the objectives of the workshop?

    Rating scale: 1 = Not at all, 2 = Partially, 3 = Largely, 4 = Completely

    Plenary presentations
    Not at all: 0 (0%), Partially: 5 (17%), Largely: 18 (60%), Completely: 7 (23%). Average: 3.1 (30 respondents)

    Comments:

    - good opportunity to share wide variety of experiences and views

    - brief summary of presentations to all participants in advance would have been good

    Group work
    Not at all: 0 (0%), Partially: 5 (17%), Largely: 16 (53%), Completely: 8 (27%). Average: 3.0 (30 respondents)

    Comments:

    - good opportunity to discuss challenges ahead, last group session was particularly good

    - group work too short

    Reporting back from groups
    Not at all: 0 (0%), Partially: 2 (7%), Largely: 19 (63%), Completely: 9 (30%). Average: 3.2 (30 respondents)

    Comments:

    - too short, thus a little superficial despite the good ideas

    Plenary discussions
    Not at all: 0 (0%), Partially: 3 (10%), Largely: 16 (53%), Completely: 10 (33%). Average: 3.1 (30 respondents)

    Comments:

    Mean 3.1

    3. Other comments, suggestions and recommendations

    Strengths

    - good, lively and open discussion, excellent exchange between counterparts

    - very interesting and useful, lot of issues covered

    - excellent logistics, good effort by the regional centre

    - good insight into evaluation

    - "[the workshop] has strengthened my commitment to get the M&E (particularly evaluation) function up and running, but more so the government participant has become extremely sensitised and motivated"


    Weaknesses

    - needed more focus and depth to address a number of the issues concerned

    - large number of topics which prevented going into sufficient depth

    - some repetition of issues

    - some problem with time management (raised twice)

    - sessions a bit content loaded

    Suggestions and Recommendations

    Content-related

    - suppress Session 3 (context)

    - provide plenary presentation material to participants in advance

    - present some cases and good practices - and analyze them in depth

    Structure & Process

    - maximum of 2 days with a few strategic presentations, thus reducing repetition

    - need some more creativity in the design of the sessions

    - presentations and Q&A in mornings, group work in afternoon

    - increase group work to 2 hours each

    - change composition of groups in each session, mix people up (raised three times)

    - clarify role of chairs to ensure discussion is focused on the few key issues

    - suggest using the knowledge services unit in the regional centres

    - start the workshop on a Tuesday so people can travel Monday and not use-up their weekend

    Follow-up

    - “have workshop like this every year in the region - just rotate the location around the countries”

    - "keep the spirit of the workshop alive through Eval-Net or an evaluation blog"

    Annex 4. Workshop Presentations

    A full set of presentations made in each session of the workshop can be downloaded from the workshop website.