The EPE facilitates regular learning and the exchange of experience amongst UNEG members (including those carrying out or managing evaluations in different UN organizations) to improve the credibility and utility of UN evaluations and contribute to the advancement of the evaluation function. It provides a forum for sharing information and experiences, primarily on evaluation approaches and methodologies, and enables peer learning and support.
The overarching EPE theme in 2025 is the Future of Evaluation. At the Summit of the Future in New York in September 2024, Member States signed a new Pact for the Future, pledging a new beginning in international cooperation and striving for a safer, more peaceful, just, equal, inclusive, sustainable and prosperous world. This inevitably raises questions for the evaluation community: How should evaluation evolve in line with the Pact for the Future? How can accountability to Member States and donors be strengthened while ensuring closer collaboration and coordination amongst evaluation functions? How attentive is the UN evaluation community to local needs, stakeholders, and experiences? What opportunities have evaluators seized in recent years, and what new opportunities do evaluators see for their function at the UN? How can evaluators harness advances in knowledge and technology in support of learning and accountability?
Lead facilitator: Tami Aritomi, Evaluation Officer, UNICEF EO, taritomi@unicef.org
Summary: This session will provide an overview of the benefits and challenges of using AI tools to support evaluations, comparing exploratory designs with theory-based designs. In this interactive session, participants will use public AI-powered assistant platforms (such as ChatGPT or Copilot) to produce exploratory summary analyses. These will be compared with analyses produced by a locally run AI tool applying theory-based designs from a recent meta-synthesis exercise based on publicly available evaluations. A discussion will follow on the observed differences between the two use cases, as well as the benefits and ethical considerations of using these different tools and models to produce openly available databases from public sources.
Lead facilitator: Marta Duda-Nyczak, Programme Management Officer, UNDSS, duda-nyczak@un.org
Co-facilitators: Michele Tarsilla, Chief, Humanitarian Evaluation, UNICEF, mtarsilla@unicef.org and Ali Buzurukov, Chief, Evaluation and Oversight Section, OCHA, buzurukov@un.org
Summary: This session will explore how interagency collaboration in evaluation can drive mutual institutional learning and better use of evaluations, ultimately leading to strategic improvements in both programme delivery and the evaluation function itself. The presenters will share their experience of collaboration between a large evaluation office (UNICEF) and a smaller evaluation function (UNDSS), with further collaboration with OCHA, and the results achieved. This will be followed by a group discussion on new ideas for inter-agency collaboration that could lead to more meaningful dissemination of evaluation evidence and greater acceptance and uptake of evaluation conclusions and recommendations among decision-makers.
Facilitators: Yuen Ching Ho, Chief, Evaluation Section DMSPC, yuenching.ho@un.org and Kevin Summersgill, Chief of Service, DMSPC, kevin.summersgill@un.org
Summary: The revised Administrative Instruction on evaluation in the Secretariat will likely be issued in late 2024. This presents a good opportunity to reflect on evaluation-related aspects of management reform, and to discuss approaches to further embedding evaluation culture in United Nations Secretariat entities. Colleagues will be able to discuss cooperation and collaboration relating to evaluation culture (for example, sharing experiences of, and approaches to, accessing national evaluation expertise, evaluation management and tracking tools, and senior management commitment and support to evaluation).
Facilitator: Janette Murawski, Communications & Knowledge Management Officer, ILO, murawski@ilo.org
Summary: The purpose of the session is to exchange good practices and learn about new and cutting-edge methods that can be applied to improve evaluation use. This will be done by addressing the following two questions.
Facilitators: Sabine Becker-Thierry, Executive and Strategy Officer, sabine.becker@unu.edu, Alexandra Ivanovic, Senior Programme Manager, ivanovic@unu.edu and Nicolas Dubois, Programme Manager, dubois@unu.edu, UNU
Summary: The evaluation experience and guidance developed by UNEG reflects the main areas of the UN's work: development, humanitarian response, and peacebuilding. Comparable guidance on norm setting and research is currently limited, not least because fewer UN entities' mandates focus on these areas.
The session will give an overview of the United Nations University's (UNU) efforts in evaluating research generated by the university with regard to its uptake and use by key stakeholder groups, such as UN agencies, member states, policy makers, and experts, many of whom operate at a further remove from UNU. The session offers an opportunity to learn from UNEG colleagues about how they have approached and mastered the assessment of research-focused UN work.
Participants will have the opportunity to discuss in breakout groups key questions on how best to assess policy-relevant research work. One group will approach the topic from an evaluator perspective, while the other will take the perspective of UNU leadership/researchers. Together, these findings will provide a deeper understanding of the specific tools that can be deployed, as well as the challenges that remain, when evaluating policy-relevant research and its impact.
Lead facilitators: Laura Olsen, Evaluation Specialist, OCHA laura.olsen@un.org and Sara Holst, Evaluation Officer, FAO, sara.holst@fao.org
Co-facilitator: Sarah Gharbi, Senior Evaluation Specialist, ALNAP
Summary: In March 2025, the new ALNAP criteria for humanitarian evaluation will be launched. This EPE session will provide a preview of the new criteria. Authors will present the criteria to participants and describe practical usage in the evaluation of humanitarian action. Presenters will take an interactive approach by applying the new criteria to various case studies, discussing challenges and changes for the humanitarian evaluation community.
Facilitator: Denis Jobin, Senior Evaluation Specialist, UNICEF, djobin@unicef.org
Summary: UNEG members have a shared responsibility to support National Evaluation Capacity Development (NECD) to ensure that member states can effectively monitor, evaluate, and make informed decisions for better development outcomes. At the 2024 NEC Conference (Beijing), the importance of developing national evaluation capabilities as part of capacity development was underlined, to ensure that National Evaluation Systems are not only well established but also capable of fulfilling their roles effectively, ultimately supporting modern public management, notably learning and accountability. These efforts also align with the United Nations Sustainable Development Cooperation Framework (UNSDCF) and ongoing UN reforms, which advocate for cohesive and integrated support to national priorities, including the development of evaluation capacities and capabilities. By situating NECD within the context of the UNSDCF and UN reform, UNEG members can ensure that their initiatives are coherent, coordinated, and responsive to the evolving needs of member states.
This session offers an opportunity for UNEG participants to share practices, exchange thoughts on the potential benefits of harmonizing approaches to NECD, and develop recommendations that could be put forward not only to UNEG members, but also to international development partners and member states interested in strengthening national evaluation capacities and capabilities.
Facilitators:
Summary: The session will foster an exchange of organizational experiences focused on specific challenges faced by small evaluation functions as well as opportunities for navigating them individually and collectively. Through interactive features involving the participants, it will invite collective reflection around priorities and potential strategic areas that should be incorporated in the next UNEG Strategy (2025-2029).
Facilitator: Aditi Bhola, Programme Management Officer, OHCHR, aditi.bhola@un.org
Summary: The session will look at the implementation of the revised UNEG guidance on integrating human rights and gender equality (HRGE) in evaluations – some practical examples and techniques for including it in evaluations, lessons learned from recent evaluation cases, and how to overcome any challenges associated with it. The session will include input from practitioners who have implemented the guidance in their evaluations, discussing relevant good practices (including tools) together with an open discussion and Q&A.
Group work may then be included, with participants split into groups and asked to discuss the following guiding questions to come up with proposals and practices to strengthen work in this area:
Facilitators: Claudia Ibarguen, Chief, Evaluation Section, UNESCO, c.ibarguen@unesco.org and Judit Jankovic, Senior Evaluation Specialist, ICC, Judit.Jankovic@icc-cpi.int, UNEG Peer Review Working Group coordinators
Summary: This session will provide an opportunity for agency representatives to share lessons and reflections from UNEG peer reviews. By using case studies in small-group settings, the session will also consider approaches to maximizing the utility and credibility of UNEG peer reviews, especially for smaller evaluation functions. Through discussion, participants will contribute to broadening the conversation beyond the stocktaking exercise on the utility of the peer review conducted in 2022.
Facilitators: Katinka Koke, Specialist, UNITAR, Katinka.koke@unitar.org and Alena Lappo, Evaluation Officer, IAEA, A.Lappo@iaea.org
Co-facilitators: Anand Sivasankara Kurup, Evaluation Officer, WHO, sivasankarakurupa@who.int, Marta Duda-Nyczak, Programme Management Officer, Department of Safety and Security, duda-nyczak@un.org and Andres Botero, Senior Evaluation Officer, IOM, abotero@iom.int
Summary: The session focuses on measuring evaluation use beyond the conventional metrics already in place (e.g., implementation status of recommendations). In doing so, it acknowledges and promotes other types of use, considering both the measurement challenges and the opportunities of the agencies commissioning the evaluations as well as their respective partners. The session aims first to unpack the construct of evaluation use by delineating it across different types of use (instrumental, conceptual, and symbolic). Next, it intends to classify the use of findings in a more robust manner by distinguishing among individual, group and organizational use. The co-facilitators contributing to this session will also interact with the audience to explore additional indicators that could be employed to measure different types of use.
Facilitators: Carlos Tarazona, Senior Evaluation Officer, FAO, Carlos.Tarazona@fao.org and Xin Xin Yang, Multi-country Evaluation Specialist, UNICEF, xxyang@unicef.org
Co-facilitator: Javier Guarnizo, Director, Evaluation Office, UNIDO, J.guarnizo@unido.org
Summary: The recent Summit of the Future underscored the critical importance of sustainable development and South-South Cooperation (SSC) in advancing global development goals. A key focus was the evaluation of SSC initiatives, essential for measuring their impact, scaling effective practices, and fostering accountability in international cooperation—aligning with the Summit's overarching objectives.
Currently, evaluation practices and related literature on SSC remain limited. Only a few UN agencies conduct thematic and project-specific evaluations or incorporate SSC into their Country Programme Evaluations and Reviews. Additionally, many UN agencies and developing countries engaged in SSC do not adhere to OECD-DAC criteria in their assessments. This highlights the need to explore diverse models of SSC evaluation and assess their suitability in varying contexts.
To address this, co-facilitators from Rome, Vienna, and Beijing will share diverse perspectives, presenting their experience, practices and methodologies for evaluating SSC. Participants, working in small groups, will share insights and discuss ways to improve evaluations of SSC interventions.
During the cocktail reception, participants are invited to showcase posters, reports or other materials presenting their novel tools, best practices and real-world applications, and to engage with fellow participants on challenges and solutions.
Members interested in bringing a poster or other materials to display should email Nicolas Dubois, UNU: dubois@unu.edu
Lead facilitator: Shivit Bakrania, Evaluation Specialist, UNDP IEO, shivit.bakrania@undp.org
Co-facilitators: Andrea Cook, Executive Director, UN SDG System Wide Evaluation Office, andrea.cook@un.org, and Deborah McWhinney, Senior Evaluation Advisor, UNFPA IEO, mcwhinney@unfpa.org
Summary: This session by the UNEG Evaluation Synthesis Working Group focuses on the evaluation synthesis guidance currently under development, covering key sections of the guidance and exploring how colleagues can effectively apply it in their work. A highlight of the session will be an in-depth exploration of mixed methods approaches to synthesis, addressing both the benefits and challenges, as well as examining current practices. Additionally, the session will delve into multi-agency governance and management arrangements for evaluation syntheses.
Lead facilitators: Sara Holst, Evaluation Officer, FAO, sara.holst@fao.org and Laura Olsen, Evaluation Specialist, OCHA, laura.olsen@un.org
Co-facilitator: Michele Tarsilla, Chief, Humanitarian Evaluation, UNICEF, mtarsilla@unicef.org
Summary: The purpose of the session is to solicit feedback from evaluation managers and experts on the UNEG "Guidance on the Integration of Humanitarian Principles in the Evaluation of Humanitarian Action" (published in 2024). This feedback will help refine the Guidance document and ensure that it is relevant, applicable, and user-friendly for humanitarian actors and all evaluation practitioners. The session will also show how humanitarian principles could be relevant to a wider range of evaluations than initially thought, beyond humanitarian evaluations alone. The World Café format will encourage diverse perspectives and collaborative exploration of key issues related to the use of the guidance.
Lead facilitators: Brook Boyer, Head of Planning, Performance and Results Section, UNITAR, brook.boyer@unitar.org and Mona Selim, Evaluation Officer, WFP, mona.selim@wfp.org
Summary: The session will take stock of progress towards professionalization, particularly in relation to: (i) the launch of the UNEG Certificate Course; and (ii) the Evaluation Community Page on the SDGLearn platform.
Participants will discuss feedback from UNEG professionalization initiatives and brainstorm on ways to further disseminate, improve and build on them. Participants will also brainstorm on how to link to and ensure complementarity with other evaluation capacity development initiatives outside UNEG, and opportunities to leverage guidance and initiatives of other working groups in UNEG.
Lead facilitators:
Summary: Building on the mapping of the Decentralized Evaluation (DE) function conducted by UNEG members over the last few years, the UNEG Decentralized Evaluation Working Group has been developing a paper on principles, operational standards, and a potential assessment framework for DE functions across UN agencies. This session provides a great opportunity for UNEG members to discuss the results of this work so far and to consider how principles and operational standards could be applied to each participant’s specific evaluation set-up.
Keynote addresses on the Future of Evaluation:
• Prof. Tshilidzi Marwala, UNU Rector, will provide an overview of the latest UN efforts in technology and AI before discussing the opportunities and challenges these offer for the organisation.
• Isabelle Mercier, UNEG Chair and Director, UNDP Independent Evaluation Office will explore how new technologies may support and strengthen the role of evaluation, and what this means for the evaluation community.
Facilitator: Elke Johanna de Buhr, Evaluation Specialist, UNICEF, edebuhr@unicef.org
Co-facilitators:
Summary: There is growing recognition that climate change is both a driver and a result of unsustainable practices in a wide spectrum of areas, such as food, energy, natural resource management, infrastructure, trade and industrial development, travel and transport, and consumption.
An increasing number of UNEG members have gained experience in, and developed guidance for, evaluating climate-related interventions. Meanwhile UNEG has been working on a new Norm and Standard on integrating environmental and social considerations into evaluations.
This session will discuss and reflect on the approaches, methodological challenges and lessons learned from assessing climate-related interventions, as well as on good practices in mainstreaming climate considerations in evaluations. The learning resulting from this exchange will serve as input to UNEG and other guidance currently being developed.
Facilitators:
Summary: This intergenerational session will explore how young and emerging evaluators (YEEs), in collaboration with senior evaluators, can drive the transformation of evaluation processes within the UN. By fostering cross-generational dialogue and learning, the session will demonstrate how the unique perspectives and experiences of YEEs can shape the future of global accountability and learning. This session offers an opportunity to delve into the findings of the UNEG YEE Working Group on the current status of YEE engagement across UNEG agencies, notably how YEEs can serve as catalysts for innovative, inclusive, and transformative evaluation practices within the UN. Through a world café exercise, participants will explore the role of mentorship, capacity-building, and institutional support in enabling YEEs to engage meaningfully, before concluding with actionable strategies.
Facilitators: Anand Sivasankara Kurup, Evaluation Officer, WHO, sivasankarakurupa@who.int and Myriam Van Parijs, Research and Evaluation Manager, UNICEF, mvanparijs@unicef.org
Co-facilitator: Riccardo Polastro, Chief Evaluation Officer, polastror@who.int
Summary: This session explores the use of UN-commissioned evaluations by external stakeholders and WHO-commissioned evaluations by internal stakeholders. It is intended to offer insights to enhance evidence-based decision-making. The first segment shares emerging findings from a UNEG Working Group 3 study on how external stakeholders, such as donors, governments, and aid agencies, utilize UN-commissioned evaluations. It examines the demand for evaluation, the operational realities of these stakeholders, and how evidence informs policies and programmes. Participants will discuss whether these findings resonate with their expectations. The second segment focuses on WHO's internal use of evaluations, highlighting key findings from a recent study. It examines how results are applied across WHO operations, identifying types of evaluation use (e.g., instrumental, conceptual) and factors influencing uptake, such as organizational culture and leadership support. The session aims to foster dialogue on enablers, barriers, and best practices, encouraging strategies to improve evaluation application and drive impact across UN agencies.
Facilitator: Andrea Cook, Executive Director, UN SDG System Wide Evaluation Office, andrea.cook@un.org
Co-facilitator: Tom Barton, Evaluation Officer, UN SDG System Wide Evaluation Office, thomas.barton@un.org
Summary: The session, proposed by the UN SDG System Wide Evaluation Office, will set out the key features of the UN SDG System Wide Evaluation Policy and provide an opportunity for reflection on this unique function which enables a system-wide perspective on the efforts by UN entities to support delivery of the SDGs. In small group settings, participants will explore the lessons and implications from the pilot initiative to harness AI tools to map evidence from 950 UN evaluations published since 2021 and generate rapid, user-friendly summaries of evidence to contribute to the 2024 Quadrennial Comprehensive Policy Review.
Moderated by Sabine Becker-Thierry, UNU
Panelists: Andrea Cook, UNSDG System Wide Evaluation Office, Anne-Claire Luzot, WFP, and Robert McCouch, UNICEF