RESULTS BASED MONITORING AND EVALUATION: DESIGN AND EVALUATION
Keywords: Results-Based Monitoring and Evaluation, Logic Model, Results Chain, Program Evaluation, Capacity Development

Abstract
Results-Based Monitoring and Evaluation (RBM) is a critical process that provides evidence-based information to ensure the efficiency, effectiveness, and transparency of operations, emphasizing outcomes rather than activities or outputs. This article aimed to present the conceptual foundations and principles of RBM, focusing on the design of monitoring and evaluation systems through frameworks such as the Logic Model and the Results Chain. It further discussed comprehensive evaluation dimensions, including appropriateness, effectiveness, efficiency, impact, and sustainability, as well as key success factors: ownership, management, maintenance, and credibility. Best practices were also highlighted, such as active stakeholder participation and continuous reflection on program progress. The synthesis revealed that RBM functions as a powerful tool for improving program management and capacity development at the individual, organizational, and societal levels. Its ultimate purpose was to ensure the sustained use of results and to contribute to long-term sustainable development.
Copyright (c) 2026 Journal of MCU Social Science Review

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
To comply with copyright law, all authors must sign a copyright transfer agreement assigning copyright of the final revised article to the Journal. Authors must also declare that the article will be published only in the Journal of MCU Social Science Review. If the article contains figures, tables, or content that has been published previously, the authors must obtain written permission from the original authors and present this evidence to the editor before the article is printed. Articles that do not meet these criteria will be removed from the Journal without exception.

