Traditional to Tech-Enabled M&E: My Takeaways from SAMEA’s 9th Biennial Conference 2024

Blog by Vongai Chibvongodze – Emerging Evaluator at SAMEA (2024)

Introduction

As a conclusion to my journey as an Emerging Evaluator in Cohort 2 of SAMEA’s Young Emerging Evaluators (YEE) programme, I had the opportunity to attend SAMEA’s 9th Biennial Conference. Motivated by my interest in improving data processes within social development work, I focused particularly on the Tech-Enabled Monitoring, Evaluation, Research, and Learning (MERL) strand throughout the conference week.

It was a valuable experience, highlighting both the rapid technological advancements in our field over the past few years and the complexities involved in integrating these tools for M&E purposes. This strand gave me insight into how digital tools and Artificial Intelligence (AI) are reshaping the M&E landscape—bringing both immense potential and notable challenges.

Key Takeaways

One of the most prominent themes was the importance of continuous human involvement and oversight in AI applications for M&E.

As powerful as AI can be in processing large amounts of data quickly, human evaluators remain essential for understanding nuanced contexts and verifying causal relationships. A key lesson shared during a session on integrating AI in evaluations was that AI should support—not replace—human-driven evaluations.

The growing role of AI in M&E and broader social development projects also brings ethical considerations, particularly around data privacy. A presentation on the Love Alliance project, addressing sexual reproductive health, highlighted a critical point: generative AI tools can integrate user data into their models during training. This demands heightened caution and adapted privacy standards when handling sensitive information.

Quality assurance emerged as a non-negotiable in AI-driven M&E. A risk commonly cited was “garbage in, garbage out”—without strict quality checks during data collection, processing, and analysis, AI outputs can be fundamentally flawed. This reminder reinforced that technology’s effectiveness ultimately depends on the quality of human input.

Virtual evaluations were another major focus, particularly following the widespread shift to remote work during the pandemic. Techniques ranging from WhatsApp surveys to Zoom interviews have proven useful, especially under budget constraints. However, challenges remain, including:

  • Lower participation levels

  • Internet connectivity issues

  • Varying levels of digital literacy among practitioners and participants

These hurdles underscore the need for adaptable approaches that prioritise inclusivity and accessibility.

A session on leveraging AI for data analysis showed how AI-driven tools can simplify complex causal maps and enhance data visualisations. For example, in the Love Alliance Mid-Term Review, AI tools helped quantify and map causal relationships, improving evaluators' ability to communicate complex findings clearly. Even so, human interpretation remained vital.

Various digital data management platforms—such as Kinaki—were discussed, highlighting their role in managing large datasets, reducing data loss risks, and facilitating real-time decision-making. These tools are becoming increasingly important, particularly within government institutions and service delivery programmes.

Relevance to Emerging Evaluators

For young and emerging evaluators like me, Tech-Enabled MERL presents a unique opportunity to harness technology for more impactful evaluations.

The M&E field in South Africa is still developing its digital infrastructure, creating space for innovation. Sessions on co-production approaches and local evaluation practices stressed the importance of collaborative skills.

As Gen Z and Millennial evaluators who are familiar with digital tools, we are well positioned to bridge the digital divide by promoting technology solutions that are accessible, inclusive, and grounded in the realities of the communities we serve.

Personal Reflection on the Conference

Reflecting on the conference, I am inspired to deepen my understanding of emerging digital tools and AI functionalities. Workshops such as Mastering Virtual Evaluation and sessions on AI-based approaches introduced me to practical platforms and techniques that I aim to integrate into future projects.

However, I am now also more aware of the ethical responsibilities that come with using these technologies.
As young evaluators, we must advocate for practices that prioritise transparency, accountability, and inclusivity.

We must continuously ask:

  • Who benefits from this technology?

  • How do we ensure it is a tool for all, not just a privileged few?

My call to action to fellow Emerging Evaluators is this: embrace technology critically and ethically, and use it to advance social good.