Will AI Steal My Job? Concerns of an Emerging Evaluator

Blog by Joy Banda – Emerging Evaluator at SAMEA (2024)

A key highlight of the SAMEA 2024 Conference was the set of lessons learnt from experiences and case studies on the growing adoption and integration of Artificial Intelligence (AI) into Monitoring, Evaluation, Research, and Learning (MERL) practice.

This article reflects on some of these insights, particularly the need to keep humans central to AI use, to understand the context in which AI is applied, and to recognise the ethical considerations and limitations of AI in M&E. These lessons help debunk the notion that AI threatens our roles as evaluators; rather, AI should be viewed as a tool that enhances M&E.

My insights are drawn from the Tech-enabled MERL strand at the SAMEA 2024 Conference.

Understanding AI’s Role in M&E

Artificial Intelligence refers to technology that enables computers and machines to simulate human learning, comprehension, problem-solving, decision-making, creativity, and autonomy (IBM, 2024).

AI use in M&E is gaining prominence and brings numerous benefits. AI-powered mobile and online data collection platforms, chatbots, drones, and satellite imagery are improving the efficiency of key M&E processes such as data collection, cleaning, validation, analysis, and visualisation.

Natural Language Processing (NLP), a key component of AI, enables computers to understand, interpret, and manipulate human language (EvalCommunity, 2024), enhancing the handling of large datasets.
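To make this concrete, the sketch below illustrates the kind of routine text processing that NLP tools automate, here nothing more than counting recurring content words across a handful of hypothetical interview responses. The responses and stopword list are invented for illustration; real NLP platforms used in M&E go far beyond this, but the example shows why machines handle volume well while humans must still interpret what the recurring terms actually mean.

```python
import re
from collections import Counter

# Hypothetical interview excerpts from an education intervention evaluation.
responses = [
    "Learners enjoyed the reading programme and attendance improved.",
    "Teachers reported that the reading materials arrived late.",
    "Attendance improved once the programme provided transport.",
]

# A tiny illustrative stopword list; real tools ship much larger ones.
STOPWORDS = {"the", "and", "that", "once", "a", "an", "of", "to"}

def top_terms(texts, n=3):
    """Count content words across all texts: a crude proxy for recurring themes."""
    words = re.findall(r"[a-z]+", " ".join(texts).lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

print(top_terms(responses))
```

A frequency list like this can surface candidate themes in minutes across thousands of responses, but deciding whether "attendance" reflects success or a reporting artefact remains the evaluator's judgement.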

However, despite these efficiencies, AI raises concerns about job security among some professionals. For instance, tools like SurveyMonkey and Kobo Toolbox reduce the need for large field teams, prompting fears about the potential redundancy of human roles.

Will AI Truly Steal Our Jobs?

A recurring theme at the Tech-enabled MERL sessions was the insistence that humans must remain at the centre of AI development and use.

This was powerfully illustrated in a case study presented by Jaya Sojen of JET Education Services, which compared thematic analysis outputs from traditional Qualitative Data Analysis Software (Atlas.ti) with those from an AI tool (AiLYZE) in an education intervention evaluation.

The findings showed that although AiLYZE could summarise data efficiently, it lacked the depth and nuance of human-guided analysis in qualitative software. The AI tool also required frequent training and adjustment to produce meaningful results, underlining the need for human oversight.

As an Emerging Evaluator, I realise the importance of leveraging AI’s strengths while preserving the human touch essential to effective evaluation practice.

Virtual Evaluation: AI as an Enabler

Virtual Evaluation provides another example of leveraging AI without replacing human expertise.

Virtual evaluation adapts traditional evaluation approaches by conducting planning, data collection, analysis, and reporting remotely, without physical contact (Hazell, 2023).

During the SAMEA Conference, a workshop on Virtual Evaluation challenged participants to think about evaluating an inaccessible community remotely. Tools such as Miro (an online whiteboard for brainstorming and planning), Kobo Toolbox, and SurveyMonkey were highlighted as effective platforms.

However, the workshop also reinforced that human involvement is critical, especially during design and interpretation stages where AI lacks contextual awareness and cultural sensitivity.

Staying Relevant in the Age of AI

A vital takeaway for Emerging Evaluators is the importance of keeping pace with AI developments to remain relevant. Practical steps include:

  • Attending conferences, webinars, and workshops

  • Registering for AI courses

  • Engaging with AI-related literature

  • Developing knowledge products, such as reflection papers or blogs

These activities not only build capacity but also position evaluators to thrive alongside AI rather than being replaced by it.

For additional insights, the SAMEA/DPME Virtual Evaluation Guideline is a valuable resource for understanding key principles, ethical considerations, and practical applications.

Ethical Considerations and Limitations

The Tech-enabled MERL sessions also highlighted critical ethical considerations:

  • Privacy and data security

  • Informed consent

  • Safeguarding participants’ confidentiality

Moreover, virtual data collection can weaken rapport-building with participants and limit opportunities for rich, in-depth insights (SAMEA/DPME, 2024).

Contextual challenges, such as language barriers, technological access, and infrastructural limitations, must also be carefully managed.

Conclusion: AI as an Ally, Not a Threat

The examples and lessons from the SAMEA 2024 Tech-enabled MERL strand demonstrate that AI should be viewed as an ally, not a threat to M&E professionals.

While AI can automate repetitive tasks, human expertise remains indispensable for contextual analysis, ethical evaluation, and strategic decision-making.

Emerging Evaluators should focus on upskilling, staying informed, and actively engaging with AI developments to ensure they remain invaluable assets in the evolving M&E landscape.


References

  • EvalCommunity (2024). Artificial Intelligence (AI) and Evaluation.

  • Freer, G. (2024). Surfing the AI Tsunami: Using AI to Enhance, Not Replace, Participation in Evaluation. SAMEA Conference Presentation.

  • Hazell, E. (2023). SAMEA Virtual Evaluation Guideline and Lessons Learned from Virtual Evaluation.

  • IBM (2024). What is Artificial Intelligence (AI)?

  • SAMEA/DPME (2024). Undertaking Virtual Evaluations Guideline.

  • Sojen, J. (2024). Prospects and Constraints of Using AI Technologies for Monitoring and Evaluation in Africa. JET Education Services.