Africa Evaluation Blog: Why it is time to make evaluation more relevant for Africa (2) – Development of Africa-driven and -rooted Evaluation Systems and Practices
Why Africa-rooted Evaluation? In my first blog on this platform I summarised the reasons why the Western evaluation paradigm, which has significantly influenced evaluation practices in Africa in the past, is still doing so. I argued that this paradigm should perhaps be contextualised more effectively on the African continent, and adapted better to conditions that are in most respects quite different from the Western contexts within which evaluation developed into its current manifestation as a transdisciplinary global profession.
Before one can consider what changes are needed, though, it is important to take note of the direction that the development of evaluation in Africa has taken since the late 1970s. Chapter 2 in Cloete, Rabie & De Coning 2014 addresses this in more detail.
Start of an Africa-driven Evaluation Approach
The development of a systematic Africa-driven evaluation approach has been summarised and assessed well by Spring & Patel. A network of evaluation practitioners was created by UNICEF in Nairobi, Kenya in 1977 to enhance capacity-building for UNICEF and other evaluations in East Africa. This initiative attempted to create indigenous African evaluation capacity that could start taking responsibility for initiating and driving systematic evaluations of development programmes in Africa. In March 1987, an Organisation for Economic Cooperation and Development (OECD) Development Assistance Committee (DAC) seminar brought together donors and beneficiaries of development programmes to discuss objectives, means and experiences in evaluation. The outcome was an awareness of the need to strengthen the evaluation capacities of developing countries, including those in Africa. The OECD published the summary of the discussions in 1988 in its report titled Evaluation in Developing Countries: A Step towards Dialogue. This initiative called for a series of seminars to be held at regional level to intensify dialogue, discuss problems unique to each region, and recommend concrete and specific actions with a view to strengthening the evaluation capacities of developing countries.
Other prominent facilitators of evaluation capacity-building on the African continent during these early years were the African Development Bank and the World Bank Operations Evaluation Departments. Two initial conferences hosted respectively by these two multilaterals in 1998 and 2000 raised further awareness around evaluation capacity development in Africa.
The first seminar on evaluation in Africa, which was presented jointly by the African Development Bank (ADB) and Development Assistance Committee (DAC), was held in Abidjan, Cote d’Ivoire, 2-4 May 1990. Its objectives included the clarification of evaluation needs as perceived by African countries themselves and the exploration of ways and means of strengthening self-evaluation capacities.
A follow-up seminar was held in Abidjan in 1998, with the following objectives:
• To provide an overview of the status of evaluation capacity in Africa in the context of public sector reform and public expenditure management;
• To share lessons of experience about evaluation capacity development concepts, constraints, and approaches in Africa;
• To identify strategies and resources for building M&E supply and demand in African countries; and
• To create country networks for follow-on work.
The discussions at the 1998 seminar underlined important directions in African administration and aid agencies, specifically identifying the global trend toward more accountable, responsive and efficient government in African states.
The two approaches of sensitising policy makers (the Abidjan approach, based on the World Bank framework), and of spreading general awareness and building evaluation capacity (the Nairobi/UNICEF approach), were synergistic in creating a home-grown Africa-driven demand for evaluation.
The Lagos Plan of Action, adopted at the first Extraordinary Economic Summit of the Organisation of African Unity (OAU, precursor to the current African Union) in Lagos, Nigeria in April 1980, strengthened this initiative. The Lagos Plan of Action was an African reaction to Western-driven structural adjustment programmes imposed on African countries from the early eighties onward. The main argument was that Africa and the different regions in Africa should develop their own policy capacities in parallel to the African Capacity Building Initiative of the World Bank and the UNDP. This development emphasised regional economic independence and highlighted the need for improvement in regional policy capacity. CODESRIA, an alternative policy capacity development agency, had already been established in 1973 as an independent pan-African research organisation focusing primarily on promoting social science research in Africa.
These developments emphasised the need for institutionalising more resources in Africa for African researchers to initiate and do more independent policy evaluation and research on African issues from an Africa-driven perspective.
Emergence of Africa-rooted Evaluation
These internally driven capacity-building initiatives laid the foundations for later pleas for more effective Africa-rooted self-assessment and peer-review approaches. Africa-rootedness can be defined as going beyond the development of internal African capacity to initiate and drive systematic evaluations of African development programmes: it means, one step further, basing these evaluations on more appropriate evaluation values, practices and paradigms that originated on the African continent, rather than on those imported or imposed from elsewhere. The main assumption behind Africa-rooted evaluation approaches is that they could be more appropriate because they are home-grown and not foreign to African values, practices and institutions.
In the late 1990s, increasing concern started to emerge among African participants in the various internationally organised evaluation structures and processes, both about the nature and impacts of the structural adjustment programmes of the World Bank and the IMF, and about the Western-dominated evaluation paradigms underlying the evaluations undertaken in Africa. In a bibliographic review of evaluations in Africa, Spring and Patel found that the majority of these evaluations had been requested by donors and international agencies. The majority of first authors were still not African. Of the original 133 articles that were reviewed, for example, three-quarters had a first author with a Western name, fifteen percent were clearly African, and in twelve percent of the cases it was not clear. African author participation was acknowledged as second or third author in twelve percent of the total. There is some room for confusion, as many of the authors and reviewers are African but have names of European or Asian origin. While the authors are mostly non-African, the reviewers are nearly all African, by the conscious design of the authors.
The African Evaluation Association (AfrEA) came into being in 1999. AfrEA was established at a ground-breaking inaugural pan-African conference of evaluators held in Nairobi, Kenya, with 300 participants from 26 African countries. Patel was instrumental in this process, supported by the Kenyan and other African country evaluation societies, and financially supported by UNICEF. The current AfrEA website explains its origins as follows:
“Until 1999 there were few opportunities to network and share evaluation experiences in Africa.
• Evaluators worked in isolation. They were seldom trained in evaluation approaches, methodology and standards, and tended to be technical specialists or management consultants recruited to serve as evaluation consultants.
• Although a few national evaluation networks existed, they were isolated and often unable to mobilise the capacities and resources to facilitate effective networking and sharing of knowledge within and between countries.
• Evaluation capacity building efforts were sporadic and mostly driven by international development organisations.
• There were few attempts to nurture advanced level evaluation expertise, to promote training placed in African contexts and evaluation approaches, or to highlight African evaluation expertise on international platforms.
• Demand for evaluation was low and the use of evaluation for learning and decision-making limited and dominated by accountability to international aid agencies.”
AfrEA’s raison d’être is to address some of these challenges. It operates as an umbrella association for national evaluation associations and networks, and as a resource for evaluators in countries without such networks. Its main strategic objectives are:
• “to promote and strengthen evaluation for real and sustained development in Africa;
• to promote Africa rooted and Africa led evaluation;
• to encourage the development and documentation of high quality evaluation practice and theory;
• to establish and support national evaluation associations and special evaluation interest groups;
• to facilitate capacity building, networking and information sharing on evaluation among evaluators, policymakers, researchers and development specialists; and
• to share African evaluation perspectives and expertise at relevant forums.”
The establishment of AfrEA in 1999 constituted an important capacity-building and networking opportunity for everyone interested in systematic M&E practices on the African continent from an indigenous African perspective. Regular AfrEA conferences have been organised since then, with the most important one for the purposes of this contribution probably the 2007 Conference in Niamey, Niger, where a day-long special session with support from NORAD led to a formal statement encouraging Africa to ‘Make Evaluation our Own’ (AfrEA Special Stream Statement, 2007), later transformed by AfrEA into a ‘Made in Africa’ strategy for evaluation. The stream was designed to bring African and other international experiences in evaluation and development to bear in stimulating the debate on an African approach to M&E. The following key issues to prioritise were identified; they provide crucial insights into the strategic intent of developing an Africa-rooted evaluation approach:
• “Currently much of the evaluation practice in Africa is based on external values and contexts, is donor driven and the accountability mechanisms tend to be directed towards recipients of aid rather than both recipients and the providers of aid;
• For evaluation to have a greater contribution to development in Africa it needs to address challenges including those related to country ownership; the macro-micro disconnect; attribution; ethics and values; and power-relations;
• A variety of methods and approaches are available and valuable to contributing to frame our questions and methods of collecting evidence. However, we first need to re-examine our own preconceived assumptions; underpinning values, paradigms (e.g. transformative v/s pragmatic); what is acknowledged as being evidence; and by whom, before we can select any particular methodology/approach”
AfrEA therefore started out with the intention not only to consolidate an Africa-driven evaluation culture but also to transform that culture into an explicitly Africa-rooted culture of evaluation. Since 2012, however, the organisation seems to have been plagued by internal resource and management problems that complicate the transformation of its deliberations and resolutions into practical, systematic internal and external capacity-building strategies and programmes from an African perspective. This malaise is illustrated by the fact that, after 17 years in existence, the current AfrEA Constitution was still not publicly available on its website at the time of writing this post.
Current Status of Africa-rooted Evaluation in Africa
The current African Peer Review Mechanism (APRM) of the AU is the best current example of an Africa-driven and -rooted national political governance evaluation system, and illustrates one of the more concrete results of these developments. It is supposed to measure the extent of democratisation in African states, not on the basis of pure Western models of multi-party democracy, but with greater emphasis on the values of accountability, representivity and responsiveness of African governments to their respective civil societies. Unfortunately this institutionalised political evaluation system has not functioned well since its inception in 2003, largely as a result of a lack of sufficient motivation among African heads of state to commit their governments to this self-evaluation process. Steven Gruzd’s realistic assessment of the current problems plaguing the APRM illustrates many of the systemic obstructions currently complicating progress with Africa-driven and -rooted evaluation processes on the continent.
The public and international profiles of African evaluation practices can and should be improved. According to the 2012 Bellagio report, “African Thought Leaders’ Forum on Evaluation and Development – Expanding Thought Leadership in Africa”, African evaluators are still not as visible on international evaluation platforms as they can and should be. Evaluations in Africa are, furthermore, unfortunately still largely commissioned by non-African stakeholders, mostly international donor or development agencies that run or fund development programmes on the continent.
This remains a sensitive issue for many African evaluators, because perceptions have emerged both in Africa and outside the continent that African evaluators have to improve their international competitiveness relative to their northern-hemisphere counterparts: the profession in Africa is relatively new and there is much room for improvement. More African evaluation case studies need to be written up and disseminated globally. Available resources for African evaluators to travel to international conferences and other international events should be used better. There is also a scarcity of qualified and experienced professional African evaluation scholars.
This situation is changing slowly, however, and more opportunities are being created for African evaluators to become increasingly internationally exposed and competitive. This is now facilitated, inter alia, by the publication of AfrEA’s new mouthpiece, the African Evaluation Journal. Another good example in this regard is the KM4DEV project in Zimbabwe, which aims to improve development initiatives through better knowledge management.
The realistic bottom line is that the only way in which an internationally competitive African evaluation profession customised to address the unique African context can be developed, is if African evaluators themselves take firm control of what is needed and pursue those strategic goals in a dedicated way by prioritising adequate resources for this purpose and then managing these strategic projects in an efficient and effective manner.
This also implies changing mind-sets, behaviour patterns and current reactive processes and projects into pre-emptive strategies to change the future of evaluation in Africa in a direction that will benefit African countries more than the current situation does.
If this is not achieved, evaluation in Africa will not be able to take its deserved place on the global evaluation stage, and we will continue to debate the need for change and lament why it does not happen.
My next posting here will contain a brief assessment of which aspects of the current Western evaluation paradigm need to be more effectively “Africanised” in order to create more effective Africa-rooted evaluation approaches and methodologies. This will be followed by another post containing some practical guidelines for a road map for how to get there without undue delay.