I attended the EES conference in Maastricht, in the Netherlands, at the end of September 2016, together with quite a few other SAMEA members. The EES conference was exceptionally well organised, and the programme of plenary and paper presentations was of outstanding quality. Not only does the EES conference afford participants the opportunity to engage with a wide variety of European evaluation colleagues, but a number of interested North American and South American colleagues also attended and presented their inputs.
I noticed that the discourse at EES has shifted significantly since 2008, when I last attended one of their conferences. I share some of my personal observations below.
- In 2008 there was quite a lot of talk about how to promote evidence-informed decision making. One could hardly attend a session without someone mentioning the phrase "Evidence Informed Decision Making". By 2016 the discourse had moved on, and quite a few participants suggested that we need to find new ways of informing decisions, and of informing not only decision makers but the public. Some expressed the sentiment that we seem to be in a post-evidence world, where things are moving backwards. The misinformation that led to the Brexit decision, and public opinion on migration in Europe, were cited as examples.
- In 2008 there was quite a lot of talk about impact evaluation and so-called "rigorous methodologies" for conducting it. The debate seems to have been taken down a notch or two, with a somewhat tacit acceptance of different methodologies. As Michael Patton remarked in one of the plenary sessions: "Rigor does not reside in methods, but in rigorous thinking."
- My impression from 2008 was that there were quite a few introductory presentations about systems thinking and complexity in evaluation. In 2016 these concepts came up in almost every session I attended. I think this is an area we will also start seeing more of in the South African context.
- Where aid effectiveness was a bigger focus at earlier evaluation conferences, this conference placed a significant focus on ensuring that sustainable results are achieved. A concern was expressed that development projects focus on measurable results at the end of a funding cycle at the expense of more sustainable, longer-term results. An argument for ensuring that the process of implementation supports long-term sustainability was put forward by a few evaluators, including Ian C Davies and Zenda Ofir.
New topics of discussion at this conference were the Sustainable Development Goals, the role of Big Data in evaluation, and the work of the various EvalAgenda 2020 groups. Discussions on the professionalisation of evaluation, and on methods such as Realist Evaluation, also continued at this conference.
My favourite highlight of the conference was when, in the opening plenary, Elliot Stern, Massimo Florio and Inge De Wolf were asked by Frans Leeuw to comment on how they would approach an evaluation requested by a politician interested in learning what works for resettling migrants in Europe. It really is not possible to capture the very rich and animated discussion in a short summary, but I was quite intrigued by the different approaches suggested: one panellist indicated that she would start with basic facts and figures before designing the evaluation, another indicated he would have coffee with an academic who has studied the issues around migration before deciding, and the third explained how he would take an action-evaluation approach over a six-month period to start informing the longer-term programme of evaluation. Florence Etta, a previous AfrEA president, interjected with the comment that in six months' time thousands of people might be dead while the evaluators are deciding on methodologies. The take-away for me was that evaluators do affect real issues, and real people's lives.
I appreciated the opportunity to connect with colleagues from all over the world, and I must acknowledge the financial support from EvalPartners that allowed me to attend the conference. I always feel like a little bit of a nerd for finding discussions about data, methodology and evaluation so exciting, but I've been told that we evaluators are the nerds of the development profession!
Something to contribute?
Please contact our secretariat if you think you might have something we could publish here, for example:
- Using the B Impact Assessment to help companies "Measure What Matters"
- The role of evaluators in evaluation utilisation
- Big Data for evaluation