To round off this July's posts on evaluation utilisation, here is a summary of the key points raised:
- There are a number of practical things that evaluators can do to enhance the use of their work. From a participation perspective, evaluators can enhance use by ensuring that all relevant stakeholders are included in the evaluation process. From a product perspective, evaluators can present their findings more clearly and more supportively, acting as the "critical friend". Many of these elements fall within the evaluator's sphere of influence. There are, however, challenges that fall outside it, such as an organisation's 'readiness' for evaluation in terms of its evaluation culture and its buy-in to the evaluation process.
In our experience related to buy-in, we recognise that our sphere of influence over our clients' or commissioners' evaluation culture is relatively small. What we have found useful is developing a strong relationship with a champion of evaluation on the client's side. This 'champion' is someone who buys in to the evaluation and its process, and who is able to influence the people and processes in their organisation or team. Active engagement with steering committees is also important: meetings should be interactive, and input should be actively sought.
From an evaluation products perspective, we have found that the key competencies of evaluators in producing high-utility products are:
- Communication skills (written reports, data visualisation and simple language);
- understanding of the political context in which the interventions are taking place;
- the ability to decipher the policy implications of findings and recommendations;
- the ability to craft usable recommendations, taking into account limitations for the implementers;
- the ability to facilitate workshops and other forms of communication with a range of stakeholders to help them understand the evaluation process;
- the ability to produce a range of communication products to suit the different stakeholders;
- the ability to apply appropriate evaluation methods and use these to arrive at logical findings that flow from the research process; and
- formal academic training in evaluation.
- Related to use, it is important to encourage not only instrumental use, but also process use. This means remaining active throughout the evaluation so that we do not miss out on crucial learning moments related to the evaluation process itself.
- Finally, in terms of checking the utility of our own work, there are a number of helpful tools and meta-evaluation checklists. These include: Michael Quinn Patton's Utilisation-Focused Evaluation Checklist; Stufflebeam's Programme Meta-Evaluation Checklist, which includes a section on utility; and the Joint Committee on Standards for Educational Evaluation (JCSEE)'s Programme Evaluation Standards, which also include utility standards.
Something to contribute?
Please contact our secretariat if you think you might have something we could publish here.