The AHRC recommends using a logic model for engagement and evaluation planning, and uses the Kirkpatrick Model to describe four levels of potential impact:
- Reaction – the initial response to participation
- Learning – changes in people's understanding, or raised awareness of an issue
- Behaviour – whether people subsequently modify what they do
- Results – the long-term impacts of the project on measurable outcomes
Reaction
You may want to set objectives for things such as perceived enjoyment and usefulness. You can assess reactions in three main ways: by getting people to write down their responses (usually via a questionnaire); by talking to them one-to-one or in focus groups; or by observing them.
As well as asking whether people enjoyed the project, found it useful or learnt something, you can find out what they particularly did or didn't enjoy, what was most and least useful, and what they would change and why. You can also gather information on the environment, e.g. the comfort of the venue and the quality of refreshments.
The easiest time to get initial reactions is while people are taking part in the project. It may also be worthwhile to seek a more considered response a short time after the interaction itself, once people have had time to reflect.
Funders appreciate evaluation strategies that provide feedback on lessons learned, good practice, and successful and unsuccessful approaches. Understanding why something went wrong helps improve things for the future – a 'lessons learned' section enables better practices to evolve.
Learning
You can find out quite easily from the reaction data what, if anything, people think they have learnt from your programme/project: ask them what they feel they have gained, and whether they now have a more complete view or understanding of the issue.
Behaviour
Tracking and measuring changes in behaviour is resource-intensive: you'll need to establish baselines and maintain some form of ongoing contact to monitor change. You might rely on participants' self-evaluation, but you may want independent verification. Either way, you will need resources and expertise capable of delivering this sort of evidence.
Results – long-term impact
Tracking people with whom you have engaged over an extended period is the most straightforward way of assessing long-term impact. However, if you only track the people you engaged with, there is no 'control group' to let you ascribe changes to your project rather than to other influences. The resource implications are considerable – this is only practical for large-scale projects with budgets to match.
Reporting results
Consider carefully what the evidence you have collected tells you. Negative outcomes should not be ignored – they can provide 'lessons learned' for future programmes/projects. Both positive and negative findings from an evaluation should be fed back into the decision-making process for future work. An example template for reporting back to funders is given at Annex 3.
In addition to reporting to your funders, you may also want to share your findings with a wider public – perhaps by putting highlights from the evaluation on your website, or publishing case studies of exemplary work conducted during the programme/project.
Once the evaluation is complete, it is also worth reflecting on the process itself. There may be things you have learnt and things you would like to change for future evaluation cycles – perhaps your aims were too vague and you want to make them more measurable next time, or a monitoring tool worked particularly well and you now have a questionnaire template to adapt for future use.
Source:
http://www.ahrc.ac.uk/documents/guides/understanding-your-project-a-guide-to-self-evaluation/