The University of Southampton
Public Engagement with Research

Planning your evaluation

This toolkit will guide you through the planning process for evaluating your public engagement project or activity. Evaluation planning should be a fundamental part of your overall project planning, as each should inform the other.

Remember, there is no one evaluation method that suits every public engagement activity, and you may find it useful to use a combination to collect and analyse data for your evaluation.

Whatever you decide, you should take into account the following when planning your evaluation.

You should develop your evaluation plan alongside your project/activity plan, as the two should be linked.

Your evaluation plan should summarise what, why and how you are going to do things. It doesn't need to be a long document and should include:

  • Aim - what do you want to achieve with the evaluation? (The big picture)
  • Objectives - what do you need to do to achieve your aim? Link these to your project objectives and ensure they are Specific, Measurable, Achievable, Relevant and Time-defined (SMART)
  • Stakeholders/audience - identify who the evaluation is for
  • Evaluation Questions - what do you (and/or other stakeholders) want to know? Think in terms of measuring outputs (results), outcomes (overall benefits or changes) and impact (longer term effects, influences or benefits)
  • Methodology - what strategy will you choose?
  • Data collection - what techniques will you use to collect your evidence? (see more below)
  • Data analysis - how will you analyse your data? (see more below)
  • Reporting - who will be reading your report?

Getting started

The first step is to be clear about the aims and objectives of your public engagement project or activity. What are you trying to achieve? For example, are you looking for participants to leave knowing more than when they arrived? If so, how much more?

Download the Logic Model template (below) and record your aims and objectives in it - we've included a completed example to help get you started. You may find the ACE Generic Learning Outcomes (linked below) useful in this context.
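
To make this concrete, here is a minimal sketch of a logic model expressed as a plain Python data structure. The column names follow the common inputs/activities/outputs/outcomes/impact layout, and every entry is an invented example for a hypothetical science-fair stall - it is an illustration, not part of the template itself.

```python
# A logic model as a plain data structure: each column of the template
# becomes a list of short statements. Every entry below is an invented
# example for a hypothetical science-fair stall, not a real project.
logic_model = {
    "aim": "Increase public understanding of our research area",
    "objectives": [  # keep these SMART
        "80% of visitors can name one key finding before leaving",
    ],
    "inputs": ["2 researchers", "stall materials", "half a day of staff time"],
    "activities": ["hands-on demonstration", "short Q&A with visitors"],
    "outputs": ["number of visitors engaged", "response cards completed"],
    "outcomes": ["self-reported increase in understanding"],
    "impact": ["longer-term interest in the subject"],
}

# Quick sanity check while planning: every objective should have some
# planned output or outcome that could evidence it.
for objective in logic_model["objectives"]:
    print("Objective:", objective)
print("Planned evidence:", logic_model["outputs"] + logic_model["outcomes"])
```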

Now you can move on to Step 2 and explore the most effective ways, for your context, to measure whether you have met your aims and objectives.

Methods and Measures

1. Explore and plan

Think about, or review, potential target audiences and engagement approaches that may fit with your objectives. Use the boxes below to explore ideas.

[Graphic: data collection and analysis]

How will you measure this?

Data collection can often be woven directly into your engagement activities; see the PE techniques/approaches pages for more.

2. How to collect your data

Before you get into the detail, you need to ask yourself the following questions:

1. Is your activity seeking to understand a problem or issue?

2. Is your activity seeking to prove a hypothesis?

3. Are you seeking to measure the learning generated by your activity?

Your answers will help you to decide whether you need to use mostly qualitative or quantitative data: you should not rely only on one of these, but there will be a method that suits your activity best. Have a look at the Evaluation Styles interactive PowerPoint below to get an idea of the methods on offer.
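
As a small illustration of handling both data types together, the sketch below (plain Python, with entirely invented responses) separates the quantitative part of a questionnaire (a 1-5 rating) from the qualitative part (a free-text comment) so that each can be analysed appropriately.

```python
from statistics import mean

# Invented questionnaire responses: each has a quantitative rating
# (1 = poor, 5 = excellent) and a qualitative free-text comment.
responses = [
    {"rating": 4, "comment": "Loved the hands-on demo."},
    {"rating": 5, "comment": "I finally understand what the research is about."},
    {"rating": 3, "comment": "Interesting, but the room was too noisy."},
]

# Quantitative summary: a simple average of the ratings.
ratings = [r["rating"] for r in responses]
print(f"Mean rating: {mean(ratings):.1f} (n={len(ratings)})")

# Qualitative data is kept as text for later thematic coding.
comments = [r["comment"] for r in responses]
print(f"{len(comments)} comments collected for coding")
```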

Fill in more of your Logic Model once you have thought things through.

Whatever methods/measures you choose, it's essential to keep your project or activity aims in mind throughout the process.

Consider: suitability for audience, the questions you want answered, the time needed (for participants and evaluators), venue and location, and ethical considerations (respect, honesty, ownership, integrity, confidentiality).

There are many different (and creative!) ways to collect data, each with its own strengths and weaknesses: e.g. response cards, questionnaires, interviews, focus groups, graffiti walls, drawings, observation, video, images/photos...

Analysing and Reporting

1. Analysing your data

This is often an iterative process, moving back and forth across three steps:

1. Noticing and collecting (downloading/typing up/labelling/debriefing)

2. Sorting and thinking (listening/reading/processing quantitative data)

3. Critical analysis and interpretation (comparing, contrasting, exploring themes and patterns/describing and illustrating findings, e.g. tables, charts, text, quotes)

It is important to:

  • Allow plenty of time
  • Refer back to original aim, objectives and evaluation questions
  • Look for patterns and group data (i.e. coding; see the sketch after this list)
  • Find representative quotes
  • Look for contradictory data
  • Be critical of your interpretation of data
  • Be reflective - what worked well? What didn't work well? What would you do differently?
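
The sketch below illustrates the coding step with a naive keyword match over invented comments. The theme keywords are hypothetical, and real qualitative coding relies on careful reading rather than keyword matching, but the grouping-and-counting idea is the same.

```python
from collections import defaultdict

# Invented free-text responses and hypothetical theme keywords.
# Real qualitative coding is done by careful reading; this keyword
# match is only a first pass to illustrate grouping data into codes.
comments = [
    "The demo was fun and I learnt a lot",
    "Too crowded, hard to hear the speaker",
    "I learnt how the experiment works - brilliant",
    "Fun for the kids but nothing new for me",  # contradictory data
]
codes = {
    "learning": ["learnt", "understand"],
    "enjoyment": ["fun", "brilliant"],
    "venue": ["crowded", "hear", "noisy"],
}

coded = defaultdict(list)
for comment in comments:
    for code, keywords in codes.items():
        if any(k in comment.lower() for k in keywords):
            coded[code].append(comment)

# Report each code with its frequency and a representative quote.
for code, quotes in sorted(coded.items(), key=lambda kv: -len(kv[1])):
    print(f"{code}: {len(quotes)} comment(s); e.g. \"{quotes[0]}\"")
```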

2. Reporting on your findings

When writing up your evaluation findings, keep in mind the audience(s) you are writing for, and tailor your report(s) accordingly. A well-considered evaluation plan will have enabled you to identify your reporting audience(s).

Use the drop-down sections below to explore the requirements and guidance from different funders.

Even if you are not reporting to a particular funder, you may be able to draw on the guidance for your own reporting.

Research Councils UK (RCUK)

RCUK contributed to the 'Evaluation - Practical Guidelines' handbook, which outlines:

  • Building an evaluation strategy
  • Gathering data
  • Data handling
  • Reporting

In addition, there is further advice on evaluation questionnaires and further reading among the document's annexes.

Science and Technology Facilities Council (STFC)

The STFC have a strategy and evaluation guidance webpage, with their own downloadable Public Engagement Evaluation Framework.

Economic and Social Research Council (ESRC)

For any public engagement activity to be successful, it is important to plan all elements. This guide takes you through the different steps you should consider to make your activity successful.

Public engagement activities can take place at any stage of a research programme, for example:

  • Project start-up: involving stakeholders at this stage of the process can help shape the research agenda. This ensures that the research tackles pertinent issues.
  • Preliminary findings: sharing preliminary findings with key groups not only increases awareness but can tease out issues, helping shape later stages of research or analysis.
  • Project end: sharing and testing research findings both with stakeholders and with other groups who might be interested in the research, including the general public. This raises awareness of the research and of social science; it can potentially enable the outputs to be used more widely and have greater impact.
  • Other times: public engagement activities don’t have to be linked to specific projects. For many groups, meeting and working with a researcher is a valued experience and provides a unique opportunity to understand research findings and processes. It may also provide new, unexpected research opportunities.

Public engagement projects can also provide insight and direction for future research.

Source: http://www.esrc.ac.uk/public-engagement/public-engagement-guidance/guide-to-public-engagement/

Arts and Humanities Research Council (AHRC)

The AHRC recommend using a logic model for engagement and evaluation planning, and they use the Kirkpatrick Model for levels of potential impact:

  • Reaction - the initial response to participation
  • Learning - changes in people's understanding, or raising their awareness of an issue
  • Behaviour - whether people subsequently modify what they do
  • Results - the long-term impacts of the project on measurable outcomes

Reaction

You may want to set objectives regarding things like perceived levels of enjoyment and usefulness. You can assess reactions in three main ways: by getting people to write down their response (usually by questionnaire); by talking to them one-to-one or in focus groups; by observing them.

If you want to know whether people enjoyed the project/found it useful/learnt something, you can also find out what they particularly did or didn’t enjoy, what was most and least useful and what they would change and why. You can also get information on the environment; e.g. comfort of the venue, quality of refreshments.

The easiest time to get initial reactions is when people are taking part in the project. It may also be worthwhile to get a more considered response a short time after the actual interaction when people have had time to reflect.

Funders appreciate evaluation strategies that provide feedback on lessons learned, good practice, successful and unsuccessful approaches. If you understand why something went wrong, it can help improve things for the future – a ‘lessons learned’ section will enable better practices to evolve.

Learning

You can find out quite easily what, if anything, people think that they have learnt from your programme/project within the reaction data: you can ask them to tell you what they think they have gained, and whether they have a more complete view or understanding of the issue.
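
One simple way to quantify self-reported learning is to ask the same 'rate your understanding' question before and after the activity. The sketch below uses invented paired scores on a 1-5 scale to compute the mean gain; it is an illustrative approach, not one prescribed by the AHRC guidance.

```python
from statistics import mean

# Invented paired scores: participants rate their understanding of the
# topic from 1 (none) to 5 (very good), before and after the activity.
pre = [2, 3, 1, 2, 4]
post = [4, 4, 3, 3, 4]

gains = [after - before for before, after in zip(pre, post)]
print(f"Mean self-reported gain: {mean(gains):+.1f} points (n={len(gains)})")
print(f"Participants reporting an increase: {sum(g > 0 for g in gains)}/{len(gains)}")
```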

Behaviour

Tracking and measuring changes in behaviour is resource-intensive: you’ll need to know what the baselines were and will need some sort of ongoing contact to monitor change. You might rely on self-evaluation, but you may want independent verification. Either way, you will need resources and expertise capable of delivering this sort of evidence.

Results – long-term impact

Tracking people with whom you have engaged over an extended period is the most straightforward way of assessing long-term impact. However, if you only track the people you engaged with, there is no 'control group' to allow you to ascribe changes to your project rather than to other influences. The resource implications for this are considerable - it is only practical for large-scale projects with budgets to match.
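
To illustrate why a control group matters, the sketch below compares the change in an engaged group with the change in a comparable group that was not engaged, using entirely invented scores. Only the difference between the two changes can reasonably be attributed to the project (a simple difference-in-differences); the AHRC guidance does not prescribe a method, so treat this as an illustrative sketch.

```python
from statistics import mean

# Invented follow-up scores a year later, on the same 1-5 scale.
engaged_before, engaged_after = [2, 3, 2, 3], [4, 4, 3, 4]
control_before, control_after = [2, 3, 2, 3], [3, 3, 2, 3]

engaged_change = mean(engaged_after) - mean(engaged_before)
control_change = mean(control_after) - mean(control_before)

# The control group's change estimates what would have happened anyway;
# the difference between the two changes is the effect attributable
# to the project.
print(f"Engaged group change: {engaged_change:+.2f}")
print(f"Control group change: {control_change:+.2f}")
print(f"Attributable effect:  {engaged_change - control_change:+.2f}")
```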

Reporting results

You should carefully consider the evidence you have collected, thinking about what it tells you. Negative outcomes should not be ignored – they may be helpful in providing ‘lessons learned’ for future programmes/projects. The positive and negative findings from an evaluation should be fed back into the decision-making process for future programmes/projects. An example template for reporting back to funders is given at Annex 3.

In addition to providing a report for your funders, you may also consider reporting your findings in other ways to a wider public. Perhaps you could put highlights from the evaluation on your website, or publish some case studies of exemplary work conducted during the programme/project.

Once the evaluation is completed, you may also like to consider the process itself. There may be things you have learnt from the process and things you would like to change for future evaluation cycles – perhaps your aims were too vague so you would like to think about making them more measurable in the future; or a monitoring tool worked particularly well, and you now have a questionnaire template to adapt for future use.

Source: http://www.ahrc.ac.uk/documents/guides/understanding-your-project-a-guide-to-self-evaluation/

Natural Environment Research Council (NERC)

NERC have a comprehensive, downloadable guidance document based on their 'Pathways to Impact'.

Biotechnology and Biological Sciences Research Council (BBSRC)

BBSRC have a dedicated grants guide that includes a 'monitoring, evaluation and use of information' section.

Leverhulme Trust

The Leverhulme Trust have a webpage full of downloadable report instructions for grant holders, depending on the type of funding you have received.

Engineering and Physical Sciences Research Council (EPSRC)

EPSRC recognise the importance of Pathways to Impact, and have posted innovative examples as well as guidance.

National Institute for Health Research (NIHR)

NIHR host a dedicated evaluation, trials and studies coordination centre here at the University of Southampton.

Cancer Research UK (CRUK)

CRUK provide research evaluation guidance for their grant holders.

Next Steps

We hope you found this toolkit useful.

Before you go, you might want to use this checklist:

  • Do you have a logic model for your project/activity which incorporates evaluation?
  • Do you have target deadline(s) for completion of your activity/project?
  • In terms of data, have you taken the following into account?
    • What data will be created and by whom?
    • Have you defined and agreed roles and responsibilities?
    • What software and services are required?
    • How will you name/describe/reference your data?
    • How will you share data with collaborators and are restrictions/permissions needed?
    • How will you store data (short and long term)?
  • Have you decided which methods you will use to analyse your data?
  • How will you review your project? (review dates/phases/mechanisms/reporting)
  • Have you considered the potential short/medium/long-term impact(s) of your project/activity?

You can read more about evaluation for public engagement on our companion page: https://southampton.likn.co/per/support/evaluation-guide.page?

If you have a particular question or problem relating to evaluating public engagement, please email [email protected]
