Ensure that the purpose of your evaluation is identified and clearly articulated.
What and Why
To develop your evaluation plan you will need to identify the:
- Purpose of the evaluation
- Aims and objectives
- Approach and data requirements
- Resource requirements
What is the purpose of the evaluation?
It is important to be clear right at the beginning:
- Why are we conducting an evaluation?
- Who is it for?
These are really important questions to get right, as they will inform the design of your evaluation. We recommend that you work in partnership with your stakeholders to clearly identify and agree what the purpose, aims and objectives of the evaluation are. Working in partnership with your stakeholders allows you to explore their different views, values and priorities, and what each would like to gain from the evaluation.
What are aims and objectives?
The aim is the overall statement of what the evaluation will do (sometimes also called the goal); the objectives are specific statements of what the evaluation will try to achieve in order to meet that aim. We recommend that your objectives are SMART: Specific, Measurable, Achievable, Relevant and Time-bound.
This will help keep your evaluation realistic, achievable and easy to communicate. Your aim and objectives will guide your work, influencing the approach you take to the evaluation, what information you need to collect and the tools to collect it, so it is important to get them right.
What approach should I take?
To help with this aspect, we recommend a focus on the following questions:
- What information (data) do I need to answer my evaluation's aims and objectives?
- What information do I already have? (secondary data)
- What are the gaps, and what information (data) do I need to collect to fill them? (primary data)
- What has happened before and where are we now? (baseline data)
- What can we compare with? (control/comparator/benchmarking)
Consider using a mixture of methods, i.e. collecting both numerical (quantitative) and narrative (qualitative) data, as this will strengthen your evaluation and help you to triangulate your findings. This might mean using existing data (secondary data) or creating new data (primary data). It might also mean drawing on multiple sources – can you use clinical audit data, for example?
There are many frameworks and approaches to evaluation, which can be confusing. We have not included them here, but check out our tools if you would like more information.
What resources do I need?
It is important that all the resources needed to undertake the evaluation are identified and estimated at the outset (Brophy, Snooks and Griffiths, 2008) and costed into your business cases and plans. It is very easy to underestimate the cost of an evaluation, and there is very little specific external funding available for conducting service evaluations.
The level of resources you need will depend on the purpose of the evaluation. For example, if the purpose of your evaluation is to improve your services, then an evaluation conducted internally (self-evaluation) using existing project resources would probably be sufficient. If, however, you want to demonstrate that your service has had an impact or is cost-effective, you may need to consider an independent evaluation using more robust evaluation approaches (see the NPC and Clinks guide, What do we mean by standards of evidence?).
The cost of an evaluation varies, but from experience we recommend that for projects up to around £500k, approximately 10% of the project value is allocated to the evaluation. For larger-scale projects this proportion may decrease, typically ranging from 5% down to 2%.
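As a rough sketch, the rule of thumb above could be expressed as follows. The tier boundaries and exact percentages here are illustrative assumptions drawn from the rough figures in this guide, not a formal costing rule:

```python
def evaluation_budget(project_value: float) -> float:
    """Rough evaluation-budget estimate.

    The tiers below are illustrative assumptions based on this guide's
    rough percentages (about 10% for smaller projects, tapering towards
    2% for larger ones); they are not a prescribed costing formula.
    """
    if project_value <= 500_000:      # smaller projects: ~10%
        rate = 0.10
    elif project_value <= 2_000_000:  # assumed mid tier: ~5%
        rate = 0.05
    else:                             # larger projects: ~2%
        rate = 0.02
    return project_value * rate

print(evaluation_budget(400_000))  # 40000.0
```

In practice the figure you budget should reflect the purpose of the evaluation and the robustness required, not just project size.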
Planning your Evaluation
Watch this short video produced by University of the West of England in collaboration with the West of England Academic Health Science Network.
Creating a Risk Register
It is a good idea (and good practice) to keep a list of risks that may affect the findings or timescales of your evaluation. It can be helpful to share this with the project sponsor or funder, particularly if their involvement in mitigating these risks will improve the accuracy, timeliness or value of the final evaluation report. A risk register is a live document and needs to be reviewed and updated throughout the life of the evaluation. You can find an example risk register in the Toolbox below.
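A risk register need not be complicated. As a minimal illustrative sketch (the field names, example risks and the likelihood x impact scoring are assumptions for illustration, not a prescribed format), a register can be kept as a simple structured list and reviewed in priority order:

```python
# Illustrative example register: entries and scoring scheme are assumptions.
risks = [
    {"risk": "Key staff member leaves mid-evaluation",
     "likelihood": 2, "impact": 3,
     "mitigation": "Document processes; share knowledge across the team"},
    {"risk": "Data access delayed by information governance approvals",
     "likelihood": 3, "impact": 3,
     "mitigation": "Start approval applications early; agree timescales"},
    {"risk": "Low survey response rate",
     "likelihood": 3, "impact": 2,
     "mitigation": "Pilot the survey; send reminders; offer other channels"},
]

def prioritise(register):
    """Rank risks by a simple likelihood x impact score, highest first."""
    return sorted(register,
                  key=lambda r: r["likelihood"] * r["impact"],
                  reverse=True)

for r in prioritise(risks):
    print(r["likelihood"] * r["impact"], "-", r["risk"])
```

In practice this is usually kept as a spreadsheet with similar columns (plus an owner and a review date); the example risk register in the Toolbox shows one layout.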
The following tools are either internal resources developed by the APCRC or external resources we have found useful.
Aims and objective setting
- APCRC Aims and objectives guide
- APCRC Quick Guide to Understanding aims and objectives; monitoring and evaluation; measures
- APCRC EvaluationPlanTemplate in Word
- APCRC EvaluationPlanTemplate with guide
- APCRC Guide to Procuring an Independent Evaluation and Invitation to Tender Template – in development
- APCRC Guide to building Evaluation into Service Specifications
- APCRC Quick Guide to Evaluation Approaches
- APCRC Guide to evaluation designs
- Best Practice Guidelines in the Ethics and Governance of Service Evaluation
- NPC and Clinks guide to What do we mean by standards of evidence?
Evaluation Risk Register
Please note that we are not responsible for the content of external sites; these links are provided for guidance only.
In determining the amount required to finance the evaluation function, other organisations have estimated that 3–5% of the programme budget should be used for evaluation (Evaluation Practice Handbook).
- Your Research, Development and/or Evaluation team (if you work for Bristol, North Somerset or South Gloucestershire CCG, don't forget to come and see the APCRC)
- Quality Improvement leads
- Collaboration for Applied Health Research and Care West (CLAHRCwest)
- West of England Academic Health Science Network (West of England AHSN)
- Or your local universities
Consider your evaluation early alongside the development of the service and build your requirements into your project or programme management processes and resources.
Every study, no matter how well it is conducted, has some limitations. This is why it does not seem reasonable to use the words "prove" and "disprove" with respect to research findings. It is always possible that future research may cast doubt on the validity of any hypothesis or the conclusions of a study. (Psychology and Society)