Glossary

From ‘attribution’ to ‘triangulation’ – we know that evaluation and evidence terminology can be confusing. Our A-Z glossary of terms explains what the commonly used terms mean.

A

Attribution

“Causal link between observed (or expected to be observed) changes and a specific intervention”

WHO (2013) Evaluation Practice Handbook apps.who.int/iris/bitstream/10665/96311/1/9789241548687_eng.pdf

B

Baseline

“A set of measurements before any intervention starts (after any initial ‘run-in’ period with no intervention), with which subsequent results are compared”

Benchmark

“Evaluate (something) by comparison with a standard: we are benchmarking our performance against external criteria”

Benefit

A benefit is a measurable improvement, resulting from change, that is considered advantageous by at least one stakeholder and contributes to an organisational change. Intangible benefits relate to issues such as improvements in health and wellbeing or quality of life.

Budget impact analysis (BIA)

Aims to estimate the change in expenditure to a specific budget holder and primarily assesses financial affordability of an intervention.

C

Clinical effectiveness

The application of interventions which have been shown to be efficacious to appropriate patients in a timely fashion to improve patients' outcomes and value for the use of resources.

Confounder

“Confounding refers to a situation in which a measure of the effect of an intervention or exposure is distorted because of the association of exposure with other factor(s) that influence the outcome under investigation. This can lead to erroneous conclusions being drawn, particularly in observational studies”

Cost

The economic definition of cost (also known as opportunity cost) is the value of the opportunity forgone (strictly, the best opportunity forgone) as a result of engaging resources in an activity. Note that there can be a cost without any exchange of money. The economists' notion of cost also extends beyond costs falling on the health service alone: it includes, for example, costs falling on other services and on patients themselves.

Cost effectiveness analysis (CEA)

Aims to examine the costs of various approaches to achieving a specific health outcome. The analysis measures outcomes in ‘natural units’.

Cost utility analysis (CUA)

CUA aims to measure outcomes in terms of utilities, capturing both quantity and quality of life. The incremental cost of a programme from a particular point of view is compared to the incremental health improvement, expressed in quality-adjusted life years (QALYs).

Cost-benefit analysis (CBA)

Aims to determine whether the economic value of an intervention can justify its costs, by comparing the cost of two or more alternatives and reviewing the return on investment.

D

Demand

The need, ability and willingness to pay for a commodity.

Developmental evaluation

“Supports innovation development to guide adaptation to emergent and dynamic realities in complex environments”

Direct costs

All resources that are consumed when providing a health promotion programme. These may be incurred by the health promotion service, community or clients.

Discounting

The mathematical procedure for adjusting future costs and outcomes of health‐care interventions to “present value”. This adjusts for differences in the timing of cost (expenditure) compared to health benefits (outcomes).
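As a minimal sketch (not from the source), the standard discounting formula divides a future value by (1 + rate) raised to the number of years. The 3.5% rate and £1,000 figure below are hypothetical, chosen only for illustration:

```python
def present_value(future_value: float, rate: float, years: int) -> float:
    """Discount a future cost or benefit back to today's terms:
    PV = FV / (1 + r)**t."""
    return future_value / (1 + rate) ** years

# Hypothetical example: a £1,000 cost incurred in 5 years,
# discounted at an assumed annual rate of 3.5%:
pv = present_value(1000, 0.035, 5)
print(round(pv, 2))  # 841.97
```

Costs and outcomes occurring sooner are weighted more heavily than those occurring later, which is why the adjustment matters when an intervention's expenditure and health benefits fall in different years.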

E

Economic evaluation (economic appraisal)

The comparison of alternative courses of action in terms of their costs and consequences, with a view to making a choice.

Effectiveness

The extent to which programmes achieve their objectives, in real-life settings.

Efficacy

The effect of an intervention under ideal conditions, with participants fully complying with the programme.

Efficiency

Maximising the benefit obtained from any resource expenditure (funds, expertise, time etc) or minimising the cost of achieving any given benefit.

Evaluability

“The extent to which an activity or a programme can be evaluated in a reliable and credible fashion”

WHO (2013) Evaluation Practice Handbook apps.who.int/iris/bitstream/10665/96311/1/9789241548687_eng.pdf

Evaluability assessment

“Early review of a proposed activity in order to ascertain whether its objectives are adequately defined and its results verifiable”

WHO (2013) Evaluation Practice Handbook apps.who.int/iris/bitstream/10665/96311/1/9789241548687_eng.pdf

Evaluation

“Measure/judge current service without reference to a standard: what standard does this service achieve?”

www.hra.nhs.uk/documents/2013/09/defining-research.pdf

“A study in which research procedures are used in a systematic way to judge the quality or value of a service or intervention, providing evidence that can be used to improve it”

West of England Evaluation Strategy Group, 2013

Experimental evaluation

“A research design in which the researcher has control over the selection of the participants in the study, and these participants are randomly assigned to treatment or control group”

F

Formative evaluation

“An evaluation intended to improve performance”

WHO (2013) Evaluation Practice Handbook apps.who.int/iris/bitstream/10665/96311/1/9789241548687_eng.pdf

H

Health economics

The study of how scarce resources are allocated among alternative uses for the care of sickness and the promotion, maintenance and improvement of health. This includes the study of how health care and health-related services, their costs and benefits and health itself are distributed among individuals and groups in society.

I

Impact evaluation

“Objective test of what changes have occurred and the extent to which these can be attributed to the policy” or intervention / service

Incremental cost-effectiveness ratio (ICER)

Obtained by dividing the difference between the costs of two interventions by the difference in their outcomes (i.e. the extra cost per extra unit of effect).
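The calculation above can be sketched in a few lines. All figures here are hypothetical, used only to show the arithmetic:

```python
def icer(cost_new: float, cost_old: float,
         effect_new: float, effect_old: float) -> float:
    """Incremental cost-effectiveness ratio:
    (difference in costs) / (difference in outcomes)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical figures: a new treatment costs £12,000 and yields 6.0 QALYs;
# the comparator costs £8,000 and yields 5.5 QALYs.
print(icer(12000, 8000, 6.0, 5.5))  # 8000.0, i.e. £8,000 per QALY gained
```

Decision-makers then compare the resulting cost per extra unit of effect against whatever threshold they consider acceptable.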

Indirect costs

These relate to the losses to society incurred as a result of participating in the programme, such as the impact on production, domestic responsibilities and social and leisure activities.

Intangible costs

These relate to issues such as anxieties and impact on quality of life resulting from participation in the programme. These are generally difficult to measure and value and are often not included in the construction of the cost profile of an economic evaluation.

L

Logic model

“Logic models describe the relationship between an intervention’s inputs, activities, outputs, outcomes, and impacts”

www.gov.uk/government/publications/the-magenta-book

“A logic model is a systematic and visual way to present and share your understanding of the relationships among the resources you have to operate your program, the activities you plan, and the changes or results you hope to achieve”

wkkf.issuelab.org/resource/logic-model-development-guide.htm

N

Natural experiments

“Work best in circumstances where a relatively large population is affected by a substantial change in a well-understood environmental exposure, and where exposures and outcomes can be captured through routine data sources, such as environmental monitoring and mortality records”

O

Outcome measure

“Changes (desirable and undesirable) in individuals and populations that are attributed to” an intervention or service.

“Denotes the effects of care on the health status of patients and the population”

P

Participatory evaluation

“Improve program implementation and outcomes by actively engaging all stakeholders in the evaluation process”

Perspective

The point of view from which an analysis is carried out. The NHS perspective considers costs and benefits from the point of view of the healthcare system.

Present values

The value in today’s terms of future costs or benefits (after discounting).

Process evaluation

“A study which aims to understand the functioning of an intervention, by examining implementation, mechanisms of impact, and contextual factors”

UK Medical Research Council (MRC) guidance (2014) Process Evaluation of Complex Interventions mrc.ac.uk/documents/pdf/mrc-phsrn-process-evaluation-summary-guidance

Process measure

“Interactions between healthcare practitioner and patient; a series of actions, changes, or functions bringing about a result (such as mammography screening rate)”

“Denotes what is actually being done in giving and receiving care”

Q

Qualitative

“Qualitative research is used to explore and understand people’s beliefs, experiences, attitudes, behaviour and interactions. It generates non-numerical data, e.g. a patient’s description of their pain rather than a measure of pain. In health care, qualitative techniques have been commonly used in research documenting the experience of chronic illness and in studies about the functioning of organisations. Qualitative research techniques such as focus groups and in-depth interviews have been used in one-off projects commissioned by guideline development groups to find out more about the views and experiences of patients and carers”

Quality-adjusted life years (QALYs)

Calculated by adjusting the estimated number of life-years an individual is expected to gain from an intervention for the expected quality of life in those years. The quality-of-life score ranges from 0 (death) to 1 (perfect health), with negative scores allowed for states considered worse than death.
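A minimal sketch of the adjustment described above (the quality weights below are hypothetical): each life-year is multiplied by its quality-of-life weight, and the weighted years are summed.

```python
def qalys(quality_weights: list[float]) -> float:
    """Sum quality-of-life weights (0 = death, 1 = perfect health)
    over each life-year gained to give quality-adjusted life years."""
    return sum(quality_weights)

# Hypothetical gain: four years at a weight of 0.9, then two at 0.6,
# i.e. 4 x 0.9 + 2 x 0.6 quality-adjusted life years:
print(round(qalys([0.9, 0.9, 0.9, 0.9, 0.6, 0.6]), 2))  # 4.8
```

Six unadjusted life-years thus count as 4.8 QALYs once the lower quality of the final two years is taken into account.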

Quantitative

“Quantitative research generates numerical data or data that can be converted into numbers, for example clinical trials or the National Census, which counts people and households”

Quasi-experimental evaluation

“A quasi-experiment is an observational study in which the subjects to be observed are not randomly assigned to different groups in order to measure outcomes, as in a randomized experiment, but grouped according to a characteristic that they already possess”

S

Sensitivity analysis

Assessing the robustness of an economic model by examining how the results change when key variables are varied over a specified range.
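A one-way sensitivity analysis can be sketched as below: recompute a result (here an incremental cost-effectiveness ratio, with hypothetical figures) while a single input is varied over a plausible range, holding everything else fixed.

```python
def icer(delta_cost: float, delta_effect: float) -> float:
    """Extra cost divided by extra effect."""
    return delta_cost / delta_effect

# Hold the incremental cost fixed at a hypothetical £4,000 and vary the
# assumed QALY gain over a plausible range:
for effect_gain in [0.3, 0.5, 0.7]:
    print(effect_gain, round(icer(4000, effect_gain)))
# ICER ranges from roughly £13,333 down to £5,714 per QALY; the model is
# 'robust' if the decision would be the same across the whole range.
```

Multi-way and probabilistic variants vary several inputs at once, but the principle is the same: test whether the conclusion survives plausible changes to uncertain inputs.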

Structure measure

“Measures of organisational characteristics (such as staffing ratios, number of hospital beds)” or “Denotes attributes of the setting in which care occurs”

Summative evaluation

An evaluation “conducted at the end of an intervention (or a phase of that intervention) to determine the extent to which anticipated outcomes were produced”

WHO (2013) Evaluation Practice Handbook apps.who.int/iris/bitstream/10665/96311/1/9789241548687_eng.pdf

T

Theory based evaluation

“Combining outcome data with an understanding of the process that led to those outcomes”

Triangulation

“Triangulation, first used in 1959, is defined as a combination of multi methods in a study of the same object or event to depict more accurately the phenomenon being investigated”

The Evaluation and Evidence toolkits go hand in hand. Using and generating evidence to inform decision making is vital to improving services and people’s lives.
