This is about… Evaluating impact
Applicable to level(s): Single practice; Network of practices; Regional or national networks
Likely skills and resources needed: Clinical; Management; Data collection and analysis
Likely difficulty:
Likely time commitment:
Do… Remember that cumulative, small changes can make a big difference
Don’t… Over-complicate your evaluation

Illustrations
Here is a simple audit of asthma plans carried out at one practice in Leeds.
Please send us any examples of quality improvement projects and clinical audits you would like to share.
If you are interested in research and want to see what a rigorous, ‘real world’ randomised trial looks like, see the randomised trial findings from ASPIRE.21
General practices were randomly assigned to receive an implementation package targeting diabetes control or risky prescribing (Trial 1); blood pressure control or anticoagulation in atrial fibrillation (Trial 2). The main outcomes were respectively: achievement of all recommended levels of haemoglobin A1c, BP, and cholesterol; risky prescribing levels; achievement of recommended BP; and anticoagulation prescribing.
The implementation package produced a clinically significant and cost-effective reduction in one target only: risky prescribing. We concluded that an adaptable implementation package was cost-effective for targeting prescribing behaviours within the control of clinicians, but not for more complex behaviours that also required patient engagement. Given known associations between risky prescribing combinations and increased morbidity, mortality, and health service use, a scaled-up risky prescribing implementation package could have an important population impact.
Helpful resources
What is the aim of evaluation?
The main aim of an evaluation is to find out whether the improvement approach achieved its intended goals. This will involve measuring any change in the processes of care, in patient outcomes, or both. There are also opportunities to address other evaluation questions, such as why the approach worked (or not) and how it can be improved or adapted for another problem.
Whilst this manual may also be of interest to those planning improvements as part of a research project, with the aim of generating new, generalisable knowledge, it does not cover research designs. There are resources available to understand and guide research evaluations.3 22-26
Did the improvement approach work?
Essentially, this involves conducting an audit cycle to assess any differences in care or outcomes before and after the improvement approach. Considerations include (a simple worked example follows this list):
- Agreeing key outcomes in advance
- Using the same method to collect and analyse data before and after implementation of the improvement approach
- Timing of data collection to capture any short-term or longer-term impacts – processes of care are more likely to change before patient outcomes
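As a simple illustration of the ‘before and after’ comparison, here is a minimal sketch in Python. The figures are made up for illustration (they are not from the Leeds asthma audit) and the function name is ours; it compares the proportion of patients meeting an audit criterion before and after the improvement approach using a two-proportion z-test.

```python
from math import erf, sqrt

def two_proportion_test(before_met, before_total, after_met, after_total):
    """Compare the proportion of patients meeting an audit criterion
    before and after an improvement approach (two-proportion z-test)."""
    p_before = before_met / before_total
    p_after = after_met / after_total
    pooled = (before_met + after_met) / (before_total + after_total)
    se = sqrt(pooled * (1 - pooled) * (1 / before_total + 1 / after_total))
    z = (p_after - p_before) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_before, p_after, z, p_value

# Hypothetical audit figures: patients with a written asthma plan,
# before and after the improvement approach (illustration only)
before_pct, after_pct, z, p = two_proportion_test(before_met=34, before_total=80,
                                                  after_met=52, after_total=78)
print(f"Before: {before_pct:.0%}  After: {after_pct:.0%}  z = {z:.2f}  p = {p:.3f}")
```

For a small single-practice audit, simply presenting the before and after percentages, or plotting them over time, may be all that is needed; formal significance testing is optional and should not over-complicate the evaluation.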
No battle plan ever survives contact with the enemy.
Helmuth von Moltke the Elder
Why did the improvement approach work (or not)?
There are many reasons why improvement approaches don’t work as planned. Possible explanations include:
- Unrealistic expectations about predicted or hoped-for effects
- Loss of fidelity (see ‘How can we put our plan into action?’)
- Timing of data collection – did you miss any transient but important early effects, or is it too early to detect any important longer-term impacts?
- The data collected did not capture effects (although beware of rationalising too much after the event)
There are a number of ways to get an indication of why an improvement approach did or did not work as planned. These are similar to methods outlined earlier in ‘Why aren’t we achieving our goals?’
Deciding the next step
If the improvement approach largely worked as planned, you will need to decide whether to continue or repeat it in order to maintain your achievement. Having learned from this experience, you may also wish to move on and select the next priority to tackle…