This study was funded by the National Institute for Health Research (NIHR) [Programme Grants for Applied Research (Grant Reference Number RP-PG-1209-10040)]. The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care.
You can download a full version of this guide, or you can access individual sections below.
This guide is based upon a major research programme, Action to Support Practices Implementing Research Evidence (ASPIRE). The research was led by the University of Leeds and brought together collaborators including the West Yorkshire clinical commissioning groups, patients and the public, and representatives from the National Institute for Health and Care Excellence (NICE). Over 200 general practices from West Yorkshire took part in the research programme.
- general practices combining efforts
- focusing their attention on ‘high impact’ clinical priorities
- underpinned by a sound evidence base
- associated with scope for improvement
This is about… Setting priorities for change
Applicable to level(s): Single practice; Network of practices; Regional or national networks
Likely skills and resources needed: Clinical; Management
Do… Apply some criteria to justify your choice
Don’t… Get hijacked by strong views or vested interests
Illustrations: Developing ‘high impact’ guideline-based quality indicators for UK primary care. This is an example from research which illustrates a structured consensus process.
Helpful resources:
- How NICE prioritises quality standards.
- A checklist for prioritising clinical practice recommendations for action.
- Strength of evidence underpinning clinical practice recommendations
- Burden of illness, e.g. prevalence, severity, costs
- Fit with explicit national or local priorities and initiatives
- Potential for significant patient benefit, e.g. longevity, quality of life, safety of care
- Scope for improvement upon current levels of adherence, e.g. from perceived current low levels or unacceptably high variations
- Feasibility of measuring progress, e.g. from routinely collected clinical data
- Extent to which following a recommendation is directly within the control of individual practice teams or professionals
- Likelihood of achieving cost savings without patient harm
- You might have little or no choice over what to focus on! There is no shortage of national and local priorities. You will struggle to address all of these at the same time, so you could focus, say, on a limited number of clinical practice recommendations selected from one clinical guideline.
- Who needs to be involved, as you will require different perspectives and skills, e.g. clinicians, practice support staff, patients and carers, commissioning, public health.
- How high the stakes are. A one-off, informal meeting will usually suffice for a general practice. Larger organisations or networks, which need to be accountable and transparent, might consider using a structured consensus process.
This is about… Measuring adherence to recommended practice
Applicable to level(s): Single practice; Network of practices; Regional or national networks
Likely skills and resources needed: Clinical; Administrative; Data collection and analysis
Do… Think about what routinely recorded clinical data might already be available
Don’t… Attempt to construct overly complicated indicators
Illustrations, from research studies:
- Variations in achievement of evidence-based, high-impact quality indicators in general practice.
- Prescribed opioids in primary care.
- High risk prescribing in primary care patients particularly vulnerable to adverse drug events.
- Whether there are existing indicators or sets of routinely collected data which will be sufficient for your needs, e.g. prescribing indicators, Quality and Outcome Framework (QOF) data.
- The advantages and disadvantages of measuring processes or outcomes of care (Box 1).
- The advantages and disadvantages of single or composite (combined) indicators (Box 2).
- How reliably and accurately coded routinely collected data are. Some types of data are generally coded reliably in general practice (e.g. prescribing, certain diagnostic tests, diagnoses for patients on disease registers) whilst others are not (e.g. referrals, diagnoses not systematically recorded for disease registers).
- Defining the targeted patient (‘denominator’) population (e.g. all coded type 2 diabetes) or particular sub-populations (e.g. coded type 2 diabetes with recorded poorer control).
- Defining those (‘numerator’) patients with evidence of a recommended clinical intervention offered or received or meeting defined treatment targets.
- Deciding whether to collect data to understand any likely variations in practice, e.g. patient demographics, co-morbidities.
- Developing or adapting existing searches of electronic patient data.
- Piloting and refining searches prior to large scale data collection.
- Whether to include all general practices or a sample, to ensure the data apply to ‘typical’ practices which have not self-selected.
- Seeking approval, if required, from general practices for data collection.
- Adherence to information governance requirements.
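The numerator/denominator logic above can be sketched in a few lines of code. This is an illustrative example only, using hypothetical field names rather than real clinical system codes: the denominator is the eligible coded population and the numerator is the subset with the recommended process recorded.

```python
# Illustrative sketch of an adherence indicator: numerator / denominator.
# Field names are hypothetical, not real clinical system codes.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    has_type2_diabetes: bool  # denominator criterion (coded diagnosis)
    hba1c_recorded: bool      # numerator criterion (recommended process recorded)

def adherence(records):
    """Proportion of eligible patients with the recommended process recorded."""
    denominator = [r for r in records if r.has_type2_diabetes]
    if not denominator:
        return None  # no eligible patients: the indicator is undefined
    numerator = [r for r in denominator if r.hba1c_recorded]
    return len(numerator) / len(denominator)

records = [
    PatientRecord(True, True),
    PatientRecord(True, False),
    PatientRecord(False, False),  # not in the denominator population
    PatientRecord(True, True),
]
print(adherence(records))  # 2 of 3 eligible patients -> 0.666...
```

Real searches would of course run against coded electronic patient records, but the same denominator-first logic applies.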
- Overall level of adherence for each indicator; if high there may be no need for further action except for positive feedback; if low or lower than expected, consider further action if room for improvement exists.
- Patterns of variation between general practices, e.g. can substantial variation confidently be explained away by known differences in practice population demographics?
- Patterns of variation between any patient sub-groups, e.g. age, gender, co-morbidities.
- Likely chance variation, especially when dealing with smaller numbers of practices or patients.
- Unexpected findings to prompt consideration and investigation of plausible alternative explanations, e.g. errors in searches, limitations of coding.
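Chance variation with small numbers can be made concrete with a confidence interval around each practice's adherence proportion. The sketch below uses the standard Wilson score interval; the figures are invented for illustration.

```python
# Sketch: 95% Wilson score interval for an adherence proportion.
# Wide intervals with small denominators show how much apparent
# between-practice variation may simply be chance.
import math

def wilson_interval(successes, n, z=1.96):
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)

# Two practices with identical 80% adherence: the smaller practice's
# interval is far wider, so its apparent position is far less certain.
print(wilson_interval(8, 10))    # roughly (0.49, 0.94)
print(wilson_interval(80, 100))  # roughly (0.71, 0.87)
```

Plotting such intervals (or funnel-plot control limits) before ranking practices helps avoid over-interpreting small-number variation.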
Process of care indicators:
- Useful if there is strong evidence predicting better outcomes if the process of care is followed, e.g. reduced stroke risk for anticoagulation in atrial fibrillation
- Less useful if patient outcomes are not tightly linked to processes of care, e.g. screening or case-finding for depression4
- Measurement can help understand variations in patient outcomes, e.g. higher levels of asthma exacerbations might be linked to poorer provision of patient asthma plans5
- Often available as routinely collected data, e.g. prescribing, test ordering

Outcome indicators:
- Can assess what is ultimately important to patients, e.g. quality of life
- Factors other than the healthcare provided may influence outcomes, e.g. co-morbidities
- May need statistical adjustment for casemix to enable fair comparisons between practices
- Intermediate outcomes can help assess responses to treatment, e.g. blood pressure control
Single indicators:
- Often simpler to apply, e.g. proportion of people with diabetes whose blood pressure is adequately controlled
- Allow detection of specific aspects of care that need attention, e.g. albumin:creatinine ratios in chronic kidney disease

Composite indicators:
- Can summarise one or more key aspects of quality of care to help rapid interpretation of indicators, e.g. proportion of people with diabetes who receive all recommended processes of care
- Are only as good as their underlying single indicators
This is about… Understanding gaps between current and recommended practice
Applicable to level(s): Single practice; Network of practices; Regional or national networks
Likely skills and resources needed: Clinical; Administrative; Management
Do… Consider the range of individual, team and organisational level factors that can influence clinical care. Focus on identifying the most important factors that you can change.
Don’t… Assume that lack of knowledge is the main explanation for evidence-practice gaps
Illustrations, from research studies:
- A qualitative study to understand adherence to multiple evidence-based indicators in primary care.
- A qualitative study to understand long-term opioid prescribing for non-cancer pain in primary care.
- A systematic review of barriers to effective management of type 2 diabetes in primary care.
Helpful resources: There are many frameworks which set out various ways of grouping factors that influence practice. Some are rather detailed, but this sample illustrates a range of approaches: a checklist for identifying determinants of practice (see Table 1).
- Those which are most important, e.g. frequently encountered, pivotal steps in patient pathways
- Those with strongest consensus amongst team members
- Those most amenable to change, e.g. staff beliefs and processes of care as opposed to structures and wider environmental factors
- Those which can be readily linked to one or more approaches to change practice
This is about… Evidence-based approaches to improve practice
Applicable to level(s): Single practice; Network of practices; Regional or national networks
Likely skills and resources needed: Clinical; Management
Do… Accept that most approaches to improving practice have modest effects which can accumulate, if used consistently over time, to produce a significant impact
Don’t… Waste time on complicated and costly improvement fads
Illustrations:
- Education, informatics, and financial incentives for safer prescribing.
- Pharmacist-led feedback and educational outreach support for safer prescribing.
- Feedback to high antibiotic prescribers.
- Brief educational messages for diabetes.
- A review of computerised decision support.
- A review of audit and feedback.
- A review of educational meetings.
Helpful resources:
- Recommendations on audit and feedback.
- Examples of audit and feedback.
- Strength of evidence. Some approaches have a stronger evidence-base than others. For example, audit and feedback has been tested in randomised trials many times across a range of settings and clinical topics. Whilst there are no guarantees it will work consistently for a given problem, there are ways to improve the chances of success – such as providing repeated rather than one-off feedback and including explicit action plans with feedback. In contrast, there is a much more limited evidence base on financial incentives, suggesting that you should use this approach with caution.
- The nature of the implementation problem. You need to apply some judgment in deciding which improvement approaches may work best for a given clinical problem. For example, computerised prompts can reduce errors of omission in prescribing decisions. However, they are less likely to work when tackling more complex issues, such as counselling patients or reducing emergency readmissions.
- Fit with available resources and skills. You need to make the best use of existing resources, such as practice pharmacists in auditing prescribing and educating the team.
- Unintended consequences. Some approaches may not work as intended or even have undesired side effects. For example, feedback on clinical performance showing a large gap between actual and recommended practice can be demotivating, or prescribing safety prompts which appear on-screen after you have made a clinical decision and counselled a patient on treatment can de-rail a consultation.
- The balance of costs and benefits. The effects of interventions may not always pay for themselves. For example, for educational outreach visits to reduce prescribing, the costs of educator and staff participation time may eclipse any savings. However, if the same approach of education outreach was even only modestly successful in improving your practice’s use of clinically effective strategies to promote weight loss or reduce smoking, the longer term population health benefits could outweigh the upfront costs.
- Single versus combined approaches. It is often possible to combine different approaches to improve practice, for example, educational outreach with audit and feedback. In some cases this can make sense if the approaches are complementary, e.g. if the outreach meetings aim to reinforce action planning following feedback. However, combined approaches can be more costly. Furthermore, there is no convincing evidence that combined approaches are more effective than single approaches – although this may be because evaluators have ‘thrown in the kitchen sink’ in efforts to address more difficult improvement problems.
- Effects are in the range, if not better, than those of many recommended clinical treatments.
- Effects can be worthwhile in relation to costs of improvement approaches.
- Effects of improvement approaches can be complementary and cumulative over time.
This is about… Developing a plan of action
Applicable to level(s): Network of practices; Regional or national networks
Likely skills and resources needed: Clinical; Management
Do… Think logically about how you might link different barriers to and enablers of best practice to improvement approaches
Don’t… Make this more complicated than you really need to
Illustrations:
- This is how we developed an approach to change practice. It is fairly complex because it was used for research purposes.
- This study is from secondary care but shows how an approach to change practice was developed based upon barriers and enablers.
Helpful resources: This is a list of 93 behaviour change techniques20. We do not suggest that you learn it! However, you might wish to look through it if you are looking for new ways to help change the behaviour of health professionals (or patients).
- Developing approaches to improve practice can sometimes become complicated and challenging within limited timelines and resources. Behaviour change techniques offer a checklist of active ingredients to consider.
- Behaviour change techniques can be linked to different barriers and enablers. For example, limited abilities to recall all relevant clinical information when making a prescribing decision can be helped by prompts and reminders. There is no rule book (yet) on how to match behaviour change techniques to barriers and enablers; some degree of judgment is usually needed.
- Different improvement approaches can include similar behaviour change techniques. For example, audit and feedback can also include all or most of those mentioned earlier for educational outreach visits. This is useful to bear in mind if resources are available for audit and feedback but not for educational outreach visits. Therefore, it may be possible to deliver similar active ingredients but within different improvement approaches. However, if you are using more than one improvement approach (e.g. both educational outreach visits and audit and feedback), some degree of duplication may help reinforce any critical behaviour change techniques.
- Known evidence of effectiveness of the improvement approach (e.g. educational meetings), including what factors are likely to make them more, or less, effective
- Known barriers to and enablers of improvement
- Available resources and skills (e.g. routinely collected data for audit and feedback, skills in designing computerised prompts)
- Likely feasibility – how confident you are that the approach will work as intended
Barriers and enablers can be matched to behaviour change techniques, which may be delivered through evidence-based approaches such as audit and feedback, educational outreach visits and computer prompts:
- Limited awareness or recall of treatment goals: inform and prompt recall of clinical goals
- Limited awareness of clinical benefit: emphasise positive consequences of changing clinical practice (and negative consequences of not doing so)
- Limited insight into scope for improving practice: comparative feedback
- Inability to recall all relevant clinical information at time of consultation: triggered prompts and reminders
- Risk of good intentions to change fading: action planning
- Meet with the practice staff your improvement approach is designed to help, in a group or individually. Ask them to think aloud as they work through any instructions, processes or materials. Let them know that you particularly want to hear about problems they might think you don’t want to hear about! Ask if they can suggest any solutions to these problems.
- Then probe people on feasibility (how likely is it to work in real life?), coherence (does the overall improvement approach make sense to them?), comprehensiveness (are all of the most important barriers addressed?) and fit (are there opportunities to embed the intervention within existing routines and resources?)
- Make adjustments as you proceed. If this is important enough, it is worth investing time in further meetings to get it right.
- Pilot the whole improvement approach or its separate components (e.g. computerised prompts) in a small number of practices. Again, actively probe for issues, especially around feasibility and fit with routines and resources.
This is about… Preparing for the launch
Applicable to level(s): Network of practices; Regional or national networks
Likely skills and resources needed: Clinical; Administrative; Management
Do… Consider whether you have the commitment and resources to embed changes within your practice or network
Don’t… Choose a launch period that clashes with competing initiatives or known busy periods
- Timing, to avoid interference (or even to align) with any other major initiatives or known peak periods (e.g. winter flu)
- Whether to go for a phased or ‘big bang’ start; the former is suitable if you have limited resources and allows more scope for continuing refinement following feedback, whilst the latter allows clarity around a launch date
- Whether this is a one-off campaign or whether you can embed and sustain your improvement approach
- Is the approach designed as intended, i.e. to address all or most major known barriers by embedding relevant behaviour change techniques?
- Are those responsible for delivery sufficiently trained, e.g. are staff delivering educational outreach visits trained to a sufficient standard, or are those people nominated as local opinion leaders ‘on message?’
- Are arrangements in place to ensure that the improvement approach can be delivered on time to all practices and staff targeted?
- Do targeted practices and staff actually receive all components of the improvement approach?
- Do targeted practices and staff actually take any subsequent action prompted or supported by the improvement approach?
This is about… Evaluating impact
Applicable to level(s): Single practice; Network of practices; Regional or national networks
Likely skills and resources needed: Clinical; Management; Data collection and analysis
Do… Remember that cumulative, small changes can make a big difference
Don’t… Over-complicate your evaluation
Illustrations:
- Here is a simple audit of asthma plans carried out at one practice in Leeds.
- Please send us any examples of quality improvement projects and clinical audits you would like to share.
- If you are interested in research and want to see what a rigorous, ‘real world’ randomised trial looks like, see the randomised trial findings from ASPIRE21.

General practices were randomly assigned to receive an implementation package targeting diabetes control or risky prescribing (Trial 1); blood pressure control or anticoagulation in atrial fibrillation (Trial 2). The main outcomes were respectively: achievement of all recommended levels of haemoglobin A1c, BP, and cholesterol; risky prescribing levels; achievement of recommended BP; and anticoagulation prescribing.

The implementation package produced a significant, clinically and cost-effective reduction in one target only: risky prescribing. We concluded that an adaptable implementation package was cost-effective for targeting prescribing behaviours within the control of clinicians, but not for more complex behaviours that also required patient engagement. Given known associations between risky prescribing combinations and increased morbidity, mortality, and health service use, a scaled-up risky prescribing implementation package could have an important population impact.
- Agreeing key outcomes in advance
- Using the same method to collect and analyse data before and after implementation of the improvement approach
- Timing of data collection to capture any short term or longer term impacts – processes of care are more likely to change before patient outcomes
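The before-and-after arithmetic is simple but worth making explicit: apply the same indicator, collected the same way, at both time points, and report the absolute change. The numbers below are invented for illustration.

```python
# Sketch of a simple before/after evaluation: the same adherence
# indicator, measured with the same method, before and after launch.
def adherence(numerator, denominator):
    return numerator / denominator

before = adherence(120, 200)  # hypothetical baseline audit: 60%
after = adherence(150, 200)   # hypothetical follow-up audit: 75%
change = after - before

print(f"before {before:.0%}, after {after:.0%}, absolute change {change:+.0%}")
```

For small practices, pairing this with the kind of confidence interval shown earlier in the guide helps distinguish real change from chance variation.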
‘No battle plan ever survives contact with the enemy.’ (Helmuth von Moltke the Elder)
- Unrealistic expectations about predicted or hoped-for effects
- Loss of fidelity (did you put your plan into action as intended?)
- Timing of data collection: did you miss any transient but important early effects, or is it too early to detect any important longer term impacts?
- The data collected did not capture effects (although beware of rationalising too much after the event)
- Demonstrating evidence for CQC
- Educational outreach instructions for risky prescribing
- Educational outreach set up for risky prescribing
- Instructions for SystmOne search for risky prescribing
- Protocol for risky prescribing prompts
- Search algorithm for risky prescribing