Ian Scott - Princess Alexandra Hospital QLD - KEYNOTE ADDRESS | Clinical Audit – Is It Up To the...

Post on 19-Jun-2015


Ian Scott delivered the presentation at the 2014 Clinical Audit Improvement Conference. The Clinical Audit Improvement Conference explored the role of clinical audit in the new era of National Care Standards. For more information about the event, please visit: http://bit.ly/clinicalaudit14

Transcript of Ian Scott - Princess Alexandra Hospital QLD - KEYNOTE ADDRESS | Clinical Audit – Is It Up To the...

Clinical audit – is it up to the task in the new era of national care standards, revalidation and disinvestment?

Ian Scott

Director of Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital
Associate Professor of Medicine, University of Queensland

Chair, Acute Medicine, QH SGMCSN
Past Chair, QH Cardiac Collaborative

Member, ACSQHC Clinical Standards Advisory Committee
Senior Advisor, CSANZ/NHFA Clinical Guidelines Executive Group

4th Clinical Audit Conference

Sydney

25–26 August 2014

Defining audit

• A quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria and the implementation of change

Audit types

• Standards-based audit
– snapshot audits, periodic audits, continuous audits (registries, cohorts)
– individual vs group
– single-discipline vs multidisciplinary

• Adverse occurrence screening and critical incident monitoring
– peer review of cases which have caused concern or from which there was an unexpected outcome
– multidisciplinary team review and learning

• Peer review
– individual cases discussed by peers to determine, with the benefit of hindsight, whether the best care was given
– recommendations made from these reviews are often not pursued, as there is no systematic method for following them up

• Patient surveys and focus groups
– methods used to obtain users' views about the quality of care received

Emerging roles for clinical audits

• National healthcare quality standards
– system failures and suboptimal care
– patient risk and harm
– unwarranted variation in practice

• Clinician revalidation
– assurance of a minimum level of competence to safeguard patient safety

• Disinvestment
– minimising the use of ineffective or harmful interventions
– reducing waste

Not all variation is bad

• Appropriate variation in care (optimal care) does occur, owing to a person's particular illness or choices

• Suboptimal care
– underuse of care: the benefits of a treatment or procedure clearly outweigh any potential harm from its use, but it is not used
– overuse of care: a treatment or procedure is widely used, but the evidence of its benefits is limited or missing
– misuse of care: the care provided to a person is not based on their values and preferences, and the risks and benefits of alternative treatments have not been fully explained to them

Cautions from the NHS

• Concerns about effectiveness and cost-benefit as a policy instrument for health care quality improvement
– Barton et al. 1995; Committee of Public Accounts 1995

• Theory behind audit ill-defined and untested, contributing to confusion about how it should be implemented and practised
– Nolan & Scott 1993; Fulton 1996; Hopkins 1996; Miles et al. 1996

• Apparent false assumption by policy makers and others that health care professionals intuitively understand audit methods and are able to apply them effectively

• Process for the formal introduction of audit lacks accountability and contains no independent means of verifying and assuring its rigour and effective application
– Pollitt 1993; Miles et al. 1996; Scottish Office 1998

• Many audits considered to be poorly designed, to have problems with standard setting, and to be characterised by data collection inconsistencies and completion failure
– Johnston et al. 2000

Audit of audits

• Review of 25 audit projects within a 27-bed intensive care unit of a UK teaching hospital

• Each audit project reviewed independently by two assessors using the Audit Project Assessment Tool (APAT) produced by the UK Clinical Governance Support Team

• All used only retrospective data

• Audit projects were contributed from all sections of the multidisciplinary critical care team, but there were few truly multidisciplinary projects
– four were re-audits: three showed service improvement; one showed deterioration

• Eleven (47%) classified as good-quality projects using the APAT

• Despite the clinical audit programme being active and well supported, objective evidence of clinical governance benefit was lacking

Anderson et al. Br J Hosp Med 2012

Barriers to effective audit

• Concerns about the purpose of audit

• Failings in leadership

• Confidentiality issues
– does audit require ethical approval?

• Increased workload and feasibility

• Fear of disciplinary action and litigation

• Scepticism about its actual value

• Seen as a one-off, retrospective ‘project’ or ‘research’ exercise

Audit cycle

Evidence for audit and feedback

• 140 RCTs of audit and feedback

• Intervention effects ranged from substantially positive (70% increase in desired behaviour) to negative (9% absolute decrease)

• Median adjusted risk difference: 4.3% absolute increase (IQR 0.5% to 16%) in desired clinician behaviour
– effect size varied with the behaviour targeted by the intervention

• Feedback is more effective when:
– baseline performance is low
– the source is a supervisor or colleague
– it is provided more than once
– it is delivered in both verbal and written formats
– it includes both explicit targets and an action plan

Ivers et al. Cochrane Database Syst Rev 2012;(6):CD000259

Problems with audits

• Unexpected variation over time and space in:
– disease severity and the prevalence of co-morbid disease
– definitions of clinical findings, and even of diseases
– dosage, route, etc. of treatments
– methods used to follow up patients to determine outcomes

Problems with audit

• Incomplete, unrepresentative coverage
– patients omitted
– systematic baseline differences between patients who receive different therapies
– data abstracted differently from records if there is a good or a bad outcome (hindsight bias)
– gaming of data

• Problems with coding and classification systems
– loss of clinically relevant detail
– misuse of codes, or application of differing coding conventions
– use of different versions of a coding system

• Hazards in the analysis of clinical databases
– poor understanding of statistics
– temptation to perform multiple cross-tabulations and tests of significance (‘data dredging’)
– temptation to conduct inappropriate subgroup analyses
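The ‘data dredging’ hazard can be demonstrated by simulation: run enough unplanned significance tests on pure noise and some will come up ‘significant’. A sketch (simulated data and a crude permutation test; nothing here comes from a real audit):

```python
# The 'data dredging' hazard, simulated: both groups are drawn from the SAME
# distribution, so every "significant" difference is a false positive.
import random

random.seed(1)

def spurious_hit(n=30, reps=200):
    # Crude two-sided permutation test on two samples of pure noise;
    # returns True when the estimated p-value falls below 0.05.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    observed = abs(sum(a) / n - sum(b) / n)
    pooled = a + b
    more_extreme = 0
    for _ in range(reps):
        random.shuffle(pooled)
        if abs(sum(pooled[:n]) / n - sum(pooled[n:]) / n) >= observed:
            more_extreme += 1
    return more_extreme / reps < 0.05

tests = 100  # e.g. 100 unplanned cross-tabulations or subgroup analyses
false_positives = sum(spurious_hit() for _ in range(tests))
print(false_positives, "of", tests)  # roughly 5 expected by chance alone
```

Around one in twenty unplanned comparisons will ‘succeed’ by chance, which is why post-hoc subgroup findings in audit data need pre-specified hypotheses or correction for multiplicity.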

Meaningful audit

• Determining thresholds
– for acceptable performance: a minimum standard of quality/competence (‘summative’)
– for ongoing quality improvement (‘formative’)

• Measurement fidelity
– evidence-based and standardised specifications
– adequate sampling
– adjustment for confounding patient factors (for outcome measures)
– sound psychometric properties
• validity: the extent to which the audit measures what it purports to measure
• reliability: inherent measurement error (or signal-to-noise ratio); internal consistency, test–retest reliability (reproducibility), intra-rater and inter-rater reliability

• Actionable feedback
• Change in clinical processes
• Re-audit
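The psychometric properties above can be made concrete. As a hypothetical sketch (the ratings and helper function below are invented, not from the talk), Cohen's kappa is one standard way to quantify inter-rater reliability between two audit assessors beyond chance agreement:

```python
# Hypothetical sketch: two auditors independently rate ten charts as
# compliant (1) or non-compliant (0); Cohen's kappa quantifies their
# inter-rater reliability beyond chance agreement. Ratings are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of charts on which the two auditors agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal proportions
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.52
```

Raw agreement here is 80%, yet kappa is only about 0.5, because much of that agreement is expected by chance; this is one reason psychometric properties deserve attention before audit results are compared across assessors.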

Actionable feedback

Interpretable formats
Disseminated to all relevant parties

National standards

• ACSQHC health service standards 1–10

• ACSQHC situation-specific standards
– acute coronary syndromes
– acute stroke
– antimicrobial stewardship
– orthopaedic procedures

National standards

“The Standards provide a nationally consistent and uniform set of measures of safety and quality for application across a wide variety of health care services. They propose evidence-based improvement strategies to deal with gaps between current and best practice outcomes that affect a large number of patients.”

William J Beerworth, Chair, 2011

Audit tools

Audit tool

National service standards

• Hundreds of audit questions

• Standards now part of accreditation processes
– but does accreditation work?

• Many standards relate to committees, policies and procedures, documentation requirements and risk management systems
– audits at facility level ask for ‘…evidence of…’
– unit/patient-level audits are more specific (% n/N)
– but there are no mandatory minimum sample sizes or intervals

• Many standards assume we know what optimal care is and how to achieve it, when we don't
– tension between the perceived need to improve care and knowing how to do it

Health service standards – my cut

1 – Governance for Safety and Quality in Health Service Organisations
• failures in credentialing, peer disclosure, and rules and oversight pertaining to impaired doctors
• failure in patient disclosure and public reporting
• failure in teamwork and interdisciplinary collaboration
• mandatory review of all unexpected deaths, arrests, ICU transfers and SAC 3 events

2 – Partnering with Consumers
• failure in shared decision-making

3 – Preventing and Controlling Healthcare Associated Infections
• handwashing, bare below the elbows, no lanyards or ties
• antimicrobial stewardship
• good care of invasive devices

4 – Medication Safety
• better training of doctors and nurses; point-of-care decision support
• more clinical pharmacists on wards

5 – Patient Identification and Procedure Matching
• bar coding
• time out / WHO surgical checklist

6 – Clinical Handover
• SBAR at every handover
• transition summaries

7 – Blood and Blood Products
• authorised release

8 – Preventing and Managing Pressure Injuries
• foam mattresses, regular turning, proper nutrition, continence aids

9 – Recognising and Responding to Clinical Deterioration
• contextualised track-and-trigger charts; ACPs and ARPs

10 – Preventing Falls and Harm from Falls
• medication reviews, closer supervision at high-risk times and places

Risk management – my cut

• To managers: what keeps you awake at night?

• To clinicians:
– Would you recommend your hospital/unit/ward to any of your relatives or close friends?
– Do you feel safe from professional disillusionment, burn-out, containment, isolation?

• To patients (and carers/families):
– Do you trust the hospital/unit/ward to look after you and have your best interests at heart?
– Do you feel able to voice any concerns about your care freely and openly, without fear of retribution or criticism?

What about teamwork?

Sir Muir Gray 2013

Acute stroke care

Slow improvement

Who’s not providing data?

Yr     Survey responses   Clinical audit data   % responding hospitals providing data   % responding hospitals vs 2007
2007   254                89                    35%                                     100%
2009   206                96                    47%                                     81%
2011   188                108                   57%                                     74%
2013   177                177                   70%                                     70%

Different data sources – different results – QOF NHS

10 stroke QOF targets for general practice
Obtained from entries into computer software

1. The register of patients with stroke or TIA, defined by a list of Read codes
2. % of new patients with presumptive stroke referred for confirmation of the diagnosis by CT or MRI scan
3. % with a record of smoking status in the last 15 months (except known ex-smokers)
4. % who smoke and who received smoking cessation advice or referral to a specialist service, if available
5. % who have a record of BP in the notes
6. % in whom the last BP reading (measured in the last 15 months) is 150/90 or less
7. % with a total cholesterol measured in the last 15 months
8. % whose last measured total cholesterol is 5 mmol/l or less
9. % with non-haemorrhagic stroke, or a history of TIA, with a record that aspirin, an alternative antiplatelet therapy or an anticoagulant is being taken (unless a contraindication or side effects are recorded)
10. % who have had influenza immunisation in the preceding quarter
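Because these targets are computed from coded software entries rather than chart review, each indicator is essentially a query over records. A hypothetical sketch of how a target like number 6 might be computed (all patient IDs, readings, dates and the 15-month window below are invented for illustration):

```python
# Hypothetical sketch of computing a "last BP reading in the last 15 months
# is 150/90 or less" style indicator from coded record entries.
from datetime import date, timedelta

# Each entry: (patient_id, systolic, diastolic, date_measured)
readings = [
    ("p1", 148, 88, date(2014, 3, 1)),
    ("p1", 160, 95, date(2013, 6, 1)),
    ("p2", 152, 84, date(2014, 5, 10)),
    ("p3", 138, 82, date(2012, 1, 1)),  # outside the 15-month window
]
register = {"p1", "p2", "p3"}  # stroke/TIA register (cf. target 1)
audit_date = date(2014, 8, 25)
window = timedelta(days=456)  # roughly 15 months

# Keep each patient's most recent reading inside the window
latest = {}
for pid, sbp, dbp, measured in readings:
    if audit_date - measured <= window:
        if pid not in latest or measured > latest[pid][2]:
            latest[pid] = (sbp, dbp, measured)

meets = sum(1 for sbp, dbp, _ in latest.values() if sbp <= 150 and dbp <= 90)
print(f"{meets}/{len(register)} = {100 * meets / len(register):.0f}%")  # → 1/3 = 33%
```

Note the design choice buried in the query: a patient with no in-window reading counts against the register denominator, which is one way coded-entry indicators and chart review can diverge on the "same" measure.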

Different data sources – different results – RCP guideline

RCP guideline criteria
Obtained from chart review

Acute assessment and management
• Time taken between diagnosis and assessment
• RCP: brain scan and seen in secondary care within 1 day (TIA: 7 days)
• GP contract: referral for confirmation of diagnosis and brain scan after 1 April 2003
• Received rehabilitation

Risk factor monitoring / quality of secondary prevention
• BP: <150/90 (GP contract); <140/85 (RCP)
• BP in diabetics: <145/85 (GP contract); <130/80 (RCP)
• Cholesterol: <5 mmol/l (GP contract); <3.5 mmol/l (RCP)
• Smoking status, BMI, alcohol intake

Drug therapy
• Antiplatelet therapy (or recorded contraindication) for non-haemorrhagic stroke/TIA
• Warfarin use in patients with atrial fibrillation
• Use of ACE inhibitors or thiazides
• Flu vaccination

Different data sources – different results

Williams, de Lusignan Inform Prim Care 2006


Health service evaluation and clinical audit

• Audits must be properly designed with adequate statistical power and conducted over the full audit cycle

• Mixed methods are needed in uncovering the ‘why’ of suboptimal care and deciding ‘how’ to improve

• Given limited resources and the need to promote clinician engagement, audits should be conducted (?predominantly) at the unit/department level for well defined conditions/scenarios for which there is evidence or at least strong expert consensus as to what constitutes optimal care
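On adequate statistical power, a back-of-envelope calculation shows the sample an audit cycle needs. A sketch using the standard normal-approximation formula for comparing two proportions (the baseline and target compliance rates are assumed for illustration, not drawn from the talk):

```python
# Back-of-envelope power calculation: patients needed per audit period to
# detect a rise in a compliance rate, via the normal-approximation formula
# for two proportions. Baseline and target rates below are assumed.
import math

def n_per_group(p1, p2, z_alpha=1.96, z_power=0.84):
    # z_alpha: two-sided alpha of 0.05; z_power: ~80% power
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# e.g. baseline compliance 60%, hoping to demonstrate improvement to 75%
print(n_per_group(0.60, 0.75))  # → 152 patients per audit period
```

Many single-unit audits fall well short of such numbers, which is one reason for the emphasis above on proper design over the full audit cycle.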

Challenges in revalidation

• Requires equitable distribution of measures across all performance attributes required of the modern professional

• Clinical expertise and decision-making receive higher weightings

• Not all measures will have equal validity
– chart audits are resource-intense but provide valid information on clinical processes
– patient satisfaction tools are cheap but of questionable validity in assessing the quality of technical care, although perhaps more reliable in assessing communication skills and the attributes of ‘patient-centredness’ and ‘timeliness’

• No proven measures for assessing how well physicians:
– collaborate with others in achieving desired outcomes (teamwork)
– improve patients' knowledge and understanding of their health (patient empowerment)
– keep up to date with new developments (currency of practice)
– appreciate their strengths and weaknesses (insight)

Challenges in revalidation
Agreed standards for satisfactory performance

• A standard or threshold that delimits acceptable performance while minimising the risk of misclassifying physicians as performing well or poorly

• Reliability of assessment is directly related to the strength of the evidence base for management of the clinical conditions being reviewed
– it declines when professional debate ensues over what constitutes appropriate practice

• For most performance measures, an absolute threshold may not exist; even where such a boundary does exist, it is likely to require further adjustment to account for other causes of less-than-perfect performance
– idiosyncratic patient reactions, patient choice
– a threshold of >80% but <100%, or the bottom or top percentiles of a distribution:
• too arbitrary
• unhelpful where performance across physicians is tightly clustered

Challenges in revalidation

Sound measurement properties

• Standardised specifications
– capable of being collected in a reproducible fashion across multiple clinicians and settings of care

• Adequate sampling
– the Joint Commission in the USA recommends at least 10 different measures per physician
– for each chosen measure, the number of observations (sample size) must be large enough to overcome the problem of random variation
– at least 100 patients with diabetes per physician would be needed to achieve 80% reliability for most diabetes-related process-of-care measures
– also effects of clustering
– little correlation with performance on other measures
– statistical process control (SPC) methods confer high sensitivity in detecting unfavourable trends in uncommon adverse outcomes
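One way to see why figures like ‘100 patients for 80% reliability’ arise is to treat the reliability of a physician's mean score as a variance ratio. A sketch, with hypothetical variance components chosen purely for illustration (only their ratio matters):

```python
# Illustrative sketch (not from the talk): reliability of a physician's mean
# process-of-care score over n patients, treated as a variance ratio:
# reliability = var_between / (var_between + var_within / n).
# Variance components below are hypothetical; only their ratio matters.

def reliability(n, var_between, var_within):
    # Spearman-Brown-style reliability of an n-patient average
    return var_between / (var_between + var_within / n)

def patients_needed(target, var_between, var_within):
    # Smallest n whose average reaches the target reliability
    n = 1
    while reliability(n, var_between, var_within) < target:
        n += 1
    return n

vb, vw = 4, 100  # assumed between- vs within-physician variance (ratio 1:25)
print(round(reliability(25, vb, vw), 2))  # → 0.5
print(patients_needed(0.80, vb, vw))      # → 100
```

When true between-physician differences are small relative to patient-to-patient noise, as assumed here, reliable per-physician profiling demands large samples per measure.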

Challenges in revalidation

Feasibility, ease of use and sustainability

• Performance measures must be feasible to collect in an efficient and reliable manner over the long term
– large-scale clinical registries or hospital-level quality initiatives that include clinical data may be reasonably affordable
– particularly if physicians practise wholly or predominantly at one site

• But what about private physicians who visit multiple clinics or small facilities?
– such data collection will be substantially more expensive, and the cost borne personally
– increasing availability of EMRs and automated laboratory data may mitigate some of these expenses

• While discrete outcomes and some processes of care are readily measurable, functional status, which is more relevant to some specialties such as rheumatology or neurology, is more problematic, especially over long periods of follow-up

Challenges in revalidation

• Attribution accuracy and controllability

• Timeliness

• Metric balance

Unintended consequences

Monitoring for unintended adverse consequences

• Performance measurement may, in a few individuals, evoke refusal to provide data, gaming of the data, or even outright fraud
– rules around mandatory participation and careful auditing of data have to be part of any measurement and reporting system

• Physicians may start to focus their efforts on areas being directly measured, to the detriment of other aspects of care not being monitored
– include measures that assess a wide spectrum of care, which, to be sustainable, may require rotating through a set of measures over a cycle of assessment

Self-audit

• Self-collection of personal performance data, reflection on gaps between performance and standards, and development and implementation of learning or quality improvement plans by individual care providers

• Six studies evaluated the impact of self-audit programs

• No program was based on a model or theory that informed its design

• All studies showed improved compliance with care delivery guidelines and/or improved patient outcomes
– although these findings were largely self-reported

• Programs varied, so features associated with benefit could not be identified
– professional quality reflections, learning needs analyses, sentinel event review

• More research is needed to develop training programs and tools that address and evaluate a variety of competencies across different disciplines
– using more rigorous research designs, including both quantitative and qualitative approaches

Gagliardi et al. J Contin Educ Health Prof 2011

Self-audit tools

Challenges in revalidation

• Multiple assessment methods involving multiple reviewers and a variety of data sources are preferred to a single method or data source (or a small number of them), in order to overcome the respective problems of content (or skill) specificity and of bias or inaccuracy in individual data sources

Challenges in revalidation

• Achieving high sampling rates for several different assessment methods, even if they are not highly standardized, probably gives a more accurate picture of overall performance than relying on a small number of methods which, while highly standardized and reliable, are associated with lower rates of sampling

Clinical audit and revalidation

• At the level of assessing performance of individual clinicians for summative purposes, clinical audit by itself is an inadequate tool and should not be relied upon

Disinvestment

Overdiagnosis and overtreatment – marginal medicine

• Majority of audits are about underuse and misuse of clinical interventions, not overuse

• Challenge: relatively few evidence-based measures of ineffective or harmful care
– guidelines need to state ‘do not do’ recommendations more often
– Choosing Wisely campaign in the US
– Elshaug et al.: 150 ineffective items on the MBS

• Level of evidence
– evidence of no benefit vs no evidence of benefit

• Potential negative consequences

Overdiagnosis and overtreatment – marginal medicine

Matthias, Baker JAMA 2013

Disinvestment and clinical audit

• Early days, but disinvestment-focused audits will feature more and more

Closing comments

• Clinical audits are worthwhile, but apply and use them well
– What is the purpose, what is the desired standard of care, and how will we best use the information gained?

• Don't conduct any audit unless you intend to re-audit if care is found to be suboptimal and improvement is thought essential

• Clinicians need to be fully engaged in audits

• Audits related to governance procedures, documentation compliance and other organisational mandates should not dominate over direct patient-care audits

• Enough provocation for one talk

• Questions?