Measuring the quality of diabetes care

Pract Diab Int April 2003 Vol. 20 No. 3 Copyright © 2003 John Wiley & Sons, Ltd. 81

LEADER

How do we reliably measure the quality of care in diabetes? This is an important and topical question, one that is raised by Dr Baksi and colleagues in their paper ‘Continuous quality control in diabetes care – implications’.1 In 2003 we have seen the publication of three important documents that have enormous implications for the delivery of quality diabetes care in the UK. These are the long-awaited Delivery Strategy of the National Service Framework (NSF) for Diabetes,2 its supporting Information Strategy,3 and the proposals for the new GP contract.4 Quality of care is a key theme running through all of these publications and they cannot be regarded in isolation. The NSF emphasises the need for clinical governance to be the local delivery mechanism for ensuring progress to safe, high-quality care, and the GP contract proposals aim to address this through the introduction of a quality and outcomes framework that will systematically and substantially reward practices on the basis of the quality of care delivered.

Performance indicators

What is perhaps surprising, then, is that we do not yet have an outright consensus on those quality measures that are deemed to reflect the quality of care, nor on the interpretation of their analysis. This is not for want of trying. There have been numerous programmes over the past decade or more, local, national and international, that have developed various sets of performance and/or outcome indicators. Most of these are remarkably similar in their content, although probably the most definitive attempt was from the National Centre for Health Outcomes Development.5 However, as part of the NSF activity these various sets of outcome indicators have been collated, and a set of proposed performance indicators was made available for consultation on the NSF website at the time the NSF Standards document was published. While gaining broad support, it was recognised that many of these indicators were aspirational, being currently unmeasurable due to the lack of supporting information systems to gather the data.

It is intended that the next phase of indicator development will be to share the results of this consultation exercise with the Commission for Health Improvement (CHI), soon to be the Commission for Healthcare Audit and Inspection (CHAI), which will be responsible for performance ratings and indicators. One might hope that from this it will not be a very difficult task to develop a standard toolkit of all these measures, with the supporting datasets, data definitions and interpretations, although validation exercises will still be required. This would enable a service to select a specific profile of measures to meet the needs of the quality issues being examined in any specific setting. A national set of indicators drawn from these could also be mandated to obtain national comparisons.

To this end the recently established National Clinical Audit Support Programme (NCASP) has been tasked, in collaboration with CHAI, to develop national comparative clinical audit in adult diabetes, and to provide national benchmarked data, by the end of 2004. This is currently a formidable task given the state of primary and secondary care information systems and the difficulty of collecting and aggregating population-wide data. However, for the year 2004/5 there is a ministerial expectation that national outcome information on diabetes will be available to support progress with the NSF implementation. In the first instance, this may be achievable if the population with diabetes, the denominator, can be defined. This demands knowing not only those persons with diabetes, but also the type of diabetes and the date of diagnosis, along with basic demographic data. Useful national comparative outcome data could then be generated by triangulation of this information with the nationally collected Hospital Episode Statistics (HES) data, always providing confidentiality issues can be adequately addressed.
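The triangulation described above amounts, in principle, to a record linkage between a population diabetes register and hospital episode records keyed on a shared patient identifier. The following is a minimal sketch of that idea only; the identifiers, field names and diagnoses are entirely hypothetical, and a real linkage would need pseudonymised identifiers to meet the confidentiality requirements noted in the text.

```python
# Hypothetical sketch of "triangulating" a diabetes register with
# hospital episode records via a shared (pseudonymised) patient
# identifier. All identifiers and data below are illustrative only.

register = {                      # patient_id -> (diabetes type, year of diagnosis)
    "p01": ("type 1", 1988),
    "p02": ("type 2", 1999),
    "p03": ("type 2", 2001),
}

hospital_episodes = [             # (patient_id, recorded diagnosis)
    ("p02", "amputation"),
    ("p02", "myocardial infarction"),
    ("p04", "fracture"),          # not on the register, so not counted
]

# Count hospital episodes attributable to registered patients, by type.
episodes_by_type = {}
for patient_id, diagnosis in hospital_episodes:
    if patient_id in register:
        diabetes_type = register[patient_id][0]
        episodes_by_type[diabetes_type] = episodes_by_type.get(diabetes_type, 0) + 1
```

Aggregated counts of this kind, rather than the linked records themselves, are what a national comparative outcome report would publish.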

However, a whole new dimension has now been added by the new GP contract, assuming that it is accepted. It identifies diabetes as one of the ten key clinical domains for introducing a quality and outcomes framework based on the best available research evidence. It must be a cause for concern that the quality indicators adopted as part of this diabetes framework will pre-empt much of this national work on developing performance and outcome indicators. Eighteen measures have been selected within diabetes to assess the quality of care. Each is individually weighted, with minimum and maximum thresholds, to gain up to 99 points for diabetes care (there are a total of 550 points available for the 10 clinical domains) that attract financial rewards for the practices where targets are met.

It remains to be seen whether this is a good or a bad thing. On the positive side, any solution, even one that may not be perfect, is better than none, as there will always be opportunities to review and modify the process once it is established; and it must be said that the quality indicators adopted in this framework are very much what one would expect, and are probably non-contentious. On the down side, it may have the effect of removing an element of self-determination by Primary Care Trusts (PCTs) regarding how they assess their local quality issues, as GP information systems will undoubtedly adjust to meet the demands of these quality indicators. There is also the inevitable, lingering concern that setting weighted targets for this points reward system will have the effect of distorting clinical practice, although the high rewards for measuring and achieving good glycaemic control (up to 27 points) and hypertensive control (up to 20 points) are entirely appropriate.
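A threshold-and-weighting scheme of the kind described can be sketched very simply: no points below a minimum achievement threshold, full points at or above a maximum, and a pro-rata share in between. The indicator names, thresholds and point values below are illustrative assumptions only, not the actual figures in the contract proposals.

```python
# Illustrative sketch of a threshold-weighted points calculation in the
# style of the proposed quality and outcomes framework. All indicator
# names, thresholds and point values are hypothetical.

def indicator_points(achieved_pct, min_pct, max_pct, max_points):
    """No points below the minimum threshold, full points at or above
    the maximum, and a pro-rata share in between."""
    if achieved_pct < min_pct:
        return 0.0
    if achieved_pct >= max_pct:
        return float(max_points)
    return max_points * (achieved_pct - min_pct) / (max_pct - min_pct)

# Hypothetical practice results: (% achieving target, min %, max %, points)
indicators = {
    "HbA1c at target":   (62.0, 25.0, 50.0, 16),
    "BP at target":      (48.0, 25.0, 55.0, 17),
    "Retinal screening": (71.0, 25.0, 90.0, 5),
}

total = sum(indicator_points(*v) for v in indicators.values())
```

The same function applied across all eighteen diabetes measures, each with its own weighting, would yield the practice's score out of the 99 points available for the domain.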

Data recording

The success of implementation of these quality frameworks will depend on appropriate and accurate feedback to the practices concerned. Meaningful comparison will be of paramount importance. To this end, the consistency of data recording and casemix issues must be addressed. Standards are the key to all of this. Standardising the coded clinical terms employed, aligning these with related developments such as READ code/SNOMED Clinical Terms, and providing agreed definitions and methods of analysis are essential. As part of the Dataset Development Programme the Diabetes Dataset Project Board,6 one of the supporting activities of the Information Strategy for the Diabetes NSF, is working to develop these standards and manage their submission to the NHS Information Standards Board. Through a phased programme, it will provide a Diabetes User Dataset (already available on the website), a Core Dataset for use in primary and secondary care, a clinical audit dataset and other specific extension diabetes datasets to support specialist care, e.g. paediatric care, eye disease, cardiovascular disease.

The QUIDS (Quality Indicators in Diabetes Services) programme7 has gone some way to identifying and providing solutions to address denominator and casemix issues. Practice-based registers8 may be incomplete for a variety of reasons, but the most significant is the contribution of the undiagnosed population with diabetes. Comparison of the number of people with diabetes on a practice register with that predicted by the demographics of the practice population will be essential in identifying completeness of ascertainment. From both the NSF and quality framework perspectives, this latter denominator will provide the most accurate reflection of the delivery of care in that practice upon which to base rewards. The GP contract proposals do not identify the denominator to be used, but one could envisage that using the predicted rather than the actual population may make many of the targets unachievable in the short term. Nor are the influences of other casemix issues that may confound interpretation addressed, particularly the influence of social deprivation. There would seem to be a substantial amount of additional work necessary to ensure the application of these measures is a fair reflection of practice.
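The completeness-of-ascertainment check described above can be sketched as a comparison of the register count against a prediction built from age-band prevalence rates applied to the practice's age profile. The prevalence figures and patient counts below are illustrative assumptions, not published epidemiological rates.

```python
# Sketch of a completeness-of-ascertainment check: compare a practice
# diabetes register count with the count predicted from the practice's
# age profile. All figures are illustrative assumptions only.

illustrative_prevalence = {   # assumed fraction of each age band with diabetes
    "0-24":  0.003,
    "25-44": 0.015,
    "45-64": 0.045,
    "65+":   0.100,
}

practice_age_profile = {      # hypothetical registered-patient counts
    "0-24":  2400,
    "25-44": 3100,
    "45-64": 2600,
    "65+":   1400,
}

# Predicted number of people with diabetes in this practice population.
expected = sum(practice_age_profile[band] * rate
               for band, rate in illustrative_prevalence.items())

register_count = 210          # hypothetical register size
ascertainment = register_count / expected
```

An ascertainment ratio well below one would suggest a substantial undiagnosed population, and, as the text notes, basing rewards on the predicted denominator rather than the register could make targets unachievable in the short term.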

Quality of care

Quality is, of course, not just about biomedical indicators. Organisational excellence and the perceptions and experience of the service provided to persons with diabetes are equally important. These are briefly addressed within the GP contract, although there have been more extensive pieces of work in recent years to address this. The Audit Commission produced the report Testing Times9 and a very detailed instrument that can be applied by District Audit Services to review the quality of secondary care services. The results of this activity can be reviewed on their website.10 There has also been the development and piloting by the Royal College of Physicians of a self-assessment and external assessment framework, based on the European Foundation for Quality Management framework, suitable for measuring the quality of locality diabetes services, in both primary and secondary care. Patient questionnaires are used in both these assessment processes, but given the importance of the involvement of persons with diabetes in the development and quality of services, there needs to be standardisation in this area too.

While there has been a lot of national activity, much of which is ongoing, it is being driven by different organisations, often with different agendas, and one cannot help but reflect that there is a need for some ‘joined up thinking’ to bring this all together in a single overarching programme. The ultimate goal for assessing the quality of care is for the measures to be entirely derived from information collected as part of the routine process of care. This is one of the central tenets of Information for Health.11 This will only be achieved by the full implementation of electronic patient records, and indeed only then reliably through the integration of care records across the various health communities as envisaged by the Integrated Care Records Services (ICRS) programme.12 Unfortunately, this may still be at least five years away, but if, at the very least, we can achieve consensus on the measures and create the necessary supporting national standards, it should then be reasonably straightforward for the local health communities to incorporate these into their information systems.

References

1. Baksi A, Ball R, Bedford S, et al. Continuous quality control in diabetes care – implications. Practical Diabetes Int 2003; 20(3): 85–88.
2. National Service Framework for Diabetes: Delivery Strategy: www.doh.gov.uk/nsf/diabetes/research/
3. National Service Framework for Diabetes: Information Strategy: www.doh.gov.uk/diabetes/nsf
4. GMS Contract: www.bma.org.uk/ap.nsf/Content/NewGMSContract
5. Home P, Coles J, Mason A, Wilkinson, eds. Health Outcome Indicators: Diabetes. Report of a Working Group to the Department of Health. Oxford: National Centre for Health Outcomes Development, 1999.
6. Information on work in progress to agree specification, dataset and term set for diabetes registers is available at: nww.nhsia.nhs.uk/phsmi/datasets/pages/datasetschedule.asp
7. QUIDS programme: www.nice.org.uk
8. Practice based registers: see: www.nhsia.nhs.uk/phsmi/datasets/pages/
9. Testing Times. Audit Commission, 2000.
10. Audit Commission: www.audit-commission.gov.uk
11. DoH Information for Health 1998 (HSC 1998/168). Building the Information Core – implementing the NHS plan, December 2000. Delivering 21st Century IT Support for the NHS – National Strategic Programme: www.doh.gov.uk/ipu/develop/index.htm
12. Integrated Care Record Services: see www.doh.gov.uk/ipu/whatnew/specs_12d.htm

Dr Nick Vaughan, Consultant Physician
Brighton and Sussex University Hospitals NHS Trust
