
SECTION 1: EVIDENCE-BASED QUALITY IMPROVEMENT, PRINCIPLES, AND PERSPECTIVES

Quality Improvement Methods in Clinical Medicine

Paul E. Plsek, MS

ABSTRACT. This article surveys the methods and tools of quality improvement used today in health care. Specifically, we describe how clinicians can use these methods to impact the clinical practice of medicine. Improvement teams from a variety of health care organizations have reported the successful use of basic methods such as group work, flowcharting, data collection, and graphical data analysis. In addition to these incremental, problem-solving methods borrowed from the industrial practice of improvement, we have also seen the use of specific process design methods in health care applications such as care path development. The pace of change in health care has also led to the practical development of newer methods for rapid cycle improvement. We will review the basic approach behind these methods and illustrate key elements such as the ideas of change concepts and small-scale tests of change. Unfortunately, whereas these methods have been very successful and highly appealing to improvement practitioners, they may also have inadvertently widened a gulf between these practitioners and traditional health-services and clinical researchers. We offer an assessment of this issue and suggest ways to narrow the communication gap. Measurement has also traditionally been a part of the thinking about quality assurance and improvement in health care. We review the new philosophy of measurement that has emerged from recent improvement thinking and describe the use of control charts in clinical improvement. Benchmarking and multiorganizational collaboratives are more recent innovations in the ways we approach improvement in health care. These efforts go beyond simple measurement and explore the why and how associated with the widespread variation in performance in health care. We explore a variety of health care examples to illustrate these methods and the lessons learned in their use. We conclude the article with an overview of four habits that we believe are essential for health care organizations and individual clinicians to adopt to bring about real improvement in the clinical practice of medicine. These are the habits for: 1) viewing clinical practice as a process; 2) evidence-based practice; 3) collaborative learning; and 4) change. Pediatrics 1999;103:203–214; quality improvement methods, clinical medicine, benchmarking, collaboration, rapid cycle improvement.

ABBREVIATIONS. RDS, respiratory distress syndrome; IVH, intraventricular hemorrhage; PDSA, Plan-Do-Study-Act; OB/GYN, obstetrics/gynecology; IHI, Institute for Healthcare Improvement.

Structured quality improvement—the use of systems thinking, data analysis, and teams to bring about better outcomes, higher satisfaction, improved processes, and reduced variation—is now widely practiced in health care. Whereas early efforts were primarily directed at administrative and service processes, the techniques are increasingly being applied to clinical issues as well.1–5

In this article we will examine how clinicians can use the methods of quality improvement to impact the practice of medicine. We will survey the methods of improvement science from general industry and describe the newer models of rapid cycle improvement. We will also show how clinicians are using the statistical methods associated with control charts to improve care for patients. Finally, we will describe how health care leaders are using benchmarking and multiorganizational collaboratives to take advantage of existing variation in practice as an opportunity for learning.

MODELS AND TOOLS IMPORTED INTO HEALTH CARE FROM INDUSTRIAL QUALITY MANAGEMENT

The modern approach to the management of quality in health care borrows heavily from the quality management science in use for decades in general industry.6,7 Industrial quality management science—also known as continuous quality improvement or total quality management—is an eclectic collection of techniques borrowed from the fields of systems theory, statistics, engineering, psychology, and others. Many of these improvement methods have been in use in general industry for more than 50 years, and new techniques are constantly being developed.

Improvement Teams and Project Models

Many health care organizations have used multidisciplinary teams as a mechanism for bringing about improvements in quality. The use of teams for quality improvement goes back to the 1930s and the work of industrial quality experts such as Joseph Juran.8 Improvement teams typically consist of 3 to 9 people who routinely work in the care process under investigation. For example, an improvement team at HealthPartners (Minneapolis, MN) that increased pediatric immunization rates consisted of two pediatricians, a family practitioner, a pediatric head nurse, a clinic manager, and an improvement facilitator.9 The makeup of improvement teams explicitly recognizes that good care often depends on the coordinated activities of many different people; it cannot be the result solely of the action of a single clinician, however well-intentioned or skilled.

From Paul E. Plsek and Associates, Inc, Roswell, Georgia. Mr Plsek is an independent consultant in the field of quality management, with special focus in health care. He is also a Senior Fellow at the Institute for Healthcare Improvement in Boston, MA. Received for publication Sep 9, 1998; accepted Sep 9, 1998. Address correspondence to Paul E. Plsek and Associates, Inc, 1005 Allenbrook Lane, Roswell, GA 30075. PEDIATRICS (ISSN 0031 4005). Copyright © 1999 by the American Academy of Pediatrics.

A quality improvement model, such as the one in Fig 1, usually guides the work of these improvement teams. Models provide a high-level road map to remind the team to explore thoroughly the work process under study, involve key staff, and rely on the scientific method to guide decisions.10 In addition to guiding improvement efforts, such models also establish a common approach and vocabulary for improvement, making it easier for people from different disciplines and backgrounds to work together.

The thinking process that lies behind improvement models should be quite familiar to clinicians. Systematic process improvement is conceptually the same as the scientific method used in research and clinical decision-making. Joseph Juran,11 a leading expert in the field of quality, explicitly acknowledged this connection back in the 1940s when he used the terms "diagnostic journey" and "remedial journey" to describe his idealized approach to industrial quality improvement.

The work of improvement teams is also aided by a set of simple engineering and statistical tools. These methods too have been in wide use in general industry for more than 50 years and are described fully in a variety of references.12–15

Tools for Process Description

A flowchart graphically depicts the sequence of steps in a work process. It is a chronological description of a process. In health care, flowcharts might describe the flow of patients (the process for admitting); information (the handling of lab orders and test results); materials (the flow of supplies from receiving to the operating theater); or thought (the clinical algorithm for the treatment of low back pain). Sometimes, particularly in early process improvement efforts, the flowchart is the only tool needed. As teams document the sequence of activities, they often uncover redundant steps, wasted effort, and unnecessary complexity. In such cases, improvement can be a simple matter of common sense.

Whereas a flowchart describes the process as a chronological sequence, cause-effect analysis seeks to understand the process as a system of causal factors. A cause-effect diagram can be constructed around a clinical area of interest (for example, "what are the factors that lead to good diabetes care?") or a problem area (for example, "what causes nosocomial infections?"). To ensure comprehensive thinking, cause-effect analysis is often guided by explicit consideration of generic categories of factors such as people, equipment, supplies, information, methods, measurements, and environment.16,17
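As a minimal sketch (not part of the original article), a cause-effect diagram can be captured electronically as a simple mapping from the generic factor categories to candidate causes; every cause listed below is a hypothetical placeholder, not a finding.

```python
# Hypothetical sketch: a cause-effect (fishbone) diagram as a mapping from
# generic factor categories to candidate causes for a problem statement.
# The causes below are illustrative placeholders only.

effect = "What causes nosocomial infections?"

cause_effect_diagram = {
    "people":       ["hand-hygiene lapses", "staffing shortfalls"],
    "equipment":    ["inadequately sterilized instruments"],
    "supplies":     ["stock-outs of gloves or gowns"],
    "information":  ["isolation status not flagged in the chart"],
    "methods":      ["inconsistent catheter-care protocol"],
    "measurements": ["infection rates reported too infrequently to act on"],
    "environment":  ["crowded units", "shared bathrooms"],
}

# Print the diagram as an indented outline for team discussion.
print(effect)
for category, causes in cause_effect_diagram.items():
    print(f"  {category}:")
    for cause in causes:
        print(f"    - {cause}")
```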

Tools for Data Collection

The scientific method underlies classic industrial improvement methodology and calls us to be objective in our thinking. Therefore, improvement teams also often use simple tools for data collection.18

Data collection begins with the formulation of a specific question for which we are seeking an answer. For example, a team working to reduce the incidence of pneumothorax in neonates might ask: "what percentage of infants born in our delivery room at less than 30 weeks' gestational age received prophylactic surfactant?"

Having formed specific questions, the improvement team typically gathers data using simple check sheets, data sheets, interviews, and surveys. A check sheet is a form for gathering data that enables one to analyze the data directly from the form. For example, a team working to improve the timeliness of the administration of thrombolytic therapy to chest pain patients in the emergency department might construct a check sheet to study the distribution of times from presentation in the emergency room to administration of the thrombolytic. The check sheet might consist of a sheet of graph paper with a horizontal axis labeled in 10-minute intervals from 0 to 180 minutes. Each time thrombolytic therapy is administered, a responsible nurse places an X in the appropriate 10-minute interval column to indicate the elapsed time since the patient presented. After some time, a histogram of administration times emerges. The mean, spread, and characteristic shape of the time distribution can be read directly from the data collection form. In contrast, a data sheet is a form for recording data for which additional processing is required. In the thrombolytic therapy improvement case, this might be a simple logbook with text and numerical entries for each patient. Interviews and surveys are used when the question of interest involves perceptions. For example, the thrombolytic therapy team might want to monitor patient satisfaction as they change the processes in the emergency department.
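The same check-sheet logic is easy to mimic electronically. The sketch below is my own illustration, not from the article, and the elapsed-time values are invented; it bins door-to-drug times into the 10-minute intervals described above and prints a text histogram so the mean and spread can be read at a glance.

```python
# Illustrative sketch of an electronic check sheet: bin elapsed times
# (minutes from presentation to thrombolytic administration) into
# 10-minute intervals and print a text histogram. Times are invented.

from statistics import mean

elapsed_minutes = [28, 42, 35, 55, 61, 38, 47, 90, 33, 52, 44, 70, 39, 58, 36]

bin_width = 10
bins = {start: 0 for start in range(0, 180, bin_width)}

for t in elapsed_minutes:
    start = min((t // bin_width) * bin_width, 170)  # clamp to the last interval
    bins[start] += 1

for start, count in bins.items():
    if count:
        print(f"{start:3d}-{start + bin_width - 1:3d} min | {'X' * count}")

print(f"n = {len(elapsed_minutes)}, mean = {mean(elapsed_minutes):.1f} min")
```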

Fig 1. Quality improvement project model. Such models provide high-level, problem-solving guidance to improvement teams. This model is based on the work of Joseph Juran, as described in Plsek PE, Onnias A, Early J. Quality Improvement Tools. Wilton, CT: Juran Institute; 1989.


Tools for Data Analysis

Data collection leads naturally to data analysis. Keeping in mind that the purpose of the data is to stimulate constructive change, this analysis must be simple enough for everyone in the care process to understand. So, although quality management practitioners do use computational statistical methods such as analysis of variance and tests of hypothesis (for example, t tests), simple graphical analysis methods are preferred because more people understand them. Experience in health care has shown that although the physicians, nurses, and secretaries on a team might not be equally facile with an analysis of variance table, they can all learn to look for certain departures from the classic bell-shaped curve in a simple histogram. Therefore, simple tools such as bar charts, histograms, line graphs, and scatter diagrams are the staples of data analysis in quality improvement projects.

It is important to note here that although the classic methods of industrial quality improvement are based on the scientific method, they were never intended to stand up to the rigor demanded in a full-scale medical research study. Berwick19 has suggested that improvement efforts represent a new type of knowledge-generating mechanism that lies somewhere between the currently unacceptable option of simply living with the rampant variation in daily practice, and the full rigor of relatively expensive and time-consuming traditional research. Others have made similar points about the potential pitfalls of over-reliance on classic statistical methods in quality improvement and other decision-making situations.20,21

Tools for Collaborative Work

Widespread staff involvement and unprecedented cooperation across traditional boundaries are key principles of quality improvement whose roots again go back to the 1940s industrial practice of quality management.22,23 Collaborative work is especially critical to success in improving health care because health care is a system of interdependent resources, the functioning of which depends primarily on how well we communicate with one another.24 To support this collaborative work, quality management practitioners have borrowed several tools (ie, brainstorming, nominal group technique, and conflict resolution methods) from fields such as psychology and organizational development.25–27

Models and Tools for Process (Re)Design

Whereas the graphical and team tools cited above are most often used to improve existing processes, industrial quality science also includes models and tools for planning or redesigning processes.28–31

The first step in these design efforts typically involves defining the aim of the new process based on analysis of customer needs. Two illustrative health care examples are Gustafson and colleagues'32 use of the critical incident technique (also called moments of truth analysis) to guide the redesign of information-giving processes in breast cancer care at the University of Wisconsin Medical Center (Madison, WI) and Niles and colleagues'33 use of focus groups to guide the redesign of cardiac care at the Dartmouth-Hitchcock Medical Center (Lebanon, NH).

Process (re)design project teams typically proceed by constructing flowcharts of an ideal process to meet the customers' needs. These flowcharts lead naturally to focused planning for the internal handoffs between individuals and departments that are often the sources of process breakdowns. Next, the design team reviews the ideal process step-by-step and plans for realistic contingencies using techniques such as failure modes and effects analysis.34 Finally, the design team plans for the measurements and controls that are needed to assure quality.

The importance of process design has led to the development of new tools specifically adapted for health care. Critical paths (also called care paths, clinical paths, care maps, and other names) are multidisciplinary, high-level process design efforts that specify key milestones in the care process for patients in a given diagnostic category.35,36 These explicit milestones aid coordination of care, help reduce length of stay, improve quality of care, increase patient and family involvement, and enhance cross-departmental cooperation. Similarly, clinical guidelines and algorithms outline the process of clinical decision-making and thereby help focus the design of clinical practice.37,38 Both critical paths and guidelines are used widely in the design of health care processes, with good reported results over several clinical topics.39–41 Both are described further in an article by Bergman42 in this collection.

RAPID CYCLE IMPROVEMENT IN HEALTH CARE

Although most improvement teams that use the classic, industrial quality management methods described above find them effective in making continuous improvement in health care processes, many leaders complain about the slow pace of such efforts. This frustration has led to the recent development of enhancements to these traditional methods that can be generally described under the umbrella of rapid cycle improvement.

Based on empirical observation in four health care organizations that had accelerated their improvement efforts, Alemi and colleagues43 offer several suggestions for speeding up the improvement process. They suggest that the pace of improvement efforts can be accelerated by a combination of such things as:

• Being thoughtful about topic selection.
• Using good meeting skills and optimizing time management.
• Focusing on testing the change you wish to make, rather than detailed analysis of the current process.
• Collecting only the data you really need. (The authors go so far as to suggest that a team identify the data elements it thinks it needs, concoct a set of data, analyze that data fully, and then reflect on whether all the data requested is really essential to forming conclusions!)
• Thinking about replication of changes throughout the organization from the very beginning of the project by widely sharing information about the project in progress.

Model for Rapid Cycle Improvement

Whereas Alemi and colleagues43 offer general advice for accelerating improvement, Nolan, Langley, and their colleagues44,45 have formalized much of this learning in a new model for improvement (see Fig 2). The theory behind this model, supported by experience, suggests that organizations able to make rapid gains have an underlying capacity to answer three questions:

What are we trying to accomplish? (aim)
Example: Reduce respiratory distress syndrome (RDS) and intraventricular hemorrhage (IVH) by 50% in infants 501 to 1500 g.

How will we know if a change is an improvement? (measurement)
Example: Incidence of RDS and IVH in infants 501 to 1500 g.

What changes can we make that will result in an improvement? (change concept, explained below)
Example: Adherence to National Institutes of Health Consensus Conference recommendations for antenatal corticosteroid treatment.46

Based on the answers to these three questions, the theory and experience suggest that successful organizations then set in place small-scale tests of change in Plan-Do-Study-Act (PDSA) cycles. That is, they systematically plan a specific adaptation of the change concept that they think will move them closer to the stated aim, they do it, they study the effect of the intervention through measurements, and they act on their learning to start another cycle. (The concept of building knowledge through PDSA cycles has a rich and long history. Shewhart47 and Deming48 brought the concept into industrial quality management, but its roots can be traced to philosopher John Dewey. The similar models of Donald Schon49 and David Kolb50 in the fields of education and organizational development represent a parallel development.)

Figure 3 provides an example of a small-scale test of change for the aim stated above. The team's plan is to work with one obstetrics/gynecology (OB/GYN) practice to implement a specific process to identify at-risk women and administer the recommended treatment. Note that the small-scale nature of the change involves working with a single OB/GYN practice. The team will learn something by doing this. Of course, several cycles will be needed to accomplish the overall aim; but the key point is that nothing in a system of care will change until something starts changing. Rapid, small-scale PDSA cycles build the momentum of change, something that is often missing in large-scale change efforts based on comprehensive data collection and one-time, all-or-nothing implementation.

It is important to note that measurement must also be tailored to fit the small scale of a particular cycle. By working with only a single OB/GYN practice, the team cannot expect to significantly alter the overall RDS and IVH rates in its neonatal intensive care unit. Instead, they have chosen an appropriate measurement for this specific test of change: a before-and-after comparison of the percentage of at-risk women from this particular OB/GYN practice who have properly received antenatal corticosteroid treatment. In perhaps a few weeks' time the team can learn enough about the process it has developed with this one OB/GYN practice to begin planning the next cycle of change.

In rapid cycle improvement, pace is crucial. The theory suggests—and experience confirms—that it is better to run small cycles of change soon, rather than large ones after a long time. The reason is that each cycle, properly done, is informative, and provides a basis for further improvement. The more cycles, the more learning.51
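To make the cycle structure concrete, here is a minimal sketch (my own illustration, not from the article; all names and entries are hypothetical) of how a team might log the fixed aim, measure, and change concept alongside successive PDSA cycles so that the learning from one cycle feeds the next.

```python
# Hypothetical sketch: a minimal log structure for rapid-cycle improvement.
# The aim, measure, and change concept stay fixed while PDSA cycles accumulate.

from dataclasses import dataclass, field

@dataclass
class PDSACycle:
    plan: str    # the specific small-scale test of change
    do: str      # what was actually carried out
    study: str   # what the measurements showed
    act: str     # decision feeding the next cycle

@dataclass
class ImprovementProject:
    aim: str
    measure: str
    change_concept: str
    cycles: list = field(default_factory=list)

project = ImprovementProject(
    aim="Reduce RDS and IVH by 50% in infants 501 to 1500 g",
    measure="Incidence of RDS and IVH in infants 501 to 1500 g",
    change_concept="Adhere to NIH consensus recommendations for antenatal corticosteroids",
)

project.cycles.append(PDSACycle(
    plan="Pilot an at-risk screening checklist with one OB/GYN practice",
    do="Checklist used for all eligible admissions during a two-week trial",
    study="Share of at-risk women properly treated rose in the pilot practice",
    act="Refine the checklist wording; recruit a second practice for the next cycle",
))

for i, cycle in enumerate(project.cycles, start=1):
    print(f"Cycle {i}: {cycle.plan} -> {cycle.act}")
```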

The simple tools of classic, industrial quality management science can also be useful in these rapid cycle improvement efforts. For example, the team might find it useful to construct a flowchart of the new process to be followed in the OB/GYN practice. They might also jointly develop a simple check sheet to record data on at-risk women and display these data on bar charts. The caution here goes back to the advice of Alemi and colleagues52 cited above. To maintain an appropriate pace in the improvement effort, the team should optimize its use of meeting time, focus on flowcharting the future process rather than consuming time documenting the existing process, and question how much data are really needed to develop a level of comfort that the new procedures are effective enough to warrant moving on to the next cycle of change.

Fig 2. A model for rapid-cycle improvement. Based on concepts described in Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco, CA: Jossey-Bass; 1996.


Change Concepts

The notion of a change concept is an important aspect of rapid cycle improvement. A change concept is a general idea that responds to the third question in the model of Fig 2: "What changes can we make that will result in an improvement?" Nolan and Schall53 note:

"Ideas for change can come from a variety of sources: critical thinking about the current system, creative thinking, observing the process, a hunch, an idea from the scientific literature, or an insight gained from a completely different situation. A change concept is a general idea, with proven merit and sound scientific or logical foundation, that can stimulate specific ideas for changes that lead to improvement."

In the above example, "adherence to National Institutes of Health Consensus Conference recommendations for antenatal corticosteroid treatment" is a change concept. It is a generally good idea, in this case based on high-grade evidence from the scientific literature, which should lead to the accomplishment of the aim.

Change concepts are generally good ideas, not specific ideas ready to implement. The task of the rapid cycle improvement team is to adapt the change concept to their specific context and map out the specifics of implementation. This adaptation to local context is key. It increases commitment to the change and explicitly recognizes that the local practice of medicine has some inherent level of appropriate variation.

The notion of change concepts has a slipperiness to it that should be acknowledged but should not be allowed to get in the way of its practical use. Concepts can be expressed at different levels of abstraction.54,55 For example, a higher level of abstraction for the change concept above might be the statement: "Adhere to evidence-based, consensus guidelines." This is a more general statement of a good idea for change that could be applied to many clinical issues. The change concept that the team used is simply a more specific statement in reference to the team's aim, but still not specific enough to say exactly what tests of change the team should try in its local context. How abstract or specific we should make our statement of change concepts is primarily a matter of style and situation. In reality, there is always a range of possible concept statements from the very abstract to the very specific. As we move up the ladder of abstraction, the concepts should be based on sound logic and good sense. We could say that change concepts should fit one of the grades of evidence from the literature on evidence-based medicine.56 As we move down the ladder, the concepts and specific tests of change should be logically connected, increasingly sensitive to local context, and increasingly concrete.

The fact that change concepts can be expressed at various levels leads to the notion that change concepts can be catalogued and made generally available to aid improvement work. Such a catalogue should greatly accelerate improvement efforts by avoiding long searches for potentially useful ideas to try in PDSA cycles. Several such catalogues have been proposed, covering topics such as waits and delays, medical errors, asthma care, and intensive care.57–62 It is probably only a matter of time before a unified catalogue of change concepts appears.

Fig 3. Sample report from a rapid cycle improvement effort. Hypothetical, for illustration purposes only.

To illustrate the utility of a catalogue of change concepts, consider the experience of two teams involved in rapid cycle improvement through participation in Breakthrough Series Collaboratives sponsored by the Institute for Healthcare Improvement (IHI, Boston, MA). The IHI regularly assembles change concept catalogues in specific clinical areas based on literature reviews and recommendations from a panel of experts. A team at The Mayo Clinic (Rochester, MN) had as its overall aim to improve the outpatient care provided to asthma patients in the Family Medicine Clinic. Through participation in the IHI project, the team learned about the following change concepts: build capacity for routine assessment of patient outcomes (through the use of standardized assessment tools); reduce unintended variations in care (through simple guidelines and in prescribing practices); streamline the process of care (as it applies to both anticipatory patient management and acute episodes); and build information systems capacity (to establish routine feedback of data on outcomes). By implementing these concepts through a dozen or so small-scale cycles of change, the Mayo team reduced hospitalization rates in various age groups by 23% to 47% and reduced emergency department visits by 22%.63 Similarly, a team from Centura–St. Anthony Central Hospital (Denver, CO) implemented 3 change concepts from a list of 11 concepts developed by an expert group on adult intensive care. By adapting these 3 concepts to their local context—create a formal process, consider people to be in the same system, and reach agreement on expectations—the team decreased readmission to the intensive care unit from 15.6% to 9.8%, increased overall patient and family satisfaction from 70% to 95%, decreased costs for sedation drugs by 80%, and decreased average direct cost per case from $5900 to $4100.64

Future Directions and Critical Assessment

With the ever-increasing pressure for change in health care, rapid cycle improvement methods and catalogues of change concepts are welcomed enhancements to classic quality improvement methods. The model of Fig 2 and the concept of small-scale tests of change to establish momentum stress implementation of what is known to improve care. Health care desperately needs to implement what it knows.

The current practice of rapid cycle improvement could be strengthened, however, by the more thoughtful use of evidence grading in the presentation of change concepts. Several evidence-grading systems exist and most allow a category for expert opinion based on experience, observation, and common sense.65 The current catalogues of change concepts do not always explicitly grade their recommendations, although some of the recommendations are based on sound studies published in peer-reviewed journals. The absence of explicit grading of evidence, and the fact that some change concepts are obviously only opinion, has unnecessarily allowed critics of rapid cycle improvement to label the entire effort as unscientific (which it is not).

Advocates of rapid cycle improvement could also help spread the methods more widely in health care by having the minimal discipline to use simple tests of significance (or control charts, explained below) in reporting overall results. The small-scale, rapid cycle nature of the PDSA cycles that are at the heart of the method obviously (and correctly, in my opinion) precludes the use of tests of significance in each cycle of change. However, since the aim and overall measurement do remain fixed during a period of months as the various cycles of change take place, data accumulate throughout time, allowing overall tests of significance in at least a before and after comparison. Most reports on rapid cycle improvement projects, such as those cited above, merely state the overall improvement (such as "from 15.6% to 9.8%") or show a time series of measures in run chart form. To avoid getting bogged down in massive data collection at every cycle of change, proponents of rapid cycle improvement are sometimes unnecessarily defiant in avoiding the use of statistical techniques, even when they have accumulated enough data during several cycles of change to enable the use of such techniques. The lack of reports of statistical significance, even when such tests would be easy to do, widens the gulf between practitioners of practical, real-time improvement of health care and those demanding more experimental rigor, unnecessarily delaying the diffusion of good ideas that will improve health care for all.
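As an illustration of how little extra work such a test requires, the sketch below (my own example, not from the article; the counts are invented and only loosely echo the readmission change quoted above) applies a standard two-proportion z-test to a before-and-after comparison of accumulated cycle data.

```python
# Hypothetical sketch: a two-proportion z-test for a before/after comparison
# accumulated over several PDSA cycles. Counts are invented for illustration.

from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Return (z, two-sided p-value) comparing proportions x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# e.g. 47 readmissions out of 300 ICU stays before the changes,
#      29 readmissions out of 295 stays after (both counts invented).
z, p = two_proportion_z_test(47, 300, 29, 295)
print(f"before = {47/300:.1%}, after = {29/295:.1%}, z = {z:.2f}, p = {p:.3f}")
```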

THE USE OF MEASUREMENT TO SUPPORT IMPROVEMENT IN HEALTH CARE

Measurement of performance has always been an integral part of the quality management sciences. Measurement provides the feedback loop in the system that helps one maintain performance at a desired level (quality control and assurance) and signals the need for change or the accomplishment of productive change (quality improvement). Similarly, there is a long tradition of measurement in health care. The introduction of quality management science in health care organizations should be seen as building on, not tossing out, the rich tradition and current efforts in the measurement of performance in health care.

A New Philosophy of Measurement

Although quality management science builds on the tradition of measurement in health care, it also encourages us to seek three new objectives. First, quality management science encourages us to expand the scope of our thinking about what is important to measure by prominently featuring the perceptions of patients and families as valid indicators of quality, in addition to clinical and professionally based views of performance. Second, quality management science focuses on cross-functional processes and suggests that we view measurements as integrated systems that must be managed by cross-functional teams rather than having one set of measures tracked by medical staff, another set by nursing, and still another by administration. Third, quality management science calls into question the traditional use of measurement as a way of allocating rewards and punishments to individuals. Berwick's66 seminal article on this topic, in which he described the "search for bad apples" that seems to characterize our traditional approach to measurement, is widely cited as the trigger that initiated the introduction of industrial quality management science into health care.

Control Charts

A specific measurement tool from the quality management sciences that does warrant mention here in this overview of improvement methods for clinical medicine is the control chart.67–69 A control chart is a line graph of data with superimposed horizontal lines indicating statistically derived upper and lower control limits (Fig 4). These upper and lower limits indicate the range of variability that one would expect if the variation is subject only to small, randomly occurring factors that are inherent in the process (so-called common cause variation). If the measured data points fall randomly within the control limits, we say that the process is stable and predictable; performance will continue to fall within the limits as long as the process remains as it is. Furthermore, we can also assert that further improvements in performance can only come about through fundamental changes in the process itself. Reacting to individual ups and downs in the data within the control limits (for example, "infection rates were higher this month, I'll speak to the staff and admonish them to do better") is called tampering and is likely to be counterproductive.70,71

When data values fall outside the control limits, or exhibit certain unnatural patterns within the control limits, there is statistical evidence of a so-called special cause. The evidence in the data suggests that the variation is not random and we should, therefore, be able to isolate the source and remove it from the system.

Carey and Lloyd72 provide several case studies of the use of control charts for clinical quality measurement of laboratory turn-around time, cesarean section rate, patient falls, use of restraints, and medication errors. Nelson and colleagues73 give examples of the use of control charts in monitoring length of stay, time to therapy, and duration of therapy for community-acquired pneumonia (Dartmouth Medical School, Lebanon, NH). Use of a control chart explicitly recognizes that such indicators will naturally vary from reporting period to reporting period. Both wringing our hands in the Quality Assurance committee because the numbers are a little worse this month, and congratulating ourselves when they are a little better, might well be a waste of effort.

Control charts can also be useful at the level of care to individual patients. Carey and Lloyd74 provide an example of the use of a control chart to help a physician and her cancer patient who has undergone an autologous bone marrow transplant make sense of the variation in platelet counts from numerous daily blood draws. Gibson and colleagues75 (John Hunter Hospital, New South Wales) report on the use of control charts to help physicians understand the degree of variability in individual asthma patients' peak expiratory flow rates. The analysis of the control charts in this case enabled physicians and patients to work together to individualize care plans to manage asthma exacerbations at an earlier stage.

Fig 4. Control chart. This is an example of what control chart practitioners call a "p-chart." The data indicate the percentage of orders in a daily, random sample of 150 at an outpatient pharmacy for which patients had to wait more than 10 minutes. The mean (average) percentage of untimely orders during the 20-day period was 18.6% (0.186). This is indicated by the solid, horizontal center line on the chart. The standard deviation of this time series of data was calculated using the formula SD = √{[mean percentage × (1 − mean percentage)] / [sample size]}, where here the mean percentage is 0.186 and the sample size is 150. This is the normal approximation of the standard deviation of the binomial distribution and indicates the expected variation because of random causes (what control chart practitioners call "common cause variation"). The upper and lower control limits (UCL and LCL, respectively) are set by convention at ±3 SD beyond the mean. "X" on the control chart indicates nonrandom (special cause) variation. Special cause variation is defined by run rules based on normal distribution theory. Data points for days 8 and 9 are marked because 2 out of 3 consecutive points lie more than 2 SD beyond the mean. Data points for days 15 to 18 are marked because 4 out of 5 consecutive points lie more than 1 SD beyond the mean. Both of these are indications of statistically significant shifts in the performance of the process.
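For readers who want to reproduce this arithmetic, here is a brief sketch (my own illustration; the daily counts are invented and are not the data behind Fig 4) that computes the p-chart center line and ±3 SD limits from daily samples of 150 orders and flags any day that falls outside the limits. The run rules described in the caption (2 of 3 points beyond 2 SD, 4 of 5 beyond 1 SD) would be layered on top of the same center line and SD.

```python
# Hypothetical sketch of p-chart limits: proportion of untimely orders per day,
# with control limits at +/- 3 SD using the binomial (normal approximation).
# Daily counts are invented for illustration; they are not the Fig 4 data.

from math import sqrt

sample_size = 150  # orders sampled each day
untimely_per_day = [30, 25, 28, 33, 27, 24, 29, 41, 44, 26,
                    28, 31, 23, 27, 16, 15, 14, 17, 30, 26]

proportions = [x / sample_size for x in untimely_per_day]
p_bar = sum(untimely_per_day) / (sample_size * len(untimely_per_day))
sd = sqrt(p_bar * (1 - p_bar) / sample_size)
ucl, lcl = p_bar + 3 * sd, max(p_bar - 3 * sd, 0.0)

print(f"center line = {p_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
for day, p in enumerate(proportions, start=1):
    flag = "outside limits (special cause)" if p > ucl or p < lcl else ""
    print(f"day {day:2d}: {p:.3f} {flag}")
```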


Several clinicians have reported on the use of control charts to respond appropriately to variation in blood sugar readings for diabetics.76,77 Laffel and colleagues78 have analyzed continuous data streams from intensive care unit patients that suggest that physicians may be over-medicating patients in response to natural variation in readings. Further clinical applications of control charts will surely develop in the coming years.

BENCHMARKING AND ITS APPLICATION TO CLINICAL IMPROVEMENT

Benchmarking is the process of comparing one's performance to that of others.79–81 The formal practice of benchmarking is a relatively recent innovation in the quality management sciences, having been developed in the 1980s at Xerox. Benchmarking begins with standardized, comparative measurement, but true benchmarking goes deeper to understand why there are performance differences between seemingly similar processes.

Building Process Knowledge

It is important to underscore the phrase "true benchmarking" in the preceding paragraph. Many activities that are called benchmarking in health care are actually only comparisons of performance indicators across clinicians or organizations. Although such indicator benchmarking provides information about where one stands relative to peers, it gives little information about why someone else's performance is better. Knowing why something is better is the key to improving one's own processes. O'Connor and colleagues82 point out the inherently "insular nature of clinical medicine" and go on to note that "the inadequacy in the detail of the information on current clinical practice prevents knowledge of the fine structure of care and makes studies linking practice to outcomes difficult." Process benchmarking explicitly seeks to uncover knowledge about this fine structure, knowledge that can lead to improvement.

Specific methods for uncovering this knowledge vary. Usually, but not always, these details are uncovered through site visits. For example, the members of the Northern New England Cardiovascular Disease Study Group83 conducted site visits to each other's locations to observe the processes of cardiac surgery. Multidisciplinary teams consisting of surgeons, nurses, perfusionists, and industrial engineers spent a day or two at the host institution observing the entire cardioarterial bypass graft process from the catheterization conference through postoperative care. The hosts conducted business as usual, while the visitors focused on identifying similarities and differences compared with their own processes. The visitors and hosts then worked together to document the observations from each site visit in reports that Kasper et al84 describe as "candid and high in information content." Information gleaned from these visits led to a 24% reduction in in-hospital mortality after cardioarterial bypass graft surgery among the 23 cardiothoracic surgeons who participated in the effort (P < .001).

The Vermont-Oxford Network NIC/Q Project used a similar series of site visits to study practices that led to reductions in nosocomial infection rates and better care for infants with chronic lung disease.85 Before embarking on day-long visits, the multidisciplinary visiting teams conducted internal studies of their own local practices and identified specific questions to explore during the visit. Benchmarking experts cite such careful preparation as one of the keys to successful benchmarking. It is the essence of the distinction that Garvin86 makes between true benchmarking and "industrial tourism."

The drawback of benchmarking studies, such as those cited above, is the time and cost associated with site visiting. However, there are examples of successful benchmarking efforts that address this drawback. For example, the SunHealth Alliance, a partnership of more than 200 hospitals, employs a central council to identify topic areas and collect relevant performance indicators from its members.87 From these data, the council selects a group of 10 to 20 organizations, including some poor performers, some superior performers, and some in between. Then, instead of a series of site visits, each organization brings its flowcharts and other process documents to a multiday meeting, during which they pore over the documents and data to cull the best practices. Finally, members of the collaborative select the best practices that fit well with their own environment, develop an action plan, implement changes, and report back on results.

Although this approach eliminates the time and cost of numerous site visits, it does presume a relatively high level of sophistication in the use of the process analysis and data collection tools of quality management. This can be a significant drawback, as it is not uncommon for clinicians and staff to be unaware of why their performance is superior or inferior to peers. The quality of the benchmarking analysis is also limited by the information that groups bring with them to the meeting, whereas in a site visit this information might be more routinely uncovered by the probing of outsiders not steeped in the traditions of the host organization. Despite the potential for drawbacks in this approach, SunHealth has reported improvements through group benchmarking in the clinical areas of circulatory disorders with cath (DRG 124), pneumonia (DRG 89), total hip replacement (DRG 209), acute myocardial infarction (DRG 122), and angioplasty (DRG 112).88 Using a similar approach of group meetings, the 12 hospitals of the Benchmarking Effort for Networking Children's Hospitals (BENCHmark) also reported significant reductions in waiting times and costs.89

UniHealth, another large, multiorganization consortium, further streamlines the benchmarking process by using a full-time staff department to assess comparative data and conduct site visits at the best performing organizations.90 The investigative staff then share the best practice concepts with other UniHealth organizations through a series of regional conferences. Implementation is at the discretion of the individual facilities.

In a project on joint replacements, the UniHealth benchmarking effort with superior performing institutions identified three best practice concepts: early referral to physical therapy, preadmission visits by home health staff, and standardization of surgical prosthetics.91 Note that good benchmarking studies, regardless of the format used, produce change concepts that could feed rapid, small-scale PDSA cycles for implementation. Replication of the joint replacement change concepts by one UniHealth hospital resulted in reduced costs per case of $2000 and a 50% length of stay reduction (9 days to 4.5 days).

The potential drawback in this method of using centralized staff to conduct the site visits is the lack of involvement of local care-process staff who will need to make the changes necessary to bring about improvement. Visiting an organization and actually seeing a different way to do something is much more personally convicting in regard to the need for change than is reading about something or hearing a report.

Potentially Better Practices

The term "best practices" from the benchmarking literature carries an unfortunate connotation that can impede the deployment of benchmarking as a clinical improvement technology in health care. Because of the strong research tradition in health care, the term best practices implies to some that the identified practices are the result of an exhaustive search and rigorous experimental verification. This is not the intent of benchmarking. Benchmarking is practical and action-oriented in its analysis; it is not a rigorous research methodology. Such rigor is not demanded in the industrial world where the concepts of benchmarking and best practices were developed.

Although it is awkward, I prefer the phrase "potentially better practices" to describe what health care benchmarking efforts are after. This phrase suggests that we are looking for good improvement ideas that have both the logical appeal and experience in practice to suggest they might lead to improvements in our own organization. We will not know if they are truly better until we adapt them to our local context, implement them, and measure the results. Because we cannot be sure, we must guard against unintended adverse outcomes, as we would in introducing any variation in practice. We will also not know whether such better practices are generalizable across all health care organizations until we conduct a formal, controlled trial. This may not be practical and, so, we may never know whether the practices are generalizable; we will only know if they worked or failed when we tried them. Finally, we should hold no illusions about these practices being the best in any absolute sense. We are limited by the organizations and experts we have consulted; even better practices may be out there somewhere and will most certainly appear in the future.

Benchmarking as a way to get ideas for improvement is a promising technology that breaks through the isolation that many clinicians report as the underlying cause for the well-documented variation in clinical practice in health care. However, benchmarking can be a time-consuming and expensive tool. It is certainly not indicated when appropriate change concepts are already available in the literature or through common knowledge. On the other hand, benchmarking may be the only way we will ever uncover new knowledge for improvement in some clinical areas where a program of systematic, randomized trials is impractical because of such things as the large number of potential factors to study, the rapid pace of technological change, or the nature of the intervention itself. Doing benchmarking well requires good observational skills, rather sophisticated knowledge of one's own local practices, and the ability to transform the learning from a visit into real change in local practice. Benchmarking may not be the easiest way to start with an improvement effort, but on some topics it may ultimately be the only way to go.

CLINICAL IMPROVEMENT THROUGH MULTIORGANIZATION COLLABORATIVES

The history of the improvement methods that we have discussed so far is that they were developed primarily to assist individuals and individual organizations in meeting improvement goals. Even in benchmarking, which obviously requires at least one other partner to learn from, the improvement effort could be in only one of the partners.

More recently, in general industry but especially in health care, practitioners are discovering the power of collaborative efforts across multiple organizations that have banded together under a common improvement aim.92 Within these collaborative groups, organizations can pool data and information resources, learn from the rich set of experiences represented by the variation in practice, think outside the box of their local culture and customs, discuss ways to overcome common barriers, and mutually encourage one another through the tough processes of change.

We have already seen some examples of this type of collaborative effort. The IHI Breakthrough Series Collaboratives, the Northern New England Cardiovascular Study Group, and the Vermont-Oxford Network NIC/Q Project are illustrative of the many improvement collaboratives that have emerged in health care.

Key Concepts Behind Collaboratives

Figure 5 lists the key concepts behind these collaboratives.93 To be successful, collaborative members must have enough variability in practice among them to make coming together for comparisons useful. Further, members must have the willingness to share this variation openly. Unfortunately, the trend toward mergers and acquisitions in the health care industry is placing stress on multiorganizational improvement collaboratives. Organizations can be understandably reluctant to share the details of their performance on quality indicators if they fear that these data might be used to put them at a competitive disadvantage. To avoid this difficulty, some collaboratives are formed solely from organizations within an existing health care system, other collaboratives explicitly select organizations that are not direct competitors, and others require members to agree formally to full disclosure within the group but strict nondisclosure to parties outside the group.

Fig 5. Key concepts behind successful improvement collaboratives.

Collaborative members must also have or acquire the skills both to know their own internal processes deeply and to inquire wisely into the processes of others. The tools of classic quality management—flowcharts, data collection, group working, benchmarking, and so on—can be valuable here. Collaborative improvement efforts do not replace an organization's quality management efforts; rather, they depend and build on them.

Finally, collaborative members must have the commitment and skills to implement what they learn—replicate the potentially better practices the collaborative uncovers—and determine through measurement whether there has been an actual improvement in the local context. An improvement collaborative cannot be all talk and no action.

Current Results and Future Trends

The Northern New England Cardiovascular Study Group was the first large collaborative to report statistically significant clinical results with the lowering by 24% of in-hospital mortality after cardiac surgery.94

Organizations in the IHI's Breakthrough Series collaboratives have made many dramatic improvements in waiting times, asthma care, cardiac surgery, cesarean section rates, adverse drug events, adult intensive care, and others. The article by Kilo95 in this collection provides more details on these efforts.

Similarly, Horbar, Rogowski, and Plsek96 describe significant reductions in infection rates and chronic lung disease among the 10 centers who participated in the Vermont-Oxford Network NIC/Q Project collaborative. Importantly, this group is the first major effort to report not only significant results in a before and after comparison within participating organizations, but also a statistically significant result when the group of 10 centers who participated in the collaborative were compared with a larger database of neonatal intensive care units not in the study. More comparisons to secular trends such as this are needed to boost confidence in the effectiveness of these methods.

Improvement collaboratives seem to be springing up throughout health care, sponsored by a variety of organizations. I have been personally involved in collaboratives sponsored by The HMO Group (a trade association of managed care plans, New Brunswick, NJ); the Centers for Disease Control (Atlanta, GA); the peer review organizations in Massachusetts and New York; The Health Care Forum (San Francisco, CA); the Voluntary Hospitals of America (Dallas, TX); the Ontario CQI Network (Toronto, Canada); the National Health Service in Britain (Buckinghamshire, England); and the Health Care System of Sweden (Stockholm, Sweden). Undoubtedly, there are many more than these in all corners of the United States and around the world.

Collaboration to improve health care represents the finest traditions of medicine. It remains to be seen whether the emergence of these many collaborative groups is a true reflection of those deep traditions, or merely another passing fad in health care. It also remains to be seen whether the increasing pressures of competition and intellectual property protection in the health care industry will close off future collaboration on clinical issues.

The building of new knowledge through multiorganizational collaboratives benefits us all. Patients obviously benefit from improved outcomes and reduced risk of complications. Clinicians and other health care professionals also benefit through the intrinsic reward of contributing to improved care. Clinicians involved in collaborative improvement efforts almost universally report enjoying the experience of interacting with colleagues on focused clinical issues. It is one way to rise above the ever-encroaching business issues and refocus on the care in health care.

CONCLUSION: DEVELOPING FOUR HABITS FOR IMPROVEMENT

Although not widespread yet, there is considerable evidence building that the methods surveyed here are effective approaches for clinical improvement. Certainly, there is a great deal of case-by-case evidence of effectiveness. However, it would be naive to ignore the many anecdotal reports of improvement efforts that fail. These failed efforts do not often appear in the literature; there is considerable positive reporting bias.

At the same time, we should not be surprised that we are not able to report the generalizable effectiveness of interventions such as quality management with the same degree of certainty that we can a new drug therapy applied to a patient population. The difference lies in understanding the underlying stability and predictability of the systems in which we are intervening.

In the drug therapy case, our intervention is in human body systems. Human body physiology is remarkably stable across the population, at least at the macrolevel. If a certain drug therapy is effective in patients in an experimental study group, we can be reasonably certain that it will be effective in the population as a whole.

Such is not the case when we talk about interventions in human organizations and care process systems. Such systems can be vastly different from site to site. The effectiveness of an intervention is, therefore, inherently context-dependent. The methods of improvement must be adapted for use in the local setting. The fact that there is evidence of the effectiveness of improvement methodologies in at least some settings indicates that these methods can be effective. However, it says little about whether they will be effective in a given situation. Such is the nature of organizational and process interventions.

Nevertheless, we can speculate on the characteristics of organizations that are more likely to be successful most of the time with the improvement methods described here. Such organizations will likely demonstrate four key habits:

The Habit of Viewing Clinical Practice as a Process

Good health care depends on the complex coordination of many factors, and the efforts of many people. Clinicians and health care organizations that are able to improve care routinely will inherently begin from this premise and will increasingly reject the more restrictive, discipline-based view of care.

The Habit for Evidence-based Practice

Although it is probably not appropriate to completely eliminate variation in health care, clinicians and health care organizations that routinely improve will accept the fact that there is rampant unintended variation in health care. A significant proportion of the care delivered daily is not consistent with what we know to be most effective. Clinical improvement is primarily about the continuous effort to bring the daily practice of health care more in line with our knowledge of what works.

The Habit for Collaborative Learning

Although there is a great deal of knowledge for improvement available in the literature as a result of standard research techniques such as randomized controlled trials, the knowledge of what makes for good care processes is currently locked up in unexamined variation in practice. The only way we may ever get at this knowledge is through collaborative learning with others. Improvement-oriented individuals and organizations will start from the premise that it is better to be open and curious than defensive.

The Habit for Change

No matter how much we know, improvement only comes about when we do something differently. Clinicians and health care organizations that are successful at improvement know that improvement requires change. Although we certainly do not want to do harm or to make change for change’s sake, holding fast to “the way we have always done it” is a prescription for mediocrity. Successful practice requires continual change.

The explicit development of these habits will be a major focus of the Vermont-Oxford Network Evidence-Based Quality Improvement Collaborative for Neonatology, a new multiorganizational collaborative described in more detail by Horbar.97

Continuous improvement in the clinical practice of medicine is in the finest traditions of medicine. It is an undeniable fact of the past and the present. The methods of improvement described here should not be taken as an indication that past efforts were wrong and should be discredited. Rather, these new methods, along with those that will surely be developed in the future, should be seen as building on the past tradition of improvement in health care. These new methods can help us further accelerate the pace.

REFERENCES

1. Blumenthal D. Total quality management and physician's clinical decisions. JAMA. 1993;269:2775–2778
2. Carlin E, Carlson R, Nordin J. Using continuous quality improvement tools to improve pediatric immunization rates. Jt Comm J Qual Improv. 1996;22:277–288
3. Heckman M, Ajdari SY, Esquivel M, et al. Quality improvement principles in practice: reduction of umbilical cord blood errors in the labor and delivery suite. J Nurs Care Qual. 1998;12:47–54
4. Berwick DM, Nolan TW. Physicians as leaders in improving health care: a new series in Annals of Internal Medicine. Ann Intern Med. 1998;128:289–292
5. Lowers J. Reducing variation in cataract surgery technique cuts complications. The Quality Letter for Healthcare Leaders. 1998;10:19
6. Berwick DM, Godfrey AB, Roessner J. Curing Health Care: New Strategies for Quality Improvement. San Francisco, CA: Jossey-Bass; 1990
7. Laffel G, Blumenthal D. The case for using industrial quality management science in health care organizations. JAMA. 1989;262:2869–2873
8. Juran JM. Managerial Breakthrough. New York, NY: McGraw-Hill; 1964
9. Carlin E, Carlson R, Nordin J. op. cit.
10. Plsek PE. Quality improvement models. Quality Management in Health Care. 1993;2:69–81
11. Juran JM. op. cit.
12. Plsek PE, Onnias A, Early JF. Quality Improvement Tools. Wilton, CT: Juran Institute, Inc; 1989
13. Plsek PE. Techniques for the management of quality. Hospital and Health Services Administration. 1995;40:50–79
14. Ishikawa K. Guide to Quality Control. New York, NY: UNIPUB; 1985
15. Wadsworth HM, Stephens KS, Godfrey AB. Modern Methods for Quality Control and Improvement. New York, NY: John Wiley & Sons; 1986
16. Ishikawa K. op. cit.
17. Smith GF. Determining the cause of quality problems: lessons from diagnostic disciplines. ASQ Qual Management J. 1998;5:24–41
18. Plsek PE. Planning for data collection. Quality Management in Health Care. 1994;2:69–81
19. Berwick DM. Harvesting knowledge from improvement. JAMA. 1996;275:877–888
20. Cohen J. The earth is round (P < .05). Am Psychol. 1994;49:997–1003
21. Jordan HS. How rigorous need a quality improvement study be? Jt Comm J Qual Improv. 1995;21:683–691
22. Juran JM. op. cit.
23. Ishikawa K. What Is Total Quality Control: The Japanese Way. Translated by David J. Lu. Englewood Cliffs, NJ: Prentice-Hall; 1985
24. Ferguson S, Howell T, Batalden P. Knowledge and skills needed for collaborative work. Quality Management in Health Care. 1993;2:1–11
25. Scholtes PR. The Team Handbook. Madison, WI: Joiner Associates; 1988
26. Mosel D, Shamp MJ. Enhancing quality improvement team effectiveness. Quality Management in Health Care. 1993;1:47–57
27. Clemmer TP, Spuhler VJ, Berwick DM, Nolan TW. Cooperation: the foundation for improvement. Ann Intern Med. 1998;128:47–57
28. Juran JM. Juran on Quality By Design. New York, NY: The Free Press; 1992
29. Plsek PE. The systematic design of health care processes. Quality in Health Care. 1997;6:40–48
30. Larson JA. Healthcare Redesign Tools and Techniques. New York, NY: Quality Resources; 1997
31. Lenz PR, Sikka A. Reengineering Health Care: A Practical Guide. Tampa, FL: American College of Physician Executives; 1998
32. Gustafson DH, Taylor JO, Thompson S, Chesney P. Assessing the needs of breast cancer patients and their families. Quality Management in Health Care. 1992;2:6–17
33. Niles N, Tarbox G, Schults W, et al. Using qualitative and quantitative patient satisfaction data to improve the quality of cardiac care. Jt Comm J Qual Improv. 1996;22:323–335
34. Plsek PE. FMEA for process quality planning. Proceedings of the 1989 Annual Quality Congress of the American Society for Quality Control. Milwaukee, WI: ASQC; 1989
35. Mosher C. Upgrading practice with critical pathways. Am J Nurs. 1992:41–44
36. Coffey RJ, Richards JS, Remmert CS, et al. An introduction to critical paths. Quality Management in Health Care. 1992;1:45–54
37. Woolf SH. Practice guidelines: a new reality in medicine. Part 1: Recent developments. Arch Intern Med. 1990;150:1811–1817
38. Green E, Katz JM. Practice guidelines: a standard whose time has come. J Nurs Care Qual. 1993;8:23–32
39. Murrey KO, Gottlieb LK, Schoenbaum SC. Implementing clinical guidelines: a quality management approach to reminder systems. Qual Rev Bull. 1992;18:423–433
40. Gregor C, Pope S, Werry D, Dodek P. Reduced length of stay and improved appropriateness of care with a clinical path for total knee or hip arthroplasty. Jt Comm J Qual Improv. 1996;22:617–628
41. Rudisill PT, Phillips M, Payne CM. Clinical paths for cardiac surgery patients: a multidisciplinary approach to quality improvement outcomes. J Nurs Care Qual. 1994;8:27–33
42. Bergman D. Evidence-based guidelines and critical pathways for quality improvement. Pediatrics. 1999;103(suppl):225–232
43. Alemi F, Moore S, Headrick L, et al. Rapid improvement teams. Jt Comm J Qual Improv. 1998;24:119–129
44. Nolan TW, Schall MW. Reducing Delays and Waiting Times Throughout the Healthcare System. Boston, MA: Institute for Healthcare Improvement; 1996
45. Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco, CA: Jossey-Bass; 1996
46. NIH consensus development panel on the effect of corticosteroids for fetal maturation on perinatal outcomes. JAMA. 1995;273:413–418
47. Shewhart WA. Economic Control of Quality of Manufactured Product. New York, NY: Van Nostrand; 1931
48. Deming WE. Out of the Crisis. Cambridge, MA: Center for Advanced Engineering Study; 1986
49. Schon DA. Educating the Reflective Practitioner. Toward a New Design for Teaching and Learning in the Professions. San Francisco, CA: Jossey-Bass; 1988
50. Kolb DA. Experiential Learning. Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice Hall; 1984
51. Berwick DM. Developing and testing changes in delivery of care. Ann Intern Med. 1998;128:651–656
52. Alemi et al. op. cit.
53. Nolan TW, Schall MW. op. cit.
54. Provost LP, Langley GJ. The importance of concepts in creativity and improvement. Quality Progress. 1998;31:31–38
55. Plsek PE. Creativity, Innovation, and Quality. Milwaukee, WI: ASQ Quality Press; 1997
56. US Dept of Health and Human Services, Public Health Service. Clinician's Handbook of Preventive Services: Put Prevention into Practice. Washington, DC: US Government Printing Office; 1994
57. Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco, CA: Jossey-Bass; 1996
58. Altov H. The Art of Inventing: And Suddenly the Inventor Appeared. Translated and adapted by Lee Shulyak. Worcester, MA: Technical Innovation Center; 1994
59. Leape LL. Error in medicine. JAMA. 1994;272:1851–1857
60. Nolan TW, Schall MW. Reducing Delays and Waiting Times Throughout the Healthcare System. Boston, MA: Institute for Healthcare Improvement; 1996
61. Weiss KB, Mendoza G, Schall MW, Berwick DM, Roessner J. Improving Asthma Care in Children and Adults. Boston, MA: Institute for Healthcare Improvement; 1997
62. Rainey TG, Kabcenell A, Berwick DM, Roessner J. Reducing Costs and Improving Outcomes in Adult Intensive Care. Boston, MA: Institute for Healthcare Improvement; 1996
63. Weiss et al. op. cit.
64. Rainey et al. op. cit.
65. Guyatt HG, Sackett DL, Sinclair JC, et al. User's guide to the medical literature. IX. A method for grading health care recommendations. JAMA. 1995;274:1800–1804
66. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med. 1989;320:53–56
67. Shewhart W. op. cit.
68. Plsek PE. Introduction to control charts. Quality Management in Health Care. 1992;1:65–74
69. Carey RG, Lloyd RC. Measuring Quality Improvement in Healthcare: A Guide to Statistical Process Control Applications. New York, NY: Quality Resources; 1995
70. Deming WE. op. cit.
71. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med. 1989;320:53–56
72. Carey, Lloyd. op. cit.
73. Nelson EC, Splaine ME, Batalden PB, Plume SK. Building measurement and data collection into medical practice. Ann Intern Med. 1998;128:460–466
74. Carey, Lloyd. op. cit.
75. Gibson PG, Wlodarczyk J, Hensley MJ, Murree-Allen K, Olson LG, Saltos N. Using quality-control analysis of peak expiratory flow recordings to guide therapy for asthma. Ann Intern Med. 1995;123:488–492
76. Nelson EC, Splaine ME, Batalden PB, Plume SK. Building measurement and data collection into medical practice. Ann Intern Med. 1998;128:460–466
77. Oniki TA, Clemmer TP, Arthur LK, Linford LH. Using statistical quality control techniques to monitor blood glucose levels. Proc Ann Symp Comput Appl Med Care. 1995:586–590
78. Laffel G, Luttman R, Zimmerman S. Using control charts to analyze serial patient-related data. Quality Management in Health Care. 1994;3:70–77
79. Tucker FG, Zivan SM, Camp R. How to measure yourself against the best. Harvard Bus Rev. 1987;65:8–10
80. Camp RC. Benchmarking: The Search for Industry Best Practices That Lead to Superior Performance. Milwaukee, WI: Quality Press; 1989
81. Gift RG, Mosel D. Benchmarking in Health Care: A Collaborative Approach. New York, NY: American Hospital Publishing; 1994
82. O'Connor GT, Plume SK, Wennberg JE. Regional organization for outcomes research. Ann N Y Acad Sci. 1993;703:44–51
83. O'Connor GT, Plume SK, Olmstead EM, et al. A regional intervention to improve the hospital mortality associated with coronary artery bypass graft surgery. JAMA. 1996;275:841–846
84. Kasper JF, Plume SK, O'Connor GT. A methodology for QI in the coronary artery bypass grafting procedure involving comparative process analysis. Qual Rev Bull. 1992;18:129–133
85. Horbar JD, Rogowski J, Plsek PE, et al. Collaborative quality improvement for neonatal intensive care. Pediatr Res. 1998;43:177A
86. Garvin DA. Building a learning organization. Harvard Bus Rev. 1993;71:78–92
87. Patrick M, Alba T. Health care benchmarking: a team approach. Quality Management in Health Care. 1994;2:38–47
88. ibid.
89. Porter JE. The benchmarking effort for networking children's hospitals (BENCHmark). Jt Comm J Qual Improv. 1995;21:395–406
90. Kennedy M. Benchmarking clinical processes. Qual Lett Healthcare Leaders. 1994;6:6
91. ibid.
92. Plsek PE. Collaborating across organizational boundaries to improve the quality of care. Am J Infect Control. 1997;25:85–95
93. ibid.
94. O'Connor GT, Plume SK, Olmstead EM, et al. op. cit.
95. Kilo C. Improving care through collaboration. Pediatrics. 1999;103(suppl):384–393
96. Horbar JD, Rogowski J, Plsek PE, et al. op. cit.
97. Horbar JD. The Vermont-Oxford Network: evidence-based quality improvement for neonatology. Pediatrics. 1999;103(suppl):350–359
