The Effectiveness of a Multicenter Quality Improvement Collaborative in Reducing Inpatient Mortality

Eugene Kroch, PhD, Michael Duan, MS, John Martin, MPH, Richard Bankowitz, MD, MBA, and Marla Kugel, MPH

From Premier, Inc, Charlotte, NC, and the Leonard Davis Institute of the University of Pennsylvania, Philadelphia, PA.
Correspondence: Eugene Kroch, PhD, Premier, Inc, 113034 Ballantyne Corporate Place, Charlotte, NC 28277 (e-mail: [email protected]); or Leonard Davis Institute of the University of Pennsylvania, 3641 Locust Walk, Philadelphia, PA 19104 (e-mail: [email protected]).
The authors disclose no conflict of interest. This study had no external funding and was fully funded by the employer of all study authors, Premier Inc.
Copyright © 2015 Wolters Kluwer Health, Inc. All rights reserved. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License, where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially.
ORIGINAL ARTICLE. J Patient Saf, Volume 11, Number 2, June 2015. www.journalpatientsafety.com

Motivation and Background: This study examines the evidence that a particular quality improvement collaborative that focused on Quality, Efficiency, Safety and Transparency (QUEST) was able to improve hospital performance.

Setting: The collaborative included a range of improvement vehicles, such as sharing customized comparative reports, conducting online best practices forums, using 90-day rapid-cycle initiatives to test specific interventions, and conducting face-to-face meetings and quarterly one-on-one coaching sessions to elucidate opportunities.

Methods: With these kinds of activities in mind, the objective was to test for the presence of an overall QUEST effect via statistical analysis of mortality results that spanned 6 years (2006-2011) for more than 600 acute care hospitals from the Premier alliance.

Results: The existence of a QUEST effect was confirmed from complementary approaches that include comparison of matched samples (collaborative participants against controls) and multivariate analysis.

Conclusion: The study concludes with a discussion of those methods that were plausible reasons for the successes.

Key Words: hospital quality, collaborative improvement, inpatient mortality

(J Patient Saf 2015;11:67-72)

At present, there is growing interest in the use of improvement collaboratives as a means of more rapidly achieving positive change, and several large-scale multicenter projects have begun. The evidence for the effectiveness of collaboratives as a means of accelerating improvement, however, has been mixed.3,14 The reasons for the lack of uniformity may include the highly variable nature of the collaboratives and the assortment of methods and strategies used. It is essential, therefore, that we begin to understand whether and what collaborative methodologies are effective and in what contexts. This article presents the results of one such collaborative in reducing inpatient mortality, details the methods used, and discusses possible reasons for the successes.

On January 1, 2008, the Premier health care alliance launched a multiyear performance improvement collaborative focusing on Quality, Efficiency, Safety and Transparency (QUEST). The hypothesis was that, with the use of the power of collaboration, improvement would occur at a more rapid pace. The framework in QUEST, through a strategic partnership with the Institute for Healthcare Improvement (IHI), used a specific improvement model8 that attributes lack of improvement in health care to a failure of will, a failure of ideas, or a failure of execution. The QUEST framework also used experience gained in an earlier initiative, the Premier Hospital Quality Incentive Demonstration (HQID), a 6-year project with the Centers for Medicare and Medicaid Services.9 That demonstration was successful in achieving its primary goal, adherence to evidence-based medicine (EBM)1,11; however, an impact on mortality has yet to be demonstrated.5,16,17

In this article, we test the hypothesis that participants in a structured collaborative were more successful in reducing mortality than hospitals that did not participate. To do so, we start with a comparison of risk-adjusted mortality trends between the 2 cohorts of hospitals, the initial charter members of QUEST and a group of non-QUEST hospitals that had access to the same software quality improvement tools. We also test the hypothesis using a multivariate model that isolates the effect of QUEST from other factors that may have impacted mortality.4,6,7,13

METHODS

Collaborative Execution Framework

The methods used in the QUEST collaborative (described later) gave rise to the specific requirements for participation, namely, (1) commitment of senior leadership including the CEO, (2) use of a standard set of data analytic products that enabled capture of measurement data, and (3) an agreement that all data would be transparent within the collaborative. The QUEST collaborative framework also required agreement on specific measures and methods to be used as well as agreement on the definition of top performance targets. With regard to the mortality improvement target, participants elected to study risk-adjusted, all-cause, hospital-wide mortality. The participants examined 3 potential methods for risk adjusting the data and chose to use the method initially developed by CareScience,10,15 which adjusts for palliative care and comorbid conditions among other factors. A target performance of an observed-expected (O/E) mortality ratio of 0.82 was chosen because it represented the lowest 25th percentile of mortality in the baseline period.

Each participant received a report on a quarterly basis, highlighting the institution's O/E value and distance from the goal, which also included a breakdown of the clinical subgroups that represented the highest areas of opportunity for improvement. Participants also had access to an analytical tool that allowed them to explore data in great detail. Participants could see the performance of all fellow participants and could drill into the data to find top performers in any given area. The CEO of each institution was also provided with a yearly performance summary.

In addition, Premier staff examined the pooled data to determine the greatest opportunities for mortality reduction (conditions where the number of deaths greatly exceeded the model prediction). Through another aspect of the collaborative, staff also became aware of substantial variation in the approach to and documentation of secondary diagnoses and in particular palliative care.
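The O/E target logic described above can be sketched numerically. The example below uses invented discharge counts and a simple nearest-rank percentile; the real expected rates come from the proprietary CareScience risk-adjustment model, so the `oe_ratio` inputs here are stand-ins:

```python
# Sketch of the O/E mortality ratio used as the QUEST target metric.
# All numbers are illustrative; the actual expected deaths are produced
# by the CareScience risk-adjustment model, which is not public.

def oe_ratio(observed_deaths, expected_deaths):
    """Observed-to-expected mortality ratio for one hospital."""
    return observed_deaths / expected_deaths

def percentile(sorted_vals, p):
    """Nearest-rank percentile; a deliberately simple convention."""
    k = max(0, int(round(p / 100 * len(sorted_vals))) - 1)
    return sorted_vals[k]

# Hypothetical baseline-period O/E ratios for a small set of hospitals.
ratios = sorted([0.78, 0.82, 0.95, 1.01, 1.07, 1.13, 1.20, 1.31])

# QUEST set its target (0.82) at the 25th percentile of baseline O/E;
# with this made-up sample the same cutoff falls out.
target = percentile(ratios, 25)
print(target)  # → 0.82
```

A ratio below 1.0 means fewer deaths were observed than the risk model predicted, which is why the target sits below the typical baseline value.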


To support QUEST hospitals in improving their performance in the delivery of evidence-based care, Premier provided several offerings, ranging from educational calls, customized action plans, and Web-based resources to sprints and mini collaboratives. A sprint is a short-term, rapid-cycle improvement education series designed to drive and sustain change in specific indicators or processes of care. Mini collaboratives are more intensive 6- to 9-month improvement initiatives focused on a specific condition, disease state, or process of care.

Measuring the Collaborative's Impact

All observational information and data were derived from a database maintained by Premier, which includes a pooled hospital cross-section time-series sample consisting of approximately 36 million deidentified inpatient discharges from approximately 650 hospitals during a 6-year time frame.

We took 2 approaches to measuring the impact of the QUEST collaborative on mortality. The first approach, largely descriptive, tracked hospital mortality trends over the 4 years since the start of the collaborative, comparing QUEST participants with other Premier hospitals that had access to the same software tools but did not participate in the collaborative (non-QUEST group). For the descriptive trend analysis, mortality was measured in each calendar quarter by comparing the observed mortality rate and the expected risk-adjusted mortality rate at the hospital level. The O/E ratios were tracked across a baseline period of the third quarter of 2006 through the second quarter of 2007 and a performance period of the first quarter of 2008 through the last quarter of 2011.

The second approach was to conduct a multivariate analysis that uses a cross-section time-series regression model to isolate a QUEST effect from other factors that might explain both hospital effects and time trends. Table 1 summarizes the sample, giving the hospital counts by year, QUEST status, and cohort. Note that we have a full 2 years of non-QUEST data before the launch of QUEST in 2008. This formal inferential setting makes it possible to conduct hypothesis tests on the timing and strength of the QUEST effect.

TABLE 1. Study Hospitals and QUEST Status

QUEST Status      Y2006   Y2007   Y2008   Y2009   Y2010   Y2011
Charter member      --      --     141     141     136     136
Class 2009          --      --      --      29      28      28
Class 2010          --      --      --      --      36      35
QUEST subtotal       0       0     141     170     200     199
Non-QUEST          366     373     321     324     392     424
Total              366     373     462     494     592     623

Some hospitals may not have consecutive data across the entire time frame.

Because coding practices can affect the expected mortality generated from the risk adjustment predictive model selected for QUEST, an analysis of coding practices was performed on the 2 cohorts to determine any factors that might be contributing to the observed differences. Non-QUEST hospitals were matched on bed size, urban/rural location, teaching status, and geographical region, and we examined the International Classification of Diseases, Ninth Revision, diagnosis data from the 141 hospitals in the QUEST cohort and 141 hospitals in the matched non-QUEST cohort. Because the palliative care code is considered an important marker for mortality risk, we specifically examined the frequencies of that code (V66.7).

Multivariate Analysis

In the multivariate analysis, the QUEST effect is inferred from a parametric estimation of a general regression model, which has the following functional form:

y_hq = β·x_hq + ε_hq

where y_hq is the O-E difference (O-E Diff) mortality rate of hospital h at quarter q. The vector x_hq includes hospital characteristics, data evolving control factors, and QUEST indicators. β is the marginal effect of the independent variables on the O-E Diff rate, and ε_hq is the random error component of the model.

In the model, hospital characteristics include bed size, teaching status, rural or urban location, and geographic area location. Data control factors are specified as yearly and quarterly (for seasonality) dummy variables. They are intended to capture the effect of general evolution of clinical practice and coding completeness during the study period. Descriptive statistics of the variables are in Table 2.

TABLE 2. Descriptive Statistics of the Regression Data

                                        QUEST    Non-QUEST   Overall
No. observations (hospital quarters)     2851       7749      10,600
Observed mortality rate, average         1.90%      2.03%      2.00%
Expected mortality rate, average         2.62%      2.35%      2.42%
O-E Diff mortality rate, average        -0.72%     -0.32%     -0.43%
Beds (0-99)                             14.5%      20.5%      19.2%
Beds (100-199)                          18.7%      21.3%      25.1%
Beds (200-399)                          37.9%      34.6%      30.7%
Beds (400+)                             28.8%      23.7%      25.1%
Teaching (COTH)                         16.6%      12.5%      13.6%
Rural location                          16.2%      26.6%      23.8%
Northeast                               15.6%      12.8%      13.6%
Midwest                                 29.1%      18.8%      21.5%
South                                   42.3%      43.1%      42.9%
West                                    13.0%      25.3%      22.0%
Y2006                                    0.0%       9.3%       6.8%
Y2007                                    0.0%      18.9%      13.8%
Y2008                                   19.8%      16.0%      17.1%
Y2009                                   23.9%      16.0%      18.1%
Y2010                                   28.5%      18.6%      21.3%
Y2011                                   27.8%      21.1%      22.9%

COTH, Council of Teaching Hospitals.

The QUEST effects on risk-adjusted mortality are modeled at 3 levels of parametric restriction. The first, most constrained model specification contains a fixed QUEST effect (specified as a binary QUEST flag) and a QUEST linear trend effect, represented by the number of quarters that have passed since a hospital joined QUEST. The flag is turned on for those quarters when the hospital participated in QUEST. For example, a hospital that joined QUEST in the first quarter of 2010 will get the flag turned on starting with that quarter and for all subsequent quarters of participation. If a hospital drops from QUEST, the flag is turned off.

In the second model specification, the linearity of the trend effect is relaxed by interacting the QUEST flag with annual time effects (yearly dummy variables). The third model specification is the least constrained, allowing for cohort effects, as well as flexible time effects by interacting the QUEST flag with the cohort as well as the year. A hospital's cohort is determined by the year it joined QUEST, that is, charter membership (starting in 2008), the class that started in 2009, and the class of 2010.

Another variant of the model introduces full hospital effects, which effectively removes the influence of all potential latent effects, thereby isolating the timing effects. This variant is applied to each of the 3 aforementioned model specifications: version b, as distinct from the original version a described earlier. Version b is a type of Heckman specification2 to remove selection bias. In this setting, all hospital effects are represented by dummy variables, which replace all control traits, such as size, teaching status, location, and the like. Finally, version c of the model introduces hospital random effects to account for the correlation of observations over time within a hospital and the correlation of patients within a hospital. Hence, we allow for nonindependence of hospital disturbances over time, which, if treated as fixed, have the potential of inflating the significance of the QUEST effect. If the QUEST effect were purely based on self-selection into the collaborative, then the time effects would vanish in favor of sorting.
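The most constrained specification (a QUEST flag plus a quarters-since-joining trend) can be sketched with ordinary least squares on synthetic data. This is not the paper's estimator or data, just a minimal illustration of how the flag and trend regressors are built and fit; the -0.18 effect is planted to mirror the reported model 1a coefficient:

```python
# Sketch of the most constrained specification (model 1a): the O-E Diff
# mortality rate regressed on an intercept, a binary QUEST flag, and a
# QUEST trend (quarters since joining). All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Flag is 1 in hospital-quarters of QUEST participation, else 0.
quest_flag = rng.integers(0, 2, n).astype(float)
# Trend counts quarters since joining; it is 0 when the flag is off.
quest_trend = quest_flag * rng.integers(1, 16, n).astype(float)

# Planted truth: a -0.18 percentage-point QUEST level effect and a zero
# trend effect, echoing the paper's model 1a pattern.
y = 0.4 - 0.18 * quest_flag + 0.0 * quest_trend + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), quest_flag, quest_trend])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # beta[1] recovers roughly -0.18; beta[2] is roughly 0
```

The models 2 and 3 described above differ only in the design matrix: the flag column is replaced by flag-by-year (and flag-by-cohort-by-year) dummy columns.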

RESULTS

TABLE 3. Hospital Characteristics of QUEST and Non-QUEST Cohorts (QUEST n = 136 and non-QUEST n = 317 with no constraints; QUEST n = 123 and non-QUEST n = 135 with data in all quarters; rows report bed size and other characteristics in percentages)

[The median O/E mortality ratio in the baseline] period for QUEST hospitals and non-QUEST hospitals was 0.98 and 1.07, respectively. By the end of the 4-year performance period [...]

Figure 3 depicts the results of a paired t test of matched hospitals, which tests the null hypothesis that, between the QUEST cohort and the non-QUEST controls, the difference in the number of secondary diagnoses per patient is zero (heavy horizontal line at 0.00). The point estimate of that difference in our sample is the boundary between the lower half (dash-shaded region) of the 95% confidence interval and the upper half (stripe-shaded region). Although the point estimate does not cross over the zero line until the middle of 2010, the confidence interval contains the zero line in all but 5 quarters, so in only those 5 quarters was the number of secondary codes per patient in each cohort statistically different.

FIGURE 3. Paired t test of QUEST and non-QUEST: average number of secondary diagnoses per patient.

Similar results were found in an analysis of the use of the palliative care code (V66.7), which was a code that QUEST members were particularly attentive to because it is a significant contributor to the risk of mortality. Despite the potential to differentiate QUEST participants, we found no statistical differences between the 2 matched comparison cohorts. Indeed, although the non-QUEST controls had a slightly lower use rate of the code at the beginning of the study period, they had a higher rate by the fourth quarter of 2010 (Fig. 4).

FIGURE 4. QUEST and non-QUEST comparison: palliative care coding.

Multivariate Analysis

All 3 alternative specifications of the regression model demonstrated statistically significant QUEST effects for risk-adjusted mortality. The size of the effect varied depending on the model specification and whether controls for latent hospital effects were introduced into the model. However, the estimated coefficients were highly consistent across the restrictions, as summarized in Table 4.

The results from models 1a, 2a, and 3a assume that latent effects do not bias the results, whereas models 1b, 2b, and 3b (the Heckman variation) are included in Table 4 to show how the introduction of fully interacted hospital effects changes the estimates of the QUEST effect. This model variant effectively removes the influence of potential latent effects, thereby isolating the timing effects and removing selection bias. In this setting, all hospital effects are represented by dummy variables, which replace all control traits, such as size, teaching status, location, and the like. The random-effects versions of the models are represented in specifications 1c, 2c, and 3c. In each variant of the 3 model specifications, the full fixed-effects and random-effects results are that the QUEST effects are attenuated to approximately half of the more restricted model, which demonstrates that self-selection alone cannot explain all of the QUEST effect. More specific results for each model are described in the following sections.

TABLE 4. QUEST Effect in 9 Specifications

Variable         1a      1b      1c      2a      2b      2c      3a      3b      3c
QUEST_Flag     -0.18   -0.09   -0.12
QUEST_Trend     0.00    0.00    0.00
QUEST_Y2008                            -0.20   -0.10   -0.13
QUEST_Y2009                            -0.20   -0.12   -0.14
QUEST_Y2010                            -0.09   -0.04    0.00
QUEST_Y2011                            -0.21   -0.05   -0.08
Charter_Y2008                                                  -0.20   -0.10   -0.13
Charter_Y2009                                                  -0.23   -0.13   -0.16
Charter_Y2010                                                  -0.10   -0.04    0.00
Charter_Y2011                                                  -0.19   -0.02   -0.06
Class09_Y2009                                                  -0.05   -0.04   -0.06
Class09_Y2010                                                  -0.03   -0.01   -0.02
Class09_Y2011                                                  -0.25   -0.13   -0.17
Class10_Y2010                                                  -0.11   -0.05   -0.02
Class10_Y2011                                                  -0.23   -0.05   -0.09

R2 = 0.244 (1a), 0.638 (1b), 0.244 (2a), 0.638 (2b), 0.244 (3a), 0.639 (3b); R2 is not reported for the random-effects specifications (1c, 2c, 3c). Items in bold indicate that the coefficient is significant at the 0.01 level.
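The paired t test behind the Figure 3 comparison can be sketched with standard-library tools. The counts below are invented (the actual comparison paired 141 matched hospitals per cohort), and the 2.365 critical value is the standard two-sided 5% cutoff for 7 degrees of freedom:

```python
# Sketch of a paired t test comparing average secondary-diagnosis counts
# between matched QUEST and non-QUEST hospitals. Data are illustrative.
from math import sqrt
from statistics import mean, stdev

def paired_t(sample_a, sample_b):
    """t statistic for the paired differences a_i - b_i."""
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

quest     = [14.2, 13.8, 15.1, 14.7, 13.9, 14.4, 15.0, 14.1]
non_quest = [14.0, 13.9, 15.2, 14.5, 14.1, 14.3, 14.8, 14.2]

t = paired_t(quest, non_quest)
# With 7 degrees of freedom, the two-sided 5% critical value is about
# 2.365; |t| below that fails to reject "no difference in coding depth".
print(round(t, 2), abs(t) < 2.365)  # → 0.42 True
```

This mirrors the paper's finding: the confidence interval for the paired difference usually straddles zero, so coding depth alone does not separate the cohorts.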

Model 1

The QUEST hospitals have a risk-adjusted mortality rate 0.18% lower than non-QUEST hospitals, assuming the same hospital mix. Given an overall mortality rate of approximately 2%, the absolute difference of 0.18 percentage points is a relative difference of approximately 9% in favor of QUEST hospitals. A similar but smaller effect was seen in models 1b and 1c (random hospital effects). Nevertheless, we found no statistically significant linear relationship between risk-adjusted mortality and the duration of a hospital's participation in QUEST.

Model 2

The interactive variables between the QUEST flag and year dummies allow the QUEST trend effect to be nonlinear. In each year, QUEST hospitals performed better than non-QUEST hospitals. In model 2a, this effect was statistically significant each year. However, there has been no progressive QUEST effect over time in either the fixed- or random-effects versions (2b and 2c).

Model 3

In this model, the QUEST effect is examined by each class. Charter members performed significantly better than non-QUEST hospitals in all 4 years, but there is no progressive QUEST effect over time. Both classes 2009 and 2010 started with no significant difference from non-QUEST hospitals but ended performing better in 2011 in model 3a, suggesting a strong lag effect for mortality reduction associated with QUEST membership. Interestingly, controlling for latent effects in models 3b and 3c significantly reduced the size of the coefficient for the class of 2010 in the final year of this study.

DISCUSSION

The study found that hospitals participating in Premier's QUEST Collaborative reduced the O/E mortality ratio as much as 10% more than a matched group of non-QUEST Premier hospitals, a group committed to quality improvement with access to many of the tools QUEST participants had. The matching result was corroborated in the formal multivariate analysis.

We found no evidence that these improvements in O/E mortality could be attributed solely to improved coding and more precise documentation; rather, all hospitals' expected rate of mortality is increasing across the board as documentation improves. This is likely due to pressure for accurate coding as hospital payments become increasingly tied to risk-adjusted outcomes.

In addition, focused discussions with QUEST members helped to identify factors that contributed to the success of the collaborative. Several themes have emerged, which correspond to our model for collaboration. (1) Building will: that all results and data are transparent to everyone in the collaborative has been cited as a means to provide a sense of urgency, and participants once complacent in the assumption that they were providing the highest possible care have often been confronted with a different reality. (2) Sharing ideas: because data are collected in a common format, participants and staff are able to identify those islands of top performance in specific domains that serve as models. (3) Collaborative execution: organized sprints and collaboratives provide a structured means to facilitate improvement, much like the management systems of successful enterprises.

In addition, we have corroborating evidence that hospitals took concrete actions to lower their O/E ratios: O was driven down by a number of interventions that took place in a matter of months, such as advances in sepsis treatment and better discharge management that led to greater use of hospice care.
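The back-of-envelope conversion in Model 1, from an absolute effect in percentage points to the quoted relative difference, is simply:

```python
# Converting the absolute model-1a QUEST effect into the relative
# difference quoted in the text: 0.18 percentage points against an
# overall mortality rate of approximately 2%.
absolute_effect_pp = 0.18   # QUEST coefficient, in percentage points
overall_mortality_pp = 2.0  # approximate overall mortality rate

relative = absolute_effect_pp / overall_mortality_pp
print(f"{relative:.0%}")  # → 9%
```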


The results of the multivariate model established an undeniable and statistically significant QUEST effect. The question, however, is whether it is purely a selection effect, rather than the result of collaborative efforts to reduce mortality. Evidence for the latter emerged from controlling for the potential selection effect (QUEST participation greater for relatively higher-performing hospitals at the outset). Indeed, the ensuing statistical results revealed that the selection effect could not explain the QUEST effect. Although it is true that the median O/E mortality ratio started lower in the QUEST cohort, the individual hospital O/E ratios were widely distributed with many QUEST hospitals underperforming relative to peers. For mortality in the cohort to remain consistently lower than the control group, sustained and ongoing improvement was necessary.

    Although these results do not rule out a selection effect,they do indicate that it is unlikely to be the entire explanationbecause the specification of our multivariate model explicitlycaptures the time effect across hospitals. In so doing, the QUESTeffect parameter estimates control for individual hospital effectsthat would be the source of the selection effect. In other words,because we include data from the pre-QUEST period (2006 and2007), our measure of the QUEST effect means at the very leastthat QUEST hospitals changed their behavior with the outset ofQUEST in 2008.

    The version of our model that includes fully interactive hospitalfixed effects controls for all hospital-specific latent (unobserved)characteristics, within the bounds of our available data, whichleaves only time effects to be explained. If selection explainedthe entire QUEST effect, the effect would vanish in that versionof the model, but that did not happen. Approximately half of theQUEST effect remained. Hence, at least some of the QUEST ef-fect is about what hospitals did during their participation in theQUEST collaborative.
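The intuition behind the fixed-effects check can be shown with a toy demeaning example. This is not the paper's estimator, only an illustration of why a time-constant advantage (pure selection) vanishes under the within-hospital transform while genuine improvement does not; both hospital series are invented:

```python
# Toy illustration of why hospital fixed effects absorb selection:
# subtracting each hospital's own mean removes any time-constant level,
# so only within-hospital change over the study period survives.
import numpy as np

quarters = np.arange(8)

a = np.full(8, -0.5)   # hospital A: permanently better level, no change
b = -0.05 * quarters   # hospital B: no level advantage, steady improvement

within_a = a - a.mean()  # the fixed-effects (demeaning) transform
within_b = b - b.mean()

# A's variation collapses to zero: its whole advantage is a level
# (selection) effect. B retains its within-hospital improvement.
print(round(float(np.ptp(within_a)), 2), round(float(np.ptp(within_b)), 2))  # → 0.0 0.35
```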

Although the results from this study are compelling, they may not be applicable to all hospitals because hospital members of the Premier database have resources required to be in the quality improvement alliance. In particular, they have specific analytic database capabilities, access to benchmarks, and sufficient staff to engage in the sprints and mini collaboratives. Not all hospitals have such resources. Although QUEST was designed to be scalable and adoptable by any hospital in the nation, a hospital's lack of resources may hinder its ability to participate in such a collaborative.18 However, a recent study found no association between mortality and hospital margins, so it may just be related to how such a program would be prioritized.12

In this study, we examined only inpatient deaths and not deaths occurring within a 30-day period. Therefore, some of the reduction in mortality could arise from patients being allowed to die in other settings such as home or hospice. We know that the approach to end-of-life care varies greatly, and in fact, some hospitals specifically addressed this.10 We feel that matching the needs and preferences of the patient to an appropriate end-of-life setting is itself an improvement in patient-centered and family-centered care.

There is much speculation about the reasons for improvement in QUEST hospitals. Many activities were available exclusively to QUEST members; however, most were of an educational nature and not directly interventional. It was not possible, therefore, to identify the exact activities that each QUEST member implemented. To that end, this study does not draw a direct causal pathway between these individual activities and outcomes (mortality). As we move forward with QUEST, we are focused on directly capturing the activities and interventions in which hospitals are participating. We expect to determine in which contexts each of these becomes important.19



CONCLUSIONS

Given the relative improvement of the QUEST collaborative participants compared with their peers, this study has potential policy implications with regard to transparency, promotion of success tactics, providing a platform for structured improvements, and goal setting. This is evident when contrasting QUEST with our previous effort to establish a major improvement collaborative, HQID. Although QUEST shared many of the features of the Centers for Medicare and Medicaid Services/Premier HQID (commitment of senior leadership, collection of a common data set via standard tools, and transparency), it differed from HQID in several important ways, which we feel may have contributed to its success. Whereas HQID used an educational portal to post improvement ideas, QUEST actively sought out islands of excellence and actively promoted success tactics through a variety of mechanisms. In addition, whereas HQID relied for the most part on hospitals to structure their own improvement efforts internally, QUEST provided a structured platform for collaborative execution. Finally, HQID was structured as a tournament, with a relative (highest 25th percentile) top performance goal, ensuring, by its nature, that 75% of the participants would not be top performers. QUEST used a fixed goal set in advance, providing a condition in which every participant could achieve the goal and fostering an environment of mutual assistance and collaboration. All of these factors seem to have played a role in improving mortality outcomes in QUEST but will require further study to draw the direct causal pathway. To better understand the mechanism of the effect, we recommend further research to help document causation.

REFERENCES

1. Grossbart SB. What's the return? Assessing the effect of pay-for-performance initiatives on the quality of care delivery. Med Care Res Rev. 2006;63:29S-48S.

2. Heckman J. Sample selection bias as a specification error. Econometrica. 1979;47:153-161.

3. Hulscher ME, Schouten LM, Grol RP, et al. Determinants of success of quality improvement collaboratives: what does the literature show? BMJ Qual Saf. 2013;22:19-31.

4. Jha A, Orav EJ, Epstein AM. The effect of financial incentives on hospitals that serve poor patients. Ann Intern Med. 2010:299-306.

5. Jha A, Joynt KE, Orav EJ, et al. The long-term effect of premier pay for performance on patient outcomes. N Engl J Med. 2012;366:1606-1615.

6. Kahn CN III, Ault T, Isenstein H, et al. Snapshot of hospital quality reporting and pay-for-performance under Medicare. Health Aff. 2006;25:148-162.

7. Kruse GB, Polsky D, Stuart EA, et al. The impact of hospital pay-for-performance on hospital and Medicare costs. Health Serv Res. 2012;47:2118-2136.

8. Langley GJ, Moen R, Nolan KM, et al. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco, CA: Jossey-Bass Publishers; 2009.

9. Kroch EA, Van Dusen C. Centers for Medicare and Medicaid Services (CMS)/Premier Hospital Quality Incentive Demonstration Project. Charlotte, NC: Premier, Inc.; 2012. Available at: https://www.premierinc.com/quality-safety/tools-services/p4p/hqi/downloads/premier-hqid-final-white-paper-28nov2012.pdf. Accessed July 23, 2013.

10. Kroch E, Johnson M, Martin J, et al. Making hospital mortality measurement more meaningful: incorporating advance directives and palliative care designations. Am J Med Qual. 2010;25:24-33.

11. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486-496.

12. Ly DP, Jha AK, Epstein AM. The association between hospital margins, quality of care, and closure or other change in operating status. J Gen Intern Med. 2011;26:1291-1296.

13. Mehrotra A, Damberg CL, Sorbero ME, et al. Pay for performance in the hospital setting: what is the state of the evidence? Am J Med Qual. 2009;24:19-28.

14. Nadeem E, Olin SS, Hill LC, et al. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q. 2013;91:354-394.

15. Pauly MV, Brailer DJ, Kroch EA. The corporate hospital rating project: measuring hospital outcomes from a buyer's perspective. Am J Med Qual. 1996;11:112-122.

16. Ryan AM. Effects of the Premier hospital quality incentive demonstration on Medicare patient mortality and cost. Health Serv Res. 2009;44:821-841.

17. Ryan AM, Blustein J, Casalino LP. Medicare's flagship test of pay-for-performance did not spur more rapid quality improvement among low-performing hospitals. Health Aff (Millwood). 2012;31:797-805.

    18. Tripp Umbach. The Negative Employment Impacts of the Medicare Cuts inthe Budget Control Act of 2011. Pittsburgh, PA: Tripp Umbach; 2012.Available at: http://www.aha.org/content/12/12sep-bcaeconimpact.pdf.Accessed May 3, 2013.

19. Van Citters AD, Nelson EC, Schultz L, et al. Understanding Context-Specific Variation in the Effectiveness of the QUEST Program: An Application of Realist Evaluation Methods. Podium presentation at the AcademyHealth Annual Research Meeting; Orlando, FL; June 18, 2012. Available at: http://www.academyhealth.org/files/2012/sunday/citters.pdf. Accessed July 12, 2014.
