Comparative effectiveness in neurosurgery: what it means, how it is measured, and why it matters

Sherman C. Stein, M.D.
Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania

Neurosurg Focus 33 (1):E1, 2012

Comparative effectiveness research has recently been the subject of intense discussion. With congressional support, there has been increasing funding and publication of studies using comparative effectiveness and related methodology. The neurosurgical field has been relatively slow to accept and embrace this approach. The author outlines the procedures and rationale of comparative effectiveness, illustrates how it applies to neurosurgical topics, and explains its importance. (http://thejns.org/doi/abs/10.3171/2012.2.FOCUS1232)

Key Words • comparative effectiveness • cost-effectiveness • quality of life • utility • decision analysis

Abbreviations used in this paper: CAS = carotid artery stenting; CEA = carotid endarterectomy; CER = comparative effectiveness research; GOS = Glasgow Outcome Scale; ICER = incremental cost-effectiveness ratio; MI = myocardial infarction; QALY = quality-adjusted life year; QOL = quality of life; RCT = randomized controlled trial; TBI = traumatic brain injury.

Medical science is heavily dependent on comparisons of treatments, diagnostic tests, devices, and management strategies. There is ample evidence, however, that medical practice often strays from choices that reflect the best scientific evidence, and this variability affects the cost of medical care (http://www.dartmouthatlas.org/). The standard scientific comparison involves the clinical trial, in which 2 or more groups are compared with respect to safety, efficacy, and/or costs. A hallmark of prospective trials is that the investigators choose one or more primary measures of efficacy prior to data collection; the success of a trial depends on how well the experimental group performs compared with controls. Attempts are made to reduce bias and ensure internal validity (that is, that any differences measured are solely due to the experimental or control therapies). To ensure this, most trials have exclusion criteria that limit baseline patient characteristics (for example, sex, age, and compliance) and disease characteristics (for example, duration, severity, and comorbidities), and often they dictate many details of treatment. The FDA requires proof of efficacy before it approves new drugs or medical devices.

Frequently, small clinical trials are unable to reach adequate statistical significance to distinguish between 2 approaches to patient care. Multiple randomized controlled trials on the same topic can frequently be combined to increase statistical power and aid in decision making. Meta-analysis is a statistical model that pools the ratio of efficacies (experimental vs control groups) from multiple studies.22
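To illustrate what such pooling involves, the sketch below implements one common variant, fixed-effect inverse-variance pooling of log odds ratios. This is a minimal sketch: the three studies and their event counts are invented for illustration, not drawn from any trial cited here.

```python
import math

# Hypothetical (events, total) counts for the experimental and control
# arms of three small trials; all numbers are invented for illustration.
studies = [
    {"exp": (12, 50), "ctl": (20, 50)},
    {"exp": (8, 40),  "ctl": (13, 42)},
    {"exp": (15, 60), "ctl": (22, 58)},
]

def log_odds_ratio(exp, ctl):
    """Log odds ratio and its variance for one study (Woolf's method)."""
    a, n1 = exp                  # events / total, experimental arm
    c, n2 = ctl                  # events / total, control arm
    b, d = n1 - a, n2 - c        # non-events in each arm
    lor = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return lor, var

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so larger, more precise trials contribute more to the pooled estimate.
weights, lors = [], []
for s in studies:
    lor, var = log_odds_ratio(s["exp"], s["ctl"])
    lors.append(lor)
    weights.append(1 / var)

pooled_lor = sum(w * l for w, l in zip(weights, lors)) / sum(weights)
se = math.sqrt(1 / sum(weights))

print(f"pooled OR = {math.exp(pooled_lor):.2f}, "
      f"95% CI {math.exp(pooled_lor - 1.96 * se):.2f} to "
      f"{math.exp(pooled_lor + 1.96 * se):.2f}")
```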

Effectiveness Versus Efficacy

The strict requirements for internal validity in a clinical trial may limit the generalizability (or external validity) of a trial's results and conclusions. Everyday care of sick patients presents challenges different from those of a clinical trial and may affect diagnostic and therapeutic choices. Geographic setting (urban vs rural), the care delivery system, and factors such as patient or doctor preferences and the patient-doctor relationship can influence response and compliance. In short, effectiveness is how a particular health care approach fares in real-world situations involving the care of typical patients.

Greater generalizability usually requires gathering evidence from more diverse sources, both published and unpublished. The process usually begins with a systematic review, in which data are collected for analysis. Since systematic reviews are also performed for comparative efficacy research, Gartlehner and colleagues outlined several criteria that distinguish effectiveness from efficacy trials.12 Efficacy trials are usually done only in tertiary care settings; effectiveness settings should reflect the initial care facilities available to a diverse population with the condition of interest. Effectiveness trials should reflect the full spectrum of disease encountered, including comorbidities, variable compliance rates, and use of other medications.


Surrogate outcomes, such as symptom scores, laboratory data, or time to disease recurrence, are frequently used in efficacy trials; the primary outcome in an effectiveness trial should capture the net effect on a health outcome. Study duration in efficacy trials is often limited; the duration of an effectiveness trial should be long enough to reflect the clinical setting. Effectiveness studies should limit adverse events to those found to be relevant in prior efficacy and safety trials and should use objective scales to measure their impact on health. Sample size should be adequate to detect at least a minimally important difference on a health-related QOL scale. Efficacy trials usually exclude protocol violators; effectiveness trials should always be done on an intent-to-treat basis (for example, noncompliance is an important factor in determining effectiveness).

Measuring Effectiveness

Effectiveness studies use the same end points as do efficacy studies. These can be categorized in several ways. First, we must decide what we are measuring: mortality, frequency of complications, physical and/or psychosocial functioning (or impairment), overall QOL, satisfaction with health state, or cost of the disease and its care. The end point should reflect an outcome that is especially relevant in distinguishing between 2 management strategies. Some years ago, a multicenter RCT comparing carotid endarterectomy with medical care for carotid atherosclerosis was interpreted by many as failing to show a significant effect of surgery.10 Death was chosen as the primary outcome of the study. Although it is true that there was no significant difference between the 2 groups, most deaths were due to MI, a condition that carotid artery surgery cannot be expected to affect.

The outcome metric may involve disease-specific or global measures. A good example is measuring effectiveness in TBI. Our measurement may take a form such as the Glasgow Outcome Scale (GOS). This is an ordinal (the scores are ordered from worst to best outcomes) and functional scale, which is disease specific. Its weakness is that it is not parametric; we cannot compute the mean value of a series, and we cannot use simple statistics to compare one group to another. One way to overcome these limitations is to collapse the scale into a binary measure (for example, alive or dead; favorable vs adverse outcome) and compare the frequencies in the 2 groups. However, this risks loss of information and distortion of the true values measured.1 Advanced statistical techniques, such as the sliding dichotomy17 and ordinal analysis,21 have been suggested.
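The following is a minimal sketch of the difference between a fixed dichotomy and the sliding dichotomy. The GOS scores, risk bands, and cut points are invented for illustration and greatly simplify the published method:17

```python
# Hypothetical GOS scores (1 = dead ... 5 = good recovery), each with a
# baseline prognostic risk band; all values are invented.
patients = [
    {"gos": 5, "risk": "low"},  {"gos": 3, "risk": "high"},
    {"gos": 4, "risk": "low"},  {"gos": 2, "risk": "high"},
    {"gos": 3, "risk": "low"},  {"gos": 3, "risk": "high"},
]

# Fixed dichotomy: GOS 4-5 is "favorable" for everyone. Information is
# lost: GOS 3 in a patient with a dire baseline prognosis counts the
# same as GOS 1.
fixed_favorable = [p["gos"] >= 4 for p in patients]

# Sliding dichotomy: the cut point defining a favorable outcome moves
# with baseline prognosis, so a severely injured patient can "succeed"
# at a lower GOS. These cut points are hypothetical.
cut_by_risk = {"low": 5, "high": 3}
sliding_favorable = [p["gos"] >= cut_by_risk[p["risk"]] for p in patients]

print(sum(fixed_favorable), "favorable under the fixed dichotomy")
print(sum(sliding_favorable), "favorable under the sliding dichotomy")
```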

Using a global measure of health-related QOL has the advantage of incorporating additional aspects of the patient's health profile into the calculation, thus giving a more inclusive picture of disease impact, allowing comparisons across different disease states, and facilitating cost-effectiveness and other studies. The goal is to view outcomes from the perspective of the patient.24 These measures often involve a standardized questionnaire, the answers to which are used to calculate QOL in individual health domains or weighted to provide a single summary score. An example is the popular 36-item Short Form Health Survey, which has been proposed as a QOL scale for patients who have suffered TBI.18 Quality of life can also be gauged by utility, a quantitative measure of how strongly a patient or potential patient prefers a given health outcome in the face of uncertainty. Utility can be measured directly22 or indirectly by using a questionnaire such as the EuroQOL. Utility scores are parametric; thus, they can be averaged and compared using standard statistical approaches. Conversion of ordinal scores, such as the GOS2 and the 36-item Short Form Health Survey,4 into utility values has been reported and may facilitate analysis.

As important as QOL is quantity of life, that is, how long a given health state lasts. Measures include expected longevity or years of life and QALYs. The latter represents both quality and quantity and is calculated by multiplying each year of expected life by the expected QOL for that year, then summing the products. This gives the overall utility of one's health state.22 The converse of QALYs is disability-adjusted life years. This measure expresses the number of years lost due to ill health, disability, or early death and represents the overall disease burden on an individual or a society.
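To make the QALY arithmetic concrete, here is a minimal sketch; the 5-year horizon and the yearly utilities for the two treatments are invented for illustration:

```python
# Hypothetical expected QOL (utility: 0 = death, 1 = perfect health)
# for each remaining year of life under two treatments; all values
# are invented.
utility_by_year_a = [0.90, 0.85, 0.80, 0.70, 0.60]
utility_by_year_b = [0.80, 0.75, 0.70, 0.60, 0.00]  # death after year 4

def qalys(utility_by_year):
    # Each year of expected life contributes (1 year x expected QOL
    # that year); summing the products gives the QALY total.
    return sum(utility_by_year)

print(f"Treatment A: {qalys(utility_by_year_a):.2f} QALYs")
print(f"Treatment B: {qalys(utility_by_year_b):.2f} QALYs")
```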

Critics have complained that QALYs reflect only health-related QOL and not all societal preferences, have the potential to discriminate against populations with illnesses that society believes are unimportant, and rely on preference weights that may not reflect every relevant population.19 Furthermore, the use of QALYs does not distinguish between a modest benefit to a large number and a dramatic benefit to a very few. However, sensitivity analyses can be used to test the validity and robustness of the assumptions, and many authorities agree that a preferred alternative has not yet been validated.19

Comparative Effectiveness Analysis

As with efficacy studies, data can be obtained from new trials or from systematic reviews. Comparative effectiveness studies can also use data collected for other purposes, such as databases and observational trials. Standard meta-analyses compare treatment effect or risk in 2 groups. They require each included study to measure the same outcome variable in both groups and can only examine one outcome variable at a time. This limitation is illustrated by a recent meta-analysis of RCTs comparing carotid endarterectomy (CEA) with angioplasty plus stent insertion (CAS) for carotid artery stenosis.8 The authors' results are inconclusive. They showed that, over the short term, CEA was associated with a higher rate of MI and cranial nerve injury, whereas CAS was associated with more strokes and a higher mortality rate. What they could not decide was which approach had the better overall results. Similar ambiguity was also seen with late outcomes.

Effectiveness studies allow greater analytical flexibility to address comparisons like this, in which outcomes have several dimensions (uneventful outcome, stroke, MI, death, or cranial nerve injury). Our stroke example can be expressed as an expected value decision tree,22 in which each branch represents a possible outcome of the decision to treat by CEA or CAS. If we know the probability and the utility of each branch, we can calculate the average utility for a patient undergoing CEA or CAS (Fig. 1), thereby conclusively comparing the 2 treatments.


Of course, the model as illustrated is simplified, as it does not consider late outcomes. We can also supplement effectiveness data by using meta-analytical techniques to pool observational studies,7–9 thus improving the power of our analysis.
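As a sketch of how such a tree is evaluated, the fragment below folds back a one-level CEA-vs-CAS tree. The branch probabilities and utilities are invented for illustration and are not taken from the meta-analysis cited above:

```python
# Hypothetical one-level decision tree. Each branch is
# (outcome, probability, utility); per strategy, probabilities sum to 1.
tree = {
    "CEA": [("uneventful", 0.90, 1.00), ("MI", 0.03, 0.75),
            ("cranial nerve injury", 0.04, 0.85),
            ("stroke", 0.02, 0.60), ("death", 0.01, 0.00)],
    "CAS": [("uneventful", 0.91, 1.00), ("MI", 0.01, 0.75),
            ("cranial nerve injury", 0.00, 0.85),
            ("stroke", 0.06, 0.60), ("death", 0.02, 0.00)],
}

def fold_back(branches):
    """Expected utility: sum of probability x utility over all branches."""
    assert abs(sum(p for _, p, _ in branches) - 1.0) < 1e-9
    return sum(p * u for _, p, u in branches)

for strategy, branches in tree.items():
    print(f"{strategy}: expected utility = {fold_back(branches):.3f}")
```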

Cost data can also be incorporated in comparisons. Many so-called "cost-effectiveness" studies use hospital charges as a surrogate for costs. This is misleading, as charges represent costs only for uninsured patients, and probably for very few of them. First, we must decide the perspective to take; in other words, cost to whom? Most commonly, cost-effectiveness studies use a societal perspective.13 This is certainly the appropriate perspective for policy makers to take when evaluating new technology. Increasingly, hospitals are quantifying costs for procedures and inpatient and outpatient services, making it possible for investigators to use a hospital perspective. For certain uses, the perspective of the insurer, the equipment supplier, or the patient is appropriate. Considering the recent trend toward more uninsured patients and higher deductibles and copays,5 a patient perspective may be preferable for some costs. For some analyses we also consider "indirect" costs, such as lost wages associated with illness.

Sometimes it is simply enough to compare the costs of 2 procedures, particularly if there is no difference in effectiveness. In business, when there is a need to decide whether a change in activity is worthwhile, a cost-benefit analysis is done. The expected costs of the new procedure are subtracted from the expected benefits; the decision depends on whether the change saves or loses money. In medicine we are uncomfortable valuing human life and health in monetary terms. Instead, most medical studies use a form of cost-effectiveness analysis, in which we quantify the cost for a given improvement in effectiveness. Any of the various outcomes can be used as a measure of effectiveness. However, cost-utility analysis, in which the utilities of all possible outcomes of therapy are calculated, is used most commonly.13 The absolute cost-effectiveness ratio (cost divided by effectiveness) is meaningless and should never be used in scientific publications. When comparing 2 procedures or management strategies, we must always calculate incremental cost-effectiveness to decide which approach is more cost-effective. If we are comparing treatment A with treatment B, the ICER is given by the following formula: ICER = (CostA - CostB)/(EffectivenessA - EffectivenessB).

By convention, we do not report negative ICER values. If, for example, treatment A is less expensive and more effective than treatment B, A is said to "dominate" B. We only have to decide whether one approach is more cost-effective than another when it is both more effective and more expensive. There is considerable literature on society's willingness to pay for a given amount of effectiveness. Traditionally, the threshold for cost-effectiveness was considered to be between $50,000 and $60,000 for every QALY gained. Recent studies suggest the present threshold is actually much higher, perhaps in the range of $150,000–$200,000/QALY.3,14 If we are using a patient's perspective for costs, we may want to use a "willingness to pay" analysis.20 This involves asking patients or potential patients how much they would be willing to pay for a given health outcome. It estimates both costs and effectiveness in financial terms. Although this approach simplifies calculations, many are offended by assigning monetary value to health states.19

A hypothetical example may help clarify some of these concepts. In a comparison of 2 hypothetical treatments for carotid artery stenosis, summarized in Fig. 1, I have given values (purely hypothetical) for the probability, utility, and costs associated with each possible outcome. These values are shown in Table 1. We assume that the short-term outcome extends to 1 year. Therefore, an uneventful outcome has a utility of 1 and a value of 1 QALY. Multiplying the probability of each outcome of Treatment A by its utility and adding the products (a process known as "folding back" the decision tree) gives the expected outcome for the average patient undergoing Treatment A. The same process calculates the expected costs, in which the model adds the proportional costs of complications to those of the initial procedure. These are reported in Table 2. As is evident, Treatment A is more expensive than Treatment B but also gives superior results. Applying the formula for the incremental cost-effectiveness of Treatment A: ICER = (CostA - CostB)/(EffectivenessA - EffectivenessB) = $37,500/QALY.

Fig. 1. Decision tree to calculate hypothetical comparative effectiveness and costs of the average patient undergoing Treatment A or Treatment B.

TABLE 1: Probabilities and utilities for hypothetical comparison

Outcome                 Treatment A               Treatment B               Utility
                        Probability   Cost ($)    Probability   Cost ($)
uneventful              0.860         13,000      0.900         10,500     1
cranial nerve injury    0.055         +2,000      0             +2,000     0.85
MI                      0.025         +13,500     0.015         +13,500    0.75
stroke                  0.045         +45,000     0.065         +45,000    0.60
death                   0.015         +5,000      0.020         +5,000     0

TABLE 2: Hypothetical comparative effectiveness and costs

Expected Outcomes    Treatment A    Treatment B
utility              0.95           0.93
cost                 $14,500        $13,750
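The following minimal sketch reproduces the ICER arithmetic in code. The only inputs are the hypothetical expected values from Table 2, and the dominance check encodes the reporting convention described above:

```python
def icer(cost_a, cost_b, eff_a, eff_b):
    """Incremental cost-effectiveness of A vs B, in $ per QALY gained.

    Returns a string in the dominance cases, for which no ratio
    should be reported.
    """
    d_cost, d_eff = cost_a - cost_b, eff_a - eff_b
    if d_cost <= 0 and d_eff >= 0:
        return "A dominates B (no more expensive, at least as effective)"
    if d_cost >= 0 and d_eff <= 0:
        return "B dominates A"
    return d_cost / d_eff

# Expected costs and utilities from Table 2 (hypothetical values).
result = icer(cost_a=14_500, cost_b=13_750, eff_a=0.95, eff_b=0.93)
if isinstance(result, str):
    print(result)
else:
    # Prints "ICER = $37,500 per QALY gained"
    print(f"ICER = ${result:,.0f} per QALY gained")
    # Compare against a traditional willingness-to-pay threshold.
    if result < 50_000:
        print("Treatment A is cost-effective at a $50,000/QALY threshold")
```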


This is well within the limits of cost-effectiveness, making Treatment A the more cost-effective treatment for carotid stenosis, at least in this hypothetical example.

Comparative Effectiveness Research in Neurosurgery

Comparative effectiveness research, as a research methodology, was stimulated by the Agency for Healthcare Research and Quality (AHRQ), a branch of the Department of Health and Human Services. There was concern about increasing medical costs, slow progress in research, and a perception that practitioners were not fully informed of the latest data on treatment. Accordingly, the AHRQ held a symposium in early 2006 to assess research methodology and dissemination of findings. The goals were to maximize the benefits of care while simultaneously minimizing harms and controlling costs. The following year the Congressional Budget Office published a white paper6 with the idea of funding research to review evidence that would support these goals, namely CER. In its American Recovery and Reinvestment Act of 2009, Congress authorized more than $400 million in challenge grants for medical research. The National Institutes of Health, which administered the grants, listed 213 specific challenge topics for which applications were being solicited. A recent search of MEDLINE showed exponential growth of publications on the subject since 2007.

Neurosurgery is an almost ideal field for CER. There are many unanswered questions. The volume of patients with most neurosurgical diseases is relatively small, making research projects both lengthy and costly. There are relatively few RCTs, and they are difficult to fund and are often unsuccessful. Patients often refuse randomization, and crossovers between treatment groups are common. Recently, an expert panel in TBI suggested that future research concentrate on comparative effectiveness studies.15 A recent comprehensive review of CER in neurosurgery summarizes its history, scope, and implications.16 Neurosurgery as a field has been slow to respond, in part because of a paucity of available funding. Of the 213 topics advanced by the National Institutes of Health for challenge grants, not a single one was primarily neurosurgical.23 The rigors of a structured literature search and the esoteric mathematical modeling and statistics needed for more comprehensive comparative effectiveness studies may also dampen interest. However, it is worth noting that efforts are being made to use research of this sort to control health care spending.11 If neurosurgeons do not participate in CER, or at least understand it, we will not be in a position to challenge reports from outside our specialty that may endanger our freedom to choose the best treatment options for our patients.

Disclosure

The author reports no conflict of interest concerning the materials or methods used in this study or the findings specified in this paper.

References

1. Altman DG, Royston P: The cost of dichotomising continuous variables. BMJ 332:1080, 2006

2. Aoki N, Kitahara T, Fukui T, Beck JR, Soma K, Yamamoto W, et al: Management of unruptured intracranial aneurysm in Japan: a Markovian decision analysis with utility measurements based on the Glasgow Outcome Scale. Med Decis Making 18:357–364, 1998

3. Braithwaite RS, Meltzer DO, King JT Jr, Leslie D, Roberts MS: What does the value of modern medicine say about the $50,000 per quality-adjusted life-year decision rule? Med Care 46:349–356, 2008

4. Brazier J, Roberts J, Deverill M: The estimation of a preference-based measure of health from the SF-36. J Health Econ 21:271–292, 2002

5. Claxton G, Levitt L: The Economy and Medical Care. Henry J. Kaiser Family Foundation, 2011 (http://healthreform.kff.org/notes-on-health-insurance-and-reform/2011/november/the-economy-and-medical-care.aspx) [Accessed April 23, 2012]

6. Congressional Budget Office: Research on the Comparative Effectiveness of Medical Treatments: Issues and Options for an Expanded Federal Role. Washington, DC: Congressional Budget Office, 2007 (http://www.cbo.gov/sites/default/files/cbofiles/ftpdocs/88xx/doc8891/12-18-comparativeeffectiveness.pdf) [Accessed April 23, 2012]

7. Dreyer NA, Tunis SR, Berger M, Ollendorf D, Mattox P, Gliklich R: Why observational studies should be among the tools used in comparative effectiveness research. Health Aff (Millwood) 29:1818–1825, 2010

8. Economopoulos KP, Sergentanis TN, Tsivgoulis G, Mariolis AD, Stefanadis C: Carotid artery stenting versus carotid endarterectomy: a comprehensive meta-analysis of short-term and long-term outcomes. Stroke 42:687–692, 2011

9. Einarson TR: Pharmacoeconomic applications of meta-analysis for single groups using antifungal onychomycosis lacquers as an example. Clin Ther 19:559–569, 1997


10. Fields WS, Maslenikov V, Meyer JS, Hass WK, Remington RD, Macdonald M: Joint study of extracranial arterial occlusion. V. Progress report of prognosis following surgery or nonsurgical treatment for transient cerebral ischemic attacks and cervical carotid artery lesions. JAMA 211:1993–2003, 1970

11. Garber AM, Sox HC: The role of costs in comparative effectiveness research. Health Aff (Millwood) 29:1805–1811, 2010

12. Gartlehner G, Hansen RA, Nissman D, Lohr KN, Carey TS: Criteria for Distinguishing Effectiveness From Efficacy Trials in Systematic Reviews. Rockville, MD: Agency for Healthcare Research and Quality, 2006 (http://www.ahrq.gov/downloads/pub/evidence/pdf/efftrials/efftrials.pdf) [Accessed April 23, 2012]

13. Gold M, Siegel J, Russell L: Cost-Effectiveness in Health and Medicine. New York: Oxford University Press, 1996

14. Le QA, Hay JW: Cost-effectiveness analysis of lapatinib in HER-2-positive advanced breast cancer. Cancer 115:489–498, 2009

15. Maas AI, Menon DK, Lingsma HF, Pineda JA, Sandel ME, Manley GT: Re-orientation of clinical research in traumatic brain injury: report of an international workshop on comparative effectiveness research. J Neurotrauma 29:32–46, 2012

16. Marko NF, Weil RJ: An introduction to comparative effectiveness research. Neurosurgery 70:425–434, 2012

17. Murray GD, Barer D, Choi S, Fernandes H, Gregson B, Lees KR, et al: Design and analysis of phase III trials with ordered outcome scales: the concept of the sliding dichotomy. J Neurotrauma 22:511–517, 2005

18. Neugebauer E, Bouillon B, Bullinger M, Wood-Dauphinée S: Quality of life after multiple trauma—summary and recommendations of the consensus conference. Restor Neurol Neurosci 20:161–167, 2002

19. Neumann PJ, Greenberg D: Is the United States ready for QALYs? Health Aff (Millwood) 28:1366–1371, 2009

20. Olsen JA, Smith RD: Theory versus practice: a review of 'willingness-to-pay' in health and health care. Health Econ 10:39–52, 2001

21. Roozenbeek B, Lingsma HF, Perel P, Edwards P, Roberts I, Murray GD, et al: The added value of ordinal analysis in clinical trials: an example in traumatic brain injury. Crit Care 15:R127, 2011

22. Sox HC Jr, Blatt MA, Higgins MC, Marton KI: Medical Decision Making. Philadelphia: American College of Physicians, 2007

23. US Department of Health and Human Services: American Recovery and Reinvestment Act of 2009. Challenge Grant Applications: Omnibus of Broad Challenge Areas and Specific Topics. Washington, DC: US Department of Health and Human Services, National Institutes of Health, 2009 (http://grants.nih.gov/grants/funding/challenge_award/omnibus.pdf) [Accessed April 23, 2012]

24. Wu AW, Snyder C, Clancy CM, Steinwachs DM: Adding the patient perspective to comparative effectiveness research. Health Aff (Millwood) 29:1863–1871, 2010

Manuscript submitted January 24, 2012.
Accepted February 20, 2012.
Please include this information when citing this paper: DOI: 10.3171/2012.2.FOCUS1232.

Address correspondence to: Sherman C. Stein, M.D., Department of Neurosurgery, Hospital of the University of Pennsylvania, 310 Spruce Street, Philadelphia, Pennsylvania 19106. email: [email protected].
