Leading opinion

Negative results: why do they need to be published?

Peter Sandercock

This short narrative review article defines ‘negative results’ and cites several ethical and scientific reasons why such studies should be made publicly available.

Key words: clinical trial, methodology, publication bias, stroke units, systematic reviews, treatment

What do I mean by ‘negative results’? The term applies to studies conducted both in human and in animal subjects and encompasses three different types of result:

• truly inconclusive, with ‘no evidence of effect’, generally because the study was too small and inadequately powered (several of the small studies included in the Cochrane systematic review of stroke units are in this category) (1) (see the power sketch after this list);

• a well-conducted study, which is sufficiently large to provide ‘clear evidence of no effect’, i.e. that any effect is too small to be worth pursuing either clinically or in further research (the Clots in Legs Or Stockings after Stroke (CLOTS) trial of graduated compression stockings for deep vein thrombosis prevention is a good example) (2); or

• clear evidence of harm when benefit had been expected.

Unfortunately, many such ‘negative’ yet still important studies in man (3) and in animals (4) remain unpublished.
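
To make ‘inadequately powered’ concrete, here is a minimal sketch of a standard two-proportion power calculation; it is my illustration, not taken from the article, and the assumed fall in case fatality from 25% to 20% and the sample sizes are purely hypothetical.

```python
# A minimal sketch (an illustration, not from the article) of why small
# trials are often 'truly inconclusive': approximate power of a two-arm
# trial to detect a hypothetical fall in case fatality from 25% to 20%.
from scipy.stats import norm

def approx_power(p_control, p_treat, n_per_arm, alpha=0.05):
    """Normal-approximation power for comparing two proportions."""
    p_bar = (p_control + p_treat) / 2
    se = (2 * p_bar * (1 - p_bar) / n_per_arm) ** 0.5  # SE of the risk difference
    z_crit = norm.ppf(1 - alpha / 2)                   # two-sided significance threshold
    return norm.cdf(abs(p_control - p_treat) / se - z_crit)

for n in (50, 150, 1200):
    print(f"n = {n:>4} per arm: power = {approx_power(0.25, 0.20, n):.2f}")
# -> roughly 0.09, 0.18 and 0.84: with tens of patients per arm a
#    'negative' result is expected even if the treatment truly works.
```

At the sizes typical of the early stroke-unit trials, a ‘negative’ result is therefore the expected outcome even when the treatment works; such trials are inconclusive, not evidence of no effect.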

Why should negative studies be published? The most important reason is ethics. If human subjects have given consent to participate in a clinical research study, be it a treatment trial or an observational study, they have done so in the clear understanding that the research results will in some way be of benefit to other people and contribute to scientific advance. Furthermore, these human subjects have exposed themselves to risk and inconvenience by participating in the study, and the justification for doing that ‘good deed’ should be that the author makes the data publicly available and ensures that it is put to good use. Although animals do not give consent to participate in research, we still have an ethical duty to make the best use of the data from any animals used in such research.

There are many scientific reasons for publishing negative or neutral (i.e. uninformative) studies; chiefly, they contain valuable information, which should become part of the scientific record on the subject under study. Systematic reviews are an essential part of the research cycle (5). When scientists plan new research studies (clinical trials or observational studies in humans, or experiments on laboratory animals), the first step should be a systematic review of the evidence (5). Such a review may reveal that the question has already been answered reliably, or it may indicate that a further study is justified. For example, the UK Medical Research Council and the UK Health Technology Assessment Programme require that new applications for clinical trial funding should have performed (or at least cite) an up-to-date systematic review of the subject, to ensure that the new research really is justified. If, during the course of a clinical study, a large negative or neutral study is published, this might require the trial steering committee or data monitoring committee to pause for thought and consider whether the study should continue or be modified in some way. At the end of any clinical study, the results should preferably be presented in the context of all the available evidence. For clinical trials, this is a requirement of the Consolidated Standards of Reporting Trials (CONSORT) guidelines on the publication of randomized clinical trials (6).

Systematic reviews can put a small but strikingly positive study into context. When a strikingly positive small study is viewed within the totality of the available evidence, it may become evident that it is a freak ‘lucky’ result arising from the play of chance, and not a reliable estimate of the true effect (4,7). The availability of the ‘negative’ studies then ensures that the one small positive (but outlying) study is interpreted appropriately. There are numerous examples in the literature where small clinical trials or small studies of genetic associations produce striking positive results (often leading to a high-profile paper in a major journal) which are not subsequently replicated when larger, more reliable (neutral or negative) studies are published (8–10).
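
The ‘play of chance’ argument can be demonstrated directly. The simulation below is my own sketch, not the author's: it assumes a modest true benefit of 0.1 standard deviations, runs many small two-arm trials, and shows that the few trials crossing P < 0·05 overstate the effect severalfold.

```python
# A simulation (my sketch, not the author's) of how strikingly positive
# small trials arise by chance: assume a modest true benefit of 0.1 SD,
# run many small two-arm trials, and look only at those with P < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n_per_arm, n_trials = 0.1, 30, 10_000

estimates = np.empty(n_trials)
significant = np.empty(n_trials, dtype=bool)
for i in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    estimates[i] = treated.mean() - control.mean()
    significant[i] = stats.ttest_ind(treated, control).pvalue < 0.05

print(f"mean estimate, all trials:         {estimates.mean():+.2f}")  # ~ +0.10
print(f"mean estimate, 'significant' only: {estimates[significant].mean():+.2f}")
# -> the significant minority overstates the true effect severalfold;
#    only the full set of trials, negatives included, is unbiased.
```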

To be reliable, systematic reviews need to include all relevant randomized trials. The Stroke Unit Trialists’ Collaboration systematic review of organized inpatient stroke care (1) very clearly demonstrates the importance of making negative trial results available for inclusion in systematic reviews. Fourteen of the 16 trials comparing a stroke unit with general medical ward care were ‘negative’ for the effect on death (they were all underpowered to detect a moderate but clinically important difference), and four were never published. However, the overall estimate of the effect from all the trials together is that stroke unit care reduces the odds of death by 17% (95% confidence interval 4–29%; P = 0·01), a result which has helped support the introduction of stroke unit care into practice worldwide. What would have happened if all of the apparently ‘negative’ studies had remained unpublished? We might well have ‘lost’ one of the most significantly beneficial interventions for stroke!
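
The arithmetic that lets individually ‘negative’ trials yield a conclusive pooled answer is standard inverse-variance (fixed-effect) meta-analysis of odds ratios. The sketch below is illustrative only: the four trials are invented, not the real stroke-unit trials, and the pooling uses the generic Woolf approach rather than the exact method of the Cochrane review.

```python
# A sketch of fixed-effect (inverse-variance) pooling of odds ratios,
# the kind of calculation behind the stroke-unit estimate quoted above.
# The four trials are invented for illustration; each one's own CI
# crosses OR = 1 ('negative'), yet the pooled estimate is conclusive.
import math

# (deaths_treated, n_treated, deaths_control, n_control) -- hypothetical data
trials = [(20, 100, 26, 100),
          (15,  80, 20,  80),
          (40, 200, 55, 200),
          (11,  60, 15,  60)]

sum_w = sum_w_logor = 0.0
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c                  # survivors in each arm
    log_or = math.log((a * d) / (b * c))   # log odds ratio of death
    var = 1/a + 1/b + 1/c + 1/d            # Woolf variance of the log OR
    sum_w += 1 / var                       # inverse-variance weight
    sum_w_logor += log_or / var

pooled = sum_w_logor / sum_w
se = math.sqrt(1 / sum_w)
print(f"pooled OR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f}"
      f"-{math.exp(pooled + 1.96 * se):.2f})")
# -> pooled OR 0.68 (95% CI 0.49-0.93): conclusive, although no single
#    trial reached significance on its own.
```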

There are many obstacles to the publication of ‘neutral or negative’ studies; the first is the author. A study that does not have a big fat juicy P-value does not stir the author’s adrenalin sufficiently to write the paper in the first place. It may not be in the commercial interest of the sponsors of negative research for the results to be published. Journals like clear stories with definite results to increase their sales; negative studies do not do much for journal income! There have been reports that some journals would only publish results from studies that are statistically significant at the P < 0·05 level. This is clearly absurd, but I suspect that the practice still continues (perhaps rather more covertly these days). In the fast-moving world of the Internet and social media, where only the most strikingly positive results gain their ‘15 minutes of fame’, it is difficult to ensure that neutral or negative results from well-conducted research reach the public domain.

There are a number of possible solutions. Journal editors could (perhaps) be persuaded to prioritize publishing well-conducted negative research over poorly conducted positive research. There are easier alternatives: although the Journal of Negative Results in Biomedicine (http://www.jnrbm.com/) provides a rather specific destination, there are now many open-access journals. Furthermore, several grant-giving bodies, such as the UK Medical Research Council and the Wellcome Trust, expect data from research they have funded to be published in an open-access format (applications for research grants now need to incorporate the anticipated costs of such publications). For authors without such funding, many institutions now make datasets accumulated by their scientists freely available via open data repositories (the University of Edinburgh Datashare site is an example: http://datashare.is.ed.ac.uk/).

In summary, if a study involves informed consent in humans or the use of whole animals, and has been satisfactorily conducted, it should appear in the publicly available scientific record, irrespective of its overall conclusions.

Acknowledgements

I would like to thank William Whiteley, Charles Warlow, and Norberto Cabral for helpful comments.

References

1 Stroke Unit Trialists’ Collaboration. Organised inpatient (stroke unit) care for stroke. Cochrane Database of Systematic Reviews 2007; Issue 4. Art. No.: CD000197. DOI: 10.1002/14651858.CD000197.pub2.

2 Dennis M, Sandercock PA, Reid J et al. Effectiveness of thigh-length graduated compression stockings to reduce the risk of deep vein thrombosis after stroke (CLOTS trial 1): a multicentre, randomised controlled trial. Lancet 2009; 373:1958–65.

3 Gibson L, Brazzelli M, Thomas B, Sandercock P. A systematic review of clinical trials of pharmacological interventions for acute ischaemic stroke (1955–2008) that were completed, but not published in full. Trials 2010; 11:43. http://www.trialsjournal.com/content/11/1/43

4 Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol 2010; 8:e1000344.

5 Bath PM, Gray LJ. Systematic reviews as a tool for planning and interpreting trials. Int J Stroke 2009; 4:23–7.

6 Moher D, Hopewell S, Schulz KF et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010; 340:c869.

7 Collins R, MacMahon S. Reliable assessment of the effects of treatment on mortality and major morbidity, I: clinical trials. Lancet 2001; 357:373–80.

8 Ioannidis JPA. Contradicted and initially stronger effects in highly cited clinical research. JAMA 2005; 294:218–28.

9 Ioannidis JPA. Why most discovered true associations are inflated. Epidemiology 2008; 19:640–8.

10 Ioannidis JPA, Panagiotou OA. Comparison of effect sizes associated with biomarkers reported in highly cited individual articles and in subsequent meta-analyses. JAMA 2011; 305:2200–10.

Correspondence: Peter Sandercock, Division of Clinical Neurosciences, University of Edinburgh, Western General Hospital, Bramwell Dott Building, Edinburgh EH4 2XU, UK. Email: [email protected] Twitter: @IST_3

Conflict of interest: None declared.

DOI: 10.1111/j.1747-4949.2011.00723.x