Making all research results publicly available: the cry of systematic reviewers.


Transcript of the talk.


Outline of talk

• Definitions
• History
• Who contributes to publication bias
• Impact of publication bias
• Funnel plots and interpreting them
• Outcome reporting bias
• Critical appraisal of systematic reviews

Memory jog

• Grey literature: "That which is produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers"
• Publication bias: "Investigators, reviewers, and editors submit or accept manuscripts for publication based on the direction or strength of the study findings"

Publication bias

• 1959: 294 reports from 4 leading psychology journals; 97.3% reported statistically positive results
• 1986-1987: 456 reports from 3 leading psychology journals and 3 healthcare journals (NEJM, Am J Epi, Am J Pub Health)
  • 97% of the psychology journal reports were positive
  • 85% of the medical journal reports were positive

Publication bias: contributors (1)

• Researchers: likely the major source
• Peer reviewers: experimental evidence has shown that reviewers are highly influenced by the direction and strength of results in a submitted manuscript¹
• Editors

Publication bias: contributors (2)

• Editors!

Letter from the editor of a major environmental/toxicological journal to the author of a submitted manuscript

Consequences of non-publication bias (1)

• Management (survival) of ovarian cancer
  • results of 13 published trials
  • results of all 16 trials (including 3 that were registered but never published)
• Pooling the results of the published trials only: results statistically favouring combination chemotherapy over an alkylating agent (16% advantage)
• Pooling all 16 trials: a non-significant advantage of 5% (see the pooling sketch below)
• Provides clinicians and patients alike with differing estimates of the purported effectiveness of a cancer intervention
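To make the pooling arithmetic above concrete, here is a minimal sketch of fixed-effect (inverse-variance) pooling in Python. The effect sizes, standard errors, and the split into published versus unpublished trials are hypothetical assumptions for illustration, not the actual ovarian cancer data.

```python
import numpy as np

def pool(effects, ses):
    """Fixed-effect (inverse-variance) pooled estimate with a 95% CI."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # inverse-variance weights
    est = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, est - 1.96 * se, est + 1.96 * se

# Hypothetical log hazard ratios (negative = benefit) and standard errors.
published_eff, published_se = [-0.20, -0.15, -0.25, -0.18], [0.10, 0.12, 0.15, 0.11]
unpublished_eff, unpublished_se = [0.02, -0.01, 0.05], [0.14, 0.13, 0.16]

print("Published trials only:", pool(published_eff, published_se))
print("All trials           :", pool(published_eff + unpublished_eff,
                                     published_se + unpublished_se))
```

Because the unpublished trials in this toy example cluster around no effect, the pooled estimate from "all trials" is closer to the null than the estimate from published trials alone, which is the pattern the ovarian cancer example describes.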

Consequences of non-publication bias (2)

• Methods: compared the results of 365 published trials with 102 'grey' trials included in 33 systematic reviews
• Excluding the results of grey literature exaggerated treatment effectiveness by 15%, on average
• Grey literature accounts for approximately 25% of studies included in systematic reviews
• The 102 grey literature randomized trials included more than 23,000 participants

Publication Bias: impact

• Consequences for systematic reviews and meta-analyses
  • Biased summary estimates that are falsely positive yet appear precise and accurate
• Consequences for guideline and health policy development
  • Practice may be influenced (and even mandated) by false conclusions

Publication Bias: Detection

• The funnel plot: a measure of the treatment effect size plotted against a measure of the study's sample size or precision
• The precision of the estimate of the true effect increases with larger sample sizes
• Funnel plots investigate whether studies with little precision (small studies) give different results from studies with greater precision (larger studies)

Funnel Plots
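The slide at this point showed a funnel plot figure. As a rough illustration of how such a plot is drawn, here is a minimal Python/matplotlib sketch; the study effects, standard errors, and 95% pseudo-confidence funnel limits are made-up assumptions, not data from the talk.

```python
# Minimal funnel plot sketch with hypothetical data (not from the talk).
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-study treatment effects (log odds ratios) and standard
# errors; a smaller standard error means a larger, more precise study.
log_or = np.array([-0.45, -0.30, -0.52, -0.10, -0.38, -0.25, -0.60, -0.05])
se = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45])

# Fixed-effect (inverse-variance weighted) pooled estimate for the centre line.
pooled = np.average(log_or, weights=1.0 / se**2)

fig, ax = plt.subplots()
ax.scatter(log_or, se)

# 95% pseudo-confidence limits around the pooled effect trace out the funnel.
se_grid = np.linspace(0.001, se.max() * 1.05, 100)
ax.plot(pooled - 1.96 * se_grid, se_grid, "k--")
ax.plot(pooled + 1.96 * se_grid, se_grid, "k--")
ax.axvline(pooled, color="grey")

ax.invert_yaxis()  # convention: most precise (largest) studies at the top
ax.set_xlabel("Treatment effect (log odds ratio)")
ax.set_ylabel("Standard error")
ax.set_title("Funnel plot (illustrative data)")
plt.show()
```

In the absence of bias the points scatter symmetrically within the funnel; a gap of small, unfavourable studies in one lower corner is the classic visual sign of possible publication bias.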

Interpreting Funnel Plots

• The ability of researchers to identify bias using funnel plots was shown in one study to be only 53%

Funnel plot asymmetry

Causes:
• Selection bias
  • Publication bias (one of many reasons!)
  • Language bias
  • Citation bias
  • Multiple publication bias
• True heterogeneity
  • Intensity of intervention
  • Characteristics of the patient population
• Methodological quality
• Outcome measure and analysis
• Chance

Publication Bias: time lag bias

• Statistically significant positive studies are published before null studies
• A systematic review is a cross-section cut in time
• Thus, trials with positive results could dominate the literature and could introduce bias for several years

Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised trials

• As a rule of thumb, tests for funnel plot asymmetry should not be used when there are fewer than 10 studies
  • test power is usually too low to distinguish chance from real asymmetry
• When there is evidence of funnel plot asymmetry, publication bias is only one possible explanation
• As far as possible, a testing strategy should be specified in advance. Applying and reporting many tests is discouraged: if more than one test is used, all test results should be reported (a sketch of one such test follows these recommendations).

Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised trials (continued)

• Test results should be interpreted in the context of visual inspection of funnel plots
  • For example, are there studies with markedly different intervention effect estimates, or studies that are highly influential in the asymmetry test? Even if an asymmetry test is statistically significant, publication bias can probably be excluded if small studies tend to lead to lower estimates of benefit than larger studies, or if there are no studies with significant results
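As one concrete example of the asymmetry tests referred to above, here is a minimal sketch of Egger's regression test (regress the standardized effect on precision and test whether the intercept differs from zero). The study effects and standard errors are hypothetical, and this is an illustration of one common test, not the specific testing strategy the talk recommends.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effects (log odds ratios) and standard errors;
# ten studies, in line with the "fewer than 10 studies" rule of thumb above.
effect = np.array([-0.45, -0.30, -0.52, -0.10, -0.38,
                   -0.25, -0.60, -0.05, -0.20, -0.15])
se = np.array([0.10, 0.15, 0.20, 0.25, 0.30,
               0.35, 0.40, 0.45, 0.12, 0.18])

# Egger's test: regress standardized effect (effect / SE) on precision (1 / SE);
# an intercept far from zero suggests small-study effects / funnel asymmetry.
y = effect / se
X = sm.add_constant(1.0 / se)   # column of ones (intercept) plus precision
fit = sm.OLS(y, X).fit()

print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```

A statistically significant intercept is evidence of asymmetry, which, as the recommendations stress, has several possible explanations besides publication bias.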

(intra-study) Publication bias

• Selective reporting bias
  • Outcome reporting bias
    • Typically statistically positive
    • Selected by investigators (post hoc)
  • Data analyses reporting bias

Some salient results

• Nearly two-thirds of trials had a change in at least one primary outcome between the protocol and the publication
• Statistically significant outcomes were more likely to be reported than non-significant ones

Do researchers have a social obligation to study participants?

• Other realms of life
  • airline industry
  • hotels

The SSRI story

• Selective serotonin reuptake inhibitors (SSRIs): a commonly used class of antidepressants to treat major depression in children
• Concerns that use of these drugs increases the risk of suicide
• Results of a systematic review
  • published data support the use of Paxil (paroxetine)
  • adding the unpublished data tips the harm/benefit balance and does not support use of the drug

PRISMA statement

• Guideline for reporting systematic reviews and meta-analyses
• Item 15: "Specify any assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting within studies)."

Rationale for reporting assessment of bias across studies

• Reviewers should explore the possibility that the available data are biased.

• They may examine results from the available studies for clues that suggest there may be:
  • missing studies (publication bias)
  • missing data from the included studies (selective reporting bias)