Publication bias

SIR-De Melker and colleagues (Sept 4, p 621) suggest that doctors should not bother about publication bias because they think that its worst effects will be only that patients might receive forms of care that have no useful effects. As a reasonably well-informed potential patient I resent their complacency. The consequences of medical acquiescence in publication bias are that patients should be expected (i) to accept the unwanted side-effects of ineffective treatments (such as diethylstilboestrol during pregnancy); (ii) to accept advice, based on evidence that is less complete than it need be, about the effects of health care; (iii) to be invited to participate in research that has been conceived and designed on the basis of unnecessarily incomplete information; and (iv) to contribute to research (both as indirect funders and as patients) that may not be published if the results come as a disappointment or an embarrassment to the investigators or sponsors. Even if de Melker and colleagues believe that doctors should accept these consequences of publication bias, I do not think that patients should be expected to do so.

It is unfortunate that discussions of misconduct in medical research continue to be dominated by consideration of case reports of dramatic instances of deliberate fraud.1 The more insidious occurrence of biased under-reporting of research has been studied and documented much more systematically,2 and it is almost certainly a more widespread form of scientific misconduct.3 As the public in general, and lay members of research ethics committees in particular, become more informed about publication bias, it will increasingly come to be seen as an ethical, not a medical, problem.

Iain Chalmers
The UK Cochrane Centre, NHS R & D Programme, Summertown Pavilion, Middle Way, Oxford OX2 7LG, UK

1 Lock S, Wells F, eds. Fraud and misconduct in medical research. London: BMJ, 1993.

2 Dickersin K, Min YI. NIH clinical trials and publication bias. Online J Curr Clin Trials [serial online] 1993 Apr 28; Doc no 50.

3 Chalmers I. Underreporting research is scientific misconduct. JAMA 1990; 263: 1405-08.

SIR-I read with interest the letter by de Melker and colleagues. I wish them a long and healthy life. Since they are potential patients, their laissez-faire attitude towards publication bias and its sequelae seems unfortunate.

Much of the argument about publication bias has centred on science. Investigators have documented its existence,1 the factors that affect its occurrence,2 and the extent of the problem,3 and have shown that it can lead to systematic errors in the evaluation of treatments.4 However, besides being regarded as scientific misconduct,5 publication bias is a marker for a much broader issue: ethical irresponsibility.

All well-designed clinical trials, large and small, positive and negative, contribute to knowledge. When patients participate in clinical trials, investigators have a responsibility to report the results. Ethically, it is inappropriate to allow patients to risk experimental therapy, with its sometimes life-threatening side-effects, and then not to report the results. Without complete knowledge of all trials we will be unable to address whether the trials being conducted are fair, equitable, and inclusive of all groups in society.6 Granting agencies will not know whether they are funding needlessly duplicated trials, and they will be unable to take a broader view of the direction of future trials.

To ensure that the results of all clinical trials are available, we must prospectively register them at inception. Research ethics committees have a major responsibility to ensure that all clinical trials that they approve are registered. Without this assurance, such committees will continue to be in the position of approving protocols without knowing whether the proposed trials need to be conducted and without any assurance that the results will be written up and made available. We are all ethically obliged to change this situation.

David Moher

Clinical Epidemiology Unit, Ottawa Civic Hospital, Ottawa, Ontario K1E 4Y9, Canada

1 Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA 1990; 263: 1385-89.

2 Dickersin K, Min YI, Meinert CL. Factors influencing publication of research results: follow-up of applications submitted to two institutional review boards. JAMA 1992; 267: 374-78.

3 Dickersin K, Min YI. NIH clinical trials and publication bias. Online J Curr Clin Trials [serial online] 1993 Apr 28; Doc no 50.

4 Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol 1986; 4: 1529-41.

5 Chalmers I. Underreporting research is scientific misconduct. JAMA 1990; 263: 1405-08.

6 Bennett JC. Inclusion of women in clinical trials: policies for population subgroups. N Engl J Med 1993; 329: 288-91.

Reducing the cost of HIV antibody testing

SIR-Tamashiro of the World Health Organization and colleagues (July 10, p 87; Oct 2, p 866) and your correspondents (Aug 7, p 379) present a valuable discussion of ways to reduce the cost of HIV antibody testing. Especially important are the comments on pooling of five samples for screening of blood donations or for prevalence surveys of low-prevalence populations, and the notion of strategy I (one enzyme-linked immunosorbent assay [ELISA]), strategy II (one ELISA as screener, a second as confirmatory), and strategy III (one ELISA as screener and two different ELISAs as confirmatory).
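
For readers unfamiliar with the arithmetic of pooling, a brief sketch may help; the prevalence figure below is an illustrative assumption, not a number from Tamashiro's reports. Under two-stage pooling, specimens are first tested in pools of k, and only the members of reactive pools are retested individually. If the prevalence is p, and pooling is assumed not to degrade test performance, the expected number of tests per specimen is

\[ E = \frac{1}{k} + \left( 1 - (1 - p)^{k} \right) \]

With pools of five (k = 5) and a prevalence of 1% (p = 0.01), E = 0.2 + 0.049, or roughly 0.25 tests per specimen: about a quarter of the cost of testing every specimen individually. As p rises the saving shrinks, which is why pooling suits only low-prevalence populations.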

Unfortunately, Tamashiro's first report had two omissions. When discussing HIV antibody testing for strategies II and III they state that "the first test should have the highest sensitivity, whereas the second and third test should have higher specificity than the first"; this is not wholly correct. If the sensitivity of the second or third test is lower than that of the first, then some true HIV-positive persons will not be identified as such. Since those who are HIV positive on the first test but HIV negative on the second (with strategy II), or on the second and third (with strategy III), are deemed HIV negative, Tamashiro and colleagues' guidelines would result in excessive false negatives. What they should have said is: the first test should have high sensitivity and at least moderate to high specificity, whereas the second and third tests (used for confirmatory testing) should have sensitivity equal to, and specificity higher than, the first.
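
The reasoning can be made explicit with a worked example (the individual sensitivities below are assumed figures, chosen only for illustration). When a specimen is reported positive only if every test in the sequence is reactive, and the tests are assumed to err independently, the sensitivity of the whole strategy is the product of the individual sensitivities:

\[ Se_{II} = Se_{1} \times Se_{2}, \qquad Se_{III} = Se_{1} \times Se_{2} \times Se_{3} \]

A product can never exceed its smallest factor, so a confirmatory test less sensitive than the screener necessarily pulls the overall sensitivity below that of the screener: a screener with Se_1 = 0.99 followed by a confirmatory test with Se_2 = 0.96 gives 0.99 x 0.96 = 0.95, so about 5% of infected persons would be reported HIV negative.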

This same difficulty exists in the use of the well-known western blot as a confirmatory test. As noted by Soriano et al,1 the specificity of the western blot tends to be very high (conforming to the standard of Tamashiro et al), but the sensitivity varies considerably, depending on the criteria used to interpret the band profile. The commonly used US Centers for Disease Control and Prevention definition had a sensitivity of only 95.9%, as reported by Soriano et al.1 With such a low sensitivity, the western blot would fail to confirm 4.1% of the HIV-infected individuals identified as HIV antibody positive by a screening ELISA. Following Tamashiro's guidelines, those specimens deemed non-reactive on the confirmatory test would be judged HIV negative. This difficulty of false negatives can be reduced if clinicians and research investigators recognise that a confirmatory assay, unlike a screening test, must have both high sensitivity and high specificity, and select their tests accordingly.
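
A short calculation makes the scale of the problem concrete. The sketch below, in Python, counts how many infected persons a screen-then-confirm sequence would misclassify; the cohort size and the ELISA sensitivity are illustrative assumptions, and only the western blot sensitivity (95.9%) comes from Soriano et al.

  # Serial HIV testing: count true positives lost at the confirmatory stage.
  # All figures are illustrative except wb_sensitivity (Soriano et al).
  infected = 10_000            # hypothetical cohort of HIV-infected persons
  elisa_sensitivity = 0.99     # assumed sensitivity of the screening ELISA
  wb_sensitivity = 0.959       # CDC interpretation criteria for the western blot

  screen_positive = infected * elisa_sensitivity    # reactive on the ELISA
  confirmed = screen_positive * wb_sensitivity      # also reactive on the blot
  missed = screen_positive - confirmed              # wrongly reported negative

  print(f"screen positive:   {screen_positive:.0f}")
  print(f"confirmed by blot: {confirmed:.0f}")
  print(f"missed by blot:    {missed:.0f} ({missed / screen_positive:.1%} of screen positives)")

Of the 9900 infected persons who screen positive in this hypothetical cohort, about 406 (4.1%) would be wrongly reported HIV negative, which is the figure cited above.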