Publication bias


SIR-De Melker and colleagues (Sept 4, p 621) suggest that doctors should not bother about publication bias because they think that its worst effects will be only that patients might receive forms of care that have no useful effects. As a reasonably well-informed potential patient I resent their complacency. The consequences of medical acquiescence in publication bias are that patients should be expected (i) to accept the unwanted side-effects of ineffective treatments (diethylstilboestrol during pregnancy, for example); (ii) to accept advice, based on evidence that is less complete than it need be, about the effects of health care; (iii) to be invited to participate in research that has been conceived and designed on the basis of unnecessarily incomplete information; and (iv) to contribute to research (both as indirect funders and as patients) that may not be published if the results come as a disappointment or an embarrassment to the investigators or sponsors. Even if de Melker and colleagues believe that doctors should accept these consequences of publication bias, I do not think that patients should be expected to do so.

It is unfortunate that discussions of misconduct in medical research continue to be dominated by consideration of case reports of dramatic instances of deliberate fraud.1 The more insidious occurrence of biased under-reporting of research has been studied and documented much more systematically,2 and it is almost certainly a more widespread form of scientific misconduct.3 As the public in general, and lay members of research ethics committees in particular, become more informed about publication bias, it will increasingly come to be seen as an ethical, not a medical, problem.

Iain Chalmers
The UK Cochrane Centre, NHS R & D Programme, Summertown Pavilion, Middle Way, Oxford OX2 7LG, UK

1 Lock S, Wells F, eds. Fraud and misconduct in medical research. London: BMJ, 1993.
2 Dickersin K, Min YI. NIH clinical trials and publication bias. Online J Curr Clin Trials [serial online] 1993 Apr 28; Doc no 50.
3 Chalmers I. Underreporting research is scientific misconduct. JAMA 1990; 263: 1405-08.

SIR-I read with interest the letter by de Melker and colleagues. I wish them a long and healthy life. Since they are potential patients, their laissez-faire attitude about publication bias and its sequelae seems unfortunate.

Much of the argument about publication bias has centred around science. Investigators have documented its existence,1 the factors that affect its occurrence and the extent of the problem,2,3 and have shown that it can lead to systematic errors in the evaluation of treatments.4 However, besides being regarded as scientific misconduct,5 publication bias is a marker for a much broader issue: ethical irresponsibility.

All well-designed clinical trials, large and small, positive and negative, contribute to knowledge. When patients participate in clinical trials, investigators have a responsibility to report the results. Ethically, it is inappropriate to allow patients to risk experimental therapy, with its sometimes life-threatening side-effects, and then not report the results. Without complete knowledge of all trials we will be unable to address whether trials are being conducted that are fair, equitable, and inclusive of all groups in society.6 Granting agencies will not know whether they are funding needlessly duplicated trials. They will also be unable to take a broader view as to the direction of future trials.

To ensure that the results of all clinical trials are available, we must prospectively register them at inception. Research ethics committees have a major responsibility to ensure that all clinical trials that they approve are registered. Without this assurance such committees will continue to be in a position of approving protocols without knowing whether they need to be conducted and without assurance that the results will be written up and made available. We are all ethically obliged to change this situation.

David Moher

Clinical Epidemiology Unit, Ottawa Civic Hospital, Ottawa, Ontario K1E 4Y9, Canada

1 Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA 1990; 263: 1385-89.
2 Dickersin K, Min Y, Meinert CL. Factors influencing publication of research results: follow-up of applications submitted to two institutional review boards. JAMA 1992; 267: 374-78.
3 Dickersin K, Min YI. NIH clinical trials and publication bias. Online J Curr Clin Trials [serial online] 1993 Apr 28; Doc no 50.
4 Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol 1986; 4: 1529-41.
5 Chalmers I. Underreporting research is scientific misconduct. JAMA 1990; 263: 1405-08.
6 Bennett JC. Inclusion of women in clinical trials: policies for population subgroups. N Engl J Med 1993; 329: 288-91.

Reducing the cost of HIV antibody testing

SIR-Tamashiro of the World Health Organization and colleagues (July 10, p 87; Oct 2, p 866) and your correspondents (Aug 7, p 379) present a valuable discussion of ways to reduce the cost of HIV antibody testing. Especially important are the comments on pooling of five samples when screening blood donations or conducting prevalence surveys of low-prevalence populations, and the notion of strategy I (one enzyme-linked immunoassay [ELISA]), strategy II (one ELISA as screener, a second as confirmatory), and strategy III (one ELISA as screener and two different ELISAs as confirmatory).
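
The arithmetic behind pooled screening is worth making explicit. The sketch below is my own illustration, not a calculation from Tamashiro and colleagues: it assumes a perfectly accurate assay, independent specimens, and a protocol of one ELISA per pool of five followed by individual retests of every member of a reactive pool.

```python
def expected_tests_per_specimen(prevalence: float, pool_size: int = 5) -> float:
    """Expected number of ELISAs per specimen under simple pooled screening."""
    # A pool is reactive if at least one of its members is truly positive,
    # in which case all pool_size members are retested individually.
    p_pool_reactive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_reactive

for p in (0.001, 0.01, 0.05):
    e = expected_tests_per_specimen(p)
    print(f"prevalence {p:.1%}: {e:.3f} tests per specimen "
          f"(saving {1 - e:.0%} versus individual testing)")
```

At 0.1-1% prevalence the protocol needs roughly a fifth to a quarter of the tests that individual screening would, and the saving shrinks as prevalence rises, which is why pooling is recommended specifically for low-prevalence populations.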

Unfortunately, Tamashiro’s first report had two omissions. When discussing HIV antibody testing for strategies II and III they state that "the first test should have the highest sensitivity, whereas the second and third test should have higher specificity than the first"; this is not wholly correct. If the sensitivity of the second or third test is lower than that of the first test, then some true HIV-positive persons will not be identified as such. Since those who are HIV positive on the first test and HIV negative on the second test (with strategy II), or on the second and third tests (with strategy III), are deemed HIV negative, Tamashiro and colleagues’ guidelines would result in excessive false negatives. What they should have said is: the first test should have high sensitivity and at least moderate to high specificity, whereas the second and third tests (used for confirmatory testing) should have equal sensitivity and higher specificity than the first.
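
To see why a less sensitive confirmatory test produces false negatives, consider a minimal sketch of strategy II (again my own illustration with assumed figures, not data from the reports): a specimen is called positive only if both tests are reactive, and the two tests are taken to err independently.

```python
def serial_performance(se1: float, sp1: float, se2: float, sp2: float):
    """Combined sensitivity and specificity when a positive requires both tests."""
    sensitivity = se1 * se2                  # both tests must detect a true positive
    specificity = 1 - (1 - sp1) * (1 - sp2)  # either test can clear a true negative
    return sensitivity, specificity

# Hypothetical screening ELISA (99.5% sensitive, 98% specific) confirmed by a
# test with the 95.9% sensitivity reported for the western blot (see below).
se, sp = serial_performance(0.995, 0.98, 0.959, 0.999)
print(f"combined sensitivity {se:.1%}, combined specificity {sp:.3%}")
# combined sensitivity is about 95.4%: the least sensitive test in the
# sequence, not the screener, caps the proportion of true positives detected.
```

Serial confirmation can only raise specificity and lower sensitivity, which is exactly why the confirmatory test must match the screener in sensitivity.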

This same difficulty exists in the use of the well-known western blot as a confirmatory test. As noted by Soriano et al,1 the specificity of the western blot tends to be very high (conforming to the standard of Tamashiro et al) but the sensitivity varies considerably, depending on the criteria used to interpret the band profile. The commonly used US Centers for Disease Control and Prevention definition had a sensitivity of only 95·9% as reported by Soriano et al.1 With such a low sensitivity, the western blot would fail to confirm 4·1% of HIV-infected individuals in the group identified as HIV antibody positive by a screening ELISA. Following Tamashiro’s guidelines, those specimens deemed non-reactive on the confirmatory test would be judged HIV negative. This difficulty of false negatives can be reduced if clinicians and research investigators recognise that a confirmatory assay, different from a screening test, must have both high sensitivity and high specificity, and select their tests accordingly.