American Academy of Political and Social Science

Estimates of Randomized Controlled Trials across Six Areas of Childhood Intervention: A Bibliometric Analysis
Author(s): Anthony Petrosino
Source: Annals of the American Academy of Political and Social Science, Vol. 589, Misleading Evidence and Evidence-Led Policy: Making Social Science More Experimental (Sep., 2003), pp. 190-202
Published by: Sage Publications, Inc. in association with the American Academy of Political and Social Science
Stable URL: http://www.jstor.org/stable/3658566
Accessed: 13/03/2013 22:02
This content downloaded on Wed, 13 Mar 2013 22:02:14 PM. All use subject to JSTOR Terms and Conditions.
Estimates of Randomized Controlled Trials across Six Areas of Childhood Intervention: A Bibliometric Analysis

By ANTHONY PETROSINO
Data on the frequency of experiments are elusive. One way to estimate how many experiments are done is by analyzing the contents of bibliographic databases. This article analyzes the citation information from six major bibliographic databases to estimate the proportion of randomized (or possibly randomized) experiments compared to all outcome or impact evaluation studies. The focus of the article is on the evaluation of programs designed for children (from birth to eighteen years of age). The results indicate that randomized studies are used in nearly 70 percent of childhood interventions in health care but probably in 6 to 15 percent of kindergarten through twelfth-grade interventions in education and juvenile justice. The article concludes with discussion about these data, particularly suggestions of how to produce more outcome studies, and randomized experiments, of childhood interventions.

Keywords: randomized experiments; evaluation studies; programs for children; bibliometrics
Researchers across the social sciences continue to urge randomized controlled trials (RCTs) to evaluate interventions. For example, in a book in honor of social statistician Richard Savage, Lincoln Moses and Frederick Mosteller (1997) provided examples of influential social experiments and urged readers to "just do it!" This converges with recommendations to conduct experiments from researchers in criminology (Sherman 1998; Farrington, Ohlin, and Wilson 1986), education (Fitz-Gibbon 1999; Petersen 1999), social work (Macdonald 1999), and evaluators more generally (Weiss 1998).
Anthony Petrosino is coordinator and Steering Committee member for the Campbell Collaboration Crime and Justice Group (www.aic.gov.au/campbellcj). He is also a consultant for the Study on Decisions in Education at Harvard. He worked for several years for state justice agencies in New Jersey and Massachusetts, completed his Ph.D. in criminal justice at Rutgers University in 1997, and accepted a Spencer Post-Doctoral Fellowship in Evaluation at the Harvard Children's Initiative. Topics of recent articles include Megan's Law (Crime & Delinquency 1999), community policing (Police Quarterly 2001), school-based drug prevention (Annals of the

DOI: 10.1177/0002716203254694

ANNALS, AAPSS, 589, September 2003
Such recommendations are easy to understand. The RCT remains the most persuasive design for establishing causal inference between an intervention and subsequent observed effects. The RCT, if implemented and executed properly, can counter the internal validity problems that threaten nonrandomized studies (Weiss 1998; Cook and Campbell 1979).
An increased number of RCTs is especially important in light of new international initiatives such as the Campbell Collaboration (www.campbellcollaboration.org) that will prepare, update, and disseminate rigorous scientific syntheses known as systematic reviews. The Campbell Collaboration is especially interested in reviewing high-quality evaluation studies, especially experiments. All things being equal, reviewers are more confident in their results when they are based on several experiments rather than a few or one.
Getting good numbers on the frequency of randomized trials, however, remains somewhat elusive. The most common perception, despite the emphasis on RCTs as the "gold standard" of evaluation research, is that randomized field trials are quite rare in most areas of public policy intervention (e.g., Cook and Payne 2002; Petersen 1999). In contrast, Robert Boruch, Snyder, and DeMoya (1999) used the publication date of reported trials to examine trends in social experimentation. Across a variety of fields, their trend data show an increase in experimentation.
Another method for estimating the frequency of trials is to analyze the abstracts and citations from bibliographic databases of scientific literature. For example, Miech, Nave, and Mosteller (1998) estimated that less than 1 percent of all abstracts in the Educational Research Information Clearinghouse (ERIC) referred to RCTs of K-12 cognitive interventions. Their work converges with the early conclusions by researchers who collaborated on an international effort to build the Campbell Collaboration Social, Psychological, Educational, and Criminological Trials Register (C2-SPECTR). The 1 million or more abstracts covered by the electronic and hand searches done to build C2-SPECTR have so far resulted in less than 12,000 citations to randomized or possibly randomized trials (Petrosino et al. 2000).
But research that compares the percentage of reported trials with the total number of abstracts suffers from a "denominator" problem (Rosenthal 1998); that is, the abstracts are made up of a great number of basic research papers, advocacy papers, theoretical articles, book reviews, and other writings that have nothing to do with evaluating an intervention. In other words, estimating the number of RCTs from
American Academy of Political and Social Science 2003), and convenience store robbery (Crime Prevention Studies 2003).

NOTE: This work was supported in part by a grant from the Spencer Foundation to the Harvard Children's Initiative; a Mellon Foundation grant to the Center for Evaluation, American Academy of Arts and Sciences; and a Smith-Richardson Foundation grant to the Jerry Lee Center on Criminology, University of Pennsylvania. The article represents the sole opinion of the author and does not represent any institution, person, or funder. Many thanks to Sir Iain Chalmers, Joel Garner, Frederick Mosteller, Sean Riordan, Carol Weiss, and Stuart Yeh for their helpful suggestions.
the total abstract database is likely to produce low estimates of controlled trials, since most abstracts refer to literature in which RCTs are not relevant.
The purpose of this article is to provide a different method for estimating the frequency with which RCTs are used to evaluate children's programs. Rather than using total abstracts in particular databases as the denominator to provide estimates, I use the total outcome studies abstracted within these databases. The question I answer is: In all situations in which an outcome study of a children's program is reported, how many were RCTs?
Why a Focus on the Evaluation of Children's Programs?

More attention is being focused on evaluation of programs for children.1 For
example, the Spencer and W. T. Grant Foundations awarded the Harvard Children's Initiative2 funding to sustain a fellowship program of doctorates from diverse fields to analyze how programs for children are evaluated and how such evaluations can be improved. The Initiative also supported an Evaluation Task Force through 2001, made up of faculty members from Harvard and nearby institutions and charged with the responsibility to discuss the challenges of evaluating programs for children and develop innovations for overcoming them. Within the context of these larger, multidisciplinary efforts, I gathered information on the evaluation of child-related interventions in some major areas, including juvenile justice, child protection, education, medicine, mental health, and more general social programs.
But what exactly is a child-related intervention? Any program, including law or policy, that directly targets persons younger than eighteen (including developing fetuses in prenatal care programs), includes them as clients or participants, or has a direct measurable goal of improving their well-being, is a children's program (Petrosino 2000). For example, in criminal justice, this would include programs that provide services to child victims of crime and policies that punish juvenile offenders, and would even encompass stricter laws or treatment for adult perpetrators if outcomes included some measure of subsequent "crime against children." Programs can be directly delivered to children or their families or to larger units of analysis such as housing projects, schools, neighborhoods, and cities.
Method
To estimate the number of RCTs that are conducted when evaluating child-focused interventions, I analyzed the citation and abstract information contained in major scientific bibliographic databases. This is one method of bibliometrics and is similar to content analysis (Gauthier 1998). Although uncommon in the social sciences, bibliometric analysis has the advantage of providing a picture of research without the expense of retrieving and analyzing the original reports. Other examples besides the Miech, Nave, and Mosteller (1998) piece include Fassbender's (1997) content analysis of 109 abstracts on parapsychology, Duff's (1995) study of abstracts on the "information society," and White's (1986) examination of more than 300 dissertation abstracts in public administration. Petrosino (2000) used bibliometric analysis to determine how many abstracts to evaluation reports included description of mediating or moderating variables.
Specifically, I examined six areas of childhood intervention: education, juvenile justice, child protection, mental health, health care, and general social programming. For each of these areas, a major electronic bibliographic database was identified. Table 1 lists each intervention area, the database, the years of publications searched, the type of documents that are abstracted, and the total number of abstracts for the years searched. Three of the databases contain abstracts of published journal articles only (MEDLINE, SOCIOFILE [includes Sociological Abstracts and Social Planning/Policy & Development Abstracts], and PSYCINFO [online version of Psychological Abstracts]);3 the others abstract a variety of documents including book chapters, dissertations, government and technical reports, conference papers, and book reviews (ERIC, NCCAN [National Clearinghouse on Child Abuse and Neglect], and Criminal Justice Abstracts).
Abstracts are only a "proxy" or indirect measure of what is actually contained in evaluation reports. Given the low quality of abstracts in some of the databases, abstracts may be a very poor proxy measure. Despite the variability in the amount and quality of information reported, the type of design used in the evaluation is usually provided in the abstract. But other problems with using abstracts may challenge the findings in this research. For example, some outcome studies are not made available so as to come to the attention of bibliographic database publishers. It is unknown, and maybe unknowable, to what extent the estimates of RCTs for outcome studies cited in the databases differ from estimates for the evaluations not abstracted. A second challenge is that overlap across fields and databases is common; that is, the same RCT may be abstracted in two or more of the databases. In response, I report estimates within databases and avoid totals that would inevitably double-count the same RCTs.

TABLE 1
INTERVENTION AREAS AND MAJOR BIBLIOGRAPHIC DATABASES

Intervention Area          Database                     Years Searched               Type of Documents Abstracted
Education                  ERIC                         1996-98                      Published and unpublished documents
Juvenile justice           Criminal Justice Abstracts   1996-99                      Published and unpublished documents
Child protection           NCCAN                        1996-99                      Published and unpublished documents
Mental health              PSYCINFO                     1998                         Published journal articles
Health care                MEDLINE                      January through March 1999   Published journal articles
Social programs-general    SOCIOFILE                    1997-98                      Published journal articles

NOTE: ERIC = Education Research Information Clearinghouse; NCCAN = National Clearinghouse on Child Abuse and Neglect; PSYCINFO is the online version of Psychological Abstracts; MEDLINE = Medical Literature Analysis and Retrieval System Online; SOCIOFILE includes Sociological Abstracts and Social Planning/Policy & Development Abstracts.

To generate large enough samples of outcome evaluations, I searched two years of SOCIOFILE (1997-98), three years of ERIC (1996-98), and four years of NCCAN and Criminal Justice Abstracts (1996-99). PSYCINFO (1998) generated a significant number of potential studies in the single year searched, while MEDLINE produced enough potential hits in three months (January through March 1999). Search strategies for each database were developed and were broadly constructed so that the widest possible pool of abstracts to studies of children's programs could be identified ("potential pool").4
Once the potential pool of eligible abstracts was identified, I visually inspected each abstract to determine that it met the following criteria: (1) it abstracted a single outcome or impact study and was not a process evaluation or review of multiple studies, (2) children were either the program participants or one of the direct targets of the research, (3) the study was carried out in the field and not in a laboratory or simulation, (4) it was written in English, (5) it reported specific outcome information, and (6) it was not a single-subject evaluation. The Anglophone requirement was a necessary bias to avoid the costs of translating non-English abstracts. Specific outcome information included mention of either direction or magnitude of program effects; abstracts that described an evaluation in progress were not included. If two or more abstracts referred to the same study in one database, the one providing the most information was included. This happened only once or twice per database.
As Table 2 illustrates, only a small proportion of abstracts in the databases met the entry criteria. A majority of "potential pool" abstracts were not actual evaluations but descriptions of design and methodology, wisdom pieces or notes on evaluation theory, process or formative studies, program descriptions, or advocacy papers. One of the disappointments of this analysis was finding nearly as many articles on "how to do" evaluation as actual outcome studies.
TABLE 2
PIPELINE OF RELEVANT ABSTRACTS FOR EACH DATABASE

Database                     Total Abstracts   Potential Pool   Outcome Studies
ERIC                         56,401            419              83
Criminal Justice Abstracts   7,810             479              74
MEDLINE                      18,333            167              80
NCCAN                        3,356             250              49
PSYCINFO                     46,984            293              63
SOCIOFILE                    33,933            560              53

NOTE: ERIC = Education Research Information Clearinghouse; NCCAN = National Clearinghouse on Child Abuse and Neglect; PSYCINFO is the online version of Psychological Abstracts; MEDLINE = Medical Literature Analysis and Retrieval System Online; SOCIOFILE includes Sociological Abstracts and Social Planning/Policy & Development Abstracts.
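The "denominator" problem discussed earlier is visible in these counts. As a rough illustration (a sketch, not part of the original analysis), the ERIC row can be recomputed in a few lines:

```python
# Illustrating the "denominator" problem with the ERIC row of Table 2.
# The 83 outcome studies are a vanishing share of all abstracts, but a
# substantial share of the screened "potential pool".

total_abstracts = 56_401   # all ERIC abstracts, 1996-98
potential_pool = 419       # abstracts surviving the broad search
outcome_studies = 83       # abstracts meeting the entry criteria

share_of_all = 100 * outcome_studies / total_abstracts
share_of_pool = 100 * outcome_studies / potential_pool

print(f"{share_of_all:.2f}% of all abstracts")        # 0.15% of all abstracts
print(f"{share_of_pool:.1f}% of the potential pool")  # 19.8% of the potential pool
```

Choosing outcome studies, rather than all abstracts, as the denominator is exactly what drives the higher RCT estimates reported in the Results section.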
For each abstract of an outcome study, I determined if the study involved randomization of individuals or other units of analysis to intervention and control conditions. If a clear statement of random assignment was in the abstract, I coded it as an RCT. Unfortunately, some abstracts contain statements like "compared to controls," "subjects were assigned to treatment and control conditions," "researchers conducted an experimental evaluation of the intervention," or "participants were evenly divided into experimental and control groups." Such phrases indicate that the outcome study could have potentially been an RCT. I coded these abstracts as potentially randomized trials (PRTs). The goal was sensitivity in identifying as many controlled trials as possible. If no language concerning assignment into study conditions was used, I conservatively assumed that randomization was not used.
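The coding rule just described can be sketched as a keyword classifier. This is an illustrative reconstruction, not the author's instrument (coding was done by visual inspection), and the phrase lists are assumptions drawn from the examples quoted above:

```python
# Sketch of the coding rule: abstracts with an explicit statement of random
# assignment are coded "RCT"; abstracts with ambiguous control-group language
# are coded "PRT" (potentially randomized trial); everything else is
# conservatively coded as nonrandomized. Phrase lists are hypothetical.

RCT_PHRASES = ["randomly assigned", "random assignment", "randomized"]
PRT_PHRASES = [
    "compared to controls",
    "assigned to treatment and control",
    "experimental evaluation",
    "divided into experimental and control groups",
]

def code_abstract(text: str) -> str:
    t = text.lower()
    if any(p in t for p in RCT_PHRASES):
        return "RCT"
    if any(p in t for p in PRT_PHRASES):
        return "PRT"
    return "nonrandomized"  # conservative default when assignment is unstated

print(code_abstract("Subjects were randomly assigned to tutoring or a waitlist."))       # RCT
print(code_abstract("Participants were divided into experimental and control groups."))  # PRT
print(code_abstract("Outcomes were tracked for all program completers."))                # nonrandomized
```

Checking RCT phrases before PRT phrases mirrors the stated priority: a clear randomization statement trumps ambiguous control-group language.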
Results
Table 3 presents the results across the six databases. Estimates of RCTs and PRTs in the social sciences are much higher when comparing to the total number of outcome studies reported rather than to total abstracts. Even at the low end, RCTs and PRTs combined constitute 16 percent of education-related evaluations abstracted in ERIC and 33 percent of mental health evaluations abstracted in PSYCINFO. Using abstracts of outcome studies as the denominator, rather than total abstracts, increases the proportion of RCTs (and PRTs) reported. This is true even if one focuses exclusively on RCTs. Even at the low end, the proportions of abstracts clearly mentioning randomization were 6 percent in education (ERIC), 8 percent in general social programs (SOCIOFILE), and 11 percent in criminal justice (Criminal Justice Abstracts).

According to this analysis, RCTs appear to be the design of choice to evaluate interventions for children in medicine and health care, making up 62 percent of the reports abstracted in MEDLINE. Table 3 also points out a distinction between
TABLE 3
RANDOMIZED CONTROLLED TRIALS (RCTs) AND
POTENTIAL RANDOMIZED TRIALS (PRTs) FOR EACH DATABASE

Database                     Outcome Studies   RCTs n (%)   PRTs n (%)   RCTs + PRTs n (%)   RCTs as % of RCTs + PRTs
ERIC                         83                5 (6)        8 (10)       13 (16)             39
Criminal Justice Abstracts   74                8 (11)       6 (8)        14 (19)             57
MEDLINE                      80                49 (62)      5 (6)        54 (68)             91
NCCAN                        49                8 (16)       4 (9)        12 (25)             75
PSYCINFO                     63                10 (15)      11 (18)      21 (33)             48
SOCIOFILE                    53                4 (8)        5 (9)        9 (17)              44

NOTE: ERIC = Education Research Information Clearinghouse; NCCAN = National Clearinghouse on Child Abuse and Neglect; PSYCINFO is the online version of Psychological Abstracts; MEDLINE = Medical Literature Analysis and Retrieval System Online; SOCIOFILE includes Sociological Abstracts and Social Planning/Policy & Development Abstracts.
fields in how RCTs are reported in abstracts. Although the number of abstracts to RCTs compared to PRTs was fairly even for four of the bibliographic databases, this was not true in the medical and health care abstracts captured by MEDLINE. More than 90 percent of the RCT + PRT category could be defined as RCTs. NCCAN, with eight of twelve (75 percent) in the RCT + PRT category defined as RCTs, was the only other database with a clear majority of easily defined abstracts.
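As a check on the table's arithmetic, the percentages can be recomputed from the raw counts. The helper below is a sketch, not part of the original study; a few printed cells differ from recomputed values by a point, presumably because of rounding in the original:

```python
# Recomputing a Table 3 row from its raw counts. Shown here: Criminal
# Justice Abstracts (74 outcome studies, 8 RCTs, 6 PRTs). Percentages are
# simple rounded ratios of counts to outcome studies, plus the RCT share
# of the combined RCT + PRT category.

def row_percentages(outcome_studies: int, rcts: int, prts: int) -> dict:
    both = rcts + prts
    return {
        "RCT %": round(100 * rcts / outcome_studies),
        "PRT %": round(100 * prts / outcome_studies),
        "RCT+PRT %": round(100 * both / outcome_studies),
        "RCT share of RCT+PRT %": round(100 * rcts / both),
    }

print(row_percentages(74, 8, 6))
# {'RCT %': 11, 'PRT %': 8, 'RCT+PRT %': 19, 'RCT share of RCT+PRT %': 57}
```

These recomputed figures match the printed Criminal Justice Abstracts row exactly (11, 8, 19, 57).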
Converging Evidence

Given the trend data that Boruch, Snyder, and DeMoya (1999) have marshaled to show randomized trials on the increase, there is reason to be somewhat suspect of these data. It is reasonable to think that the proportion of randomized trials has increased in the five years or so since the abstracts were analyzed, given the surge of interest in experimental studies (Weisburd and Petrosino forthcoming).
I decided to conduct a few checks for convergence. First, I did a quick run using the National Criminal Justice Reference Service (NCJRS) database. The NCJRS database provides an online and electronically accessible database of abstracts relevant to crime and justice. NCJRS includes a wide range of published and unpublished materials with a special emphasis on documents generated by the U.S. federal government and its funding. Only documents for the most recent years, January 1998 through February 2003, were retrieved. To generate a small set of abstracts, the search strategy was designed to retrieve only those with the words
evaluation or juvenile in the title. Even though the search was abbreviated, the results converge with the earlier findings. Of the thirty-three NCJRS abstracts to outcome studies involving juveniles meeting the search criteria with publication dates during the 1998 to 2003 period, two were citations to randomized experiments (7 percent), and two abstracted evaluations that possibly used randomization (7 percent). The RCT + PRT combination was 14 percent. These proportions for NCJRS juvenile evaluations are close approximations to the earlier estimates from Criminal Justice Abstracts (11 percent RCT and 8 percent PRT).
Although it was not a systematic search, I also examined some other papers that have looked at the percentage of evaluation budgets devoted to randomized trials by funding agencies. For example, Garner and Visher (2003) report that the U.S. National Institute of Justice, America's chief funder of criminological research and evaluation studies, averaged spending about 1 to 2 percent of its total research and evaluation budget each year during the 1990s on experiments. Boruch, Snyder, and DeMoya (1999) reported that the budget for experiments by the U.S. Department of Education Office of Evaluation and Research in 1998 was slightly higher. Approximately 10 percent of the total budget for the office was spent on randomized experiments. Farrington (2003 [this issue]) reports that of the scores of evaluations being funded under the United Kingdom's ambitious Crime Reduction Programme, only the restorative justice experiments being carried out by Lawrence Sherman and Heather Strang involve random assignment.
Discussion
Considering only RCTs, estimates still range from 6 percent of education-related evaluations to 18 percent of outcome studies in mental health. Whether such numbers are adequate can be debated. Gortmaker (1998) astutely noted, following the Miech, Nave, and Mosteller presentation (1998), that "maybe the right number of trials is being done." For some interventions, such as national laws or policies, RCTs cannot be conducted. In some sense, there is still a denominator problem, as I could not determine whether randomization was ethically or practically possible in outcome evaluations where it was not used.

These brisk data, however limited, do raise some questions. Why have health care and medicine embraced randomized field trials as the design of choice for interventions with children? Shepherd (2003) notes that this was not always the case but that the ethos of "do no harm" and the increased rigor of the randomized trial (thereby being more likely to identify harmful interventions) were factors in the rise of the RCT. McCord (2003) identifies six juvenile justice programs, such as the Cambridge-Somerville Youth Study, that began with great intentions and, against all expectations, seemingly increased delinquency. It is speculation, but it may be that more evidence of harmful interventions, like that which McCord marshaled, will lead to a similar ethos in the social sciences.

The quality of abstracts in scientific bibliographic databases varies considerably. MEDLINE is clearly at the top end of the continuum, written with good detail
about the study's methods and other essentials. Most social science bibliographic databases lack this kind of detail. The quality of abstracting has ramifications for these counts, particularly in whether an abstract is categorized as referring to an RCT or a PRT.

I also do not know if these analyses would come to different conclusions without their exclusive focus on children. It is possible that RCTs are more difficult to conduct in social settings with children and that a smaller proportion should be expected than if adult participants were involved. It is also unknown how the exclusion of abstracts written in languages other than English would affect the proportions reported here, although it is perceived that RCTs (at least in social science) are quite rare outside the United States.

As these data are cross-sectional slices of bibliographic indexes, no trends are reported. It was not possible to test Boruch, Snyder, and DeMoya's (1999) conclusion that experiments are on the increase. The data from multiple sources, including those presented here, indicate a curious anomaly: experiments are on the increase but remain a small percentage of the evaluation portfolio. Perhaps RCTs and evaluation research in general are on a similar upward production trend, so that experiments remain a proportionately similar percentage of outcome studies even while increasing.
Conclusion

The sheer number of programs targeting children is enormous. In 1999, I conducted a series of interviews with the evaluation or research manager of several state government agencies charged with administering such programs. For example, one agency oversaw the administration of four thousand programs for fiscal year 1998, at a cost of $400 million. A discouraging finding of those interviews is that not only were experiments not done (I did not find one), but very few studies that would meet the definition of "outcome" evaluation were done. Most of the studies that were turned in as evaluations were tabulations of input data, that is, number of kids participating, demographics of the kids, and so on.
Any agenda to improve the evaluation of children's programs has to start with getting good outcome studies done, including randomized experiments. Mosteller (1990) has pointed out that although everyone sees the need to increase the number of good evaluations, methods for accomplishing the task are elusive. One frequently suggested method is to increase or redirect funding to provide support for experiments. Lawrence W. Sherman and his colleagues at the University of Maryland (1997), following their exhaustive review of crime prevention programs for the U.S. Congress, stated that "the federal requirement that all funded programs conduct an evaluation has resulted in almost none of them getting done."
Why this should happen is not a mystery. There are many factors involved. For one, program managers are resistant to taking funds away from operational costs to spend on an evaluation that may be equivocal at best, and cost them further funds at worst. Even when monies are allocated to evaluation, they are generally insufficient to get rigorous work done. For example, Carolyn Turpin-Petrosino and Anthony Petrosino (2003) report on an outcome evaluation of a community-policing program in four public housing units in Portland, Maine. This was conducted on approximately $10,000 before university overhead deductions (which took 25 percent in this instance) and relied on teaching assistants, students taking directed study credits, and pro bono work to conduct a socially useful evaluation. But the funding could only sustain the kind of descriptive evaluation and survey of
residents that, though a major improvement over input data alone, is still all too common in the crime prevention literature. At the end of the day, it is difficult under such circumstances to know if the program was really worth it all.
Sherman et al.'s (1997) conclusions stress a reallocation of program and evaluation funding. Instead of asking 100 percent of the sites to do an evaluation and getting little, or nothing, in return, the money generally allocated to research (usually 5 to 10 percent of the total budget) should be pooled to support rigorous, socially useful evaluations in a small number of sites. In addition, Sherman et al. advocated for a big science approach by the U.S. National Institute of Justice, similar to that of the U.S. National Institutes of Health, with more money invested in rigorous outcome evaluations. It is very encouraging that one agency, the U.S. Department of Education, has embraced this approach with the creation of its What Works Clearinghouse, Institute for Education, and Center for Evaluation. In time, such a strategy could lead to a compilation of a large pool of rigorous studies of programs for children, including RCTs.
Notes

1. This mirrors academic interest in children generally. For example, Duke University has begun a Center for Child Policy; the American Academy of Arts and Sciences has its own Initiatives for Children program; and the National Center for Infants, Toddlers and Families has started an interdisciplinary program for senior and junior fellows.
2. Formerly the Harvard Project on Schooling and Children.

3. SOCIOFILE and PSYCINFO provide citation information for dissertations but not the abstract.

4. Generally, this included combinations of terms like children, infants, youths, and so on, with terms such as evaluation, assessment, outcome, impact, effect, and so on. The exact search strategies can be obtained from the author upon request.
References

Boruch, R. F., B. Snyder, and D. DeMoya. 1999. The importance of randomized field trials in delinquency research and other areas. Paper presented to the American Academy of Arts and Sciences, 13 May, in Cambridge, MA.

Cook, Thomas D., and Donald T. Campbell. 1979. Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.

Cook, T., and M. Payne. 2002. Objecting to the objections to using random assignment in educational research. Chap. 6 in Evidence matters: Randomized trials in education research, edited by F. Mosteller and R. Boruch. Washington, DC: Brookings Institution.

Duff, Alistair S. 1995. The "information society" as paradigm: A bibliometric inquiry. Journal of Information Science 21 (5): 390-95.

Farrington, David P. 2003. British randomized experiments in crime and justice. Annals of the American Academy of Political and Social Science 589:150-69.

Farrington, David P., Lloyd E. Ohlin, and James Q. Wilson. 1986. Understanding and controlling crime. New York: Springer-Verlag.

Fassbender, Pantaleon. 1997. Parapsychology and the neurosciences: A computer-based content analysis of abstracts in the database "MEDLINE" from 1975-1995. Perceptual and Motor Skills 84 (2): 452-54.

Fitz-Gibbon, Carol T. 1999. Education: High potential not yet realized. Public Money & Management 19 (1): 33-40.

Garner, Joel, and Christy Visher. 2003. The production of criminological experiments. Evaluation Review 27:316-35.

Gauthier, Elaine. 1998. Bibliometric analysis of scientific and technological research: A user's guide to the methodology. Information System for Science and Technology Project, Statistics Canada. Available from http://collection.nlc-bnc.ca/100/200/301/statcan/science_innovation88f0006-e/1998/no008/88F0006XIB98008.pdf.

Gortmaker, Steve. 1998. Personal communication to the author at a meeting of the Evaluation Task Force for the Harvard Children's Initiative, September, in Cambridge, MA.

Macdonald, Geraldine. 1999. Evidence-based social care: Wheels off the runway? Public Money & Management 19 (1): 25-32.

McCord, Joan. 2003. Cures that harm: Unanticipated outcomes of crime prevention programs. Annals of the American Academy of Political and Social Science 587:16-30.

Miech, E. J., B. Nave, and F. Mosteller. 1998. A rare design: The role of field trials in evaluating school practices. Paper presented at the Harvard Children's Initiative Evaluation Task Force meeting, October, in Cambridge, MA.

Moore, Mark H. 1998. Personal communication to the author at a meeting of the Evaluation Task Force for the Harvard Children's Initiative, September, in Cambridge, MA.

Moses, Lincoln E., and Frederick Mosteller. 1997. Experimentation: Just do it! Chap. 12 in Festschrift in honor of Richard E. Savage, edited by B. D. Spencer. New York: Oxford University Press.

Mosteller, Frederick. 1990. Summing up. In The future of meta-analysis, edited by Kenneth W. Wachter and Miron L. Straf. Newbury Park, CA: Sage.

Petersen, Paul. 1999. Rigorous trials and tests should precede adoption of school reforms. Chronicle of Higher Education, 22 January, p. B4.

Petrosino, Anthony. 2000. Mediators and moderators in the evaluation of children's programs: Current practice and agenda for improvement. Evaluation Review 24:47-72.
Petrosino, Anthony J., Robert F. Boruch, Cath Rounding, Steve McDonald, and Iain Chalmers. 2000. The Campbell Collaboration Social, Psychological, Educational and Criminological Trials Register (C2-SPECTR) to facilitate the preparation and maintenance of systematic reviews of social and educational interventions. Evaluation Research in Education 14 (3/4): 293-307.
Rosenthal, Robert. 1998. Personal communication to the author at a meeting of the Evaluation Task Force for the Harvard Children's Initiative, September, in Cambridge, MA.

Shepherd, Jonathan P. 2003. Explaining feast or famine in randomized field trials: Medical science and criminology compared. Evaluation Review 27:290-315.

Sherman, L. W. 1998. Evidence-based policing. Washington, DC: Police Foundation.

Sherman, Lawrence W., Denise Gottfredson, Doris MacKenzie, John Eck, Peter Reuter, and Shawn Bushway. 1997. Preventing crime: What works, what doesn't, what's promising. A report to the United States Congress. College Park: University of Maryland, Department of Criminology and Criminal Justice.

Turpin-Petrosino, Carolyn, and Anthony Petrosino. 2003. An evaluation of the Portland, Maine police-housing authority partnership program. Manuscript.

Weisburd, David, and Anthony Petrosino. Forthcoming. Experiments, criminology. In Encyclopedia of social measurement, edited by Kimberly Kempf.

Weiss, Carol H. 1998. Evaluation: Methods for studying programs and policies. Upper Saddle River, NJ: Prentice Hall.

White, Jay D. 1986. Dissertations and publications in public administration. Public Administration Review 46 (3): 227-34.