
Meeting the Challenges of Evidence-Based Policy:

The Campbell Collaboration

By ANTHONY PETROSINO, ROBERT F. BORUCH, HALUK SOYDAN, LORNA DUGGAN, and JULIO SANCHEZ-MECA

Anthony Petrosino is a research fellow at the Center for Evaluation, American Academy of Arts and Sciences and coordinator for the Campbell Crime and Justice Group.

Robert F. Boruch is University Trustee Professor at the University of Pennsylvania Graduate School of Education and cochair of the Campbell Collaboration Steering Group.

Haluk Soydan is the director of the Center for Evaluation, Swedish National Board of Health and Welfare and a cochair of the Campbell Collaboration Steering Group.

Lorna Duggan is a forensic psychiatrist in the United Kingdom and reviewer for the Cochrane Collaboration.

Julio Sanchez-Meca is a psychology professor at the University of Murcia in Spain and the director of the Unit for Meta-Analysis.

ABSTRACT: Evidence-based policy has much to recommend it, but it also faces significant challenges. These challenges reside not only in the dilemmas faced by policy makers but also in the quality of the evaluation evidence. Some of these problems are most effectively addressed by rigorous syntheses of the literature known as systematic reviews. Other problems remain, including the range of quality in systematic reviews and their general failure to be updated in light of new evidence or disseminated beyond the research community. Based on the precedent established in health care by the international Cochrane Collaboration, the newly formed Campbell Collaboration will prepare, maintain, and make accessible systematic reviews of research on the effects of social and educational interventions. Through mechanisms such as rigorous quality control, electronic publication, and worldwide coverage of the literature, the Campbell Collaboration seeks to meet challenges posed by evidence-based policy.

NOTE: We appreciate the assistance of Maureen Matkovich and colleagues at the National Criminal Justice Reference Service who supplied the data in Figure 1. The first author's work was supported by grants from the Mellon Foundation to the Center for Evaluation and from the Home Office to Cambridge University. Thanks to Brandon C. Welsh for his comments on earlier drafts of the article.


DONALD Campbell (1969) was an influential psychologist who wrote persuasively about the need for governments to take evaluation evidence into account in decisions about social programs. He also recognized, however, the limitations of the evidence-based approach and the fact that government officials would be faced with a number of political dilemmas that confined their use of research. The limits of evidence-based policy and practice, however, reside not only in the political pressures faced by decision makers when implementing laws and administrative directives or determining budgets; they also reside in problems with the research evidence.

Questions such as, What works to reduce crime in communities? are not easily answered. The studies that bear on these questions are often scattered across different fields and written in different languages, are sometimes disseminated in obscure or inaccessible outlets, and can be of such questionable quality that interpretation is risky at best. How can policy and practice be informed, if not persuaded, by such a fragmented knowledge base comprising evaluative studies that range in quality? Which study, or set of studies, if any at all, ought to be used to influence policy? What methods ought to be used to appraise and analyze a set of separate studies bearing on the same question? And how can the findings be disseminated in such a way that the very people Donald Campbell cared about, the decision makers in government and elsewhere, receive findings from these analyses that they trust were not the product of advocacy?

Donald Campbell unfortunately did not live long enough to bear witness to the creation of the international collaboration named in his honor that ambitiously attempts to address some of the challenges posed by evidence-based policy. The Campbell Collaboration was created to prepare, update, and disseminate systematic reviews of evidence on what works relevant to social and educational intervention (see http://campbell.gse.upenn.edu). The target audience will include decision makers at all levels of government, practitioners, citizens, media, and researchers.

This article begins with a discussion of the rationale for the Campbell Collaboration. We then describe the precedent established by the Cochrane Collaboration in health care. This is followed by an overview of the advent and early progress of the Campbell Collaboration. We conclude with the promise of the Campbell Collaboration in meeting the challenges posed by evidence-based policy.

RATIONALE

Surge of interest in evidence-based policy

There are many influences on decisions or beliefs about what ought to be done to address problems like crime, illiteracy, and unemployment. Influential factors include ideology, politics, costs, ethics, social background, clinical experience, expert opinion, and anecdote (for example, Lipton 1992). The evidence-based approach stresses moving beyond these factors to also consider the results of scientific studies. Although few writers have articulated the deterministic view that the term "evidence-based" suggests, it is clear that the vast majority of writers argue that decision makers need to, at the very least, be aware of the research evidence that bears on policies under consideration (for example, Davies, Nutley, and Smith 2000). Certainly the implicit or explicit goal of research-funding agencies has always been to influence policy through science (Weiss and Petrosino 1999), and there have always been individuals who have articulated the need for an evidence-based approach (for example, Fischer 1978). But there has been a surge of interest, particularly in the 1990s, in arguments for research-, science-, or evidence-based policy (for example, Amann 2000; Boruch, Petrosino, and Chalmers 1999; Nutley and Davies 1999; Wiles 2001).

One indirect gauge of this surge is the amount of academic writing on the topic. For example, in Sherman's (1999) argument for evidence-based policing, decisions about where to target police strategies would be based on epidemiological data about the nature and scope of problems. The kinds of interventions employed, and how long they were kept in place, would be guided by careful evaluative studies, preferably randomized field trials. Cullen and Gendreau (2000) and MacKenzie (2000) are among those who made similar arguments about correctional treatment. Davies (1999), Fitz-Gibbon (1999), MacDonald (1999), and Sheldon and Chilvers (2000), among others, articulated views about evidence-based education and social welfare.

A more persuasive indicator that evidence-based policy is having some impact is initiatives undertaken by governments since the late 1990s. Whether due to growing pragmatism or pressures for accountability on how public funds are spent, the evidence-based approach is beginning to take root. For example, the United Kingdom is promoting evidence-based policy in medicine and the social sectors vigorously (for example, Davies, Nutley, and Smith 2000; Wiles 2001). In 1997, its Labour government was elected using the slogan, "What counts is what works" (Davies, Nutley, and Smith 2000). The 1998 U.K. Crime Reduction Programme was greatly influenced by both the University of Maryland report to Congress on crime prevention (Sherman et al. 1997) and the Home Office's own syntheses (Nuttall, Goldblatt, and Lewis 1998). In Sweden, the National Board of Health and Welfare was commissioned by the government to draft a program for advancing knowledge in the social services to ensure they are evidence based (National Board of Health and Welfare 2001).

In the United States, the Government Performance and Results Act of 1993 was implemented to hold federal agencies responsible for identifying measurable objectives and reaching them. This has led to the development of performance indicators to assess whether there is value added by agencies. The 1998 reauthorization of the Safe and Drug Free Schools and Communities Act required that programs funded under the law be research based. The news media now commonly ask why police are not using research-based eyewitness identification techniques (Gawande 2001) or why schools use ineffective drug prevention programs (for example, Cohn 2001).

All of these signs seem to indicate more than a passing interest in evidence-based policy. As Boruch (1997) noted, different policy questions require different types of scientific evidence. To implement the most effective interventions to ameliorate problems, careful evaluations are needed. An evidence-based approach to what works therefore requires that these evaluations be gathered, appraised, and analyzed and that the results be made accessible to influence relevant decisions whenever appropriate and possible.

Challenges to evidence-based policy: Evaluation studies

If evidence-based policy requires that we cull prior evaluation studies, researchers face significant challenges in doing so. For one, the relevant evaluations are not tidily reported in a single source that we can consult. Instead they are scattered across different academic fields. For example, medical, psychological, educational, and economic researchers more routinely include crime measures as dependent variables in their studies (for example, Greenberg and Shroder 1997). These evaluations are as relevant as those reported in justice journals.

Coinciding with fragmentation, evaluation studies are not regularly published in academic journals or in outlets that are readily accessible. Instead a large percentage of evaluative research resides in what Sechrest, White, and Brown (1979) called the fugitive literature. These are government reports, dissertations and master's theses, conference papers, technical documents, and other literature that is difficult to obtain. Lipsey (1992), in his review of delinquency prevention and treatment studies, found 4 in 10 were reported in this literature. Although some may argue that unpublished studies are of lesser quality because they have not been subjected to blind peer review as journal articles are, this is an empirical question worthy of investigation. Such an assertion, at the very least, ignores the high-quality evaluations done by private research firms. Evaluators in such entities do not have organizational incentives to publish in peer-reviewed journals, as professors or university-based researchers do.

Relevant studies are not reported solely within the confines of the United States or other English-speaking nations. Recently the Kellogg Foundation supported an international project that has identified more than 30 national evaluation societies, including those in Brazil, Ghana, Korea, Sri Lanka, Thailand, and Zimbabwe (see http://home.wmis.net/~russon/icce/eorg.htm). One argument is that it is not important to consider evaluations conducted outside of one's jurisdiction because the cultural context will be very different. This is another assertion worthy of empirical test. Harlen (1997) noted that many in education believe evaluative studies and findings from different jurisdictions are not relevant to each other. This ignores the reality that interventions are widely disseminated across jurisdictions without concern for context. For example, the officer-led drug prevention program known as D.A.R.E. (Drug Abuse Resistance Education) is now in 44 nations (Weiss and Petrosino 1999). Harlen (1997) suggested that we must investigate the role of context across these evaluations. This is most effectively done through rigorous research reviews. Such reviews, however, are difficult with international literature without translation capabilities.

FIGURE 1
CUMULATIVE GROWTH OF EVALUATION STUDIES: NATIONAL CRIMINAL JUSTICE REFERENCE SERVICE DATABASE
SOURCE: The National Criminal Justice Reference Service (www.ncjrs.org).

Another challenge to gathering evaluative studies is that there is no finite time by which the production of this evidence stops. Research, including evaluation, is cumulatively increasing (Boruch, Petrosino, and Chalmers 1999). An example is provided in Figure 1. Consider the cumulative growth of studies indexed either as evaluation or as evaluative study by the National Criminal Justice Reference Service for its database of abstracts. The data underscore the challenge faced in coping with the burgeoning evaluation literature.

It would be good to identify and acquire all relevant evaluations and keep abreast of new studies as they become available. It would be even better if all evaluations were of similar methodological quality and came to the same conclusion about the effectiveness of the intervention.

Unfortunately not all evaluations are created equal. The results across studies of the same intervention will often differ, and sometimes those differences will be related to the quality of the methods used (see Weisburd, Lum, and Petrosino 2001). This highlights the importance of appraising evaluation studies for methodological quality.

Challenges to evidence-based policy: Reviewing methods

But what is the best way to draw upon these existing evaluation studies to understand what works and develop evidence-based policy? Certainly, relying on one or a few studies when others are available is very risky because it ignores evidence. For example, relying on one study if five relevant studies have been completed means that we ignore 80 percent of the evidence (Cook et al. 1992). It is true that the one study we pick may be representative of all the other studies, but as mentioned previously, studies in an area often conflict rather than converge. Evaluation studies themselves are part of a sampling distribution and may differ because of chance probability. Until we do a reasonable job of collecting those other studies, an assertion of convergence based on an inadequate sampling of studies is unsupported.

Criminologists have generally understood the problem of drawing conclusions from incomplete evidence and have a half century's experience in conducting broad surveys of the literature to identify relevant evaluations (for example, Bailey 1966; Kirby 1954; Lipton, Martinson, and Wilks 1975; Logan 1972; Witmer and Tufts 1954). Although a few of these earlier syntheses were remarkably exhaustive, the science of reviewing that developed in the 1970s focused attention on the methods used in reviews of research.

Methods for analyzing separate but similar studies have a century of experience behind them (Chalmers, Hedges, and Cooper in press), but it was not until the 1970s that reviews became scrutinized like primary reports of surveys and experiments. This was ironic, as some of the most influential and widely cited articles across fields were literature reviews (Chalmers, Hedges, and Cooper in press). Beginning in the 1970s, not only were the traditional reviews of evaluations under attack, but the modern statistical foundation for meta-analysis or quantitative analysis of study results was also being developed (for example, Glass, McGaw, and Smith 1981; Hedges and Olkin 1985). Research confirmed that traditional reviews, in which researchers make relative judgments about what works by using some unknown and inexplicit process of reasoning, were fraught with potential for bias (Cooper and Hedges 1994). Quinsey (1983) underscored how such bias could affect conclusions about research following his review of research on sex offender treatment effects: "The difference in recidivism across these studies is truly remarkable; clearly by selectively contemplating the various studies, one can conclude anything one wants" (101).


One major problem noted with regard to traditional reviews was their lack of explicitness about the methods used, such as why certain studies were included, the search methods used, and how the studies were analyzed. This includes the criteria used to judge whether an intervention was effective or not. Consider the debate over the conclusions in the Lipton, Martinson, and Wilks (1975) summary of more than 200 correctional program evaluations, briskly reported first by Martinson (1974). Despite finding that nearly half of the evaluations reported in Martinson's article had at least one statistically significant finding in favor of treatment, his overall conclusions were gloomy about the prospects of correctional intervention. The criterion for success was not readily apparent, but it must have been strict (Palmer 1975).

These earlier reviews, like Martinson's (1974), were also problematic because they seemed to rely on statistical significance as the criterion for judging whether an intervention was successful. This later proved to be problematic, as research showed that statistical significance is a function not only of the size of the treatment effect but of methodological factors such as sample size (for example, Lipsey 1990). For example, large and meaningful effects reported in studies with small samples would be statistically insignificant; the investigator and traditional reviewer would consider the finding evidence that treatment did not succeed. Given that most social science research uses small samples, moderate and important intervention effects have often been interpreted as statistically insignificant and therefore as treatment failures.
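
To make this concrete, the sketch below (ours, not the authors'; the recidivism figures are invented) runs a two-proportion z-test on the same 10-point treatment effect at two sample sizes; only the larger trial reaches conventional significance.

```python
# Hypothetical illustration: an identical treatment effect is judged
# "insignificant" or "significant" purely as a function of sample size.
from math import sqrt, erf

def two_prop_z_pvalue(r_t: int, n_t: int, r_c: int, n_c: int) -> float:
    """Two-sided p-value for the difference between two proportions
    (normal approximation with a pooled standard error)."""
    p_t, p_c = r_t / n_t, r_c / n_c
    p_pool = (r_t + r_c) / (n_t + n_c)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Same effect in both trials: recidivism drops from 50% to 40%.
print(two_prop_z_pvalue(20, 50, 25, 50))      # 50 per arm:  p ~ 0.32
print(two_prop_z_pvalue(200, 500, 250, 500))  # 500 per arm: p ~ 0.002
```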

Systematic reviews

Evidence-based policy requires overcoming these and other problems with the evaluation studies and methods for reviewing them. There is consensus among those who advocate evidence-based policy that systematic reviews are an important tool in this process (Davies 1999; Nutley, Davies, and Tilley 2000). In systematic reviews, researchers attempt to gather relevant evaluative studies, critically appraise them, and come to judgments about what works using explicit, transparent, state-of-the-art methods. Systematic reviews will include detail about each stage of the decision process, including the question that guided the review, the criteria for studies to be included, and the methods used to search for and screen evaluation reports. They will also detail how analyses were done and how conclusions were reached.

The foremost advantage of systematic reviews is that when done well and with full integrity, they provide the most reliable and comprehensive statement about what works. Such a final statement, after sifting through the available research, may be, "We know little or nothing; proceed with caution." This can guide funding agencies and researchers toward an agenda for a new generation of evaluation studies. This can also include feedback to funding agencies where additional process, implementation, and theory-driven studies would be critical to implement.

Systematic reviews, therefore, are reviews in which rigorous methods are employed regardless of whether meta-analysis is undertaken to summarize, analyze, and combine study findings. When meta-analysis is used, however, estimates of the average impact across studies, as well as how much variation there is and why, can be provided. Meta-analysis can generate clues as to why some programs are more effective in some settings and others are not. Meta-analysis is also critical in ruling out the play of chance when combining results (Hedges and Olkin 1985).
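
A minimal sketch of this kind of pooling, under the fixed-effect assumptions described by Hedges and Olkin (1985) and with invented effect sizes, weights each study by the inverse of its variance and uses Cochran's Q to gauge whether the studies vary more than chance alone would predict:

```python
# Hypothetical illustration of inverse-variance ("fixed-effect") pooling.
from math import sqrt

def fixed_effect_meta(effects, variances):
    """Pooled effect, its standard error, and Cochran's Q for a set of
    study effect sizes with known sampling variances."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se_pooled = sqrt(1.0 / sum(weights))
    # Q: weighted squared deviations from the pooled effect; under
    # homogeneity it is chi-square distributed with k - 1 df.
    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))
    return pooled, se_pooled, q

# Five invented standardized mean differences and their variances.
d, se, q = fixed_effect_meta([0.30, 0.10, 0.45, 0.20, 0.25],
                             [0.02, 0.05, 0.04, 0.01, 0.03])
print(f"pooled d = {d:.2f}, "
      f"95% CI = {d - 1.96 * se:.2f} to {d + 1.96 * se:.2f}, Q = {q:.2f}")
```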

Systematic reviews have other byproducts. They can reconcile differences between studies. Because each study document is scrutinized, systematic reviews can underscore deficiencies in report writing and lead to better systems for collecting data required by reviewers. Reviews also ensure that relevant evaluations, which may have been ignored and long forgotten, continue to be used. It is satisfying to investigators to find their studies still considered 20 years or more after completion (Petrosino forthcoming).

Systematic reviews have been influential. This is understandable, as Weiss (1978) predicted that policy makers would find good syntheses of research compelling because they would reconcile conflicting studies when possible and provide a comprehensive resource for their aides to consult. Hunt (1997) discussed how the results from meta-analysis contradicted conclusions in earlier traditional reviews, in areas such as psychotherapy, class size, and school funding. Palmer (1994) noted that meta-analyses like Lipsey's (1992) helped to counter the prevailing pessimism about the efficacy of correctional treatment generated by earlier reviews.

Challenges to evidence-based policy: Current systematic reviews

There seems to be growing convergence among researchers and others that systematic reviews are a critical tool for evidence-based policy (for example, Nutley, Davies, and Tilley 2000). This is reflected in the decision by the United Kingdom's most prestigious social science funding agency, the Economic and Social Research Council, to support an evidence-based policy and practice initiative featuring systematic reviews (see http://www.esrc.ac.uk/EBPesrcUKcentre.htm). On closer scrutiny, however, we find that there are some challenges to the use of systematic reviews in evidence-based policy as they are currently done.

One problem is that there is often a lack of transparency in the review process. Completed syntheses are generally submitted to peer-reviewed journals, long after the research question has been determined and the methods selected. Except for rare occasions in which reviewers submit a grant proposal for funding, researchers do not a priori describe why they are doing the review and what methods they will employ. Without transparent processes from beginning to end, ex post facto decisions that can influence a review and slant it knowingly or unknowingly toward one conclusion or another are possible. This is especially important in persuading policy makers who want to be sure that research is not the product of slick advocacy.

Because there is no uniform quality control process, systematic reviews, like evaluations, range on a continuum of quality. In some cases, the quality is due to the methods employed. Some reviewers may use meta-analytic methods but inadequately describe their decision process. Other reviews may detail exhaustive search processes but then use questionable methods for analysis. It is difficult for even the discerning reader to know how trustworthy the findings are from a particular review. In other cases, the quality of the review is due to the way it is reported. Sometimes the nature of the outlet dictates how explicit and transparent the reviewers can be. Some reviews, particularly those prepared for academic print journals with concerns about page lengths, are briskly written. Dissertations and technical reports are usually very detailed but are less accessible to readers. Systematic reviews may all contain Materials and Methods sections, but the level of detail in each may vary depending on the dissemination outlet.

Policy makers and other interested users of research have a wide range of questions about what works to which they want answers. Although funding agencies will sponsor reviews at times to meet these information needs, reviews are generally conducted because of the interests of individual researchers. For example, in criminology, offender treatment has been a controversial and popular topic and the target of most systematic review activity (Petrosino 2000). Other important areas for review such as police training, services for crime victims, court backlog interventions, and so on have been inadequately covered. Evidence-based policy requires that evaluations in these areas be synthesized, even if they are less relevant to longstanding criminological debates.

Even if reviews did cover many more questions than they currently do, they are often not disseminated in such ways that decision makers and the public can get them. All too often, reviews are published by academics in peer-reviewed journals, outlets that are not consulted by policy makers. In fact, decision makers often get their information about research from news media, which can selectively cover only a few of the thousands of evaluative studies relevant to crime and justice reported each year (Weiss and Singer 1988). Tyden (1996) wrote that publishing an academic paper to disseminate to policy makers was akin to shooting it over a wall, blindly, into water. The path to utilization by decision makers was haphazard at best.

To examine dissemination further, we analyzed the citations for 302 meta-analyses of psychological and educational treatment studies reported by Lipsey and Wilson (1993). Nearly two-thirds of those listed in the reference section were published in academic journals. These were scattered across 93 journals during the years covered (1977-1991). Only the Review of Educational Research published an average of one review or more per year. Unless researchers were using other unknown mechanisms such as oral briefings and internal memos to communicate to decision makers, it seems very unlikely that this evidence got into the hands of anyone other than research specialists working in narrow areas.

Most systematic reviews also tend to be one-off exercises, conducted only as funding, interest, or time permits. Rarely are they updated to take into account new studies that are relevant to the review, a challenge that is more significant given the cumulative growth of evaluation reports highlighted in Figure 1. Yet years may go by before an investigator pursues funding to update an existing review. The methodology and statistical foundation for meta-analysis is still rapidly evolving, with improved techniques and new software being developed to solve data problems. It is rare to find reviews that take into account these new techniques, conducting analyses to determine if results using different methods converge.

Some reviewers publish in print journals, an inefficient method for disseminating reviews. Because print journals find them too costly, cases in which reviewers take into account cogent criticisms by others and conduct reanalysis are rarely reported. Unlike medical journals, criminological journals do not have a strong tradition of routinely printing letters to the editor that respond to criticisms with additional analyses. Some journals also have lengthy lag times between submission and publication, delaying the dissemination of evidence even further.

THE COCHRANE COLLABORATION

Are there ways of overcoming challenges to using systematic reviews in evidence-based policy? A precedent for doing so was established in the health care field. Archie Cochrane was a noted epidemiologist who wrote persuasively about the need for medical practitioners to take scientific evidence into account in their practice. Cochrane (1972) lamented the fact that although randomized trials had shown some practices to be effective and others harmful, clinical practitioners and medical schools were ignoring the information. He later (Cochrane 1979) wondered why the medical sciences had not yet organized all relevant trials into subspecialties so that decision makers could take such evidence into account. A protégé of Cochrane, an obstetrician turned researcher named Iain Chalmers, soon identified and reviewed randomized trials relevant to childbirth and prenatal interventions (see www.cochrane.org).

In the early 1990s, the U.K. National Health Service (NHS), under the direction of Sir Michael Peckham, initiated the Research and Development Programme with the goal of establishing an evidence-based resource for health care. Because of the success of their earlier project on childbirth and pregnancy studies, Chalmers and his colleagues were asked to extend this effort to all areas of health care intervention. The U.K. Cochrane Centre was established, with core funding from the NHS, to begin the work. It soon became clear that the amount of work far surpassed the capacity of one center or one nation. In 1993, in honor of his mentor, Chalmers and his colleagues launched the international Cochrane Collaboration to "help people make well-informed decisions about healthcare by preparing, maintaining and promoting the accessibility of systematic reviews of the effects of healthcare interventions." In just 8 years, the Cochrane Collaboration has been able to organize thousands of individuals worldwide to contribute to its work. Much more information about the Cochrane Collaboration can be found at its Web site, www.cochrane.org. But the Cochrane Collaboration, in a very brief time, established a number of mechanisms to address challenges to using systematic reviews in evidence-based health care policy.

For example, collaborative review groups (CRGs) are responsible for the core work of systematic reviewing. CRGs are international networks of individuals interested in particular health areas such as breast cancer, epilepsy, injuries, and stroke. Each CRG has an editorial board, generally comprising persons with scientific or practical expertise in the area, who are responsible for quality control of protocols (plans) and completed drafts of reviews.

It is useful to examine how a Cochrane review is prepared. First, individuals approach the CRG in which the intended topic area seems appropriate. Once a title for the proposed review is agreed on, it is circulated to ensure that no other similar reviews are being prepared by reviewers from other CRGs. Reducing overlap and duplication is a crucial goal for the Cochrane Collaboration, as scarce resources must be used judiciously. Reviews are needed in so many areas of health care that wide coverage is a priority. Once the title is agreed on, the reviewers must then submit a protocol for the review. The protocol is a detailed plan that spells out a priori the question to be answered, the background to the issue, and the methods to be employed. The protocol then goes through a round or two of criticism by the CRG editors. It must conform to a certain template to facilitate electronic publication using the Cochrane Collaboration's software, Review Manager, or RevMan. Once the protocol is approved, it is published in the next edition of the quarterly electronic publication, the Cochrane Library, and made available to all subscribers for comment and criticism. The editorial board must decide which criticisms should be taken into account.

The reviewers then prepare the review according to the protocol. Although deviation from the plan is sometimes necessary, the protocol forces a prospective, transparent process. Post hoc changes are readily detected, and analyses can be done to determine if they altered findings. After the reviewers conduct the review and write up a draft, it too is submitted to the CRG editorial board. Once the review draft is completed, the editors critique it again. It is also sent to external readers, including researchers as well as practitioners and patients. This round of criticism is designed to improve the methods in the review and to ensure that the final review is written as accessibly as possible to a nonresearch audience, including health care patients, providers, and citizens. For each completed Cochrane review, the Cochrane Consumer Network crafts a one-page synopsis written accessibly for patients and other consumers and posts the synopses at its Web site (see http://www.cochrane.org/cochrane/consumer.htm). Once the review is approved, it is published in the next issue of the Cochrane Library and again made available for external criticism by subscribers. Again the editors and the reviewers have to determine which of these criticisms ought to be taken into account in a subsequent review update. Cochrane reviews must be updated every 2 years to take into account new studies meeting eligibility criteria.

Another important mechanism for the Cochrane Collaboration is the methods groups. These are international networks of individuals who conduct systematic reviews focused on the methods used in systematic reviews and primary studies. For example, a methods group might collect all systematic reviews in which randomized trials are compared to nonrandomized trials. In this review, they would seek to determine if there is a consistent relationship between the reporting of random assignment and results. Their objective is to make sure that decisions in reviews, such as setting eligibility criteria, are informed as much as possible by evidence. The Cochrane Collaboration is also facilitated by 15 centers around the world; they promote the interests of the collaboration within host countries, train people in doing systematic reviews, and identify potential collaborators and end users. Finally, Cochrane fields and networks, such as the Cochrane Consumer Network, focus on dimensions of health care other than health problems and work to ensure that their priorities are reflected in systematic reviews.

The main product of the Cochrane Collaboration is the Cochrane Library. This electronic publication is updated quarterly and made available via the World Wide Web or through CD-ROMs mailed to subscribers. The January 2001 issue contained 1,000 completed reviews and 832 protocols (or plans for a review) in one central location using the same format. Wolf (2000) noted that the uniformity allows the reader to understand and find all of the necessary information in each review, facilitating training and use. Another important feature of the Cochrane Library is the Cochrane Controlled Trials Register (CCTR). The CCTR has more than a quarter million citations to randomized trials relevant to health care, an important resource in assisting reviewers to find studies so they can prepare and maintain their reviews.

Empirical studies have reported that Cochrane syntheses are more rigorous than non-Cochrane systematic reviews and meta-analyses published in medical journals. For example, Jadad and his colleagues (1998) found that Cochrane reviews provided more detail, were more likely to test for methodological effects, were less likely to be restricted by language barriers, and were updated more often than print journal reviews. The Cochrane Library is quickly becoming recognized as the best single source of evidence on the effectiveness of health care interventions (Egger and Davey-Smith 1998). Reviews by the Cochrane Collaboration are frequently used to generate and support guidelines by government agencies such as the National Institute for Clinical Excellence (for example, see Chalmers, Hedges, and Cooper in press). In 1999, the U.S. National Institutes of Health made the Cochrane Library available to all 16,000 of its employees. It is now accessible by all doctors in Brazil, the U.K. NHS, and all U.K. universities (Mark Starr, personal communication, 2001). Finally, the queen recognized Iain Chalmers for his efforts with the United Kingdom's greatest honor: knighthood!

Thus the Cochrane Collaboration has been able to meet many of the challenges posed by evidence-based policy in health care. By requiring detailed protocols, the Cochrane Collaboration addresses the lack of transparency in most systematic reviews of research. Through rigorous quality control, it produces commendable reviews. By publishing electronically, dissemination is quickened, and the ability to update and correct the reviews in light of new evidence is realized. By providing an unbiased, single source for evidence and producing reviews, abstracts, and synopses for different audiences, it facilitates utilization.

THE CAMPBELL COLLABORATION

With the success of the Cochrane Collaboration, the same type of organization was soon suggested for reviewing social and educational evaluations. Adrian Smith (1996), president of the Royal Statistical Society, issued a challenge when he said,

As ordinary citizens ... we are, through the media, confronted daily with controversy and debate across a whole spectrum of public policy issues. Obvious topical examples include education (what does work in the classroom?) and penal policy (what is effective in reducing reoffending?). Perhaps there is an opportunity ... to launch a campaign directed at developing analogues to the Cochrane Collaboration, to provide suitable evidence bases in other areas besides medicine [emphasis added]. (378)

A number of individuals across different fields and professions organized and met to determine how best to meet this challenge. Several exploratory meetings were held during 1999, including two headed by the School of Public Policy at University College-London and one organized in Stockholm by the National Board of Health and Welfare. These meetings, which included researchers and members of the policy and practice communities, provided evidence that the development of an infrastructure similar to Cochrane's for social and educational intervention, including criminal justice, should be vigorously pursued (Davies, Petrosino, and Chalmers 1999; www.ucl.ac.uk/spp/publications/campbell.htm).

Early days and progress

The Campbell Collaboration was officially inaugurated in February 2000 at a meeting in Philadelphia, with more than 80 individuals from 12 nations participating. The Campbell Collaboration was founded on nine principles developed first by the Cochrane Collaboration (see Table 1). At the February 2000 inaugural meeting, it was agreed that the headquarters (secretariat) should reside at the University of Pennsylvania. An international eight-member steering group was officially designated to guide its early development.

The first three Campbell coordinating groups (similar to Cochrane's CRGs) were created to facilitate systematic reviews in their areas: education, social welfare, and crime and justice. The Campbell Education Coordinating Group is focused on developing protocols and reviews in the following critical areas: truancy, mathematics learning, science learning, information technology learning, work-related learning and transferable skills, assessment and learning, comprehensive school reform, school leadership and management, professional education, and economics and education. The Campbell Social Welfare Coordinating Group has also organized itself into several areas: social work, transportation, housing, social casework with certain ethnic clientele, child welfare, and employment programs within the welfare system. The early progress of the Campbell Crime and Justice Coordinating Group is described elsewhere in this issue (see Farrington and Petrosino 2001).

TABLE 1
PRINCIPLES OF THE CAMPBELL COLLABORATION
SOURCE: C2 Steering Group (2001).

The Campbell Collaboration Methods Group was developed to increase the precision of Campbell reviews by conducting reviews to investigate the role of methodological and statistical procedures used in systematic reviews, as well as characteristics in original studies (see http://web.missouri.edu/~c2method). Three methods subgroups were created during the past year, including statistics, quasi-experiments, and process and implementation subgroups. In conjunction with the Campbell secretariat and the coordinating groups, the methods group has taken the lead in developing a preliminary quality control process based on the Cochrane model (see Appendix A). The Campbell Communication and Dissemination Group will develop best practice in translating results to a variety of end users, including policy makers, practitioners, media, and the public.

FIGURE 2
C2-SPECTR
SOURCE: Petrosino et al. (2000).

To facilitate the work of reviewers, the Campbell Collaboration Social, Psychological, Educational and Criminological Trials Register (C2-SPECTR) is in development. As Figure 2 shows, preliminary work toward C2-SPECTR has already identified more than 10,000 citations to randomized or possibly randomized trials (Petrosino et al. 2000), and this has now been augmented with new additions to the data file. Like the CCTR in health care, C2-SPECTR should serve as a productive resource and facilitate preparation and maintenance of reviews. Plans to build a complementary database of nonrandomized evaluations are being discussed.

During 2000, Campbell and Cochrane groups mutually participated in the NHS Wider Public Health Project (see www.york.ac.uk/crd/publications/wphp.htm), an attempt to collate evidence from systematic reviews relevant to the United Kingdom's intended health policies. As the NHS now considers education, social welfare, and criminal justice directly or indirectly related to public health, Campbell groups were involved (for example, Petrosino 2000). Several hundred systematic, or possibly systematic, reviews were identified in these areas and can now be used to help us map the terrain and identify target areas where high-quality reviews are needed.

Funding from the Ministry of Social Affairs of Denmark has been acquired to establish a Campbell Center in Copenhagen to facilitate efforts in the Nordic region. Resources have also been secured to create the Meta-Analysis Unit at the University of Murcia in Spain (see http://www.um.es/sip/unidadmal.html), the first step in developing a Campbell Center for Mediterranean nations. Objectives of these centers include facilitating reviews through training, identifying end users and collaborators, and promoting dissemination and utilization.

These are just a few of the many developments in the early days of the Campbell Collaboration. Although the collaboration anticipates creating an electronic publication that will make available C2-SPECTR and other helpful resources, its critical product will be high-quality systematic reviews. For the Campbell Collaboration to achieve the kind of success Cochrane has obtained in the health care field, it will have to ensure that these reviews are as unbiased and technically sound as possible. Appendix B provides a preliminary model of stages in a Campbell review.

CONCLUSION

Donald Campbell articulated an evidence-based approach before the methods of systematic reviewing and meta-analysis became staples of empirical inquiry. Still we think he would be pleased with the efforts of hundreds of individuals who are working worldwide to advance the international collaboration that bears his name.

The challenges of evidence-based policy are many. We think the Campbell Collaboration will help to meet some of them. For example, with rigorous quality control and protocols, the Campbell Collaboration will attempt to produce the same level of transparency with unbiased reviews for which the Cochrane Collaboration is lauded. Through electronic publication of reviews in a single source (that is, a Campbell Library), it will attempt to extend beyond communicating with researchers and facilitate dissemination and utilization by decision makers and ordinary citizens. Maintaining reviews, taking into account evidence worldwide, and preparing reviews using the best science available should enhance the use of Campbell reviews.

Systematic reviews are certainly an important tool for evidence-based policy, but like Campbell before us, we do not wish to zealously oversell scientific evidence. Systematic reviews will not resolve all enduring political and academic conflicts, nor will they often, if at all, provide neat and tidy prescriptions to decision makers on what they ought to do. In their proper role in evidence-based policy, they will enlighten by explicitly revealing what is known from scientific evidence and what is not. They will also generate more questions to be resolved. One unanticipated benefit in the short life of the Campbell Collaboration is the sustained forum it provides for discussions about evaluation design and how to engage people from around the world.

Criminology is a noble profession because it aims to reduce the misery stemming from crime and injustice. To the extent that this collaboration can fulfill Don Campbell's vision of assisting people in making well-informed decisions, it will help criminologists stay true to criminology's original and noble intent.

APPENDIX A
A FLOW CHART OF THE STEPS IN THE CAMPBELL REVIEW QUALITY CONTROL PROCESS

SOURCE: C2 Steering Group (2001).

APPENDIX B
STAGES OF A CAMPBELL SYSTEMATIC REVIEW

SOURCE: C2 Steering Group (2001).

References

Amann, Ron. 2000. Foreword. In What Works? Evidence-Based Policy and Practice in Public Services, ed. Huw T. O. Davies, Sandra Nutley, and Peter C. Smith. Bristol, UK: Policy Press.

Bailey, William C. 1966. Correctional Outcome: An Evaluation of 100 Reports. Journal of Criminal Law, Criminology and Police Science 57:153-60.

Boruch, Robert F. 1997. Randomized Experiments for Planning and Evaluation: A Practical Guide. Thousand Oaks, CA: Sage.

Boruch, Robert F., Anthony Petrosino, and Iain Chalmers. 1999. The Campbell Collaboration: A Proposal for Multinational, Continuous and Systematic Reviews of Evidence. In Proceedings of the International Meeting on Systematic Reviews of the Effects of Social and Educational Interventions, July 15-16, ed. Philip Davies, Anthony Petrosino, and Iain Chalmers. London: University College-London, School of Public Policy.

Campbell, Donald T. 1969. Reforms as Experiments. American Psychologist 24:409-29.

Chalmers, Iain, Larry V. Hedges, and Harris Cooper. In press. A Brief History of Research Synthesis. Evaluation & the Health Professions.

Cochrane, Archie L. 1972. Effectiveness and Efficiency: Random Reflections on Health Services. London: Nuffield Provincial Hospitals Trust.

—. 1979. 1931-1971: A Critical Review, with Particular Reference to the Medical Profession. In Medicines for the Year 2000. London: Office of Health Economics.

Cohn, Jason. 2001. Drug Education: The Triumph of Bad Science. Rolling Stone 869:41-42, 96.

Cook, Thomas D., Harris Cooper, David S. Cordray, Heidi Hartmann, Larry V. Hedges, Richard J. Light, Thomas A. Louis, and Frederick Mosteller, eds. 1992. Meta-Analysis for Explanation. New York: Russell Sage.

Cooper, Harris C. and Larry V. Hedges, eds. 1994. The Handbook of Research Synthesis. New York: Russell Sage.

C2 Steering Group. 2001. C2: Concept, Status, Plans. Overarching Grant Proposal. Philadelphia: Campbell Collaboration Secretariat.

Cullen, Francis T. and Paul Gendreau. 2000. Assessing Correctional Rehabilitation: Policy, Practice, and Prospects. In Policies, Processes, and Decisions of the Criminal Justice System, Criminal Justice 2000, vol. 3, ed. Julie Horney. Washington, DC: National Institute of Justice.

Davies, Huw T. O., Sandra M. Nutley, and Peter C. Smith, eds. 2000. What Works? Evidence-Based Policy in Public Services. Bristol, UK: Policy Press.

Davies, Phillip. 1999. What Is Evidence-Based Education? British Journal of Educational Studies 47:108-21.

Davies, Philip, Anthony Petrosino, and Iain Chalmers, eds. 1999. Proceedings of the International Meeting on Systematic Reviews of the Effects of Social and Educational Interventions, July 15-16. London: University College-London, School of Public Policy.

Egger, Matthias and George Davey-Smith. 1998. Bias in Location and Selection of Studies. British Medical Journal 316:61-66.

Farrington, David P. and Anthony Petrosino. 2001. The Campbell Collaboration Crime and Justice Group. Annals of the American Academy of Political and Social Science 578:35-49.

Fischer, Joel. 1978. Does Anything Work? Journal of Social Service Research 1:215-43.

Fitz-Gibbon, Carol. 1999. Education: The High Potential Not Yet Realized. Public Money & Management 19:33-40.

Gawande, Atul. 2001. Under Suspicion: The Fugitive Science of Criminal Justice. New Yorker, 8 Jan., 50-53.

Glass, Gene V., Barry McGaw, and Mary L. Smith. 1981. Meta-Analysis in Social Research. London: Sage.

Greenberg, David and Mark Shroder. 1997. The Digest of Social Experiments. 2d ed. Washington, DC: Urban Institute Press.

Harlen, Wynne. 1997. Educational Research and Educational Reform. In The Role of Research in Mature Educational Systems, ed. Seamus Hegarty. Slough, UK: National Foundation for Educational Research.

Hedges, Larry V. and Ingram Olkin. 1985. Statistical Methods for Meta-Analysis. New York: Academic Press.

Hunt, Morton. 1997. The Story of Meta-Analysis. New York: Russell Sage.

Jadad, Alejandro R., Deborah J. Cook, Alison Jones, Terry P. Klassen, Peter Tugwell, Michael Moher, and David Moher. 1998. Methodology and Reports of Systematic Reviews and Meta-Analyses: A Comparison of Cochrane Reviews with Articles Published in Paper-Based Journals. Journal of the American Medical Association 280:278-80.

Kirby, Bernard C. 1954. Measuring Effects of Criminals and Delinquents. Sociology and Social Research 38:368-74.

Lipsey, Mark W. 1990. Design Sensitivity: Statistical Power for Experimental Research. Newbury Park, CA: Sage.

—. 1992. Juvenile Delinquency Treatment: A Meta-Analytic Inquiry into the Variability of Effects. In Meta-Analysis for Explanation, ed. Thomas D. Cook, Harris Cooper, David S. Cordray, Heidi Hartmann, Larry V. Hedges, Richard J. Light, Thomas A. Louis, and Frederick Mosteller. New York: Russell Sage.

Lipsey, Mark W. and David B. Wilson. 1993. The Efficacy of Psychological, Educational and Behavioral Treatment: Confirmation from Meta-Analysis. American Psychologist 48:1181-209.

Lipton, Douglas S. 1992. How to Maximize Utilization of Evaluation Research by Policymakers. Annals of the American Academy of Political and Social Science 521:175-88.

Lipton, Douglas S., Robert Martinson, and Judith Wilks. 1975. The Effectiveness of Correctional Treatment. New York: Praeger.

Logan, Charles H. 1972. Evaluation Research in Crime and Delinquency: A Reappraisal. Journal of Criminal Law, Criminology and Police Science 63:378-87.

MacDonald, Geraldine. 1999. Evidence-Based Social Care: Wheels Off the Runway? Public Money & Management 19:25-32.

MacKenzie, Doris Layton. 2000. Evidence-Based Corrections: Identifying What Works. Crime & Delinquency 46:457-71.

Martinson, Robert. 1974. What Works? Questions and Answers About Prison Reform. Public Interest 10:22-54.

National Board of Health and Welfare. 2001. A Programme of National Support for the Advancement of Knowledge in the Social Services. Stockholm: Author.

Nutley, Sandra and Huw T. O. Davies. 1999. The Fall and Rise of Evidence in Criminal Justice. Public Money & Management 19:47-54.

Nutley, Sandra, Huw T. O. Davies, and Nick Tilley. 2000. Letter to the Editor. Public Money & Management 20:3-6.

Nuttall, Christopher, Peter Goldblatt, and Chris Lewis, eds. 1998. Reducing Offending: An Assessment of Research Evidence on Ways of Dealing with Offending Behaviour. London: Home Office.

Palmer, Ted. 1975. Martinson Revisited. Journal of Research in Crime and Delinquency 12:133-52.

—. 1994. A Profile of Correctional Effectiveness and New Directions for Research. Albany: State University of New York Press.

Petrosino, Anthony. 2000. Crime, Drugs and Alcohol. In Evidence from Systematic Reviews of Research Relevant to Implementing the "Wider Public Health" Agenda, Contributors to the Cochrane Collaboration and the Campbell Collaboration. York, UK: NHS Centre for Reviews and Dissemination.

—. Forthcoming. What Works to Reduce Offending? A Systematic Review of 300 Randomized Field Trials in Crime Reduction. New York: Oxford University Press.

Petrosino, Anthony, Robert F. Boruch, Cath Rounding, Steve McDonald, and Iain Chalmers. 2000. The Campbell Collaboration Social, Psychological, Educational and Criminological Trials Register (C2-SPECTR) to Facilitate the Preparation and Maintenance of Systematic Reviews of Social and Educational Interventions. Evaluation Research in Education 14:293-307.

Quinsey, Vernon L. 1983. Prediction of Recidivism and the Evaluation of Treatment Programs for Sexual Offenders. In Sexual Aggression and the Law, ed. Simon N. Verdun-Jones and A. A. Keltner. Burnaby, Canada: Simon Fraser University Criminology Research Centre.

Sechrest, Lee, Susan O. White, and Elizabeth D. Brown, eds. 1979. The Rehabilitation of Criminal Offenders: Problems and Prospects. Washington, DC: National Academy of Sciences.

Sheldon, Brian and Roy Chilvers. 2000. Evidence-Based Social Care: A Study of Prospects and Problems. Dorset, UK: Russell House.

Sherman, Lawrence W. 1999. Evidence-Based Policing. In Ideas in American Policing. Washington, DC: Police Foundation.

Sherman, Lawrence W., Denise C. Gottfredson, Doris Layton MacKenzie, John E. Eck, Peter Reuter, and Shawn D. Bushway. 1997. Preventing Crime: What Works, What Doesn't, What's Promising. Washington, DC: U.S. Department of Justice, National Institute of Justice.

Smith, Adrian. 1996. Mad Cows and Ecstasy: Chance and Choice in an Evidence-Based Society. Journal of the Royal Statistical Society 159:367-83.

Tyden, Thomas. 1996. The Contribution of Longitudinal Studies for Understanding Science Communication and Research Utilization. Science Communication 18:29.

Weisburd, David, Cynthia M. Lum, and Anthony Petrosino. 2001. Does Research Design Affect Study Outcomes? Findings from the Maryland Report Criminal Justice Sample. Annals of the American Academy of Political and Social Science 578:50-70.

Weiss, Carol Hirschon. 1978. Improving the Linkage Between Social Research and Public Policy. In Knowledge and Policy: The Uncertain Connection, ed. L. E. Lynn. Washington, DC: National Academy of Sciences.

Weiss, Carol Hirschon and Anthony Petrosino. 1999. Improving the Use of Research: The Case of D.A.R.E. Unpublished grant proposal to the Robert Wood Johnson Foundation, Substance Abuse Policy Research Program. Cambridge, MA: Harvard Graduate School of Education.

Weiss, Carol Hirschon and Eleanor Singer. 1988. The Reporting of Social Science in the National Media. New York: Russell Sage.

Wiles, Paul. 2001. Criminology in the 21st Century: Public Good or Private Interest? Paper presented at the meeting of the Australian and New Zealand Society of Criminology, University of Melbourne, Australia, Feb.

Witmer, Helen L. and Edith Tufts. 1954. The Effectiveness of Delinquency Prevention Programs. Washington, DC: U.S. Department of Health, Education and Welfare, Social Security Administration, Children's Bureau.

Wolf, Frederic M. 2000. Lessons to Be Learned from Evidence-Based Medicine: Practice and Promise of Evidence-Based Medicine and Evidence-Based Education. Medical Teacher 22:251-59.
