Scientific and Legal Perspectives on Science Generated for Regulatory Activities

Carol J. Henry and James W. Conrad Jr.

American Chemistry Council, Arlington, Virginia, USA

This article originated from a conference that asked “Should scientific work conducted for purposes of advocacy before regulatory agencies or courts be judged by the same standards as science conducted for other purposes?” In the article, which focuses on the regulatory advocacy context, we argue that it can be and should be. First, we describe a set of standards and practices currently being used to judge the quality of scientific research and testing and explain how these standards and practices assist in judging the quality of research and testing regardless of why the work was conducted. These standards and practices include the federal Information Quality Act, federal Good Laboratory Practice standards, peer review, disclosure of funding sources, and transparency in research policies. The more that scientific information meets these standards and practices, the more likely it is to be of high quality, reliable, reproducible, and credible. We then explore legal issues that may be implicated in any effort to create special rules for science conducted specifically for a regulatory proceeding. Federal administrative law does not provide a basis for treating information in a given proceeding differently depending on its source or the reason for which it was generated. To the contrary, this law positively assures that interested persons have the right to offer their technical expertise toward the solution of regulatory problems. Any proposal to subject scientific information generated for the purpose of a regulatory proceeding to more demanding standards than other scientific information considered in that proceeding would clash with this law and would face significant administrative complexities. In a closely related example, the U.S. Environmental Protection Agency considered but abandoned a program to implement standards aimed at “external” information.

Key words: Administrative Procedure Act, agency proceedings, conflict of interest, financial disclosure, industry science, Information Quality Act, interested persons, peer review, regulatory science, right to publish, scientific quality. Environ Health Perspect 116:136–141 (2008). doi:10.1289/ehp.9978 available via http://dx.doi.org/ [Online 7 November 2007]

This article is part of the mini-monograph “Science for Regulation and Litigation.”

Address correspondence to J.W. Conrad Jr., Conrad Law & Policy Counsel, 1615 L St. NW, Suite 1350, Washington, DC 20036 USA. Telephone: (202) 822-1970. Fax: (202) 822-1971. E-mail: jamie@conradcounsel.com

At the time this article was accepted, C.J.H. and J.W.C. were both employed by the American Chemistry Council.

Received 12 December 2006; accepted 5 July 2007.

Standards and Practices for Judging the Quality of Scientific Work

The Project on Scientific Knowledge and Public Policy (SKAPP) examines the nature of science and the ways in which it is used and misused in government decision making and legal proceedings. Last year, SKAPP commissioned papers to address the question “Should scientific work conducted for purposes of advocacy before regulatory agencies or courts be judged by the same standards as science conducted for other purposes?” (SKAPP 2006). This article is adapted from one of those papers.

Science is a social enterprise, and scientific work tends to be accepted by the community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the science community and the validity of the work established by replication (Wikipedia 2007a). However, such replication can take years, and what constitutes replication in a given case may also be disputable. Consequently, a variety of standards and practices have been established over the years for assessing the quality of scientific work (Barrow and Conrad 2006). These standards and practices apply to both “testing” (activities conducted pursuant to protocols, prescribed by regulatory agencies, that specify the content and characteristics of studies to be conducted to meet regulatory requirements, e.g., a standard bioassay for carcinogenesis) and “research” (studies that are hypothesis driven, addressing broad methodologic or mechanistic questions, e.g., the biologic activity of chemicals on the environment).

Both government and the scientific community outside of government have played independent but reinforcing roles in developing and propagating these standards and practices. The private sector is actually driving implementation of several relatively newer practices, in part to address concerns about the credibility of industry-funded science. In general, the regulatory implementations of these concepts impose additional requirements well beyond those conventionally imposed outside of the regulatory context. These government requirements promote a high degree of reliability. As explained below, the cumulative result of their applicability is that science conducted for regulatory purposes in many cases is actually likely to be more reliable than science conducted outside the regulatory arena. This is not to argue that the latter should be required to comply with these government requirements to any greater extent than it already is, but only to emphasize that science conducted pursuant to them is relatively more reliable as a result.

Information Quality Act. The Information Quality Act (IQA 2000) governs the quality of information that federal agencies disseminate. The IQA required the Office of Management and Budget (OMB) to issue an initial set of implementing guidelines (OMB 2002). Each federal agency then issued its guidelines applying the OMB guidelines to its particular circumstances [e.g., U.S. Environmental Protection Agency (U.S. EPA) 2002a].

It is important to note that the IQA applies not only to information that agencies generate themselves but also to information developed by nongovernment parties, to the extent the government “disseminates” it, either by adopting or endorsing it as its own view or by relying on it to make a decision. [Information prepared for administrative adjudications is exempt from IQA guidelines, but agencies construe this exemption narrowly (U.S. EPA 2002a).] Thus, to the extent that businesses, universities, or other private entities conduct research or testing and the results come into the possession of an agency such as the U.S. EPA, the results cannot form a basis of the agency’s decision without becoming subject to IQA requirements. And those requirements are most precise and demanding in the area of scientific information.

OMB’s guidelines (OMB 2002) prescribe fairly detailed standards for “objectivity.” As a general matter, information must be accurate, reliable, and unbiased. Scientific information must be generated using sound research methods. The sources of the information must be disclosed and data should be documented. Scientific information must be accompanied by supporting data and models. “Influential” scientific information must be sufficiently transparent to be reproduced, subject to several caveats. (“Influential” information is that which an agency “reasonably can determine will have or does have a clear and substantial impact on important public policies or private sector decisions.”) Influential information regarding risks to health, safety, or the environment must also be based on “the best available, peer-reviewed science and supporting studies conducted in accordance with sound and objective scientific practices; and . . . data collected by accepted methods or best available methods,” and must disclose significant uncertainties and relevant peer-reviewed studies.

Accordingly, if a business submitted a paper to the U.S. EPA regarding research it had conducted on the risks posed by one of its products, and the U.S. EPA was going to consider this paper as part of making a decision involving that family of products—a decision that would have a substantial impact on producers and customers of those products—that paper would be subject to the most demanding level of IQA objectivity standards. The underlying data and methods would have to be provided to the agency. Also, the agency (and hence the submitter) would have to be able to document that the research reported in the paper was conducted in accordance with sound and objective scientific practices. Finally, the paper would need to be based on the best available science, including any peer-reviewed work in the literature. [Notably, by virtue of being provided to a federal agency, this information would become subject to the Freedom of Information Act (FOIA 1966). Exemptions to FOIA exist for privacy and commercial considerations, although health effects data generally cannot be claimed as confidential business information (Conrad 2006).]

Good Laboratory Practice regulations. Both the U.S. EPA and Food and Drug Administration (FDA) have adopted comparable sets of requirements specifying laboratory practices and procedures that must be followed to ensure the quality and integrity of studies submitted to support agency decisions (FDA 2006; U.S. EPA 2006a, 2006b). These Good Laboratory Practice (GLP) standards prescribe essential, routine features of sound laboratory science. All studies submitted to these agencies in connection with the relevant statutory programs must be conducted in accordance with these standards. The GLP rules are more rigorous than standards followed at university laboratories (Anderson et al. 2001).

GLPs have three basic elements: quality assurance, standard operating procedures, and study protocols.

Quality assurance. GLPs mandate documentation of study conduct and results and ensure that a full record of the study is preserved for subsequent review, if necessary.

Standard operating procedures. There must be written procedures for accurate and full data collection under the nonclinical laboratory study methods to be used, methods determined by management to be adequate to ensure the quality and integrity of the data.

Study protocols. Each study must state the objectives and all methods for the conduct of the study in a clearly written protocol that must be approved by the agency. These protocols have been validated and chosen, after extensive and careful review, to provide an acceptable degree of scientific certainty, in the agency’s view, regarding the reliability and relevance of test results, which in turn provides the confidence necessary for making safety determinations and other regulatory determinations.

Conformance to GLPs does not, in itself, ensure that scientific work is reproducible, or that the resulting data will be interpreted correctly. However, when research studies adhere to GLPs, reviewers and those acting upon the science may have a high degree of confidence that the experimenters a) adhered to the experimental protocol employed, b) took all the steps and measurements claimed to be taken during conduct of the study itself, and c) accurately reported the test results.

GLPs are neither required nor common in research laboratories but have been implemented at all U.S. EPA contract laboratories and at major universities that perform significant medical and toxicology testing and research for regulatory purposes. Sponsoring organizations can require that research and testing studies comply with GLPs and can incorporate that requirement into contractual vehicles. As a result, GLPs have grown to be used fairly extensively outside the specific U.S. EPA and FDA regulatory contexts in which they are required.

Peer review. While peer review has been an integral part of medicine for centuries, it has become a mainstay of the scientific process only since the mid-twentieth century (Wikipedia 2007b). The National Research Council (NRC) has defined peer review as providing an “in-depth critique of assumptions, calculations, extrapolations, alternate interpretations, methodology, and acceptance criteria employed and conclusions drawn in the original work” (NRC 1998). At a minimum, peer review must exhibit the following features (International Life Sciences Institute 2005):
• It must include multiple assessments.
• It must be conducted by scientists with no direct connection to the research or its sponsors.
• It must be conducted by scientists who have experience with or expertise in the research in question.

A rigorous peer review is a key part of the foundation on which scientific excellence is achieved in all research programs. Science is not “self-evident” and requires scientific judgment, and because judgments vary, a well-balanced representation of intellectual perspectives is needed. Where that is obtained, the peer review is more likely to rebut any scientifically untoward or untenable hypothesis and arrive at a successful review.

The crucial role of reviewer independence and expertise. The hallmark of any peer review is the independence and expertise of the peer reviewer. Reviewer knowledge, experience, and expertise are central requirements for instructive peer review and ensure “technical credibility” for the review. The peer reviewer should have expertise at least equivalent to that needed for the “original work” but must be independent of the work being reviewed. These dual requirements for expertise and independence offer the best chance of obtaining objective, expert evaluation while maintaining scientific integrity (NRC 2003).

Conflict of interest and bias in peer review. Identifying and managing potential conflicts of interest (COIs)—financial and nonfinancial—is critical to meeting the requirement for independent peer review. In turn, this requires distinguishing between COI and bias.

The National Academies (2003) defines COI as any financial or other interest that conflicts with the service of an individual because it could impair the individual’s objectivity or create an unfair competitive advantage for any individual or organization.

Conventionally speaking, some COIs are “actual,” with an unambiguous potential for financial gain (e.g., the reviewer holds stock in an entity likely to be affected by the relevant regulatory action). Other actual COIs might be employment conflicts or close professional financial relationships. Potential financial COIs may exist on the part of experts remunerated by any individual or organization (International Life Sciences Institute 2005). Similarly, any scientist can have a COI, depending on the topic. As the Government Accountability Office (GAO 2001) has noted, association with industry does not by itself indicate a COI. Conversely, association with an environmental group does not inoculate a scientist against COIs. There well may be COIs among academics, because of intense competition for research funds and publications, that are difficult to identify.

An individual with a COI generally may not participate in a peer review. Organizations that use peer-review systems require candidates to disclose their organizational affiliations, financial interests, personal and professional involvement, and other information that may be pertinent to the topic, erring on the side of full disclosure, and to certify that the information is true and accurate to the best of their knowledge.

“Bias” is a partiality or loss of objectivity because of personal views or positions and may be perceived as arising from the close identification or association of an individual with a particular point of view or with a particular group that may be affected by the research being reviewed. For example, questions about the neutrality of a reviewer could arise if that person had represented an interest group at a hearing.

It is generally recognized that bias is pervasive and not inherently undesirable. A subcommittee of the U.S. EPA Science Advisory Board (2000) has opined that, “[a]lthough it is possible to avoid conflict of interest, avoidance of bias is probably not possible. All scientists carry bias due, for example, to discipline, affiliation and experience.” Potential sources of bias in peer reviewers are managed through disclosure.

Federal regulations generally implement these concepts. Although these rules address only federal employees, they include “special government employees,” such as participants in panels organized under the Federal Advisory Committee Act (1972), and therefore they apply in the case of federal peer review panels such as the U.S. EPA Science Advisory Board. These rules prohibit a federal employee from participating directly and substantially in a particular matter (as opposed to “broad policy options”) that will have a direct and predictable effect on a) a financial interest of the employee (generally, employment or stock ownership); b) the employee’s employer; or c) organizations for which the person has served in the last year in a paid capacity or as an active participant, where a reasonable person would question the person’s impartiality in the matter, unless covered by an exclusion or issued a waiver (Office of Government Ethics 1997).

Peer review at federal agencies. Historically, the primary venues for peer review have been in journal publication and in grant evaluation. Increasingly, federal agencies have been conducting more demanding peer reviews of studies that will form the basis of important agency decisions. For example, the U.S. EPA has several standing advisory bodies to conduct peer reviews, including its Science Advisory Board, its Scientific Advisory Panel (for pesticide program decisions), and its Clean Air Scientific Advisory Committee. Some agencies request the National Research Council of the National Academies to convene peer reviews for very prominent issues and assessments. Although the principles for journal and agency peer review are the same, they involve very different practices, procedures, and decision pathways flowing from the results and conclusions of the peer review.

Agency peer reviews are complex and time consuming, and most often involve forming a panel of expert peer reviewers that meets in public for its deliberations. In some cases, peer reviews are conducted by scientists within the agency, and in other cases by external scientists. Prior criticisms of agency peer reviews (GAO 2001) have only heightened the scrutiny applied to them both within and outside agencies.

OMB’s peer review bulletin. Many federal agencies have written policies for when and how to conduct agency peer reviews (U.S. EPA Science Policy Council 2000). To bring greater consistency to and establish minimum requirements for such reviews, the OMB and the Office of Science and Technology Policy in early 2005 issued their “Final Information Quality Bulletin for Peer Review” (OMB 2005). The bulletin requires all federal agencies to conduct peer reviews of all “influential” scientific information—defined as it is under the IQA—that the agency intends to disseminate. [As with the IQA, information disseminated as part of an adjudication (e.g., a permit decision) is exempt from rules outlined in the bulletin, unless it is novel or precedential and peer review is practical.]

The bulletin sets especially high standards for “highly influential” scientific assessments. For these documents, a) agency scientists generally may not participate; b) reviewers must be provided with background information sufficient to understand the key findings or conclusions of the draft assessment; c) where feasible, public comment should be sought and provided to the reviewers; and d) the agency should respond to the reviewers’ report.

Under the guidelines of the OMB bulletin, peer review should now be applied systematically to all influential scientific information published by federal agencies or used by them to make decisions—regardless of why the work was conducted—in a manner that is more rigorous and revealing than that occurring outside the federal government.

Disclosure and acknowledgment of funding sources. In the mid-1990s, concern arose about the integrity of scientific research because of increasing commercial links and consequent influences. In response, the Department of Health and Human Services (DHHS) and the National Science Foundation (NSF) issued their “Investigator Financial Disclosure Policy” (DHHS/NSF 1995). This policy required disclosure of an investigator’s significant financial interest if it “would reasonably appear to be affected” by activities funded or proposed for funding by NSF and DHHS.

As the scientific community became more aware of the potential COIs for university investigators with commercial ties receiving federal grant support, questions also emerged about the potential implications for interpretation of research results and conclusions. By 2001 some of the major scientific journals had established policies to encourage authors to declare any competing financial interests in relation to research papers. Financial disclosure forms are now routinely required to be submitted with manuscripts for use by the editors, and journal articles generally acknowledge research support, although the practice is not universal (Nature 2006; Science 2006).

Although journals have instituted disclosure policies (sometimes termed “competing financial interests policies”) to increase transparency for their readers, granting agencies and sponsors employ a variety of approaches and requirements, and it is too early to conclude that a broad consensus practice has been established. The Long-Range Research Initiative (LRI) of the American Chemistry Council (ACC) requires its contractors to acknowledge ACC as a sponsor of the research in all articles or publications pertaining to the research conducted under agreement with the ACC.

Transparent research policies. Research programs are commonly regarded as more credible, and their results less suspect, when they employ transparent processes and procedures regarding ownership of data, release of results, and publication of results. This practice is growing for assessing scientific quality, although it has not yet achieved broad acceptance within the research community.

When investigators own the data and the scientific information that they generate through their research efforts, they are in control of how those data will be evaluated, used, and communicated. Sponsors who choose to employ this approach do so to lend strength, objectivity, and credibility to the outcome of the research. The ACC LRI program includes this element of data ownership (ACC 2006).

With ownership of the data—the right to release data independently and to publish without prior sponsor approval—inappropriate sponsor interference can be avoided. Interactions with sponsors can be beneficial in providing specialized knowledge and insight at a level of detail other scientific resources might not provide. However, the final decision on whether to accept the sponsor’s information or advice remains with the investigator.

Congress has required the federal government to take a slightly different tack in the area of research transparency. As a result of the “Shelby Amendment” (Omnibus Appropriations Act 1998), the OMB revised its Circular A-110 governing federal grants, and now all federal agencies must make available to the public, pursuant to FOIA, final research data generated by agency grantees that an agency cites in support of a rule or order (OMB 1999). Thus, when the federal government funds scientific work and relies on it to support agency action with the force and effect of law, the data produced by that work will be placed in the public domain if someone in the public asks for it. In this way, federally funded research data can be made available so that the scientific community can validate the work through replication. Although Circular A-110 does not apply to privately funded research submitted to federal agencies, the same prospect of disclosure is an inevitable effect of the IQA, discussed above, if the agency relies on that information.


Federal Standards for Considering “Regulatory” Science

The foregoing discussion outlines the standards that one can use to judge the quality of scientific research without regard to the purpose for which it was conducted. The following discussion explains why regulatory agencies should not—and arguably cannot—treat science created for purposes of an agency proceeding differently in that proceeding than science not created for purposes of that proceeding.

The breadth of research and testing obligations. The data proffered by regulated parties in agency proceedings often are not solely the results of self-interested efforts to influence those proceedings. Rather, in many cases the parties have been obligated by regulation or order to conduct particular studies, according to particular protocols, and to provide the results of that work to the agency (Conrad 2006).

For example, under Section 4 of the Toxic Substances Control Act (TSCA 1976), the U.S. EPA has broad power to issue rules ordering persons manufacturing, processing, or importing a chemical to conduct further tests regarding the chemical’s health or environmental effects. TSCA Section 8(d) authorizes the U.S. EPA to compel, by rule, manufacturers and importers of a given chemical to submit lists and copies of existing, unpublished health and safety studies for the chemical. TSCA mandates have resulted in “more than 50,000 studies covering a broad range of health and ecological endpoints” being filed with the U.S. EPA since 1976 (U.S. EPA 2003a).

Similarly, the Federal Insecticide, Fungicide, and Rodenticide Act (1972) requires any potential pesticide chemical to undergo more than 100 scientific tests addressing chemistry, health effects, environmental effects, and residue chemistry to determine whether it can be used safely (U.S. EPA 2006c). Only after the information has undergone a thorough and rigorous review by the U.S. EPA can the product be “registered” by the U.S. EPA for use to protect crops or public health.

Most federal processes for evaluating the safety of chemicals, including the FDA review of drug applications (Federal Food, Drug, and Cosmetic Act 1938), depend heavily on privately generated data. Indeed, historically and for the foreseeable future, the vast majority of chemical and product testing has been and will be borne by industry, not the public sector. Policies regarding the treatment of the resulting data need to bear this reality in mind.

The rights of interested persons under federal administrative law. The notion that science generated for regulatory purposes should be evaluated differently than other science is premised on the idea that the self-interest of regulated parties that conduct or sponsor research creates a conflict with the interest of the truth, whether conscious or unconscious, that renders the research invalid or at least suspect (Krimsky 2005).

At the outset, we acknowledge the growing literature purporting to find that industry-funded research produces results that favor its sponsors more often than other research on the same topic (Krimsky 2005; vom Saal and Hughes 2005). As the authors of one of those studies have fairly observed in comparing industry- and government-funded studies, one or both of two things could be going on: either “industry-funded scientists [are] under real or perceived pressure to find and publish only data suggesting negative outcomes” or “government-funded scientists [are] under real or perceived pressure to publish only data suggesting adverse outcomes” (vom Saal and Hughes 2005). Indeed, the source of a scientist’s funding may be less a cause of bias than an effect of it. As an editor of The Lancet (Horton 1997) has argued, financial conflicts “may not be [more] influential” than underlying biases, because “interpretations of scientific data will always be refracted through the experiences and biases of the authors.” Similarly, a student of the science/policy interface (Sarewitz 2004) has argued that

stripping out conflicts of interest and ideological commitments to look at ‘what the science is really telling us’ can be a meaningless exercise [because] even the most apparently apolitical, disinterested scientist may, by virtue of disciplinary orientation, view the world in a way that is more amenable to some value systems than others.

As vom Saal and Hughes argued in their article (2005), the appropriate technical response when confronted with science conducted by interested parties is to use that fact as an alert to look, perhaps more deeply than one otherwise might have, for “what specific factors, other than source of funding,” may be associated with the results of that science.

Such an approach is clearly the only proper one in the case of interested research submitted to federal regulatory agencies, simply because the concept of “conflict of interest” is not employed in federal laws governing the regulatory process (aside from the government ethics rules noted above). Indeed, no federal laws, rules, or policies express a presumption that agencies in a given proceeding should ignore or give less weight to scientific work on the basis of who conducted or funded it or, more to the point, whether it was prepared specifically for the relevant proceeding. To the contrary, federal administrative law, as interpreted by the courts, generally evinces a congressional mandate that agencies give interested or affected parties access to and input into administrative processes. In effect, Congress and the courts have determined that in an open democratic society administered by a bureaucracy required to act fairly and rationally, it is important that agencies allow interested or affected persons to provide information to them, and fairly consider that information.

The backbone of federal administrative law is the Administrative Procedure Act (APA 1946), which requires agencies to provide notice and “give interested persons an opportunity to participate in [a] rulemaking through the submission of written data, views, or arguments. . . .” Courts interpreting Section 553 of the APA have made clear that an agency must consider and respond to—rather than discount—all significant matters put before it. In particular, courts have specifically rejected the notion that the APA should provide some basis for insulating agency officials from the input of regulated parties. In the words of former D.C. Circuit Judge Patricia Wald (Sierra Club v. Costle 1981):

Under our system of government, the very legitimacy of general policymaking performed by unelected administrators depends in no small part upon the openness, accessibility, and amenability of these officials to the needs of the public from which their ultimate authority derives, and upon whom their commands must fall. . . . Furthermore, the importance to effective regulation of continuing contact with a regulated industry, other affected groups, and the public cannot be underestimated. Informal contacts can . . . spur the provision of information which the agency needs.

In addition to setting a single set of quality standards for federally disseminated information regardless of provenance, the IQA also authorizes “affected persons to seek and obtain correction of information . . . disseminated by [a federal] agency that does not comply with the guidelines issued by [OMB].” Far from having their own information weighted less under the IQA, affected persons are empowered to use their own information to obtain correction of government information that does not meet the common set of standards that should apply to any information disseminated by the government.

Several other administrative law statutes embody the same orientation toward interested persons and their rights to submit information to federal agencies and have it considered. These include the Regulatory Flexibility Act (1980), the Paperwork Reduction Act (1980), and the Federal Advisory Committee Act (1972). The latter also requires advisory committees to be “balanced,” which should equally prohibit exclusion of, as well as domination by, any interest.

The upshot of this authority is not that regulatory agencies are bound to accept unquestioningly any information that is generated specifically for a given proceeding. Agencies can and, indeed, must assess the validity of information upon which they rely. But they cannot adopt blanket approaches that judge science generated for a regulatory proceeding differently than other science considered in that proceeding.

What sorts of proceedings and entities would be covered. The next hurdle in imagining a system that imposed different standards on science created specifically for a given regulatory proceeding is to consider what sorts of agency “proceedings,” and what sorts of entities conducting or sponsoring science, would be covered. These turn out not to be simple determinations in all cases.

Where testing is being conducted by a manufacturer of a chemical or product (e.g., an exposure study conducted in support of a pesticide’s reregistration), there is little question that the research is being conducted for those proceedings. But manufacturers frequently conduct or sponsor research and testing for product stewardship and other business reasons. In some cases, that work may also be useful in some other agency “proceeding” [e.g., establishing a “reference concentration” value in the U.S. EPA Integrated Risk Information System database (http://www.epa.gov/iris)]. Such a proceeding may be ongoing, or the company may know the agency is contemplating it, or the company may plan to propose that the agency initiate it. What would the rules be in such “mixed-motive” cases? Conversely, such an agency proceeding may arise later and be truly unanticipated by the regulated entity. What sort of evidentiary process would have to be established to determine what the research proponent intended or knew at the time research was initiated? All these circumstances would need to be addressed in a system that tried to treat “regulatory proceeding” science differently.

Moreover, many regulatory settings are more than bilateral; that is, entities other than the agency and a single regulated party may be able to submit scientific information. Even if one accepted the premise that science prepared for purposes of a proceeding should be treated differently than other science, there is no inherent reason that science prepared for a proceeding by opponents of a permit, for example, should be treated differently than science prepared by its proponents. Indeed, many regulatory proceedings have multiple parties aligned with and against the agency and other parties on different issues in complex ways, making it difficult in many cases to determine who is on anyone’s “side.”

Finally, it will often be difficult to demarcate “nonregulatory” science, given the extent to which academic scientists are participants or are at least partisans in regulatory disputes. Although highly controversial issues generally raise important intellectual questions (e.g., the effects of pollutants at low doses), there is also no question that many of the academics working on these issues are highly invested both in their hypotheses and in the regulatory uses of their findings. Although these scientists may not have a financial or other tangible stake in any particular regulatory proceeding, it seems artificial and formalistic to say that their research is not being conducted, at least in part, so that its results can be used in regulatory proceedings.

Thus, any effort to establish special rules for consideration of science generated for regulatory proceedings will face difficult definitional challenges regarding what is a “proceeding,” even more difficult evidentiary challenges determining whether and the extent to which scientific work was being conducted for such proceedings, and politically loaded challenges over when “unaffiliated” or “academic” work was in fact being conducted, at least in part, for regulatory purposes.

Case study: the U.S. EPA assessment factors for external information. The issues raised above ultimately led the U.S. EPA to abandon a related effort: to establish guidelines that treated “external” information differently than information whose generation the U.S. EPA controlled.

Early on, the U.S. EPA realized that the IQA would apply to information generated by third parties that the agency relied upon or otherwise disseminated. The U.S. EPA draft “Assessment Factors for Evaluating the Quality of Information from External Sources” (U.S. EPA 2002b) noted that

the Agency . . . receives information that is voluntarily submitted to EPA by external sources (‘third parties’) in hopes of influencing Agency actions. . . . The purpose of this document is to describe sets of ‘assessment factors’ that illustrate the types of considerations that EPA takes into account when evaluating the quality and relevance of information that is voluntarily submitted or that we obtain from external sources in support of various Agency actions.

The balance of the document consisted of an elaboration on five “categories of general assessment factors”: soundness, applicability and utility, clarity and completeness, uncertainty and variability, and evaluation and review.

Critics argued that there is no basis, under the IQA or any other legal authority, or indeed, on any technical grounds, for the U.S. EPA to assess “external” or “third-party” information by different standards than first- or second-party information. On technical grounds, critics noted that information generated by the U.S. EPA or its contractors is not immune from the same types of errors associated with information from external sources. For example, the U.S. EPA Inspector General had just issued a memorandum noting that the agency faced a number of unresolved challenges in “establishing quality assurance practices to improve the reliability, accuracy, and scientific basis of environmental data” (U.S. EPA Inspector General 2002). The Inspector General’s memo expressed similar concerns with respect to the accuracy and reliability of information generated by U.S. EPA contractors. Critics argued that there was ample justification for the agency to apply its proposed assessment factors to that information, as well as to information submitted by third parties.

Critics of the U.S. EPA draft assessment factors also argued that assessment factors for external information created the undesirable appearance of a double standard and opened the door to arbitrary agency decisions to exclude otherwise appropriate information received from external sources on the basis of the selective application of assessment factors to information products. Most important, they contended that the standards the agency offered for judging the quality and reliability of third-party data were no different than those that should be applied to evaluate information generated by the agency itself, U.S. EPA contractors, or U.S. EPA permittees, and hence a single set of assessment factors should apply to all.

When the U.S. EPA finalized the assessment factors document, it clarified, first, that “the document does not constitute a new standard for information quality, nor does it describe a new process for evaluating third party information.” Second, and more important, it added that, “in general, we agree that consistent standards of quality should apply to both internally and externally generated information, when used for the same purposes” (U.S. EPA 2003b).

Conclusion

Only one set of standards and practices should be used to judge the quality of scientific work in a given regulatory proceeding, regardless of why the work was conducted. It may be that, over time, more of these practices and standards will apply to all scientific information. Many of these hallmarks of scientific quality are incorporated into federal law, rules, and policy. These same federal authorities impose additional standards that further ensure the quality of scientific work generated or submitted for regulatory purposes. Federal laws also ensure that interested parties have a right to submit information for regulatory proceedings and to have that information considered fairly and on its merits. Any system of differential treatment for regulatory science would face severe scrutiny in light of that authority and would be difficult to administer. Most important, it would not necessarily lead to an improved scientific foundation for regulations.


REFERENCES

ACC (American Chemistry Council). 2006. Long-Range Research Initiative—Funding Opportunities. Available: http://www.uslri.com/home.cfm?id=funding [accessed 16 April 2007].
APA (Administrative Procedure Act). 1946. Public Law 79-404.
Anderson W, Parsons B, Rennie D. 2001. Daubert’s backwash: litigation-generated science. Univ Mich J Law Reform 34:619–682.
Barrow CB, Conrad JW. 2006. Assessing the reliability and credibility of industry science and scientists. Environ Health Perspect 114:153–155. Available: http://www.ehponline.org/mem/2005/8417/8417.pdf [accessed 16 April 2007].
Conrad JW. 2006. Open secrets: the widespread availability of information about the health and environmental effects of chemicals. Law Contemp Probs 69:141–165.
DHHS/NSF (Department of Health and Human Services/National Science Foundation). 1995. Investigator financial disclosure policy. Fed Reg 60:35820.
FDA (Food and Drug Administration). 2006. Good Laboratory Practice Standards. 21 CFR 58. Washington, DC:U.S. Food and Drug Administration.
Federal Advisory Committee Act. 1972. Public Law 92-463.
Federal Food, Drug, and Cosmetic Act. 1938. Ch. 675.
Federal Insecticide, Fungicide, and Rodenticide Act. 1972. Public Law 92-516.
FOIA (Freedom of Information Act). 1966. Public Law 89-554.
GAO (Government Accountability Office). 2001. EPA’s Science Advisory Board Panels: Improved Policies and Procedures Needed to Ensure Independence and Balance. Washington, DC:Government Accountability Office.
Horton R. 1997. Conflicts of interest in clinical research: opprobrium or obsession? Lancet 349:1112–1113.
Information Quality Act. 2000. Public Law 106-554.
International Life Sciences Institute. 2005. Scientific Peer Review to Inform Regulatory Decisions. Washington, DC:International Life Sciences Institute.
Krimsky S. 2005. The funding effect in science and its implications for the judiciary. J Law Policy 13:43–68.
National Academies. 2003. Policy on Committee Composition and Balance and Conflicts of Interest for Committees Used in the Development of Reports. Washington, DC:National Academies. Available: http://www.nationalacademies.org/coi/bi-coi_form-0.pdf [accessed 16 April 2007].
National Research Council. 1998. Peer Review in the Environmental Technology Development Programs. Washington, DC:National Academies.
National Research Council. 2003. Peer Review in the Department of Energy—Office of Science and Technology—Interim Report. Washington, DC:National Academies.
Nature. 2006. Competing Financial Interests. Available: http://www.nature.com/authors/editorial_policies/competing.html [accessed 16 April 2007].
Office of Government Ethics. 1997. Standards of Ethical Conduct for Employees of the Executive Branch. 5 CFR 2635. Washington, DC:U.S. Office of Government Ethics.
OMB (Office of Management and Budget). 1999. Circular A-110. Uniform administrative requirements for grants and agreements with institutions of higher education, hospitals, and other non-profit organizations. Fed Reg 64:54926.
OMB (Office of Management and Budget). 2002. Guidelines for ensuring and maximizing the quality, objectivity, utility and integrity of information disseminated by federal agencies. Fed Reg 67:8452–8460.
OMB (Office of Management and Budget). 2005. Final information quality bulletin for peer review. Fed Reg 70:2664.
Omnibus Appropriations Act. 1998. Public Law 105-277.
Paperwork Reduction Act. 1980. Public Law 96-511.
Regulatory Flexibility Act. 1980. Public Law 96-354.
Sarewitz D. 2004. How science makes environmental controversies worse. Environ Sci Policy 7:385–403.
Science. 2006. Statement on Real or Perceived Conflicts of Interest for Authors. Available: http://www.sciencemag.org/about/authors/prep/coi.pdf [accessed 16 April 2007].
Sierra Club v. Costle. 1981. Case 79-1565, U.S. Court of Appeals for the D.C. Circuit.
SKAPP (Project on Scientific Knowledge and Public Policy). 2006. Coronado Conference III. Available: http://www.defendingscience.org/coronado_conference_papers/Coronado-Conference-Papers.cfm [accessed 16 April 2007].
TSCA (Toxic Substances Control Act). 1976. Public Law 94-469.
U.S. EPA (U.S. Environmental Protection Agency). 2002a. Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility and Integrity of Information Disseminated by the Environmental Protection Agency. Available: http://www.epa.gov/quality/informationguidelines/documents/EPA_InfoQualityGuidelines.pdf [accessed 16 April 2007].
U.S. EPA (U.S. Environmental Protection Agency). 2002b. Developing assessment factors for evaluating the quality of information from external sources. Fed Reg 67:57225–57226.
U.S. EPA (U.S. Environmental Protection Agency). 2003a. Overview: Office of Pollution Prevention and Toxics Programs. Washington, DC:U.S. Environmental Protection Agency. Available: http://www.epa.gov/oppt/pubs/oppt101c2.pdf [accessed 16 April 2007].
U.S. EPA (U.S. Environmental Protection Agency). 2003b. Clarification of Issues Identified Through Public Input on the Environmental Protection Agency (EPA) Draft Assessment Factors Document. Copy on file with author.
U.S. EPA (U.S. Environmental Protection Agency). 2006a. Good Laboratory Practice Standards. 40 CFR 160. Washington, DC:U.S. Environmental Protection Agency.
U.S. EPA (U.S. Environmental Protection Agency). 2006b. Good Laboratory Practice Standards. 40 CFR 792. Washington, DC:U.S. Environmental Protection Agency.
U.S. EPA (U.S. Environmental Protection Agency). 2006c. Pesticides: Regulating Pesticides. Washington, DC:U.S. Environmental Protection Agency. Available: http://www.epa.gov/pesticides/regulating/index.htm [accessed 16 April 2007].
U.S. EPA Inspector General. 2002. EPA’s Key Management Challenges. Washington, DC:U.S. Environmental Protection Agency. Available: http://www.epa.gov/oig/reports/2002/topchallenges2002.pdf [accessed 4 December 2007].
U.S. EPA Science Advisory Board Environmental Health Committee. 2000. Review of the Draft Report to Congress “Characterization of Data Uncertainty and Variability in IRIS Assessments, Pre-Pilot vs Post-Pilot.” Washington, DC:U.S. Environmental Protection Agency. Available: http://www.epa.gov/sab/pdf/ehcl007.pdf [accessed 16 April 2007].
U.S. EPA Science Policy Council. 2000. Peer Review Handbook. Washington, DC:U.S. Environmental Protection Agency.
vom Saal FS, Hughes C. 2005. An extensive new literature concerning low-dose effects of bisphenol A shows the need for a new risk assessment. Environ Health Perspect 113:926–933.
Wikipedia. 2007a. Scientific Method; Evaluation and Iteration; Confirmation. Available: http://en.wikipedia.org/wiki/Scientific_method [accessed 16 April 2007].
Wikipedia. 2007b. Peer Review; Confirmation. Available: http://en.wikipedia.org/wiki/Peer_review [accessed 4 December 2007].