Forethoughts

Cross-National Perspectives on the Policy Uses (and Abuses) of Evaluation

Ray C. Rist, Guest Editor

It is an axiomatic proposition that the management of complex societies requires an abundance of good information. For, in the absence of such information, it is extremely difficult either to assess adequately the consequences of previous decisions or to understand the potential costs and benefits of future options. Having information available as an input to the decision-making process can no longer be assumed to be optional; it is now an essential ingredient. Relevant information cannot guarantee good decision-making, but its absence can almost assuredly guarantee bad decision-making. The lack of a knowledge base during the decision-making process generates a high probability of finding the "ready, fire, aim" syndrome alive and well.

One particular kind of information that is increasingly prevalent in government is that generated by program evaluation.1 Indeed, recent analyses (a number of which have been conducted by the authors of this present special issue) suggest that program evaluation is taking its place along with budgetary, economic, and auditing information, for example, in the governmental decision making of many, if not all, western democratic societies (Rist, 1989). That the use varies by country and by the functions to which such evaluation information applies should not be surprising. Each country has organized its information needs differently and subsequently created systems that more or less provide that which is desired.

The differential organizational structures and systems that are beginning to be documented in the research literature suggest that these variations will shape what can be expected from the evaluation effort per se. Simply stated, how a country manages what goes into the evaluation effort will greatly influence the information that subsequently comes out. While some countries are moving to distribute the evaluation function systematically across many governmental units, others are pressing to have the function centralized as much as possible, and still others have at present an ad hoc policy in place, which indicates that they have not yet decided exactly how they want to organize the generation of such information (Derlien, 1989). At present, consequently, there are multiple strategies in place. Running through the countries discussed in this present issue, as well as through what is known from other sources, is a basic assumption that evaluation information is needed in the management of complex societies. What uses can be made of this information shapes the analyses in the articles to follow.

Ray C. Rist is director of operations in the general government division of the United States General Accounting Office. He was previously a professor at Cornell University and has authored or edited sixteen books and written nearly one hundred articles. He is chair of the Working Group on Policy Evaluation, whose members prepared the articles for this special symposium.

Evaluation and the Life Cycle of the Policy Process

As I recently noted elsewhere:

Retrospective information can be gathered at any stage in the life of a policy or program. Indeed, it is this ability to span the life cycle that provides one of the important rationales for the introduction of evaluation data and analysis into the management of the federal sector. To be able to bring analysis to bear at any time in the duration of a policy or program is no small feat, and it portends the opportunity for evaluation to make a clear and discernible impact on decision making (Rist, 1989, p. 10).

But to describe the potential is not to describe the reality. Generating the opportunities for use is not the same as showing that such use takes place. And it is here, when the issue of the utilization of evaluations is raised, that the articles in this issue come together.

The life cycle of a policy or program can be conceptualized as having three broad time frames: policy or program creation, implementation, and outcomes or impacts (Chelimsky, 1985). At each of these three stages, evaluation data and analysis can be brought to bear on the existing information needs and decisions to be made. The fact that such information can address all three of these broad phases of the policy process has been an important reason for growth in the evaluation endeavor. Because evaluation data can now be juxtaposed to budgetary information, for example, at each stage of the policy process, a powerful incentive has been created for its use. Stated differently, it is of considerable help to policymakers to know not only what a program or policy is costing but also, simultaneously, whether it has been carefully conceptualized, well managed, and useful.

The articles in this issue support the view that good evaluation information can be helpful at all three stages of the policy process. Specifically, the contributions of program evaluation to the policy formulation process are addressed in the articles from Great Britain and the United States, while matters of policy implementation find focus in the article from Norway, and the concerns with policy outcomes and their accountability are central themes of those from Canada and the Netherlands. The Danish author in this collection essentially cuts across all three of these stages, calling into question the linear model of policy utilization, whereby it is assumed that knowledge leads directly to action. He correctly notes that it does not and that informing the policy process is frequently a subtle, indirect, and meandering effort that can produce some surprising and unanticipated instances of use.

There is a certain irony in this present set of circumstances. On the one hand, the basic assumption exists that program evaluation data can produce direct and immediately useful information for the policy process of the respective government. On the other, the evaluation community itself writes and talks of indirect influences on decision making, of social enlightenment instead of social engineering, and of cumulative persuasiveness coming from years of evaluation findings, rather than from one or a few such studies (Weiss, 1987). The evaluation community sees itself as protecting a craft that is rather fragile, tenuous, and even peripheral to much of the decision-making going on in government. The disconnection between the expectations of the users and the perceptions of the providers is more or less dramatic, depending upon a host of country-specific circumstances. But those sponsoring evaluation research are more optimistic than are those doing the work, a situation that speaks to a basic tension in the evaluation community that will need to be addressed.

Notes

The views expressed here are those of the author and no endorsement by the United States General Accounting Office is intended or should be inferred.

1. Chelimsky (1985, p. 7) provides a current and appropriate definition of program evaluation when she writes: "Thus, a reasonably well accepted definition might be that program evaluation is the application of systematic research methods to the assessment of program design, implementation, and effectiveness."

References

Chelimsky, E. (1985). Old patterns and new directions in program evaluation. In E. Chelimsky (Ed.), Program evaluation: Patterns and directions. Washington, D.C.: American Society for Public Administration.

Cook, T.D., & Shadish, W.R. (1986). Program evaluation: The worldly science Annual Review of Psycholo~y, 37, 193-232.

Derlien, H. (1989). Genesis and structure of evaluation efforts in comparative perspective. In R. Rist (Ed.), Program evaluation and the management of government: Patterns and prospects across eight nations. New Brunswick, N.J.: Transaction Books.

Rist, R.C. (Ed.). (1989). Program evaluation and the management of government: Patterns and prospects across eight nations. New Brunswick, N.J.: Transaction Books.

Weiss, C.H. (1987). Evaluation for decisions: Is anybody there? Does anybody care? Plenary presentation at the American Evaluation Association meeting, Boston.