
Journal of Community Psychology, Volume 12, January 1984

PROGRAM MATURITY AND COST ANALYSIS IN THE EVALUATION OF PRIMARY PREVENTION PROGRAMS*

FRANK BAKER AND DAVID V. PERKINS

State University of New York at Buffalo

Primary prevention has been growing as a major initiative in mental health, and to avoid overpromising its benefits, primary prevention programs must be evaluated in a systematic and effective fashion. Information on the developmental maturity and the direct costs of a program is very useful in assessing its merit, even if definitive findings on incidence reduction are not available for many years. This paper outlines the steps by which information on a program's developmental maturity and direct costs can be obtained, and discusses the implications of these steps for primary prevention policy.

Primary prevention in mental health currently enjoys greater official endorsement in governmental and professional circles than at any time in recent history (Plaut, 1980; Swift, 1980; Task Panel on Prevention, 1978). Over the years such popularity has characterized many initially promising solutions to the problem of mental illness (e.g., Moral Treatment, deinstitutionalization) that subsequently faltered due to unexpected problems, an inability to deliver what was promised, and shifting political winds (Levine, 1981). Primary prevention may be no exception to this lesson of history, and will be particularly vulnerable to the extent that it overpromises what can be delivered. The tendency to overpromise can be reduced in part by carefully thinking through the conceptual underpinnings of prevention programs (Levine & Perkins, 1980), and perhaps to an even greater extent, at least over the short term, by the design and execution of careful, informative program evaluations (Bloom, 1977; Heller, Price, & Sher, 1980).

However, knotty problems hamper the evaluation of primary prevention programs. In addition to the host of issues complicating evaluation of action programs in general (cf. Weiss, 1972), prevention programs as a group are hampered by a lack of precision and comparability in program components (i.e., target population, intervention, and outcome goals) and also by a lack of time allowed to establish definitive findings regarding long-term effects on incidence. That is, since the ultimate effects of primary prevention programs are not clear until considerable time has passed, the necessary longitudinal studies cannot be completed within the short duration of the typical funding cycle. Broskowski and Baker (1974) have discussed the difficulties of separating out the particular contribution of a prevention program among all the competing rival variables which can account for demonstrated differences. Bloom (1968, p. 713) pointed out that: "In a word, we are generally asked to evaluate the outcome of an undefined program having unspecified objectives on an often vaguely delineated recipient group whose level or variety of pathology is virtually impossible to assess, either before or after their exposure to the program."

We argue that in spite of these problems there is an urgent need to develop effective, widely used frameworks for evaluating primary prevention programs to facilitate communication, comparability, and scientific progress. A great deal of useful information about prevention programs is available in the short term and can be put to good use right away in making decisions, even if the ultimate verdict is not in for years.

*Reprint requests should be sent to Frank Baker, Division of Community Psychiatry, State University of New York at Buffalo, 2211 Main Street, Buffalo, New York 14214.


Particularly valuable here is information concerning the level of maturity of the program vis-à-vis the generally accepted "state of the art," and program costs, in a formal, cost-analytic sense. This paper outlines a comprehensive framework within which the evaluation of primary prevention programs can immediately proceed along these lines, and concludes with a brief discussion of the implications of these proposals for current policy formulation.

Developmental Approach to Evaluation

All programs must progress through common developmental stages to become capable of producing their intended outcomes. For example, any program must, at a minimum, complete to some degree tasks such as securing initial funding, recruiting and training staff, developing an organizational structure, making itself known to potential consumers and attracting their involvement, evaluating its effectiveness, and planning orderly responses to change as a consequence of evaluation. Baker (1983) has described an index for use in formative evaluations called the Program Development Quotient (PDQ). Developmental tasks are defined such that each comprises a series of stages along which an evaluator might gauge a program at a particular point in time. Transforming each dimension into an interval scale provides a basis for quantitative assessment of a program's developmental status on each task. Trained raters who have arrived at acceptable levels of interrater agreement, based on a review of documents and interviews, visit target programs and independently rate them on their accomplishment of each task. These ratings are then assembled into a score for each program. The variety of dimensions allows for comparisons across programs which differ in many ways and yet which all have to accomplish the basic tasks required to deliver services and produce desired client outcomes.
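To make the scoring arithmetic concrete, the following sketch (in Python) assembles hypothetical interrater ratings into a single PDQ score. The task names, the 0-4 stage scale, and the averaging-then-summing rule are illustrative assumptions, not a reproduction of the published protocol.

```python
from statistics import mean

# Hypothetical developmental tasks (illustrative names, not the published PDQ protocol).
TASKS = ["funding", "staffing", "structure", "outreach", "evaluation", "change_planning"]

def pdq_score(ratings_by_rater: dict[str, dict[str, int]]) -> float:
    """Assemble independent rater judgments into a single PDQ score.

    Each rater assigns an interval-scale stage rating (here 0-4) to every
    developmental task; task scores are averaged across raters and summed.
    """
    task_scores = {
        task: mean(rater[task] for rater in ratings_by_rater.values())
        for task in TASKS
    }
    return sum(task_scores.values())

ratings = {
    "rater_a": {"funding": 4, "staffing": 3, "structure": 2,
                "outreach": 1, "evaluation": 1, "change_planning": 0},
    "rater_b": {"funding": 4, "staffing": 2, "structure": 2,
                "outreach": 2, "evaluation": 1, "change_planning": 0},
}
print(pdq_score(ratings))  # 11.0 out of a maximum possible 24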

Application of the PDQ approach in evaluating primary prevention programs follows identification of the key developmental tasks specific to these programs. These tasks are both internal to the program, involving program components and the relationships among them, and also external to the program in the form of relationships to the larger community system.

Specification of Program Components

Mature primary prevention programs are clear in their specification of three internal components (Price, 1978):

(1) The target population toward which the program is directed (e.g., abused or neglected children, who are themselves at risk to become abusive or neglectful parents later in life). Target groups need not themselves be direct recipients of an intervention (e.g., a program to prevent runaway behavior in younger teenagers might deliver parent effectiveness training to their parents). Furthermore, although target groups need not be defined narrowly or restrictively for a program to have value, precise identification of the group is usually consistent with a more advanced stage of development (Heller, Price, & Sher, 1980). Thus, in addition to relevant demographic risks such as age (e.g., children ages 3-5) or marital status (e.g., widowed), mature primary prevention programs indicate additional social or psychiatric conditions (e.g., parental history of disorder) adding further risk and thus further specificity to the target group definition (Heller & Monahan, 1977).

(2) A preventive intervention procedure. Typical procedures include traditional mental health activities such as individual and group counseling, and broader, more generic approaches such as mental health education (Bloom, 1980), competence building (Heber, 1978), social support (Baker, 1977; Cassel, 1976), and network building (Gottlieb & Hall, 1980). The most important basis for choosing an intervention is obviously its ability to prevent disorder in the target population. However, traditional procedures are usually limited in requiring extensive professional training, lacking efficiency in their delivery to large numbers of recipients, and stigmatizing recipients through their association with mental illness. Thus, more mature primary prevention programs include, or are even based entirely upon, one or more of the nontraditional, strength-building interventions (Cowen, 1980).

(3) A disorder or problem-in-living for which the incidence is reduced through the application of the intervention to the recipients. This component of primary prevention is perhaps the most important to specify precisely, yet is often the least well defined in practice. Caplan's (1964) classic distinctions between primary, secondary, and tertiary prevention are made largely in terms of programs' effects on the incidence or prevalence of disorders, and primary prevention is by definition a reduction in incidence (i.e., the number of new cases arising within an interval of time) rather than prevalence (which includes the duration of the disorder). Recent theoretical statements (Bloom, 1979; Cowen, 1980; Task Panel on Prevention, 1978) argue that incidence reduction is also accomplished in a nonspecific fashion by increasing the level of "positive mental health" (competence, information, etc.) in a community or population. Thus, another index of program development is the extent to which attention is devoted to health-promoting effects of the intervention.
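Because the incidence/prevalence distinction carries the very definition of primary prevention, a small worked sketch may help. The case records, dates, and population figure below are invented for illustration.

```python
from datetime import date

# Illustrative case records: (onset_date, recovery_date or None if ongoing).
cases = [
    (date(1983, 2, 1), date(1983, 8, 1)),
    (date(1983, 6, 15), None),
    (date(1982, 11, 1), date(1983, 3, 1)),
]

interval_start, interval_end = date(1983, 1, 1), date(1983, 12, 31)
population = 10_000

# Incidence: new cases arising within the interval, regardless of duration.
new_cases = sum(1 for onset, _ in cases if interval_start <= onset <= interval_end)
incidence_rate = new_cases / population

# Point prevalence at interval_end: cases that began on or before that date
# and had not yet resolved -- here the duration of the disorder matters.
active = sum(1 for onset, end in cases
             if onset <= interval_end and (end is None or end > interval_end))
prevalence = active / population

print(f"incidence = {incidence_rate:.4f}, point prevalence = {prevalence:.4f}")
```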

We propose that the more specifically each component of the program is defined, and the more precisely the different components are linked together, the more mature is the program. That is, although some identification of target group, intervention, and outcome is minimally required to define the program, one of these components may be more clearly specified than another, particularly during the early stages of program development. Thus, the basic developmental task is the increasingly precise specification of components and their interrelationships (Heller, Price, & Sher, 1980).

Systems Integration

The definition and rationale for services integration are the basis for a number of recent articles (Baker, Intagliata, & Kirshstein, 1980; Baker & Northman, 1981). A federal report on evaluation of services integration projects (DHEW, 1973) defines service integration as:

"The linking together by various means of the services of two or more service providers to allow treatment of an individual's or family's needs in a more coordinated and comprehensive manner."

In the debate over services integration, some have stressed the difficulty in achieving this linkage by emphasizing a definition of human services integration as a process of striving for an ideal system goal. The American Society for Public Administrators (1974) describes this ideal integrated or consolidated system of services as rarely achieved and questions whether in some cases it is even desirable. They describe the activities of administrators working to overcome "provincialism or seeming self-sufficiency of a particular governmental service or function" as serving the cause of services integration but warn against expecting this to be an easily achieved goal:

Like learning to play golf, you never go home to your (disinterested) loved ones and proclaim "today I accomplished the integration of a system of services." Rather, more modestly, you whisper courteously, "perhaps today I took another step or steps in the integration of effort among the many public organizations and agencies" (ASPA, 1974, p. 79).

There are a variety of ways by which service integration may be approached. The approaches range from a "top-down" management change strategy, which emphasizes linking service programs and providers at administrative levels for purposes of improved planning and management, to a "bottom-up" strategy focusing on linking service programs and providers at the point at which services are delivered, so as to improve responses to the needs of clients (Gardner, 1976).

An open systems perspective. Integration is one of the major defining characteristics of a system because without a degree of integration among the components, there is no unified totality (Baker, 1973). Components without integration do not constitute a system but merely a collection of individual elements isolated from each other.

An open systems model helps one to maintain a perspective on the critical importance of the program's environment. A primary prevention program can be usefully viewed as engaging in multiple transactions with a number of critical constituencies who provide necessary inputs and accept key outputs. These include clients and consumer groups; suppliers of sanction, staff, funds, materials, technology, and information; other providers who may compete or cooperate with regard to clients and resources; and regulatory organizations, including governmental agencies and professional associations.

Criteria of service integration. In developing criteria for integration, it is important to consider the extent to which internal service components actively relate to one another, in terms of both general program activities and activities centered around individual consumers. It is also important to give attention to the extent to which the organization's multiple services are integrated with those in its larger environment.

In examining the integration of internal components, it is important to differentiate the effort which is devoted to achieving integration from the degree of integration actually achieved. The total amount of time given by staff and administration to achieving integration might be a satisfactory index of integrative effort. The degree of integration achieved might be operationalized in terms of evidence of collaboration and achievement of mutual understandings among units. Collaboration versus noncollaboration might be assessed by examining: (1) who participates in organizational decision making, particularly as the decision making relates to the distribution of scarce resources and the disposition of clients; (2) absence of self-containment of various units in terms of activity patterns; and (3) the extent to which the unit exclusively depends on its own staff to accomplish various functions. One could also assess internal integration by: (4) types and use of interunit communication; (5) the use of specific information exchange devices; (6) the exchange of clients and material resources; (7) the number of jointly engaged projects, including planning, training, and experimental treatment programs; (8) the degree of cross-unit identification through the use of unifying identifying labels; and (9) mutual understanding as indicated by the degree of agreement about the respective roles of units.
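As one deliberately simplified illustration of how these nine indicators might be operationalized, the sketch below scores each as present or absent and reports the proportion achieved; binary scoring and equal weighting are assumptions for illustration, not a validated scale.

```python
# A minimal sketch of scoring internal integration from the nine indicators
# above; binary judgments and equal weights are assumptions, not a validated scale.
INDICATORS = [
    "broad participation in decision making",
    "absence of unit self-containment in activity patterns",
    "reliance on other units' staff for some functions",
    "interunit communication in use",
    "specific information exchange devices in use",
    "exchange of clients and material resources",
    "jointly engaged projects",
    "cross-unit identification via unifying labels",
    "agreement about respective unit roles",
]

def integration_score(observed: dict[str, bool]) -> float:
    """Proportion of the nine collaboration indicators judged present."""
    return sum(observed.get(name, False) for name in INDICATORS) / len(INDICATORS)

observed = {name: True for name in INDICATORS[:5]}  # e.g., first five judged present
print(f"internal integration achieved: {integration_score(observed):.2f}")
```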

At the level of individual care patterns, it would be expected that a comprehensive program would require a client to use fewer different organizational sources in seeking the total care that is needed. Measures have been developed of two major dimensions of individual care patterns: compactness and cohesiveness (Schumaker, 1972). Compactness of an individual care pattern is determined by the range of sources or types of sources of care used by a client or consumer; the smaller the range, the more compact is the pattern. Cohesiveness of a care pattern is defined as "the degree of integration of sources of care into an organized pattern" (Schumaker, 1972, p. 933). The cohesiveness of a human service care pattern is related to the number of sources of care in the pattern, to the number of sources of a particular type in the pattern, and to the number of types of sources in the pattern. Among relevant questions of concern are whether there are sufficient sources to meet the patient's needs and those of his family and whether there are obvious gaps and discontinuities.
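The following sketch conveys the flavor of these two dimensions. Schumaker's (1972) exact formulas are not reproduced in this paper, so the indices below are illustrative stand-ins computed from an invented care pattern.

```python
from collections import Counter

# Illustrative care pattern: each entry is (source, source_type). Both the
# records and the index formulas are illustrative; Schumaker (1972) defines
# the constructs but the exact formulas are not reproduced here.
pattern = [
    ("Dr. Smith", "physician"),
    ("Dr. Smith", "physician"),
    ("Northside Clinic", "outpatient clinic"),
    ("Family Services Inc.", "social agency"),
]

sources = {src for src, _ in pattern}
types = Counter(typ for _, typ in pattern)

# Compactness: the smaller the range of source types used, the more compact;
# here simply the inverse of the number of distinct types.
compactness = 1 / len(types)

# Cohesiveness rises when care is concentrated: few sources, few types,
# with repeated use of the same sources.
cohesiveness = len(pattern) / (len(sources) * len(types))

print(f"{len(sources)} sources, {len(types)} types; "
      f"compactness={compactness:.2f}, cohesiveness={cohesiveness:.2f}")
```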

Closely associated with the degree of effective integration at the level of the client, as well as between organizations, is the concept of continuity of care. Continuity of care has been defined operationally as existing to the extent that four criteria of continuity are met: (1) client movement, or the absence of it, in appropriate response to treatment needs; (2) stability of the client-caretaker relationship; (3) easy flow of both verbal and written communication among staff members; and (4) efforts made to retrieve clients who appear to be dropping out of treatment prematurely (Bass & Windle, 1972).

As Morrissey, Hall, and Lindsey (1982) point out, the traditional emphases of evaluation research have been limited to client and program-level variables; however, it is also important, particularly in primary prevention, to examine interorganizational relations. Baker and O'Brien (1971) have identified a number of hypotheses that relate to intersystemic integration of human service organizations. These hypotheses may be of use in guiding an evaluation of a primary prevention program. Similarly, a study by Baker, Isaacs, and Schulberg (1972) of the relationships between community mental health centers and state mental hospitals offers a model for examination of exchange relationships among mental health agencies, making use of a questionnaire survey and telephone interview procedure.

In terms of methods for determining levels of system integration, several approaches are possible. One method, which can be applied during site visits to each of the providers, is interviewing them regarding the extent to which the programs have facilitated integration of services in their localities. A second approach involves the collection of data through more structured data collection techniques developed by Van de Ven and Ferry (1980), whose technique for studying interorganizational fields is based on a network perspective. Van de Ven and Ferry have developed a method for computing structural indices of interunit networks which involves the use of Likert-type scales asking about dyadic and organizational relations. Among the questions are those that deal with resource dependence, agency and personnel awareness, degree of consensus, domain similarity, frequency of communication, resource flows, formalization of interagency agreements, formalization of interagency committees, perceived effectiveness of relationships, and quality of communications. Still other procedures for studying interorganizational relations as a method of assessing the degree of systems integration of these programs involve the use of questionnaires with clients over time to determine changes in their patterns of seeking help from other providers, the tracking of clients through an interagency network, and specific case studies of clients and agency behavior.
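In the spirit of such structural indices, the sketch below computes two simple network measures (density and reciprocity) from hypothetical dyadic contact ratings. The unit names, the 0-4 frequency scale, and the particular indices are assumptions, not Van de Ven and Ferry's published instrument.

```python
import itertools

units = ["prevention_program", "school_district", "county_health", "family_court"]

# communication[a][b]: frequency of contact a reports with b (0 = none ... 4 = daily).
# All figures are invented for illustration.
communication = {
    "prevention_program": {"school_district": 4, "county_health": 2, "family_court": 0},
    "school_district":    {"prevention_program": 3, "county_health": 1, "family_court": 0},
    "county_health":      {"prevention_program": 2, "school_district": 1, "family_court": 2},
    "family_court":       {"prevention_program": 0, "school_district": 0, "county_health": 2},
}

pairs = list(itertools.combinations(units, 2))

# Density: proportion of possible dyads with any reported contact (either direction).
active = [(a, b) for a, b in pairs if communication[a][b] > 0 or communication[b][a] > 0]
density = len(active) / len(pairs)

# Reciprocity: among active dyads, the share in which contact flows both ways.
reciprocal = [(a, b) for a, b in active if communication[a][b] > 0 and communication[b][a] > 0]
reciprocity = len(reciprocal) / len(active)

print(f"density={density:.2f}, reciprocity={reciprocity:.2f}")
```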

Thus far, precise specification of internal program components and external system relationships have been identified as among the developmental tasks to be mastered by primary prevention programs. Tables 1, 2, and 3 summarize the application of these ideas in terms of an illustrative list of the developmental issues (Table 1), sample items from a PDQ protocol (Table 2), and sample items and format from a measure of services integration (Table 3).


TABLE 1
Summary of Issues in a PDQ Evaluation

Specificity of Program Components
- Target population - defined in terms of empirical factors
- Intervention procedure - specific, competence-building factors
- Outcome - specific and observable change in incidence
- Links between program components

Integration of Internal and External Human Components
- Frequency of contact
- Nature of contact
- Outcomes of contact, for clients: compactness, cohesiveness, continuity
- Outcomes of contact, for system

TABLE 2
PDQ Assessment Instrument Sample Items

Describe the program's target population completely in a single phrase or sentence: ________

1. Does this definition include demographic risk factors (e.g., age, sex, marital status, etc.)?
   No (0)   Yes (1)   Yes, more than one (2)

2. Does this definition include other risk factors (e.g., social or psychiatric condition, etc.)?
   No (0)   Yes (1)   Yes, more than one (2)

Describe the program's intervention(s) in a single phrase or sentence: ________

3. Does the intervention package include generic procedures such as education, social support, network-building, etc.?
   No (0)   Yes (1)   Yes, more than one (2)

Describe the outcome(s) produced by the program in a single phrase or sentence: ________

4. Does the description of outcomes include negative conditions that will be alleviated?
   No (0)   Yes, decreased prevalence only (1)   Yes, decreased incidence (2)

5. Do the outcomes include positive conditions that will increase?
   No (0)   Yes, cognitive (1)   Yes, behavioral (2)


The PDQ method proposed here provides for comparisons of a primary prevention program with an absolute ideal (i.e., the maximum possible PDQ score) and with another program having comparable internal components (e.g., two programs aimed at preventing Fetal Alcohol Syndrome by educating first-time mothers about the effects of alcohol on unborn children). It also allows comparisons to be made of the program with itself at an earlier point in time.

The most useful application of the PDQ method may be in "formative" evaluations in which data on level of program development are fed back to program staff and administration in order to help the program continue to progress. Comparisons can also be constructively made across several incompletely developed programs.

TABLE 3
Sample Items and Format from a Measure of Services Integration

Please list the names of the most important individuals or agencies that your unit has had to coordinate with in providing prevention services during the past month. (Select up to five of the most important agents or agencies.)

For each agent or agency (up to five), record:
- Names of key agents and/or agencies (1-5)
- Reasons for relationship
- How important was this agent or agency in attaining the goals of your prevention project during the past month? (Write a number from the scale below.)
  1 = not very important   2 = somewhat important   3 = quite important   4 = very important   5 = absolutely critical

Please continue to answer the following questions for each of the five other caregivers that you listed above (Key Care Providers #1-#5).

1. During the past month how frequently did people in your program communicate or have contact with the other agent or agency?
   Not once (0)   1-2 times (1)   About weekly (2)   Daily (3)   Many times daily (4)

2. In general what percent of all these communications were initiated by people in your unit? (Indicate percent: ___%)

3. Using the following scale, indicate to what extent your unit changed or influenced the services or operations of this other unit.
   To no extent (1)   Little extent (2)   Some extent (3)   Considerable extent (4)   Great extent (5)

4. Using the scale in 3 above, to what extent do you feel the relationship between your unit and this other unit is productive?


Immature programs are not fairly evaluated in terms of measurements of outcome which they may not yet be capable of fully producing. However, the PDQ method also establishes the basis for eventual summative evaluation of the overall effectiveness of a program in comparison to alternate programs or the absence of a program (Scriven, 1967). Required at this stage is a fully mature program, entailing precise specification of internal components and interrelationships based on years of formative development, and also enough time (i.e., in years) to provide convincing evidence as to long-term effects on prevalence or incidence.

Cost Analysis

In today's economy, choices concerning preventive intervention programs are not limited to data based on relative degree of system development or performance, but also must take into account cost factors. In order to carry out cost-outcome, cost-effectiveness, and cost-benefit studies, basic accounting procedures must be established in each of the programs to be compared. The application of cost-finding techniques as program evaluation methods assumes the establishment of an ongoing accounting system and rate-setting structure in the program. Thus, the first step is to establish a reasonable level of cost accounting if comparisons are to be made among programs. Coursey (1977) has summarized this approach, which assumes a program budget rather than an object budget and simply relates program cost to types of outputs:

"X dollars are used to provide this type of service to reach this goal for this many clients." This type of simple auditing can be used to compare different programs or the same program at different points in time. Of course the validity of "between program" comparison presumes that other program-relevant cost factors can be equated (e.g., the rent or space). (p. 53)

Therefore, it is essential to have a method of keeping track of units of service and their attendant costs.
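A minimal ledger in this spirit might look as follows; the record fields, service names, and figures are illustrative assumptions rather than a prescribed chart of accounts.

```python
from dataclasses import dataclass

# A minimal program-budget ledger in the sense Coursey (1977) describes:
# dollars tied to a type of service, a goal, and a count of clients reached.
# Field names and figures are illustrative assumptions.
@dataclass
class ServiceRecord:
    service_type: str   # e.g., "parent effectiveness training"
    goal: str           # e.g., "reduce runaway behavior"
    clients: int        # clients reached
    cost: float         # direct dollars expended

ledger = [
    ServiceRecord("parent effectiveness training", "reduce runaway behavior", 40, 6000.0),
    ServiceRecord("mental health education", "increase coping skills", 120, 4800.0),
]

for rec in ledger:
    print(f"{rec.service_type}: ${rec.cost:,.0f} for {rec.clients} clients "
          f"(${rec.cost / rec.clients:,.2f} per client) toward '{rec.goal}'")
```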

It is important to make a distinction between cost-outcome analyses and cost-effectiveness analyses. As noted above, cost analyses and outcome studies are justifiable as independent program evaluation techniques and may need to proceed separately during the initial phases of the study; however, the synthesis of the two in cost-outcome analyses precedes the development of cost-effectiveness analyses. Basically, cost-effectiveness analyses consist of comparing cost-outcome analyses across programs. A necessary condition for cost-effectiveness analyses is that the target populations addressed by the programs being compared are themselves comparable (Hagedorn, Beck, Neubert, & Werlin, 1976). If the target groups are essentially dissimilar, any statements that are made about relative cost or productivity of the programs will be invalid. If programs draw their clientele from essentially similar populations, then it is possible to do some relatively reasonable comparisons across programs. Obviously, client variables interact with cost and productivity variables for programs dealing with essentially noncomparable populations.

The distinction between cost-effectiveness and cost-benefit analysis should also be pointed out. Unlike cost-benefit studies, in cost-effectiveness analysis no attempt is made to translate outcome into dollar values. Essentially the procedure here is to compare the cost of alternative procedures for achieving a particular therapeutic goal, which assumes comparable goals as well as comparable target populations. If we have two alternative methods for attempting to reach the same goals, the percentage of success should be essentially comparable for comparable target populations in comparable time periods and investment of resources. Cost-effectiveness analysis allows an evaluator to rank programs in terms of the size of their effects relative to cost, but it will not necessarily establish whether the benefits of the particular programs are worth the investment.

By contrast, cost-benefit analysis attempts to convert output criteria to some monetary value. The difficulty of translating benefits into dollar terms has been one of the most frustrating parts of this procedure. Halpern and Binner (1972) have suggested output value analysis as a procedure for translating outcome into economic values. With adults, this essentially translates into attempting to examine economic productivity as impacted upon by the program. One can also use the economic value of response to treatment by asking a group of raters to estimate what it would be worth to enter into treatment severely impaired individuals and emerge markedly improved. Some types of dollar estimates can then be assigned as related to the estimated response value. This procedure offers a way of examining children's programs, which are not amenable to the kinds of economic productivity analyses which Halpern, Wilderman, and Binner (1974) have described.
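The arithmetic of such an output value analysis can be sketched briefly. The rater estimates, client counts, and program cost below are invented for illustration.

```python
from statistics import mean

# An illustrative output value calculation in the spirit of Halpern and
# Binner (1972): raters estimate the dollar worth of a client entering
# treatment "severely impaired" and emerging "markedly improved".
# All figures are invented.
rater_estimates = [12_000, 15_000, 10_000, 13_000]   # dollars per such response
value_per_response = mean(rater_estimates)

clients_markedly_improved = 25
program_cost = 180_000.0

estimated_output_value = value_per_response * clients_markedly_improved
benefit_cost_ratio = estimated_output_value / program_cost

print(f"output value ${estimated_output_value:,.0f} vs cost ${program_cost:,.0f} "
      f"(ratio {benefit_cost_ratio:.2f})")
```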

During the first year of an evaluation, three phases of cost analysis might be performed: (1) analysis of budgets, (2) review of accounting systems, and (3) development of cost indices.

Budget analysis. The program budget is reviewed to develop a list of the major cost categories which serve as a reference for accounting system review.

Accounting systems review. The primary objective of this review is to document the process(es) by which costs are assigned to the program. In order to perform cost-outcome comparisons, the program costs must include all the major costs related to the performance of the program. Once the process(es) by which costs are charged to the program have been documented, the next step is to relate these to major cost categories to facilitate cost-outcome comparisons.

Cost indices. In order to have valid cost-outcome comparisons across programs, the per unit costs of identical inputs to the programs must not be materially different. Thus, if the cost of floor space in one locality is much higher than in another, some adjustment must be made before a valid comparative cost-outcome analysis can be done.

Where these cost differences are material, some adjustment is needed to determine the most cost-effective program apart from the unique effects of the particular localities. For comparison purposes, the cost of program B operated in locality X could be established and compared with the costs of program A operated in locality X. Also, since these programs are experimental, it is desirable to develop a basis for estimating the cost of running the same program in another locality, e.g., program A in locality Y.

In the development of cost indices, attention must be focused on the major cost categories; otherwise the cost-index program will itself consume too many resources. Cost data from the individual programs will provide much of the necessary information. For example, salaries of similarly qualified personnel can be compared. Utility costs (e.g., per kWh) can be compared. Published price indices are available, although in many cases neither the cost categories nor the geographic areas may be partitioned sufficiently finely for use in this fashion.
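The locality adjustment described above amounts to reweighting each major cost category by a local price ratio, as in the following sketch; the categories, figures, and ratios are illustrative assumptions.

```python
# Adjusting program B's costs to locality X prices so it can be compared with
# program A operated in locality X. Cost categories and price ratios are
# illustrative assumptions.
program_b_costs = {          # actual costs in locality Y, by major category
    "salaries": 90_000.0,
    "floor_space": 24_000.0,
    "utilities": 6_000.0,
}

# Per-unit price ratio, locality X over locality Y, for each category
# (e.g., comparable salaries, rent per square foot, cost per kWh).
price_ratio_x_over_y = {
    "salaries": 1.10,
    "floor_space": 1.50,
    "utilities": 0.90,
}

adjusted = {cat: cost * price_ratio_x_over_y[cat] for cat, cost in program_b_costs.items()}
print(f"program B as if run in locality X: ${sum(adjusted.values()):,.0f} "
      f"(actual in Y: ${sum(program_b_costs.values()):,.0f})")
```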

The steps in conducting the cost-outcome analyses and subsequent cost-effectiveness studies of primary prevention programs would include essentially the same six basic steps identified by Hagedorn et al. (1976) in their discussion of cost-outcome studies in the evaluation of community mental health centers:

1. Identify objectives and goals of treatment as well as target groups across programs. This may require additional specification of objectives and treatment goals as well as refinement of information about target groups, including not only clients and families but also the community in some cases.
2. Aggregate treatment programs, modalities of treatment, or techniques to be compared.
3. Establish the cost of programs by client or other unit (e.g., family or other aggregate level). This assumes the basic cost-finding procedures and common accounting procedures described above.
4. Measure the outcome of interventions on the target groups by the administration of pre- and posttreatment assessments. This may require encouragement of the use of similar measures across programs of similar type in order to have comparable outcome measurement.
5. Combine cost and outcome data for each program.
6. Combine cost-outcome matrices for multiple programs to perform cost-effectiveness analysis.
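In the simplest case, steps 5 and 6 reduce to dividing cost per client by an outcome rate measured on a common instrument and ranking programs on the result, as sketched below with invented figures (and assuming comparable target groups, as required above).

```python
# Combining per-program cost and outcome data (steps 5 and 6 above) into a
# simple cost-effectiveness comparison. Programs, costs, and outcome rates
# are invented; both programs are assumed to serve comparable target groups.
programs = {
    # cost per client, proportion of clients improved on a common pre/post measure
    "program_a": {"cost_per_client": 150.0, "improvement_rate": 0.30},
    "program_b": {"cost_per_client": 220.0, "improvement_rate": 0.40},
}

# Cost per improved client: lower is more cost-effective.
ranking = sorted(
    programs.items(),
    key=lambda kv: kv[1]["cost_per_client"] / kv[1]["improvement_rate"],
)

for name, d in ranking:
    per_improved = d["cost_per_client"] / d["improvement_rate"]
    print(f"{name}: ${per_improved:,.0f} per improved client")
```

As noted earlier, such a ranking indicates relative efficiency only; it does not establish whether either program's benefits are worth the investment.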

Summary and Implications

The ideas presented here have important implications for policy decisions concerning primary prevention. First, although programs require time to reach a point where full-scale evaluation of outcome is possible, the path towards this goal invariably begins with basic steps like obtaining referrals, sharpening the intervention, and so on, the accomplishment of which can represent meaningful progress over the short term. Furthermore, this approach implies that evaluation at this stage can directly assist in moving the program towards the "state of the art." Price (1978) has referred to the bootstrapping principle of program development in primary prevention in suggesting that programs which can specify one component precisely can follow some simple steps to reach the more complete identification of components characteristic of mature programs. For example, where a program starts by defining as precisely as possible the target group of interest, such as the children of chronic alcoholics, its developers can use the research literature and their own experiences to identify with some specificity exactly what the disorders or problems-in-living are for which this group is at risk, thus clarifying a second component. Following this, bootstrappers need only determine the relative potential of different intervention strategies to prevent (or to facilitate, if the outcome is positive) those end-states in that population, and development is complete. The stage may then be set for a summative evaluation (see above).

Where the initial point of departure is a specific intervention procedure (e.g., social skills training), program developers first identify specific outcomes that may be prevented (if negative) or facilitated (if positive) by the application of this intervention. Epidemiologic data on these outcomes will then point to specific groups which are at risk and thus in need of the intervention. Finally, when the starting point is the outcome to be prevented, the first step is to define in valid, reliable, and operational terms the target disorder, and then use epidemiologic and retrospective data in identifying specific groups at risk for that disorder. The final step is to find or develop an intervention that specifically links the target population with a reduced incidence of the disorder.

A second implication of these proposals is that for primary prevention programs to substantiate their widely cited (e.g., Swift, 1980) advantages over treatment in terms of cost-benefits and accountability, costs and outcomes will have to be linked in a more explicit and systematic way. Standardized cost accounting will have the further benefit of permitting cost-effectiveness comparisons, which will indicate which programs are more efficient.

It would appear from the foregoing arguments that conducting and disseminating the results of developmental and cost analyses are desirable steps for all prevention programs and should be made a contingency of funding. Meaningful evaluation is possible at all stages across the life span of a program, and failure to evaluate will be detrimental to the future prospects of prevention in mental health.

REFERENCES

American Society for Public Administrators. (1974). Human services integration. Washington, D.C.: Author.

Baker, F. (1973). Organizations as open systems. In F. Baker (Ed.), Organizational systems: General systems approaches to complex organizations. Homewood, IL: Richard D. Irwin.

Baker, F. (1977). The interface between professional and natural support systems. Clinical Social Work Journal, 5, 139-148.

Baker, F. (1983). A system development approach to program evaluation. Manuscript submitted for publication.

Baker, F., Intagliata, J., & Kirshstein, R. (1980). Case management evaluation: Phase one final report, Vol. III. Albany, NY: New York State Office of Mental Health.

Baker, F., Isaacs, C. D., & Schulberg, H. C. (1972). Study of the relationships between community mental health centers and state mental hospitals (Contract report to the National Institute of Mental Health, Accession #PB249-485). Springfield, VA: National Technical Information Service.

Baker, F., & Northman, J. (1981). Helping: Human services for the '80s. St. Louis: C. V. Mosby.

Baker, F., & O'Brien, G. (1971). Inter-systems relations and coordination of human service organizations. American Journal of Public Health, 61, 130-137.

Bass, R. D., & Windle, C. (1972). Continuity of care: An approach to measurement. American Journal of Psychiatry, 129, 196-201.

Bloom, B. L. (1968). The evaluation of primary prevention programs. In L. Roberts, N. Greenfield, & M. Miller (Eds.), Comprehensive mental health. Madison, WI: University of Wisconsin Press.

Bloom, B. L. (1977). Evaluating achievable objectives for primary prevention. In S. Goldston (Ed.), Primary prevention: An idea whose time has come. Washington, D.C.: U.S. Government Printing Office.

Bloom, B. L. (1979). Prevention of mental disorders: Recent advances in theory and practice. Community Mental Health Journal, 15, 179-191.

Bloom, B. L. (1980). Social and community interventions. Annual Review of Psychology, 31, 111-142.

Broskowski, A., & Baker, F. (1974). Professional, organizational, and social barriers to primary prevention. American Journal of Orthopsychiatry, 44, 707-719.

Caplan, G. (1964). Principles of preventive psychiatry. New York: Basic Books.

Cassel, J. (1976). The contribution of the social environment to host resistance. American Journal of Epidemiology, 104, 107-123.

Coursey, R. D. (1977). An overview of techniques and models of program evaluation. In R. D. Coursey, G. A. Specter, S. A. Murrell, & B. Hunt (Eds.), Program evaluation and mental health. New York: Grune and Stratton.

Cowen, E. (1980). The wooing of primary prevention. American Journal of Community Psychology, 8, 258-284.

Department of Health, Education, and Welfare. (1973). Integration of human services in HEW: An evaluation of services integration projects (SRS-73-02012). Washington, D.C.: U.S. Government Printing Office.

Gardner, S. (1976). Roles for general purpose governments in services integration (Project Share: Human Services Monograph Series, No. 2, Publication No. 08-76-130). Washington, D.C.: U.S. Government Printing Office.

Gottlieb, B. H., & Hall, A. (1980). Social networks and the utilization of preventive mental health services. In R. Price, R. Ketterer, B. Bader, & J. Monahan (Eds.), Prevention in mental health: Research, policy, and practice. Beverly Hills, CA: Sage.

Hagedorn, H. J., Beck, K. J., Neubert, S. F., & Werlin, S. H. (1976). A working manual of simple program evaluation techniques for community mental health centers (Publication No. (ADM) 76-404). Rockville, MD: National Institute of Mental Health.

Halpern, J., & Binner, P. R. (1972). A model for an output value analysis of mental health programs. Administration in Mental Health, Winter, 40-51.

Halpern, J., Wilderman, E., & Binner, P. R. (1974). Income patterns and mental health treatment (Research Report). Denver, CO: Fort Logan Mental Health Center.

Heber, F. R. (1978). Sociocultural mental retardation: A longitudinal study. In D. Forgays (Ed.), Primary prevention of psychopathology (Vol. 2). Hanover, NH: University Press of New England.

Heller, K., & Monahan, J. (1977). Psychology and community change. Homewood, IL: Dorsey Press.

Heller, K., Price, R. H., & Sher, K. (1980). Research and evaluation in primary prevention: Issues and guidelines. In R. Price, R. Ketterer, B. Bader, & J. Monahan (Eds.), Prevention in mental health: Research, policy, and practice. Beverly Hills, CA: Sage.

Levine, M. (1981). The history and politics of community mental health. New York: Oxford University Press.

Levine, M., & Perkins, D. V. (1980). Social setting interventions and primary prevention: Comments on the Report of the Task Panel on Prevention to the President's Commission on Mental Health. American Journal of Community Psychology, 8, 147-157.

Morrissey, J., Hall, R., & Lindsey, M. (1982). Interorganizational relations: A sourcebook of measures for mental health programs (DHHS Publication No. [ADM] 82-1887; National Institute of Mental Health, Series BN, No. 2). Washington, D.C.: U.S. Government Printing Office.

Plaut, T. F. A. (1980). Prevention policy: The federal perspective. In R. Price, R. Ketterer, B. Bader, & J. Monahan (Eds.), Prevention in mental health: Research, policy, and practice. Beverly Hills, CA: Sage.

Price, R. H. (1978). Evaluation research in primary prevention: Lifting ourselves by our bootstraps. Paper presented at the Primary Prevention Conference of the Community Mental Health Institute, National Council of Community Mental Health Centers, Denver, CO.

Schumaker, C. J. (1972). Change in health sponsorship: II. Cohesiveness, compactness and family consultation of medical care patterns. American Journal of Public Health, 62, 931-935.

Scriven, M. (1967). The methodology of evaluation. In Perspectives of curriculum evaluation (AERA Monograph Series on Curriculum Evaluation, No. 1). Chicago: Rand McNally.

Swift, C. F. (1980). Primary prevention: Policy and practice. In R. Price, R. Ketterer, B. Bader, & J. Monahan (Eds.), Prevention in mental health: Research, policy, and practice. Beverly Hills, CA: Sage.

Task Panel on Prevention. (1978). Report of the Task Panel on Prevention. In Report to the President of the President's Commission on Mental Health (Vol. IV). Washington, D.C.: U.S. Government Printing Office.

Van de Ven, A. H., & Ferry, D. L. (1980). Measuring and assessing organizations. New York: Wiley.

Weiss, C. H. (1972). Evaluating educational and social action programs: A treeful of owls. In C. Weiss (Ed.), Evaluating action programs: Readings in social action and education. Boston: Allyn & Bacon.