
Journal of Community Psychology, Volume 11, July 1983

MANAGER AND EVALUATOR VIEWS OF PROGRAM EVALUATION*

FRANK BAKER

State University of New York at Buffalo

This article presents the results of a 1979 national survey of federally funded community mental health centers (CMHCs). The directors and chief program evaluators of 323 CMHCs returned questionnaires which allow comparisons of the opinions of the center managers and their evaluators regarding accountability and evaluation and the level of skill and knowledge of center evaluation staff. The CMHC directors and evaluators are also compared to a random sample of community psychologists and four other criterion groups of professionals regarding their responses to three items of the Service Provider Accountability subscale of the Human Service Ideology scale.

Over a dozen years ago, a colleague and I (Schulberg & Baker, 1968) published a paper discussing the problems of implementation of evaluative research findings by human service managers. One of the difficulties identified in that article was the difference in perspective between administrators and evaluators. Over the ensuing years, others have also commented on the fact that managers and evaluators tend to view the world in different ways, emphasize different priorities, and approach the requirement of evaluation with different degrees of attitudinal commitment (Hagedorn, Beck, Neubert, & Werlin, 1976; Trice & Roman, 1974; Weiss, 1972). Recently there has been considerable attention given to problems of management's utilization of evaluative research findings and the beginning of research on the factors that determine such utilization (Cohen, 1977; Davis & Salasin, 1977; Schulberg & Jerrell, 1979). However, the literature has offered relatively little empirical information comparing the frames of reference of administrators and evaluators.

The primary purpose of this paper is to present some of the results from a nationwide survey of the directors and primary evaluators of federally funded community mental health centers regarding their perceptions of program evaluation issues. Specifically, this paper will discuss the similarities and differences in the attitudes of the center managers and their evaluator colleagues concerning being accountable to consumers and their views regarding the level of skill and knowledge of center evaluation staff.

METHOD

CMHC Director/Evaluator Survey

This paper reports results from a 1979 nationwide survey of community mental health centers (CMHCs) that was jointly sponsored by the National Council of Community Mental Health Centers and a Committee on Program Evaluation which I chaired within Division 27 of the American Psychological Association (APA). The major purposes of the survey were to determine the current status of program evaluation in CMHCs and to assess the attitudes of the director and chief program evaluator toward evaluation at each CMHC. Data concerning the background and organizational characteristics of the chief evaluators at the CMHCs were reported in an earlier paper that also presented the methodology employed in more detail (Baker, 1982).

*An earlier version of this paper was presented at a symposium on "Program Evaluation: A Tool for Health Care Administrators," July 16, 1981, at Hofstra University, Hempstead, New York, and will appear in a forthcoming book based on that symposium. The author wishes to acknowledge the support of Gerald Landsberg and John C. Wolfe at the National Council of Community Mental Health Centers and Murray Levine and J. R. Newbrough of APA Division 27. Special thanks also to Charles Windle of the National Institute of Mental Health and to other members of the 1979 Division 27 Program Evaluation Committee, including Donald Bartlett, Anthony Broskowski, Gordon Seidenberg, Herbert C. Schulberg, and Barry Willer. Send reprint requests to author, Division of Community Psychiatry, Department of Psychiatry, State University of New York at Buffalo, 2211 Main Street, Building E, Buffalo, New York 14214.

Briefly, during the summer of 1979, 537 CMHC directors were sent a questionnaire to be completed by themselves and a second questionnaire which they were to pass on for completion by the "person having chief responsibility for program evaluation in the center." Two telephone and mail follow-ups were conducted with nonrespondents, and a total of 323 community mental health centers returned both the director's and program evaluator's questionnaires, yielding a response rate of 60% without any apparent systematic nonresponse bias.

The Questionnaires

In addition to some general background questions, the CMHC director questionnaire was composed of a set of 17 attitude items in standard Likert-style agree-disagree format and a question asking for ratings of the levels of 12 areas of knowledge and skills of the center's present evaluation staff. The center evaluator questionnaire was made up of a parallel set of items and included some more detailed questions concerning the performance of specific program evaluation activities.

Instructions for the attitude items were as follows: “We are interested in the opinions of CMHC directors concerning accountability and evaluation. Please indicate your own personal opinions by circling the appropriate letters indicating your agreement or disagreement with the following statements.”

Fourteen of the items included in the measure of attitudes concerning evaluation and accountability came from a preliminary draft of items for a subscale of the Human Service Ideology scale (Baker & Baker, Note 1) and dealt with issues of human service providers being accountable to consumers. The other three items specifically referred to community mental health centers and dealt with the contribution that program evaluation can make to community mental health center practice. Kirkhart (Note 2) used these three items in her dissertation study of community mental health centers in the State of Michigan.

The 17 attitude items, including nine positively worded and eight negatively worded statements, were arranged in random order in Likert format with provision for respondents to circle one of six response categories for each item: strongly, moderately, or slightly agree; and strongly, moderately, or slightly disagree. On positively worded items, strong agreement was scored 7; moderate agreement, 6; slight agreement, 5; slight disagreement, 3; moderate disagreement, 2; strong disagreement, 1. Reverse scoring was used for negatively worded items, and when no response was given to an item, a score of 4 was assigned.
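For readers who want to trace the computation, the following is a minimal sketch of the scoring rule just described. The response-code letters, the data structures, and the particular set of negatively worded item numbers passed in are illustrative assumptions rather than details taken from the survey materials.

```python
# Sketch of the item-scoring rule described in the text: agreement coded 5-7,
# disagreement 1-3, no response assigned 4, and negatively worded items
# reverse-scored. The response codes and the set of negative items are
# illustrative assumptions, not reproduced from the questionnaire.

AGREEMENT_SCORES = {
    "SA": 7,  # strongly agree
    "MA": 6,  # moderately agree
    "A": 5,   # slightly agree
    "D": 3,   # slightly disagree
    "MD": 2,  # moderately disagree
    "SD": 1,  # strongly disagree
}

def score_item(response, negatively_worded):
    """Score one attitude item on the 1-7 scale used in the survey."""
    if response is None:          # no response -> score of 4
        return 4
    score = AGREEMENT_SCORES[response]
    if negatively_worded:         # reverse-score negatively worded items
        score = 8 - score         # maps 7<->1, 6<->2, 5<->3
    return score

def total_attitude_score(responses, negative_items):
    """Sum the 17 item scores; `negative_items` is the (assumed) set of
    negatively worded item numbers."""
    return sum(
        score_item(resp, item_no in negative_items)
        for item_no, resp in responses.items()
    )
```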

Another part of both the directors' and the evaluators' questionnaires called for the rating of 12 skill areas. The task was to rate the level of skill or knowledge of the current evaluation staff of the center on a 4-point scale from poor to excellent. The respondent was also asked to check those skills he/she thought needed to be improved. The twelve areas of skill or knowledge included:

(1) accounting, (2) administration/organization management, (3) clinical treatment techniques, (4) community organization and relations, (5) computers, (6) data analysis/statistics, (7) interpersonal skills/group dynamics, (8) evaluation instrument design/scaling, (9) planning and policy development, (10) quality of care, (11) research study, and (12) writing.

RESULTS


Table 1 presents a comparison of the responses of directors and evaluators to the 17 program evaluation items. For eight of these items, there are no significant differences between the means of the evaluators and directors in degree of agreement versus disagreement with the statements. However, for nine of the items there is a statistically significant difference between the mean response of the evaluator and director.

Evaluators have significantly higher positive responses to the belief that program evaluation can contribute to center management decision making; to clients having a right to information about human service programs so that they can shop around and exercise the best judgment as to where to go for help; that human services should be accountable to and regularly evaluated by consumers and providers; that the provider of services has a responsibility to be accountable to the users of the service; and that service organizations should have established mechanisms to assess the extent to which goals and objectives are achieved. The directors have significantly higher mean scores indicating their agreement that the administration of a human service agency should establish specific procedures for maintaining program accountability. Evaluators also score significantly higher on two negatively worded statements, indicating a stronger tendency to reject the views that accountability to funding sources is essentially just a fancy term for more control and more bureaucracy and that any human service organization which attempts to become fully accountable to all interested parties faces a hopeless dilemma.

No significant differences are found between the evaluators' and directors' perceptions of the importance that some person or agency accept responsibility for the whole person, that health care services should be linked to the needs of defined communities for which providers of health care are responsible, and that agencies should have procedures for client evaluation of services. Both the evaluators and directors were somewhat neutral about whether consumer participation contributes substantially to the promotion of social accountability and whether accountability mechanisms are likely to enhance program goal attainment.

Another way of examining the attitude items is to examine their dimensional structure. Therefore, the 17 evaluation attitude items were subjected to an alpha factor analysis based on the combined pool of responses of the directors and the evaluators. Four factors with eigenvalues greater than one emerged, which accounted for a total of 51.2% of the variance. As Table 2 shows, Factor 1 accounted for 27.1% of the variance, followed by Factor 2 with 9.8%, Factor 3 with 8.1%, and Factor 4 with 6.2%. Four items had factor loadings of .4 or higher on each of factor subscales 1, 2, and 3, while two items loaded at .4 or above on Factor 4. Factor 1, whose highest loading items include items 1, 2, 3, and 16, seems to be made up of items emphasizing that program evaluation can contribute to


TABLE 1
Comparison of Ratings by Center Directors and Evaluators of Evaluation Attitude Items

(For each item: Directors mean (S.D.); Evaluators mean (S.D.); t-score, D.F., significance.)

1. Program evaluation can contribute to CMHC management decision making.
   Directors 6.59 (.62); Evaluators 6.75 (.51); t = -3.60, df 263, p < .01
2. Program evaluation can contribute to the effectiveness and efficiency of CMHC services.
   Directors 6.61 (.62); Evaluators 6.64 (.60); t = -.57, df 263, NS
3. It is essential that the administration of a human service agency establish specific procedures for maintaining program accountability.
   Directors 6.60 (.65); Evaluators 6.41 (.90); t = 2.69, df 263, p < .01
4. In providing effective helping services, it is important that some person or agency accept responsibility for the whole person.
   Directors 5.54 (.52); Evaluators 5.34 (1.62); t = 1.57, df 263, NS
5. Evaluation findings concerning human service programs should not be made known to the public.
   Directors 5.75 (.30); Evaluators 5.96 (1.18); t = -2.12, df 263, p < .05
6. The provision of health care services should be linked to the needs of defined communities for which providers of health care are held responsible.
   Directors 5.96 (1.02); Evaluators 5.84 (1.23); t = 1.26, df 263, NS
7. Consumer participation does not contribute substantially to the promotion of social accountability.
   Directors 4.84 (.71); Evaluators 4.89 (.74); t = -.30, df 263, NS
8. Clients should have a right to information about a human service program so that they may shop around and exercise their best judgment as to where to go for help.
   Directors 6.03 (.06); Evaluators 6.29 (.01); t = -3.36, df 263, p < .01
9. A hopeless dilemma faces any human service organization attempting to become more fully accountable to all interested parties.
   Directors 3.53 (.85); Evaluators 4.00 (.94); t = -3.23, df 263, p < .01
10. Agencies should have procedures for client evaluation of services.
   Directors 6.04 (.12); Evaluators 6.18 (.04); t = -1.67, df 263, NS
11. Accountability to funding sources is essentially just a fancy term for more control and more bureaucracy.
   Directors 4.48 (.79); Evaluators 4.80 (.68); t = -2.29, df 263, p < .05
12. Demands for being accountable to the client, the community, and funding bodies are secondary to keeping the agency in a sound budgetary condition.
   Directors 4.72 (.63); Evaluators 4.74 (1.54); t = -.21, df 263, NS
13. Accountability mechanisms are not likely to enhance program goal attainment.
   Directors 5.51 (.36); Evaluators 5.50 (1.53); t = .13, df 263, NS
14. Human services should be accountable to and regularly evaluated by consumers and providers.
   Directors 5.50 (.24); Evaluators 5.73 (1.19); t = -2.25, df 263, p < .05
15. The provider of services has a responsibility to be accountable to the users of services.
   Directors 6.01 (.95); Evaluators 6.18 (1.00); t = -2.06, df 263, p < .05
16. Service organizations should have established mechanisms to assess the extent to which goals and objectives are achieved.
   Directors 6.27 (.78); Evaluators 6.41 (.76); t = -2.11, df 263, p < .05
17. The positive contribution of evaluation to community mental health practices is minimal.
   Directors 5.10 (1.63); Evaluators 5.08 (1.74); t = .19, df 263, NS


agency administration and includes three items specifically referring to CMHCs. Factor 2 loads highly on items 7, 9, 11, and 13, all of which have to do with organizational accountability. Factor 3 includes among its highest loading items 8, 10, 14, and 15, all of which stress the importance of the consumer's role and rights in evaluation. The fourth factor, made up of items 4 and 6, emphasizes the importance of service providers being responsive to the needs of the whole person and community.
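The factor-retention step described here can be illustrated with a short sketch. The paper used an alpha factor analysis; the code below shows only the eigenvalue-greater-than-one (Kaiser) retention rule and the percent-of-variance calculation applied to an item correlation matrix, with a simulated response matrix standing in for the pooled director and evaluator data.

```python
# Sketch of the factor-retention step: compute eigenvalues of the 17 x 17 item
# correlation matrix, keep factors with eigenvalues > 1, and report the percent
# of total variance each retained factor accounts for. This illustrates only
# the eigenvalue rule, not alpha factoring itself, and the response matrix is
# simulated for illustration (pooled sample size is an assumption).
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 528, 17
responses = rng.integers(1, 8, size=(n_respondents, n_items)).astype(float)

corr = np.corrcoef(responses, rowvar=False)      # 17 x 17 correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]     # sorted, largest first

retained = eigenvalues[eigenvalues > 1.0]        # Kaiser criterion
pct_variance = 100.0 * retained / n_items        # each item contributes 1 unit

for i, (ev, pct) in enumerate(zip(retained, pct_variance), start=1):
    print(f"Factor {i}: eigenvalue = {ev:.2f}, {pct:.1f}% of total variance")
print(f"Retained factors account for {pct_variance.sum():.1f}% of the variance")
```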

These items were combined into subscale scores and comparisons were made between the evaluators and directors on the subscales. There is no significant difference between the evaluators and the directors with regard to the importance of evaluation to administration or in the salience of meeting the needs of whole persons and communities.

TABLE 2
Evaluation Attitude Factors and Items with Highest Factor Loadings
(Percent of total variance and factor loadings shown in parentheses.)

Factor 1. Administrative Contribution (27.1% of total variance)
  1. Program evaluation can contribute to CMHC management decision making. (.70)
  2. Program evaluation can contribute to the effectiveness and efficiency of CMHC services. (.79)
  3. It is essential that the administration of a human service agency establish specific procedures for maintaining program accountability. (.42)
  16. Service organizations should have established mechanisms to assess the extent to which goals and objectives are achieved. (.41)

Factor 2. Accountability (9.8% of total variance)
  7. Consumer participation does not contribute substantially to the promotion of social accountability. (.51)
  9. A hopeless dilemma faces any human service organization attempting to become more fully accountable to all interested parties. (.58)
  11. Accountability to funding sources is essentially just a fancy term for more control and more bureaucracy. (.56)
  13. Accountability mechanisms are not likely to enhance program goal attainment. (.50)

Factor 3. Consumer's Role (8.1% of total variance)
  8. Clients should have a right to information about a human service program so that they may shop around and exercise their best judgment as to where to go for help. (.47)
  10. Agencies should have procedures for client evaluation of services. (.53)
  14. Human services should be accountable to and regularly evaluated by consumers and providers. (.55)
  15. The provider of service has a responsibility to be accountable to the users of services. (.61)

Factor 4. Responsibility for Needs (6.2% of total variance)
  4. In providing effective helping services, it is important that some person or agency accept responsibility for the whole person. (.43)
  6. The provision of health care services should be linked to the needs of defined communities for which providers of health care are held responsible. (.53)


However, there are significant differences with regard to the other two subscales. Evaluators have a higher mean score on the subscale dealing with accountability to consumers and funders and, similarly, a significantly higher score regarding the importance of the consumer in evaluation.

Combining all 17 items into a total score offers another way of comparing the perceptions of directors and evaluators regarding evaluation. The mean score of evaluators was significantly higher than that of directors on a score composed of the summative ratings across the 17 items. It appears that evaluators have a somewhat more positive attitude toward evaluation than do the directors, although the correlation between the directors and evaluators from the same center on the total scores of the 17 items was .23, which is significant beyond the .01 level.
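A minimal sketch of the comparison just described follows: a paired t-test on the summed 17-item totals for the director and evaluator from the same center, together with the Pearson correlation between the paired totals. The arrays are simulated placeholders; only the procedure mirrors the text.

```python
# Sketch of the total-score comparison: a paired t-test on the summed 17-item
# scores for directors and evaluators from the same centers, plus the Pearson
# correlation between the paired totals. The data below are simulated; in
# practice each position would hold one center's director/evaluator pair.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_centers = 264                                   # assumed number of matched pairs
director_totals = rng.normal(95, 10, n_centers)   # placeholder 17-item totals
evaluator_totals = director_totals + rng.normal(2, 9, n_centers)

t_stat, p_value = stats.ttest_rel(evaluator_totals, director_totals)
r, r_p = stats.pearsonr(director_totals, evaluator_totals)

print(f"Paired t = {t_stat:.2f}, p = {p_value:.4f} (evaluators vs. directors)")
print(f"Director-evaluator correlation r = {r:.2f}, p = {r_p:.4f}")
```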

Comparisons to National Criterion Groups

During 1977 and 1978, Baker and Baker (Note 1) gathered data from five criterion groups of professionals regarding their attitudes toward various aspects of human service ideology. Based on the responses of these 233 individuals, the multidimensional Human Service Ideology (HSI) scale was developed. One of the four HSI subscales was labeled Service Provider Accountability (SPA) and was composed of five items.

Since the survey of community mental health center directors and evaluators was being planned at about the same time that data analysis was beginning on the Human Service Ideology scale, some of the same items were included in the questionnaires sent to CMHC directors and evaluators. Thus, it is possible to examine the responses of the center directors and evaluators on part of the Service Provider Accountability dimension of the Human Service Ideology Scale. Three of the five items constituting the SPA scale were included in the community mental health center questionnaires.

The three items from the Service Provider Accountability subscale (that was administered both to the five known groups on which the HSI scale was developed and to the evaluators and directors of community mental health centers) were the following:

3. It is essential that the administration of a human service agency establish specific procedures for maintaining program accountability.

11. Accountability to funding sources is essentially just a fancy term for more control and more bureaucracy.

13. Accountability mechanisms are not likely to enhance program goal attainment.

The Human Service Ideology scale was developed on the basis of responses from five groups of respondents who were selected on the basis of the likelihood of their having different degrees of commitment to human service concepts. The five groups of respondents included:

1. A random sample of 100 individuals from the Division of Community Psychology (Division 27) of the American Psychological Association selected from the 1978 APA Directory.

2. Members of the New England Human Service Coalition, composed of individuals from Boston and other parts of New England who met regularly to discuss common interests in the development of human services. One of the products of this group was the founding of the New England Journal of Human Services.

3. Members of the Human Services Network, a nationwide professional interest group organized by Fran Burnford, Director of the University of Southern California (USC) Human Services Research and Design Center. In 1977, the USC Human Services Center brought together members of the network from around the country at a conference to discuss human services education. A product of this national conference was the influential book, Human Services Professional Education (Chenault & Burnford, 1978).

4. A random sample of 100 board-certified psychiatrists selected from the Directory of Medical Specialists for 1977.

5. Students in a public administration doctoral program in a western university who were also contacted with the help of the USC Human Services Center.

Comparing the responses to the three common items for the CMHC directors and evaluators and the five criterion groups on which the HSI scale was developed allows us to examine the attitudes of center personnel within a wider perspective (see Table 3). An omnibus test of all seven groups through a one-way analysis of variance for each item was conducted. The F ratios for all three analysis of variance comparisons were significant beyond the .001 level. Applying the Scheffé procedure for a posteriori contrasts showed no differences between the directors and evaluators from the centers but did show differences on two of three items when they were compared to the other national respondent groups.
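The omnibus test and post hoc contrasts described above can be sketched as follows. The group scores are simulated placeholders, and because Scheffé's procedure is not a single library call in scipy, the criterion is coded directly from its standard formula; treat this as an illustration of the method named in the text, not the author's actual analysis.

```python
# Sketch of the omnibus one-way ANOVA across the seven respondent groups for
# one item, followed by Scheffé's a posteriori criterion for each pairwise
# contrast. Group sizes come from Table 3; the scores themselves are simulated.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_ns = {"Directors": 264, "Evaluators": 264, "USC Network": 60,
            "NE Coalition": 40, "Comm. Psychologists": 74,
            "Psychiatrists": 42, "Public Admin Students": 17}
groups = {name: rng.normal(6.0, 1.2, n) for name, n in group_ns.items()}

f_stat, p_value = stats.f_oneway(*groups.values())      # omnibus test
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Scheffé criterion: a pairwise contrast is significant at alpha = .05 if
# |mean_i - mean_j| exceeds sqrt((k-1) * F_crit * MSE * (1/n_i + 1/n_j)).
k = len(groups)
n_total = sum(group_ns.values())
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
mse = ss_within / (n_total - k)                          # within-group mean square
f_crit = stats.f.ppf(0.95, k - 1, n_total - k)

for (name_i, g_i), (name_j, g_j) in itertools.combinations(groups.items(), 2):
    diff = abs(g_i.mean() - g_j.mean())
    threshold = np.sqrt((k - 1) * f_crit * mse * (1 / len(g_i) + 1 / len(g_j)))
    if diff > threshold:
        print(f"{name_i} vs. {name_j}: |diff| = {diff:.2f} > {threshold:.2f}")
```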


TABLE 3
Comparison of Center Directors and Evaluators with National Criterion Groups for Items from HSI Service Provider Accountability Subscale

Response Groups                          N     Item #3 Mean   Item #11 Mean   Item #13 Mean
Center Directors                         264   6.60           4.48            5.51
Center Evaluators                        264   6.41           4.80            5.50
USC Human Services Network               60    6.53           5.58            5.63
New England Human Services Coalition     40    6.73           5.95            5.12
Community Psychologists                  74    6.39           4.82            5.64
Board-Certified Psychiatrists            42    5.83           4.29            4.86
Public Administration Students           17    5.83           4.71            5.12

Groups with significantly different means by Scheffé procedure for a posteriori contrasts:
  Item #3: 1 x 5, 1 x 7, 2 x 5, 3 x 5, 5 x 6
  Item #11: 1 x 3, 1 x 6, 2 x 6, 3 x 5, 5 x 6
  Item #13: 5 x 6

Regarding the item (#3) which emphasized the importance of the administration of a human service organization establishing specific procedures for maintaining program accountability, it was found that the directors of community mental health centers differed from the psychiatrists and public administrators but did not differ significantly from the USC network group. The evaluators only differed from the psychiatrists on this item. As noted in the table, the New England generalists had the highest mean score on this item, followed by the center directors, then the USC group, followed by the center evaluators, community psychologists, and then the psychiatrists and public administration students with the lowest scores.

The second item (#11) is a rather cynical statement indicating that accountability to funding sources is just a fancy term for more control and more bureaucracy. A high score on this item required disagreeing with it because of its negative wording. As might be expected, the two human service interest network groups rejected this item most strongly, followed by the psychologists, center evaluators, and public administration students. However, the center directors and the sample of psychiatrists tended to agree with this item more than the other groups.

The third item (#13) from the HSI subscale dealt with whether accountability mechanisms were likely to enhance goal attainment. The sample of psychiatrists tended to embrace most strongly the position that accountability mechanisms were not likely to help with attaining goals, while the sample of community psychologists and the USC Human Service group most strongly rejected this item. The evaluators and directors of centers had essentially the same mean score and did not significantly differ from each other or from the other groups. In fact, only the psychiatrists and members of the New England Human Services Coalition significantly differed in their mean response to this attitude item on the Scheffé test.

Skills and Knowledge Levels

The directors and chief program evaluators were also asked to rate the skills/knowledge levels of the center's present evaluation staff and to check those skills that they each thought needed to be improved. Four levels of rating were available for the areas of skill or knowledge, ranging from "excellent" through "good" and "fair" to "poor." Table 4 presents a comparison of the mean ratings of directors and evaluators on the areas of skill or knowledge. Directors tended to judge most of these skill/knowledge areas as being at a somewhat higher level than did their chief evaluators. Significant differences existed, with the directors judging a number of areas of knowledge to be at a somewhat higher level than did their evaluators, including: accounting, administration/organization management, community organization and relations, computers, data analysis/statistics, evaluation instrument design/scaling, planning and policy development, and quality of care. No difference existed between the managers' and evaluators' ratings of interpersonal skills/group dynamics, and

TABLE 4
CMHC Director and Evaluator Ratings of Present Level of Skill/Knowledge of Evaluation Staff

                                            Director        Evaluator
Skill/Knowledge Area                        Mean    S.D.    Mean    S.D.    t-score   D.F.   Sig.
Accounting                                  2.53    .94     2.06    .90     5.75      185    <.01
Administration/Organization Management      3.09    .65     2.87    .72     3.69      231    <.01
Clinical Treatment Techniques               3.05    .86     2.93    .96     1.82      218    NS
Community Organization and Relations        2.92    .77     2.70    .82     3.34      224    <.01
Computers                                   2.84    .96     2.50    .95     5.36      211    <.01
Data Analysis/Statistics                    3.30    .82     3.04    .78     4.46      237    <.01
Interpersonal Skills/Group Dynamics         3.11    .79     3.11    .71     0         230    NS
Evaluation Instrument Design/Scaling        3.17    .78     2.85    .85     5.20      226    <.01
Planning and Policy Development             2.98    .71     2.82    .80     2.53      230    <.05
Quality of Care                             3.15    .75     2.87    .74     4.30      218    <.01
Research Study                              2.99    .85     2.92    .39     1.03      226    NS
Writing                                     3.23    .82     3.27    .74     -.54      225    NS


there was no significant difference between directors and evaluators regarding their ratings of clinical treatment techniques, research study, and writing.
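A brief sketch of the per-area comparison reported in Table 4 follows: for each skill/knowledge area, centers missing either rating are dropped and a paired t-test is run on the remaining director-evaluator pairs, which is why the degrees of freedom differ across areas. The 1-4 coding (poor to excellent) and the simulated ratings are assumptions for illustration.

```python
# Sketch of the per-area comparison in Table 4: keep only centers with both a
# director and an evaluator rating for the area, then run a paired t-test on
# the complete pairs (so the degrees of freedom vary by area). The 1-4 coding
# and the data are assumptions, not the survey's actual ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_centers = 264
areas = ["Accounting", "Computers", "Data Analysis/Statistics"]  # subset for illustration

for area in areas:
    director = rng.integers(1, 5, n_centers).astype(float)   # 1 = poor ... 4 = excellent
    evaluator = rng.integers(1, 5, n_centers).astype(float)
    # Simulate missing ratings, then keep only complete pairs.
    director[rng.random(n_centers) < 0.1] = np.nan
    evaluator[rng.random(n_centers) < 0.1] = np.nan
    complete = ~np.isnan(director) & ~np.isnan(evaluator)
    t_stat, p_value = stats.ttest_rel(director[complete], evaluator[complete])
    df = complete.sum() - 1
    print(f"{area}: t = {t_stat:.2f}, df = {df}, p = {p_value:.4f}")
```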

Table 5 presents those areas of skill and knowledge which were checked by directors and evaluators as needing improvement. In general, directors and evaluators were in agreement as to which areas most needed improvement. Accounting and computers received the most checks from both the directors and the evaluators. As might be expected, more evaluators than directors were concerned that evaluation instrument design and scaling needed improvement, as well as data analysis and statistics. Also, more of the evaluators recognized that they needed improvement in their knowledge of administration and organization management, while the directors checked community organization and relations somewhat more frequently than did the evaluators. Although a few evaluators and directors identified clinical treatment techniques, writing, and interpersonal skills and group dynamics as areas in which evaluators needed to improve, these received fewer checks from both groups.

TABLE 5
Areas of Skill and Knowledge Checked by Directors and Evaluators as Needing Improvement

Skill/Knowledge Areas                       Directors   Evaluators   Total
Accounting                                  34          39           73
Computers                                   32          41           73
Evaluation Instrument Design/Scaling        29          39           68
Planning and Policy Development             30          33           63
Data Analysis/Statistics                    24          31           61
Research Study                              28          27           55
Community Organization and Relations        30          23           53
Administration/Organization Management      20          32           52
Quality of Care                             17          27           44
Clinical Treatment Techniques               14          19           33
Writing                                     17          14           31
Interpersonal Skills/Group Dynamics         18          11           29

Examining the relations between skill ratings and areas identified as needing improvement is interesting. Accounting and computers, which were most often cited as needing improvement, also received among the lowest ratings from both directors and evaluators regarding the present level of evaluation staff knowledge. Planning and policy development, which also received relatively low ratings, was an area frequently cited as needing improvement. The evaluators had given themselves relatively low ratings on evaluation instrument design and scaling and also identified this as an area that needed improvement. While both the directors and evaluators rated the evaluation staff as having good skills in data analysis and statistics, this was also one of the areas frequently checked as needing improvement. In general, the evaluators seemed most concerned about improving in those areas directly related to collecting, storing, and analyzing information, including computers, accounting, evaluation instrument design/scaling, and data analysis and statistics. While concerned about evaluators improving their level of knowledge and skills in accounting and computers and


in evaluation instrument design and scaling, the directors more frequently checked planning and policy development and community organization and relations as skills on which the evaluation staff needed to improve.

SUMMARY AND CONCLUSIONS

In general, the results of this survey showed that managers and evaluators in community mental health centers tended to agree on many specific attitude dimensions regarding evaluation. However, there was an overall pattern for evaluators to have a somewhat more positive orientation toward evaluation than did their center directors. Comparing the CMHC directors and evaluators with professional criterion groups on three items drawn from a subscale of the Human Service Ideology scale revealed that the center directors shared the concern of a national sample of psychiatrists that accountability to funding sources actually may just mean more bureaucratic control. However, the center managers parted company with the psychiatrists on the other two items and agreed with their evaluators and the more human-service-oriented criterion groups on the importance of having the administration of a human service agency establish procedures for accountability and on the usefulness of such mechanisms for goal attainment. There was a tendency for the directors to rate their evaluation staff higher on skills and knowledge levels than did the chief evaluators themselves.

While it appeared that center directors and evaluators did not significantly differ regarding the importance of evaluation for center administration and the salience of responding to the needs of the whole person and communities, they did differ significantly regarding their attitudes toward the involvement of consumers in evaluation and regarding the importance of being accountable to the users of services as well as to funding agencies. With regard to both of these latter dimensions, the evaluators indicated significantly stronger commitment.

Windle (1979) has emphasized the role of the citizen in the management process of centers in general and in the management function of evaluation in particular. One way that managers have acted to take consumers into account is through identifying public needs and programs that meet these needs. Needs assessments are very popular, and management does not seem to have much trouble with this requirement. Relevant to this was the finding that there was no significant difference between the ratings of administrators and evaluators on the needs dimension of the attitude measure.

Windle's view that further change is needed in health program management and that the citizen should have a more direct role in program management does not seem to be well supported by center directors. According to the data presented here, directors on the average were significantly less committed than their evaluators to being accountable to consumers and were also less committed to a role for the consumer in program evaluation.

However, despite their difference from the center evaluators on perceptions of the role of consumers in evaluation, the directors still had views remarkably similar to the evaluators' concerning most of the questions asked about program evaluation in this survey. At the beginning, I noted that the evaluation literature might lead one to expect managers and evaluators to disagree more. Perhaps there is a bias in the literature, since most of the comments on differences are made by highly trained professionals who work as evaluators outside the organization.

Other data from the survey about the center evaluators that was reported earlier (Baker, 1982) may help to shed some light on the issue of how attitudes and perceptions

Page 11: Manager and evaluator views of program evaluation

VIEWS OF PROGRAM EVALUATION 223

may be affected by being an internal evaluator. The majority of the CMHC evaluators did work within the organization in positions that were relatively low in the hierarchy, and in most cases they reported directly to the center director. Furthermore, the majority of the evaluators were the only evaluation staff employed by the center, and thus they had no "local" or internal organizational reference group of other evaluators available to support their views. As to whether they may be more "cosmopolitan" in their choice of external and professional reference groups, this appears to be unlikely, since the majority of the evaluators have not been exposed to extensive professional socialization in graduate school; the majority of the center evaluators do not have doctoral degrees but instead hold a bachelor's or a master's degree as their highest degree. What training they have had in program evaluation is as likely to have been received on the job as through formal graduate school training.

Without identification with an external professional reference group and in the absence of an internal reference group of fellow evaluators, the center evaluators may tend to do what is common in such organizational situations. Litterer (1965) described the pattern as follows: "People will frequently use their superiors as reference groups, and attempt to find out their attitudes, beliefs, and wants, then try to show that they are in conformance with or supportive of them" (p. 58).

REFERENCE NOTES

1. BAKER, F., & BAKER, A. P. Dimensions of human service ideology. Unpublished manuscript, 1981. (Available from Division of Community Psychiatry, State University of New York at Buffalo, 2211 Main Street, Building E, Buffalo, N.Y. 14214.)

2. KIRKHART, K. E. Program evaluation in community mental health centers. Unpublished doctoral dissertation, University of Michigan, 1979.

REFERENCES

BAKER, F. Program evaluators in community mental health centers: Results of a national survey. Journal of Community Psychology, 1982, 10, 151-156.

CHENAULT, J., & BURNFORD, F. (Eds.). Human services professional education. New York: McGraw-Hill, 1978.

COHEN, L. H. Factors affecting the utilization of mental health evaluation research findings. Professional Psychology, 1977, 8, 526-534.

DAVIS, H. R., & SALASIN, S. E. Facilitating the utilization of evaluation: A rocky road. In I. Davidoff, M. Guttentag, & J. Offut (Eds.), Evaluating community mental health services: Principles and practice (DHEW Publication No. ADM 77-46). Washington, D.C.: U.S. Government Printing Office, 1977.

HAGEDORN, H. J., BECK, K. J., NEUBERT, S. F., & WERLIN, S. H. A working manual of simple program evaluation techniques for community mental health centers (DHEW Publication No. ADM 76-404). Rockville, MD: National Institute of Mental Health, 1976.

LITTERER, J. A. The analysis of organizations. New York: Wiley, 1965.

SCHULBERG, H. C., & BAKER, F. Program evaluation models and the implementation of research findings. American Journal of Public Health, 1968, 58, 1248-1255.

TRICE, H. M., & ROMAN, P. M. Dilemmas of evaluation in community mental health organizations. In P. M. Roman & H. M. Trice (Eds.), Sociological perspectives on community mental health. Philadelphia: Davis, 1974.

WEISS, C. H. Evaluation research. Englewood Cliffs, N.J.: Prentice-Hall, 1972.

WINDLE, C. The citizen as part of the management process. In H. C. Schulberg & J. Jerrell (Eds.), The evaluators and management. Beverly Hills: Sage, 1979.