COMMUNITY HEALTH STUDIES VOLUME X, NUMBER 1, 1986

COMMENTARY: POLITICS AND PITFALLS IN EVALUATION

Alan Owen and Richard Mohr

Australian Community Health Association, 243 Cleveland Street, Redfern, NSW, 2016.

“Evaluation” means, simply, determining the value of something. But value cannot be determined without suitable criteria for judgment or without appropriate standards for measurement. This paper criticises the dominant view of evaluation, discusses an alternative view and considers the Community Health Accreditation and Standards Project1 as an example of the latter.

In spite of recently expressed pessimism about the general state of health care evaluation in Australia,2 a number of pressures are at work that currently lie outside the field of vision of the traditional evaluation culture. We believe that these will have an impact on evaluation practice and increase the relevance of particular styles of evaluation in the future.

The devolution of responsibility onto community-based services and activities is one important factor. As health institutions shorten their length of stay and increase turnover, the task of developing standards, maintaining quality and ensuring continuity in community care will become more important to administrators and practitioners. The Better Health Commission,3 even though its future plans are sure to be the subject of much debate, will place preventive health strategies more firmly on the evaluation agenda. The World Health Organisation’s Global Strategy Towards Health For All,4 also no doubt to be the subject of vigorous debate, will push the issues of community involvement, community development and primary health care at a local level into the health decision-making process.

Other pressures for a change of thinking on standards and evaluation are also developing outside the health portfolio with the creation of the new Federal Department of Community Services. In particular, the Home and Community Care Program with its emphasis on low-key basic support services will influence the direction of Community Health policy. The issues of boundaries between, and collaboration with, community health will mean more attention being paid, not only to the content of services, but also to the philosophies and assumptions underlying the activities as they work together at the local level. Efforts at evaluation and the determination of standards will have to reflect some sensitivity to these new developments.

The underlying theme in much of this recent activity is the growing requirement that evaluation take place more in collaboration with, and less remote from, the users and providers of community-based care. It is important that the criteria we use and the standards we measure against be developed with substantial contributions from the people doing the work and, ideally, with the service consumers.

Beyond an assumption about the inevitability of certain pressures for change, what is the point of evaluating community health services now? One reason is the continual need to revise our understanding of what we do. The Australian Community Health Association also believes that, in addition, standards and evaluation can consolidate the considerable gains made in the last ten years at a time when the organisational structures around services are in a state of change. A formal process is needed to ensure that gains can be maintained and valuable experience is not lost to the health system.

Ideally, evaluation is a way of assessing the extent to which goals have been achieved, and the results of evaluation give formal recognition to the work done and clearer directions for future endeavours. Before looking at some of the theory behind evaluation, the possible reasons for wanting to evaluate, review, or assess standards in community health should be briefly considered.5

Evaluation may be to justify institutional existence and continued funding, or to preempt criticism, or it may be to provide feedback to workers and hence to improve the quality of the services and activities of community health. Conversely, why might community health not want to be evaluated or reviewed? Maybe it is not fair that community health always has to justify itself; perhaps it is premature to review activities that have been unstably and unsympathetically administered, have a high turnover of staff, little training backup and constantly changing sources of funds.

Processes of evaluation themselves can be used as management tools, to bring services into line with administrative goals rather than for service development - a justification for cuts or so-called “rationalisation”.

Talk about standards and goals may interfere with the way service workers and professionals “always do things”, creating anxiety about change at a time when instability is already in over-supply. Reassessing work can be time-consuming and a nuisance when there is pressure for more and more of the same direct service work to be done.

To make an informed and justifiable assessment of whether or how community health should be assessed or evaluated, we may need to look in a more detached way at the methods and the types of evaluation to be proposed. We suggest here some principles for judging the appropriateness of any evaluation.

One of the major functions of evaluation is to provide some valid analysis of day-to-day practice and this must depend on gathering facts. Of course the types of facts that are considered important reflect varied points of view. A community nurse’s perspective will differ from a social worker’s or a hospital Chief Executive Officer’s. Often only the things that are easily countable are considered significant, so they are given more weight when decisions are made. The use of statistical analyses to silence discussion rather than elucidate issues makes it more difficult to debate the basic assumptions and choice of measures, since only the “results” count.

As well as disagreement on what constitutes a relevant fact, there may be differing views of the purpose of evaluation. Community health workers may wish to see the services evaluated in order to defend them against criticism or to argue for more resources. The local community or consumers of a service may wish to ensure that it becomes more relevant to their own needs; some community health workers may share this goal. Administrators or politicians may wish to justify cut-backs or to make the services fit into their accounting systems. Professionals outside community health, whether other practitioners or academic researchers, will again have their own interests in evaluating community services. This may be, for instance, to enhance their own reputations or to promote their own way of operating.

Furler has analysed the issues in evaluation clearly,6 and Chalmers, in a paper in Community Health Studies7 as well as at a recent ANZSERCH meeting at Sydney University, has made a similar point in discussing the scientific method in relation to epidemiology and community health. He observes that the traditional, self-consciously “scientific” method of evaluation often does not ask critical questions, but, rather, simply follows the well-established theories of the day. Any novel ‘data’ become accessible, Chalmers says, only when there is a theory capable of provoking the right questions.

Wadsworth has similarly criticised the formalistic approach to research in community health and related areas and proposes some criteria for assessing evaluation methods. “The researcher,” she writes of the predominant approach, “in a largely one-way process, takes from the passive researched, in what has aptly been referred to ... as a ‘data raid’.” Researchers, through their use of technical jargon and highly specialised research techniques, then appropriate that data and render it inaccessible to the ‘researched’. So the “creations of a participant’s own activity transform into separate, alien ‘things’ - reifications beyond control”. The evaluation material then takes on a life of its own, and becomes something “to which adjustment or resigned submission must be made”.8

From this perspective it is clear that the business of evaluation cannot be divorced from either its wider purpose or its local context. This implies that there will be a number of perspectives from which we can approach evaluation, and a number of different purposes for which it may be done. As we have suggested above, the viewpoint or purpose will profoundly influence the type of evaluation undertaken. It follows that purpose has to be agreed explicitly at the start of any exercise.

This is not to say that all evaluation is done to serve the interests of the evaluators, or that, even when it is primarily for that purpose, it is necessarily useless and self-serving. The point we wish to emphasise is that the first issue to be aware of is the motivation and the interests of those who might do the evaluating.

A recent paper by Hawe dealt with two common failings of evaluation, namely trying to evaluate something that does not exist, and evaluating something which will be of no interest to decision-makers.9 An example of the first type was an evaluation of community health services in Brisbane by Najman and others.10 They found that community health services had made little impact and met remarkably few of their goals but, since the program was under-resourced and very new, it had hardly had a chance to make the impact which was expected of it.

In a submission to an evaluating team set up by the New South Wales Health Commission in 1982, the New South Wales Community Health Association argued that attempting to evaluate community health was also, at that stage, to make a mistake of this type.11 The community health program had never been implemented with clearly stated and agreed upon goals and, at that time, was operating with staffing levels 27 per cent below those originally intended. To focus an evaluation on outcome measures would have carried the assumption that the process of implementation involved was unproblematic. Accordingly, the submission also pointed out the failure of existing management structures in New South Wales and argued that the program would need to be adequately resourced and properly managed before any meaningful evaluation could be carried out.

The second type of error noted by Hawe, evaluation which is irrelevant to decision-makers, is common in the academic research tradition, which focusses more on manageable hypotheses than on relevance or new policy directions. In addition, academic evaluation sometimes has trouble being relevant to the concerns of other participants in community health, such as the workers or the client group.

To overcome these problems it is important that community health workers and consumers should be aware of and involved in monitoring services. It is quite feasible for these groups to initiate the process themselves. If, however, the evaluation is initiated from outside the community health service, then the workers and consumers should be aware of what is going on, should agree to the evaluation taking place and should be closely involved in determining what is to be evaluated and in what terms. That is to say, they should agree that the program is ready to be evaluated in the terms proposed and should agree on the stated goals and the criteria by which it is to be evaluated. Above all, the staff and consumers should always have access to the information collected by the evaluators and should have ample opportunity to understand, comment and act on the findings.

This raises the issue that is perhaps the most important of all - by what criteria will community health be evaluated? Cost-benefit calculations, head counts or other measures that equate to hospital-orientated work are the most common yardsticks. Many other considerations determine criteria - keeping off other people’s turf, protecting the interests of the staff or covering administrative bungling by deliberately asking the wrong questions. These are all likely to be ingredients of any evaluation, no matter how carefully implemented it may be.

These considerations introduce the question of standards: these are the benchmarks by which performance is measured, and are the most important and difficult component to work out in the process of review and accreditation. Thinking clearly about standards forces us to examine our assumptions: do we want to encourage ‘more of the same’, or something different altogether? Is ‘more’ equal to ‘better’? What does ‘better’ mean?

Apart from the technical side of evaluation, much attention should be directed at making sense of the proposed goals and indicators of progress in the world at large. The first step is to clarify the desired goals; then those goals have to be given some operational expression. Only after that is agreed on can we start to measure performance. The first, and most important, step in this process is to decide the purpose of a program.

A recent popular work of fiction, The Hitchhiker’s Guide to the Galaxy,12 had important things to say about asking the right questions. A massive computer, Deep Thought, had been programmed to calculate the answer to a question no one was sure about - “the answer to life, the universe and everything”. It turned out that the answer was “42”, but of course that answer was useless unless the question could be spelt out more clearly.

If evaluation, then, is the art of asking the right questions, the Community Health Accreditation and Standards Project (CHASP) is a form of evaluation that attempts to do just that. It is devoted entirely to clarifying broad goals (expressed as the principles) and operational objectives (expressed as the standards). These then become a series of questions with quite specific answers which are applied to the community health service being reviewed. Because CHASP uses peer review, there is not a complete distinction between subject and observer. The participants are at once reviewers and reviewed, and the exercise is focussed on the centre as a whole, not just on what the boss wants to know or what the workers are good at doing.

It is important to remember, whether we are evaluating or being evaluated, that the “subjects” of research have not only a right to understand and participate, but that the research or evaluation cannot be successful without the contribution of its subjects. Evaluation is not an end in itself; it should assist us to understand and improve our own activities.

Information on evaluation should also be available to those for whom a service is intended so that they too may be able to participate in adding their own style of evaluation, whether this is “voting with their feet”, organising a local health consumers’ association, or whatever other action they may like to take.

In deciding whether, or how, we wish to be involved in assessing our own activities, it will be most useful to take account of those factors that we have discussed here. In summary form they can be stated as follows:

First, it is essential not to be mystified by quantitative methods that may take on a life of their own. The process has to be understandable.

Second, it is necessary to be aware of the social and political context of the evaluation, the motivation of those requesting the review and who they are, and the external pressures for evaluation (for example, what is likely to happen if we don’t undertake a review?).

Third, as workers we are being looked at in some way whether we like it or not. Records of occasions of service, or client registrations, time and motion studies or budgetary allocations all indicate something about our activities. Often we have little control over how these are used.

The fourth point is that community health services must be reviewed according to the real goals and values of the program, not according to quite extraneous factors chosen simply because they are measurable. It is essential to agree on goals and criteria.

Fifth, the program must be ready for evaluation in the form proposed: that is, it must have had a chance to achieve what is expected of it.

Finally, the evaluation must provide useful and relevant information to the managers, workers and consumers of a community health service.

Any assessment of CHASP must include a description of it as it has been experienced at the level of health centres, in each of the three states in which it has been trialled. The Australian Community Health Association has drafted an “options paper” on the future for accreditation of community health and has called for responses from all levels of the health system.13 After raising the issues in this way the Association will make proposals for future directions in sensitive evaluation and quality control.

In their own separate ways both the Australian Community Health Association and the national CHASP workers will be looking to the comments and experience of workers in the field to help guide the continuing development of this project. In this way we hope to see a path traced through the complicated politics of the health system, which will avoid the pitfalls of an inappropriate method of evaluation being applied to the task of further developing community services.

Acknowledgements

This paper has been prepared from material originally presented at the South Australian Community Health Association Conference on September 15th, 1984,14 and from the introductory paper at a seminar at Westmead Hospital on December 11th, 1984 on the Role of Standards and Quality Assurance in Community Health. The assistance of Denise Fry, Lesley King and Penny Hawe, and the work done in the national Community Health Accreditation and Standards Project, have made this paper possible.

References

1. See School of Public Health and Tropical Medicine: National Community Health Accreditation and Standards Project: Final Report, 1985; and A Manual of Standards for Community Health, Canberra: AGPS, 1985.

2. Dewdney J. Report on the NH & MRC Workshop on Resources for Health Care Evaluation, Canberra, 5-6 February, 1985. ANZSERCH/APHA Annual Conference, Canberra, August, 1985.

3. McMichael A. Better Health Commission, paper presented at ANZSERCH/APHA Conference, Canberra, August 5, 1985.

4. See World Health Organisation. Health For All Series, Nos 1 to 8, 1978-1982, and Research for the reorientation of national health systems, Technical Report Series No. 694, Geneva, 1983.

5. See also Legge D. Quality assurance: institutional and political context, IX International Health Records Conference, Auckland, May 1984.

6. Furler E. Against hegemony in health care service evaluation. Community Health Stud 1979; 3,1:32-41.

7. Chalmers A. Epidemiology and the scientific method. Community Health Stud 1980; 4,1:36-40.

8. Wadsworth Y. The politics of social research: a social research strategy for the community health, education and welfare movement. Aust J Soc Issues 1982; 17,3:232-246.

9. Hawe P. Evaluability assessment: getting your program ready for evaluation. ANZSERCH/APHA (NSW) Conference, Current Issues in Health Promotion, Sydney: April 1984.

10. Najman JM, Jones J, Gibson D, et al. The impact of health centres in Brisbane on some community health indicators. Community Health Stud 1981; 5,1:11-21.

11. Community Health Association of New South Wales (1982). Submission to Interim Evaluation Team on Community Health Services. Unpublished.

12. Adams D. The Hitchhiker's Guide to the Galaxy. London: Pan Books, 1979.

13. Australian Community Health Association. Quality Assurance in Community Health: options for an auspice for the accreditation of community health. Sydney, 1985. Available from the ACHA on request.

14. Mohr R. Evaluating the evaluators: a plea for good sense before scientific method. South Australian Community Health Association Conference, Adelaide, September 1984.

