Psychiatric services for elderly people: Evaluating system performance



INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, VOL. 9: 259-272 (1994)

EDITORIAL

Psychiatric Services for Elderly People: Evaluating System Performance

Mental health services, in common with others, are under increasing pressure to demonstrate efficient use of funds, cost containment and equitable allocation of resources while maintaining high standards of care. This has resulted in an increasing drive to assess performance. Over the last two decades there has occurred a gradual shift in emphasis in the way in which health systems are evaluated. In the 1970s, the emphasis was very much upon the amount of resources (‘structure’ or ‘inputs’: see below) devoted to a system or to a programme within it. Such inputs were most frequently conceptualized in financial terms or in staffing levels, or in the availability of technology. Sometimes it was growth in these inputs that was valued; in the UK, for instance, successive governments sought to show that the National Health Service was more generously funded by themselves than by their rivals. Sometimes, growth was perceived as problematic and, as with the United States in the late 1970s, ‘cost containment’ became the policy problem. Although, as we shall see, the emphasis has somewhat changed in the 1980s and 1990s, measures of inputs to health systems remain at the core of international comparative studies (see, for instance, OECD, 1987), partly because inputs are easier to measure and compare.

The changed emphasis of the 1980s very much concerned itself with health system processes, frequently commandeering the term ‘quality’ to label its desiderata, which included adherence to established clinical procedures and the appropriate orchestration of multiprofessional skills in suitable care settings. Roughly contemporaneous with this was a growing interest in the productivity of health systems: counting processes (‘outputs’: see below) and assessing the efficiency with which they are produced (see, for instance, DHSS, 1985).

More recently, there has arisen an ostensible concern with ‘outcomes’: the effect of health systems on their users’ health status (Frater and Sheldon, 1993). In part, this reflects the persistence of economists in arguing that the only value of health services is their effect on health (an instrumentalist position, which, as we shall see below, is not axiomatic), but also the increasing influence of consumerism in health care along with the rhetoric of politicians eager to find new grounds upon which to challenge rising health expenditure. This concern with health care outcomes has also proved something of a spur to the research community; ‘health services research’ has become largely synonymous with assessing the outcomes of interventions and services (see, for instance, Advisory Group on Health Technology Assessment, 1992).

The purpose of this article is to review developments in the above topics as they affect, or might affect, the evaluation of psychiatric services for elderly people. It is not a literature review in the usual sense of a systematic exposition and comparison of writings or findings on a topic. There seems to be very little such literature on our topic specific to psychogeriatrics, and much of what does exist is largely descriptive of what is held to be good practice in particular programmes or institutions (Moak, 1990; Munich, 1990; Ginsberg, 1991) or provides narrowly normative guidance on quality assurance methods (Jessee and Morgan-Williams, 1987; Kamis-Gould et al., 1991), though see Wan (1986) for a discussion of different methodologies for the evaluation of long-term geriatric care.

This is perhaps not surprising, since mental health services are in general quite difficult to evaluate. Intentional interventions are rationally based, that is, underlying the activity is a framework which consists of views from theoretical, experimental and philosophical sources all within a social context (Adelman, 1986). One obvious reason for the difficulty in evaluating psychiatry is the large diversity in rational frameworks adopted by different practitioners, planners and cultures. The practice of psychiatry is theoretically heterogeneous, both in the ‘positive’ sense (different views about the causes of, and effects of, particular therapies on mental health problems) and in the normative sense (different views about the ethics and appropriateness of particular therapies). This heterogeneity inevitably spills over into the organization of services and the practice of other health professions. A problem with applying quality assessment, for example, in psychiatry using approaches which have developed primarily in medical and surgical contexts is the basic conceptualization of mental illness (Turner, 1989). If mental illness is a disease like any medical condition then traditional approaches may apply, but ‘if it is more a personality disturbance based on dysfunctional learned repertoires, or inadequate adjustment to societal living then the medical model is harder to apply’ (Turner, 1989). Thus, for example, among structural factors used to evaluate surgical/medical care, technical competence is seen as very important, while in mental illness compassion, insight and empathy may be as important. Process evaluation can be more difficult because of the apparently greater variability in accepted standards for psychiatric treatment than for medical/surgical care (Turner, 1989).

CCC 0885-6230/94/040259-14 © 1994 by John Wiley & Sons, Ltd.

There are other reasons for the difficulty in evaluation, too. It is not as easy to conceptualize the outcomes of psychiatric care as of (say) surgery. Treatment often extends to the whole family rather than just the immediate patient, and outcomes can be harder to define since they involve subjective feelings, for instance of self-worth. The difficulties are further compounded by the complexities of the therapeutic team approach and the importance of agencies other than health care institutions (Jessee and Morgan-Williams, 1987). In the specific case of psychogeriatric services, even further complications arise from the high incidence of comorbidity, often multiple (Moak, 1990).

This article therefore has three specific objectives. First, it clarifies some of the concepts and terminology of health system evaluation. Second, it draws on the published literature to exemplify the various concepts introduced earlier. Third, it discusses some of the implications of routinely evaluating a system rather than scientifically evaluating particular interventions or programmes. We conclude by arguing that the evaluation of psychiatric services for elderly persons is best achieved by the construction of relatively simple models from an array of complex knowledge.

A CONCEPTUAL FRAMEWORK

The seminal framework for evaluating health services was developed by Donabedian (see, for instance, Donabedian, 1980), and is represented in the central spine of Fig. 1. It represents an implicitly instrumental view of a health system: a kind of sausage machine into the top of which resources (‘inputs’; or ‘structure’ in Donabedian’s terms) are poured. These are employed in diagnostic and therapeutic ‘processes’ (which are commonly aggregated into ‘output’ measures, such as numbers of treatments or numbers of patients). This is technically known as a ‘production function’.

‘Outcome’ is the effect of the process on health status; this, of course, is a counterfactual statement, referring to the differential between health status without, and with, the process of intervention. Fig. 1 somewhat elaborates Donabedian’s outcome concept. ‘Efficacy’ or ‘technical effectiveness’ refers to the effect of a process when administered under ideal conditions. This is typically the subject matter of biomedical ‘effectiveness’ research, including controlled trials. ‘Individual effectiveness’ refers to the outcome(s) for average individual patients, and may well be weaker than technical effectiveness as a result of occurring in less than ideal conditions, for instance in the hands of the average professional practitioner. The ‘impact’ of a process refers to whether its individual outcomes are discernible in a population; there are several reasons why they may not be, including the effect of confounding variables, a multiplicity of which affect health status, and difficulties of access to diagnostic and treatment processes for those who need them.

From this basic framework, two key economists’ concepts can be illustrated. ‘Technical efficiency’ is the ratio of inputs to processes/outputs, and may be expressed in terms either of money inputs (eg cost per inpatient case) or physical resources (eg throughput per available bed). As the ratio falls, efficiency increases, and vice versa. (In strict economic jargon, this is the concept of X-efficiency: Leibenstein, 1966.) It should be noted that, on this definition, efficiency is not necessarily related to outcomes/effectiveness; it is possible to perform ineffective treatments efficiently, or even kill people efficiently. ‘Cost effectiveness’ is the ratio of financial inputs to units of some concept of outcome. Given that different health care interventions for different conditions may have widely differing expected outcomes, it can be difficult to find lowest common denominator comparisons between interventions for different diseases. ‘Cost-utility analysis’ is a special variant of cost-effectiveness analysis which aims to allow such comparisons, normally by the use of quality-adjusted life years (QALYs) as a common outcome measure. (For a summary account of the methodology, see Petrou and Renton, 1993; for guidance on their use, see Drummond et al., 1993; Gerard and Mooney, 1993.)

[Fig. 1, not reproduced here, links the terms: social costs; cost effectiveness; efficacy/technical effectiveness; individual effectiveness/outcomes; population impact; social benefits; cost benefits.]

Fig. 1. A structure of performance terms in medical care organizations. Source: Harrison (1992)
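The distinction between technical efficiency and cost-effectiveness drawn above can be made concrete with a small numerical sketch. All figures and programme names below are invented for illustration; Python is used purely as a calculator:

```python
# Illustrative sketch (all figures hypothetical): technical efficiency
# versus cost-effectiveness/cost-utility as defined in the text.

def cost_per_case(total_cost: float, cases: int) -> float:
    """Technical (X-)efficiency measure: money input per unit of output."""
    return total_cost / cases

def cost_per_qaly(total_cost: float, qalys_gained: float) -> float:
    """Cost-utility measure: money input per quality-adjusted life year."""
    return total_cost / qalys_gained

# Two hypothetical programmes treating different conditions.
# Programme A: cheap per case but yields little health gain per case.
a_cost, a_cases, a_qalys = 500_000.0, 1_000, 50.0
# Programme B: expensive per case but yields more health gain per case.
b_cost, b_cases, b_qalys = 500_000.0, 200, 125.0

print(cost_per_case(a_cost, a_cases))   # 500.0  -> A looks 'efficient'
print(cost_per_case(b_cost, b_cases))   # 2500.0
print(cost_per_qaly(a_cost, a_qalys))   # 10000.0
print(cost_per_qaly(b_cost, b_qalys))   # 4000.0 -> B looks 'cost-effective'
```

The sketch illustrates the point made above: on these figures, programme A is the more technically efficient (lower cost per case) while programme B is the more cost-effective (lower cost per QALY), so the two criteria can rank the same programmes in opposite orders.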

Such cost-effectiveness/utility approaches can be rather narrow in terms of their analysis since it is usual to include only costs to the health care organization and outcomes for the patient. A broader approach would be ‘benefit-cost analysis’, encompassing costs and benefits wherever they fall. Such analysis is difficult in practice, not least because of the problems in both knowing, and then expressing in monetary values, such benefits and costs.

[Fig. 2, not reproduced here, crosses the dimensions of ‘treatment’ (structure, processes, outcomes) with the groups to be compared (social class groups, ethnic groups, gender): equity implies equal treatment for those with equal needs, but an operational definition requires a statement of (a) what ‘treatment’ and (b) what groups are to be compared.]

Fig. 2. Definitions of equity

The concept missing from Fig. 1 is ‘equity’, whose generic definition is ‘equal treatment for those with equal needs’ (Evandrou et al., 1992). This, however, is insufficient to function as an operational definition, for which purpose two further matters need to be specified. One is the currency of ‘treatment’: this might be expressed as inputs (equal health spending), processes (an equal rate of access to, say, coronary artery bypass grafting) or outcome (equal standardized mortality ratios, for instance). The second requirement is to define the currency of ‘needs’: which groups are to be compared? We might be interested in social class equity, geographical equity, equity between ethnic groups, and so on. Since ‘treatment’ and ‘need’ are two distinct dimensions of analysis, and since an operational definition of ‘equity’ requires both, the number of potential operational definitions is rather large, as Fig. 2 shows.
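The combinatorial point can be sketched in a few lines: every pairing of a ‘treatment’ currency with a comparison group yields a distinct operational definition of equity. The entries below are illustrative examples only, not an exhaustive taxonomy:

```python
from itertools import product

# Hypothetical currencies of 'treatment' and comparison groups, following
# the two dimensions the text identifies; the entries are examples only.
treatment_currencies = [
    "inputs (health spending)",
    "processes (rate of access to a procedure)",
    "outcomes (standardized mortality ratio)",
]
comparison_groups = ["social class", "geography", "ethnic group", "gender"]

# Every pairing is a distinct operational definition of equity.
definitions = list(product(treatment_currencies, comparison_groups))
print(len(definitions))  # 12 candidate operational definitions
for currency, group in definitions[:2]:
    print(f"equal {currency} across {group} groups")
```

Even these short illustrative lists generate twelve candidate definitions; adding further currencies or group dimensions multiplies the count accordingly.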

As noted above, the implicit assumption of the ‘sausage machine’ model of a health system is that its objective is to change the health status of individuals and/or a population. Such an instrumental view implies that, as Donabedian notes, the only purpose in evaluating structure and process is that a more or less known causal relationship exists between them and outcomes. However, while outcomes are important, it is not difficult to show that they are not the sole basis of evaluating health systems. There are two arguments.

First, there are, in fact, elements of structure and process which are valued for their own sake, independently of any effect upon outcomes. It is not, for instance, difficult to agree that doctors should be polite to patients, or that hospital waiting areas should be adequately decorated, whether or not they affect patient compliance or attendance. Secondly, it is simply not self-evident that citizens only desire effective (ie outcome-related) health care. This second argument merits examination.

Patients seek care in order to be relieved of some actual or perceived, present or potential, ‘dis-ease’. The care itself is not directly of value; it is generally inconvenient, often painful or frightening. Evans has argued that:

As a thought experiment, one could ask a representative patient (or oneself) whether he/she would prefer to have . . . a condition perceived as requiring care, plus the best conceivable care for that condition, completely free of all . . . costs, or would prefer simply not to have the condition . . . care is not a ‘good’ in the usual sense, but a ‘bad’ or ‘regrettable’, made ‘necessary’ by the even more regrettable circumstances of ‘dis-ease’. It follows that patients want to receive effective health care, i.e. care [in respect of which] there is a reasonable expectation [of] a positive impact on their health! (Evans, 1990, pp. 118-9)

But this may oversimplify the situation. First, while it may be true that no-one would want a health care intervention known for certain to be ineffective in 100% of cases, interventions are often demanded because they carry a possibility, however remote, of effectiveness. Second, this denies the importance of processes in affecting the welfare of people. To confine the argument for a moment to publicly funded services, Goodin and Wilenski (1984) have pointed out that, if the purpose of government is to serve citizens, and if citizens are as interested in processes (and democracy is, of course, just such a process!) as in outcome, then there is no reason for policy-makers to scorn processes, whether or not they lead to desired outcomes. The argument can be extended to cover the contributors to any system of third party payment, whether or not publicly organized. This is of particular importance for mental health systems, and only partly because, as noted above, outcomes have often been difficult both to conceptualize and to demonstrate. In the light of the attitudes to mental illness taken by totalitarian governments, the very processes of mental health care can be seen as a mark of civilization. In other words, society may derive welfare gains from the very fact that its efforts are organized to provide services for people perceived to merit care.

Goal attainment models are increasingly dominating effectiveness research, in which programmes or systems are seen as effective to the extent that they achieve a given set of objectives. This assumes that important objectives and outcomes can be identified and measured appropriately. However, participants (management, clinicians, patients and public) may in practice be pursuing different and competing goals simultaneously. More recently a systems approach has been proposed as an alternative. This assumes that programmes are complex social systems with multiple and contradictory functions. Thus, to determine a programme’s effectiveness, the researcher should attempt to determine whether the system is internally consistent and coordinated and makes judicious use of its resources. He or she may thus examine a variety of measures, such as efficiency, job satisfaction, goal consensus, integration, etc, as criteria of effectiveness. This is because effectiveness can be regarded as a socially constructed phenomenon and is ‘essentially contested’ in that there are always grounds for debating the appropriateness of various criteria (Anspach, 1991).

INPUTS AND EFFICIENCY

As an evaluation measure, the use of gross inputs (of money, or staff time) to a health system is double-edged; depending upon one’s starting values, a high level of input may be either good or bad. Moreover, as Schulz et al. (1983) have shown, there is no necessary relationship between the cost and quality (as, in this case, perceived by staff) of inpatient psychiatric services, even when controlled for case mix. In practice, therefore, serious evaluation of health systems by input concepts has to proceed in one (or both) of two other ways: inputs either have to be related to some other concept or have to be examined in more specific, usually non-quantitative, terms of organizational structure. We discuss each of these approaches in turn.

The level and mix of inputs to a health institution are of obvious importance to its managers, who, whether operating in a public or a private context, need to ensure financial propriety and viability. Smith (1987) has suggested four important indicators of this type, applied to mental health services. These are the percentage of staff time spent face to face with clients (and therefore, in the US context, billable), the percentage of cancelled outpatient sessions, the average number of weekly treatments per client, and the average length of stay (inpatients) or total treatment (outpatients). Based on a review of US practice, Sorensen et al. (1987) have proposed a set of 25 key performance indicators for mental health organizations. These include a number of business measures (such as revenue per client, working capital ratios), a number of staff deployment measures (such as staff time allocated to different services, salary expenditure by professional grouping) and a number of efficiency measures (such as average units of service per staff day, average caseload and average cost per unit of service). On a more micro level, Lavender (1987a) has devised ‘model standards questionnaires’ to assess, inter alia, the management practices, physical environment and staff levels of wards in a psychiatric rehabilitation institution. The precise measures required will vary with


the context within which a particular mental health organization operates, but the divergence between Sorensen’s recommendations for the US system and the performance indicators devised for the UK National Health Service is not enormous (DHSS, 1985).
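As a rough illustration, indicators of the kind Smith (1987) suggests amount to simple ratios of routinely collected counts. The sketch below computes four such ratios; all the figures and the reporting period are invented:

```python
# Sketch of four input-type indicators of the kind suggested by
# Smith (1987) for mental health services. All data are hypothetical.

def pct(part: float, whole: float) -> float:
    """Express part as a percentage of whole."""
    return 100.0 * part / whole

staff_hours_total = 1_600.0       # hypothetical monthly staff hours
staff_hours_face_to_face = 560.0  # hours spent face to face with clients
sessions_scheduled = 400          # outpatient sessions scheduled
sessions_cancelled = 48           # of which cancelled
treatments_delivered = 900        # treatments over the period
active_clients = 300              # clients on the books
inpatient_days = 2_100            # occupied bed days
inpatient_discharges = 70         # discharges over the period

print(pct(staff_hours_face_to_face, staff_hours_total))  # 35.0 (% billable time)
print(pct(sessions_cancelled, sessions_scheduled))       # 12.0 (% cancellations)
print(treatments_delivered / active_clients)             # 3.0 treatments/client
print(inpatient_days / inpatient_discharges)             # 30.0 days average stay
```

Each indicator is only as good as the counts behind it; as the surrounding text notes, the appropriate set of measures depends on the context in which the organization operates.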

In such arrays of indicators, cost per unit of service (eg per inpatient case) is frequently regarded as the straightforward ‘bottom line’ measure of organizational performance. However, such measures are by no means straightforward. Zelman (1987) has observed three problem areas. First, standard definitions of both ‘cost’ and ‘unit of service’ are problematic; different technical approaches to cost can produce wildly different results, while the complexity of evaluation is increased by the way secondary causal factors variably interact with the primary ones. This requires adequate measures of case mix for adjustment of observational data (Newman and McGovern, 1987), such as clinical status as defined by diagnostic signs and symptoms, level of functioning, as well as age and sex, etc. The difficulties of case mix definition in devising meaningful units of service in the mental health field are well known (Treanor and Cotch, 1990). Second, there are further technical problems in transforming data through the definition of cost centres and the allocation of overheads (see also Beecham and Knapp, 1992). Third, it is not necessarily easy to collect good data from caregivers:

. . . management has two contradictory needs it is trying to accomplish: having accurate information on the one hand, and taking corrective action on the other hand. To the extent that there are negative consequences based on data which has been reported accurately, there may be a tendency toward inaccurate reporting. (Zelman, 1987, p. 205)

Of course, evaluations of this type are also the subject of politics. National Health Service performance indicators were a political expedient (Harrison, 1988), and indeed, increased political concerns with narrow definitions of efficiency can be at the expense of evaluations of quality (Jerrell, 1986; Rodriguez, 1989; Appleby et al., 1993).

Approaches to inputs which focus on organizational structure usually work on the assumption that they have some desirable impact on processes. As Kamis-Gould (1987) notes, the fact that an evaluator cannot normally sit in on therapeutic sessions leads to the use of organizational inputs, such as audit or utilization review procedures, as proxies for good care processes. It is far from clear whether such arrangements do in fact provide a good proxy; the evidence is mixed. For instance, Wilson (1989) has shown that utilization review in mental health services has led to the introduction of desirable practices such as discharge planning, while Hertzman (1984) has shown a link between quality assurance and actual quality.

In contrast, Zusman’s (1988) review suggests that there is no firm evidence that quality assurance improves quality, and some that it does not. Accreditation of professionals (Senior, 1989) and of institutions also shows only a weak relationship with the quality of care (McGurrin and Hadley, 1991), though it might serve to motivate the non-accredited to improve in the hope of securing accreditation (McGurrin and Hadley, 1991).

Schulz et al. (1983) showed a relationship between various, more general, characteristics of the management of mental health services and quality of care, findings since replicated in an international context (Schulz et al., 1990, 1993). For instance, high levels of staff participation in management and high levels of goal congruence between management and professionals seem to lead to high levels of priority for multidisciplinary team care, for deinstitutionalization and for quality, the latter also having the existence of peer review mechanisms as a contributor. (See also Gibson, 1989.) (The mention of deinstitutionalization here reminds us of a point made earlier: that we do not only value health care for its outcomes, but for its processes too.)

There is also a good deal of current interest in the effect upon professional behaviour, and therefore upon quality (and outcome), of such mechanisms as clinical guidelines or protocols. It is clear that the mere existence of such guidelines is largely insufficient to secure adherence to them (Lomas and Haynes, 1987). Compliance with guidelines is, inter alia, related to feedback (Lavender, 1987a; Harris et al., 1985) and the involvement of professionals in the process of producing the guidelines (Russell and Grimshaw, 1992). On the other hand, guidelines can be operationalized via prospective payment linked to utilization review.

The most general conclusion that can be reached about the use of organizational structures as input evaluation measures is that they are an important determinant both of the concepts of quality employed in a health care institution and of whether, and how, these concepts are actually pursued. Are there ‘cracks’, perhaps between health


and social agencies, in the system into which clients may ‘fall’ (Tausig, 1987)? Are quality comparisons made internally over time, or externally across space? Do they employ cause/effect knowledge? Are the criteria updated? How do they combine hard data with softer judgement (Clifford, 1987)?

PROCESS AND OUTPUT

As with input-oriented approaches to evaluation, process/output-oriented approaches fall into fairly distinct categories, though in the latter case three. The first is concerned with outputs, defined, as noted above, as aggregates of processes. Here, as also noted previously, the problem is largely one of how to conceptualize case mix (for a general review, see Hornbrook, 1982); it is not insignificant that attempts to develop diagnosis related groups (DRGs) in psychiatry have not been widely regarded as successful (Mitchell et al., 1987). Newman and McGovern (1987) have proposed an alternative simple output measure for mental health services, based on a combination of clinical status (including diagnostic signs and symptoms, patient’s age and level of functioning) and typical treatment strategy. However, the better methods require the use of medical record abstraction and, therefore, can be very expensive.

The second approach to process evaluation entails the monitoring of diagnosis and treatment so as ‘to know that staff are making correct decisions on behalf of clients’ (Sherman, 1987). This begs the question of what is meant by ‘correct’. At its simplest, this may be adherence to straightforward policies about how to route mental health clients through what are usually complex systems of care and support. Thus, using the example of a specific hospital’s adult outpatient access policies, Sherman (1987) proposed the use as a performance indicator of ‘percentage of clients correctly assigned to a program element’. A slightly more complex approach is provided by Lavender’s (1987b) model standards questionnaire, which includes enquiries about the use and review of particular client training activities in the context of a psychiatric rehabilitation institution, and Smith’s (1987) proposal for the ongoing measurement of clients’ levels of functioning.
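An indicator such as ‘percentage of clients correctly assigned to a program element’ presupposes an explicit routing policy against which actual assignments can be checked. The sketch below shows the mechanics; the routing rules, presenting problems and programme elements are all invented for illustration:

```python
# Sketch of an indicator of the kind Sherman (1987) proposed: the
# percentage of clients correctly assigned to a program element.
# The routing policy and client data below are hypothetical.

# Hypothetical routing policy: presenting problem -> intended element.
policy = {"dementia": "memory clinic", "depression": "day hospital"}

# (presenting problem, element actually assigned) for each client.
assignments = [
    ("dementia", "memory clinic"),
    ("dementia", "day hospital"),     # mis-routed under the policy
    ("depression", "day hospital"),
    ("depression", "day hospital"),
]

correct = sum(1 for problem, element in assignments
              if policy.get(problem) == element)
print(100.0 * correct / len(assignments))  # 75.0 (% correctly assigned)
```

The indicator is only as meaningful as the policy it is checked against, which is precisely the ‘what is meant by correct’ question raised in the text.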

Much more complex, but identical in principle, are clinical protocols, based on the notion that if there are well-established causal links between particular care processes and desired patient outcome, it is sufficient to concentrate monitoring efforts on processes on the assumption that this will lead to improved outcomes. Thus, many advocates of protocols imply that causal knowledge obtained from randomized controlled trials functions as a kind of gold standard for their construction (Cole, 1988). It must be recognized, however, that there are major obstacles to such an approach in the field of psychiatry, where, as we have seen, practitioners exhibit not only a wide range of theoretical assumptions about the causality of mental health problems and their alleviation (Lavender, 1987a) but also a range of normative positions about the desirability of particular therapies in themselves (Bachrach, 1991).

The third approach centres on client satisfaction with services, that is, mainly the acceptability of processes. This area is currently regarded as being of increased policy importance, perhaps because data are relatively cheaply and easily collectable. The increasing emphasis on patient-centred outcomes and consumerism has led to the routine use of patient satisfaction surveys in programme evaluations. However, these are often of questionable validity, usually indicating high base levels of satisfaction, and thus are easily abused (Lebow, 1987). They reflect the reluctance of consumers of health care to criticize providers and also the concentration on factors that professionals feel are important. Research using a wide range of possible measures, including long-term psychogeriatric inpatients in England, Australia and the USA, shows that interpersonal relationships with staff are considered to be a key factor in patients’ satisfaction. Factors such as improvements in autonomy for the patients and a greater say in the running of wards are more important than physical surroundings (Elzinga and Barlow, 1991; Elbeck and Fecteau, 1990).

The problem of comparability, however, implies the use of standardized, 'off-the-shelf' scales. Lebow (1987) recommends one of the most commonly used instruments, one of the variants of Attkisson's Client Satisfaction Questionnaire (Pascoe and Attkisson, 1983), but also gives references to more specific inpatient and outpatient questionnaires. In addition to the choice of instrument, there are also decisions about when and where to survey clients, since it is known that both response rates and satisfaction rates tend to be higher during treatment than afterwards (Lebow, 1987), though Greenley et al. (1985) have shown that inpatients do not seem to have significantly higher levels before discharge than afterwards. However, there is evidence that patients suffering from schizophrenia, manic depression, personality disorders and organic brain syndromes may be sufficiently cognitively incapacitated to produce random responses to satisfaction questionnaires.

Measures of process and output can be used to estimate technical efficiency. The measurement of efficiency has been the focus of a great deal of research in economics and operations research for several years and numerous methods have been proposed. Two of the most widely used approaches are stochastic frontier (SF) analysis and data envelopment analysis (DEA). These approaches estimate the extent to which multiple inputs have been converted into multiple output measures. Most relevant here are resources, activities and outcomes such as reduced service needs or increased patient functioning. The analyses then attempt to determine the factors which are related to the various notions of efficiency (see Fried et al., 1993 and Charnes et al., 1993 for reviews of these techniques in the health care sector and Schinnar et al., 1990 for an example in the mental health care sector). Kamis-Gould (1991) used this approach to evaluate 54 adult partial care mental health programmes. It was found that most of the variance in efficiency between programmes was 'explained' by organizational variables, that variation in effectiveness was accounted for by client characteristics, and that there was a relationship between efficiency and effectiveness. Yet widely comparable measures of output (and hence of efficiency) are, as we have seen, difficult to devise in mental health. In principle, however, case mix is vital to the understanding of differences in hospital costs.
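To make the idea concrete, DEA in its simplest constant-returns (CCR) multiplier form solves one small linear programme per unit evaluated, choosing the input and output weights most favourable to that unit. The sketch below is a generic illustration of the technique, not the specification used in any of the studies cited; the programmes and data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(inputs, outputs):
    """CCR (constant returns to scale) DEA efficiency scores, multiplier
    form: for each decision-making unit, maximize weighted outputs
    subject to weighted inputs = 1 and no unit scoring above 1."""
    X = np.asarray(inputs, dtype=float)   # shape (n_units, n_inputs)
    Y = np.asarray(outputs, dtype=float)  # shape (n_units, n_outputs)
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: output weights u (s of them), then input weights v (m).
        c = np.concatenate([-Y[o], np.zeros(m)])       # linprog minimizes, so negate
        A_ub = np.hstack([Y, -X])                      # u.y_j - v.x_j <= 0 for all j
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v.x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m), method="highs")
        scores.append(round(-res.fun, 6))
    return scores

# Three hypothetical programmes: one input (staff hours), one output (contacts).
print(dea_ccr_efficiency([[2], [4], [8]], [[2], [3], [4]]))  # [1.0, 0.75, 0.5]
```

With a single input and output the scores reduce to each unit's output/input ratio relative to the best performer; the value of the method lies in handling several incommensurable inputs and outputs at once.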

One issue that is not easily accounted for in analysing hospital efficiency is the quality of care. To some extent, this is a question of using case mix adjusters to control for differences in the complexity of cases treated. However, this does not really address the issue of quality. Methods for incorporating the quality of care into the analysis of hospital efficiency, such as the use of hospital-specific mortality rates, are not appropriate for psychiatric services. Qualitative approaches have been used to measure efficiency and also to provide face validity for the more quantitative measures. Qualitative approaches include peer assessment, self-perception and the measurement of efficient behaviours in terms of both administrative and clinical 'best practices'. This is based upon the assumption that, after adjusting for exogenous variables, expert opinion, self-perception and the practice of efficient administrative and clinical behaviour will be correlated with more traditional measures of efficiency. (See, for instance, Schulz et al., 1983.)

OUTCOME AND COST EFFECTIVENESS

The desired outcome of an episode of psychiatric care is an improvement in the client's health status which would not otherwise have occurred (Russell and Buckwalter, 1991a) or the amelioration of a deterioration which would otherwise have occurred (Moak, 1990). (For a general discussion, see Cochrane, 1972.) The high incidence of 'spontaneous' recovery or amelioration (at least in the short term) from mental health problems therefore makes the measurement of outcome potentially problematic. Indeed, it is generally difficult to avoid designing measures which make untested causal assumptions about psychiatric outcomes.

This is at its most obvious in proposals for the development of simple measures. Thus, Gurian and Chanowitz (1987) propose that outcomes of care given in a psychogeriatric nursing home can be measured by the numbers of patients actually transferred to (less intensive) community care and the numbers retained in the home (that is, not transferred to more intensive care). This assumes not only that changes in patient need are solely the result of the nursing home's effort, but also that all patients are given appropriate care and/or referral. Similarly, Jenkins' (1990) proposal that the outcomes of services for dementia can be assessed through ongoing surveys of prevalence in the community and numbers of persons admitted to inpatient or outpatient care would provide indicators that it would be difficult to interpret (Murphy, 1992).

The problem is not avoided by the more complex approach of measuring outcome by reference to changes in clients' scores on scales which measure various aspects of symptoms and functioning. We cannot discuss specific scales here; Smith (1987), Newman et al. (1987), Cheetham et al. (1992) and Jenkins (1992) all provide references for a number which may be applicable to the elderly mentally ill. Newman et al. argue that such outcome measures are most likely to be found in actual use where the language employed in them coincides with the language of clinical planning, supervision and management. Since measures which are used are more likely to be reliable and valid, they are more likely to be useful, and so used, and so on: a kind of tautology. (For the technical aspects of reliability and validity, see Russell and Buckwalter, 1991b.) Green and Gracely (1987) used decision analytic techniques, based on ideal outcome measure criteria devised by the National Institute of Mental Health, to identify the best performing instruments for evaluating services to the chronically mentally ill. Thus, for example, the New Jersey level of functioning scale, which describes patients' level of functioning from being totally dependent on others to needing no contact with mental health services, was identified as being particularly useful.

Table 1. Data sources and instruments for the national evaluation of the Robert Wood Johnson Foundation Program on Chronic Mental Illness

Study component        Data sources                      Instruments

Site-level study       Documents; site visits;           Assessing Local Service Systems
                       key-informant surveys             for Chronically Mentally Ill
                                                         Persons

Community care study   Management information system;    Community Care Client
                       client interviews; case           Questionnaire (baseline and
                       manager interviews                follow-up); Case Manager
                                                         Questionnaire

Housing studies
  Site level           Documents; site visits; housing
                       management information system
  Client level:
    Community Care     Client interviews                 Community Care Client
                                                         Questionnaire
    Section 8          Client interviews                 Section 8 Questionnaire
                                                         (identical to Community Care
                                                         Client Questionnaire follow-up)

Financing studies
  Site level           Site visits; finance reports;
                       budgets; management
                       information system
  Medicaid             State Medicaid plan; State
                       Medicaid files

Disability and vocational rehabilitation studies
  Site level           Site visits
  Client level         Social Security Administration
                       pilot study

Source: Goldman et al. (1991).

An example of a major evaluation of a system was that of a demonstration project in the US designed to create or strengthen local mental health authorities, in the belief that coordination of services would lead to the development of a comprehensive system of mental health and social welfare services which would improve the quality of life of people with severe and persistent mental illness (Goldman et al., 1991). The principal objective of this evaluation is to assess the impact of interventions associated with new or modified mental health service systems on the care of persons with chronic mental illness. It addresses whether strategies are realized and lead to intended impacts. In particular, using a quasi-experimental approach, it aims to assess whether a city-wide mental health authority can centralize administrative, clinical and fiscal responsibility, as shown by a coordinated system of care, and whether this impacts on continuity of care and on outcomes measured by a variety of patient-administered questionnaires and case manager reports. Examples of the measures used are shown in Table 1.

Outcomes, as measured by changes in scale scores, can be combined with cost data (see above) to produce cost-effectiveness measures; for instance, Newman and McGovern (1987) have shown how to use costs per unit of change in the Global Assessment Scale for this purpose (also see Rappaport, 1989).
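The arithmetic of such a cost-effectiveness measure is straightforward; the sketch below is a minimal illustration of cost per unit of scale change, with all figures invented rather than drawn from the studies cited.

```python
def cost_per_unit_change(programme_cost, score_before, score_after):
    """Cost-effectiveness expressed as cost per one-point improvement on
    an outcome scale such as the Global Assessment Scale. Names and
    figures here are illustrative, not taken from any cited study."""
    change = score_after - score_before
    if change <= 0:
        raise ValueError("no measured improvement: the ratio is undefined")
    return programme_cost / change

# Comparing two hypothetical programmes assessed on the same scale:
ratio_a = cost_per_unit_change(6000, 40, 55)  # 400.0 per scale point
ratio_b = cost_per_unit_change(4500, 42, 51)  # 500.0 per scale point
```

Note that the ratio is only meaningful when both programmes use the same scale on comparable clients, which is exactly the comparability problem discussed above.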

Both scale outcomes and quality of life outcomes are narrow in the sense that they may take account only of outcomes for the client. (Similarly, cost-effectiveness and cost-utility calculations usually include only organizational costs; for a review of the various approaches to measuring costs as part of evaluation see Dickey et al., 1986.) Broader calculations are, of course, highly desirable as a means of avoiding hidden external costs and hidden value judgements. For example, one of the issues as community-based care becomes the accepted best practice is the cost of deinstitutionalization. As Harris et al. (1990) have noted, the benefits to the patient of delayed institutionalization (for instance) might be disbenefits to the patient's informal carers. Measurement of direct costs does not fully capture societal costs. The technical answer to this problem is to undertake cost-benefit analysis.

One of the classic studies comparing community with hospital care which attempted to estimate the costs which fall outside of programme budgets was the Mendota Mental Health Institute experiment, where patients were randomly assigned to groups. Weisbrod et al. (1982) performed a cost-benefit analysis in which indirect costs were found to represent nearly 50% of the total costs associated with the interventions. However, despite the attempt to include an unusually wide range of tangible and intangible forms of benefits and costs, the results were equivocal and the method is quite impractical for the purposes of ongoing system evaluation. A further example of the same problem can be found in Knapp et al.'s evaluation of a demonstration study comparing care in the community with hospital care; client welfare was assessed along a number of dimensions in both settings, such as morale, life satisfaction, behaviour, choice, empowerment activities, social contacts and integration (Knapp and Beecham, 1990). This showed that quality of life in the community was no worse than in hospital, and at significantly reduced cost. The study also showed that costs may vary by subgroup and that higher community care costs were not only attributable to increased need but also associated with improved success at meeting these needs. However, because of the lack of comparability of the outcomes, no judgement can be made about whether these extra welfare benefits justify the extra expenditure.

In order to allow comparison across interventions and client groups, an alternative approach to measuring outcomes is to employ quality of life scales (see, for instance, Kind et al., 1982; European Group for Health Measurement and Quality of Life Assessment, 1991; Stoker et al., 1992). Such scales may encompass broader or narrower concepts of quality, and may be designed either for a specific patient group (Donaldson et al., 1988) or to function, through quality-adjusted life year (QALY) calculations, as a lowest common denominator outcome measure, allowing comparison with the outcomes of quite different treatments for quite different diseases. Such comparisons are normally made as cost per QALY (that is, cost-utility: see above) statements; Wilkinson et al. (1990) have made such calculations for a UK community-based psychiatric service.
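The arithmetic behind a cost per QALY statement is simple: years of life are weighted by a utility between 0 (dead) and 1 (full health), and the incremental cost of a service is divided by the incremental QALYs it yields. The figures below are invented purely to illustrate the calculation, not results from Wilkinson et al. (1990) or any other study cited.

```python
def qalys(utility_weight, years):
    """QALYs: years of life weighted by a utility on the 0 (dead) to
    1 (full health) scale."""
    return utility_weight * years

def incremental_cost_per_qaly(cost_new, cost_old, qalys_new, qalys_old):
    """Cost-utility comparison: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical figures: a community service yielding utility 0.7 over
# 5 years at a cost of 20000, against standard care yielding 0.6 at 12000.
icer = incremental_cost_per_qaly(20000, 12000, qalys(0.7, 5), qalys(0.6, 5))
# icer is 16000.0 per QALY gained
```

The 'lowest common denominator' property follows from the fact that the same utility scale is applied whatever the disease or treatment, which is also the source of the method's controversy.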

NEED AND EQUITY

As we noted above, 'need' for medical care may be expressed in terms of structure (resources), process or outcome. In the field of psychiatry, recent writing on the topic eschews the usual medical approach of 'diagnosis leads to treatment need' (Wing, 1990) in favour of effectively working backwards from desired outcome. Ideally, the need identification process would begin with:

a concept of social disablement that includes any substantial inability to perform up to personal expectation, or to the expectations of important others, and is associated with psychiatric disorder or impairment. (Wing et al., 1992, p. 3; cf. Doyal and Gough, 1991)

(This problem-based approach (Wing, 1990) would, in the case of psychogeriatrics, involve some overlap with services which aim to alleviate physical disablement.) The desired outcome would be the removal of the disability, and the means for achieving such removal are health care processes:

Need can be defined . . . in terms . . . of the model of treatment or other intervention required to meet it . . . If an individual is socially disabled, in association with a mental disorder for which an effective and acceptable form or model of care exists, either for amelioration or prevention, the individual is in need of that intervention. (Wing et al., 1992, p. 5).

Once such processes had been identified, the resource inputs (or structure), material, human and financial, necessary for their provision could be identified.

There are, however, at least three problems which require this classic health needs assessment approach to be modified in practice. The first and most obvious is (again) the matter of causality. Given the many factors associated with mental health problems, and the many agencies and services that would have to be involved, the specification of the appropriate services is not easy. And the existence of services does not mean that they are appropriately accessed by users or appropriately provided by professionals. This latter point can be a special problem for the elderly mentally ill; Moak (1990) has referred to what he terms 'therapeutic nihilism', in which it is assumed that the needs of this group are the same as for other adults with mental health problems.

The second problem with the health needs assessment model is that sources of direct information about social disablement due to mental health problems are unreliable, unless case registers have been developed and maintained. Murphy (1992) has pointed out that local surveys, even for an easily definable common condition such as dementia, are likely to be too small to yield useful confidence limits. It is often more sensible to use prevalence estimates from national surveys rather than undertake imprecise local work. In practice, where there is reason to believe that there is significant geographical variation, various proxies tend to be employed, especially social deprivation indicators (for a review see Jarman and Hirsch, 1992).

Thirdly, since it is unlikely that all unmet need can be met within available resources, it will be necessary to prioritize. Strictly speaking, the usual approach of simply ranking broad care groups (children, the elderly, etc.) will not suffice for this purpose. The groups used in the ranking must be either in the same 'currency' in which needs have been defined or capable of being linked back to those needs; for example, see Murphy (1992). Even then, there remains the problem of deciding how much need at each point in the ranking should be met. The usual approach is to allocate different amounts of resources (or incremental resources) in descending order. This will provide a degree of vertical equity, that is, between each group in the priority ordering. Horizontal equity, that is, between individuals in a group, will not be achieved unless all needs within a group are met to the same degree for each individual.
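The allocation logic just described can be sketched in a few lines; the group names, costs and budget below are hypothetical, introduced purely to illustrate the vertical/horizontal equity distinction, not taken from any cited source.

```python
def allocate(budget, groups):
    """Allocate a fixed budget across need groups in priority order
    (a degree of vertical equity), meeting every individual's need
    within a group to the same proportion (horizontal equity).
    `groups` is a list of (name, total_cost_of_meeting_need) pairs,
    ranked from highest to lowest priority."""
    remaining = budget
    allocation = {}
    for name, need_cost in groups:
        spend = min(remaining, need_cost)
        # The same proportion of need is met for each individual in the group.
        allocation[name] = {"spend": spend,
                            "proportion_met": spend / need_cost if need_cost else 1.0}
        remaining -= spend
    return allocation

# With 100 units of resource, the first-ranked group's needs are fully
# met and the second-ranked group's are half met.
print(allocate(100, [("dementia", 60), ("functional illness", 80)]))
```

Horizontal equity holds within each group by construction; vertical equity depends entirely on the defensibility of the ranking and of the decision to fund in strict descending order.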

CONCLUDING REMARKS

Despite our stated reservations about the view that health care interventions can only be justified with reference to their expected outcomes, we recognize that some evaluation based on outcomes is likely to be an essential component of psychiatric services for elderly people. This is what Russell and Buckwalter (1991b) refer to as 'summative' evaluation.

Evaluating the outcomes of a system on an ongoing basis is different from evaluating the effectiveness of an intervention in at least two important respects. One is that in system evaluation it is difficult to be confident that outcomes are not random or affected by a vast range of variables that cannot be controlled for. The second is that, unless ongoing evaluation is to be an impracticably expensive and complex task, it must rely on simple measures. Yet simple measures are frequently so crude as to be meaningless, and are just as frequently abused and otherwise misinterpreted. As Lebow and Newman (1987) assert, what are needed are simple measures that are not simple-minded.

Both these characteristics of system evaluation point, for the purposes of summative evaluation, in the direction of clinical guidelines. Of course, if such an approach is not to be simple-minded, it must rely on established research evidence about causal relationships between process and outcome. Where such knowledge exists, evaluation might be based upon adherence to effective processes, presumably defined by protocols or similar approaches (Sheldon and Borowitz, 1993). The rapid development of integrated information technology may well facilitate this kind of development. Such an approach on its own will not, of course, ensure equity, and would therefore have to be accompanied by a comparison between health care needs and the services provided.

But summative evaluation is not enough on its own. As we have noted, health care processes, perhaps nowhere more than in the fields of mental health and the elderly, are often valued for their own sake, as expressions of social concern and solidarity. Moreover, the theoretical diversity in psychiatry may make summative evaluation more difficult than in some other sectors of medicine. Thus formative evaluation is necessary. (This is not to say that there are no relationships between formative and summative evaluations; for instance, the quality of the therapeutic relationship as perceived by the patient may well affect compliance with treatment, in turn affecting outcome: Ley, 1988.)


The managerial angle on formative evaluation is essentially concerned with efficiency: how much do processes cost, and can the cost be reduced (hopefully) without reducing quality? But there is a consumer angle too: how acceptable are processes? We have seen that patient satisfaction data need careful interpretation if they are not to produce simple-minded conclusions. And patient satisfaction data tell us nothing about public opinion more generally. There is no single, unequivocally 'correct' approach to the evaluation of psychogeriatric services. The various approaches that we have discussed may conflict, or trade off, with each other (most notoriously, efficiency and equity), and, after all, to 'evaluate' means to place a value upon something. It is a subjective process.

We close by observing that there is a second sense, additional to that used above, in which evaluation measures need to avoid being simple-minded. As well as measuring (or proxying) what they are intended to measure, they should so far as possible avoid the creation of 'perverse incentives': cost-shifting and other forms of 'gaming' such as 'DRG creep' are the product of this second form of simple-mindedness.

STEPHEN HARRISON
Nuffield Institute for Health
University of Leeds

TREVOR A. SHELDON
Centre for Health Economics
University of York

REFERENCES

Adelman, H. S. (1986) Intervention theory and evaluating efficacy. Evaluation Rev. 10, 65-83.

Advisory Group on Health Technology Assessment (1992) Assessing the Effects of Health Technologies: Principles, Practice, Proposals. Department of Health, London.

Anspach, R. R. (1991) Everyday methods for assessing effectiveness. Soc. Prob. 38, 1-19.

Appleby, J., Sheldon, T. A. and Clarke, A. (1993) An effective efficiency index. Health Serv. J. 1993, 22-24.

Bachrach, L. L. (1991) Planning high quality services. Hosp. Commun. Psychiat. 42, 268-269.

Beecham, J. and Knapp, M. (1992) Costing psychiatric interventions. In Measuring Mental Health Needs (G. Thornicroft, C. R. Brewin and J. Wing, Eds). Gaskell, London.

Charnes, A., Cooper, W. W., Lewin, A. Y. and Seiford, L. M. (1993) Data Envelopment Analysis: Theory, Methodology and Applications. Kluwer Academic, Boston.

Cheetham, J., Fuller, R., McIvor, G. and Perth, A. (1992) Evaluating Social Work Effectiveness. Open University Press, Buckingham.

Clifford, D. L. (1987) A consideration of simple measures and organisational structure. Evaluation Prog. Plan. 10, 231-237.

Cochrane, A. L. (1972) Effectiveness and Efficiency: Random Reflections on Health Services. Nuffield Provincial Hospitals Trust, London.

Cole, M. G. (1988) Evaluation of psychogeriatric services. Can. J. Psychiat. 33, 57-58.

Department of Health and Social Security (1985) Performance Indicators for the NHS: Computer User Manual. DHSS, London.

Dickey, B., Canon, N. and McGuire, T. (1986) Mental health cost studies: Some observations on methodology. Admin. Ment. Health 13, 189-201.

Donabedian, A. (1980) The Definition of Quality and Approaches to its Assessment (Vol. 1 of 'Explorations in Quality Assessment and Monitoring'). Health Administration Press, Ann Arbor, Michigan.

Donaldson, C., Atkinson, A., Bond, J. and Wright, K. (1988) Should QALYs be programme-specific? J. Health Econ. 14, 229-256.

Doyal, L. and Gough, I. (1991) A Theory of Human Need. Macmillan, Basingstoke.

Drummond, M., Torrance, G. and Mason, J. (1993) Cost-effectiveness league tables: More harm than good? Soc. Sci. Med. 31, 33-40.

Elbeck, M. and Fecteau, G. (1990) Improving the validity of measures of patient satisfaction with psychiatric care and treatment. Hosp. Commun. Psychiat. 41, 998-1001.

Elzinga, R. H. and Barlow, J. (1991) Patient satisfaction among the residential population of a psychiatric hospital. Int. J. Soc. Psychiat. 37, 24-43.

European Group for Health Measurement and Quality of Life Assessment (1991) Cross-cultural adaptation of health measures. Health Pol. 19, 33-42.

Evandrou, M., Falkingham, J., Le Grand, J. and Winter, D. (1992) Equity in health and social care. J. Soc. Pol. 21, 489-523.

Evans, R. G. (1990) The dog in the night-time: Medical practice variations and health policy. In The Challenge of Medical Practice Variations (T. F. Anderson and G. Mooney, Eds). Macmillan, Basingstoke.

Frater, A. and Sheldon, T. A. (1993) The outcomes movement in the USA and UK. In Purchasing and Providing Cost-Effective Health Care (M. F. Drummond and A. K. Maynard, Eds). Churchill Livingstone, Edinburgh.

Fried, H. O., Lovell, C. A. K. and Schmidt, S. S. (1993) The Measurement of Productive Efficiency: Techniques and Applications. Oxford University Press, New York.

Gerard, K. and Mooney, G. (1993) QALY league tables: Handle with care. Health Econ. 2, 59-64.




Gibson, D. (1989) Requisites for excellence: Structure and process in delivering psychiatric care. Occupat. Ther. Ment. Health 9, 27-52.

Ginsberg, L. (1991) Administrative issues in the implementation and evaluation of community mental health programs. Admin. Pol. Ment. Health 18, 187-194.

Goldman, H. H., Lehman, A. F., Morrissey, J. P., Newman, S. J., Frank, R. G. and Steinwachs, D. M. (1990) Design for the national evaluation of the Robert Wood Johnson Foundation Program on Chronic Mental Illness. Hosp. Commun. Psychiat. 41, 1217-1221.

Goodin, R. E. and Wilenski, P. (1984) Beyond efficiency: The logical underpinnings of administrative principles. Pub. Admin. Rev. 6, 512-517.

Greenley, J. R., Schulz, R., Nam, S. H. and Peterson, R. W. (1985) Patient satisfaction with psychiatric inpatient care: Issues in measurement and application. Res. Commun. Ment. Health 5, 303-319.

Gurian, B. and Chanowitz, B. (1987) An empirical evaluation of a geropsychiatric nursing home. Gerontologist 27, 766-772.

Hadley, T. R. and McGurrin, M. C. (1988) Accreditation, certification, and the quality of care in state hospitals. Hosp. Commun. Psychiat. 39, 739-742.

Harris, C. M., Fry, J., Jarman, B. and Woodman, E. (1985) Prescribing: A case for prolonged treatment. J. Roy. Coll. Gen. Practitioners 35, 286-287.

Harris, A. G., Marriott, J. A. S. and Robertson, J. (1990) Issues in the evaluation of a community psychogeriatric service. Can. J. Psychiat. 35, 215-222.

Harrison, S. (1988) Managing the National Health Service: Shifting the Frontier? Chapman and Hall, London.

Harrison, S. (1992) Management and doctors. In Management Training for Psychiatrists (D. Bhugra and A. Burns, Eds). Gaskell, London.

Hertzman, M. (1984) PSRO, quality review and mental health: A political and practical guide. Psychiat. Quart. 56, 113-129.

Hornbrook, M. C. (1982) Hospital casemix: Its definition, measurement and use. Med. Care Rev. 39, 1-43, 73-123.

Jarman, B. and Hirsch, S. (1992) Statistical models to predict district psychiatric morbidity. In Measuring Mental Health Needs (G. Thornicroft, C. R. Brewin and J. Wing, Eds). Gaskell, London.

Jenkins, R. (1990) Towards a system of outcome indicators for mental health care. Brit. J. Psychiat. 157, 500-514.

Jenkins, R. (1992) Health targets. In Measuring Mental Health Needs (G. Thornicroft, C. R. Brewin and J. Wing, Eds). Gaskell, London.

Jerrell, J. M. (1986) Evaluation in a down-loaded mental health system. Evaluation Prog. Plan. 9, 161-166.

Jessee, W. F. and Morgan-Williams, G. (1987) Systems for quality assurance in mental health services: A strategy for improvement. Admin. Ment. Health 15, 3-10.

Kamis-Gould, E. (1987) The New Jersey Performance Management System: A state system and uses of simple measures. Evaluation Prog. Plan. 10, 249-255.

Kamis-Gould, E. (1991) A case study in frontier production analysis: Assessing the efficiency and effectiveness of New Jersey's partial-care mental health programs. Evaluation Prog. Plan. 14, 385-390.

Kamis-Gould, E., Brame, J., Campbell, J., Pascall, L., Schlosser, L. and Bard, R. (1991) A functional model of quality assurance for psychiatric hospitals and corresponding staffing requirements. Evaluation Prog. Plan. 14, 147-155.

Kind, P., Rosser, R. and Williams, A. (1982) Valuation of quality of life: Some psychometric evidence. In The Value of Life and Safety (M. W. Jones-Lee, Ed.). North Holland, New York.

Knapp, M. and Beecham, J. (1990) Costing mental health services. Psychol. Med. 20, 893-908.

Lavender, A. (1987a) Improving the quality of care on psychiatric hospital rehabilitation wards: A controlled evaluation. Brit. J. Psychiat. 150, 476-481.

Lavender, A. (1987b) The measurement of the quality of care in psychiatric rehabilitation settings: Development of the model standards questionnaires. Behav. Psychother. 15, 201-214.

Lebow, J. L. (1987) Acceptability as a simple measure in mental health program evaluation. Evaluation Prog. Plan. 10, 191-285.

Lebow, J. and Newman, F. L. (1987) The utilisation of simple measures in mental health program evaluation. Evaluation Prog. Plan. 10, 189-190.

Leibenstein, H. (1966) Allocative efficiency vs. X-efficiency. Am. Econ. Rev. 56, 392-405.

Ley, P. (1988) Communicating with Patients: Improving Communication, Satisfaction and Compliance. Croom Helm, London.

Lomas, J. and Haynes, R. B. (1987) A taxonomy and critical review of tested strategies for the application of clinical practice recommendations: From 'official' to 'individual' clinical policy. Am. J. Prevent. Med. 4, 77-90.

McGurrin, M. C. and Hadley, T. R. (1991) Quality of care and accreditation status of state psychiatric hospitals. Hosp. Commun. Psychiat. 42, 1060-1061.

Mitchell, J. B., Dickey, B., Liptzin, B. and Sederer, L. I. (1987) Bringing psychiatric patients into the Medicare prospective payments system: Alternatives to DRGs. Am. J. Psychiat. 144, 610-615.

Moak, G. S. (1990) Improving quality in psychogeriatric treatment. Psychiatr. Clin. N. Am. 13, 99-111.

Munich, R. L. (1990) Linking quality assurance and quality of care. J. Ment. Health Admin. 17, 145-160.

Murphy, E. (1992) Setting priorities during the development of local psychiatric services. In Measuring Mental Health Needs (G. Thornicroft, C. R. Brewin and J. Wing, Eds). Gaskell, London.

Newman, F. L., Hunter, R. H. and Irving, D. (1987) Simple measures of progress and outcome in the evaluation of mental health services. Evaluation Prog. Plan. 10, 209-218.

Newman, F. L. and McGovern, M. P. (1987) Simple measures of case mix in mental health services. Evaluation Prog. Plan. 10, 197-200.

OECD (1987) Financing and Delivering Health Care. Organisation for Economic Co-operation and Development, Paris.

Office of Technology Assessment (1978) Assessing the Efficacy and Safety of Medical Technologies. Government Printing Office, Washington, DC.

Pascoe, G. C. and Attkisson, C. C. (1983) The evaluation ranking scale: A new methodology for assessing satisfaction. Evaluation Prog. Plan. 6, 335-341.

Petrou, S. and Renton, A. (1993) The QALY: A guide for the public health physician. Pub. Health 107, 321-336.

Rappaport, M. (1989) Cost-effectiveness index (CEI): A tool to help evaluate mental health programs. J. Ment. Health Admin. 16, 97-110.

Rodriguez, A. R. (1989) Evolutions in utilisation and quality management: A crisis for psychiatric services? Gen. Hosp. Psychiat. 11, 256-263.

Rosser, R. and Kind, P. (1978) A scale of valuation of states of illness: Is there a social consensus? Int. J. Epidemiol. 7, 347-357.

Russell, D. W. and Buckwalter, K. C. (1991a) Researching and evaluating model geriatric mental health programs, Part III: Measurement of outcomes. Arch. Psychiat. Nurs. V, 76-83.

Russell, D. W. and Buckwalter, K. C. (1991b) Researching and evaluating model geriatric mental health programs, Part I: Design of mental health evaluation studies. Arch. Psychiat. Nurs. V, 3-9.

Russell, I. and Grimshaw, J. (1992) The effectiveness of referral guidelines: A review of the methods and findings of published evaluations. In Hospital Referrals (M. Roland and A. Coulter, Eds). Oxford University Press, Oxford.

Schinnar, A. P., Kamis-Gould, E., Delvica, N. and Rothbard, A. B. (1990) Organisational determinants of efficiency and effectiveness in mental health care programs. Health Serv. Res. 25, 387.

Schulz, R., Girard, G. and Harrison, S. (1990) Management practices and priorities for mental health system performance: Evidence from England and West Germany. Int. J. Health Plan. Management 5, 135-146.

Schulz, R. I., Girard, C., Enikeev, I., Harrison, S. and Xiemin, M. (1993) Management and psychiatrists' job satisfaction: Evidence from England, Germany, USSR and China. J. Management Med. 7, 48-56.

Schulz, R. I., Greenley, J. R. and Peterson, R. W. (1983) Management, cost and quality of acute inpatient psychiatric services. Med. Care 21, 303-319.

Senior, N. (1989) Regulation and review of psychiatric services in the United States. Psychiat. Ann. 19, 415-420.

Sheldon, T. A. and Borowitz, M. (1993) Changing the measure of quality in the NHS: From purchasing activity to purchasing protocols. Qual. Health Care 2, 149-150.

Sherman, P. S. (1987) Simple quality assurance measures. Evaluation Prog. Plan. 10, 227-229.

Smith, M. E. (1987) A guide to the use of simple process measures. Evaluation Prog. Plan. 10, 219-225.

Sorensen, J. E., Zelman, W., Hanbery, G. W. and Kucic, A. R. (1987) Managing mental health organisations with 25 key performance indicators. Evaluation Prog. Plan. 10, 239-247.

Stocking, B. (1992) The introduction and costs of new technologies. In In the Best of Health? The Status and Future of Health Care in the UK (E. Beck et al., Eds). Chapman and Hall, London.

Stoker, M. J., Dunbar, G. C. and Beaumont, G. (1992) The SmithKline Beecham 'quality of life' scale: A validation and reliability study in patients with affective disorder. Qual. Life Res. 1, 385-396.

Tausig, M. (1987) Detecting 'cracks' in mental health service systems: Application of network analytic techniques. Am. J. Commun. Psychol. 15, 337-351.

Treanor, J. J. and Cotch, K. E. (1990) Staffing of adult psychiatric inpatient facilities. Hosp. Commun. Psychiat. 41, 545-549.

Turner, W. E. III (1989) Quality care comparisons in medical/surgical and psychiatric settings. Admin. Pol. Ment. Health 17, 79-90.

Wan, T. H. (1986) Evaluation research in long-term care. Res. Aging 8, 559-585.

Weisbrod, B. (1982) A guide to benefit-cost analysis as seen through a controlled experiment in treating the mentally ill. J. Health Politics Policy Law 7, 808-845.

Wilkinson, G., Croft-Jefferys, C., Krekorian, H., McLees, S. and Falloon, I. (1990) QALYs in psychiatric care? Psychiatr. Bull. 14, 582-585.

Wilson, P. A. (1989) Utilisation review of psychiatric care: Building a program that works. Psychiatr. Hosp. 19, 129-132.

Wing, J. K. (1990) Meeting the needs of people with psychiatric disorders. Soc. Psychiat. Psychiatr. Epidemiol. 25, 2-8.

Wing, J., Brewin, C. R. and Thornicroft, G. (1992) Defining mental health needs. In Measuring Mental Health Needs (G. Thornicroft, C. R. Brewin and J. Wing, Eds). Gaskell, London.

Zelman, W. N. (1987) Cost per unit of service. Evaluation Prog. Plan. 10, 201-207.

Zusman, J. (1988) Quality assurance in mental health care. Hosp. Commun. Psychiat. 39, 1286-1290.
