
The Asia Business Forum, Kuala Lumpur, August 1996

Focussing on Feedback Dynamics:
Designing Valid Performance Indicators for the Public Sector

Keith T Linard
Senior Lecturer, School of Civil Engineering
University College (University of New South Wales), Australian Defence Force Academy
Keithlinard#@#yahoo.co.uk (Remove hashes to email)

    SUMMARY

Program performance indicators in both public and private sectors are usually selected on the basis of individual intuition or group consensus. The essential purpose of performance indicators is to modify behaviour through the performance feedback (usually after some delay) to the manager. However, the human mind is demonstrably poor at assessing the consequences of feedback, especially where delay is involved. Further, if a combination of indicators is used it is nigh impossible to predict the outcome without sophisticated analysis. Consequently many performance indicators lead to counter-intuitive results ... the unexpected consequences of the introduction of performance-based pay in the Australian federal bureaucracy (leading to a drop in morale and efficiency) is but one example.

Systems thinking provides a rigorous basis for the qualitative testing of the effects of putative performance indicators in complex environments such as health or social security. Recent developments in graphically based system dynamics software enable more quantitative testing.

The paper examines the development of performance monitoring in the federal public sector over the past decade and shows how the system dynamics paradigm can provide a more confident pathway towards future development of performance indicators.

Keywords: Evaluation; performance indicators; performance monitoring; system dynamics; learning organisation; management simulation; decision making.

Keith Linard, as Chief Finance Officer, Australian Department of Finance, was responsible for the Machinery of Government Section and later the Financial Management Improvement Section during the 1983-88 'reform' of the Australian Federal Public Service. He currently runs the postgraduate system dynamics program at the Australian Defence Force Academy and co-directs the postgraduate program in project management.

    Brainstorming Performance Indicators

Then a voice, shrill and high, rent the shuddering sky,
And they knew that some danger was near;
The Beaver turned pale to the tip of its tail,
And even the Butcher felt queer.

''Tis the voice of the Jubjub!' he suddenly cried.
(This man, that they used to call 'Dunce.')
'As the Bellman would tell you,' he added with pride,
'I have uttered that sentiment once.

''Tis the voice of the Jubjub! Keep count, I entreat.
You will find I have told it you twice.
'Tis the voice of the Jubjub! The proof is complete,
If only I've stated it thrice.'

Lewis Carroll, The Hunting of the Snark.


    INTRODUCTION

This paper had its genesis in 1984, at the start of the Australian Government's Financial Management Improvement Program (FMIP) when, as head of the FMI Section in the Department of Finance, I wrote Evaluating Government Programs.[1] In that book I set out a structured logical framework for planning and implementing program evaluations. The centrepiece of the methodology was the program logic framework, adapted from the USAID LogFrame approach.[2] (LogFrame remains an important planning and evaluation tool for many government aid agencies, including Australia's international aid bureau.)

Program logic analysis is defined as the systematic study of the presumed relationships between political & regulatory environment, program resource inputs, ongoing program processes / activities, short term outputs, longer term results, and program objectives.

    A flawed paradigm

There is nothing problematic in this definition. However, the methodology and the training course materials designed to back it up[3] implied a simple linear causality to program logic, as depicted in Figure 1. This assumption is fallacious in all but the most simplistic cases.

The methodology focuses on challenging the presumptions, implicit or explicit, between program resourcing levels and implementation activities and the achievement of results. Accordingly, the performance indicators called for by this paradigm are focused on testing these presumptions. With one minor exception, the approach did not see as important the possibility that program activities could bring about significant changes to the program environment.[4] Herein, as we shall see, is a fundamental flaw in the paradigm.

[1] Department of Finance. Evaluating Government Programs - A Handbook. AGPS, Canberra, 1987. pp. 13-15.
[2] Linard, KT and Mithen, BJ. Whose Project is this Anyway? Proc. 4th National Conference, Australasian Evaluation Society, 1988.
[3] Department of Finance. Evaluation Module: Training & Skill Development Workshop. 1987.
[4] Appendix B, page 57, of the Evaluating Government Programs handbook did include in its checklist of factors the question: 'Have program implementation or other changes in the social / political environment affected the relevance of the original program objectives or introduced new objectives?'

[Figure 1: Logic Model for a Typical Government Program (adapted from Evaluating Government Programs - A Handbook, p.14). The diagram depicts a linear causal chain: the political, cultural & legal environment and resource inputs are presumed to bring about ongoing program activities, which are presumed to bring about short term outputs, which are presumed to bring about achievement of objectives. Marginal prompts ask: Have we considered the political impact? Are the staff trained? Is there union / employer support? What data is there to prove this? Are the objectives still relevant?]


As a structured approach, program logic analysis is a powerful tool for encouraging staff to focus on the main game ... objectives achievement. This valuable application of the methodology by both Australian Federal and State bureaucracies is well documented.[5]

DASETT Recreation Program ($739,000 p.a.)

SUB-PROGRAM OBJECTIVE

The development of a national policy and programs for the promotion of fitness through increased participation in regular physical activity:
by 1990, 40% of the adult population should participate in sufficient activity to achieve and maintain physical fitness and health;
by 2000, 60% of the adult population should participate in sufficient activity to achieve and maintain physical fitness and health.

The methodology has also proven useful in exposing woolly thinking. An example of the illogic exposed by logic analysis was the preposterous claim of the Recreation and Fitness sub-program of the Australian Department of the Arts, Sport, the Environment, Tourism and Territories (DASETT) that its $739,000 p.a. budget could claim the kudos for upwards of $2,000 million in savings from prevention of cardio-vascular diseases[6] - totally ignoring the contribution of the $7,000 million plus annual recreational outlays of voluntary organisations, the private sector and local and state governments.

Figure 2: Wishful Thinking of Program Managers
[Chart: 'An Absence of Logic ...' - annual recreational outlays ($ million): Private Sector $4,000m; Voluntary Org's $2,000m; Local Govt $1,000m; State Govt $50m; DASETT $0.8m - asking how 0.011% of the outlays can claim 100% of the outcome.]

Nevertheless, as the logic framework paradigm began to be adopted, implicitly or explicitly, by Commonwealth and State agencies, and as I applied it in practice, I became progressively more concerned with the presumed one-way causality.

[5] Refer, e.g., any proceedings of the Australasian Evaluation Society. Lenne, B and H Cleland. Describing Program Logic. Program Evaluation Bulletin 2/87, Program Evaluation Unit, Public Service Board of NSW.
[6] DASETT. Strategies for Fitness. AGPS, Canberra, 1987. DASETT. Technical Paper No 2: The Economic Impact of Sport and Recreation - Regular Physical Activity. AGPS, 1988. DASETT. Review and Evaluation of the Recreation and Fitness Assistance Program. September 1990.



I should note that intelligent diagramming-based computer tools which would readily enable program managers and evaluators to explore explicitly the program feedback relationships did not appear until 1987, with the release of IThink for the Macintosh. IBM PC users had to wait until 1993/4 for VENSIM and POWERSIM and the PC version of IThink.

    Brainstorming performance indicators

From 1985 I had ongoing involvement with the development of Department of Finance guidelines on program performance indicators.[7] The more workshops I attended on 'Developing Performance Indicators',[8] the more concerned I became that it would be pure serendipity were ad hoc brainstorming sessions to lead to appropriate management control mechanisms. This is not to deny that the process had major benefits in reorienting the managerial mindset towards outcomes and away from the fixation on inputs and process. However, deciding performance indicators by consensus, as though such indicators were somehow external to the system and responsive to democratic ideals, had the air of a Lewis Carroll nonsense rhyme.

    Brainstorming Performance Indicators

Taking Three as the subject to reason about -
A convenient number to state -
We add Seven, and Ten, and then multiply out
By One Thousand diminished by Eight.

The result we proceed to divide, as you see,
By Nine Hundred and Ninety and Two:
Then subtract Seventeen, and the answer must be
Exactly and perfectly true.

Lewis Carroll, The Hunting of the Snark.

But how do I know if it's a good indicator?

In preparing this paper I scanned the 634 documents contained in the Commonwealth Managers' Toolbox and the Defence Managers' Toolbox[9] which explicitly referred to performance indicators. These ranged from 1-page circulars to multi-hundred-page reports and covered the period 1989 to 1995. My review of these documents, and of evaluation literature generally, showed common themes:

1. indicators exist to assist decision making: an important purpose of performance indicators is to provide feedback to the managers to guide decisions on continuous program improvement;

2. there are a variety of indicator types (input, process, output, outcome), but performance indicators should focus on a limited number of key outcome areas designated by program managers as critical to the continuing successful functioning of the agency;

    3. a first step in selecting indicators is ensuring that the objectives are appropriately stated;

[7] Department of Finance. Indicators of Performance. Program Budgeting Issues No. 5, July 1985 et al.
[8] Department of Finance. Performance Indicators - A Workshop Manual. September 1986.
[9] Department of Defence. Defence Managers' Toolbox (Incorporating the Commonwealth Managers' Toolbox). Canberra, June 1995. (This is a compact disc library of unclassified Federal Government and Defence Department documents.)


4. there should be widespread managerial and staff involvement in the selection of the performance indicators so that ownership will occur;

5. there are a variety of uses for indicators, including strategic and program planning, leadership and motivation, management control, accountability and evaluation;

6. there are a variety of audiences for performance indicators, including agency managers at different levels, executive Government, Parliament, lobby groups, press and public.

Beyond these platitudes, the public sector literature has little to say about how one might develop appropriate indicators. The 1986 Department of Finance Performance Indicators Workshop Handbook listed 7 simplistic criteria:[10]

    Are the PIs relevant to the key result area?

Will the PIs help my agency to make major decisions this year or in 3 years' time?

    Is there a clear logical relationship between data and objective?

    Is the data important ... worth keeping?

    Can the data be collected without excessive cost?

Is the data collected in a valid way?

Will decision makers understand the indicators?

The (unpublished) 1986 report by Tedder,[11] which devotes 14 pages to a structured approach to the identification, analysis and acceptance of program performance indicators, is probably the most developed document in this area, but it is, in essence, simply an elaboration of the 7 points above.

The wider literature on evaluation and program budgeting is similarly unhelpful in moving beyond the vague platitudes. Thus, one of the seminal documents in prompting governments to move towards program management was the 1965 Manual for Programme and Performance Budgeting produced by the Economic and Social Affairs Department of the United Nations. Despite a wealth of information on the possible uses of performance indicators, its only contribution to the process of defining them was the statement 'Such measures can be developed and refined after steps have been taken towards ... the establishment of suitable programme classifications ...'[12]

Ramanathan and Hegstad,[13] in their comprehensive set of readings on performance management, glibly state that, having identified the specific program objectives, '... we must translate these targets into performance measures, process or efficiency standards and input budgets corresponding to each staff member.' How we do this, and how we ensure that the measures will achieve the desired result, is ignored.

The contributors to successive Australasian Evaluation Society conferences, the premier forum in Australia for discussion of public sector program evaluation issues, have also skirted this issue. Some, like Scott,[14] provide detailed shopping lists of relevant indicators but do not give any convincing basis for indicator selection. Others, like Guthrie,[15] give a valid critique of the approaches to performance indicators in the Australian public sector, questioning in particular who gives guidance on the indicators, what the indicators actually measure and whether valid measurements can be made. But again there is silence on how to do better.

[10] Department of Finance. Performance Indicators - A Workshop Manual. September 1986, Worksheet 7.
[11] Tedder, P. Performance Monitoring - A Manager's Guide to the Selection and Implementation of Performance Indicators. Department of Veterans' Affairs, unpublished draft, 1986.
[12] United Nations Organisation. Manual for Programme and Performance Budgeting. UNO Economic and Social Affairs Department, New York, 1965.
[13] Ramanathan, KV and LP Hegstad. Readings in Management Control in Nonprofit Organisations. Wiley, New York, 1982.
[14] Scott, M. Performance Indicators for Information Technology in the Australian Broadcasting Corporation. Proc. Australasian Evaluation Society International Conference, 1992, p.91.1-91.4.
[15] Guthrie, J. Performance Indicators at the Cutting Edge. Proc. Australasian Evaluation Society International Conference, 1992, p.50.1-50.7.



The literature is singularly lacking in detailed advice on how to determine whether, in fact, the use of consensus-based indicators will push the program in the desired direction. It is singularly lacking on criteria for selecting indicators. We are simply asked, like Lewis Carroll's Beaver, to accept that 'The proof is complete, If only I've stated it thrice!'

    SYSTEM DYNAMICS AND PERFORMANCE INDICATORS

These concerns have led me on a journey of discovery, embracing in particular Peter Checkland's Soft Systems Modelling methodology,[16] the discipline of System Dynamics pioneered by Jay Forrester, and Peter Senge's 'Systems Thinking'. In these immensely fruitful fields I found confirmation of my fears. Our approach to performance indicators had, and still has, fundamental flaws. On the other hand, these disciplines have tools and techniques which can greatly assist the manager in developing performance indicators and in undertaking evaluations.

    Misperceptions of feedback - the critical issue

An implicit and fundamental assumption behind the Australian public sector approach to selecting performance indicators is that the feedback they give to the decision maker (either directly or via resultant pressure or direction from others) will cause the decision maker to make appropriate adjustments to the inputs or processes.

However, there is abundant research in the field of system dynamics,[17] as well as in the fields of experimental economics and psychology, which suggests that managers have great difficulty managing dynamically complex tasks. Sterman argues persuasively, from his work at MIT's Sloan School of Management, that there is systematic misperception of feedback, especially when there are delays in the system. Mosekilde, Larsen and Sterman[18] present the results of 48 simulations of the Beer Game[19] (a simulation of a simple factory-warehouse-retail system) run with 192 graduate students from MIT and senior executives of major US firms. They show that decision making on the basis of straightforward performance indicators, but in the face of delays, consistently resulted in costs averaging more than 10 times the optimum!

[16] Checkland, P and J Scholes. Soft Systems Methodology in Action. Wiley, Chichester, 1990.
[17] Sterman, J. Deterministic Chaos in Models of Human Behaviour. System Dynamics Review, 1988, 4, 148-178.
Sterman, J. Misperceptions of Feedback in Dynamic Decision Making. Organisational Behaviour and Human Decision Processes, 1989, 43(3), 301-335.
Sterman, J. Modelling Managerial Behaviour: Misperceptions of Feedback in a Dynamic Decision Making Experiment. Management Science, 1989, 35(3), 321-339.
Paich, M and J Sterman. Boom, Bust and Failures to Learn in Experimental Markets. Management Science, 1993, 39(12), 1439-1458.
Smith, V, G Suchanek and A Williams. Bubbles, Crashes and Endogenous Expectations in Experimental Spot Asset Markets. Econometrica, 1988, 56(5), 1119-1152.
Funke, J. Solving Complex Problems: Exploration and Control of Complex Systems, in R Sternberg and P Frensch (eds.), Complex Problem Solving: Principles and Mechanisms. Erlbaum Assoc., New Jersey, 1991.
[18] Mosekilde, E, E Larsen and J Sterman. Coping With Complexity: Deterministic Chaos in Human Decision Making Behaviour, in J Casti and A Karlqvist (eds.), Beyond Belief: Randomness, Prediction and Explanation in Science. CRC Press, Boston, 1990.
[19] The Beer Game is described in detail in Senge, P. The Fifth Discipline - The Art and Practice of the Learning Organization. Doubleday, New York, 1990.


Simulations run by the author at the Australian Defence Force Academy, involving seven teams comprising 56 undergraduate and graduate students, showed a similar pattern. In both the MIT simulations and those at ADFA, highly educated managers and students failed to comprehend the significance of feedback in the face of delay-induced dynamics.

In more recent experiments at MIT, where graduate students had full information, training, incentives and opportunities for gaining experience, Diehl and Sterman still found poor managerial performance in the face of variations in feedback strength and delay.[20] Often they found the subjects were outperformed by a simple 'no-control' rule. Diehl and Sterman argue that the mental constructs and heuristics that managers bring to bear on complex tasks are fundamentally dynamically deficient:

Subjects were unable to account well for delays and feedback effects because (1) people's mental representations of complex tasks are highly simplified, tending to exclude side effects, feedback processes, delays, and other elements of dynamic complexity; and (2) even when these elements are known, people's ability to infer correctly the behaviour of even simple feedback systems is poor.

The first deficiency can certainly be addressed through training. The second, however, '... is a fundamental bound on human rationality - our cognitive capabilities do not include the ability to solve systems of high-order non-linear differential equations intuitively.'

    Systems thinking, system archetypes and causal loops

Regarding the first deficiency, causal loop diagramming, a structured approach following clear guidelines,[21] provides powerful insights into systemic problems, takes explicit account of feedback and provides pointers to possible performance indicators ... which, of course, we would later test through simulation. This is dealt with in detail later in this conference in my paper Public Sector Performance Management - Now and for the Future.

While causal loops might aid in the understanding of feedback and suggest intervention points and performance measures, they give no guidance on the efficacy of proposed indicators. For that we need an additional tool: system dynamics modelling software. The following computer simulation will illustrate how we might go about this.

    Simulating the Beer Game

The so-called Beer Game (more precisely, an inventory management game) is a classic management game developed at MIT to simulate decision making in the face of dynamic processes, testing the systems principle that 'structure influences behaviour'. The scenario for the MIT game is depicted in Figure 3. Each team comprises 4 sets of decision makers - an ordering executive and a supply executive for each of the retail, wholesale, distribution and factory sectors. The aim of each team is to minimise total inventory costs and stockout costs, in the face of delays, as indicated in the diagram, and of demand uncertainty. As noted earlier, even with such a simple system, senior executives and MBA students are unable to account for the dynamic interactions.

[20] Diehl, E and J Sterman. Effects of Feedback Complexity on Dynamic Decision Making. MIT Sloan School of Management, Research Report D-4401-1, March 1994.
[21] Refer, for example, Senge (1990), op. cit., or virtually any issue of The Systems Thinker, Pegasus Communications, Cambridge MA.


Figure 3: System simulated in the 'Beer Game'
[Diagram: orders (paper flow) pass from Customers back through Retailer, Wholesaler and Distributor to the Factory; goods (filled orders) flow forward through the same chain, with a delay at each stage.]

The problem confronting the game players, and their response to intuitive performance indicators, is illustrated in Figures 4 to 6, where we build a system dynamics model of a (much simplified) Beer Game scenario using the POWERSIM modelling software. In this model there are only two stocks, Inventory (goods-in-stock) and Order_Backlog (goods-on-order).[22]

In Figure 4 we see the basic system layout. The manager has but one decision: given knowledge of inventory, order backlog and current demand, what quantity is to be ordered? The aim is to minimise inventory holding and stockout costs. The manager may specify any performance indicator possible given the model structure. The dynamics come from two sources: there is a known 2-week delay before any order is fulfilled; and orders can vary. (In this simulation there is a single variation: from a steady 4 orders per week, rising to 6 orders per week at week 10.)

Figure 4: Basic Structure of Simple Inventory System
(Two stocks - Order_Backlog and Inventory; one decision point, ordering; 2-week delivery delay; selling initially steady at 4, increasing by 2 at week 10 and remaining steady thereafter at 6 items per week.)
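To make the structure concrete, here is a minimal sketch of the Figure 4 system as a discrete weekly simulation in Python. It is illustrative only: the names follow the paper's diagram rather than POWERSIM's syntax, and the within-week timing convention (an order placed in a week is delivered two weeks later; flows are applied at week's end) is an assumption chosen to reproduce the trajectories reported below.

```python
# Minimal sketch of the Figure 4 stock-flow structure (not POWERSIM syntax).
# Assumptions: one-week time step; an order placed in week t arrives in
# week t + 2; flows computed during a week are applied at the week's end.

def simulate(order_rule, weeks=30, delivery_delay=2,
             initial_inventory=24, target_inventory=24):
    """Run the two-stock inventory model.

    order_rule(selling, inventory, target) returns this week's order quantity.
    Returns a list of (week, end-of-week inventory) pairs.
    """
    inventory = initial_inventory
    pipeline = [4] * delivery_delay       # goods in transit (steady state: 4/week)
    order_backlog = sum(pipeline)         # orders placed but not yet delivered
    history = []
    for week in range(weeks):
        demand = 4 if week < 10 else 6    # the single step in demand at week 10
        selling = demand                  # selling = demand (stockouts ignored)
        ordering = max(0, order_rule(selling, inventory, target_inventory))
        pipeline.append(ordering)
        delivering = pipeline.pop(0)      # fulfilled after the delivery delay
        order_backlog += ordering - delivering
        inventory += delivering - selling # each stock integrates its net flow
        history.append((week, inventory))
    return history
```

The two decision rules discussed below (Runs 1 and 2) are passed in as order_rule, so the structure is held fixed while only the ordering heuristic changes.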

[22] In system dynamics modelling packages such as POWERSIM, as we add elements to the diagram the software builds the structure of the model's equations, greatly simplifying the modelling process.



Figure 5 shows the additional structural elements that are added to the model in order to perform a 'no feedback control' simulation.

The arrow from 'selling' to 'ordering' indicates that ordering is a function of the number sold.

The performance indicator (PI_Target_less_Actual) is defined by the diagram to be a function of Inventory and Target_Inventory (PI_Target_less_Actual = Target_Inventory - Inventory).

Selling is a function of demand (selling = demand).

Delivering is a delayed function (indicated by the two bars across the arrow link) of ordering.

The ordering heuristic is one of 'no control' - order this week whatever was sold in the previous week. Performance indicators play no part in the decision making. The model is ready to run (the structure is restated as difference equations below).
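For readers who prefer equations to diagrams, the structure just described reduces to a handful of difference equations (a notational restatement using the diagram's names; the one-week time step and within-week timing are assumptions consistent with the reported results):

```latex
\begin{aligned}
\text{selling}_t &= \text{demand}_t \\
\text{ordering}_t &= \text{selling}_t && \text{(the 'no control' rule)} \\
\text{delivering}_t &= \text{ordering}_{t-2} && \text{(2-week delivery delay)} \\
\text{Inventory}_{t+1} &= \text{Inventory}_t + \text{delivering}_t - \text{selling}_t \\
\text{Order\_Backlog}_{t+1} &= \text{Order\_Backlog}_t + \text{ordering}_t - \text{delivering}_t
\end{aligned}
```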

If this heuristic were to be followed throughout the game we would see inventory decline from its initial 24 to 22 items at week 10 and then to 20 at week 11. Thereafter it would remain constant. The decline in inventory occurs because there is a two-week delay before the higher level of sales (which depletes inventory) is translated into a higher level of delivered orders (which replenish inventory).

Figure 5: Result of Simulation Run 1
(Inventory falls at week 10 as a result of the demand increase from 4 to 6, and does not recover.)
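Run 1 can be reproduced with the simulate() sketch given under Figure 4 (the three-argument rule signature is that sketch's assumption, not part of the original model):

```python
# Run 1: 'no control' - order whatever was sold; the indicator plays no part.
no_control = lambda selling, inventory, target: selling

for week, inventory in simulate(no_control, weeks=16):
    print(week, inventory)  # 24 up to week 9, 22 at week 10, 20 from week 11 on
```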

The management task ... brainstorm performance indicators

The putative performance management panel is now challenged to implement performance indicators and related decision rules which will force inventory to tend towards target. Group brainstorming with senior managers and graduate student groups (reminiscent of public service performance indicator workshops) inevitably results in decision rules which, as we see in Figure 6, lead to disaster.


The players almost invariably set a single performance indicator (Target_Inventory less Actual_Inventory) and set the decision rule: order what was sold plus the performance gap.

The same occurs in the standard Beer Game, which has, in addition, a wholesaler, distributor and brewer. Faced with a management decision situation which is orders of magnitude less complex than that faced by most program managers, managers tend to apply performance information in a way which is disastrous.

    Testing the value of intuitive performance indicators

Figure 6 shows the revised model, with ordering now defined to be a function of both selling (i.e., last week's sales) and the performance indicator. In the face of a simple perturbation (a once-off increase in sales) and a short delay, this intuitive indicator sets up chaotic fluctuations. The mix of performance indicators, and the related decision algorithms, required to force the system to stability at the target inventory is by no means intuitive, notwithstanding the problem's simplicity.

Figure 6: Result of Simulation Run 2
(Ordering heuristic based on current sales data plus performance information on the gap between target inventory and actual inventory.)
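The brainstormed rule can be tested with the same simulate() sketch; only the heuristic changes (again a sketch, with the panel's 'sold plus performance gap' rule expressed under the earlier timing assumptions):

```python
# Run 2: intuitive PI-based rule - order what was sold plus the performance
# gap (PI_Target_less_Actual = Target_Inventory - Inventory).
pi_rule = lambda selling, inventory, target: selling + (target - inventory)

for week, inventory in simulate(pi_rule, weeks=30):
    print(week, inventory)  # overshoots and oscillates instead of settling at 24
```

Although the rule looks sensible, the two-week pipeline means the inventory gap is corrected twice over - once by orders already in transit and again by the new orders - so inventory overshoots the target and keeps cycling.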

Unfortunately the boom-and-bust behaviour of the inventory-time graph in Figure 6 is all too reminiscent of Australia's economic management, our housing industry, and the car industry. The boom-bust problem, of course, is not limited to Australia.

    A Simple Analogy

One can liken the typical public sector decision context to that of drivers on a busy city road where the windscreens are obscured, so they steer by looking in their rear view mirrors, and there is a delay of several seconds between using the steering wheel, brakes or accelerators and the corresponding vehicle response. In such a situation we would not be surprised at the occasional crash.



    CONCLUSIONS

The thesis of this paper is simple. Neither the theoreticians nor the practitioners of evaluation have developed robust criteria for selecting appropriate performance indicators. Traditional approaches have also largely ignored the pervading effect of feedback. Meanwhile, the parallel disciplines of soft systems modelling and system dynamics have developed a range of techniques and tools which can greatly assist the evaluation profession. Causal loop diagrams help us conceptualise how managers might apply feedback information, while system dynamics modelling tools such as POWERSIM provide a powerful, yet painless, means of testing the likely impact of different decision rules.

    -------------------------------------------------------