Performance Reporting: Insights from International Practice

2009
Managing for Performance and Results Series

Richard Boyle
Head of Research
Institute of Public Administration
Dublin, Ireland


TABLE OF CONTENTS

Foreword
Executive Summary
Introduction
    Methodology
Analysis of Reported Performance Indicators
    Understanding the Breadth of the Data
    Ensuring the Quality of Reported Indicators
    Varying Practices
Developing a Good System for Reporting on Outputs and Outcomes
    Key Attributes of Preparation
    Key Attributes of Presentation
    Recommendations for Improvement
Appendix: A Strategic Focus on Outcomes—A Global Trend
Acknowledgments
Endnotes
References
About the Author
Key Contact Information


FOREWORD


On behalf of the IBM Center for The Business of Government, we are pleased to present this report, “Performance Reporting: Insights from International Practice,” by Richard Boyle of the Institute of Public Administration in Dublin, Ireland.

While there is much debate about the use and impact of performance reporting, Richard Boyle finds that there is surprisingly little information on the nature and quality of output and outcome indicators that are actually used and presented in performance reports. He further notes that there is an almost total lack of information on cross-national comparative practice.

With this in mind, Boyle sets out to provide empirical evidence of what is actually happening in output and outcome reporting by governments across the world. He examines four countries regarded as among those at the forefront of performance reporting: Australia, Canada, Ireland, and the United States. His report provides cross-national comparative data on good and bad practices in performance reporting, shares good practices across these countries, assesses the state of performance reporting, and provides directly relevant assistance to program managers in both central and line agencies.

Interestingly, Boyle finds that there is a clear distinction between performance reports in the United States and those in other countries he examined. On the whole, indicators contained in United States reports are more likely to report on outcomes, be quantitative in nature, meet data quality criteria, and have associated targets and multiyear baseline data.

Drawing on his qualitative analysis of indicators used in performance reports, Boyle identifies six key attributes for those involved in providing better output and outcome information and offers six corresponding recommendations for producing better performance reports.


In the United States, we hope that this particularly timely and informative report will be useful to the Obama Administration’s Chief Performance Officer, departmental performance improvement officers and program managers, as well as the Congress. We also hope the report will be useful to other nations as they also move toward an increased focus on outcome reporting.

David Treworgy
Partner, Public Sector Financial Management Services
IBM Global Business Services
[email protected]

Jonathan D. Breul
Executive Director
IBM Center for The Business of Government
[email protected]


EXECUTIVE SUMMARY

Recent years have seen a growing emphasis on the reporting to politicians and citizens of the outputs and outcomes of government programs. Yet there is limited information on what outputs and outcomes are actually reported in practice. The purpose of this report is to examine the reporting of outputs and outcomes in four countries. What types of indicators are actually being reported? Does the reality match the rhetoric?

Marked Differences in Performance Reporting Practices

The research for this report focused on a content analysis of performance indicators in a sample of performance reports produced in four countries: Australia, Canada, Ireland, and the United States. Some key findings emerge from this analysis:

• There is a clear preponderance of output and outcome indicators as opposed to activity and input indicators, suggesting that the emphasis on outputs and outcomes in government programs is being reflected in practice. But the United States’ experience is very different from those of the other countries examined. Eighty percent of indicators reported in the U.S. performance reports examined are outcome indicators. In the other countries examined, output indicators are predominant.

• Canadian and U.S. performance reports contain a majority of quantitative indicators. For Australia and Ireland, the majority of indicators are qualitative in nature.

• Three-quarters of the reported indicators surveyed in the U.S. performance reports are aspirational in nature and beyond the direct control of the agency. In Australia and Ireland, achievement of around 80 percent of the indicators can be attributed to the agency. Canada sits in between.

• Virtually all of the reported indicators in the U.S. performance reports have targets associated with them. Overall, there is very little use made of targets in Australia, Canada, and Ireland. It is the norm in Australia, Canada, and Ireland to present data just for the year under scrutiny. In the United States, the norm is for multiyear trend data to be included, with between three and five years of data being commonplace.

• Nearly all of the indicators reported in the U.S. reports meet the SMART (specific, measurable, achievable, relevant, time-bound) quality criteria. Canada performs next best against the SMART criteria, with Australia and Ireland displaying some limitations. Fewer than three-quarters of the Australian and Irish indicators examined, for example, are classified as specific or measurable.

There is, therefore, a clear distinction between the U.S. performance reports and the others examined. On the whole, indicators contained in the U.S. reports are more likely to report on outcomes, be quantitative in nature, meet data quality criteria, and have associated targets and multiyear baseline data.

The Key Attributes of Good Design

A qualitative analysis of the performance reports identifies six attributes of good performance reporting:

• Key Attribute One: Having a consistent, comparable, and structured approach to underpin the indicators reported.

• Key Attribute Two: Having a good performance story to accompany the indicators.


• Key Attribute Three: Having clearly specified outcome indicators and paying attention to detail.

• Key Attribute Four: Having information on both targets and baseline data combined to guide performance assessment over time.

• Key Attribute Five: Ensuring good presentation and effective use of technology.

• Key Attribute Six: Providing output and activity indicators as well as outcome indicators when discussing agency performance.

Recommendations

On the basis of these attributes, six recommendations are put forward to guide those concerned with improving reporting on outputs and outcomes:

• Recommendation One: When developing performance measurement systems, use a consistent, comparable, and structured approach to performance information across all agencies and programs.

• Recommendation Two: Include a good performance story to accompany the indicators.

• Recommendation Three: Specify outcome indicators, and fully explain the results reported against each indicator.

• Recommendation Four: Provide both target and baseline data.

• Recommendation Five: Ensure effective use of technology in presenting the performance data collected.

• Recommendation Six: Present agency performance information which includes output and activity indicators in addition to outcome indicators.


Introduction

Governments are under increasing pressure to publicly demonstrate the results achieved by expenditures on government-funded programs. At the same time, a shift from ex ante to ex post controls has resulted in a steady increase in the volume of performance information, with a focus on outputs and outcomes (OECD, 2005). Many countries have developed reporting frameworks for parliaments, giving increased emphasis to output and outcome reporting. A recent IBM Center report by Burt Perrin described the movement in governments across the world toward a greater focus on outcomes (Perrin). An excerpt from that report is presented in the Appendix.

Yet, despite this increase in activity, unease with what is actually being achieved is evident. Questions have been raised as to whether politicians actually find useful the output and outcome information reported to them (Pollitt, 2006). And the quality of the performance information provided has been questioned, sometimes as the result of national audit office scrutiny (Australian National Audit Office, 2007).

Much discussion and deliberation has gone into why output and outcome reporting may be having only limited impact. But little of this discussion is based on empirical information on the actual output and outcome information that is reported. There is surprisingly limited information on the nature and quality of the output and outcome indicators that are actually used and presented in performance reports for politicians. And there is an almost total lack of information on cross-national comparative practice with regard to output and outcome reporting. Much of the discussion is founded, instead, on the performance reporting frameworks produced by central agencies, which themselves often rely on hypothetical examples of output and outcome indicators to illustrate reporting requirements.

The aim of this report is to provide empirical evidence of what is actually happening in output and outcome reporting by government departments. Examples of reporting from four countries regarded as among those at the forefront in discussions on output and outcome reporting provide cross-national comparative data on good and poor practice and enhance the potential for lesson learning. The report aims to:

• Share good practice across countries

• Assess the state of performance reporting

• Be directly relevant and of assistance to program managers in both central and line agencies

Methodology

The research focuses on a content analysis of performance indicators in performance reports produced in four countries.1 The countries and reports reviewed are:

• Australia—departmental annual reports

• Canada—departmental performance reports

• Ireland—output statements

• United States—performance and accountability reports

These countries are all ones which have explicitly advanced an output and outcome reporting agenda for a number of years, as the box Performance Reporting Requirements and Reports Examined—By Country illustrates. A particular challenge is to find reports addressing a similar subject area, in order to enhance the comparison base. With this in mind, reports from the agriculture, health, and transportation sectors were reviewed.2 These sectoral areas give a spread of activities and functions, encompassing both more executive-focused and more policy-focused work. As such, the reports inform judgments as to the extent to which output and outcome reporting is advancing in different spheres of activity and policy domains. The most recently available reports at the time (covering 2007–08, in each case) form the basis for the analysis.


Performance Reporting Requirements and Reports Examined—By Country

Australia

The Australian government introduced an Outcomes and Outputs Framework as the basis for budgeting and reporting for public sector agencies in 1999–2000. Among the main elements of the framework are:

• Specification of what the government is trying to achieve (outcomes);

• Specification of how actual deliverables will assist in achieving the outcomes (outputs); and

• Annual performance reporting of agencies’ contribution to the achievement of outcomes and the delivery of outputs (Australian National Audit Office, 2007, p. 15).

Annual reports to Parliament detail the degree to which plans for the coming budget year are realized and targeted performance is achieved.

The reports examined in this study are:

• Department of Agriculture, Fisheries and Forestry Annual Report 2007–08

• Department of Health and Ageing Annual Report 2007–08

• Department of Infrastructure, Transport, Regional Development and Local Government Annual Report 2007–08

Canada

Canada has had an Improved Reporting to Parliament project running since the 1990s. Thirteen broad Government of Canada outcomes are specified, and agencies must develop clearly defined and measurable strategic outcomes that link in to these overarching outcomes. Departmental performance reports are intended to provide a comprehensive but succinct picture of departmental performance, as it compares against the strategic outcomes, through the reporting of program activities linked to the strategic outcomes (Treasury Board of Canada, 2007). An effort is being made to refocus reporting away from governmental outputs to higher-level outcomes that show how agencies make a difference to citizens.

The reports examined in this study are:

• Agriculture and Agri-Food Canada 2007–08 Departmental Performance Report

• Health Canada 2007–08 Departmental Performance Report

• Transport Canada 2007–08 Departmental Performance Report

Ireland

In Budget 2006, the Minister for Finance indicated that the government had decided that, starting in 2007, individual departments would publish an annual statement on the outputs and objectives of their departments and, from 2008, the actual out-turns. These statements (named output statements) are presented to the relevant parliamentary committee along with the department’s annual estimates. Guidance from the Department of Finance suggests that, with regard to reporting on performance, a small number of high-level goals per department—each with a macro-level outcome indicator—should be complemented by a small number of more detailed output indicators which should, where possible, be quantitative in nature; otherwise, qualitative.

The reports examined in this study are:

• Department of Agriculture, Fisheries and Food Annual Output Statement 2008

• Department of Health and Children Annual Output Statement 2008

• Department of Transport Output Statement 2008

United States

The Government Performance and Results Act of 1993 requires federal agencies to produce strategic plans, annual performance plans, and annual performance and accountability reports (PARs). These are aimed at establishing a system of accountability whereby agencies articulate what they are trying to achieve, how they will accomplish it, and how Congress and the public will know if they are succeeding (Breul, 2007, p. 313). Goals and objectives need to be stated as outcomes, and performance indicators must be valid indicators of the impact on outcome goals (Department of Energy, 2006).3

The reports examined in this study are:

• Department of Agriculture 2008 Performance and Accountability Report

• Department of Health and Human Services FY 2008 Citizens Report

• Department of Transportation FY 2008 Performance and Accountability Report



For the sake of convenience, in the remainder of this report, instead of being given their full titles, the reports are referred to as the agriculture, health, and transportation reports of the four countries examined.

The performance indicators in the reports were analyzed against a number of criteria. In particular:

• Whether the indicator focuses on outcome, output, activity, or input

• Whether the indicator is quantitative, qualitative, or measures a discrete event

• Whether changes in the indicator are attributable to the agency/program, or if the indicator is aspirational in nature

• Whether a target is associated with the indicator

• Whether baseline data giving the previous year’s (or years’) performance is associated with the indicator

• Whether the indicator meets SMART (specific, measurable, achievable, relevant, time-bound) quality criteria

The definitions used for this analysis (output, outcome, attributable, aspirational, etc.) are presented in the box below.4 A sketch of how this coding scheme might be represented for analysis follows the box. In addition to this analysis, a more qualitative analysis of the content of performance reports was undertaken to identify good and bad practice and to facilitate the drawing of lessons from practice to date.

Criteria Definitions

Activity reflects the things done by people in the course of delivering services or programs. For example: consultation meetings held, visits to sites.

Achievable means that the required performance associated with the indicator can be accomplished. It is possible, and it is not too far in the future. Achievable means that it is appropriately limited in scope.

Aspirational means that achievement is out of the direct control of the agency/program.

Attributable means the organization/program itself is capable of bringing about a change in the indicator value.

Baseline refers to whether or not a baseline level of performance for previous year(s) is specified against which change can be assessed.

Discrete event refers to a once-off event: an example would be “produce a policy paper by dd/mm/yyyy.”

Input covers the resources consumed for a particular activity, such as budget absorption, over/under spending, and the number of people working on a program.

Measurable means that the required performance can be measured, that the source of the data is identified and accessible, and that the performance indicator is valid and meaningfully reflects the desired performance, condi-tion or state. Measurable means that it is numeric or descriptive of outcomes, quantity, quality, time-performance, or cost.

Outcome focuses on what happens as a result of the delivery of the output; the events or changes in conditions/behavior/attitudes that arise.

Output refers to the products or services directly produced by an agency/program.

Qualitative means that the indicator is descriptive based on some quality rather than quantity.

Quantitative means the indicator is subject to numerical measurement.

Relevant means that the required performance will materially contribute to achieving the organization’s objectives and goals.

Specific means that the indicator, associated description, or associated objective/goal is concrete, detailed, focused, and well defined. The nature and the required level of performance can be clearly identified.

Target assesses if there is an associated reference point against which indicator performance can be judged.

Time-bound means that there is a deadline or specified time-frame, that the deadline or time-frame is reasonable, and that the time-frame is relevant, i.e., the deadline is not beyond the point at which achieving the goal loses its value.
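To make the coding scheme concrete, the sketch below shows one way the criteria above could be encoded for analysis. It is illustrative only: the field names, the sample records, and the tally are the published definitions re-expressed in Python, not part of the study's actual tooling.

```python
# Illustrative sketch only: the coding criteria defined above, re-expressed as
# a simple data structure. Field names and sample records are hypothetical.
from dataclasses import dataclass, field
from collections import Counter


@dataclass
class Indicator:
    text: str
    focus: str           # "outcome", "output", "activity", or "input"
    kind: str            # "quantitative", "qualitative", or "discrete event"
    attribution: str     # "attributable" or "aspirational"
    has_target: bool
    baseline_years: int  # 0 = no baseline data presented
    smart: set = field(default_factory=set)  # SMART criteria judged to be met


sample = [
    Indicator("Fatality rate per 100 million vehicle-miles", "outcome",
              "quantitative", "aspirational", True, 5,
              {"specific", "measurable", "achievable", "relevant", "time-bound"}),
    Indicator("Consultation meetings held", "activity",
              "quantitative", "attributable", False, 0, {"measurable"}),
    Indicator("Produce a policy paper by 31/12/2008", "output",
              "discrete event", "attributable", True, 0,
              {"specific", "time-bound"}),
]

# Tally the share of each focus category, as in Table 1 and Figure 1 below.
focus_counts = Counter(i.focus for i in sample)
for focus, n in focus_counts.most_common():
    print(f"{focus}: {n / len(sample):.0%}")
```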


Analysis of Reported Performance Indicators

Understanding the Breadth of the Data

As a relatively crude starting point, it is interesting to look at the number of indicators reported, as shown in Table 1. There are clear differences across the countries, with the U.S. performance reports focusing on a relatively small number of indicators, i.e., roughly 30–40 per report. By way of contrast, the Australian performance reports each have over 100 indicators. Canada and Ireland fall in between. There are no clearly discernible sectoral differences.

In moving on to consider the focus of reported performance indicators—to what extent they focus on outputs and outcomes—some interesting variations occur, as Figure 1 illustrates. Overall, there is a clear preponderance of output and outcome indicators as opposed to activity and input indicators, suggesting that the emphasis on outputs and outcomes is being reflected in practice. But the U.S. experience is clearly different from the other countries examined. The vast majority of indicators reported in performance reports in the United States (80 percent) are outcome indicators. In the other countries examined, output indicators are predominant, accounting for over 50 percent of reported indicators in each case. Ireland has a particularly high proportion (18 percent) of activity indicators. There are also some sectoral differences. Outcome indicators are more commonly reported for agriculture: only Canada had a majority of output indicators for agriculture.

The main reason for this difference in practice appears to be a difference in focus on whose performance is reported in the section of the report that deals with program performance. For the U.S. reports, the clear focus is on reporting on the outcomes of program performance only. For the other countries examined, a mix of program and agency performance is reported. In other words, reporting on how well the agency is performing in delivering on the outcomes is combined with a focus on program outcomes in the performance reports of Australia, Canada, and Ireland in this section of the report.

Table 1: Number of Performance Indicators in Performance Reports

                 Agriculture   Health   Transportation
Australia            142        103         128
Canada                88         87          85
Ireland               41         98          77
United States         32         40          36

[Figure 1: Focus of Performance Indicators—bar chart of the percentage of outcome, output, activity, and input indicators for Australia, Canada, Ireland, and the USA]



Figure 2 shows the type of performance indicator reported. The clear preference in the guidance for performance reports is that indicators should be quantitative in nature. In practice, the United States achieves this objective, with almost all of the reported indicators being quantitative in nature. The picture varies for the other countries examined, however. Canada also has a majority of quantitative indicators, with two-thirds of its indicators being quantitative in nature. But, for both Australia and Ireland, the majority of indicators are qualitative. Ireland also has a relatively large proportion (21 percent) of indicators that measure discrete events, such as the production of a report by a certain deadline, the review of a project, or participation in international meetings. The health sector seems to lend itself more to the use of quantitative indicators, with the majority of the reported health indicators being quantitative in all countries except Ireland.

An interesting issue is the extent to which changes in the reported indicators can be directly attributed to the agency/program. If the change cannot be attributed to the agency/program, it is described here as aspirational in nature: i.e., achievement is outside the direct control of the agency/program. A clear difference emerges between the United States and the other countries on this criterion, as Figure 3 illustrates. Three-quarters of the reported indicators surveyed in the U.S. reports are aspirational in nature. In Australia and Ireland, around 80 percent of the indicators are attributable in nature. Canada sits in between.

The reason for these differences is linked to the extent to which outcome indicators are reported. Outcomes tend to be amenable to influence outside of the agency/program, and the vast majority of reported outcome indicators are aspirational in nature. The fact that the United States has a preponderance of outcome indicators, whereas Australia and Ireland have a lower proportion of outcome indicators, helps to explain the variation.

Another significant issue is whether the reported indicators have targets and baseline data associated with them, as illustrated in Figures 4 and 5. Guidance on performance reporting commonly suggests that good practice would include the use of targets and baseline data to enable judgments to be made about performance, both in absolute terms and over time. Practice, however, varies considerably on these fronts. Again, the United States is an outlier when compared to the other countries surveyed, in that virtually all of the reported indicators have targets and baseline data associated with them.

[Figure 2: Type of Performance Indicator—bar chart of the percentage of quantitative, qualitative, and discrete-event indicators for Australia, Canada, Ireland, and the USA]

[Figure 3: Attributable or Aspirational Indicators—bar chart of the percentage of attributable versus aspirational indicators for Australia, Canada, Ireland, and the USA]


Overall, there is very little use made of targets in Australia, Canada, and Ireland. The health sector is somewhat of an exception, with three-quarters of the health indicators in Australia and almost half of the health indicators in Ireland having targets associated with them.

There is even less frequent presentation of baseline data in performance reports that can enable changes in performance indicators to be assessed over time. It is the norm in Australia, Canada, and Ireland to present data just for the year under scrutiny (though around a third of the Australian health indicators and all eight of the Canadian transportation indicators do have some baseline data from previous years associated with them). In the United States, the norm is for multiyear trend data to be included, with between three and five years of data being commonplace.

Ensuring the Quality of Reported Indicators

Apart from the number of indicators in different categories—such as output/outcome, quantitative/qualitative—it is important to assess the quality of the indicators used. It is of little use, for example, having a lot of outcome indicators if they are of poor quality and consequently uninformative. One means of assessing quality is to rate the indicators against the commonly used SMART (specific, measurable, achievable, relevant, time-bound) criteria.

The results of this exercise are shown in Figure 6. Again, this figure shows the United States as differing from the other countries, in that nearly all of the indicators reported meet the SMART criteria. Canada performs next best against the SMART criteria, with Australia and Ireland displaying some limitations. Fewer than three-quarters of the Australian and Irish indicators examined, for example, are classified as specific or measurable. We found no examples of irrelevant indicators.

The SMART criteria are a commonly used set of criteria for judging the quality of performance indicators:

• Specific—The indicator, associated description, or associated objective/goal is concrete, detailed, focused, and well defined. The nature and the required level of performance can be clearly identified.

• Measurable—The required performance can be measured, the source of the data is identified and accessible, and the performance indicator is valid and meaningfully reflects the desired performance, condition, or state. Measurable means that it is numeric or descriptive of outcomes, quantity, quality, time-performance, or cost.

• Achievable—The required performance associated with the indicator can be accomplished. It is possible, and it is not too far in the future. Achievable means that it is appropriately limited in scope.

[Figure 4: Target Associated with Indicator—bar chart of the percentage of indicators with and without an associated target for Australia, Canada, Ireland, and the USA]

[Figure 5: Baseline Data Associated with Indicator—bar chart of the percentage of indicators with and without associated baseline data for Australia, Canada, Ireland, and the USA]



• Relevant—The required performance will materially contribute to achieving the organization’s objectives and goals.

• Time-bound—There is a deadline or specified time frame, the deadline or time frame is reasonable, and the time frame is relevant, i.e., the deadline is not beyond the point at which achieving the goal loses its value.

In terms of the challenges associated with setting good quality performance indicators, many indicators were set out as broad objectives, which are in practice unspecific in nature and incapable of being measured or monitored in any meaningful sense. Despite the guidance on the need for specificity and clarity, it is still not uncommon to see indicators along the lines of “make progress on …,” “improve the efficient administration of …,” and so on. Another problem is the mixing together of several objectives into one indicator, making clear reporting on the indicator impossible in practice. Good quality indicators facilitated judgment on performance in an accessible manner, whether the focus was on outputs or on outcomes. A crude automated screen for such vague phrasings is sketched below.
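Purely by way of illustration, the sketch below shows what such a crude screen might look like. The pattern list is invented for the example; judging specificity and measurability in practice requires human review.

```python
# Illustrative only: a crude screen flagging vague indicator phrasings of the
# kind quoted above ("make progress on ...", "improve the ..."). The pattern
# list is invented; real quality review needs human judgment.
import re

VAGUE_PATTERNS = [
    r"\bmake progress on\b",
    r"\bimprove the\b",
    r"\bongoing promotion of\b",
    r"\benhance\b",
]


def looks_vague(indicator: str) -> bool:
    """Return True if the indicator text matches a known vague phrasing."""
    return any(re.search(p, indicator, re.IGNORECASE) for p in VAGUE_PATTERNS)


for text in [
    "Make progress on improving the administration of grants",
    "Passenger vehicle occupant highway fatality rate per 100 million "
    "passenger vehicle-miles traveled",
]:
    print(looks_vague(text), "-", text)
```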

Varying Practices

Overall, a focus on output and outcome indicators being used in performance reports to politicians is discernible. Also particularly noteworthy are the variations in practice between the countries examined:

• The United States stands out, focusing strongly on outcomes, using quantified and good quality indicators, by and large. Reporting against targets and baseline data covering previous years is the norm.

• Australia and Ireland focus more on reporting against output and activity indicators. There are significant variations in the quality of indicators used, and limited use is made of targets and baseline data.

• Canada falls somewhat in between. There is a greater focus on outcome and quantitative indicators than in Australia and Ireland, and a higher quality of indicator measured against SMART criteria. But little use is made of targets and baseline data.

• There are also some sectoral variations, notably the greater use of outcome indicators in agriculture and of quantitative indicators in the health sector.

[Figure 6: The Extent of SMART Indicators—bar chart of the percentage of indicators meeting each SMART criterion (specific, measurable, achievable, relevant, time-bound) for Australia, Canada, Ireland, and the USA]


Developing a Good System for Reporting on Outputs and Outcomes

Drawing on the qualitative analysis of indicators used in performance reports, it is possible to identify a number of key desirable attributes for those involved in getting better output and outcome information into performance reports for politicians.

Key Attributes of Preparation

Key Attribute One: Having a consistent, comparable, and structured approach to underpin the indicators reported.

While practice has been found to vary somewhat from the central advice and guidance issued on producing performance information, there is no doubting the benefits of a cohesive and comprehensive approach underpinning the development of output and outcome indicators in performance reports for politicians.

Both Canada and the United States have strong performance management frameworks that encourage a standardized approach to the presentation of performance information. Canada’s approach is based on the Management Resources and Results Structure (MRRS). The U.S. approach is underpinned by the Government Performance and Results Act and subsequent managerial and financial reform efforts.

Australia also has an underpinning Outcomes and Outputs Framework. While this has been successful in encouraging a focus on performance management (Bouckaert and Halligan, 2008), it appears to have been less successful than the Canadian and U.S. frameworks in securing consistency in the development of output and outcome indicators. Ireland has issued central guidance on indicators and reporting requirements, but has a less cohesive central approach to performance reporting than those of the other countries.

Data quality

A vital part of the consistent approach underpinning good output and outcome indicators is having processes in place to address data quality issues. Two particular aspects of data quality are notable regarding the U.S. performance reports, and perhaps help to explain why their indicators are of higher quality overall. One is the explicit attention given in the reports to data completeness and quality issues. The transportation report has a section on performance data completeness and reliability, and a link is provided to an assessment of the completeness and reliability of performance data and detailed information on the source, scope, and limitations of the performance data: http://www.bts.gov/programs/statistical_policy_and_research/source_and_accuracy_compendium/index.html.

On the website cited above, the United States also provides information as to how it aims to resolve inadequacies that exist in the performance data. In the agriculture report, each indicator is accompanied by an assessment of the completeness, reliability, and quality of data used in the indicator. The health report is less transparent with regard to data quality, but there is an overall focus on assuring the reader that the indicators are based on sound, quality data and on providing the source of that data. Where there are data limitations, these are often explicitly stated. In the performance reports examined for Australia, Canada, and Ireland, such explicit attention to data quality is not present. It may occur in individual cases, but it is not normal practice in the reports.



The second notable way in which data quality was explicitly addressed in the United States, until 2009, is the role played by an influential external independent scrutinizer, the Mercatus Center, based at George Mason University. From fiscal year 1999 to 2009, the Mercatus Center assessed all federal agency performance reports to see how well they were informing Congress and the public. These assessments went well beyond the quality of indicators, looking at how the reports reflect wider issues of transparency, public benefits, and leadership. But data quality is among the issues assessed, examining whether the performance data used are valid, verifiable, and timely (McTigue, Wray and Ellig, 2009, p. 5). This public, independent reporting and ranking of performance reports acted as a further incentive to enhance data quality.

In all of the countries examined, the national audit institution has a role in the oversight of data quality. But this oversight role tends to be limited to periodic reviews of performance reports rather than a detailed annual assessment and verification of data quality.

Key Attribute Two: Having a good performance story to accompany the indicators.

The majority of the performance reports examined contain narrative sections that spell out performance information in more detail. These performance stories serve an important role in giving the reader a fuller picture of the implications of the outputs and outcomes reported.

Good stories can be a means of including information that is important and useful to readers without losing the focus on the main outcomes reported. For example, one of the main performance indicators for the U.S. transportation report is “passenger vehicle occupant highway fatality rate per 100 million passenger vehicle miles traveled.” The accompanying narrative gives information on initiatives that are contributing to this overall outcome. In discussing electronic stability control (ESC) systems in passenger vehicles, the report states:

“ESC has the potential to save many lives by assisting the driver in maintaining control in critical driving situations. For vehicles equipped with the technology, we estimate that these systems have reduced fatal single-vehicle crashes by 63 percent for light trucks and vans (LTVs) and 36 percent for passenger cars. Rollover involvements in fatal crashes were decreased by 70 percent in passenger cars and 88 percent in LTVs.”

Excerpt from Canadian Management Resources and Results Structure (MRRS) Policy

The MRRS policy underpins the development of a common, government-wide approach to planning and managing the relationship between resource expenditures and results, providing a foundation for collecting, managing and reporting financial and non-financial information to Parliament.

At the apex of the policy are 13 broad Government of Canada Outcomes setting out a whole-of-government framework. These broad government outcomes are elaborated at the departmental level through Strategic Outcomes and their corresponding Program Activities:

• Strategic Outcomes define long-term and enduring benefits to Canadians that are linked to the department’s mandate, vision, and core functions. These outcomes must be clearly defined and measurable, and must provide the basis for establishing horizontal linkages among departments.

• Program Activity Architecture (PAA) is an architecture consisting of a structured inventory of all programs being delivered by a department. The programs of the PAA are depicted according to their logical relationships to each other and the Strategic Outcome to which they contribute. This architecture must be developed at a sufficient level of materiality to reflect how a department allocates and manages the resources under its control to achieve intended results. The PAA must also be supported by a performance measurement framework (PMF) that enables a department to collect data and to make decisions on program design, management, allocations and strategies to better achieve expected results. Each program of the PAA, at each level, must have planned and actual information on resources and results.

Reports have a consistent format with common electronic templates, which are aimed at making them easier to produce, easier to compare, and easier for electronic navigation.

Source: http://www.tbs-sct.gc.ca/rma/mrrs-sgrr_e.asp
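As the box describes, the PAA is essentially a tree: programs nested under a Strategic Outcome, each carrying planned and actual information on resources and results. The sketch below models that shape for illustration only; the outcome and program names and figures are invented, not drawn from Canada's actual architecture.

```python
# Illustrative only: the Program Activity Architecture modeled as a tree of
# programs under a Strategic Outcome. Names and figures are invented.
from dataclasses import dataclass, field


@dataclass
class Program:
    name: str
    planned_spend: float  # planned resources, per the MRRS requirement
    actual_spend: float   # actual resources
    results: str          # summary of results against plan
    children: list = field(default_factory=list)  # sub-programs


@dataclass
class StrategicOutcome:
    name: str
    programs: list


outcome = StrategicOutcome(
    "A safe and sustainable food supply",  # invented outcome
    [Program("Food safety inspection", 10.0, 9.6, "inspections on plan",
             [Program("Laboratory services", 3.0, 3.1, "backlog reduced")])],
)


def walk(program: Program, depth: int = 1) -> None:
    """Print the architecture with planned versus actual spend at each level."""
    print("  " * depth + f"{program.name}: planned {program.planned_spend}, "
          f"actual {program.actual_spend} ({program.results})")
    for child in program.children:
        walk(child, depth + 1)


print(outcome.name)
for prog in outcome.programs:
    walk(prog)
```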



Reporting on outcomes achieved by a particular initiative or initiatives can help explain what elements are contributing to the headline outcome indicator results.

Dealing with attribution and other external factors affecting outcomes in performance stories

Performance stories serve a particularly useful purpose in addressing an issue often associated with outcome reporting: the extent to which the reported outcome results can be attributed to the program or agency being reported on. As noted in the previous section, many indicators in performance reports are concerned with results that cannot be attributed directly to the program or agency. Accompanying performance stories can help shed light on the extent to which the agency is influencing these outcomes. As such, they are an important component of what Mayne (1999) refers to as “contribution analysis”: “What is needed for both understanding and reporting is a specific analysis undertaken to provide information on the contribution of a program to the outcomes it is trying to influence … the task at hand might be best described as, for reporting, trying to paint a credible picture about the attribution of a program.”

Again the U.S. transportation report provides useful examples in this context. One performance indicator is the “percent of total annual urban-area travel occurring in congested conditions.” In the accompanying text, it is stated that, “although increased adoption of strategies related to traffic incident management and work zone management may have helped to slow the growth of congestion, it is difficult to know to what extent external factors including the price of fuel have significantly influenced travel patterns and reduced vehicle miles traveled sharply.” Similarly, the narrative makes clear in another case that a growth in transit ridership over four years cannot be simply attributed to service improvements and fare subsidy programs: “The substantial correlation with the increase in gasoline prices suggests a causal relationship.” Thus the performance story can be used to highlight external factors that may be impacting positively or negatively on outcome achievement.

Good Practice in Preparation: Data Quality Assessment

An Example from the U.S. Department of Agriculture (USDA) 2008 Performance and Accountability Report

Data assessment of performance measure 5.1.1—Participation levels for the major Federal nutrition assistance programs (millions per month)

Participation data are drawn from USDA administrative records. State agency reports are certified accurate and submitted to regional offices. There, they are reviewed for completeness and consistency. If the data are acceptable, the regional analyst posts them to the National Data Bank (NDB) Preload System. NDB is a holding area for data review prior to release. Otherwise, regional-office personnel reject the report and the State agency is contacted. Data posted by regional personnel into NDB are reviewed at USDA. If data are reasonable and consistent with previous reports, they will be downloaded to NDB for public release. If not, USDA works with regional offices and States to resolve problems and inconsistencies. This process of review and revision ensures that the data are as accurate and reliable as possible.

Completeness of Data—Figures for Food Stamp Program and WIC participation represent 12-month fiscal year averages. Figures for National School Lunch Program and School Breakfast Program are based on nine-month (school year) averages. Participation data are collected and validated monthly before being declared annual data. Reported estimates are based on data through April 2008, as available July 25, 2008.

Reliability of Data—The data are highly reliable. Participation-data reporting is used to support program financial operations. All of the data are used in published analyses, studies and reports. They also are used to support dialogue with and information requests from the Government Accountability Office (GAO), the Office of Inspector General (OIG) and the Office of Management and Budget.

Quality of Data—As described above, the data used to develop this measure are used widely for multiple purposes, both within and outside USDA. The measure itself is reported in stand-alone publications as an important, high-quality indicator of program performance.

Source: http://www.ocfo.usda.gov/usdarpt/pdf/par2008.pdf, pp. 90–91
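The review-and-release flow the box describes can be pictured as a small pipeline: a completeness and consistency check at the regional office, then a reasonableness check against prior reports before public release. The sketch below is a schematic of that flow; the field names and the 25 percent tolerance are invented, and it is not USDA's actual system.

```python
# Illustrative schematic only of the review flow described in the box above
# (state agency -> regional office -> National Data Bank -> public release).
# The checks and the 25% tolerance are invented for the example.
def regional_review(report: dict) -> bool:
    """Completeness/consistency check performed at the regional office."""
    return all(key in report for key in ("state", "month", "participation"))


def usda_review(report: dict, history: list) -> bool:
    """Reasonableness check against previous reports before release."""
    if not history:
        return True
    last = history[-1]["participation"]
    return abs(report["participation"] - last) / last < 0.25


def process(report: dict, history: list) -> str:
    if not regional_review(report):
        return "rejected: report returned to the State agency"
    if not usda_review(report, history):
        return "held: resolve inconsistencies with regional office and State"
    history.append(report)
    return "released to the public via the National Data Bank"


history = [{"state": "CA", "month": "2008-03", "participation": 3_200_000}]
print(process({"state": "CA", "month": "2008-04", "participation": 3_250_000},
              history))
```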



Key Attribute Three: Having clearly specified outcome indicators and paying attention to detail.

Despite the challenges in identifying and specifying outcomes for public programs, all of the reports in the countries examined contained at least some examples of good outcome indicators. A focus on outcomes is possible in many areas of work.

However, an issue emerging from the analysis is that, where outcome indicators are specified, the reported results do not always actually tell anything about performance against the indicator. For example, the U.S. agriculture report has an indicator concerning “application and usage level of nutrition guidance tools,” but reporting on this indicator focuses on the number of pieces of nutrition guidance distributed, which tells nothing about their application or use. Similarly, the U.S. health report contains an indicator to “reduce the disparity between African-American and White infants in back sleeping by 50% to reduce the risk of Sudden Infant Death Syndrome,” but the actual performance reported is the number of campaign materials distributed.

Good Practice in Preparation: A Good Performance Story Providing Contextual Information

Agriculture Advancing Australia—Farm Help 2007–08: $2.79 million

Performance measures
Short-term financial assistance and training provided to farmers experiencing financial difficulty.

The extent that farm families examine their options for the future and take action to improve their future financial prospects, either on or off the farm.

Performance
During 2007–08, 25 applicants were granted Farm Help Income Support, 568 advice and training sessions were attended, and 15 customers chose to take a re-establishment grant and leave farming. Training sessions have increased since 2006–07 (when 437 sessions took place) and the number of customers, including the number of re-establishment grants, has decreased (2006–07: 53 grants).

Monthly management information reports from Centrelink indicate program uptake and trends, and exit surveys provide feedback on program effectiveness. A mid-term review of the program and a five-year longitudinal study of exiting participants contributed to changes in the program. The fourth wave of the longitudinal study, completed in October 2007, compares Farm Help to Exceptional Circumstances (EC) Relief Payments and EC Income Support and also examines the effectiveness of Farm Help in assisting farmers to improve their financial outlook.

Results from the fourth wave of the longitudinal study were consistent with the results of previous years, and also emphasized the value of Pathways Planning. The program’s ‘pathways planning’ approach effectively encourages farmers to think about future options and actions needed to improve their financial viability. Farmers who continue with a planning approach after exiting the program are likely to experience better financial outcomes over the medium to longer term than farmers who do not. The fourth wave of the longitudinal study found that one-third of respondents who had prepared a pathways plan were in a better financial position than the year before.

The Farm Help experience led many farmers (up to 61%) to obtain further professional advice at their own expense to better secure their financial and farming futures. Up to a third undertook further study or training after exiting the program—a much higher proportion than among the general adult population.

Of those who took re-establishment grants, 88% agreed that the grant had helped them to adjust to life off the farm, and 85% said that leaving their farm had been a positive change in their life.

Source: Department of Agriculture, Fisheries and Forestry Annual Report 2007–08, http://www.daff.gov.au/about/annualreport/2007-2008


Another example of an outcome indicator which actually reports on output indicators comes from Canada. An indicator in the Canadian health report is “decrease in health-related, at risk behaviors associated with substance use within the general population, and specifically, youth and Aboriginal persons.” But the results reported are the amount of money provided for community-based initiatives to prevent and/or diminish substance use/abuse; the number of clandestine laboratories investigated and dismantled; and the level of disrupted production of doses of meth/ecstasy/GHB. These are all important outputs, but they do not report on the outcomes the indicator identifies.

An indicator for food innovation grants (FIG) in Australia sets out to measure “the percentage of FIG grants that result in new or improved food products and packaging, processing, storage and distribution technologies.” But the results reported set out the number of grant payments made, the number of food manufacturers who obtain grants, and total investment in research and development as a result of the program.

Conversely, we came across examples in which indicators were poorly specified, or specified in output terms, but in which the associated results reporting contained useful information on outcomes. In these cases, information exists to present useful outcome indicators in the reports, but this information is not being effectively used, and readers have to go into the details of the report to find out that outcomes are in fact being reported.

Good Practice in Preparation: The Challenge of Specifying Indicators

Examples of poorly specified indicators

• “Consultation with stakeholders on regulatory change in relation to therapeutic products, genetically modified organisms, industrial chemicals, pesticides, and veterinary medicines as measured by timeliness and thoroughness of consultation” (Australian health report)

• “Hospital networks that provide quality care as close as possible to where people live” (Irish health report)

• “Creation of research and evidence-based knowledge regarding rural Canada, community capacity building, and rural development” (Canadian agriculture report)

• “Ongoing promotion of safety education program targeted at schoolchildren” (Irish transportation report)

• “Completion of agreed activities to enhance Australia’s animal and plant health infrastructure and capacity to respond to emergencies including: through critical infrastructure protection activities; improved biosecurity awareness; improved national preparedness for emergency pest and diseases; implementation of obligations under the international convention for chemicals; enhancement of the national capacity to respond to emergency animal disease; enhanced emergency pest response capacity and ability to define plant health status; enhancement of diagnostic capacity and national plant health surveillance capacity; mitigation of the impact of invasive species through improved early detection and rapid response to plant health invasive species” (Australian agriculture report)

Examples of well-specified indicators

Good output indicators

• “Increases percentage of general practice patient care provided by practices participating in the Practice Incentives Program as measured by the percentage of general practice patient care covered by practices participating in the Practice Incentives Program” (Australian health report)

• “Over 90,000 women screened nationally by the BreastCheck program in 2008” (Irish health report)

Clear outcome-focused indicators

• “Passenger vehicle occupant highway fatality rate per 100 million passenger vehicle-miles traveled” (U.S. transportation report)

• “Reduce prevalence of Canadians exposed to secondhand smoke from 28% to 20% by 2011” (Canadian health report)

• “Reduce complications of diabetes by increasing the proportion of American Indian/Alaska Native patients with diagnosed diabetes that have achieved blood pressure control (<130/80)” (U.S. health report)

• “Maintenance or increase in the proportion of fitted Hearing Service Program clients who use their device/s for 5 or more hours per day” (Australian health report)


In these cases, information exists to present useful outcome indicators in the reports, but this information is not being effectively used, and readers have to go into the details of the report to find out that outcomes are in fact being reported.

There are also examples of output indicators which actually report on outcomes. An indicator in the agriculture report for Canada linked to environmental sustainability is “environmental components included in implementation agreements signed with all provinces and territories.” But, in the detailed reporting on performance, as well as the number of agreements signed, the report gives information on the results achieved from environmental farm plans: that those with a plan are twice as likely (61 percent compared to 32 percent) to use soil testing to determine proper fertilizer application rates to meet crop needs and to reduce nutrient runoff; twice as likely to use soil testing and nutrient content of manure to determine manure application rates, reducing the risk of surface and groundwater contamination; and more likely to protect and maintain riparian areas.

An output indicator used in the Irish transportation report is “implementation of road safety strategy 2007–2012 measures by target date.” In addition to reporting on the number of actions completed and outstanding, the report gives information on the numbers of people killed on Irish roads.

The key point here is that reporting on outcomes needs to be followed through logically at all stages of reporting. Outcome indicators need to be specified as indicators, and then the results reported against the indicator need to reflect the content of the indicator.

Key Attributes of Presentation

Key Attribute Four: Having information on both targets and baseline data combined to guide performance assessment over time.

The U.S. performance reports are ahead of the other reports when it comes to presenting established targets and baseline data in the reports. Target levels of performance for the year under scrutiny are clearly established for nearly all indicators. And baseline data for previous years’ performance (up to five years being the norm) are presented alongside the indicator. In the performance reports of Australia, Canada, and Ireland, the use of targets and baseline data is much less frequent. Where baseline data are present, the norm is for only the previous year’s performance to be presented.

The benefits of consistent use of targets and multi-year baseline data when it comes to facilitating judgment about performance are clear. Where they are absent, it is impossible to know if the level of performance reported is acceptable or otherwise. For example, “over 900 inspections conducted” (relating to safe and effective health products, Canada) and “in 2007–08 two grant payments were made to the horticulture industry” (Australia) are examples of indicators of outputs presented in performance reports, but without a target and/or baseline data from previous years, it is impossible to know what these reported levels mean.

It is important that both target and baseline data are used in reports rather than reporting on one alone. Metzenbaum (2009) stresses the importance of communicating performance trends and targets, not simply target attainment. A central point of her argument is that the percentage of targets attained is of itself of limited use; rather:

Figure 7: Example of Good Use of Target and Baseline Data, American Indians and Alaska Natives with Diagnosed Diabetes who have Achieved Blood Pressure Control. (The original chart plots results to date against targets for 2005–2009 on a 0–40% scale.)

Source: http://www.hhs.gov/asrt/ob/docbudget/citizensreport.pdf, p. 13


“It is far more informative and objective to communicate whether, where, in what direction, and by how much performance and related indicators are moving. Reporting performance trends indicates whether or not program outcomes and interim outcomes are going in the direction desired, suggesting whether agency actions are working as intended, not simply whether a target has been met or a commitment fulfilled” (p. 16).
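To make this point concrete, the following is a minimal sketch in Python of reporting direction and size of movement alongside simple target attainment. All figures and names below are hypothetical, loosely echoing the format of Figure 7; nothing here is drawn from the reports studied.

    years = [2005, 2006, 2007, 2008, 2009]
    results = [30.1, 31.5, 33.0, 34.5, 35.9]   # hypothetical: % of patients achieving control
    target_2009 = 35.0                          # hypothetical target for the final year

    # Target attainment alone: a single yes/no answer.
    met = results[-1] >= target_2009

    # Metzenbaum's richer framing: direction and size of movement over time.
    change = results[-1] - results[0]
    direction = "improving" if change > 0 else "declining or flat"

    print(f"2009 target met: {met}")
    print(f"Trend 2005-2009: {direction} ({change:+.1f} percentage points)")
    print(f"Distance from 2009 target: {results[-1] - target_2009:+.1f} points")

The point of the sketch is that the last two lines of output carry information the first cannot: a report that shows only “target met: True” hides both the trend and how close the margin was.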

There are challenges—in reporting on target setting and on the establishment of baseline data—that need to be addressed. Some examples of the types of challenges involved, and how they are being addressed, emerge from a scrutiny of the performance reports:

• Setting targets for demand-led programs. A particular challenge is associated with target setting for demand-led programs, because demand is difficult to forecast. In these circumstances, keeping the target open for revision is important. For example, the Australian health report states that the target for the number of Medicare rebates provided for services was revised from an estimated 264 million services (12.6 services per capita) in the original estimates to an estimated 275 million (13 services per capita) in the additional estimates, due mainly to strong uptake of newly introduced primary care items. The revised target was the one reported against in the performance report.

Good Practice in Presentation: A Well-Designed Outcome Indicator Report6

Source: http://www.health.gov.au/internet/annrpt/publishing.nsf/Content/3C6696A0554501F1CA2575A5008138F0/$File/Full%20Report%20of%20the%202007-08%20Annual%20Report.pdf, p. 42


Sometimes it can be helpful to set a range rather than a single figure for acceptable performance. In the U.S. agriculture report, targets for demand-led schemes have a range associated with them, and the target is deemed to be met if the actual indicator falls within that range. An indicator of dollar value of agricultural trade preserved through trade agreement negotiation, monitoring, and enforcement has a 2008 target of $900. The report notes that this target is controlled by international parties, and that data assessment metrics to meet the target allow for an actual number in the range of $600-$900. (A minimal sketch of this kind of range-based assessment follows this list.)

• Handling the revision of baselines. Sometimes, in light of new and enhanced information or revisions in thinking, the baseline data need to be revised. For example, the U.S. transportation report notes that the indicator of motorcyclist fatalities was re-baselined in 2008 to reflect a change in focus from fatalities per 100 million vehicle miles traveled to fatalities per 100,000 registrations. In these circumstances, it is important to be clear about the change, the reasons for it, and the implications for an assessment of performance.

• The impact of legislative change. A performance indicator in the U.S. health report is “Increase the percentage of Head Start programs that achieve average fall to spring gains of: (a) at least 12 months in word knowledge (Peabody Picture Vocabulary Test); and (b) at least four counting items.” But the performance report notes that, because of the Improving Head Start Act of 2007, which reauthorized the Head Start program, the national reporting system was terminated—and that Head Start will only report data for these performance measures through FY 2007. In such circumstances, where legislative change results in a program being significantly modified or abandoned, all that can be done is to note the position and indicate any new arrangements that are planned. It is also noted against the Head Start indicator that, in conjunction with planning for future years, experts will be consulted on the best options for replacement performance measures.

Good Practice in Presentation: Use of Graphics and a Hotlink to More Information

Source: http://www.dot.gov/par/2008/


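The range-based target described earlier in this list can be expressed as a simple decision rule. The following is a minimal sketch in Python; the function name, the sample value, and the units are illustrative assumptions rather than anything taken from the U.S. agriculture report.

    def assess_range_target(actual, low, high):
        # Classify an actual result against an agreed target range: the
        # target counts as met if the actual value falls anywhere within it.
        if low <= actual <= high:
            return "met (within agreed range)"
        return "below range" if actual < low else "above range"

    # Hypothetical usage, echoing the $600-$900 range described above:
    print(assess_range_target(actual=742.0, low=600.0, high=900.0))

The design choice here is that a result anywhere in the agreed band counts as success, which suits demand-led indicators whose levels are partly outside the agency's control.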

Key Attribute Five: Ensuring good presentation and effective use of technology.

With regard to presentation, poor practice in the performance reports surveyed often related to unclear, partial, and overly long performance information being presented in the report. Sometimes, the information required was not present at all. At other times, the information was there but was hard to find without searching for it. For example, targets and baseline data are sometimes available in the performance reports but are difficult to find because they are not presented next to the indicator. The Canadian agriculture report sets out a table of performance indicators showing changes in farm income over time, but it is only at the end of one of the paragraphs in the accompanying text that a target is set out: to see net farm income be at least 80 percent of the previous five-year average. Similarly, there is an excellent table in the Australian transportation report that gives baseline trend data on transport safety in areas such as accident and incident notification and outcomes of investigations, but the table is isolated at the end of a section on safety and is not referenced in the text. And, as has been noted earlier, examples exist of outcome information being available but buried in the text.
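As an aside, the buried Canadian farm income target is easy to state as arithmetic. Here is a minimal sketch in Python with entirely hypothetical figures; the real series would come from the report's own tables.

    prior_years_income = [4.1, 3.8, 4.4, 4.0, 4.2]  # hypothetical net farm income, previous five years
    current_income = 3.5                             # hypothetical current-year figure

    five_year_avg = sum(prior_years_income) / len(prior_years_income)
    threshold = 0.8 * five_year_avg  # target: at least 80% of the five-year average

    print(f"Five-year average: {five_year_avg:.2f}; 80% threshold: {threshold:.2f}")
    print("Target met" if current_income >= threshold else "Target not met")

Stated this plainly next to the indicator, the target would be far harder to miss than it is at the end of a paragraph of accompanying text.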

Effective use of technology can be helpful in aiding readers of performance reports in sifting through the mass of data available. Despite efforts to keep performance indicators at a high level and report only on key output and outcome issues, Table 2 (referenced earlier) shows that there are still a lot of indicators to be found in performance reports. When accompanying text and narratives are included, the reports run to many pages. And often, the indicators raise questions in readers’ minds; they would like further information, and the ability to drill down to supplementary data is useful.

Good Practice in Presentation: Canada Provides Structured and Consistent Online Access to the Performance Information Contained in Performance Reports

Source: http://www.tbs-sct.gc.ca/ppg-cpr/home-accueil-eng.aspx

Canada has moved all of its major reports online, creating a webpage dedicated to parliamentarians (http://www.tbs-sct.gc.ca/tbs-sct/audience-auditoire/parliamentarian-parlementaire-eng.asp). This in turn links to a Government of Canada planning and performance gateway. It is possible to drill down and search by spending area, by outcome area, and by organization.


This level of detail is not a problem in and of itself, as governments are increasingly involved in complex issues and the public and politicians demand more information, not less. What is needed in this context is a way of navigating through the performance information.

Canada and the United States are the two countries examined that appear to have made the most progress with regard to using technology to support the readers of performance reports. In the U.S. health and transportation reports, for example, several of the indicators reported give hot links to websites that contain additional information on the outcome area under scrutiny. As McCormack (2007) notes, while electronic reporting will never replace hard copy versions for parliamentarians, used judiciously it has a number of advantages:

• It gives the reader the ability to drill down through information within a consistent structure.

• It creates the capacity for users to tailor information to their needs.

• It is accessible and portable.


Key Attribute Six: Providing output and activity indicators as well as outcome indicators when discussing agency performance.

The U.S. approach to performance reports has been to focus almost exclusively on outcomes. As we have seen, this approach differs substantially from the other countries examined, where the majority of indicators in the performance reports focus on outputs. In part, this can be attributed to differences in emphasis as to what the performance reports are there to do. The U.S. government has clearly decided to use the reports to track performance against program outcomes, whereas in the other countries examined the reports are also used as accountability mechanisms for agency performance.7 In judging agency performance, output and activity indicators help provide a more complete picture.

For example, in the Canadian agriculture report, in the section on pest management, it is reported that there is “management of 79 projects (including 40 new projects for 2007-08) with 95 percent (75 of 79) on target and meeting milestones as of March 2008.” This informs any judgment as to how well the department is managing this function. Timeliness indicators are also frequently used in the Australian, Canadian, and Irish performance reports. For example, one indicator in the Australian health report is that “on average, eligible clients are issued with a voucher within 14 days from receipt of a completed application.” The performance narrative gives actual performance against the 14-day target and compares that with the previous year’s performance.

Policy advice

A particular challenge in reporting on the performance of agencies is devising appropriate indicators and information on the provision of policy advice. In response to central guidance, the Australian performance reports all contain some variant of an indicator regarding policy advice along the lines of “level of satisfaction of the minister with the quality and timeliness of ministerial correspondence, briefs, parliamentary questions, speeches and media releases as measured by the feedback received.”

In Ireland, the health report attempts to put figures on the work of supporting the minister: “Respond to an estimated 6,000 parliamentary questions, 8,000 ministerial representations, and 140 Dáil and Seanad adjournment debates. Give evidence to meetings of … committees (22 in 2007) and prepare Order of Business notes, briefs, speeches, and attend meetings as required.” Clearly such information is of limited use, and there is a sense that this is being reported by rote (nothing other than satisfactory feedback from ministers is reported in any of the Australian reports examined).

A slightly more thoughtful approach can be seen in the performance narrative associated with reporting on policy advice in the Australian agriculture report. While the indicator simply notes that the minister has expressed his satisfaction formally and informally, the accompanying narrative gives useful additional information. For example, it sets out five-year trends in ministerial work flow for items such as cabinet submissions and ministerial correspondence. And it accompanies this with the establishment of targets for ministerial correspondence that are reported: less than 5 percent of ministerial correspondence returned for re-drafting, and no overdue ministerials at close of business each Thursday. Such information at least gives one a sense of the quantity and quality of policy advice provided.



Recommendations for Improvement

Drawing from the six key attributes of a good system for outcome and output reporting, it is possible to identify six corresponding recommendations for actions aimed at getting better performance reports.

Recommendation One: When developing performance measurement systems, use a consistent, comparable, and structured approach to performance information across all agencies and programs.

A consistent and structured approach, comparable within and across agencies, enhances the quality of reporting. This in turn facilitates understanding and the interpretation of the output and outcome indicators.

Recommendation Two: Include a good performance story to accompany the indicators.

A good performance story helps with regard to interpreting the meaning and significance of the performance information. The performance story can identify external factors that impact on performance, and help inform any judgments and understanding of agency and program performance.

Recommendation Three: Specify outcome indicators, and fully explain the results reported against the indicator.

Good outcome indicators are possible. But it is important that, when such indicators are specified, actual results against the indicators are reported. The performance story must tell the story of the outcome indicator, and not just of some activity or output contributor to the outcome.

Recommendation Four: Provide both target and baseline data.

Both target and baseline data should guide performance assessment over time.

Recommendation Five: Ensure effective use of technology in presenting the performance data collected.

People can feel overwhelmed by the sheer amount of performance information available. Making effective use of presentation techniques and technology through means such as the use of hot links and electronic reporting can help readers navigate their way through the data.

Recommendation Six: Present agency performance information which includes output and activity indicators in addition to outcome indicators.

If a purpose of the performance reports is to account for agency performance, good output and activity indicators in the performance report are helpful in painting a richer picture of the outcomes achieved.


Appendix: A Strategic Focus on Outcomes—A Global Trend

from Moving from Outputs to Outcomes: Practical Advice from Governments Around the World by Burt Perrin (available at the IBM Center website: www.businessofgovernment.org)

Over the last decade, countries around the world have undertaken reforms with the aim of improving the relevance and effectiveness of public services and the quality of public sector management. A key aspect of most reform processes is a focus on results and, in particular, on outcomes.

Until recently, the performance of public sector programs, and of their managers, has been judged largely on inputs, activities, and outputs. This approach, however, has come into question. One of the major factors behind many reform initiatives is a concern that government too often is preoccupied with process and following rules, and it is not clear what benefits are actually arising from public sector expenditures and activities.

External influences also have played a role in stimulating movement toward a results orientation. An outcome focus increasingly is a prerequisite for financial and other forms of support; for example, as both Ireland and Spain have indicated, one pressure for a results orientation came from the European Union (EU). Leadership from the EU has influenced the administrative systems of the 10 new Member States, mainly from Eastern Europe, and is a major factor influencing reform in other countries that are interested in future membership or closer relations with the EU. Both Spain and Ireland touch upon the role the EU has played in influencing directions in their countries. The World Bank and other development banks, along with many multilateral and bilateral donors, are increasingly demanding an outcome orientation, along with appropriate monitoring and evaluation systems, as a condition of financial and other forms of support.

External pressure can come as well from the other direction, such as from civil society. A number of countries emphasized the importance of the demands of civil society for tangible results that helped lead to their outcome approach. Civil society attention has also been cited as an important factor in sustaining the efforts and in providing a democratic basis for reform efforts linked to the needs and desires of the citizenry.

A number of benefits to an outcome-oriented approach have been identified; for example, it can serve as a frame of reference to ensure that inputs, activities, and outputs are appropriate. It represents a means of demonstrating the value and benefits of public services to citizens and to the legislature. At least as important, an outcome focus is an essential component of a learning approach that can identify how policies and program approaches may need to be adjusted, improved, or replaced with alternatives. It is essential not only to demonstrate that outcomes have occurred, but that the interventions in question have contributed to these in some way.


Acknowledgments

The author wishes to acknowledge the contribution of his colleagues, Orla O’Donnell (research officer) and Deirdre Mooney (administrative assistant) of the Institute of Public Administration, in the analysis of reported performance indicators. O’Donnell undertook much of the detailed work involved in the data analysis and presentation.


Endnotes

1. In practice, there is no common understanding or use of the term “performance indicator” displayed in the performance reports, either within or across countries. In some reports, the term “performance measure” is used, and in others, “performance indicator.” Not infrequently, the indicator is set out in the form of an objective (for example, “the efficient and effective delivery and adoption of research and development”), and the actual indicator data reporting on achievement of the objective is contained in the results information in the supporting documentation. The term performance indicator is used in this report to specify the indicator or measure, however defined, used in the performance report when reporting on program performance.

2. The Australian Department of Transport and Regional Services had its responsibilities reallocated during the year under scrutiny, and became the Department of Infrastructure, Transport, Regional Development and Local Government. For comparative purposes, only performance indicators from the original Transport and Regional Services portfolio budget statement 2007-08 are included in this study.

3. Until 2007, a single performance and accountability report (PAR) was produced by each agency. In 2007, a pilot program was introduced that enabled participating agencies to produce three documents: agency financial report, annual performance report, and a highlights document (subsequently renamed as a “citizens report”). Of the reports examined for this study, only the Department of Health and Human Services produced a citizens report, and this report was used for the analysis. The PAR was used for agriculture and transportation.

4. While most indicators could be classified with a reasonable degree of certainty against particular criteria, it was not always clear how to classify the indicator. For example, it was not always clear if an indicator was an output or outcome indicator. Similarly, it may be that some indicators classified as, for example, not measurable, are indeed capable of measurement; it is just that there was no evidence of this displayed in the relevant section of the performance report. The time-bound nature of many indicators was particularly difficult to assess, as many indicators had implicit annual time limits associated with the reporting cycle, but this was not always clear. To try to ensure consistency, the author and a research colleague both separately classified the indicators and then discussed ones for which a difference of opinion existed to determine a final classification. But an element of judgment inevitably plays a role in any classification of this nature.

5. The format of the Canadian transportation report was different from that of all other reports studied when it came to setting out performance indicators. The eight indicators used in the study were identified from the sections of the report on performance measurement under each program activity for each strategic outcome. The report was narrative in nature rather than explicitly setting out performance indicators, as do all of the other reports.

6. Although this is an example of good practice in the presentation of outcome indicator reporting, it displays some of the limitations noted in Endnote 1. The specified indicator is set out in terms of an objective, and it is in the “measured by” section that the actual indicator data to be used are specified.

7. The performance of the agency itself is scrutinized separately in the performance and accountability reports in the United States, and also in more detail in other reports to Congress.



References

Australian National Audit Office. (2007). Application of the outcomes and outputs framework. Performance Audit Report No. 23 2006-07. Canberra: Australian National Audit Office.

Bouckaert, G. & Halligan, J. (2008). Comparing performance across public sectors. In W. van Dooren and S. van de Walle (eds), Performance Information in the Public Sector: How It is Used. New York: Palgrave Macmillan, 72-93.

Breul, J. D. (2007). GPRA: A foundation for perfor-mance budgeting. Public Performance & Management Review, 30(3), 312-331.

Mayne, J. (1999). Addressing attribution through contribution analysis: Using performance measures sensibly. Ottawa: Office of the Auditor General of Canada. http://www.oag-bvg.gc.ca/internet/docs/99dp1_e.pdf (last accessed October 29, 2009).

Metzenbaum, S.H. (2009). Performance management recommendations for the new administration. Washington, DC: IBM Center for The Business of Government. http://www.businessofgovernment.org/pdfs/MetzenbaumNewAdmin.pdf (last accessed October 21, 2009).

McTigue, M., Wray, H., & Ellig, J. (2009). 10th Annual performance report scorecard: Which federal agencies best inform the public? Arlington, VA: Mercatus Center, George Mason University.

Organisation for Economic Co-operation and Development. (2005). Modernising government. Paris: OECD.

Perrin, B. (2006). Moving from outputs to outcomes: Practical advice from governments around the world. Washington, DC: IBM Center for The Business of Government. http://www.businessofgovernment.org/pdfs/PerrinReport.pdf

Pollitt, C. (2006). Performance information for democracy? Evaluation, 12(1), 38-55.

Treasury Board of Canada. (2007). Performance reporting: Good practice handbook. Ottawa: Treasury Board of Canada. http://www.tbs-sct.gc.ca/rma/dpr3/06-07/handbk-guide/gph-gbp_e.asp (last accessed October 7, 2009).

U.S. Department of Energy. (2006). FY 2006 perfor-mance and accountability reporting guidance and requirements. http://www.cfo.doe.gov/cf1-2/2006%20PAR%20Guidance%206-07-06.pdf (last accessed October 7, 2009).



About the Author

Richard Boyle, PhD, is Head of Research with the Institute of Public Administration (IPA) in Ireland. He has worked with the IPA since 1986. His main research interests focus on public service modernization, managing for results in the public sector, and developing and implementing effective performance management and evaluation systems.

Dr. Boyle has worked with many government departments, state agencies, and nonprofit organizations on performance measurement and evaluation. In addition to working with many Irish public service organizations, he has worked with the European Commission, the Organisation for Economic Co-operation and Development, and the World Bank in promoting performance management and evaluation capacity building. Dr. Boyle served on the Board of the European Evaluation Society (2001-2005). He is founder and chair of the Irish Evaluation Network. Recently, Dr. Boyle has been commissioned by the Hong Kong government efficiency unit to be executive editor and chief researcher for a series of reports on global public service reform developments. He is a member of the International Evaluation Research Group on Policy and Program Evaluation (INTEVAL). He is on the editorial board of the International Journal of Public Sector Management.

Dr. Boyle holds a bachelor’s degree and a master’s degree from the University of Liverpool, and a PhD from the University of Manchester.


Key Contact Information

To contact the author:

Richard Boyle, PhD
Head of Research
Institute of Public Administration
57-61 Lansdowne Road
Ballsbridge
Dublin 4
Ireland
+353 1 240 3756 (direct)

e-mail: [email protected]


REPORTS from The IBM Center for The Business of Government

Recent reports available on the website include:

Analytics

Strategic Use of Analytics in Government by Thomas H. Davenport

Collaboration: Networks and Partnerships

Designing and Managing Cross-Sector Collaboration: A Case Study in Reducing Traffic Congestion by John M. Bryson, Barbara C. Crosby, Melissa M. Stone, and Emily O. Saunoi-Sandgren

Contracting

The Challenge of Contracting for Large Complex Projects by Trevor L. Brown, Matthew Potoski, and David M. Van Slyke

Defense

Transformation of the Department of Defense’s Business Systems by Jacques S. Gansler and William Lucyshyn

E-Government/Technology

Moving to the Cloud: An Introduction to Cloud Computing in Government by David C. Wyld

Creating Telemedicine-Based Medical Networks for Rural and Frontier Areas by Leonard R. Graziplene

Financial Management

Managing Risk in Government: An Introduction to Enterprise Risk Management by Karen Hardy

Managing a $700 Billion Bailout: Lessons from the Home Owners’ Loan Corporation and the Resolution Trust Corporation by Mark K. Cassell and Susan M. Hoffmann

Healthcare

The Role and Use of Wireless Technology in the Management and Monitoring of Chronic Diseases by Elie Geisler and Nilmini Wickramasinghe

Creating Telemedicine-Based Medical Networks for Rural and Frontier Areas by Leonard R. Graziplene

Human Capital Management

Federated Human Resource Management in the Federal Government by James R. Thompson and Rob Seidner

Managing for Performance and Results

Strategic Risk Management in Government: A Look at Homeland Security by David H. Schanzer, Joe Eyerman, and Veronique de Rugy

Performance Management Recommendations for the New Administration by Shelley H. Metzenbaum

Organizational Transformation

Launching a New Mission: Michael Griffin and NASA’s Return to the Moon by W. Henry Lambright

Social Services

US and UK Routes to Employment: Strategies to Improve Integrated Service Delivery to People with Disabilities by Heike Boeltzig, Doria Pilling, Jaimie C. Timmons, and Robyn Johnson

For a full listing of IBM Center publications, visit the Center’s website at www.businessofgovernment.org.


Page 34: Performance Reporting: Insights from International Practice · PERFORMANCE REPORTING: INSIGHTS FROM INTERNATIONAL PRACTICE F O R E W O R D David Treworgy Jonathan D. Breul On behalf

About the IBM Center for The Business of Government

The IBM Center for The Business of Government connects public management research with practice. Since 1998, we have helped public sector executives improve the effectiveness of government with practical ideas and original thinking. We sponsor independent research by top minds in academe and the nonprofit sector, and we create opportunities for dialogue on a broad range of public management topics.

The Center is one of the ways that IBM seeks to advance knowledge on how to improve public sector effectiveness. The IBM Center focuses on the future of the operation and management of the public sector.

About IBM Global Business Services

With consultants and professional staff in more than 160 countries globally, IBM Global Business Services is the world’s largest consulting services organization. IBM Global Business Services provides clients with business process and industry expertise, a deep understanding of technology solutions that address specific industry issues, and the ability to design, build and run those solutions in a way that delivers bottom-line business value. For more information visit www.ibm.com.

For additional information, contact:

Jonathan D. Breul
Executive Director
IBM Center for The Business of Government
1301 K Street, NW
Fourth Floor, West Tower
Washington, DC 20005
(202) 515-4504, fax: (202) 515-4375

e-mail: [email protected]
website: www.businessofgovernment.org