Use of Evaluative Information in Foundations: Benchmarking Data


Page 1: Use of Evaluative Information in Foundations: Benchmarking Data

Use of Evaluative Information in Foundations: Benchmarking Data

Patrizi AssociatesJune 2010

Page 2: Use of Evaluative Information in Foundations: Benchmarking Data

2

Study Purpose

Benchmark foundation practices regarding evaluation functions and responsibilities and how evaluation resources are deployed.

Explore perceptions of how well foundations use evaluative information.

Explore patterns of “demand” for evaluative information.

Set the stage for the July 2010 Evaluation Roundtable Meeting to consider how evaluative information can be used effectively to advance foundation capacity to develop and guide strategy in complex and challenging environments.

Page 3: Use of Evaluative Information in Foundations: Benchmarking Data

3

Study Overview

About the study
The focus of this study is on “evaluative information” rather than on “evaluation” in order to capture the range of functions and products used by foundations to gauge their own effectiveness.

The questions were posed to those responsible for evaluation in each of the participating foundations.

Although we’ve conducted benchmarking studies in the past, they were more narrowly focused on “evaluation,” more qualitative in nature, and included between 10 and 14 foundations. We are reluctant to use these as true points of comparison. However, we’ve included some historical references in this presentation based on these previous Evaluation Roundtable studies and from interviews conducted as part of this study.

Approach
33 foundations (US and Canada) with a history of strong evaluation use were invited to participate. We sent a web-based survey to the person who led the evaluation unit or had major responsibility for evaluation. 31 foundations completed the survey. 26 foundations returned foundation expenditure information. 29 participated in follow-up phone interviews.

Time period of study
The study was conducted in summer 2009. Respondents were asked to provide data for 2007 and 2008 and to reflect on changes over the last five years.

Analysis
We examined overall responses to identify patterns across respondents and segmented responses by:

– Size of the foundation’s yearly grantmaking:
  • Foundations under $50 million
  • Foundations of $50 million and under $200 million
  • Foundations of $200 million and over

– Reporting structure: whether the evaluation unit reports to the CEO, a program leader, or an administrative leader.

Caveat
The sample size is small, although it includes over 50% of foundations with grantmaking over $200 million annually and nearly 15% of those awarding over $50 million annually. The universe of foundations of interest is even smaller, in light of our criterion for study participation that a foundation have an expressed or demonstrated strong interest in evaluation.

Page 4: Use of Evaluative Information in Foundations: Benchmarking Data

4

Participating Foundations by Grantmaking Size and Reporting Structure

Evaluation reports to CEO:
– Foundations under $50 million: Bruner Foundation*, Colorado Trust, Edna McConnell Clark Foundation, J.W. McConnell Family Foundation, New York State Health Foundation
– Foundations between $50 and $200 million: Cleveland Foundation, James Irvine Foundation, Ontario Trillium Foundation, Wallace Foundation, William Penn Foundation
– Foundations above $200 million: Hewlett Foundation, Robert Wood Johnson Foundation

Evaluation reports to Administrator:
– Foundations under $50 million: Barr Foundation*, Marin Community Foundation
– Foundations between $50 and $200 million: California Endowment, Lumina Foundation, Rockefeller Foundation
– Foundations above $200 million: Atlantic Philanthropies, Gates Global Health*+, Gates Global Development*+, Gates U.S. Program*+, Pew Charitable Trusts

Evaluation reports to Program:
– Foundations under $50 million: California Health Care Foundation, Kauffman Foundation
– Foundations between $50 and $200 million: Annie E. Casey Foundation, California Wellness Foundation, Hilton Foundation, Knight Foundation
– Foundations above $200 million: Ford Foundation, Kellogg Foundation, Packard Foundation

* These foundations did not provide evaluative information expenditure data.

+ Because of the unique nature of the BMGF’s operations, they were counted as 3 separate foundations. At the time of the survey, evaluation staff reported to an administrator; shortly after the survey administration, the Foundation delegated this function to the three program presidents.

Page 5: Use of Evaluative Information in Foundations: Benchmarking Data

5

Study Context: Forces Shaping the Evolution of Evaluation in Foundations

The evaluation function in philanthropy is relatively new, dating to the late 1970s and early 1980s, with a large expansion in the number of foundations with dedicated evaluation staff in the 1990s.

Evaluation emerged during a period of professionalization of philanthropy, when many foundations shifted their approach from that of a charitable grantmaker responding to grant requests toward a more directive and purposeful role as a “strategist.”

With their expanded role as strategic actors, foundations increased their attention to evaluation. Being a strategic philanthropy soon became linked to being an “effective” philanthropy and, with this, to an increased focus on measurement.

Reflecting this overall shift toward strategic philanthropy, evaluation units have expanded their focus from assessing whether grantees are effective toward assessing whether foundation strategies are effective and whether foundations “add value.”

A look at the trends in evaluation unit titles is revealing. We hypothesize that much of this evolution corresponds to the growth of strategic philanthropy.

Trends in Evaluation Unit Titles
– 1980s: Research and evaluation
– Early 1990s: Planning and evaluation
– Late 1990s: Organizational learning
– 2000s: Impact “something”; Strategic “something”

Page 6: Use of Evaluative Information in Foundations: Benchmarking Data

6

Study Context: Tensions and Challenges

The role and function of evaluation sits astride several philosophical debates related to this evolution of philanthropy, largely pivoting around the degree to which evaluation serves to increase “learning” vs. its role in assuring accountability.

This core tension is played out in numerous questions regarding reporting, responsibility, orientation (internal or external) and level of resources relative to value.

In the shift toward strategic philanthropy, foundations have been challenged to reframe evaluation from an older model of “post hoc” assessment of grantees for accountability to one that examines their own work and is structured to inform strategy from start to finish.

This shift has resulted in many changes in the roles, responsibilities and structure of the evaluation function. This survey sought to describe how foundations use “evaluative information” (in its many forms) to guide their work and to analyze whether reporting relationship or size affects the use of evaluative information. We looked at:
– The range of foundation activities employed to produce evaluative information
– The resources available for these functions
– Perceptions of internal use of and demand for evaluative information by different groups within foundations

This study sets the stage for discussion at the Evaluation Roundtable Meeting, Information and Its Use in Supporting Strategy. We hope to reflect on the data and its implications for what foundations need to enhance strategic learning.

Page 7: Use of Evaluative Information in Foundations: Benchmarking Data

7

Benchmarking Evaluation in Foundations: 2009 Findings

I. Evaluation Functions and Responsibilities

II. Evaluation Resources: Staffing and Budget

III. Perceptions of Demand for and Use of Evaluative Information

Page 8: Use of Evaluative Information in Foundations: Benchmarking Data

8

What Functions Do Evaluation Units Perform? What are the principal responsibilities of your unit?

Most foundations have expanded the role of the unit beyond supporting evaluations, including but not limited to the functions illustrated below.

Evaluation units have a major role in knowledge management in 60% of responding foundations, and this is more often the case when staff report to the CEO (73%) or to program (67%). Only 40% of those reporting to administrators are involved in knowledge management in any way.

Two developments have emerged as important aspects of the job: involvement in performance metrics and in program strategy. We know from prior work that an evaluation unit’s involvement in strategy development was relatively rare five years ago; now 90% have a role in strategy.

Evaluation Functions and Responsibilities

[Bar chart: percentage of foundations whose evaluation unit performs each function (Evaluation, Performance metrics/indicators, Knowledge management, Research other than evaluation, Aiding in development of program strategy); values shown range from 50% to 97%.]

Page 9: Use of Evaluative Information in Foundations: Benchmarking Data

9

What Types of Evaluative Activities Do Evaluation Units Do? Which of the following types of evaluation/performance metric activities does your foundation do?

There has been an increase in all types of evaluative information. In the past, individual grant and initiative evaluations made up the majority of the portfolio of foundation evaluation work. Now, although nearly all foundations surveyed still evaluate individual grants and initiatives, the vast majority also support the full spectrum of evaluative activities listed in the chart.

Indicators work and strategy evaluations are an increasingly important part of the portfolio of work, whereas they were relatively rare five years ago.

Evaluations of larger aspects of foundation work, such as entire program area assessments and foundation-wide assessments, are also more common; however, about 25% of the foundations responding typically do not conduct these types of assessments.

Evaluation Functions and Responsibilities

[Bar chart: percentage of foundations conducting each type of evaluative activity (Individual Grant Evaluations, Initiative Evaluations, Strategy Evaluations, Entire Program Area Assessments, Foundation-wide Assessments, Satisfaction/Perception Surveys, Identifying Indicators of Grantmaking Performance, Tracking Grantmaking Indicators, Identifying Indicators of Foundation Performance, Tracking Foundation Indicators); values shown range from 70% to 93%.]

* This chart does not include when “other units” have primary responsibility. 4% of respondents indicated that another unit had primary responsibility for each activity except perception surveys, where 21% of foundations delegated this responsibility to an “other unit.”

Page 10: Use of Evaluative Information in Foundations: Benchmarking Data

10

How is Responsibility for Evaluation Distributed Throughout Foundations?

We asked a series of questions about which unit in the foundation takes “primary” responsibility for an evaluative task.

Response options were:
– My unit has primary responsibility
– Program has primary responsibility
– My unit and program share responsibility
– Other unit has responsibility
– My foundation does not do this type of work

The purpose of these questions was to surface information regarding the role of evaluation staff in relation to program staff in the work of evaluation.

Evaluation Functions and Responsibilities

Page 11: Use of Evaluative Information in Foundations: Benchmarking Data

11

[Stacked bar chart: for each activity (Individual Grant Evaluations, Initiative Evaluations, Strategy Evaluations, Entire Program Area Assessments, Foundation-wide Assessments, Satisfaction/Perception Surveys, Identifying Indicators of Grantmaking Performance, Tracking Grantmaking Indicators, Identifying Indicators of Foundation Performance, Tracking Foundation Indicators), the share of foundations in which primary responsibility rests with the evaluation unit, is shared, or rests with program.]

Evaluation Is Not Exclusively the Responsibility of Evaluation Staff
Which unit has primary responsibility: evaluation, program, or shared?

Evaluation units tend to have primary responsibility when the focus of assessment is larger (i.e., strategy or foundation-wide evaluations) and conversely, program staff tend to have primary responsibility for individual evaluations, a smaller focus of assessment.

In most foundations, program staff assume a great deal of the responsibility for most types of evaluation.
– Most foundations give evaluation staff primary responsibility for foundation-level assessments (including perception surveys).
– Program staff have at least shared, if not primary, responsibility for identifying and tracking indicators of grantmaking performance.

Evaluation Functions and Responsibilities

Page 12: Use of Evaluative Information in Foundations: Benchmarking Data

12

Allocation of “Primary Responsibility” Differs Considerably Based on Reporting Structure

Evaluation Functions and Responsibilities

Evaluation Reports to Administrator
Units reporting to an administrator are also more likely to have “primary responsibility” for most evaluative activities, except identifying and tracking foundation indicators.

[Bar chart: percentage of administrator-reporting evaluation units with primary responsibility for each of the ten evaluative activities.]

Evaluation Reports to Program
A much different picture emerges if evaluation reports to program. Not one evaluation unit reporting to program has “primary responsibility” for individual grant evaluations, strategy evaluations, entire program area assessments, or for identifying grantmaking performance indicators.

[Bar chart: percentage of program-reporting evaluation units with primary responsibility for each of the ten evaluative activities.]

Evaluation Reports to CEO
CEO reports are much more likely to have “primary responsibility” for every type of evaluation and assessment.

[Bar chart: percentage of CEO-reporting evaluation units with primary responsibility for each of the ten evaluative activities.]

Page 13: Use of Evaluative Information in Foundations: Benchmarking Data

13

Evaluation Units Assume an Increased Role in Program Strategies

Evaluation units now have at least some role in the strategy development process, with nearly all respondents reporting that they are at least “somewhat” or “heavily involved” at both the start and end/renewal points of program strategy.

Overall, 64% report that they are “heavily involved” in strategy discussions in the early stages of strategy development and nearly the same (61%) report heavy involvement at the end or renewal point.

Overall participation in strategy drops off considerably in the “ongoing” stages of strategy evolution, with only 27% reporting that they are heavily involved in “providing feedback or critique” on an ongoing basis.

However, respondents who report to the CEO are more involved in program strategy at every stage of the strategy cycle (outset, on-going and end) than those reporting to others.

Evaluation Functions and Responsibilities: Program Strategy Assistance

Page 14: Use of Evaluative Information in Foundations: Benchmarking Data

14

Summation/Questions/Points to Consider: Functions and Responsibilities

The role of evaluation has expanded, particularly in program strategy.
There is an increase in the types of evaluative activities employed.
Reporting structure varies and seems to affect who has primary responsibility.

Thought starters:
– How does the distribution of responsibility reflect learning or accountability? What are the tradeoffs?
– How do you interpret the way in which the role of evaluation differs under each reporting relationship studied?
– What kinds of skills do program staff need to meet their responsibilities in the design and implementation of evaluations and evaluative activities?
– Why does the evaluation role in program strategy drop off during implementation?

Page 15: Use of Evaluative Information in Foundations: Benchmarking Data

15

Benchmarking Evaluation in Foundations: 2009 Findings

I. Evaluation Functions and Responsibilities

II. Evaluation Resources: Staffing and Budget

III. Perceptions of Demand for and Use of Evaluative Information

Page 16: Use of Evaluative Information in Foundations: Benchmarking Data

16

Tracking Spending on Evaluative Information

Nearly every foundation found it difficult to provide data on evaluation spending. Most foundations do not systematically track these costs.

We asked respondents to submit estimates* of their spending on a range of “evaluative information” activities where the foundation is a “primary user,” including:
– Evaluations
– Collection of data for indicators of foundation or program performance, and
– Other related expenditures to gather data to inform knowledge of foundation effectiveness

We received expenditure data from 26 foundations. Those not submitting these data are asterisked on slide 4. Neither the largest foundation (BMGF) nor the smallest (Bruner) is included in the analysis.

To augment these data, we asked staff for their perceptions about how spending for evaluative information compared to spending on grants over the last five years.

Evaluation Resources

* We have every reason to believe that respondents submitted as accurate a number as possible. Those respondents who felt they could not get good estimates did not participate in this part of the survey.

Page 17: Use of Evaluative Information in Foundations: Benchmarking Data

17

The Percentage Spent on Evaluative Information (Relative to the Grant Budget) Varies Greatly
Dollars Spent on Evaluative Information and as a Percentage of Grantmaking Budget

Foundation Size: Mean / Median / Minimum / Maximum (spending, with percent of grantmaking budget in parentheses)

Overall: $4,664,652 (3.7%) / $1,629,313 (2.2%) / $212,451 (0.3%) / $28,719,575 (17.8%)

Tier 1: Under $50 million: $1,150,068 (7.2%) / $1,054,500 (7.4%) / $212,451 (0.8%) / $3,000,000 (17.8%)

Tier 2: $50 to $200 million: $2,354,650 (2.4%) / $1,513,145 (1.6%) / $273,281 (0.3%) / $10,650,000 (6.5%)

Tier 3: Over $200 million: $12,139,240 (2.6%) / $6,037,536 (2.3%) / $500,000 (0.3%) / $28,719,575 (4.9%)

Although the highest percentage spent on evaluative information was 17.8%, nearly 40% of all foundations surveyed invest less than 1% of their grantmaking budget in these activities.

Smaller foundations tend to invest a greater portion of their grantmaking budget than those in the other two tiers, with the majority spending over 7% of their grantmaking budget.

In addition, those who report to the CEO or an administrator invested at least 33% more on evaluative information than those who report to program.

Evaluation Resources

These data are based on two years of spending for both evaluative information and the grants: FY 2007 and 2008. These data should be considered estimates as many foundations do not formally track this information.

Page 18: Use of Evaluative Information in Foundations: Benchmarking Data

18

Most Respondents Perceived an Increase in Evaluation Investments Prior to the Economic Downturn
Not considering the recent economic downturn: over the last five years, what is your perception of how funding levels for evaluation have changed relative to shifts in the size of the grants budget?

All Respondents
Availability of Evaluation Funds Compared to Grantmaking Spending

Net of Recent Economic Downturn

Increased Dramatically 3%

Increased Somewhat 59%

Stayed about the Same 31%

Decreased Somewhat 7%

Decreased Dramatically 0%

Not Considering the Economic Downturn: Most respondents believe that evaluation spending “increased somewhat” compared to their foundations’ grantmaking spending. Respondents who perceived increases in investment were largely those reporting to the CEO or an administrator.

Compared to those reporting to program, more than twice as many administrator reports and 60% more of those reporting to CEOs responded that their foundation increased their investments in evaluation.

Decreases in spending on evaluation were perceived only in units reporting to program.

Evaluation Resources

By Reporting Structure
Availability of Evaluation Funds Compared to Grantmaking Spending

CEO Admin Program
Increased 67% 78% 38%
Stayed Same 33% 22% 38%
Decreased 0% 0% 25%

Page 19: Use of Evaluative Information in Foundations: Benchmarking Data

19

What Was the Impact of the Economic Downturn on Evaluation?
How has the recent economic downturn affected the amount of funds available for evaluation compared to those available to grantmaking?

Considering the Economic Downturn: In most foundations, the poor economy did not affect investment in evaluation more than it did grantmaking. The majority of respondents report that evaluation spending relative to grantmaking spending remained constant despite the economic downturn.

However, there were clear differences based on reporting structure:
– All respondents who perceived an increase in evaluation spending were in units reporting to CEOs or administrators.
– Decreases in spending occurred most frequently in units reporting to program. This is notable if we consider the order of magnitude of the differences.

Evaluation Resources

All Respondents

Availability of Evaluation Funds Compared to Grantmaking Spending

Considering Economic Downturn

Increased Dramatically 0%

Increased Somewhat 10%

Stayed about the Same 62%

Decreased Somewhat 21%

Decreased Dramatically 7%

By Reporting Structure
Availability of Evaluation Funds Compared to Grantmaking Spending

CEO Admin Program
Increased 8% 20% 0%
Stayed Same 75% 50% 57%
Decreased 17% 30% 43%

Page 20: Use of Evaluative Information in Foundations: Benchmarking Data

20

How Is the Evaluation Function Staffed?
Evaluation Unit Professional Staffing (in FTEs)

Foundation Size Mean Median Minimum Maximum

Overall 3.0 2.0 0 14

Tier 1: Under $50 million 2.1 2.0 0.8 4.8

Tier 2: $50 to $200 million 2.0 2.0 0 4.0

Tier 3: Over $200 million 5.0 3.5 0 14

The smallest foundations (tier 1) have more evaluation staff relative to their size than those in the other tiers. Staffing within the largest foundations (tier 3) varies tremendously and is heavily skewed by two foundations, each with 14 FTE staff, almost 3 times the mean. Overall, staffing has gone down since 2005, from an overall mean of 3.9 FTEs and a median of 3.5 FTEs.

Evaluation Resources

Page 21: Use of Evaluative Information in Foundations: Benchmarking Data

21

Number of Evaluation FTE Staff by Reporting Structure

Mean Median Minimum Maximum

Total in FTEs 3.0 2.0 0 14.0

CEO 3.3 2.0 0.25 14.0

Administrator 4.0 2.75 0.75 14.0

Program 1.6 1.3 0 4.0

Reporting structure again greatly influences the number of FTEs allocated to evaluation functions.
– Those reporting to administrators have 2.5 times the staff size of those reporting to program.

– Those reporting to CEOs have over twice the staff of those reporting to program.

Evaluation Resources

These data are based on two years of spending for both evaluative information and the grants, FY 2007 and 2008. These data should be considered estimates as many foundations do not formally track this information.

Page 22: Use of Evaluative Information in Foundations: Benchmarking Data

22

Perceptions on the Level of Investment: Is It Appropriate?
How would you assess the amount your foundation invests (both in terms of staff and funding) in each of…?

Half of the respondents believe that their foundation invests an appropriate amount (in dollars and staff time) in program strategy, foundation strategy, performance metrics, and evaluation functions. Still, sizeable percentages, 31 to 47%, believe too little (or far too little) is invested in these areas.

Dissatisfaction was articulated most frequently regarding foundation investment in knowledge management and formal learning functions, where 67% say foundation investments were inadequate.

Program strategy was the only area where a number of respondents felt that their foundation invested “far too much.”

Evaluation Resources

[Stacked bar chart: responses ranging from “Far Too Much” to “Far Too Little” on investment in Program Strategy, Foundation Strategy, Evaluation, Performance Metrics/Indicators, Formal Learning Functions, and Knowledge Management.]

“There is a lot of ya ya ya-ing out there on learning—but it doesn’t happen.”

Page 23: Use of Evaluative Information in Foundations: Benchmarking Data

23

Dissatisfaction with the Level of Investment was Highest Among Those Reporting to Program and Those in Mid-size Foundations

How would you assess the amount your foundation invests in…?

CEO Administrator Program

Evaluation 25% 50% 75%

Performance Metrics/Indicators 25% 40% 75%

Knowledge Management 75% 50% 75%

Formal Learning Functions 42% 70% 62%

Program Strategy 27% 10% 62%

Foundation Strategy 25% 30% 62%

Knowledge management and formal learning functions were of greatest concern across all reporting structures and across all size foundations.

About 2 out of 3 (or more) of those reporting to program believe that a less than appropriate amount is invested in all of these functions.

A majority (or more) of those reporting to administrators are most concerned about the level of investment in evaluation, knowledge management and learning.

The greatest weight of dissatisfaction was expressed by those in foundations with over $50 million in grantmaking.

Percent of responses indicating a "less than appropriate amount"

How would you assess the amount your foundation invests in…?

Under $50M $50M-$200M Above $200M

Evaluation 25% 55% 60%

Performance Metrics/Indicators 37% 55% 40%

Knowledge Management 62% 62% 70%

Formal learning Functions 50% 64% 60%

Program Strategy 43% 45% 0%

Foundation Strategy 38% 36% 30%

Percent of responses indicating a "less than appropriate amount"

Evaluation Resources

Page 24: Use of Evaluative Information in Foundations: Benchmarking Data

24

Summation/Questions/Points to Consider: Evaluation Resources—Staffing and Budget

Overall financial support appears to be holding for evaluative activities. Staffing for evaluation, however, has dropped considerably since 2005.

Knowledge management and learning were the areas of largest concern regarding the adequacy of foundation investment, across all size segments and reporting relationships.

Those reporting to program expressed the greatest concern about the adequacy of investment made in strategy, evaluation, learning, knowledge management and indicators.

Thought starters:
– Are resources (across the organization) adequate to meet the knowledge challenge of strategic philanthropy?
– How do you interpret the influence of reporting relationships on evaluative information investment decisions?

Page 25: Use of Evaluative Information in Foundations: Benchmarking Data

25

Benchmarking Evaluation in Foundations: 2009 Findings

I. Evaluation Functions and Responsibilities

II. Evaluation Resources: Staffing and Budget

III. Perceptions of Demand for and Use of Evaluative Information

Page 26: Use of Evaluative Information in Foundations: Benchmarking Data

26

Management Demand for Evaluative Information Increased in Most Foundations
Over the last five years, what is your perception of trends in management demand for the following:

No decrease in demand was reported by any respondent. Respondents perceived large increases in demand for all types of evaluative activities. Most respondents perceived “dramatic increases” in demand for program performance metrics and strategy evaluations. Although overall demand for individual grant evaluations increased, it was not perceived to be as strong as that experienced for other forms of evaluative work.

Differences by reporting structure:

– Units that report to program experience much greater demand for individual grant evaluations, research to inform program strategy and program metric data.

– Units that report to CEOs experience greater demand for strategy evaluations and overall foundation assessments. Our interviews revealed that increases in demand were largely driven by CEO or board interest.

Perceptions of Demand for and Use of Evaluative Information

[Stacked bar chart: perceived change in management demand (“Stayed about the Same,” “Increased Somewhat,” “Increased Dramatically”) for Individual Grant Evaluations, Program Initiative Evaluations, Overall Foundation Assessments, Research for Strategy, Perception Surveys, Full Program Area Assessments, Foundation Performance Metrics, Program Strategy Evaluations, and Program Performance Metrics.]

“We have a new president and (s)he’s driving the change here. Having the CEO focused on [evaluation] is critical. Before (s)he came in, it was hard to get program officers’ attention – I can’t overstate how vital that is.”

“The board is asking in a more explicit way what results are being achieved with our grantmaking and that is the reason for the increase.”

Page 27: Use of Evaluative Information in Foundations: Benchmarking Data

27

Most Respondents Believe Program Staff Use of Evaluation Is at Least “Acceptable”
How would you assess program staff’s use of evaluation to inform:

Good Acceptable Poor

Programmatic Development Work 23% 57% 20%

Mid-course Decisions During Implementation 33% 33% 33%

Summative Performance Assessments 43% 36% 21%

About 2/3 or more of respondents report at least acceptable use of evaluation at all the different stages of program work.

The most frequently cited problem area for evaluation use was how well it informs midcourse decision making.

Perceptions of Demand for and Use of Evaluative Information

“We spend a lot of time and resources in developing strategies, but they can become like railroad tracks—once you get going, the mechanism for switching tracks on the basis of evaluative information is difficult. The ways in which strategies get realigned is still a work in progress.”

Page 28: Use of Evaluative Information in Foundations: Benchmarking Data

28

How Much Are Evaluation Findings Disseminated: An Early Indicator of Use
What portion of your evaluations – either in full or summary form – do you estimate are shared with the following audiences:
Percent reporting “large majority” or “all or almost all”

Evaluation Unit Reports to: CEO Administrator Program

Management 92% 40% 75%

Board 58% 40% 38%

Program Staff 92% 80% 87%

Grantees 67% 60% 50%

Broader Field 58% 20% 50%

Reporting is a factor in how evaluation findings are shared. More respondents reporting to CEOs share their evaluation results with every major audience listed than do those reporting to administrators or program.
– This is even the case in sharing with grantees (a 17-point difference from program) and the broader field (an 8-point difference).
– Fewer of those reporting to administrators share reports with the broader field.

“Frankly, there is a lot of cherry picking going on [regarding sharing evaluation findings]. Access to findings is more managed here. Findings get shared, but warts and all? Not generally.”

“The CEO controls the message to the board and s/he errs on the side of less information. S/he would not [share] a full evaluation report that had any equivocation.”

Page 29: Use of Evaluative Information in Foundations: Benchmarking Data

29

Comments on Issues of Use from Interviews

“It gets to be more difficult when people feel that there is more at stake. … a program director is stepping down or if someone’s under threat, it’s very difficult to do healthy learning.”

“We have many meetings where a program director will pitch for an idea for funding, but there won’t be a question on how these grants build on prior experience or investments. There are no moments where people are asked to look back on what they’ve learned.”

“When I came to the Foundation I looked at all the evaluations that they did over the last 4 to 5 years and I could not tell the impact that those evaluations had had on business. Partly, entire programs had changed. Partly, officers/ directors are no longer there, so there is no ownership of information as it comes back into the foundation—so it gets ignored.”

Page 30: Use of Evaluative Information in Foundations: Benchmarking Data

30

Most Respondents Also Believe Program Use of Metrics is at Least Acceptable

How would you assess program staff’s use of performance metrics to inform:

Again, about 2/3 or more of respondents rated staff use of performance metrics as acceptable or higher.

Perceptions of Demand for and Use of Evaluative Information

Good Acceptable Poor

Programmatic Development Work 17% 50% 33%

Mid-course Decisions During Implementation 8% 67% 25%

Summative Performance Assessments 22% 44% 35%

Page 31: Use of Evaluative Information in Foundations: Benchmarking Data

31

There Were Surprisingly Few Differences Between How Respondents Viewed Evaluation and Metrics in Terms of Use

How important are the following potential uses of evaluation and performance metrics at your foundation?

Percent reporting “very” or “moderately” important: Evaluation / Performance Metrics

Provide information about implementation: 97% / 89%
Sharpen focus or operationalize a goal or strategy: 87% / 78%
Provide periodic information on program performance at regular intervals: 83% / 93%
Provide information to make summative judgments: 76% / 64%
Provide information about foundation progress at regular intervals: 69% / 82%

In considering these purposes, few respondents saw differences in the importance of use between evaluation and metrics. We were surprised by the relative lack of variation in responses.

Across the spectrum of uses cited, both evaluation and performance metrics were largely seen as “very” or “moderately” important tools. The largest spreads were:
– Performance metrics were seen as less useful (by 12 percentage points) than evaluation in making summative judgments.
– Evaluation was seen as less useful (by 13 percentage points) than performance metrics in providing information about foundation progress at regular intervals.

Perceptions of Demand for and Use of Evaluative Information

Page 32: Use of Evaluative Information in Foundations: Benchmarking Data

32

Respondents Raised Several Issues Regarding the Use of Metrics in Our Interviews

Evaluation vs. metrics:
– “Executive leadership is looking less to evaluation and more to performance indicators without understanding the difference.”

Program vs. operational metrics:
– “Each program can define their own metrics they want to measure. [Staff tend] to rely on what you would report to finance, which isn’t necessarily helpful as program impact indicators. The indicators we have now for the board work well except for assessing program performance.”

– “Interestingly, the first metrics out the door are more operational than program (e.g., turn-around time of grants, volume, etc.).”

Using appropriate metrics in complex systems:
– “If we looked only at the [metrics] it was a failure. But if we looked at the dynamics of schools and community, you’d see what was happening that explained the initial drop. It took looking at the broader dynamics and beyond metrics alone. It took the program officer really reaching into the community to understand what was behind those metrics.”

When to identify metrics: – “Should we focus on metrics when we haven’t defined strategy yet?”

Perceptions of Demand for and Use of Evaluative Information

Page 33: Use of Evaluative Information in Foundations: Benchmarking Data

33

Management Support for Evaluation Was Seen as Strong; CEO Reports Experience the Most Consistent Support
How well would you assess managerial support of evaluative information at your foundation?

All Respondents

About 2/3 of respondents report management support as “frequent” or “often.”

Respondents who report to the CEO were more likely to perceive strong managerial support.

Only 25% of evaluation units reporting to program believe that management often or frequently addresses foundation problems identified in evaluations.

By Reporting Structure
Percent of Responses Indicating “Frequently” or “Often”

Perceptions of Demand for and Use of Evaluative Information

[Charts: how often management communicates its value for the use of evaluation and evaluative information, values evaluative efforts that illustrate problems or shortcomings in the work of the foundation, and addresses foundation problems identified in evaluations; shown for all respondents (Frequently, Often, From Time to Time, Rarely/Never) and, as percent responding “Frequently” or “Often,” by reporting structure (CEO, Administrator, Program).]

Page 34: Use of Evaluative Information in Foundations: Benchmarking Data

34

Board Support Reveals a Similar Pattern to Management Support

How would you assess the board’s position on:

All Respondents

About half of the respondents report high support from their boards.

Relatively few respondents feel limited or low support from their boards.

Perceptions of board support tend to be lowest in evaluation units that report to program.

By Reporting Structure
Percent of Responses Indicating “High Support”

Perceptions of Demand for and Use of Evaluative Information

[Charts: board support (“High Support,” “Moderate Support,” “Limited/Low Support,” “Don’t Know”) for hearing a third-party perspective, reinforcing the importance of evaluative information, the role and functions of the evaluation unit, and spending on evaluation; shown for all respondents and, as percent indicating “High Support,” by reporting structure (CEO, Administrator, Program).]

Page 35: Use of Evaluative Information in Foundations: Benchmarking Data

35

Many Respondents Raised Four Organizational Cultural Factors as Impeding Good Learning at Their Foundation

To what extent are the following organizational culture factors an impediment to good learning at your foundation?

The top four factors identified by over 20% of respondents as “often” or “always” impeding good learning are:
– Highly individualistic grantmaking
– Limited thinking or reflecting with others
– Lack of attention to implementation issues
– Limited constructive feedback

These tendencies were cited as a factor “sometimes” in at least 63% of foundations.

Relatively few respondents see isolation from the field, over-commitment despite uncertainty, pressure to make larger grants, or unwillingness to make small exploratory grants as impeding learning in their foundations.

Not a Factor Sometimes Often Always

Highly individualistic grantmaking 20% 53% 20% 7%

Lack of attention to implementation 27% 50% 23% 0%

Limited constructive feedback 37% 40% 20% 3%

Limited thinking or reflecting with others 28% 52% 10% 10%

Isolation from others in the field 45% 41% 14% 0%

Over-commitment when knowledge is limited and uncertainty is high 45% 41% 14% 0%

Pressure to make larger grants 52% 34% 10% 3%

Predisposition toward relatively untested grantmaking

33% 57% 7% 3%

Unwillingness to make small exploratory grants 63% 33% 3% 0%

Page 36: Use of Evaluative Information in Foundations: Benchmarking Data

36

Respondents Who Report to an Administrator or Program Believe that Organizational Culture Factors More Frequently Impede Learning at Their Foundation

To what extent are the following organizational culture factors an impediment to good learning at your foundation?
Percent of Responses Indicating “always a factor” or “often a factor”
By Reporting Structure

Respondents who report to the CEO were generally less likely than others to believe that these cultural factors inhibit learning.

Among those who report to administrators, about a third identify the following as factors that reduce institutional learning:
– Limited thinking with others

– Highly individualistic grantmaking

– Pressure to make large grants

– Limited constructive feedback

Among those who report to program:
– Half cite a lack of attention to implementation as a factor that often impedes foundation learning.

– Highly individualistic grantmaking is another common impediment to learning.

Evaluation Unit Reports to:

Percent of responses indicating “always a factor” or “often a factor”
President Administrator Program

Limited thinking or reflecting with others 18% 30% 12%

Highly individualistic grantmaking 17% 30% 38%

Over-commitment when knowledge is limited and uncertainty is high 17% 11% 12%

Pressure to make large grants 9% 30% 0%

Isolation from others in field 9% 20% 12%

Limited constructive feedback 8% 40% 25%

Unwillingness to make small exploratory grants 8% 0% 0%

Lack of attention to implementation 8% 20% 50%

Predisposition toward relatively untested grantmaking approaches 0% 20% 12%

Perceptions of Demand for and Use of Evaluative Information

Page 37: Use of Evaluative Information in Foundations: Benchmarking Data

37

Respondents from Larger Foundations Are More Likely to Believe that Culture Factors Impede Learning
To what extent are the following organizational culture factors an impediment to good learning at your foundation?
Percent of Responses Indicating “always a factor” or “often a factor”
By Foundation Size

Those working in foundations with grantmaking under $50 million report few organizational factors that impede learning.

Among mid-size grantmaking institutions, more than half report highly individualistic grantmaking as a serious impediment to learning. A lack of attention to implementation was also cited as a factor inhibiting learning by about 1/3 of the foundations in this tier.

Those respondents from larger foundations saw a greater number of cultural factors as impeding good learning.

Percent of responses indicating “always a factor” or “often a factor”

Under $50M

$50M to $200M

Above $200M

Limited constructive feedback 22% 18% 30%

Unwillingness to make small exploratory grants 11% 0% 0%

Limited thinking or reflecting with others 11% 27% 22%

Predisposition toward relatively untested grantmaking approaches 0% 18% 10%

Pressure to make large grants 0% 0% 40%

Over-commitment when knowledge is limited and uncertainty is high 0% 10% 30%

Lack of attention to implementation 0% 36% 30%

Isolation from others in field 0% 9% 30%

Highly individualistic grantmaking 0% 54% 20%

Perceptions of Demand for and Use of Evaluative Information

Page 38: Use of Evaluative Information in Foundations: Benchmarking Data

38

Summation/Questions/Points to Consider: Perceptions of Demand for and Use of Evaluative Information

Demand is increasing for all evaluative services and products.

Information use is seen as most problematic when programs and strategies are “ongoing.” This is strongly supported by the qualitative interviews.

For the most part, those reporting to program expressed greater dissatisfaction with program use of evaluative information.

Program reports also felt their management and board were less supportive of evaluation than those reporting to the CEO or an administrator.

Thought Starters:
– Demand has increased, yet staff size is down and spending remains steady. Are foundations able to do more with less? Is something being sacrificed?
– We expected that reporting to program would result in greater evaluation use and more general “buy-in,” yet this seems not to be the case. How do you interpret this?
– What are the other sources of evaluative information available to program staff outside of the evaluation unit? How are all sources integrated and made available to program and management?
– How is program learning surfaced and facilitated?
– Do/should performance metrics and evaluation serve different purposes, and what are they?

Page 39: Use of Evaluative Information in Foundations: Benchmarking Data

39

In Sum: If Strategy Is Learned, How Well Are We Doing?

“Strategy is learned, not planned.” – Henry Mintzberg

As foundations have increasingly engaged in strategic philanthropy, the evaluation function has evolved in a corresponding manner, with a greater focus on issues of strategy. Examples abound of highly valued theory of change work serving to improve and clarify foundation intent, focus, feasibility, and more.

Yet, we also see areas where improvement is needed, particularly related to deep and ongoing learning necessary for strategy evolution. We believe that as important strategic actors, foundations need to have strong capacities and commitment to learn throughout strategy evolution.

The question becomes: How can philanthropic organizations deepen their capacity to learn and adapt their strategies?

This is not just an evaluation issue, but an organizational challenge requiring the commitment of evaluation, program and management.