Learning As We Go: Making Evaluation Work for Everyone

A Briefing Paper for Funders and Nonprofits

tcc group briefing paper: strategies to achieve social impact

Inside:
Evaluation as the Cornerstone of Organizational Capacity Building
What Is Evaluative Learning?
Overcoming the Biggest Barrier to Evaluative Learning
Key Elements of Any Evaluative Learning Process
The First Step in Ongoing Learning: Develop a Logic Model
Where Do Nonprofits Go From Here?
Where Do Funders Go From Here?
A Final Word

Peter J. York

Introduction

TCC Group often gets calls from funders and nonprofits seeking evaluation assistance. The first question we always ask is: Why? Many funders respond by saying, “We want to know what’s working and what isn’t.” What they often mean is, “We want to know which grantees are succeeding and which aren’t.” The typical response from nonprofit leaders is, “We want to be able to show that our programs are effective.” What they often mean is, “Our funders want us to evaluate our programs.” Both statements demonstrate that funders and nonprofit leaders typically use evaluation to prove the effectiveness of their work to others rather than for their own benefit.


An important piece is missing from these answers: when evaluations aim to demonstrate something to someone else, their design often excludes opportunities for internal learning. Questions about how to strengthen programs and services, use resources more efficiently and effectively, and share models of success take a back seat to questions of whether everyone did what they said they would do, and whether clients’ lives and communities improved as a result.

The learning can’t stop there. At TCC Group, we’re noticing a trend in which funders and nonprofits are shifting away from “proving something to someone else” and toward enhancing what they do so they can achieve their mission and share successes with their peers in and outside the organization. Funders, including the Northern California-based Sierra Health Foundation, have approached us to help design evaluation systems for specific grantmaking programs that will help them facilitate ongoing learning. Nonprofits such as Eureka Communities, a national leadership development program, have engaged us to develop evaluation systems to help enhance their programs and organizations. These examples are encouraging, but many nonprofit leaders and funders still view evaluation as something that is done to demonstrate something to others.

It is important to distinguish between evaluation for accountability and evaluation for learning, a collaborative approach to evaluation that we call “evaluative learning.” This paper distills what TCC Group has learned about evaluative learning and provides information and tools to help you take next steps so everyone can “learn as we go.” Whether your organization is large and has many resources, or is small and operates on a limited budget and staff, you can use evaluation as a learning tool.

Evaluation as the Cornerstone of Organizational Capacity Building

Capacity building is any activity that improves organizational effectiveness. As defined by Grantmakers for Effective Organizations, organizational effectiveness is “the ability of an organization to fulfill its mission through a blend of sound management, strong governance, and a persistent rededication to achieving results.” Examples of capacity building include strategic planning, board development, marketing, communications, upgrading technology, increasing fund development capability, enhancing operations, planning and developing facilities, acquiring new equipment, hiring new staff, improving staff knowledge and skills, improving organizational leadership, and evaluating programs.

Since nonprofit organizations must focus on achieving their missions through high quality programs and services, the types of capacity building they invest the most resources in are those most directly tied to program and service delivery. As a result, most identify fundraising, human resource development and general operations as their highest priorities. These capacities are the “nuts and bolts” necessary to achieve an organization’s mission.

However, over the past five to ten years, the nonprofit sector has begun to realize that these “nuts and bolts” alone aren’t enough for organizations to be effective. Solid leadership and planning are prerequisites for any organization to ensure that other functions, including programs and services, serve the mission. Drawing on resources within the management assistance field, a growing number of nonprofits are taking steps to strengthen their planning and leadership.

In a recent survey of California nonprofits, TCC Group found that just 13 percent of respondents identified evaluation as a top capacity-building priority. Among the rest, organizational planning and leadership often proceed without a clear, ongoing understanding of the quality and success of their programs. When operations and programs are not assessed on an ongoing basis, the intuitive understanding that leaders have of them is usually not shared widely enough within the organization.

An overall, constant organizational learning process is intrinsic to all nonprofits. This learning process is always informed by strong leadership and involves three steps, beginning with organizational planning (see Exhibit A). It is the organization’s planning capacity, which includes scanning the environment, taking stock of itself, conducting client needs assessments, and developing strategic plans and programs, that starts the learning process. In the absence of ongoing evaluations, however, these planning efforts will lack the information necessary to adequately assess how well an organization is adhering to its mission and achieving the desired impact.

Exhibit A: The Role Evaluation Should Play in Organizational Capacity Building


A second step in the learning process is to build the “nuts and bolts” capacities, using the plan as a guide. Here, too, well-designed evaluations are a valuable source of information, specifically when evaluation activities include assessing how the “nuts and bolts” capacities affect the quality of programs and services. Core organizational capacities should always directly or indirectly serve the programs.

The third step in the learning process is program implementation. Evaluation serves program delivery by providing critical information about the quality of the program in relation to achieving the intended outcomes.

To make evaluation a cornerstone of organizational learning and capacity building requires a re-examination of how we design, conduct and use evaluation. We recommend evaluative learning as a particularly useful way to facilitate organizational learning on an ongoing basis.

What Is Evaluative Learning?

Evaluative learning requires a degree of compromise between the traditional thinking about the need for a completely objective evaluation (which requires an outside evaluator) and how the evaluation methodology is designed. Evaluations that aim to maintain absolute objectivity and only use external evaluators are useful for accountability, but less so for organizational learning. These types of evaluations can be expensive, and thus difficult for an organization or funder to support on an ongoing basis.

By contrast, evaluative learning is an ongoing, collaboratively designed, and stakeholder-led evaluation process that aims primarily to serve organizational learning. There are four main elements of this definition, as explained below with illustrative cases.

1. Evaluation should be ongoing. Most evaluations, whether initiated by funders or nonprofit organizations, tend to occur periodically, if at all. When evaluation does take place, it is usually scheduled when funders need to make critical grantmaking decisions or nonprofit organizations want to prove that their programs deserve new or continued support. Instead, the evaluation process should occur frequently, and at points in time when key organizational and programmatic decisions are being made.

TCC Group is currently developing an evaluation system for Eureka Communities, a national leadership development program for nonprofit executive directors, to ensure evaluation findings are always in alignment with organizational decision making. TCC Group is developing a web-based evaluation system (or, as the client refers to it, a “dashboard”) that will provide “real-time” findings in an interactive web-based report that is current whenever it is accessed.
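To make the “current whenever it is accessed” idea concrete, here is a minimal sketch of a compute-on-access report using only the Python standard library. It is purely illustrative: the data store, field names and metrics are hypothetical, and this is not Eureka Communities’ or TCC Group’s actual system.

```python
# A minimal sketch of the compute-on-access idea behind a "dashboard"
# style evaluation report. All data and field names are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for a live store of program survey responses.
RESPONSES = [
    {"participant": "A", "satisfaction": 4, "completed": True},
    {"participant": "B", "satisfaction": 5, "completed": True},
    {"participant": "C", "satisfaction": 3, "completed": False},
]

class DashboardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Findings are recomputed on every request, so the report is
        # "real-time": current whenever it is accessed, rather than
        # frozen at the moment a written report was published.
        completed = [r for r in RESPONSES if r["completed"]]
        findings = {
            "responses": len(RESPONSES),
            "completion_rate": len(completed) / len(RESPONSES),
            "avg_satisfaction": sum(r["satisfaction"] for r in RESPONSES)
            / len(RESPONSES),
        }
        body = json.dumps(findings).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Visit http://localhost:8000 to see findings as of this moment.
    HTTPServer(("localhost", 8000), DashboardHandler).serve_forever()
```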

2. Evaluations need to be collaboratively designed to ensure buy-in and support from everyone. The evaluative learning process involves more than an outside evaluator in designing the evaluation approach, methods, tools and processes. When organizational leaders, staff, partners and constituents are part of the evaluation design process, they become active participants in shaping what and how they want to learn and, as a result, are much more likely to use what they learn. The findings are also more likely to be used collaboratively to reinforce group successes and jointly solve problems. While outside evaluation expertise will probably still be needed to help with design and methods, the professional evaluator will serve mainly as a partner in the design process, not as the sole designer.

The Massachusetts Cultural Council (MCC) hired TCC Group to help develop a logic model and an evaluation plan for its START Initiative, a program that provides organizational capacity building assistance to local cultural councils and arts and cultural organizations. MCC wanted expert help with methodology to ensure that the evaluation addressed ongoing learning needs.

TCC Group met with all senior staff, and MCC internally engaged all staff members, to lay out the evaluation framework. Organizational leaders and staff proposed the evaluation questions, methods and measures, and TCC provided feedback regarding feasibility and ensured that the design addressed long-term learning goals.

As a result of this client-led approach, program staff members gained a clear sense of how their ability to improve their work would be addressed through the evaluation. Once the new system is in place, everyone will have a stake in ensuring that the evaluation is of the highest quality.


3. Key stakeholders need to serve as leaders to move the evaluation process forward if the organization and everyone involved is to learn and grow. By definition, stakeholders are those individuals who have a stake in an organization and/or its programs and services. Of course, not everyone has an equal stake, and therefore each person’s role in the evaluative learning process can and should be different. But when stakeholders lead an evaluation process they are much more likely to “own” the findings; that is, they will be significantly more likely to use the findings for internal planning and decision-making.

TCC Group recently concluded an evaluation of the Strategic Solutions Initiative, a five-year project that aims to improve the understanding and use of strategic restructuring (such as mergers and joint ventures) among funders, consultants and nonprofit leaders nationwide. The core strategies involved conducting research and sharing what was learned, training consultants on how to facilitate strategic restructuring processes, and disseminating knowledge locally and nationally through workshops and presentations. Evaluation was integral to this initiative from the outset. The funders (the James Irvine Foundation, the David and Lucile Packard Foundation, and the William and Flora Hewlett Foundation) and the grantee, La Piana Associates, jointly designed the evaluation with assistance from TCC.

All stakeholders provided leadership with regard to how the data could and should be interpreted. For example, when the data showed that not enough was known about the role of leadership in a nonprofit’s decision to consider strategic restructuring, the initiative’s strategies were revised to focus more research on that issue. Such strategy changes would have been less likely if key stakeholders were not leading the evaluation process together. When one organizational leader oversees the evaluation without other stakeholder involvement, and others disagree with a particular finding, the leader may dismiss this feedback without considering the ramifications.

4. The primary purpose of “evaluative learning” is organizational learning. For both funders and nonprofit organizations, evaluative learning focuses on ongoing internal program and organizational planning and development needs before it addresses accountability to others. As a result, the evaluative learning process results in a shift away from analyzing what went wrong and toward understanding what works. In short, the shift away from accountability (to others) to internal learning replaces a tendency to look for problems with looking for solutions.

Overcoming the Biggest Barrier to Evaluative Learning

The biggest barrier to developing evaluative learning capacity is the often different and sometimes disparate perceptions of evaluation among funders and nonprofit leaders. As Exhibit C (Partnering for Learning, below) shows, many funders and nonprofit leaders have different perceptions about the purpose of evaluation, what to evaluate, how to evaluate, who should evaluate, when to evaluate and how to use findings. If evaluation is to truly serve organizational learning and capacity building, funders and nonprofit organizations need to come together with respect to these perceptions. More specifically, in order for nonprofit organizations and funders to equally benefit from evaluative learning, they need to agree to:

• Use evaluative learning to strengthen program and organizational planning as well as overall grantmaking strategies. Both funders and nonprofits thus benefit from the learning.

• Evaluate the relationship between program resources (money, experience, and time), program quality, and the desired outcomes. It is vital to the learning process to look at the big picture. If evaluation measures only the number of dollars spent, services provided, clients served, quality of programs, or the outcomes, it misses the key to understanding how to improve programs overall. Rather than consider each element individually, it’s essential to understand the relationships between these elements in service to the organizational mission.

• Be receptive to making compromises on the degree of objectivity required for the evaluation, along with the level of sophistication of evaluation design and methods. Nonprofit organizations will not be able to conduct costly and sophisticated evaluations on an ongoing basis, yet funders will need some level of objectivity and sophistication for the purposes of accountability.

• Provide support to develop the nonprofit organization’s in-house staff capacity to evaluate. For the purposes of ongoing learning, the ideal evaluator should be someone within the organization. That person could be a current staff member who receives professional development or possibly a new employee for whom evaluation will be a responsibility of the job. An outside coach or evaluator can provide technical assistance for designing and implementing an evaluation.

• Evaluate on an ongoing basis, especially in alignment with regular program and organizational planning activities, both for the nonprofit and the funder. Achieving this goal on a routine basis may require some negotiation between funders and nonprofits so that findings meet the planning needs of both.

• Ensure that evaluation findings are used for planning and learning. This measure requires commitments in principle and in time. It might be helpful for the funder and nonprofit to arrange periodic meetings to share what each is learning, including both the successes and challenges with the learning process.


Achieving Evaluative Learning When a Funder Is Still Driving an Evaluation

TCC Group has considerable experience assisting in funder-driven evaluations, in which funders have explicit evaluation requirements, reporting guidelines and, in some instances, prescribed evaluation data collection tools, processes and protocols. While there is obvious value in such structured evaluations for the grantmaker, the value to the organization often requires further examination and, frequently, the technical assistance of an outside consultant to oversee the evaluation.

We’ve identified three steps that are most productive in turning a funder-driven evaluation into a valuable process for organizational and program planning and for determining strategic direction:

1. Acknowledge the role of evaluation for the organization as well as for the funder. Failing to do so nullifies valuable opportunities for evaluative learning.

2. Engage an evaluation expert to interpret and apply evaluation findings. Few nonprofits have the in-house resources or time to assess evaluation reports or data that could inform decision-making and influence strategic directions. Funder-driven evaluations may not, on the surface, produce solutions for future programmatic and operational planning, but expert evaluators can often cull the data for learnings valuable to the organization.

3. Engage the evaluator to expand the scope of the evaluation. A funder-driven evaluation can add more value to an organization through additional data collection and a tweaking of the evaluation tools, protocols and processes.

–Chantell Johnson


Exhibit B: The Evaluative Learning Continuum

For each key question, the continuum runs from zero-to-minimal learning through modest and significant learning to high learning.

1) What’s the purpose of the evaluation?
Zero to minimal learning: Accountability to funders
Modest learning: Accountability to funders and organizational leaders
Significant learning: Program planning
High learning: Organizational and program planning

2) Who is the audience for the findings?
Zero to minimal learning: Funders
Modest learning: Funders and organizational leaders
Significant learning: Funders, organizational leaders and staff
High learning: Funders, organizational leaders, staff and the broader field

3) Who will conduct the evaluation?
Zero to minimal learning: External evaluator
Modest learning: External evaluator (hired by funders) with assistance from organizational staff
Significant learning: External evaluator (hired by organization) in conjunction with organizational staff
High learning: Internal evaluator, perhaps with coaching from an external evaluator, if not trained in evaluation

4) Who will determine the evaluation questions and evaluation design process?
Zero to minimal learning: Funders and external evaluator
Modest learning: Funders, external evaluator and organizational leaders
Significant learning: Funders, external evaluator, organizational leaders, and staff
High learning: Funders, external evaluator, organizational leaders, staff, clients, and community stakeholders

5) What data are available to address evaluation questions?
Zero to minimal learning: Objective data gathered only using scientific methods
Modest learning: Objective data gathered only using scientific or quasi-scientific methods
Significant learning: Objective and subjective data
High learning: Objective, subjective, and alternative types of data (e.g., pictures, stories, etc.)

6) What types of evaluation reports or presentations of data are provided?
Zero to minimal learning: Very detailed findings, but no examination of recommendations beyond the data
Modest learning: Somewhat detailed, with some examination of recommendations beyond the data
Significant learning: User friendly (i.e., audience-defined) with examination of recommendations beyond the data
High learning: User friendly (i.e., audience-defined), examines findings beyond the data, and incorporates a reflective process (e.g., program planning “scenarios”)

7) Who will provide interpretive feedback on the findings?
Zero to minimal learning: Funders
Modest learning: Funders and organizational leaders
Significant learning: Funders, organizational leaders, and staff
High learning: Funders, organizational leaders, staff, clients, community stakeholders, and the broader field

8) How frequently will evaluations occur?
Zero to minimal learning: At the conclusion of program funding
Modest learning: At the conclusion of each program cycle
Significant learning: Periodically throughout the life of the program
High learning: Ongoing for all programs within an organization

Key Elements of Any Evaluative Learning Process

There are eight key elements to any evaluation if it is to serve ongoing organizational learning (see Exhibit B above):

1) The primary purpose of any evaluation is to serve program and organizational planning processes. It is fine if evaluation also serves as an accountability tool, but only secondarily.

2) The primary audience for the evaluation should be organizational leaders and managers. After all, these leaders and managers are the agents of change within their organizations. Funders should be a secondary audience.

3) The evaluator should ultimately be a staff person or persons trained in evaluation. However, an external evaluator may need to be engaged in the beginning, and perhaps play a small ongoing role, such as a coach, if an organization doesn’t have the in-house evaluation expertise and/or resources.

4) All key stakeholders should have a voice (relative to their “stake”) in the evaluation process. This includes the evaluation design, implementation, analysis and interpretation phases.

5) Data can come in many forms (e.g., objective, subjective, and alternative), as long as the data best serve organizational planning needs. A mix of different types of data is best.

6) Reports should be presented in a user-friendly manner that combines a presentation of data with a clear analysis and synthesis of the findings, including implications and recommendations.

7) All key stakeholders should have the opportunity to provide interpretive feedback on the findings.

8) Evaluation should occur on an ongoing basis.

Exhibit C: Partnering for Learning

The original exhibit is a diagram showing six questions critical to any evaluation, how nonprofit organizations and funders typically respond to these questions, and how they can meet in the middle.


The First Step in Ongoing Learning: Develop a Logic Model

The logic model, according to the W. K. Kellogg Foundation, is “a picture of how your organization does its work - the theory and assumptions underlying the program.” It shows the relationships between short- and long-term outcomes, the program strategies or activities and their outputs, and program inputs or resources. (See Exhibit D below for definitions of these terms.)
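Because a logic model is essentially a structured chain from resources to results, it can help to see the components laid out concretely. The following minimal Python sketch is purely illustrative (the tutoring program, field names and questions are hypothetical, not drawn from Exhibit D); it shows how each link in the chain suggests an evaluation question.

```python
# A minimal sketch of a logic model as a data structure, following the
# component definitions above. The example entries are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    program: str
    inputs: List[str] = field(default_factory=list)      # resources invested
    activities: List[str] = field(default_factory=list)  # program strategies
    outputs: List[str] = field(default_factory=list)     # direct products of activities
    short_term_outcomes: List[str] = field(default_factory=list)
    long_term_outcomes: List[str] = field(default_factory=list)

    def evaluation_questions(self) -> List[str]:
        # Each link in the chain suggests a question, which is how a logic
        # model drives the ask-answer-adjust-repeat learning cycle.
        return [
            f"Were the inputs ({', '.join(self.inputs)}) sufficient for the activities?",
            f"Did the activities produce the outputs ({', '.join(self.outputs)})?",
            f"Did the outputs lead to {', '.join(self.short_term_outcomes)}?",
            f"Are short-term gains adding up to {', '.join(self.long_term_outcomes)}?",
        ]

# Hypothetical example: an after-school tutoring program.
model = LogicModel(
    program="After-school tutoring",
    inputs=["2 staff tutors", "grant funding", "donated classroom space"],
    activities=["weekly one-on-one tutoring sessions"],
    outputs=["40 students tutored per semester"],
    short_term_outcomes=["improved homework completion"],
    long_term_outcomes=["higher graduation rates"],
)

for question in model.evaluation_questions():
    print(question)
```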

All stakeholders should be involved in the logic model development process. The power of the logic model is that it grows out of consensus about everyone’s underlying program assumptions, and the result is an evaluation design that is much more relevant to the organization.

Once the logic model and corresponding evaluation questions have been developed, nonprofits and funders might want the assistance of a professional evaluator to provide expert guidance. However, the data collection design and analysis should be driven by the organization, and the evaluator will serve only as an expert collaborator.


Exhibit D: Example of a Logic Model


The logic model helps evaluative learning by serving as a tool to develop a shared understanding of the organization and program. As a result, everyone close to the organization helps set and meet learning objectives. More than any other evaluation approach, the logic model requires everyone to ask questions, seek answers, make changes based on what was learned, and repeat the process as often as necessary. In other words, the logic model forces everyone to continue learning.

Over time, the logic model can be revised until it represents not only what everyone assumes will work, but also what actually does work. Then, everyone has a more accurate road map that will serve their mission.

Where Do Nonprofits Go From Here?

The first step is to bring staff and key stakeholders together to discuss everyone’s readiness for making evaluation a more integral part of the everyday work of the organization. We recommend using the “Readiness Checklist” (see Exhibit E).

The next step is to assess what the organization is already doing to evaluate programs, services and the organization. Rather than discontinue what the organization is doing, it can reframe how it meets key organizational learning needs and goals. At this stage, a nonprofit organization may determine that it isn’t ready, and therefore needs more time for preparation in order to develop learning goals, objectives and strategies for evaluative learning efforts.

When ready, a nonprofit must assess its in-house capacity to develop and implement evaluative learning strategies. If some staff already have training and skills, they can be tapped as resources and given time to begin developing a plan.

Then, nonprofit leaders identify what additional resources they may need, such as time, money, human resources, and technical assistance. There are many ways to meet these needs. Examples include using volunteers, collaborating with similar organizations to share resources, changing job responsibilities, hiring a part-time evaluator, and/or identifying funding to hire a coach or consultant. Above all, nonprofit leaders need to think creatively about how to meet their organizations’ ongoing learning needs.

Where Do Funders Go From Here?

The first step for a funder planning to pursue evaluative learning is to determine the readiness of program staff and the board to do it (see the “Readiness Checklist” in Exhibit E). Most funders will first have to explain how evaluative learning differs from typical accountability-based evaluations and how it offers real value to the organization.

One of the first hurdles in the education process is that evaluative learning processes do not generate the immediate outcomes-based findings that funders often want; evaluative learning will generate outcome data, but the initial process takes longer.

Over time, however, through evaluative learning, outcomes improve because an organization is learning more about what works and what doesn’t, and correcting as it goes. Funders need to understand these differences and buy into the long-term benefits for grantees and themselves alike. The long-term benefit to funders is that their grantees are more effective, which allows for more strategic grantmaking decisions.


For step two, funders need to assess their grantmaking strategies to determine which programs or program areas are most “ready” for evaluative learning. Readiness requires strong leadership from program staff, good communication with grantees, a clearly defined program theory or logic model, and clearly measurable goals and objectives. In addition, program staff must be willing to make mid-course corrections if evaluative findings suggest doing so. If a grantmaking strategy meets these criteria, then a funder is ready to move forward.

The third step for funders is to determine their role and level of involvement with grantees for the purposes of evaluative learning. Funders may want to take a more hands-off approach and just provide resources, such as funding, referrals, and access to technical assistance, so that grantees may improve their evaluation capacity.


Some funders may prefer to become more involved in the design and implementation of grantees’ evaluative learning processes. After all, funders are key stakeholders. This degree of involvement is probably most appropriate for very strategic or large, multi-year investments.

Finally, funders must determine the type and level of support they will grant for evaluative learning. In addition to grant support, or instead of it, some funders can bring their own evaluation expertise to the table.

While larger funders should, ideally, provide resources to grantees to develop evaluative learning systems, many smaller funders lack the wherewithal to do so and do not have enough funds to pay consultants or coaches for individual grantees.

If they’re serious about evaluation, here are two excellent and economical options to consider:

1) Support evaluation workshops for a set of grantees so that peer learning can occur at the same time. The Philadelphia Cultural Management Initiative, an intermediary organization that receives funding from the Pew Charitable Trusts to support arts and culture organizations in the Delaware Valley, has hired TCC to provide a two-day evaluation workshop to a number of organizations. Every participant leaves with a beginning evaluation plan.

2) Change how funders request evaluation information from grantees. Ask grantees to document how they “used” their evaluation findings to improve their organizations. This shift in reporting will demonstrate that evaluation is valued not just for accountability, but also for joint learning.


Exhibit E: The Evaluative Learning “Readiness” Checklist

A Final Word

Nonprofits and funders need to change the way they think about evaluation. They must see that evaluation can be more than an accountability tool for demonstrating something to someone else. Instead, evaluation can create more opportunities for learning. By pursuing an evaluative learning approach, nonprofits and funders together, as part of a learning community, can figure out how to strengthen programs, allocate resources better, and share successful models. This makes evaluation work for everyone.


A Private Funder’s Innovative Peer-led Approach to Evaluative Learning

The Howard Hughes Medical Institute (HHMI) was seeking to improve the evaluation capacity of its Precollege Program grantees and contracted with TCC Group to manage, facilitate and assess the process.

Grantees consisted of biomedical research institutions, science museums and other informal science education institutions that sponsor projects to improve science education from pre-kindergarten through 12th grade. As with many funders, HHMI has evaluated its grantmaking programs. But over many years, HHMI found that evaluating multiple grantees was expensive, time-consuming, and often inconclusive. It was difficult to generalize evaluation findings and use them for real learning.

HHMI developed a pilot “Peer Cluster Evaluation Project” (PCEP) with the overall goal of improving the evaluation capacity of each grantee. HHMI selected 12 grantee project managers to participate in the pilot project. The project managers were divided into three groups of four. Each group member was to host one visit from all other members in order to receive feedback on the strengths and weaknesses of current evaluation approaches. At site visits, team members observed projects and evaluation data collection; reviewed and provided feedback on evaluation plans, tools and instruments; and offered what they called “critical friendship.” After each site visit, the groups drafted a report highlighting observations and offering recommendations. Reports were shared with team members, HHMI, and other PCEP groups.

This "Peer Cluster Evaluation Project" proved to be apowerful learning experience, because participants:

• used suggestions from peers to improve survey instruments;

• learned how to use control groups in contexts that they had previously thought impossible;

• learned to use video as an effective evaluation tool;

• said they better understood “formative evaluation” and the importance of evaluating the “big picture,” not just “outcomes”; and

• developed and shared their logic models with each other.

–Chantell Johnson

Selected Readings

Fetterman, D.M., Kaftarian, S.J., & Wandersman, A., Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability (Thousand Oaks, CA: Sage Publications, 1996).

Gray, Sandra Trice and Associates, Evaluation with Power: A New Approach to Organizational Effectiveness, Empowerment and Excellence (Washington, D.C.: Independent Sector, 1998).

Love, Arnold J., Internal Evaluation: Building Organizations from Within (Thousand Oaks, CA: Sage Publications, 1991).

Russ-Eft, Darlene and Hallie Preskill, Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance, and Change (Cambridge, MA: Perseus Publishing, 2001).

Sonnichsen, Richard C., High Impact Evaluation (Thousand Oaks, CA: Sage Publications, 1999).

Online Resources

AMERICAN EVALUATION ASSOCIATION: Information about evaluation practices, methods, and uses through application and exploration of program evaluation, personnel evaluation, and technology. http://www.eval.org

GETTING TO OUTCOMES: METHODS AND TOOLS FOR PLANNING, SELF-EVALUATION AND ACCOUNTABILITY: A manual that works through ten questions that incorporate the basic elements of program planning, implementation, evaluation, and sustainability. http://www.stanford.edu/~davidf/empowermentevaluation.html#GTO

W.K. KELLOGG FOUNDATION LOGIC MODEL DEVELOPMENT GUIDE, 2001: http://www.wkkf.org/Pubs/Tools/Evaluation/Pub3669.pdf


About TCC Group

For over two decades, TCC has provided strategic planning, program development, evaluation and management consulting services to nonprofit organizations, foundations, corporate community involvement programs and government agencies. In this time, the firm has developed substantive knowledge and expertise in fields as diverse as community and economic development, human services, children and family issues, education, health care, the environment, and the arts.

From offices in Philadelphia and New York, and full-time staff in Chicago, the firm works with clients nationally and, increasingly, globally. Our services include strategic planning, organizational assessment and development, feasibility studies, program evaluation and development, board development, restructuring and repositioning, as well as grant program design, evaluation, and facilitation.

Approach

Our approach is governed by the need to establish a clear and engaging consulting process that offers structure and predictability as well as flexibility to meet unforeseen needs. Working in multidisciplinary teams, we tailor each new assignment to meet the individual needs and circumstances of the client. We develop a scope of work that responds to the particular challenges, timetable and budget for the assignment.

Sometimes clients engage us for short-term research, problem solving, or facilitation projects. Other times we provide comprehensive planning and evaluation assistance over a longer period, or conduct other activities over one or more years. Increasingly, TCC helps clients manage and implement their work and provides advice on an ongoing basis. We bring to each new assignment the perspective of our expertise, broad experience and an enthusiastic commitment to get the job done right.

Evaluation Services

Our evaluation services are geared to improve and enhance ongoing program development (formative evaluation) and to provide information that informs decision-making on the continuation or evolution of programs (summative evaluation). We offer evaluation services to nonprofits as well as funders.

We believe that evaluation is an integral part of the planning process and as such can be used to assess and develop current capacity so that an organization can enhance its overall effectiveness. Our evaluation team will assist in designing the processes and tools necessary to create an organization’s internal evaluation system, as well as provide professional development and technical assistance related to evaluation theory, design, implementation and data collection to executives, program officers and staff.

Our Clients

We have provided consulting services to a broad range of nonprofit groups, governmental agencies, corporate citizenship programs, and philanthropic organizations in many fields, from the arts and community development to education and medical research.

Among our grantmaker clients are such leading foundations as The Ford Foundation, the Rockefeller Foundation, the Knight Foundation, Pew Charitable Trusts, and the William Penn Foundation. Yet we also have served smaller foundations such as the Brandywine Foundation in Philadelphia. Corporate and nonprofit clients include Goldman Sachs, the Industrial Bank of Japan, Lorraine Monroe Leadership Institute, Chicago Historical Society, Kraft Foods, The Altman Foundation, Pfizer and UBS PaineWebber.


TCC Group

Contact a TCC office:

New York
50 East 42nd Street, 19th Floor
New York, NY 10017
phone: 212.949.0990
fax: 212.949.1672

Philadelphia
One Penn Center, Suite 1550
Philadelphia, PA 19103
phone: 215.568.0399
fax: 215.568.2619

Chicago
875 North Michigan Ave., Suite 3930
Chicago, IL 60611
phone: 312.642.2249
fax: 312.642.2259

Website: http://www.tccgrp.com
Email: [email protected]

First Published June 2003

Peter York is Director of Evaluation at TCC Group. Chantell Johnson, a consultant at TCC who focuses on evaluation, made substantial contributions to this paper. John Riggan, Richard Mittenthal, Paul Connolly, and Cara Cipollone also provided assistance.