Interview of an Evaluator

A Conversation with W. Haven North, Director, Center for Development Information and Evaluation, U.S. Agency for International Development

Michael Hendricks, Interview Series Editor

W. Haven North has been in the development assistance business since 1952. He spent 15 years in Africa, was heavily involved in the Nigeria relief operation in the late 1960s, then became Deputy Assistant Administrator in AID's Africa Bureau in 1976. In 1982 he became special assistant to the Administrator and was shortly thereafter asked to establish the Center for Development Information and Evaluation.

The purpose of the center is twofold: (1) to maintain AID's institutional memory by managing its over 40,000 documents and data bases, and (2) to create an independent office in which staff can learn about and disseminate AID's experience with development programs. Almost 100 persons work for the center, with the staff evenly divided between the documentation and analysis functions.

Approximately 15 to 20 analytical studies are underway at any given time. The center has an annual budget of $4.0-4.5 million. On October 22, 1986, we sat down to a two-hour conversation in Mr. North's Rosslyn, Virginia, office overlooking the Potomac River.

M.H.: Why don't we start with something easy. Different people define evaluation in different ways. I'd be curious to know how you define it.

W.H.N.: My perspective on evaluation might be different from that of professional evaluators, since I come out of a management role rather than an evaluative background. My view is that it's important that the evaluation process be directly associated with the decision-making process of the agency. Not that that's a new idea by any means, but very often the evaluation business tends to get a little isolated or ivory towerish in a form that's not relevant to AID's management interests.

First, I look at evaluation work as a continuation of the design process. It's a rare project that doesn't need course corrections as it proceeds, so evaluation becomes important to keep the project on track. Assumptions change, settings change, implementation situations don't quite work out as expected, and you can easily end up getting in trouble. You need an information system-which evaluation is a very central part of-to give you the guidance for course corrections as you go along. So, for me, evaluation is first and foremost an extension of the design process through the life of a project.

A second dimension of evaluations, and the one we're particularly concerned with here at the center, is trying to feed back into the system what we've learned from our experience over the previous years. In that sense it's less concerned with existing projects and their execution than it is with trying to distill the tremendous experience that we've had over the last 25 to 30 years into a form that provides practical guidance for policy makers, project managers, and project design officers. So I also look upon evaluation as a way of analyzing our experience.

With the fairly rapid turnover of staff-lots of new people and lots of new contractors-this second function becomes even more important. Some people who've been in AID a long time say, "This isn't terribly important because I've done this sort of thing." But we find more and more that a very large proportion of AID staff have been with us less than 10 years, so the depth of experience is far less than it used to be. And, of course, with contractors there's always a large turnover of new people.

That's a very ambitious agenda for us, but that's what we're supposed to be doing. And we've had lots of discussions with other donor agencies who are trying to do the same things-just yesterday with a delegation from Finland and the day before with a delegation from the Netherlands.

M.H.: Do these other donor agencies view evaluation the same way you do?

W.H.N.: I think so, yes. The Development Assistance Committee (DAC) of the Organisation for Economic Co-operation and Development just completed a study involving 18 bilateral donors plus the World Bank and 3 regional banks. I'm a member of DAC's expert group on evaluation, and we just published Methods and Procedures in Aid Evaluation, a very interesting document in which 18 donors have agreed to a common statement of what evaluation is about and how it should be evolving within their agencies. These donors vary quite substantially in the degree they've been involved in evaluation, but they all perceive it as an important function, at least judging by the amount of staff and resources they provide for it. For some donors the evaluation process has come out of the audit and inspection functions, and for other donors it's looked upon more as an academic, research-type program. So those two streams are blending together in what is now a better understanding of the evaluation process.

M.H.: I've looked at that new DAC book, and it points out that some donors still view evaluation in a negative light, and that there needs to be more effort in the donor nations to change evaluation into something viewed more positively.

W.H.N.: When you say "evaluation" to managers, they often say, "Oh, no, not one of those! How can I avoid this or put it off so I don't have to deal with it?" because they fear it's going to be a criticism of their performance, with no recognition of what they've been able to do. We've tried to get away from that. We essentially say to managers, "We don't want to do an evaluation of your management. What we're interested in doing is learning from your experience, and we want you to join with us in thinking through your experience, what have you learned from that, and what are the issues that you see and that we see." Almost invariably we get a turnaround of attitude, with people seeing this as an opportunity to get ideas off their chest, to express their problems, the issues they face, the policy environment in which they work, as well as an opportunity to get some recognition for what has been accomplished. In most cases this has come out nicely.

M.H.: Given that development programs do operate in such a political arena, how should evaluators treat these political considerations?

W.H.N.: Our evaluation group at DAC realized that there are significant factors external to projects that have an enormous impact on the quality of a project. This gets into very sensitive issues about political objectives, domestic economic interests, commercial interests, market pressures, priorities, administrative and legislative rules and regulations, and so on. Even to mention some of them is difficult, because we rarely address them in our evaluations.

But our point is that maybe an evaluation should address these issues, so that political managers have a better sense of the consequences of the decisions they make. We're trying to urge that the evaluation process should look at these considerations as factors that affect the context of a program and thus its success. A good evaluation should try to bring out the importance of these considerations.

M.H.: How about the developing countries? How positively or negatively do they view evaluation?

W.H.N.: In terms of development programs, I'm afraid evaluation still has very much a negative connotation, because the donors are always doing evaluations of the developing countries' programs, and that doesn't go down very well. At best, developing countries sometimes say, "That's your business-do what you want, but don't involve us," so they feel like outsiders. In fact, they are outsiders, because donors have tended to be self-centered about the evaluation process. I think that's beginning to change, though, because the need for evaluation work is becoming more evident as something developing countries should be doing for themselves.

So I see the developing countries having more of an interest in having evaluation units, evaluation staff, and evaluation processes going on in a more formal way. But there are still many countries where even good audit work is not being done, so good evaluation work is something that's still down the road a while.

M.H.: From my own limited experiences, part of the problem may be that our American beliefs in public accountability, self-examination, and so on simply don't exist in other countries.

W.H.N.: I think that's quite right. The value systems and the information we might want from an evaluation are sometimes quite different in developing countries. But more significantly, the whole concept of development as an investment activity is also sometimes less clear-cut. Obviously there's an enormous range among the developing countries-some are very advanced and very sophisticated in this area-so you can't generalize too well. But we, in our own work, are putting a lot more emphasis on what we call "collaborative evaluation." We've held a number of workshops with representatives from developing countries and our own staff to work on evaluation topics-how you do them, what are the methods, what are the issues-so we think together about the process of evaluating programs.

That trend is evident among all the donors, and becoming more feasible because of the developing countries' growing interest in evaluation work. But there's still a long way to go, because evaluation is still not a high priority, is still not recognized as being an integral part of an agency's responsibilities in development programs. And I think this may partly be because it needs to be restated or redefined.

I sometimes would like to have a word other than "evaluation," because it carries with it a baggage of negative connotations. What we're really talking about, in a general sense, is management information systems being built into programs that feed back not just administrative data, but also programmatic information. We have a very complete system for managing the budget, and we have a very complete system for managing personnel, but we've never had one in the program information area. We're trying to fill that need.

M.H.: Let's talk for a minute about the quality of evaluations of development programs. How well do you and the other donor agencies do evaluations?

W.H.N.: Well, I don't know about the other agencies, but my first reaction is that we're not as good as we should be. There are obviously some excellent evaluations, but our own review of AID's work suggests that they're not of the quality we'd like.

On the other hand, there's the question about the quality of the evaluation process, where the product is less important than the process you went through. For example, in the collaborative evaluations with developing countries-getting them to think through and define the issues and then to address those issues over time-you begin to have an impact, even though you can't see it. But that makes it hard to judge the quality. I'll be the first to say that the quality is never at the level we'd like to see, but there are lots of excellent studies that have made very important contributions.

M.H.: You're talking now about the quality of the evaluation process. How about the quality of an evaluation product? How would you distinguish a high-quality evaluation report from a low-quality one?

W.H.N.: Well, I guess I'd measure quality particularly by the value of the insights that were offered. Obviously, they'd have to be reasonably well supported, but I look for fresh understanding of a situation that people hadn't really recognized, a new understanding, a new perspective, or a greater appreciation of a situation than before. So it's the significance of the insights that are gained. I think high-quality evaluations bring out these new perspectives, these new insights, and they deal with some of the clichés.

For example, take our review of narcotics eradication projects. For years the standard view of how to deal with the problem was to use crop substitution. You substitute some other crops for narcotics crops. Well, those directly involved were better informed, but most people in the agency still thought that's what you should do. But our review brought out the fact that crop substitution as such doesn't work-there are no crops of sufficient value to offset the enormous earnings that go with producing narcotics crops. So we put that concept aside and tried to look at the problem from a broader perspective, say alternative income sources in conjunction with eradication of narcotics crops. You will never find an income substitution crop to motivate people to give up a very lucrative, easy-to-grow narcotics crop, but what you may be able to do is to eliminate one source of income while simultaneously providing an alternative.

M.H.: Now that you're talking about the center's work, let's get specific. AID does evaluation in lots of different places. Can you explain evaluation within AID and how the center fits into the picture?

W.H.N.: Evaluations in AID are very much determined by the organizational structure of the agency. The agency is divided into three regional bureaus-Africa, Latin America and Caribbean, and Asia and Near East-and several offices like education, health, and agriculture in the Bureau for Science and Technology. The regional bureaus back up our missions-about 60 around the world. The agency's management philosophy gives a substantial delegation of authority to our missions in the field. The general concept is that the authority and the responsibility should be as close together as possible. Some donor agencies have a highly centralized approval process, but the AID administrator very rarely gets involved in approving a project.

Well, this decentralized management structure implies decentralized evaluations. So most evaluation work is done in the missions, where projects are designed and implemented. The center's job here in Washington is to provide oversight and general support to the system-ensure that annual evaluation plans are being done, review and critique them, look at the quality as best we can, train and orient, and provide technical assistance. We'll soon be coming out with a new handbook on evaluation; the last one was 12 years ago and we want to bring it up to date. We're also promoting the "collaborative evaluation" workshops I mentioned earlier. Our job is really to promote evaluation within the agency and constantly show senior management that it is important and therefore needs personnel and other resources.

In another sense, though, we are the evaluation unit for the top management of the agency, so we're constantly looking at issues of interest to them. You've seen some of those reports-Contraceptive Social Marketing, Small Farmer Credit, Women in Development-so you know that we do a lot of our own studies.

M.H.: Let's talk about those studies, because that's one of the changes you've personally made here. The units that were merged to create the center used to evaluate specific sectoral projects-family planning, elementary education, and so on. You came in and broadened the focus of evaluations to include generic issues such as technology transfer, project management, and so on.


W.H.N.: That's true. We've tried to slice the agency's program differently by looking at common topics of interest that cut across all sectors and programs. This is partly because the agency has set its own policy concerns for technology transfer, private sector development, and institution building as the means by which AID's ends should be met. So we decided to look at some of these means and see what we've learned.

The whole series on the management of development projects, for example, is an attempt to promote within the agency an understanding of a cross-cutting issue that has not been given enough attention. And that's also true for the concern with sustainable programs-how we think about sustainability and how we develop more recognition of a need to build sustainable projects.

M.H.: The whole notion of sustainability seems to be quite a hot topic with donor agencies these days.

W.H.N.: It is a hot topic, and I think it's become so partly because we evaluators have been questioning the effectiveness of programs. One thing we're starting to discover is that good projects aren't necessarily sustainable. Maybe the rate of return is high and the performance and impact are good, but the ability to sustain them after the donor's aid is withdrawn is highly questionable. There is a lot of concern about the ability of countries to keep programs going.

We now have under way an effort by all donors in DAC to include in their evaluations during 1986 a set of questions specifically related to sustainability. This will result in a common set of questions being used by all donors in their evaluations. After the donor synthesizes what has been learned, there will be a global synthesis of all donor experiences to see what the issues are and what we should be doing. Whether this will succeed or not, I don't know, but it's an attempt to produce some comparable information across donors that can be combined at the end. If it works, we should have some very important insights on how you create sustainable programs.

M.H.: In one of our earlier conversations you said that your three main concerns were the relevance, speed of completion, and utilization of your work.

W.H.N.: I think of the center as something like a manufacturer of a product-cocoa or something else-where you have the raw materials and you process them and refine them and then you have a product to get to people when they want it, and ensure that they like it. It's an analogy I like, because our product is information-wisdom may be going too far [laughs]-that people need to do their work in development programs. So the whole process is trying to judge your market-to know whom you're trying to serve, to try to refine your products in a way that will meet their needs when they need it.

To do so, we have to be relevant to what's on people's minds. We also have to be timely because decisions are being made every day and we need to get our information to them when they need it; otherwise it becomes academic because it's too late. They're not unreasonable, but they can't wait for a two-year study. It's not an easy thing to do well, but we try. Usually we can do better if we can anticipate, and even better yet if we can create an interest within the agency. That's what we're doing with the study on management of development projects-trying to create an interest within AID. This can get very controversial and much more complicated than you might think, but we're struggling with it as best we can.

Regarding utilization, we're trying to find better ways to summarize and communicate our findings in a form that will whet people's appetites. Once people realize that they're not getting a big mound of paper, but rather something very specifically providing what they want, it works pretty well. That's not easy to do, but it can work well.

Workshops and conferences are good ways to share information, too, since people are part of a process of thinking through the information. These have proven to be the most effective way, but not a very cost-effective way. You can't reach a large audience, because large conferences are difficult to organize. We do a lot of small ones-maybe 25 to 30 people for a couple of days-but not many large ones.

M.H.: We're almost out of time, so let me ask you one last question. You're in a very key position. What changes do you foresee during the next 5 to 10 years in evaluating development programs?

W.H.N.: Well, that's not entirely clear to me. If you relate evaluation to the information age, to the whole business of communications and computerization and so on, we may find the evaluation process becoming more integrated with what I call a "linked system of communicating knowledge." Rather than something that's off by itself operating independently, evaluation may be brought to the decision-making process.

Second, there is a need in our business for more research on the social science aspects of what we're doing. Several years ago, when the new administration came in, social sciences had a rather bad image. They were not very much appreciated, and maybe they brought that upon themselves. But the fact remains that social sciences weren't viewed as very productive or meaningful. It seems to me that the future will bring a lot more attention to social science analyses that relate to development work and that have a visible impact on development programs. In our business, the major problems are institutional and social, especially in programs that require a high degree of people's participation. Social science needs a lot more involvement in that process.

Finally, of course, I mentioned the growth of evaluation in developing countries. I have no doubt that the evaluation function will become increasingly important in the developing countries themselves, whether or not donors are more involved. That may take a long time, though, maybe another couple of decades before it becomes meaningful in some countries, but it will happen.