Impact Evaluation, Policy Making and Academic Research: Some Reflections and Examples - Professor...



Impact evaluation, policy making and academic research:

Some reflections and examples

Orazio P. Attanasio

EDePo @ IFS & UCL
o.attanasio@ucl.ac.uk

Toward an Evidence Based Development Policy
October 11th 2010

1. Impact evaluations & Academia. Some history. Recent developments. Some criticism of small and narrow.

2. The political economy of impact evaluation. The policy process. From policy to academia and back.

3. Large and structural.

4. An example: CCT evaluations.

Orazio P. Attanasio (EDePo @ IFS) Impact evaluation October 11th 2010 2 / 14

Impact evaluations & Academia. Some history.

Some history.

Impact evaluations have played a big role in applied economics for a long time.

From a scientific point of view, policies introduce potentially exogenous variation in the environment that allows one to estimate structural parameters. This is particularly true for randomized experiments, where the variation in incentives is controlled. Negative income tax experiments in the 1960s and 1970s (SIME/DIME):

Urban areas in New Jersey and Pennsylvania, 1968-1972 (1,375 families).
Rural areas in Iowa and North Carolina, 1969-1973 (809 families).
Gary, Indiana, 1971-1974 (1,800 families).
Seattle and Denver, 1971-1982 (4,800 families).

These experiments were interesting because they combined policy interest (welfare policies) with academic interest (estimates of labour supply elasticities).


Impact evaluations & Academia. Recent developments.

Recent developments.

More recently, randomized controlled trials (RCTs) have received renewed interest.

The proponents of RCTs have been extremely influential and have changed the way development policy is approached in many contexts.

Overall, this development has been positive in many dimensions:

Renewed interest.
Accountability and transparency.
A wealth of new evidence and data.


Impact evaluations & Academia. Some criticism of small and narrow.

Some criticism of RCT.

Often the new RCTs focus on a small policy experiment.

They allow one to estimate a very narrowly defined parameter.

... and sometimes researchers have been willing to change the parameter of interest to suit the feasibility of a randomization.

How useful is this approach?

External validity.
Effectiveness vs. efficacy.
Extrapolation, scalability and policy design: what are the mechanisms at play?

Sometimes (but not always) the proponents of RCTs have characterized their approach as 'theory-free'.

...but is that a virtue?

Identifying the mechanisms behind certain impacts is crucial to policy design.


Impact evaluations & Academia. Some criticism of small and narrow.

Some criticism of RCT.

But the new wave of RCTs has been an extremely positive phenomenon.

The criticism of models identified exclusively by strong functional form assumptions is healthy and important.

RCTs introduce variation that is controlled by the researcher/evaluator and is therefore exogenous by construction.

In addition to estimating the impact of an intervention in a credible fashion, data from RCTs allow the estimation of more credible and less restrictive models.
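A toy simulation can make this point concrete. The sketch below generates purely synthetic data for a hypothetical randomized trial and recovers the (known) treatment effect by a simple difference in means; every number and name in it is invented for illustration:

```python
import random
import statistics

random.seed(0)

# Hypothetical RCT: randomly assign 2,000 families to treatment or control.
n = 2000
treatment_effect = 1.5  # true effect, known only because the data are simulated

outcomes, assignments = [], []
for _ in range(n):
    treated = random.random() < 0.5     # randomization: assignment is exogenous
    baseline = random.gauss(10.0, 2.0)  # unobserved heterogeneity across families
    y = baseline + (treatment_effect if treated else 0.0)
    assignments.append(treated)
    outcomes.append(y)

treated_y = [y for y, t in zip(outcomes, assignments) if t]
control_y = [y for y, t in zip(outcomes, assignments) if not t]

# Because randomization balances baseline characteristics across groups,
# the difference in means is an unbiased estimate of the average treatment effect.
ate_hat = statistics.mean(treated_y) - statistics.mean(control_y)
print(round(ate_hat, 2))
```

The estimate lands close to the true effect of 1.5 precisely because assignment is independent of the unobserved baseline; with self-selected treatment the same comparison would be biased.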

The field is extremely vibrant and evolving.

Before sharing some thoughts on future directions, a small detour on the political economy of evaluation.


The political economy of impact evaluation. The policy process.

The political economy of evaluations.

Evaluations are not politically profitable.

You do not win an election with evaluations.
The horizon may be very different.
Control groups can be politically very sensitive.

The relationship between evaluators and policy makers can be delicate and difficult.

For this reason many 'large' evaluations were started from the outside (international financial institutions, etc.).


The political economy of impact evaluation. From policy to academia and back.

From policy to academia and back.

For these reasons academia (and international institutions) might have an important role to play.

It may be easier to maintain independence.
Incentives might be different for local consultants.

Interaction with local policy makers and researchers is crucial.

Given the political difficulties of evaluations, it is advisable to run evaluations early, when policies are at the design stage:

When there is more flexibility on design.
When the program does not have an established constituency.
When limited resources can be used as arguments to justify experimentation.

In this context the role of an institution such as 3IE can be crucial.


Large and structural.

An agenda for impact evaluations in developing countries.

More emphasis on the identification of mechanisms behind impacts.

More emphasis on the distinction between efficacy and effectiveness, and on scalability.

More emphasis on large and ambitious programs.

Data and measurement are important:

It is essential not to limit surveys to the measurement of the outcomes of interest.
To identify the determinants of behaviour, comprehensive surveys are needed.
There is much work to be done on measurement issues.


Large and structural.

An agenda for impact evaluations in developing countries.

Many exciting questions need an answer. The determinants of human capital accumulation:

The production function.
The information people act upon.
Incentives and the interaction between demand and supply.

Imperfections in credit and insurance markets.
Institution building and social capital.


An Example: CCT evaluations

The evaluation of Conditional Cash Transfers.

CCTs, starting with PROGRESA in Mexico, have been extensively evaluated.

The original PROGRESA evaluation constitutes a sterling example of an evaluation of a very large program based on an RCT.

That evaluation has generated a very large literature.

The evaluation is also illustrative of many interesting issues.

Given the impacts, how does one change the program?
Are impacts homogeneous?
Can impacts estimated in one context be generalized?
What are the unintended consequences (positive and negative) and spillovers?
Can CCTs be used as a platform for new additional interventions?


An Example: CCT evaluations

How does one change the program?

To answer this question it is necessary to estimate a structural model of individual behaviour.

This was done by Todd and Wolpin (2006) and Attanasio, Meghir and Santiago (2005).

The availability of the evaluation data allows the estimation of more flexible models.

Having estimated a structural model, one can simulate changes to theprogram and predict impacts.

These estimates and simulations have inspired recent innovations to the program in Mexico:

Oportunidades is piloting new versions of the grant structure in Puebla and Ecatepec.
The new grant structure eliminates the primary school subsidies and increases the secondary school ones.
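As a purely illustrative sketch of this logic (not the actual Todd-Wolpin or Attanasio-Meghir-Santiago specification), the toy choice model below ties school enrolment to the grant level and then simulates a counterfactual that mirrors the reform just described: removing the primary subsidy and raising the secondary one. All functional forms, parameters and numbers are invented:

```python
import math

def enrolment_rate(grant, wage, beta=0.15):
    """Probability of enrolment under a toy logit choice model:
    schooling utility rises with the grant, falls with the forgone wage."""
    utility = beta * grant - 0.05 * wage
    return 1.0 / (1.0 + math.exp(-utility))

# Hypothetical baseline: equal grants at primary and secondary level,
# with older children facing a higher forgone wage.
baseline = {"primary": enrolment_rate(grant=10, wage=20),
            "secondary": enrolment_rate(grant=10, wage=40)}

# Counterfactual: eliminate the primary subsidy, double the secondary one,
# holding the total grant budget per pair of children roughly fixed.
reform = {"primary": enrolment_rate(grant=0, wage=20),
          "secondary": enrolment_rate(grant=20, wage=40)}

for level in ("primary", "secondary"):
    print(level, round(reform[level] - baseline[level], 3))
```

The point is the workflow, not the numbers: once the behavioural parameters are estimated, any grant structure can be fed through the model to predict impacts before piloting it in the field.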


An Example: CCT evaluations

Impact heterogeneity

There is much evidence of impact heterogeneity.

Rural vs. urban.
Different states.
Different levels of infrastructure.

What is the effect in urban areas?

What are the spillover effects?

What is the interaction with quality?
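The subgroup comparison behind such heterogeneity questions can be sketched with synthetic data: estimate the difference in means separately for rural and urban observations. All effects and sample sizes below are invented for illustration:

```python
import random
import statistics

random.seed(1)

# Synthetic illustration of impact heterogeneity: the same program is
# assumed to have a larger effect in rural areas than in urban ones.
true_effect = {"rural": 2.0, "urban": 0.5}

data = []
for _ in range(4000):
    area = random.choice(["rural", "urban"])
    treated = random.random() < 0.5  # randomized within each area
    y = random.gauss(10.0, 2.0) + (true_effect[area] if treated else 0.0)
    data.append((area, treated, y))

def subgroup_ate(area):
    """Difference in means within one subgroup."""
    treated = [y for a, t, y in data if a == area and t]
    control = [y for a, t, y in data if a == area and not t]
    return statistics.mean(treated) - statistics.mean(control)

for area in ("rural", "urban"):
    print(area, round(subgroup_ate(area), 2))
```

A single pooled estimate would average these two very different impacts together, which is why subgroup (and interaction) analysis matters for extrapolating results across contexts.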


An Example: CCT evaluations

Other challenges

To model behaviour we need information on its determinants.

Information and expectations.
Beliefs.
Access to (quality) services and markets.

Much work is needed on measurement.

An example: cognitive development in the early years.

Can CCTs be used for interventions on nutrition or cognitive development?

Food prices?
