Experimental Design and Other Evaluation Methods

Lana Muraskin

lmuraskin@yahoo.com


Clearing the Air

Experimental design has a poor reputation in the TRIO community

Over-recruitment is seen as difficult and unfair

Outcomes of evaluations have been disappointing, even counterintuitive

That reputation is unfortunate

Experimental design offers opportunities for understanding program and component effectiveness


Why employ experimental design?

A good way to understand the impact of an intervention, because it allows us to:

Eliminate selection bias (a serious problem, not fully addressed in quasi-experimental evaluations, such as the national evaluation of SSS)

Be sure we are comparing groups on all significant characteristics (can’t always be sure with quasi-experimental designs)
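
As a concrete illustration (not from the original slides), here is a minimal Python sketch of random assignment; the applicant names and pool size are hypothetical:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical applicant pool, twice the size the project can serve
applicants = [f"applicant_{i}" for i in range(200)]

random.shuffle(applicants)
treatment = applicants[:100]  # offered project services
control = applicants[100:]    # not offered services

# Assignment ignores every applicant characteristic (motivation, GPA,
# income), so the two groups are comparable in expectation on all
# characteristics, measured or unmeasured.
```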


Experimental design can overcome these obstacles, but…

Random assignment of people to services doesn’t always ensure that a treatment/no treatment design will occur

Behavior in a project setting is hard to “control.”

Project staff may behave differently when over-recruiting than they would under other circumstances

Random assignment may be impossible or extremely costly


Quasi-experimental design

Matched comparison groups

Comparison groups sometimes drawn from the same cohorts, sometimes from other cohorts

Probably more suited to individual TRIO projects

Already in effect in many projects
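
For contrast, a minimal sketch (not from the slides) of building a matched comparison group by greedy nearest-neighbor matching; the traits and values are hypothetical:

```python
# Each record lists a student and two observed traits; all values are
# hypothetical illustrations.
participants = [
    {"id": "p1", "gpa": 2.8, "income": 24000},
    {"id": "p2", "gpa": 3.1, "income": 31000},
]
non_participants = [
    {"id": "c1", "gpa": 2.7, "income": 25000},
    {"id": "c2", "gpa": 3.3, "income": 30000},
    {"id": "c3", "gpa": 2.0, "income": 18000},
]

def distance(a, b):
    # Crude unweighted distance on the two observed traits; real matching
    # would standardize and weight the characteristics.
    return abs(a["gpa"] - b["gpa"]) + abs(a["income"] - b["income"]) / 10000

# Greedy one-to-one matching; each comparison student is used at most once.
pool = list(non_participants)
matches = {}
for p in participants:
    best = min(pool, key=lambda c: distance(p, c))
    matches[p["id"]] = best["id"]
    pool.remove(best)

print(matches)  # {'p1': 'c1', 'p2': 'c2'}
```

Unlike random assignment, matching balances only the characteristics that are measured, which is why selection bias remains a concern with this design.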


What is/is not learned through treatment/no treatment designs—experimental or quasi-experimental?

Can learn whether project participation “works” in a global sense, and for different participant subgroups

Both approaches often treat the services projects provide as a “black box”

Even when services are counted, we rarely learn which project features account for a project's success or lack of success (participants can't be randomly assigned to different services)


Are there other alternatives for project evaluations?

Service variation or service mix designs

Can be "experimental" under some circumstances; more often quasi-experimental

Can enable projects to learn about their performance and make changes as needed

Hard to implement but worth the effort—best done with groups of projects


Possible experimental or quasi-experimental designs within a project

Vary services over time (compare participants in a baseline year and participants in subsequent year(s) as services or mix of services differ)

Randomly assign participants to different services or mix of services

Create artificial comparison groups and track participants and comparisons over time
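
A minimal sketch of the second design above, randomly assigning participants to different service mixes; the arm names and group size are hypothetical:

```python
import random

random.seed(7)  # reproducible illustration

# Hypothetical service arms within a single project
arms = ["tutoring_only", "counseling_only", "tutoring_plus_counseling"]
participants = [f"participant_{i}" for i in range(90)]

# Shuffle, then deal participants to the arms round-robin
shuffled = random.sample(participants, k=len(participants))
assignment = {person: arms[i % len(arms)] for i, person in enumerate(shuffled)}
```

Because every arm receives some service, no participant is denied help, yet the arms can still be compared on later outcomes.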


Another alternative: type and intensity of implementation evaluation

Can be done by all projects (if only so staff can sleep easier)

Track the mix and extent (intensity) of service each participant receives: what each receives, how much, from whom

Decide what you consider "high fidelity" service (observe service, create measures) and high/medium/low participation

See whether more and "better" services lead to better participant outcomes; if not, see which services seem to account for better or worse outcomes
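
A minimal sketch of such tracking, with a hypothetical service log and hypothetical cut points for the participation levels:

```python
from collections import defaultdict

# Hypothetical service log: one row per contact
# (participant, service, hours, provider)
service_log = [
    ("ana", "tutoring", 2.0, "staff tutor"),
    ("ana", "counseling", 1.0, "counselor"),
    ("ben", "tutoring", 0.5, "peer tutor"),
]

# Total service hours per participant
hours = defaultdict(float)
for participant, service, hrs, provider in service_log:
    hours[participant] += hrs

def participation_level(total_hours):
    # Hypothetical cut points; a project would set its own thresholds
    # after observing services and creating fidelity measures.
    if total_hours >= 20:
        return "high"
    if total_hours >= 8:
        return "medium"
    return "low"

levels = {p: participation_level(h) for p, h in hours.items()}
print(levels)  # {'ana': 'low', 'ben': 'low'}

# Next step: cross-tabulate these levels against outcomes (retention,
# GPA) to see whether more and "better" service tracks better outcomes.
```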


Some caveats…

Some services are aimed at the students with the greatest difficulty (especially SSS counseling services), so more service may not be associated with better outcomes

This design won’t answer the question of whether service is better than no service (but should lead to improved service over time).

This approach won’t work if all participants get exactly the same services in the same amount (rare?)


On the plus side…

If “high fidelity” service and solid participation lead to better outcomes, it is pretty likely that project participation is worth the effort.

If there is no relationship between a service and outcomes, it's time to take a hard look at reforms; even so, the evaluation is still useful to the project.


A word about federalism

The push toward project-level experimental design evaluations seems to confuse federal and local roles

Projects do not have the resources to conduct such evaluations: to over-recruit, to conduct the evaluations, and to track participants over time

Incentives for experimental design evaluations at the local level encourage projects to shift resources from service to evaluation with little likelihood that the results will be worth the resources expended.


The Executive Branch should implement sophisticated evaluations that study program effects writ large. It has the responsibility to Congress and the public to ensure that all projects have positive outcomes.