Educational Evaluation and Policy Analysis
http://eepa.aera.net

The online version of this article can be found at:
http://epa.sagepub.com/content/32/4/498
DOI: 10.3102/0162373710382655

Sean F. Reardon, Nicole Arshan, Allison Atteberry, and Michal Kurlaender
Effects of Failing a High School Exit Exam on Course Taking, Achievement, Persistence, and Graduation
Educational Evaluation and Policy Analysis, 2010, 32: 498

Published on behalf of the American Educational Research Association by SAGE (http://www.sagepublications.com)

Additional services and information for Educational Evaluation and Policy Analysis can be found at:
Email Alerts: http://eepa.aera.net/alerts
Subscriptions: http://eepa.aera.net/subscriptions
Reprints: http://www.aera.net/reprints
Permissions: http://www.aera.net/permissions

Downloaded from http://eepa.aera.net at Stanford University Libraries on February 15, 2011
Effects of Failing a High School Exit Exam on Course Taking, Achievement, Persistence, and Graduation
Sean F. Reardon, Nicole Arshan, and Allison Atteberry
Stanford University

Michal Kurlaender
University of California, Davis
The increasing use of state-mandated public high school exit exams is one manifestation of the current movement in U.S. public schooling toward more explicit standards of instruction and accountability. Exit exam requirements implicitly argue that raising the bar for graduation creates incentives both for students to work harder in school and for schools to increase their efforts for low-achieving students. Such incentives should most strongly affect the motivation of students who fail an exit exam the first time they take the test because failing provides a clear signal of students’ need to improve their academic skills. Others argue that failing an exit exam discourages low-achieving students from staying in school. In this article, the authors use a regression discontinuity design and student-level longitudinal data from four large California public school districts to estimate the effect of failing a high school exit exam in 10th grade on subsequent student achievement, course taking, persistence in high school, and graduation. The analyses show no evidence of any significant or sizeable effect of failing the exam on high school course taking, achievement, persistence, or graduation for students with test scores near the exit exam passing score. In each case, the estimates are precise enough to rule out modest to large effects. This implies that the negative impacts of high school exit exam policies on graduation rates found in other studies are more likely a result of reduced graduation rates of very low-achieving students than of discouragement of marginally low-achieving students.
Keywords: accountability, exit exams, regression discontinuity
The increasing use of state-mandated public high school exit exams—tests each student must pass before becoming eligible to receive a high school
diploma—is one manifestation of the current movement in U.S. public schooling toward more explicit standards of instruction and accountability.
The research reported here was supported by a grant from the James Irvine Foundation and by the Hewlett Foundation through a grant to the Institute for Research on Educational Policy and Practice at Stanford University. We are indebted to the staff of the four school districts (in particular, James Gulek, Dave Calhoun, Robert Maass, and Peter Bell) for sharing their data and expertise with us; without their generosity and commitment to quality research this work would not have been possible. We also benefited from the excellent research assistance of Noli Brazil and Demetra Kalogrides. All errors, of course, are our own.
Educational Evaluation and Policy Analysis, December 2010, Vol. 32, No. 4, pp. 498–520
DOI: 10.3102/0162373710382655 © 2010 AERA. http://eepa.aera.net
The number of states requiring students to pass an exam to graduate has increased from 18 in 2002 to 24 in school year 2008–2009, with an additional two states intending to implement exit exams by 2012. Soon, more than 70% of U.S. students will be subject to such exam requirements (see, e.g., Center on Education Policy, 2004, 2005, 2009; Dee & Jacob, 2006; Warren, Jenkins, & Kulick, 2006). The effects of exit exam policies remain somewhat unclear, despite a number of recent studies. Competing notions of how such exams might influence student and school behaviors lead to divergent predictions of how these policies will affect students. Some argue that a high school exit exam requirement creates incentives both for schools to provide better instruction to struggling students and for these students to work harder to learn more before graduation. Others argue that creating additional barriers to graduation discourages students—particularly academically and socially disadvantaged students—from persisting in school and hence leads to increased dropout rates and greater inequality (for discussion, see Dee & Jacob, 2006; Reardon & Galindo, 2002; Warren et al., 2006).
In this article, we use longitudinal data from four large California school districts to estimate the effect of failing a high school exit exam in 10th grade on subsequent student achievement, persistence in high school, and graduation. We begin in Section I with a discussion of the mechanisms through which exit exams might influence student outcomes. In Section II, we review relevant research on these topics with an emphasis on distinguishing between two strands of research: research on the effects of exit exam requirements per se and research on the effects of failing (relative to passing) an exit exam, given the existence of the requirement. This article falls under the second type of research, which may be particularly useful for providing insight about the mechanisms through which exit exams operate. In Section III, we describe the California High School Exit Exam (CAHSEE) policy and its history to provide a context for the analyses that follow. Section IV briefly describes the data, sample, and measures used in our analyses. In Section V we describe our regression discontinuity design, followed by a discussion of the results of these analyses in Section VI.
I. Potential Effects of Exit Exams
By adding the requirement that students must pass a test to graduate from high school, exit exams may affect the motivation, effort, and behavior of both students and schools. According to the implicit logic of exit exam policies, schools and teachers may change their curricula in response to the requirement, focusing instruction on subject areas and material covered by the exam. Likewise, low-achieving students may alter their study habits, attendance, and attention to ensure they pass the exam. This presumption—that there is a link between exit exam policies and student or school motivation—is common in both policy discourse and academic literature regarding exit exams (see, e.g., Bishop & Mane, 2001, p. 205; Carnoy & Loeb, 2002, p. 306; Catterall, 1986, p. 4; Dee & Jacob, 2006, pp. 5–6; Jacob, 2001, p. 100; Madaus & Clarke, 2001, p. 9; Martorell, 2005, pp. 7–8; Ou, 2009, p. 3; Roderick & Engel, 2001, pp. 198–199; Warren et al., 2006, pp. 132–133).1 Nonetheless, the nature of the relationship between the two is far from straightforward.
Exit exams may affect students in two ways. First, the policy itself may have an effect if students and schools alter their behavior prior to the grade when students first take the exam. Second, after students take the exit exam for the first time (typically in 10th grade), their scores serve as a signal of their ability to ultimately pass the exam. In particular, students who fail the exam may feel motivated to work harder to pass or may drop out because of discouragement arising from their failure. This article focuses on this latter issue—how failing an exit exam affects student motivation, behavior, and outcomes.
Students’ initial performance on an exit exam may provide key information to students and their teachers that clarifies areas of weakness, stimulates student effort, and triggers the creation of a plan for acquiring missing skills before the end of high school. Bishop (2006) posits that curriculum-based, external exit exams may motivate students because they produce real, visible signals about their likelihood of graduation and hence ultimate potential to reach college or succeed in the labor market. “Higher extrinsic rewards for learning,” he argues, “are associated
[with] the taking of more rigorous courses, teachers setting higher standards and more time devoted to homework” (p. 910).
Schools may also respond in ways that positively affect student outcomes by targeting needed instructional services to students who fail. For instance, many schools delineate intervention plans for all students who fail on their first attempt. When these remediation efforts are effective, students receive the targeted assistance they need to progress, and their chances of subsequent success may increase (for more on this argument, see Dee & Jacob, 2006).
There is relatively little empirical evidence to confirm these hypothesized motivational or curricular responses to exit exam failure. One study in Chicago found that high-stakes tests appear to induce greater effort and motivation among low-achieving middle school students (Roderick & Engel, 2001). Roderick and Engel (2001) reported that the majority of the 120 low-achieving middle school students they surveyed described increased work effort under a grade retention policy—including greater attention to class work, increased academic press and support from teachers, and more time spent studying outside school. Teachers’ reports corroborated these claims. The increased effort reported in this study, however, appears to be a result of the test-based retention policy itself, not a result of failing the test. Moreover, the motivational effects of the policy appear to be heterogeneous. One third of the respondents showed little work effort despite an expressed desire not to be retained. These students tended to be lower achieving than those with high work effort, suggesting that perhaps the links among testing policies, test failure, motivation, effort, and subsequent achievement operate differently depending on students’ skill level relative to the passing threshold on the test.
Despite claims that exit exam failure serves as a useful signal to schools and students that triggers increased effort and motivation, many scholars have argued the opposite. According to this alternative view, exit exam failure does not lead to genuine student achievement gains and may, in fact, discourage students from persisting in school. Some argue that reliance on a single standardized test may have unintended consequences (Huebert & Hauser, 1998). For instance,
it is possible that students who experience failure are not motivated to try harder—some may instead feel discouraged about their prospects of eventual success (Madaus & Clarke, 2001). This concern about the unintended discouragement of low-achieving students has been a common theme in educational and sociological literature for many years (for early examples, see Blau, 1980; Glass, 1978).
Psychological research on general student motivation suggests that students’ responses to an exit exam depend largely on students’ perceptions of the reward. Goal theory suggests that passing an exit exam may represent a “performance goal”—a goal based on attaining some extrinsic standard, as opposed to a goal based on attaining mastery of some particular domain (Ames, 1984; Covington, 2000). Research on student motivation shows that performance goals generally do not lead students to improve their substantive mastery of the material but rather lead students to focus on attaining a performance standard that may be irrelevant to their mastery. That is, students perceiving the exit exam as a performance goal will focus on passing the test rather than mastering the substantive material tested. To the extent that an exit exam validly assesses students’ knowledge of the material, these two goals will coincide, and any motivational impact of test failure will lead to improved mastery of the material.
Such motivational impacts, however, rely on the students’ perception that the goal is obtainable. For low-achieving students, particularly those who score far below the passing score on an exit exam, the effects of failing may not lead to increased motivation. Students motivated by external standards link performance to innate ability instead of hard work and effort and are therefore more likely to interpret low performance as implying low self-worth. As a result, students who fail or who are at risk of failing an exam may implement defensive psychological strategies to protect themselves from such negative self-assessments (Ames, 1984; Covington, 2000), much as Roderick and Engel (2001) observed among the low-achieving students in their Chicago middle school retention study. At best, according to this argument, performance goals lead to superficial, ineffective study techniques such as memorization and rehearsal, which may
help on the exam but are unlikely to boost subsequent achievement. At worst, students looking to avoid failure (which they link to their own self-worth) will avoid challenging tasks (Ames, 1984).
Only a few studies have attempted to capture the psychological and intermediate behavioral responses specific to exit exams. Richman, Brown, and Clark (1987) found that students who failed minimum competency tests exhibited a significant increase in apprehension alongside a corresponding decrease in general self-esteem. Because neither the students with little risk of failing nor the students who had a high risk of failing but passed had any such changes along these dimensions, the authors attribute the psychological changes they observed to the experience of failing. These findings corroborate the hypothesis that students who fail will feel discouraged—rather than encouraged—and will not necessarily believe that increased effort will lead to later success.
Teachers and schools may also respond to a student’s failure on an exit exam in ways that undermine subsequent student outcomes.2 Indeed, some evidence suggests that high-stakes exams trigger unintended teacher and school behaviors. For example, Booher-Jennings (2005) studied one Texas public school where teachers confronted a new test-based grade promotion policy. Booher-Jennings describes an “educational triage” system whereby teachers used the test scores of initial failers to classify them into three groups—“safe cases,” “hopeless cases,” and “cases suitable for treatment”—and diverted resources away from the first two groups toward the latter. Teachers understood that students just below the proficiency cut score required modest educational intervention—perhaps only improved test-taking skills—to pass the exam. Neal and Schanzenbach (2007) find evidence of a similar strategy of focusing on the “bubble kids”—those who score very near the proficiency cut point—in response to a Chicago high-stakes testing policy. They found that although overall proficiency rates improved, there were little to no gains among the least academically advantaged students. These studies suggest that if schools feel pressure to respond to high-stakes tests by targeting instructional efforts on students just below the margin of initially passing the test, we might see no improvement
(or declines) in the outcomes of the very lowest achieving students, although we may see improvements in the outcomes of students who score just below the passing score.
In sum, there are cogent arguments and some empirical evidence supporting competing hypotheses about the effects of exit exams, with evidence that the effects may differ between those who just fail the exam and those who are far from passing. Evidence from the Chicago grade-retention policy suggests that students may increase their effort once they learn they are in danger of being retained, unless the students are very low achieving. It is also possible that exit exam failure leads instead to low self-assessment, disengagement, and no increase in student effort. In terms of school responses, the early administration of the exam several years prior to graduation could provide schools the chance to intervene on the students’ behalf. Yet because the stakes are high and the amount of time in which to produce results is quite short, teachers may feel forced to adopt coping strategies that are not productive in the long run. To complicate the question of the motivational and curricular effects of failing an exit exam, we note that these effects may be heterogeneous, affecting students just below the cut score differently from those who initially score far below.
Our goal in this article is to provide empirical evidence of the effects of initially failing a high school exit exam on subsequent persistence in school, course taking, achievement, and graduation for the population of students who fall just below the passing threshold. To the extent that our results indicate negative or positive impacts of initially failing the exam, they may provide evidence that is more or less consistent with each of the arguments described above.
II. Prior Research on the Effects of Exit Exams
Prior research on the effects of exit exams varies in the treatment of interest (effects of the presence of exit exam policies or the effects of failing the exam), the outcome of interest (student achievement versus dropout, persistence, or graduation), and the type of data used (individual longitudinal data or aggregate cohort data). Most
prior research has estimated the effect of a high school exit exam requirement on high school dropout or completion rates. Several studies using individual-level data from nationally representative samples (mostly from cohorts of students graduating high school in the early 1990s) found that state high school exit exams increase high school dropout rates among low-achieving students (Bishop & Mane, 2001; Jacob, 2001) or Black males (Dee & Jacob, 2006), although one similar study found no such effects (Warren & Edwards, 2005). In contrast, a set of studies examining the relationship between state exit exam policies and state-level graduation rates generally finds no effect of exit exams on dropout rates (Carnoy & Loeb, 2002; Greene & Winters, 2004; Grodsky, Warren, & Kalogrides, 2009; Warren & Jenkins, 2005), although at least two such studies find a different result (Amrein & Berliner, 2002; Marchant & Paulson, 2005). Some of these studies have important methodological shortcomings, however, that may bias their estimated effects of exit exam policies on dropout rates (discussed at length in Dee & Jacob, 2006; Warren et al., 2006). Two newer studies that correct many of the methodological shortcomings of these studies find that high school dropout rates tend to increase by roughly 1 to 2 percentage points, on average, when states implement rigorous exit exams (Dee & Jacob, 2006; Warren et al., 2006).3 Dee and Jacob (2006) find that these effects are concentrated among Black students and students in high-poverty schools.
The studies described above estimate the effect of exit exam requirements on student outcomes. Because they estimate the effect of the policies themselves, rather than of individual students’ exam performance, they provide little information about the mechanisms through which exams have their effects or about whether the effects are concentrated among very low-achieving students or students at the margin of passing. The finding that graduation rates decline under exit exam policies could result from discouragement effects but could also be simply because some students never ultimately pass the exams. Exit exams could increase student motivation but still reduce graduation rates, if some students never pass the exam despite increased motivation. Moreover, exit exams’ heterogeneous effects may reduce graduation rates
of very low-performing students while having no effect or a positive effect on the graduation rates of other students. One way to investigate the mechanisms through which exit exams operate is to examine the effects of failing a high-stakes exit exam on subsequent student academic outcomes. Moreover, investigating the effects of initially failing the exam for students at the margin of passing may help us understand what type of students the exam requirement affects.
Several existing studies provide evidence regarding the effects of failing an exit exam on the likelihood of high school graduation. Catterall (1989) finds that students who initially failed an exit exam expressed more doubt about their chances of finishing high school than those who passed, although the author does not observe whether these students were in fact less likely to finish school nor whether test failure caused the desire to drop out. Griffin and Heidorn (1996), using longitudinal student data, find that exit exam failure is associated with increased dropout rates, but only for students with moderately high GPAs, a pattern the authors interpret as evidence of a discouragement or stigma effect.
Neither of these two articles is fully persuasive regarding the effects of failing an exit exam because both rely on regression adjustment to estimate the effects of failing. Better evidence comes from a set of more recent research that relies on regression discontinuity designs. Because passing the exam is determined by a sharp threshold on an observable variable (the exit exam score), regression discontinuity estimators can provide unbiased estimates of the effects of failing an exit exam for students with skill levels near the passing threshold (Imbens & Lemieux, 2008).
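The regression discontinuity logic described here can be sketched in a few lines. The simulation below is purely illustrative (fabricated data, not the authors' data or specification): it builds a known discontinuity of −0.3 into an outcome at a 350-point cutoff and recovers it with a local linear fit whose slope is allowed to differ on each side of the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated first-attempt scores around a 350 passing cutoff, and an outcome
# with a built-in discontinuity of -0.3 at the cutoff (a hypothetical
# "effect of failing"). All numbers here are fabricated for illustration.
n = 5000
score = rng.uniform(300, 400, n)            # running variable
fail = (score < 350).astype(float)          # "treatment": scored below cutoff
centered = score - 350.0
outcome = 0.01 * centered - 0.3 * fail + rng.normal(0.0, 0.5, n)

# Local linear regression within a bandwidth of the cutoff, letting the
# slope differ on each side; the coefficient on `fail` is the estimated
# discontinuity at the threshold.
bandwidth = 25.0
keep = np.abs(centered) <= bandwidth
X = np.column_stack([
    np.ones(keep.sum()),
    fail[keep],
    centered[keep],
    centered[keep] * fail[keep],
])
beta, *_ = np.linalg.lstsq(X, outcome[keep], rcond=None)
effect = beta[1]   # should recover a value close to -0.3
```

The key identifying assumption, as in the studies discussed above, is that students just above and just below the cutoff are comparable, so any jump in outcomes at the threshold is attributable to failing.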
Using a regression discontinuity analysis, Martorell (2005) finds that Texas students who barely fail the state exit exam on their first attempt are no more likely to drop out of school by 12th grade than those who just pass on their first attempt. Martorell does not investigate whether these effects differ among student subgroups defined by race/ethnicity, poverty, or language status. In a similar analysis using Massachusetts data, Papay, Murnane, and Willett (2010) find that failing the exam reduces the probability of on-time graduation by 8 percentage points among low-income urban students at the margin of passing
but failure has no effect on the graduation rates of other students. One possible explanation for these patterns is that schools attended by low-income urban students may not provide adequate remedial instruction for students who initially fail the exam, but Papay et al. cannot test this possibility with their data. Likewise, Ou (2009) finds that failing a math exit exam in 10th grade lowers students’ probability of staying in school through 12th grade and graduating, although this effect is confined to low-income and minority students.
Each of these recent studies provides significant evidence regarding the effects of exit exams on student persistence and graduation. Two of the three studies suggest that failing an exit exam leads to lower graduation rates, particularly for disadvantaged students, a finding more consistent with the discouragement hypothesis than the motivational hypothesis. Nonetheless, none of these three studies provides estimates of the effect of failing an exit exam on student achievement, an important element in understanding the effects of exit exams. In fact, there is little evidence on the effects of exit exam requirements on achievement.4
In this article, we estimate the effects of failing an exit exam using a regression discontinuity design similar to that used by Martorell (2005), Papay et al. (2010), and Ou (2009). In addition to examining the effects of initial exit exam failure on persistence and graduation, we also estimate the effects of 10th grade exit exam failure on subsequent achievement. Our estimates on achievement are particularly useful for assessing the extent to which exit exams may motivate students who fail to work harder or may motivate schools to target instructional resources toward such students. We rely on data from four large school districts in California, where an exit exam requirement was implemented for students in the graduating class of 2006.
III. The California High School Exit Exam
The California State Legislature passed Senate Bill 2X in March 1999, requiring all local school districts to administer the CAHSEE and provide supplemental instruction to those students who do not demonstrate sufficient progress toward passing the exam. The bill claimed that, to “significantly improve pupil achievement in high school and to ensure that pupils who graduate from high school can demonstrate grade level competency in reading, writing, and mathematics, the state must set higher standards for high school graduation” (Senate Bill 2X, sec. 1(b)).5
The CAHSEE is a two-part exam of mathematics and English language arts (ELA) skills. The math section assesses students’ mastery of the California math content standards for 6th and 7th grade and their Algebra 1 skills using a multiple-choice format. The ELA section is aligned with state content standards through 10th grade and utilizes a multiple-choice format along with one essay. Both tests are administered in English, regardless of a student’s primary language.6 Students must pass both parts (with a minimum score of 350) to earn a high school diploma.
The test is first administered to students in the spring of 10th grade, and students have at least five subsequent opportunities to retake the sections they have not yet passed (twice each in 11th and 12th grade and at least once following the end of the 12th grade school year).7 Districts notify students and their parents of their CAHSEE performance about 7 weeks after the exam is administered. Because students are told their exact score, not simply whether they passed or failed, students who fail have some sense of how close they came to scoring the requisite 350 they need to meet the CAHSEE requirement.
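As a concrete restatement of the passing rule just described (a minimum score of 350 on both sections, with retakes allowed on any section not yet passed), the following minimal sketch checks whether a student's best attempts satisfy the requirement; the function name and input layout are ours, not part of any CAHSEE data system.

```python
# Passing rule from the text: a student meets the CAHSEE requirement only by
# scoring at least 350 on BOTH the math and ELA sections, possibly across
# multiple attempts. Input: one list of attempt scores per section.
PASSING_SCORE = 350

def passed_cahsee(math_scores, ela_scores):
    """True if the best attempt on each section reaches the passing score."""
    return (max(math_scores) >= PASSING_SCORE
            and max(ela_scores) >= PASSING_SCORE)
```

For example, a student who scored 340 and then 355 in math and 360 in ELA meets the requirement, while a student whose best math score is 349 does not, regardless of the ELA score.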
IV. Data
We use longitudinal student-level data from four large California school districts to investigate the effects of failing the CAHSEE. The districts are 4 of the 10 largest school districts in California, collectively enrolling more than 110,000 high school students (about 5.5% of high school students in the state) annually. For our primary analyses, we use data from five cohorts of students—students scheduled to graduate in 2006 through 2010—for whom the CAHSEE requirement was binding and for whom we have outcome data.8 Specifically, our analyses include students who took the CAHSEE for the first time in 10th grade in spring 2004 through spring 2008. We
exclude from our analyses students classified as special education students (roughly 10% of students) because these students were not subject to the CAHSEE requirement in most of the years covered by our analyses.
Outcome Measures
We estimate the effect of failing the CAHSEE exam on four primary outcomes—academic achievement, subsequent course taking, persistence to 12th grade, and graduation. We measure academic achievement in two ways. First, we use the spring 11th grade ELA California Standards Test (CST) score. All students in California take the same ELA CST test in 11th grade. We have no outcome measure of mathematics achievement, however, because there is no common math test taken by all students in 11th grade in California. Instead, students take one of a number of different subject or content math CST tests (e.g., Geometry, Algebra 2), depending on what 11th grade math course they are enrolled in. As a result, 11th grade math CST scores are not comparable across students. Moreover, because students’ GPAs are partly a function of the courses they take—which may be affected by performance on the 10th grade CAHSEE—we do not use GPA as a measure of students’ academic achievement.
Although we have no common measure of mathematics achievement, the math test taken by a student indicates the level of mathematics curriculum to which he or she was exposed in 11th grade. We use this information as an additional outcome measure. Specifically, we construct two math course indicator variables using the information on which specific math CST test a student took in 11th grade—one indicating whether a student was enrolled in a class higher than Algebra 1 in the 11th grade, the other indicating whether a student was enrolled in a class higher than Geometry in the 11th grade.9
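The construction of these two indicators can be sketched as follows. The CST test names and their ordering below are illustrative assumptions on our part, not the districts' actual coding scheme.

```python
# Hypothetical ordering of 11th grade math CST tests by course level; the
# specific test names and codes here are illustrative, not the real schema.
COURSE_LEVEL = {
    "General Mathematics": 0,
    "Algebra 1": 1,
    "Geometry": 2,
    "Algebra 2": 3,
    "Summative High School Mathematics": 4,
}

def math_course_indicators(cst_test_name):
    """Return the two course-taking indicators for one student:
    (enrolled above Algebra 1, enrolled above Geometry)."""
    level = COURSE_LEVEL[cst_test_name]
    return level > 1, level > 2
```

A student who took the Geometry CST, for instance, would be coded as above Algebra 1 but not above Geometry.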
Although we cannot directly determine whether students have dropped out of high school—because students who leave a given district prior to graduation may be dropouts or may have left and enrolled elsewhere—we can identify whether students are present in the district 2 years after first taking the CAHSEE (in the spring of 12th grade). We construct a binary variable indicating
whether students are present in the district in the spring semester 2 years after they first took the CAHSEE in 10th grade.10 We use the indicator of presence in spring of the scheduled 12th grade year as an indicator of persistence in schooling. Of course, some students may not be present in the district because they have transferred to another district. Nonetheless, if we observe that failing the CAHSEE affects the probability that a student is present in the district in 12th grade, we can assume that this is because failing the CAHSEE affects persistence or dropout rates. It is unlikely that CAHSEE failure affects the probability of transferring to another district within the state because students will be subject to the CAHSEE requirement in any district within the state. It is, however, possible that CAHSEE failure may affect the probability of transferring into private schools (where the CAHSEE is not required) or even out of the state, but such effects would likely be extremely small. Thus, we argue that any effects of CAHSEE failure on persistence are likely because of dropout effects.
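A minimal sketch of this persistence indicator, assuming a simple enrollment-record layout (the record fields and values below are hypothetical, not the districts' actual schema):

```python
# A student "persists" if any enrollment record places them in the district
# in the spring term two years after their first (10th grade spring) CAHSEE
# attempt. Records and field names here are illustrative only.
records = [
    {"student": "A", "year": 2004, "term": "spring"},   # first CAHSEE attempt
    {"student": "A", "year": 2006, "term": "spring"},   # present in 12th grade
    {"student": "B", "year": 2004, "term": "spring"},   # no later record
]

# Year of each student's first observed (10th grade) spring record.
first_attempt = {}
for r in records:
    s = r["student"]
    first_attempt[s] = min(first_attempt.get(s, r["year"]), r["year"])

# Binary persistence indicator: enrolled two springs later?
enrolled = {(r["student"], r["year"], r["term"]) for r in records}
persisted = {s: (s, yr + 2, "spring") in enrolled
             for s, yr in first_attempt.items()}
```

As the text notes, this indicator conflates dropout with out-of-district transfer, which is why the authors interpret effects on it as dropout effects only after ruling out transfer as a likely response to CAHSEE failure.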
Finally, we estimate the effect of failing the CAHSEE on the probability of graduating from the district using a binary indicator of graduation status provided by the districts. We have access to 11th grade outcome data—ELA CST scores and math course enrollment—for the five cohorts of students who were in 10th grade in 2004, 2005, 2006, 2007, and 2008; 12th grade outcome data (persistence and graduation) are available only for the first four of these cohorts.11
Descriptive Statistics
The first column (“All Students”) of Table 1 displays basic descriptive statistics and mean outcomes for the students included in this analysis (we discuss columns 2–7 of Table 1 in the following section on the analytic strategy). Students in our sample have slightly higher ELA CST test scores in 8th and 10th grade than the state average. The graduation rate is about 69% (though recall that this understates the true graduation rate, as some students have likely transferred and graduated from other districts). The persistence rate, however, is 8 percentage points higher than the graduation rate, indicating that a large number of students remain in school and in the same district through 12th grade but do not graduate.
at Stanford University Libraries on February 15, 2011http://eepa.aera.netDownloaded from
TABLE 1
Selected Student Characteristics and Outcomes, for Various Student Samples

Samples: (1) all students; (2) students whose lower score is 345–355; (3) students who passed math and scored 345–355 on ELA; (4) students who passed ELA and scored 345–355 on math; (5) students whose higher score is 345–355; (6) students who failed math and scored 345–355 on ELA; (7) students who failed ELA and scored 345–355 on math.

| | (1) | (2) | (3) | (4) | (5) | (6) | (7) |
|---|---|---|---|---|---|---|---|
| Covariates | | | | | | | |
| Percentage White | 0.194 | 0.103 | 0.084 | 0.119 | 0.064 | 0.067 | 0.059 |
| Percentage Hispanic | 0.388 | 0.482 | 0.467 | 0.491 | 0.563 | 0.568 | 0.565 |
| Percentage Black | 0.134 | 0.180 | 0.139 | 0.214 | 0.185 | 0.218 | 0.147 |
| Percentage Asian | 0.259 | 0.214 | 0.287 | 0.154 | 0.171 | 0.133 | 0.213 |
| Percentage female | 0.514 | 0.532 | 0.422 | 0.623 | 0.514 | 0.579 | 0.444 |
| Percentage free-lunch eligible | 0.455 | 0.544 | 0.573 | 0.513 | 0.596 | 0.588 | 0.617 |
| Percentage ELL | 0.171 | 0.225 | 0.328 | 0.136 | 0.401 | 0.311 | 0.511 |
| 8th grade ELA CST score (standardized) | 0.086 | –0.432 | –0.586 | –0.297 | –0.789 | –0.724 | –0.884 |
| 10th grade ELA CST score (standardized) | 0.068 | –0.438 | –0.584 | –0.314 | –0.809 | –0.749 | –0.885 |
| Outcomes | | | | | | | |
| Mean 11th grade ELA CST score (standardized) | 0.090 | –0.471 | –0.595 | –0.361 | –0.800 | –0.765 | –0.852 |
| Percentage present in spring 12th grade | 0.772 | 0.762 | 0.781 | 0.749 | 0.686 | 0.682 | 0.685 |
| Graduation rate (by spring of 12th grade) | 0.686 | 0.664 | 0.687 | 0.647 | 0.500 | 0.496 | 0.498 |
| Took 11th grade math course higher than Algebra 1 | 0.833 | 0.785 | 0.823 | 0.757 | 0.679 | 0.648 | 0.703 |
| Took 11th grade math course higher than Geometry | 0.539 | 0.331 | 0.406 | 0.270 | 0.191 | 0.161 | 0.216 |
| n | 106,454 | 9,744 | 4,463 | 5,342 | 6,277 | 3,264 | 2,952 |

Note. ELA = English language arts; ELL = English language learner; CST = California Standards Test. Column 1 contains descriptive statistics of student covariates and outcomes for the entire analytic sample. The sample is limited to non-special-education-status students who have complete covariate data, California High School Exit Exam (CAHSEE) test score data, and nonmissing 11th grade ELA CST scores. Columns 2–7 are restricted to students within five points of the specified CAHSEE cut score. These samples represent the approximate populations to whom the estimates from each of our models apply (the models are described below). Students are designated as ELL if they were designated as an ELL in spring of 10th grade. A student was categorized as a special education student if he or she had ever received special education services or had been designated special education status at any point after 8th grade. All ELA CST scores are reported in z scores, standardized by test year within the state.

from the district. It is not clear from these figures, however, to what extent, if at all, this failure to graduate is because of the CAHSEE. Certainly some students do not graduate because of a failure to accumulate sufficient credits, regardless of whether they have passed the CAHSEE or not.
Table 2 summarizes initial CAHSEE passing rates and mean ELA and math CAHSEE scores, broken down by race/ethnicity, gender, socioeconomic status, English language learner (ELL) status, and eighth grade ELA proficiency level (as defined by state standards on the state eighth grade ELA test). Overall, about two thirds of students pass both of the CAHSEE exams on their first attempts. White students have the highest initial passing rates on the CAHSEE, followed by Asian students. Hispanic and African American students have passing rates more than 30 percentage points lower than those of their White counterparts.
Students designated as ELLs have by far the lowest passing rates of any demographic group, with an average pass rate of only 21.4%. Furthermore, eighth grade CST performance level is a very strong predictor of subsequent CAHSEE pass rates, which is not surprising given that the CAHSEE is intended to measure skills largely in line with California's middle school performance standards (also see Kurlaender, Reardon, & Jackson, 2008; Zau & Betts, 2008). Evidence from Table 2 suggests that initial passing status is generally related to many factors that may independently influence student outcomes. This highlights the importance of adopting a research design that can disentangle exit exam failure from these other factors.
Table 2
Initial Attempt Passing Rates and Average Scores on ELA and Math CAHSEE, by Student Characteristics

| | Proportion who pass ELA on first attempt | Proportion who pass math on first attempt | Proportion who pass both on first attempt | Mean CAHSEE score: ELA | Mean CAHSEE score: Math |
|---|---|---|---|---|---|
| Overall | 0.789 | 0.777 | 0.681 | 379.0 | 381.4 |
| Race/ethnicity | | | | | |
| White | 0.944 | 0.916 | 0.873 | 401.9 | 399.3 |
| Hispanic | 0.693 | 0.670 | 0.556 | 366.2 | 366.9 |
| Black | 0.742 | 0.651 | 0.560 | 369.8 | 364.2 |
| Asian | 0.828 | 0.885 | 0.774 | 384.3 | 396.5 |
| Gender | | | | | |
| Male | 0.759 | 0.788 | 0.672 | 375.1 | 383.1 |
| Female | 0.817 | 0.767 | 0.690 | 382.7 | 379.8 |
| Socioeconomic status | | | | | |
| Not free-lunch eligible | 0.850 | 0.826 | 0.751 | 387.5 | 388.1 |
| Free-lunch eligible | 0.716 | 0.718 | 0.598 | 368.8 | 373.4 |
| ELL status | | | | | |
| Non-ELL | 0.886 | 0.840 | 0.777 | 387.7 | 387.5 |
| ELL | 0.308 | 0.465 | 0.214 | 335.8 | 351.2 |
| 8th grade ELA performance level | | | | | |
| Far Below Basic | 0.210 | 0.326 | 0.125 | 331.4 | 341.9 |
| Below Basic | 0.518 | 0.527 | 0.345 | 349.8 | 353.6 |
| Basic | 0.895 | 0.827 | 0.757 | 375.5 | 375.9 |
| Proficient | 0.993 | 0.973 | 0.956 | 403.7 | 404.1 |
| Advanced | 0.999 | 0.998 | 0.990 | 427.0 | 428.2 |

Note. Columns 1–3 report the proportion of students in each subcategory who passed the ELA, math, or both sections of the spring 10th grade CAHSEE administration. Columns 4–5 report mean CAHSEE scores of each group. The sample is limited to non-special-education-status students who have complete covariate data, CAHSEE test score data, and nonmissing 11th grade ELA CST scores.
High School Exit Exam
V. Analytic Strategy
We rely on a regression discontinuity strategy to obtain unbiased estimates of the effect of failing the CAHSEE for students near the cut score. Regression discontinuity relies on the plausible assumption that, in the limit, students who score arbitrarily close to, and below, the passing score are similar on average in every way to students who score arbitrarily close to, but above, the passing score. As a result, the estimated average outcomes of those who score just above the passing score can stand as valid counterfactual estimates for what the average outcomes of those who score just below the passing score would have been had they passed the test (Imbens & Lemieux, 2008).
Because the CAHSEE consists of two separate tests—math and ELA—a student may fail the CAHSEE in 10th grade in one of three different ways (failing the ELA portion, failing the math portion, or failing both). As a result, there are several possible estimands of interest (for a discussion of the multiple estimands available when using multiple rating scores in regression discontinuity, see Reardon & Robinson, 2010). For example, one may estimate the effect of failing at least one of the exams (relative to passing both); in addition, we estimate the effect of failing both of the exams (relative to passing at least one). These two estimates each may provide some evidence of the effect of failing an exit exam in 10th grade. Moreover, differences between the two estimates may be suggestive of motivational and discouragement effects. For example, if students who perceive themselves as unlikely to ever pass the test are more likely to be discouraged and students who perceive themselves as close to passing are more likely to be motivated, we might expect more positive effects of failing one versus no exams than of failing both versus one exam.
The effects also may depend on which test a student failed. This possibility is particularly likely with subject-specific outcomes. For example, failing the ELA exam may have positive effects on subsequent ELA achievement if it motivates students or causes schools to provide remedial ELA instruction for them. But failing the math exam may have no effects or negative effects on ELA achievement because it likely
induces little motivation to improve one's ELA skills or causes students (and schools) to focus on math skills at the expense of ELA. Thus, taking into account the specific subject failed may be important to understanding the effects on the subject-specific outcomes investigated herein, including 11th grade ELA CST scores and 11th grade math course taking.
To estimate the effects of failing different numbers and subjects of the exit exams, we fit multiple regression discontinuity models. Each of the models has the form
$$y_{icd} = f(R_{icd}) + \delta \cdot \mathbf{1}(R_{icd} < 350) + X_{icd}B + \Gamma_{cs} + \varepsilon_{icd} \qquad (1)$$
where $y_{icd}$ is the outcome (CST score in 11th grade, math course type in 11th grade, presence in spring 12th grade, graduation) for student i in cohort c in district d, $R_{icd}$ is the 10th grade math or ELA CAHSEE score for student i in district d and cohort c (or some monotonic combination of the math and ELA scores), f is a continuous function of R, X is a vector of student covariates, Γ is a vector of cohort-by-school fixed effects, and ε is an error term. The parameter of interest here is δ, which indicates the average effect of failing the CAHSEE on the outcome for a student with a score right at the margin of passing. We estimate these models using ordinary least squares (OLS); although two of our outcomes are binary, our OLS linear probability models provide results nearly identical to those from logistic regression models, but the OLS estimates are much more readily interpretable. We compute standard errors accounting for the clustering by CAHSEE scores, as recommended by Lee and Card (2008), because of the discrete nature of the scores.
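As a rough illustration, a model of this form can be sketched on simulated data. The sketch below is ours, not the authors' code: it omits the covariates X and the cohort-by-school fixed effects, uses made-up scores and outcomes with a known jump of –0.05 at the cut score, and fits a quadratic in the rating score with separate slope and curvature on each side of the cut by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
CUT = 350  # CAHSEE pass-fail cut score

# Simulated rating scores and an outcome with a known jump of -0.05 at
# the cut. All data and the true effect size here are made up.
r = rng.integers(300, 401, size=20_000).astype(float)
rc = r - CUT                    # center the rating score at the cut
fail = (r < CUT).astype(float)  # treatment indicator: 1(R < 350)
y = 0.01 * rc - 0.0001 * rc**2 - 0.05 * fail + rng.normal(0, 0.3, r.size)

# Quadratic in the rating score, with separate slope and curvature on
# each side of the cut (the fail interactions), plus the jump dummy.
X = np.column_stack([np.ones_like(rc), rc, rc**2,
                     fail, fail * rc, fail * rc**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
delta = beta[3]  # estimated effect of failing at the margin
print(round(delta, 3))  # should be close to the true jump of -0.05
```

In the article's models, the covariates and school-by-cohort fixed effects enter this design matrix as additional columns, and standard errors are clustered on the discrete rating score rather than taken directly from OLS.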
Fitting regression discontinuity models requires the choice of a functional form f and a choice of bandwidth around the cutoff score. Based on visual inspection of the data, we use a quadratic function for f, allowing the quadratic to have a different slope and different curvature on either side of the passing score. We use the cross-validation method suggested by Ludwig and Miller to choose the optimal bandwidth, given this functional form (Imbens & Lemieux, 2008; Ludwig & Miller, 2007). Depending on the specific outcome and/or model, this procedure indicates optimal bandwidths ranging from 30 to 50 points on the CAHSEE scale (roughly 1 SD),
although the cross-validation criterion is largely insensitive to differences in the bandwidth above bandwidths of 20. We present results in the article using a bandwidth of 50 for all analyses. Most of the results are substantively unchanged if we use larger or smaller bandwidths (see Table A2 in the appendix).
To estimate the effect of failing both tests (relative to passing at least one), we construct a new rating score that is the maximum of a student's scores on the math and ELA tests. If this maximum score is less than 350 (i.e., the pass–fail cut score for both tests), the student has failed both tests; if the maximum is 350 or greater, the student has passed at least one of the two. Using this maximum score as the rating score in the regression discontinuity model allows us to estimate the effect of failing both versus passing at least one of the tests. Likewise, using the minimum of the math and ELA scores as the rating score allows us to estimate the effect of failing at least one test versus passing both.
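The two composite rating scores can be sketched directly. This is a toy example with hypothetical score pairs; the variable names are ours, not the authors'.

```python
import numpy as np

CUT = 350  # pass-fail cut score for both CAHSEE sections

# Hypothetical 10th grade first-attempt scores for five students.
ela = np.array([340, 355, 349, 360, 345])
math = np.array([352, 348, 341, 370, 345])

# Rating score for "failed both vs. passed at least one": a student
# failed both sections iff their *higher* score is below the cut.
r_max = np.maximum(ela, math)
failed_both = r_max < CUT

# Rating score for "failed at least one vs. passed both": a student
# failed at least one section iff their *lower* score is below the cut.
r_min = np.minimum(ela, math)
failed_at_least_one = r_min < CUT

print(failed_both.tolist())          # [False, False, True, False, True]
print(failed_at_least_one.tolist())  # [True, True, True, False, True]
```

Each composite score then serves as the single rating score R in the regression discontinuity model, with the corresponding indicator as the treatment dummy.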
To estimate the effects of failing a specific subject test, we restrict the sample to include only students who have either passed or failed one of the tests and then use the regression discontinuity model to estimate the effect of failing the other test. For example, we limit the sample to those who passed the math test and estimate the effect of failing the ELA test, using the ELA test score as the rating score in Equation 1. This yields four additional estimands: the effect of failing the math CAHSEE for those students who passed the ELA CAHSEE and for those who failed the ELA CAHSEE, and the effect of failing the ELA CAHSEE for those students who passed the math CAHSEE and for those who failed the math CAHSEE. In summary, we estimate the effects of six different types of exit exam failure: (a) the effect of failing at least one of the two CAHSEE sections, (b) the effect of failing the ELA section, when passing the math section, (c) the effect of failing the math section, when passing the ELA section, (d) the effect of failing both sections, (e) the effect of failing the ELA section, when also failing the math section, and (f) the effect of failing the math section, when also failing the ELA section.
Each of our regression discontinuity models yields an estimate of the average effect of failing (vs. passing) the exam for students with scores at
the margin of passing, as defined by the specific type of failure. As with all regression discontinuity designs, these estimated effects may not generalize to students far from passing, as we discuss in the conclusion. To this end, columns 2 to 7 of Table 1 provide descriptive statistics for students near (i.e., within 5 points—roughly one tenth of a standard deviation—on either side) each passing margin. These descriptive data indicate that students on the margin of passing the CAHSEE (those to whom our estimates apply) are more likely to be Black or Hispanic, low-income, ELL, and low-achieving students than the population of all students in the four districts.
Prior to estimating the effects of CAHSEE failure, we conduct a series of checks to assess the validity of the key regression discontinuity assumptions—the exogeneity of the cutoff score and the rating scores and the continuity of the potential outcomes (Imbens & Lemieux, 2008). The data show no evidence of manipulation of the rating scores; the density of the CAHSEE scores is similar on each side of the 350 passing score (results not shown). Moreover, we find no evidence of any systematic discontinuity on any pretreatment student covariates (see Table A1 in the appendix).
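A crude version of the density comparison can be sketched as follows. This toy simply counts scores in narrow bins on either side of the cut on simulated, manipulation-free data; a formal version would use a McCrary-type density test. All numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
CUT = 350

# Hypothetical first-attempt scores with no manipulation: the count of
# students just below the cutoff should be close (up to the smooth
# slope of the score distribution) to the count just above it.
scores = rng.normal(370, 25, size=50_000).round()

width = 5  # compare 5-point bins on either side of the cut score
below = np.sum((scores >= CUT - width) & (scores < CUT))
above = np.sum((scores >= CUT) & (scores < CUT + width))
ratio = below / above
print(below, above, round(ratio, 2))
```

A ratio far from what the smooth shape of the score distribution implies (e.g., a sharp deficit of scores just below 350) would suggest sorting around the cutoff and threaten the design's validity.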
Finally, in addition to estimating the effect of failing the CAHSEE on academic achievement, math course taking, persistence, and graduation for the total sample, we also estimate the models separately by race/ethnicity, gender, ELL status, and free or reduced-price lunch eligibility status to investigate whether disadvantaged populations experience different average effects than their more advantaged counterparts (as found in Ou, 2009; Papay et al., 2010).
VI. Results
We begin by examining the estimated effects of failing the CAHSEE ELA or math exam on subsequent achievement, course enrollment, persistence, and graduation. Figure 1 graphically illustrates the relationship between students' minimum CAHSEE scores (the minimum of their ELA and math 10th grade CAHSEE scores) and four of the outcomes of interest. The graphs show little or no discontinuity in average outcomes across the passing score of 350. Figures using the maximum score, the ELA score, or the
math score as the rating score likewise display little or no evidence of discontinuities in outcomes (additional figures not shown, in the interest of space).
Table 3 reports the estimated overall effects of the six ways in which students can fail the CAHSEE in 10th grade, based on regression discontinuity models of the form shown in Equation 1. In general, there is no evidence of any systematic effect of failing one or both parts of the CAHSEE on students' 11th grade ELA achievement or their persistence in school through 12th grade. With respect to graduation, the estimated effect of failing at least one part of the CAHSEE is positive (+2.7 percentage points) and statistically significant. However, this estimate is very sensitive to the bandwidth used (see Table A2 in the appendix); the point estimate ranges from –2.8 to +2.8 percentage points, depending on the bandwidth (and is not statistically significant in most cases). The instability of this estimate leads us to discount its reliability.
Failing the math CAHSEE exam appears to lower the probability of taking a math course higher than Geometry in 11th grade by 5.1 percentage points (against a base rate of 33% for students who barely passed the math CAHSEE and who passed the ELA CAHSEE). No such effect is evident for failing the ELA exam, suggesting that the reduction in upper-level math course taking results from students being placed in remedial or lower-level math courses specifically because they failed the math portion of the CAHSEE. Failing the math CAHSEE does not appear to significantly reduce students' likelihood of taking upper-level math courses if they have also failed the ELA CAHSEE, although the point estimate is negative. Nor does failing an exam affect students' likelihood of enrolling in a low-level math course (Algebra 1 or below) in 11th grade, likely because the remedial math courses are at a higher level than Algebra 1.
[Figure 1. Outcomes, by 10th grade minimum CAHSEE score. Four panels plot mean outcomes against the 10th grade minimum CAHSEE score (300–400): mean 11th grade ELA CST score (standardized); proportion still present in spring of 12th grade; proportion who graduate by spring of 12th grade; and proportion who took 11th grade math higher than Geometry. CAHSEE = California High School Exit Exam; ELA = English language arts; CST = California Standards Test.]

The general lack of statistically significant estimated effects of failing the CAHSEE here is not the result of low statistical power. The standard errors of these estimated effects are small. The estimates of the effects on 11th grade CST scores are sufficiently precise to rule out effects as small as 0.03 to 0.05 standard deviations; the estimates of the effects on persistence, graduation, and math course taking are sufficiently precise to rule out effects as small as 3 to 5 percentage points. Thus, if there are undetected systematic effects of failing an exit exam, they are very modest in size.
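The precision claim can be illustrated with a quick calculation from the first cell of Table 3: a point estimate of 0.005 standard deviations with a standard error of 0.012 yields a 95% confidence interval that excludes effects much beyond about ±0.03 standard deviations.

```python
# 95% confidence interval for the estimated effect of failing at least
# one CAHSEE section on 11th grade ELA CST scores (Table 3, column 1).
est, se = 0.005, 0.012
lo, hi = est - 1.96 * se, est + 1.96 * se
print(round(lo, 3), round(hi, 3))  # interval of about (-0.019, 0.029)
```

The analogous calculation for the persistence, graduation, and course-taking outcomes (standard errors of roughly 0.009 to 0.023) gives intervals a few percentage points wide.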
Our final analysis investigates the effects of CAHSEE failure among student subgroups. Table 4 reports the estimated effects of failing one or both parts of the CAHSEE separately by race/ethnicity, gender, socioeconomic status, and ELL status. In the interest of space, we report only the estimates from the models using the minimum or maximum of the CAHSEE scores as the rating score. Each set of estimates (e.g., the race/ethnicity-specific effects) comes from a model that allows the shape of the fitted model to vary
TABLE 3
Estimated Effects of Failing the CAHSEE (First Attempt) on Achievement, Persistence, Graduation, and Math Course Taken

Columns give the effect of failing: (1) at least one section; (2) the ELA section, when passing the math section; (3) the math section, when passing the ELA section; (4) both sections; (5) the ELA section, when also failing math; (6) the math section, when also failing ELA.

| Outcome | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| 11th grade ELA CST score (standardized) | 0.005 (0.012) | 0.003 (0.018) | –0.027 (0.017) | –0.005 (0.014) | –0.020 (0.020) | 0.008 (0.021) |
| n | 59,998 | 38,751 | 38,104 | 44,551 | 13,924 | 13,770 |
| Present in spring 12th grade | 0.006 (0.009) | 0.004 (0.014) | 0.013 (0.014) | 0.009 (0.014) | 0.012 (0.019) | –0.001 (0.020) |
| n | 55,418 | 33,746 | 34,372 | 43,036 | 15,270 | 15,066 |
| Graduation (by spring of 12th grade) | 0.027* (0.011) | 0.007 (0.018) | 0.019 (0.017) | 0.012 (0.016) | 0.026 (0.022) | –0.002 (0.023) |
| n | 43,855 | 26,901 | 27,310 | 33,600 | 11,691 | 11,553 |
| 11th grade math course higher than Algebra 1 | –0.017 (0.009) | –0.003 (0.013) | –0.024 (0.014) | –0.008 (0.013) | –0.003 (0.018) | –0.017 (0.018) |
| n | 63,408 | 40,394 | 40,025 | 47,612 | 15,407 | 15,194 |
| 11th grade math course higher than Geometry | –0.027** (0.009) | –0.007 (0.015) | –0.051*** (0.013) | –0.016 (0.011) | –0.016 (0.013) | –0.030 (0.016) |
| n | 63,337 | 40,350 | 39,960 | 47,558 | 15,395 | 15,190 |

Note. CAHSEE = California High School Exit Exam; ELA = English language arts; CST = California Standards Test. Cluster-corrected standard errors (clustered on the rating score) are shown in parentheses. The table reports the point estimate and standard error on the coefficients estimating the effect of failing the CAHSEE exam on each of the outcome variables for students near the passing score. Models use students from all available data from the four districts (see text for details). All models use school-by-cohort fixed effects. The estimated models regress the outcome variable on the math and ELA scale scores, the math and ELA scale scores squared, and interaction terms that allow both the linear and quadratic terms to vary on either side of the passing score. Models also include covariates to control for ELL status, gender, free-lunch eligibility status, and race/ethnicity. All models restrict the sample to students scoring within a bandwidth of 50 points from the passing score. Columns correspond to various conceptualizations of the treatment, that is, different combinations of CAHSEE failing status (see text for a complete explanation). All models exclude students who were classified as special education at any point since 8th grade.
*p < .05. **p < .01. ***p < .001.
TABLE 4
Estimated Effects of Failing the CAHSEE (First Attempt) on Achievement, Persistence, Graduation, and Subsequent Math Course Taking, by Student Characteristics

Outcomes (two columns each): ELA CST = mean 11th grade ELA CST score (standardized); Present 12 = percentage present in spring 12th grade; Grad = graduation rate (by spring of 12th grade); > Alg 1 = took 11th grade math higher than Algebra 1; > Geom = took 11th grade math higher than Geometry. Within each outcome, "one" is the effect of failing at least one section and "both" is the effect of failing both sections.

| | ELA CST, one | ELA CST, both | Present 12, one | Present 12, both | Grad, one | Grad, both | > Alg 1, one | > Alg 1, both | > Geom, one | > Geom, both |
|---|---|---|---|---|---|---|---|---|---|---|
| Race/ethnicity | | | | | | | | | | |
| White | –0.055 (0.032) | –0.008 (0.048) | –0.027 (0.023) | 0.043 (0.042) | –0.025 (0.026) | 0.001 (0.041) | –0.073** (0.023) | –0.042 (0.040) | –0.023 (0.021) | 0.048 (0.030) |
| Hispanic | 0.024 (0.017) | 0.002 (0.018) | 0.005 (0.014) | 0.009 (0.019) | 0.021 (0.017) | 0.007 (0.022) | –0.006 (0.013) | –0.013 (0.018) | –0.021 (0.013) | –0.023 (0.013) |
| Black | –0.025 (0.029) | –0.012 (0.033) | 0.042 (0.024) | –0.023 (0.032) | 0.055 (0.029) | –0.002 (0.037) | –0.030 (0.023) | 0.001 (0.030) | –0.032 (0.022) | –0.028 (0.023) |
| Asian | –0.003 (0.023) | –0.026 (0.034) | –0.025 (0.019) | –0.037 (0.033) | 0.008 (0.023) | –0.016 (0.038) | –0.014 (0.017) | –0.035 (0.030) | –0.035 (0.022) | –0.007 (0.030) |
| Pacific Islander | –0.114 (0.096) | 0.033 (0.118) | 0.075 (0.084) | 0.083 (0.119) | 0.099 (0.112) | 0.163 (0.148) | –0.040 (0.072) | 0.071 (0.109) | –0.048 (0.074) | –0.040 (0.068) |
| American Indian | –0.209 (0.197) | 0.138 (0.208) | –0.084 (0.164) | 0.244 (0.191) | 0.253 (0.185) | 0.357 (0.197) | 0.193 (0.131) | 0.148 (0.160) | 0.044 (0.131) | –0.029 (0.107) |
| F statistic | 1.666 | 0.242 | 1.496 | 1.132 | 1.386 | 0.950 | 1.958 | 0.542 | 0.155 | 1.052 |
| p value on F test | 0.139 | 0.944 | 0.187 | 0.341 | 0.226 | 0.447 | 0.081 | 0.745 | 0.979 | 0.385 |
| Gender | | | | | | | | | | |
| Male | –0.015 (0.018) | –0.012 (0.021) | 0.001 (0.014) | 0.019 (0.020) | 0.012 (0.017) | 0.020 (0.024) | –0.016 (0.013) | –0.013 (0.019) | –0.002 (0.013) | 0.007 (0.015) |
| Female | 0.021 (0.015) | 0.004 (0.019) | 0.010 (0.013) | –0.002 (0.019) | 0.037* (0.016) | 0.006 (0.022) | –0.018 (0.012) | –0.003 (0.018) | –0.050*** (0.012) | –0.038** (0.015) |
| F statistic | 2.285 | 0.335 | 0.200 | 0.641 | 1.155 | 0.172 | 0.017 | 0.120 | 7.169 | 5.007 |
| p value on F test | 0.131 | 0.563 | 0.655 | 0.423 | 0.283 | 0.678 | 0.897 | 0.729 | 0.007** | 0.025* |
| Socioeconomic status (SES) | | | | | | | | | | |
| Not free-lunch eligible | 0.011 (0.019) | –0.020 (0.024) | 0.005 (0.015) | 0.001 (0.024) | –0.001 (0.017) | 0.004 (0.024) | –0.014 (0.013) | –0.014 (0.020) | –0.032* (0.014) | 0.000 (0.020) |
| Free-lunch eligible | 0.001 (0.014) | 0.003 (0.017) | 0.005 (0.012) | 0.013 (0.017) | 0.050** (0.015) | 0.015 (0.021) | –0.020 (0.012) | –0.004 (0.017) | –0.026* (0.012) | –0.026* (0.012) |
| F statistic | 0.177 | 0.595 | 0.001 | 0.170 | 5.221 | 0.121 | 0.121 | 0.159 | 0.105 | 1.281 |
| p value on F test | 0.674 | 0.441 | 0.975 | 0.680 | 0.022* | 0.728 | 0.728 | 0.690 | 0.746 | 0.258 |
| ELL status | | | | | | | | | | |
| ELL | 0.039 (0.024) | 0.007 (0.022) | –0.017 (0.020) | –0.016 (0.021) | 0.021 (0.024) | –0.004 (0.025) | –0.003 (0.018) | 0.000 (0.020) | –0.032 (0.020) | –0.020 (0.018) |
| Native English speaker | –0.010 (0.019) | –0.009 (0.024) | 0.039* (0.016) | 0.027 (0.024) | 0.046* (0.019) | 0.010 (0.026) | –0.022 (0.015) | –0.008 (0.022) | –0.020 (0.014) | –0.018 (0.017) |
| Redesignated fluent | 0.000 (0.022) | 0.009 (0.036) | –0.035* (0.018) | 0.055 (0.033) | –0.006 (0.023) | 0.084* (0.041) | –0.026 (0.018) | 0.015 (0.034) | –0.036* (0.017) | 0.008 (0.026) |
| Initially fluent | 0.087 (0.052) | –0.098 (0.074) | –0.004 (0.045) | –0.074 (0.073) | –0.016 (0.046) | –0.067 (0.077) | 0.018 (0.036) | –0.069 (0.069) | 0.006 (0.042) | –0.025 (0.058) |
| F statistic | 1.668 | 0.657 | 3.424 | 1.726 | 1.212 | 1.626 | 0.643 | 0.421 | 0.385 | 0.307 |
| p value on F test | 0.172 | 0.579 | 0.016* | 0.159 | 0.304 | 0.181 | 0.587 | 0.738 | 0.764 | 0.820 |

Note. CAHSEE = California High School Exit Exam; ELA = English language arts; CST = California Standards Test; ELL = English language learner. Cluster-corrected standard errors (clustered on the rating score) are shown in parentheses. The table reports the point estimate and standard error on the coefficients estimating the effect of failing the CAHSEE exam on each of the outcome variables, by selected student characteristics, using the same models and samples used to estimate the effects shown in Table 3, albeit including interactions of the scale scores, the scale scores squared, and scale score–passing interactions with the relevant set of demographic group dummies. Separate models were run to test differences for each of the four demographic attributes (race/ethnicity, gender, SES, and ELL status), though models include controls for the other demographic attributes. We conducted post-estimation joint F tests on the Group × Treatment interaction variables.
*p < .05. **p < .01. ***p < .001.
one demographic characteristic (e.g., race/ethnicity) and allows the effect of failing to vary by that same demographic characteristic. Separate models are used to test for interactions by race/ethnicity, by gender, by socioeconomic status, and by ELL status. For each model, we conduct an F test of the null hypothesis that the effects are the same for each group of students.
Overall, 4 of the 40 F tests in Table 4 are significant at α = .05, double what we would expect by chance. However, the significant effects are not obviously clustered in any one subgroup. The one notable finding among these subgroups is that the effect of failing the CAHSEE on math course taking varies by gender. More specifically, the differential course-taking effect observed in Table 3 appears to be driven entirely by the effect among girls. In the last two columns of Table 4, the estimated effects of the CAHSEE on the likelihood of taking a math class higher than Geometry in 11th grade differ significantly by gender. In the model testing the effect of failing at least one section, the estimated effect of failing for girls is –0.050, compared to an estimated effect of –0.002 for boys (note that the average effect of failing at least one test among all students was –0.027; see column 1 in Table 3). Likewise, the estimated effect of failing both sections of the CAHSEE on higher-level math course taking is –0.038 (p < .01) for girls and +0.007 (p > .05) for boys.
VII. Discussion and Conclusion
In this article, we used a regression discontinuity design to estimate the effects of failing a high school exit exam in 10th grade for the population of students with scores near the passing score. We find little evidence of any systematic positive or negative effects of failing an exit exam. CAHSEE exit exam failure does appear to have some effect on subsequent course-taking patterns in the following year. When students near the cut score fail at least one section—specifically the math section—in spring of 10th grade, they appear less likely to take higher-level math courses the following year, a pattern that suggests that these students are placed in remedial or lower-level math courses to improve their likelihood of passing the exam in the future. We
are not able to estimate whether these effects on course-taking patterns are beneficial, however, because we have no common measure of subsequent math achievement. Moreover, this pattern seems to be driven exclusively by changes in female students' curriculum, although we do not know whether this is because schools respond differently to girls' failure of the math exam than they do to boys' failure or because girls respond to failing (or passing) the exam with different course choices than boys do. We find no effects of exit exam failure on subsequent ELA achievement, persistence, or graduation.
However, we caution against overgeneralization of this finding of no effects. Although the regression discontinuity design provides unbiased estimation of these effects, the estimates are strictly generalizable only to a particular subset of the population of students who fail the exit exam—those who score near the passing score. These estimates do not, for example, indicate the effect of failing the exam for students with very low skills, whose scores are far from the passing threshold. For these students, it is quite plausible that the effects may differ (positively or negatively) from those estimated here.12 Nonetheless, this subsample of students is of interest in its own right. Unlike in Massachusetts, where Papay et al. (2010) report that only 13% of students fail (implying that the density of students near the passing score is relatively low), in California there are many students whose initial scores are very close to the passing score (on each exam, roughly 25% of students fail, and roughly 15% of students have scores within a quarter of a standard deviation of the passing score, implying a high density of students near the passing score). Hence our findings pertain to a nontrivial number of students.
Furthermore, students who score near the cut score are a substantively interesting group from the perspective of understanding the mechanisms through which exit exams operate. For these students, there is likely considerable uncertainty about whether they will pass the exam the first time they take it. As a result, their initial score on the exam provides a strong signal of their likelihood of ultimately passing the test. For this group of students, then, motivational or discouragement effects of failing the exam may be particularly strong. Likewise, students scoring just below the passing score may be those who are most likely to receive additional instructional resources and support from their schools. The fact that we find little evidence of any difference in outcomes for those just above and just below the passing score, therefore, may suggest that exit exam failure has little motivational or discouragement effect on students and yields little in the way of effective intervention by schools and teachers.
Our results are consistent with some prior work—namely, Martorell's (2005) analysis of the effects of failing the Texas exit exam. They are less consistent with Papay et al.'s (2010) finding, using Massachusetts data, that failing an exit exam appears to decrease graduation rates by 8 percentage points for low-income urban students. Because most students in Massachusetts who are on the margin of passing the math exam in 10th grade have passed the ELA exam, Papay et al.'s estimates are most comparable to our estimates of the effect of failing the math exam when passing the ELA exam. Our estimates of the effect of failing at least one exam for students of low socioeconomic status, however, suggest that failing appears to increase graduation rates for these students by 5 percentage points (compared to no effect for higher-SES students, although we cannot reject the null hypothesis that the effects are the same for the low- and high-SES students). Although we do not think it prudent to make too much of the apparent positive effect of failing the math exam for low-income students, given the fact that the estimates for low- and high-SES students are not significantly different, and given the large number of hypothesis tests in Table 4, it is clear that our results do not correspond well to the Massachusetts findings. Likewise, our results do not correspond well to those in New Jersey, where students who fail the exam have lower persistence rates and lower graduation rates than those who just barely pass (Ou, 2009).
The preponderance of estimates from our models suggests that failing an exit exam in 10th grade has little or no effect on students' persistence, achievement, or graduation. Why might exit exam failure have so little effect on subsequent educational outcomes? One reason may be that the passing threshold is set relatively low. A low passing threshold means that students at the margin of passing (who are the students with the greatest incentive to work harder if they fail the 10th-grade test) are relatively low-achieving students, who have relatively low graduation rates in the absence of the exit exam policy. For such students, the exit exam is therefore often not the binding constraint on their graduation. A closer look at our data supports this argument: Among the students who score just above the passing score on the math and ELA CAHSEE exams in 10th grade, 34% leave their district by 12th grade and 24% remain in their district but do not graduate (Table 1, column 2). This suggests that for students near the margin, passing the CAHSEE is not sufficient to ensure graduation.
It is unclear what the effects on those near the margin of passing would be if the passing threshold were higher. A higher threshold would make the CAHSEE more of a constraint on graduation; this might motivate students and schools to work harder to improve the outcomes of students near the passing score, and so have positive effects. On the other hand, because the CAHSEE would be more of a binding constraint for those near the margin of passing, passing or failing the exam would be more determinative of graduation for students near the higher passing threshold, and so students who failed would be more likely not to graduate than those who passed. Our data cannot tell us which of these effects would dominate if the passing threshold were higher.
Another possible explanation for why exit exam failure might have no effect on subsequent educational outcomes is that although the threat of failing may increase student motivation and effort, and may prompt intervention by schools, sufficiently to help students pass the test, those responses may not be enough to affect achievement as measured by the state's standardized CST tests. In general, when a student fails his or her first attempt at the CAHSEE in 10th grade, California districts are expected to intervene in some way to ensure that the student has the opportunity to acquire the missing skills before he or she graduates. However, a great deal of variation exists in how districts ensure that these students receive support; indeed, based on our conversations with district officials in the four districts studied, a great deal of variation also seems to exist across schools within the same district. Each of the four districts reports making use of some sort of school-level coordinator who is responsible for monitoring student intervention plans and communicating those plans to the district. In addition, schools in each district have created CAHSEE-targeted remedial courses into which first-time failers are placed in 11th grade, although these courses may vary considerably across schools. Nonetheless, in some additional analyses (not shown here), we find little variation among schools and districts in students' initial and subsequent CAHSEE passing rates, conditional on students' prior test scores (which are very strong predictors of passing). This analysis suggests that variation among schools in CAHSEE preparation and remediation strategies yields little variation in student outcomes, although it is worth investigating further whether there are strategies used by some schools that lead to better outcomes for students who initially fail one or both of the tests.
Finally, it is important to note that our estimates here reflect the impact of failing the CAHSEE exam in 10th grade, given that the exit exam policy is in place. These should not be compared to estimates of the effect of the policy itself, which prior work suggests may lead to higher dropout rates (Dee & Jacob, 2006; Warren et al., 2006). We investigate this possibility in a companion article to the present one.
TABLE A1
Effect of Failing CAHSEE (First Attempt) on Student Covariates

                            Effect of failing . . .
                          At least    ELA section,   Math section,             ELA section,   Math section,
                          one         when passing   when passing    Both      when also      when also
                          section     math section   ELA section     sections  failing math   failing ELA
White                     –0.005      –0.011         –0.001          0.001     0.012          0.002
                          (0.007)     (0.009)        (0.011)         (0.007)   (0.010)        (0.009)
Hispanic                  0.012       0.029          –0.005          0.002     0.018          –0.042*
                          (0.011)     (0.016)        (0.016)         (0.014)   (0.020)        (0.020)
Black                     0.012       0.012          0.018           0.002     –0.003         0.013
                          (0.008)     (0.011)        (0.013)         (0.011)   (0.017)        (0.015)
Asian                     –0.012      –0.018         –0.007          –0.003    –0.023         0.023
                          (0.009)     (0.015)        (0.012)         (0.011)   (0.014)        (0.017)
Female                    0.012       0.014          –0.001          –0.001    –0.007         –0.001
                          (0.011)     (0.018)        (0.016)         (0.015)   (0.021)        (0.022)
Free-lunch eligible       –0.008      0.003          –0.015          0.005     0.012          –0.014
                          (0.008)     (0.012)        (0.011)         (0.010)   (0.014)        (0.013)
ELL                       –0.012      –0.004         –0.004          –0.012    –0.013         –0.034
                          (0.009)     (0.017)        (0.011)         (0.014)   (0.019)        (0.020)
9th grade ELA CST score   0.226       0.129          –1.296          –0.274    –0.574         1.935
                          (0.649)     (1.038)        (0.893)         (0.824)   (1.134)        (1.190)
10th grade ELA CST score  0.491       –1.053         0.252           0.713     1.454          –0.209
                          (0.635)     (0.949)        (0.926)         (0.732)   (1.041)        (1.086)
n                         59,025      38,259         37,514          43,673    13,527         13,398

Note. CAHSEE = California High School Exit Exam; ELA = English language arts; ELL = English language learner; CST = California Standards Test. Cluster-corrected standard errors (clustered on the rating score) are shown in parentheses. The table reports the point estimate and standard error on the coefficients estimating the effect of failing the CAHSEE exam on student covariates, using the same models and samples used to estimate the effects shown in Table 3. Columns correspond to various conceptualizations of the treatment, that is, different combinations of CAHSEE failing status (see text for a complete explanation). All models exclude students who were classified as special education at any point since 8th grade.
*p < .05.
TABLE A2
Estimated Effects of Failing CAHSEE (First Attempt), Using Alternative Bandwidths Around Pass–Fail Cut Score

11th grade ELA CST score (standardized)
  Bandwidth   Effect of failing at least one section   Effect of failing both sections
  10          –0.032 (0.026)   n = 14,924              0.029 (0.028)    n = 9,024
  20          –0.027 (0.018)   n = 28,742              0.005 (0.020)    n = 18,281
  30          –0.010 (0.015)   n = 41,104              0.003 (0.017)    n = 27,400
  40          –0.006 (0.013)   n = 51,527              0.005 (0.015)    n = 36,475
  50          0.005 (0.012)    n = 59,998              –0.005 (0.014)   n = 44,551
  60          0.004 (0.011)    n = 66,728              –0.003 (0.013)   n = 52,116

Present in spring of 12th grade
  Bandwidth   Effect of failing at least one section   Effect of failing both sections
  10          0.013 (0.020)    n = 13,994              –0.036 (0.029)   n = 9,397
  20          0.018 (0.014)    n = 26,879              0.014 (0.021)    n = 18,611
  30          0.005 (0.011)    n = 38,422              0.015 (0.017)    n = 27,503
  40          0.006 (0.010)    n = 48,025              0.015 (0.015)    n = 36,131
  50          0.006 (0.009)    n = 55,418              0.009 (0.014)    n = 43,036
  60          0.003 (0.009)    n = 61,116              0.007 (0.013)    n = 49,531

Graduation (by spring of 12th grade)
  Bandwidth   Effect of failing at least one section   Effect of failing both sections
  10          –0.028 (0.025)   n = 11,115              0.006 (0.036)    n = 7,242
  20          0.012 (0.017)    n = 21,271              0.015 (0.024)    n = 14,417
  30          0.006 (0.014)    n = 30,350              0.019 (0.020)    n = 21,379
  40          0.020 (0.013)    n = 37,949              0.021 (0.018)    n = 28,172
  50          0.027* (0.011)   n = 43,855              0.012 (0.016)    n = 33,600
  60          0.025* (0.011)   n = 48,533              0.006 (0.015)    n = 38,908

11th grade CST higher than Algebra 1
  Bandwidth   Effect of failing at least one section   Effect of failing both sections
  10          –0.052* (0.020)  n = 14,430              –0.026 (0.024)   n = 8,677
  20          –0.018 (0.014)   n = 27,859              –0.036* (0.017)  n = 17,577
  30          –0.022 (0.012)   n = 39,908              –0.030* (0.014)  n = 26,434
  40          –0.023* (0.010)  n = 50,068              –0.018 (0.013)   n = 35,237
  50          –0.024* (0.009)  n = 58,388              –0.013 (0.012)   n = 43,160
  60          –0.021* (0.009)  n = 65,015              –0.005 (0.011)   n = 50,578

11th grade CST higher than Geometry
  Bandwidth   Effect of failing at least one section   Effect of failing both sections
  10          –0.026 (0.017)   n = 14,430              0.047 (0.024)    n = 8,677
  20          –0.003 (0.012)   n = 27,859              0.006 (0.017)    n = 17,577
  30          –0.003 (0.010)   n = 39,908              0.016 (0.014)    n = 26,434
  40          –0.007 (0.009)   n = 50,068              –0.006 (0.013)   n = 35,237
  50          –0.008 (0.009)   n = 58,388              –0.004 (0.012)   n = 43,160
  60          –0.006 (0.008)   n = 65,015              –0.009 (0.012)   n = 50,578

Note. CAHSEE = California High School Exit Exam; ELA = English language arts; CST = California Standards Test. Columns correspond to varying bandwidths around the cut score of 350 (a bandwidth of 50 was used throughout the rest of the article). Results are shown only for minimum and maximum scores as treatments. The table reports the point estimate and standard error on the coefficients estimating the effect of failing the CAHSEE exam on each of the outcome variables for students near the cut score. Models use students from all four districts in the 2004, 2005, 2006, 2007, and 2008 cohorts provided by the districts (some data not available for all cohorts in some districts—see Footnote 11). All models use school-by-cohort fixed effects. The estimated models regress the outcome variable on the centered math and ELA scale scores, the centered math and ELA scale scores squared, and interaction terms that allow both the linear and quadratic forms to vary on either side of the cut score. Models also include covariates to control for ELL status, gender, free-lunch eligibility status, and race/ethnicity. Standard errors are clustered at the rating score. All models exclude students who were classified as special education at any point since 8th grade.
*p < .05.
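The estimating equation described in the note to Table A2 can be written compactly. The following is a sketch of that specification; the symbol names (F, M, E, f, g, X, theta) are our own labels, not notation taken from the article:

```latex
% Sketch of the RD specification described in the Table A2 note.
% Symbol names are illustrative labels, not the authors' notation.
\begin{equation*}
Y_{i} = \beta F_{i}
  + f\!\left(M_{i}, E_{i}\right)
  + F_{i}\, g\!\left(M_{i}, E_{i}\right)
  + X_{i}'\gamma + \theta_{s(i),c(i)} + \varepsilon_{i},
\end{equation*}
```

where \(F_{i}\) indicates scoring below the cut score of 350, \(M_{i}\) and \(E_{i}\) are the centered math and ELA CAHSEE scale scores, \(f\) and \(g\) are quadratic polynomials (so the linear and quadratic terms may differ on either side of the cut), \(X_{i}\) collects the ELL, gender, free-lunch eligibility, and race/ethnicity covariates, and \(\theta_{s(i),c(i)}\) are school-by-cohort fixed effects, with standard errors clustered on the rating score.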
Notes
1. California—the source of the data used in this study—provides examples of policy rhetoric presuming a link between exit exam policies and student and school motivation. California state superintendent of public instruction Jack O'Connell claims that schools hold students to higher expectations as a result of the exam (News Release 09-103, July 8, 2009, "Schools Chief Jack O'Connell Announces Latest High School Exit Exam Results Show Steady Progress in Passage Rate," http://www.cde.ca.gov/nr/ne/yr09/yr09rel103.asp) and that the exit exam also motivates students: "As a result of the exit exam, students are working harder, learning more and persevering in school" (State of Education Address, February 6, 2007, annual address from Superintendent O'Connell on the status of education in California, http://www.cde.ca.gov/eo/in/se/yr07stateofed.asp). Likewise, Mary Bergen, the president of the California Teachers Association, has claimed that having students take an exit exam early in high school gives students an idea of the work expected of them and allows schools to steer resources to where they are most needed.
2. Numerous studies have discussed the motivational effects of high-stakes testing and accountability systems more generally on teachers and schools, focusing largely on how these actors respond to incentives and sanctions. See Booher-Jennings (2005) for a list of such studies. When stakes for a single exam are high, teachers and schools may feel compelled to dedicate more time to preparing for its administration. If teachers opt to "teach to the test" in place of more authentic teaching, students may learn only the material to pass that particular exam but miss out on the broader set of skills they may need in their future endeavors (Pedulla et al., 2003; Popham, 2001). If schools respond in this way to students who fail the initial exit exam, then we might expect to see these students' subsequent scores on the exit exam improve, but such improvements might be small (i.e., just large enough to reach the test's passing score) or might not be mirrored in achievement as measured by a different standardized assessment.
3. Warren, Jenkins, and Kulick (2006) also address why their results are inconsistent with prior analyses based on National Education Longitudinal Study (NELS) data that tend to show no significant relationships between exit exams and high school completion measures. When they confine their sample to state-years prior to 1992 (i.e., when the final NELS-88 cohort should have graduated from high school), Warren et al. also find no significant associations. They suggest that many states have moved to more rigorous exit exams since that time and generally conclude that "the consequences of state HSEEs have changed in important ways since the NELS-88 cohort moved through the secondary school system" (p. 146).
4. One exception is Jacob (2001), who uses NELS data to examine whether exit exam policies lead to increased student achievement. He does not find any evidence that exit exam policies affect student achievement. Marchant and Paulson (2005), in contrast, report that states with exit exam requirements have lower average SAT scores, although this association is not clearly causal. Moreover, neither of these studies investigates the effect of failing an exam on student achievement.
5. See http://www.leginfo.ca.gov/pub/99-00/bill/sen/sb_0001-0050/sbx1_2_bill_19990329_chaptered.html.
6. However, English learners must be permitted to take the California High School Exit Exam (CAHSEE) with certain test variations if those variations are used regularly in the classroom. For example, English learners must be permitted to hear the test directions in their primary language or to use a translation glossary.
7. Testing dates are centrally scheduled by individual districts, and the exam is administered over the course of 2 days (1 day for each portion). The test is untimed, although students typically complete each section in 3 to 4 hours.
8. The first class for whom the CAHSEE requirement was binding was the graduating class of 2006, who first took the test in 10th grade in spring 2004, at which time they knew it to be a graduation requirement. A minor complication in the CAHSEE timeline arose for this class. In February of their senior year (2006), a lawsuit (Valenzuela v. O'Connell) was filed on behalf of high school students who had not yet passed the CAHSEE exam. As a result, for 12 days of their final semester, students in the class of 2006 were relieved by an Alameda Superior Court judge of their requirement to pass the CAHSEE. This decision was quickly overturned, however, by the California Supreme Court. Nonetheless, one may worry that the debate surrounding the legality of the CAHSEE in the spring of 2006 may have led to some ambiguity for students about whether the CAHSEE would be enforced. However, seniors in the class of 2006 had already completed their final administration of the CAHSEE before the 12 days when the CAHSEE requirement was temporarily suspended. The results of our analyses are substantively unchanged if we exclude the class of 2006 from the analyses.
9. Each of these outcomes is conditional on whether a student took any math course in 11th grade. Supplemental analyses (not shown), however, indicate no effect of failing the exit exam on whether a student took a math course in 11th grade.
10. We construct this variable using data on students' GPA, California Standards Test (CST) scores, and CAHSEE scores. Students with any evidence that they were enrolled and attended school in a given semester—specifically, a nonzero GPA, a nonmissing CST score, or a nonmissing CAHSEE score—are coded as present in the district in that semester, and as not present otherwise. For students who leave the district and then return (i.e., are coded as present in some later term), we retroactively code them as present for all terms prior to the final one in which they are observed to be present. In addition, students who received a diploma from the district in an earlier semester are coded as present so that they are not counted among leavers or dropouts in our persistence models (i.e., our "present" indicator is coded 0 for anyone who left the district without returning prior to receiving a diploma and is coded 1 for those who have graduated or are still enrolled as of a given semester).
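As an illustration, the coding rule described in this footnote can be sketched as follows. This is a hypothetical implementation; the function and field names are ours, not the districts' actual data schema:

```python
# Hypothetical sketch of the "present in district" coding rule in Footnote 10:
# a student is observed-present in a semester if any activity record exists
# (nonzero GPA, nonmissing CST score, or nonmissing CAHSEE score); presence is
# back-filled through the last observed semester, and graduates are coded
# present from the graduation semester onward.

def code_presence(semesters, graduated_index=None):
    """semesters: list of dicts with keys 'gpa', 'cst', 'cahsee'
    (None = missing). graduated_index: index of the semester in which the
    student received a diploma, if any. Returns a list of 0/1 presence codes."""
    observed = [
        1 if (s["gpa"] not in (None, 0)       # nonzero GPA
              or s["cst"] is not None          # nonmissing CST score
              or s["cahsee"] is not None)      # nonmissing CAHSEE score
        else 0
        for s in semesters
    ]
    # Back-fill: a student who leaves and returns is retroactively coded
    # present in all terms up to the last one in which they are observed.
    if 1 in observed:
        last = max(i for i, v in enumerate(observed) if v == 1)
        for i in range(last + 1):
            observed[i] = 1
    # Graduates are coded present thereafter, so they are never counted
    # among leavers or dropouts in the persistence models.
    if graduated_index is not None:
        for i in range(graduated_index, len(semesters)):
            observed[i] = 1
    return observed
```

For example, a student observed in semesters 1 and 3 but not 2 is back-filled as present in semester 2, while a trailing semester with no records remains 0 unless the student has graduated.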
11. We have five cohorts with outcome data in two of the four districts. In one of the remaining districts, we have graduation data for only two cohorts but other outcome data for five cohorts. The fourth district provided outcomes for only three cohorts.
12. In fact, in preliminary analyses comparing outcomes of low-skill students in cohorts who were and were not required to pass the CAHSEE, we find a substantial negative effect of the CAHSEE policy on graduation rates for low-skill students. These analyses are the subject of a companion article to this one.
References
Amrein, A. L., & Berliner, D. C. (2002). An analysis of some unintended and negative consequences of high-stakes testing (EPSL-0211-125-EPRU). Tempe: Arizona State University, Education Policy Studies Laboratory, Education Policy Research Unit.
Ames, C. (1984). Competitive, cooperative, and indi-vidualistic goal structures: A cognitive-motivational analysis. In C. Ames & R. Ames (Eds.), Research on motivation in education (Vol. 1, pp. 177–207). San Diego, CA: Academic Press.
Bishop, J. H. (2006). Drinking from the fountain of knowledge: Student incentive to study and learn—Externalities, information problems and peer pressure. In E. A. Hanushek & F. Welch (Eds.), Handbook of the economics of education (Vol. 2, pp. 909–944). Amsterdam, Netherlands: Elsevier.
Bishop, J. H., & Mane, F. (2001). The impacts of minimum competency exam graduation requirements on high school graduation, college attendance and
early labor market success. Labour Economics, 8, 203–222.
Blau, T. H. (1980). Minimum competency testing: Psychological implications for students. In R. Jaeger & C. Tittle (Eds.), Minimum competency achievement testing (pp. 172–181). Berkeley, CA: McCutchan.
Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas accountability system. American Educational Research Journal, 42, 231–268.
Carnoy, M., & Loeb, S. (2002). Does external accountability affect student outcomes? A cross-state analysis. Educational Evaluation and Policy Analysis, 24, 305–331.
Catterall, J. S. (1986). Dropping out of schools as a process: Implications for assessing the effects of competency tests required for graduation. Effects of Testing Reforms and Standards (Working paper). Los Angeles: University of California, Los Angeles.
Catterall, J. S. (1989). Standards and school dropouts: A national study of tests required for high school graduation. American Journal of Education, 98, 1–34.
Center on Education Policy. (2004). State high school exit exams: States try harder but gaps persist. Washington, DC: Author.
Center on Education Policy. (2005). How have high school exit exams changed our schools? Some per-spectives from Virginia and Maryland. Washington, DC: Author.
Center on Education Policy. (2009). State high school exit exams: Trends in test programs, alternate pathways, and pass rates. Washington, DC: Author.
Covington, M. V. (2000). Goal theory, motivation, and school achievement: An integrative review. Annual Review of Psychology, 51, 171–200.
Dee, T. S., & Jacob, B. A. (2006). Do high school exit exams influence educational attainment or labor market performance? (NBER Working Paper No. 12199). Cambridge, MA: National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w12199
Glass, G. V. (1978). Minimum competence and incompetence in Florida. Phi Delta Kappan, 59, 602–605.
Greene, J. P., & Winters, M. A. (2004). Pushed out or pulled up? Exit exams and dropout rates in public high schools. New York, NY: Manhattan Institute.
Griffin, B. W., & Heidorn, M. H. (1996). An examination of the relationship between minimum competency test performance and dropping out of high school. Educational Evaluation and Policy Analysis, 18, 243–252.
Grodsky, E., Warren, J. R., & Kalogrides, D. (2009). State high school exit examinations and NAEP long-term trends in reading and mathematics, 1971–2004. Educational Policy, 23, 589–614.
Heubert, J. P., & Hauser, R. M. (1998). High stakes: Testing for tracking, promotion, and graduation. Washington, DC: National Academy Press.
Imbens, G. W., & Lemieux, T. (2008). Regression discontinuity designs: A guide to practice. Journal of Econometrics, 142, 615–635.
Jacob, B. A. (2001). Getting tough? The impact of high school graduation exams. Educational Evaluation and Policy Analysis, 23, 99–121.
Kurlaender, M., Reardon, S. F., & Jackson, J. (2008). Middle school predictors of high school achievement in three California school districts. Santa Barbara: California Dropout Research Project. Retrieved from http://www.cdrp.ucsb.edu/dropouts/pubs_reports.htm
Lee, D. S., & Card, D. (2008). Regression discontinu-ity inference with specification error. Journal of Econometrics, 142, 655–674.
Ludwig, J., & Miller, D. (2007). Does Head Start improve children's life chances? Evidence from a regression discontinuity design. Quarterly Journal of Economics, 122, 159–208.
Madaus, G. F., & Clarke, M. (2001). The adverse impact of high-stakes testing on minority students: Evidence from one hundred years of test data. In G. Orfield & M. L. Kornhaber (Eds.), Raising standards or raising barriers? Inequality and high-stakes testing in public education (pp. 85–106). New York, NY: Century Foundation Press.
Marchant, G. J., & Paulson, S. E. (2005). The relationship of high school graduation exams to graduation rates and SAT scores. Education Policy Analysis Archives, 13(6). Retrieved from http://epaa.asu.edu/epaa/v13n6/
Martorell, P. (2005). Do high school exit exams mat-ter? Evaluating the effects of high school exit exam performance on student outcomes. Berkeley: University of California, Berkeley.
Neal, D., & Schanzenbach, D. (2007). Left behind by design: Proficiency counts and test-based account-ability. Cambridge, MA: National Bureau of Economic Research.
Ou, D. (2009). To leave or not to leave? A regression discontinuity analysis of the impact of failing the high school exit exam. London: Centre for Economic Performance, London School of Economics.
Papay, J. P., Murnane, R. J., & Willett, J. B. (2010). The consequences of high school exit examinations for low-performing urban students: Evidence from Massachusetts. Educational Evaluation and Policy Analysis, 32, 5–23.
Pedulla, J. J., Abrams, L. M., Madaus, G. F., Russell, M. K., Ramos, M. A., & Miao, J. (2003). Perceived effects of state mandated testing programs on teaching and learning: Findings from a national survey of teachers. Boston, MA: Boston College, National Board of Educational Testing and Public Policy.
Popham, W. J. (2001). The truth about testing: An educator's call to action. Alexandria, VA: Association for Supervision and Curriculum Development.
Reardon, S. F., & Galindo, C. (2002, April). Do high-stakes tests affect students’ decisions to drop out of school? Evidence from NELS. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
Reardon, S. F., & Robinson, J. (2010). Regression discontinuity designs with multiple rating score variables (Working paper). Stanford, CA: Stanford University, Institute for Research on Education Policy and Practice.
Richman, C. L., Brown, K., & Clark, M. (1987). Personality changes as a function of minimum competency test success or failure. Contemporary Educational Psychology, 12, 7–16.
Roderick, M., & Engel, M. (2001). The grasshopper and the ant: Motivational responses of low-achieving students to high-stakes testing. Educational Evaluation and Policy Analysis, 23, 197–227.
Warren, J. R., & Edwards, M. R. (2005). High school exit examinations and high school completion: Evidence from the early 1990s. Educational Evaluation and Policy Analysis, 27, 53–74.
Warren, J. R., & Jenkins, K. N. (2005). High school exit examinations and high school dropout in Texas and Florida, 1971–2000. Sociology of Education, 78, 122–143.
Warren, J. R., Jenkins, K. N., & Kulick, R. B. (2006). High school exit examinations and state-level completion and GED rates, 1972–2002. Educational Evaluation and Policy Analysis, 28, 131–152.
Zau, A. C., & Betts, J. R. (2008). Predicting success, preventing failure: An investigation of the California High School Exit Exam. San Francisco: Public Policy Institute of California.
Authors
SEAN F. REARDON is Associate Professor of Education and (by courtesy) Sociology at Stanford University, 520 Galvez Mall, #526, Stanford, CA 94305; [email protected]. His research focuses on the causes, patterns, and consequences of educational and social inequality.
NICOLE ARSHAN is a PhD candidate in Administration and Policy Analysis at Stanford University, 520 Galvez Mall, Stanford, CA 94305; [email protected]. Her research investigates how education policy reproduces or alleviates inequality.
ALLISON ATTEBERRY is a PhD candidate in Education at Stanford University, 520 Galvez Mall, #509, Stanford, CA 94305; [email protected]. Her research focuses on education policy analysis and value-added modeling.
MICHAL KURLAENDER is Associate Professor of Education Policy at the University of California, Davis, One Shields Avenue, Davis, CA 95616; [email protected]. Her research focuses on policies and interventions at various stages of the educational attainment process; in particular, dimensions of racial/ethnic and socioeconomic inequality in achievement, and postsecondary access and completion.
Manuscript received January 29, 2009
Final revision received July 22, 2009
Accepted July 22, 2010