
Page 1

DOCUMENT RESUME

ED 348 447 UD 028 818

AUTHOR Mac Iver, Douglas J.

TITLE Motivating Disadvantaged Early Adolescents To Reach

New Heights: Effective Evaluation, Reward, and

Recognition Structures.

INSTITUTION Center for Research on Effective Schooling for

Disadvantaged Students, Baltimore, MD.

SPONS AGENCY Office of Educational Research and Improvement (ED)

Washington, DC.

REPORT NO CDS-R-32

PUB DATE Jun 92

CONTRACT R117R90002

NOTE 29p.

AVAILABLE FROM Center for Research on Effective Schooling for

Disadvantaged Students, Johns Hopkins University,

3505 North Charles Street, Baltimore, MD 21218.

PUB TYPE Reports - Evaluative/Feasibility (142)

EDRS PRICE MF01/PCO2 Plus Postage.

DESCRIPTORS *Academic Achievement; Adolescents; *Disadvantaged

Youth; High Risk Students; *Incentives; Junior High

Schools; Junior High School Students; Low Achievement; Middle Schools; *Program Effectiveness; Program Evaluation; Recognition (Achievement);

*Rewards; Student Evaluation; *Student Motivation;

Urban Schools

IDENTIFIERS *Baltimore City Public Schools MD; Incentives for

Improvement Program

ABSTRACT The Incentives for Improvement Program is an

alternative student evaluation and recognition system that is

responsive (all students have a realistic chance to achieve success)

and challenging (students are not likely to succeed consistently

unless they work up to their potential). The program's goals are to

raise student performance and foster students' motivation to learn.

An evaluation was conducted to determine whether the program

accomplished its goals during its first year of implementation.

Volunteer teachers from four Baltimore City (Maryland) middle schools

participated in the program during the 1989-90 school year. The

program's effectiveness in raising students' grades, probability of

passing, intrinsic interest in their schoolwork, effort, and

self-concept of ability was evaluated by comparing end-of-school-year

outcomes for students in participating classes with those of similar

students who were enrolled in the same courses at four other

Baltimore City middle schools. To make these comparisons as precise

as possible, pre-test adjusted outcome measures in hierarchical

linear models were used. The results illustrate the substantial

positive impact of individualized, improvement-oriented reward and

recognition structures on students' grades in participating courses

and on their probability of passing these courses. There was also a

small positive effect of the program on students' self-reported

levels of effort. The program provides an evaluative process in which

educationally disadvantaged students share increased opportunities to

experience success in a challenging curriculum by earning recognition

for academic improvement and by building upon this improvement to

earn better grades and higher passing rates. Teachers' expectations

that students will succeed academically are a vital part of

motivating and effectively teaching currently low-achieving students. Included are 25 references, 3 tables, 4 figures, and an appendix providing selected questionnaire items used to measure students'

perceptions. (Author/RLC)

Page 2

U.S. DEPARTMENT OF EDUCATION
Office of Educational Research and Improvement
EDUCATIONAL RESOURCES INFORMATION CENTER (ERIC)

This document has been reproduced as received from the person or organization originating it. Minor changes have been made to improve reproduction quality. Points of view or opinions stated in this document do not necessarily represent official OERI position or policy.

THE JOHNS HOPKINS UNIVERSITY

Motivating Disadvantaged Early Adolescents

To Reach New Heights:

Effective Evaluation, Reward, and

Recognition Structures


Douglas J. Mac Iver

Report No. 32

June 1992

Page 3

National Advisory Panel

Beatriz Arias, College of Education, Arizona State University

Mary Frances Berry, Department of History, University of Pennsylvania

Anthony S. Bryk, Department of Education, University of Chicago

Michael Charleston, Department of Education, University of Colorado at Denver

Constance E. Clayton, Superintendent, Philadelphia Public Schools

Edmund Gordon (Chair), Department of Psychology, Yale University

Ronald D. Henderson, National Education Association

Vinetta Jones, Executive Director, EQUITY 2000, College Board

Hernan LaFontaine, Superintendent, Hartford Public Schools

Arturo Madrid, Director, Tomás Rivera Center

William Julius Wilson, Department of Sociology, University of Chicago

Center Liaison

Harold Himmelfarb, Office of Educational Research and Improvement

Page 4

Motivating Disadvantaged Early Adolescents to Reach New Heights:

Effective Evaluation, Reward, and Recognition Structures

Douglas J. Mac Iver

Grant No. R117 R90002

Report No. 32

June 1992

Published by the Center for Research on Effective Schooling for Disadvantaged Students, supported as a national research and development center by funds from the Office of Educational Research and Improvement, U.S. Department of Education. The opinions expressed in this publication do not necessarily reflect the position or policy of the OERI, and no official endorsement should be inferred.

Center for Research on Effective Schooling for Disadvantaged Students
The Johns Hopkins University

3505 North Charles Street
Baltimore, Maryland 21218

Page 5

The Center

The mission of the Center for Research on Effective Schooling for Disadvantaged Students (CDS) is to significantly improve the education of disadvantaged students at each level of schooling through new knowledge and practices produced by thorough scientific study and evaluation. The Center conducts its research in four program areas: The Early and Elementary Education Program, The Middle Grades and High Schools Program, the Language Minority Program, and the School, Family, and Community Connections Program.

The Early and Elementary Education Program

This program is working to develop, evaluate, and disseminate instructional programs capable of bringing disadvantaged students to high levels of achievement, particularly in the fundamental areas of reading, writing, and mathematics. The goal is to expand the range of effective alternatives which schools may use under Chapter 1 and other compensatory education funding and to study issues of direct relevance to federal, state, and local policy on education of disadvantaged students.

The Middle Grades and High Schools Program

This program is conducting research syntheses, survey analyses, and field studies in middle and high schools. The three types of projects move from basic research to useful practice. Syntheses compile and analyze existing knowledge about effective education of disadvantaged students. Survey analyses identify and describe current programs, practices, and trends in middle and high schools, and allow studies of their effects. Field studies are conducted in collaboration with school staffs to develop and evaluate effective programs and practices.

The Language Minority Program

This program represents a collaborative effort. The University of California at Santa Barbara is focusing on the education of Mexican-American students in California and Texas; studies of dropout among children of recent immigrants are being conducted in San Diego and Miami by Johns Hopkins, and evaluations of learning strategies in schools serving Navajo Indians are being conducted by the University of Northern Arizona. The goal of the program is to identify, develop, and evaluate effective programs for disadvantaged Hispanic, American Indian, Southeast Asian, and other language minority children.

The School, Family, and Community Connections Program

This program is focusing on the key connections between schools and families and between schools and communities to build better educational programs for disadvantaged children and youth. Initial work is seeking to provide a research base concerning the most effective ways for schools to interact with and assist parents of disadvantaged students and interact with the community to produce effective community involvement.

Page 6

Abstract

The Incentives for Improvement Program is an alternative student evaluation and recognition

system that is both responsive (all students have a realistic chance to achieve success) and

challenging (none are likely to succeed consistently unless they work up to their potential). The

program's goals are to raise student performance and to foster students' motivation to learn. An

evaluation study was conducted to determine whether the program had accomplished these goals

during its first year of implementation. Volunteer teachers from four Baltimore City middle

schools participated in the program during the 1989-90 school year. The program's effectiveness

in raising students' grades, probability of passing, intrinsic interest in their schoolwork, effort, and

self-concept of ability was evaluated by comparing end-of-school-year outcomes for students in

participating classes with those of similar students who were enrolled in the same courses at four

other Baltimore City middle schools. To make these comparisons as precise as possible, pre-test

adjusted outcome measures in hierarchical linear models were used. The results indicate that the

program had substantial positive effects on students' grades in participating courses and on their

probability of passing these courses. There was also a small positive effect of the program on

students' self-reported levels of effort.

Page 7

Acknowledgments

The research and development of the Incentives for Improvement program is being

accomplished with the input and assistance of Baltimore City middle school teachers and

administrators, whose contributions are substantial. I especially wish to thank Denise

Baler, Ian Cohen, Beverly Crisp, Jacqueline Gundy, Edith Harrison, Valerie Hooper,

Sheila Kolman, Eugene Lawrence, Alphonso Lewis, Andrenette Mack, Annette

Newsome, Roy Pope, Pat Tick Reed, Sylvia Rodeffer, Tressa Ronis, Elnora Saunders, and

Eleanore Sterling for their help and support.

Page 8

Introduction

Teachers provide students with feedback on their performance by assigning grades and making comments on students' quizzes, tests, and major assignments. They often give special recognition or awards to students who do well. Although feedback, evaluation, and recognition practices are supposed to help motivate students to reach higher standards of intellectual achievement, teacher reports concerning the proportion of students who are not performing up to potential suggest that typical current practices are not very effective.

There have been decades of research on student and worker motivation. The principles of goal-setting theory derived from this research suggest specific, practical alterations that teachers can make to their feedback, evaluation, and recognition practices that will make them more effective in motivating students to work hard at learning activities.

Volunteer teachers from four Baltimore City middle schools agreed to make these alterations by implementing the Incentives for Improvement Program -- a system in which feedback and recognition is based upon evaluating students in reference to specific improvement goals: individualized, short-range performance benchmarks that are challenging but reachable. This paper evaluates the effectiveness of this program in its first year of implementation.

One basic premise of goal-setting theory is that task performance is heavily influenced by the conscious goals that individuals are trying to accomplish on the task (Locke & Latham, 1984). Specific challenging goals (e.g., "beat your highest previous score") lead to better performance than specific easy or vague goals (e.g., "do your best") or no goals (Locke & Latham, 1990b). But, in order for goals to lead to better performance, students must be consciously trying to attain those goals (Erez & Zidon, 1984) and must receive feedback which allows them to judge whether they are attaining the goal (Becker, 1978; Strang, Lawrence, & Fowler, 1978).

Goal commitment is highest when goals are perceived as reachable rather than impossible and when there are clear payoffs (intrinsic or extrinsic) associated with attaining the goal (Locke, Latham, & Erez, 1988). Also, assigned goals that represent an appropriate level of challenge typically produce the same level of goal commitment and performance as participatively set or self-selected goals (Latham & Lee, 1986; Locke & Latham, 1990a).

Traditional feedback, grading, and recognition practices in schools are based on evaluation systems that compare students' performance to that of other students or to desirable absolute standards of achievement. The implicit goal assigned to students in both types of systems is to attain desirable grades and the recognitions, such as honor roll membership, that go along with such grades.

Academically disadvantaged students -- who by the middle school years are significantly behind more advantaged classmates in academic skills -- may find it impossible to attain this implicit goal even if they work very hard. Even dramatic progress can still leave disadvantaged students near the bottom of their class in comparative terms and far from the absolute levels of performance that are rewarded and recognized in their school. Once students begin to realize that their best efforts will go unrecognized and unrewarded, they become frustrated with and disengaged from school (Natriello, 1982), and their goal commitment, level of effort, and rate of progress drop precipitously.

One reason traditional evaluation practices are ineffective is that they include neither specific improvement goals nor regular improvement-focused feedback to make students accountable for making consistent, gradual progress toward high levels of achievement and understanding. Traditional practices do

Page 9

not explicitly encourage all students (regardless of current class standing and achievement level) to try to raise their performance levels by specific amounts in specific time periods.

In the Incentives for Improvement Program, students are assigned specific, short-range, individualized goals. The goals are specific because such goals are better motivators than general goals, largely because progress toward a specific goal is easier for students to detect. Similarly, the goals are close at hand (e.g., "to score 10 points higher on the next quiz") because such goals are more motivating than distant goals (e.g., "to someday become an 'A' student"). Finally, the goals are individualized, because a specific goal that is challenging but doable for one student may be unreachable (at least, in the short run) for another student and may be too easy for still another student.

Program Components

The Incentives for Improvement Program consists of three major components: (a) for each quiz or test, students are given an individualized, specific goal at which to aim; as students improve, their individualized goals are gradually raised; (b) "improvement points" are used in scoring quizzes and tests (these points provide students with clear feedback concerning their success at accomplishing the individualized goals); and (c) all students who raise their performance level receive official recognition through various types of improvement awards.

The Incentives for Improvement Program features three-quiz "rounds." At the start of each round, each student receives his or her "base score" for that round. This base score represents the student's average percent correct on recent quizzes and provides a starting point for improvement. Students are asked to try to beat this score on the next three quizzes. After each quiz, each student's quiz score is compared to his or her base score and students earn 0, 10, 20, or 30 improvement points on the quiz, depending on by how much they were able to beat their base score.

Before introducing the program to students, teachers give two or more quizzes, then determine an initial base score for each student by averaging his or her scores on those quizzes. Then, on the next three quizzes, students are able to earn improvement points as shown below:

Quiz Score                                              Improvement Pts.

5 or more points below base score                       0
4 points below to 4 points above base score             10
5 points to 9 points above base score                   20
More than 9 points above base score                     30
95% - 99% (if student's base score is above 90%)        20
100%                                                    30
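The scoring rule above can be made concrete with a short sketch. The code below is illustrative only (the function name and the assumption that scores are whole-number percent-correct values are not from the report), but the cutoffs follow the table:

```python
def improvement_points(quiz_score, base_score):
    """Improvement points (0, 10, 20, or 30) earned on a single quiz.

    Both arguments are percent-correct scores. The first branch applies the
    special rule for students whose base score is already above 90%, so that
    high performers cannot "top out."
    """
    if base_score > 90:
        if quiz_score == 100:
            return 30
        if 95 <= quiz_score <= 99:
            return 20
    diff = quiz_score - base_score
    if diff > 9:
        return 30    # more than 9 points above the base score
    if diff >= 5:
        return 20    # 5 to 9 points above the base score
    if diff >= -4:
        return 10    # 4 points below to 4 points above the base score
    return 0         # 5 or more points below the base score

# Matches the worked example in the text: a student with a base score of 65
# who scores 70 earns the same 20 points as one with a base of 75 who scores 80.
assert improvement_points(70, 65) == improvement_points(80, 75) == 20
```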

Page 10

Thus, improvement points are given in relationship to past performance. A student whose base score (average percent correct on recent quizzes) is 65 and who gets a 70 earns the same number of improvement points (20) as a student whose base score is 75 and who gets an 80. In order to earn the maximum number of improvement points, a student must beat his or her base score by more than 9 points. However, there is no danger of students "topping out" with too high a base score; students who have a base score over 90% receive 20 improvement points when they score 95-99%, and receive 30 improvement points if they get a perfect paper. Figure 1 shows how improvement points would be computed for one fictitious set of students.

Insert Figure 1

Figuring Average Improvement Points and New Base Scores

At the end of each round of three quizzes, each student's average improvement points for the round is computed (see Figure 2). Each student's new base score is then figured by averaging the student's percent correct on the last three quizzes with the current base score.

Insert Figure 2
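As a rough sketch of this bookkeeping step (reusing the improvement_points function above, and assuming that "averaging the percent correct on the last three quizzes with the current base score" means averaging the round's mean quiz score with the current base):

```python
def end_of_round_update(current_base, quiz_scores):
    """Return (average improvement points, new base score) for one round.

    quiz_scores holds the student's percent correct on the round's three
    quizzes; improvement points for each quiz are computed against the base
    score that was in effect for the round.
    """
    avg_points = sum(improvement_points(q, current_base)
                     for q in quiz_scores) / len(quiz_scores)
    round_mean = sum(quiz_scores) / len(quiz_scores)
    new_base = (round_mean + current_base) / 2
    return avg_points, new_base
```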

End-of-Round Awards. Students receive awards at the end of every round. When the program began, two types of awards were offered ("Rising Star" and "Milestone" Awards). Late in Year 1, a few of the participating teachers began offering a third type of award ("90% Club" Awards). Use of this award spread to all schools in Year 2. Also, at the beginning of Year 2, Milestone awards were replaced by "Personal Best" awards.

Rising Star Awards. A Rising Star Award is given to each student who averages at least 20 improvement points on the three quizzes in a round. Figure 2 shows the performance of 15 fictitious students during Round 1. Nine of these students earn a Rising Star Award for Round 1 because their improvement point average (as listed in the final column of the quiz score sheet) is 20 or greater.

Milestone and Personal Best Awards. Teachers keep a record of students' base scores throughout the year (see Figure 3). In Year 1, students received a Milestone Award whenever they raised their base score another 5 points beyond their initial base score (e.g., a student whose initial base score was 65 received a certificate of recognition if he or she reached a base of 70, received a different certificate later for reaching a base of 75, and so on).

Insert Figure 3

In the current program, whenever a student reaches a base score that represents a new high for him or her -- even if it is only one point higher -- the student receives a "Personal Best" award for breaking his or her past "personal record."

90% Club Awards. At the end of every round, "90% Club" awards are also now given to every student whose new base score at the end of the round is 90% or above.

In summary, at the end of every three quizzes, students have the opportunity to earn three awards. They receive Rising Star awards for averaging at least 20 improvement points on the last three quizzes, they receive Personal Best awards for setting personal "base score" records, and they receive 90% Club awards for reaching and maintaining a performance level of 90% or better.
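The award rules just summarized amount to three simple tests; the sketch below is illustrative (the function and argument names are not from the report), with the thresholds taken from the text:

```python
def end_of_round_awards(avg_improvement_points, new_base, best_prior_base):
    """List the awards a student earns at the end of a three-quiz round."""
    awards = []
    if avg_improvement_points >= 20:    # Rising Star: averaged at least 20 points
        awards.append("Rising Star")
    if new_base > best_prior_base:      # Personal Best: new record, even by one point
        awards.append("Personal Best")
    if new_base >= 90:                  # 90% Club: reached or maintained 90% or better
        awards.append("90% Club")
    return awards
```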

Page 11

Description of the Evaluation Study

The purpose of the evaluation study was to determine whether the Incentives for Improvement Program accomplished its two goals: to raise student performance and to foster students' motivation to learn. A matched control group, pretest-posttest design (Fitz-Gibbon & Morris, 1987) was used to evaluate the program's effectiveness in reaching these goals.

Volunteer teachers from four Baltimore City middle schools participated in the program during the 1989-1990 school year. When registering a class for the program, teachers were asked to describe the title of the course and the ability levels and grade levels of the students. This information was used to match participating classes with control group classes drawn from four nonparticipating middle schools. The nonparticipating schools were selected because they had student populations that were similar to those in the participating schools, according to principals' reports.

Measures

Intrinsic Value of the Subject Matter, Effort, and Self-Concept of Ability. Students in each of the participating and control classes answered questionnaire items that focused on their perceptions of intrinsic value (e.g., "How excited are you to learn about this subject matter?"), effort (e.g., "How hard are you working to learn about this subject?"), and self-concept of ability (e.g., "How good are you in this subject?"). Items had a response scale (ranging from 1 to 7 unless otherwise noted) with verbal anchors at each end of the scale (see Appendix).

Each construct was measured using multiple indicators taken from the Motivation to Learn Scale (Mac Iver, Stipek, & Daniels, 1991). A student's responses to multiple indicators of the same construct were averaged to create a composite score for that construct. These measures were collected twice in 22 matched pairs of classes: once shortly after the participating class was enrolled in the program, and once near the end of the fourth quarter of the school year (around mid-May).

Past and Current Performance. Information regarding students' grades from the year prior to the intervention was available only for seventh- and eighth-grade students (20 matched pairs of classes). For these students, a "general average" grade across all classes and all reporting periods from the pre-intervention year was computed and used as a pre-test measure. Each student's performance in a participating class (or in a matched control group class) was measured by the student's final grade in that class.

Implementation Measures

On the post-test questionnaire, students in participating classes were asked to indicate how often their teacher had used the improvement-points method for scoring quizzes and tests and also how often the teacher had given out improvement awards:

This year, some teachers at this school awarded improvement points to students on their quizzes, tests, or assignments. Students could win 10, 20, or 30 improvement points depending on how much they were able to beat their "base score." How often did the teacher give improvement points on quizzes and tests in this class?

Never                              Very Often
  1        2        3        4        5

How often did the teacher give out special improvement awards (such as certificates, buttons, pencils, or bumper stickers) to students who were showing good progress?

Never                              Very Often
  1        2        3        4        5

Page 12

For both items, the mean response of the students in each class was computed. In classes where the mean response to both items was 3.5 or higher, teachers were judged to have successfully met the implementation standards for Year 1. These standards were met by 71% of the participating classes; the data analyses reported here focus just on these classes. Similar (but somewhat weaker) effects are found in analyses that include all participating classes.
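Spelled out as code, the implementation screen is a class-level mean on two items. This pandas sketch uses hypothetical column names ('class_id', 'points_item', 'awards_item') to show the rule; it is not the procedure actually used in the study:

```python
import pandas as pd

def classes_meeting_standard(students: pd.DataFrame) -> pd.Series:
    """Flag classes whose mean response to BOTH implementation items is >= 3.5.

    Expects one row per student, with the two 1-5 implementation items and an
    identifier for the student's class.
    """
    class_means = students.groupby("class_id")[["points_item", "awards_item"]].mean()
    return (class_means >= 3.5).all(axis=1)
```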

Results

Analysis Plan. The conventional way of evaluating program effects on continuous outcome variables after controlling for differences in pretest status is to conduct an ANCOVA at the student level, thus ignoring the fact that students are nested within classrooms.

The recent development of hierarchical linear modeling (HLM) techniques makes it possible to estimate an ANCOVA-like model without erroneously assuming independent responses within classes (Bryk & Raudenbush, 1992). This type of HLM model (a random-intercept model with student-level covariates) takes into account the dependence among responses within classrooms and provides efficient estimates of program effects in unbalanced, nested designs. For analyses with continuous dependent variables, we used random intercept models with covariates to test for program effects.
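The report does not include estimation code, but a random-intercept model with a student-level covariate can be sketched with the statsmodels mixed-model interface. The synthetic data frame and its column names below are stand-ins for illustration, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: students nested in classrooms, a 0/1 program
# indicator, last year's general average, and a fourth-quarter grade.
rng = np.random.default_rng(0)
n_classes, n_per_class = 28, 20
class_id = np.repeat(np.arange(n_classes), n_per_class)
program = np.repeat(rng.integers(0, 2, n_classes), n_per_class)
prior_avg = rng.normal(75, 8, n_classes * n_per_class)
grade4 = (20 + 0.7 * prior_avg + 5 * program
          + np.repeat(rng.normal(0, 3, n_classes), n_per_class)   # classroom effect
          + rng.normal(0, 8, n_classes * n_per_class))            # student-level noise
students = pd.DataFrame(dict(class_id=class_id, program=program,
                             prior_avg=prior_avg, grade4=grade4))

# ANCOVA-like random-intercept model: each classroom gets its own intercept,
# and the 'program' fixed effect plays the role of the gammas reported below.
model = smf.mixedlm("grade4 ~ program + prior_avg", data=students,
                    groups=students["class_id"])
print(model.fit().summary())
```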

One of our dependent variables -- whether or not the student received a passing final grade -- was a dichotomous, qualitative measure. Program effects on this variable were evaluated using logistic regression.
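A matching sketch for the dichotomous outcome, reusing the synthetic students frame from the previous example and the passing cutoff of 70 described under the grading results:

```python
import statsmodels.formula.api as smf

# 'passed' = 1 if the final grade is 70 or above, else 0 (grades under 70
# signify failure in these schools).
students["passed"] = (students["grade4"] >= 70).astype(int)

logit_fit = smf.logit("passed ~ program + prior_avg", data=students).fit()
print(logit_fit.params)   # the 'program' coefficient is analogous to the B reported below
```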

Effects of the Incentives for Improvement Program on Students' Report Card Grades. Report cards in Baltimore middle schools contain number grades rather than letter grades. Grades ranging from 90-100 are labeled "excellent," grades between 80-89 are labeled "good," grades between 70-79 are labeled "satisfactory," and grades under 70 signify "failure."

Pretest grades. The first panel in Table 1 shows the general average (i.e., average grade across all courses and all reporting periods) that students obtained on their report cards in the year prior to the beginning of the Incentives for Improvement Program. The average grades obtained by both experimental and control group subjects were in the satisfactory range. Experimental group students had nonsignificantly lower general averages than did comparison group subjects in the year prior to the intervention.

Post-test grades. The second panel in Table 1 shows the unadjusted fourth-quarter grades obtained by students in the target courses at the end of the pilot year of the Incentives for Improvement Program. Students in the Incentives for Improvement program achieved significantly higher fourth-quarter grades in target courses than did comparison group students (random intercept model with no covariates, γ = 5.1, p [one-tail] ≤ .025, ES = +.53).

However, this comparison between unadjusted fourth-quarter grades does not take into account each student's own prior general average. A more precise comparison can be made using an adjusted fourth-quarter grade for each child which indicates how much better (or worse) that child did (in the fourth quarter of the target course) than predicted based on his or her general average from the previous year. The average (adjusted) fourth-quarter grade is six points higher for experimental than for control students (random intercept model with one covariate, γ = 6.4, p [one-tail] ≤ .005, ES = +.66). The Incentives for Improvement Program is having a substantial positive impact on students' level of performance in participating courses; students in participating classes earn adjusted final grades that are almost two-thirds of a standard deviation

Page 13

higher than students in matched nonparticipating classes.

Insert Table 1

Probability of Passing. By increasing student effort and improving performance, the Incentives for Improvement Program should increase the proportion of students who receive a passing grade. To test the effect of the program on students' probability of passing, logistic regression analyses were used to estimate the difference in probability of passing in experimental versus control classrooms after controlling for students' general average grade from the previous year. The Incentives for Improvement program had a significant positive effect on students' probability of passing, B = .72, p [one-tail] ≤ .01. Figure 4 shows the predicted probability of passing for students who had different levels of past performance. Note that the Incentives for Improvement program especially benefited students who were most at risk (those with low general averages from the previous year); 12% more of these students passed in experimental than in control classes.

Insert Figure 4
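To see how a logit coefficient of B = .72 maps onto the probabilities plotted in Figure 4: the fitted intercept and prior-average slope are not reported, so the values below are made-up placeholders (only B = .72 comes from the text), but the arithmetic is the standard inverse-logit transformation:

```python
import numpy as np

b0, b_prior, b_program = -5.3, 0.09, 0.72   # b0 and b_prior are hypothetical

def p_pass(prior_avg, in_program):
    """Predicted probability of passing under the logistic model."""
    logit = b0 + b_prior * prior_avg + b_program * in_program
    return 1.0 / (1.0 + np.exp(-logit))

# With these placeholder values, a student whose prior general average was 70
# passes about 73% of the time in a control class and about 85% of the time
# in a program class -- the sort of gap for at-risk students described above.
print(p_pass(70, 0), p_pass(70, 1))
```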

Effects of the Incentives for Improvement Program on Students' Self-Reported Effort, Intrinsic Valuing of the Subject Matter, and Self-Concept of Ability. In the Incentives for Improvement Program, students are assigned specific challenging goals and earn awards for attaining them. According to goal-setting theory (Locke & Latham, 1990a), these goals and incentives should lead students to work and study harder than students in nonparticipating classes. Panel 1 of Table 2 indicates that, holding pre-intervention effort constant, students in participating classes reported expending more effort (studying harder for quizzes and tests, working closer to their potential, etc.) than did students in control classes.¹

Insert Table 2

The Incentives for Improvement Program uses modest extrinsic rewards to recognize students for specific accomplishments. Convergent evidence from several studies (e.g., Deci, 1975; Kruglanski, Friedman, & Zevi, 1971; Lepper, Greene, & Nisbett, 1973) indicates that the inappropriate use of extrinsic rewards can have detrimental effects on students' intrinsic motivation. But, as Lepper and Hodell (1989, p. 78) have argued, when rewards are given based on task performance and convey to children clear positive information about their increasing competence at an activity, the rewards are unlikely to undermine intrinsic interest.

In fact, given the relatively low interest value of many school tasks, the recognition and reward structures in the Incentives for Improvement Program could even enhance students' interest and enthusiasm because these structures ensure that every student is provided with an appropriate level of challenge. Throughout the year, each student in the program is asked to shoot for personalized goals that are neither trivially simple nor impossibly hard. The challenge of meeting such goals can be highly intrinsically motivating (Csikszentmihalyi & Nakamura, 1989; Lepper & Hodell, 1989).

Panel 2 of Table 2 indicates that the Incentives for Improvement Program had a marginally significant positive effect (of almost one-fifth of a standard deviation) on students' perceptions of the intrinsic value of the subject matter.

It is well-established that when individuals succeed in attaining specific goals their ability

¹ Adolescents' ability perceptions have a strong impact on their effort and intrinsic valuing of the subject matter in junior and senior high school courses (Mac Iver, Stipek, & Daniels, 1991). Therefore, program effects on effort and intrinsic valuing of the subject matter were evaluated after controlling for students' pre-intervention ability perceptions.

Page 14

perceptions are increased (e.g., Bandura, 1986; Earley, 1986; Locke, Frederick, Lee, & Bobko, 1984; Mossholder, 1980). On the other hand, individuals with specific goals who do not reach those goals may develop ability perceptions that are less than or equal to those of lower-performing individuals with vague goals because the latter tend to give themselves the benefit of the doubt when evaluating their performance (Mossholder, 1980).

Thus, it was unclear whether program participants, who over time typically experience both some success and some failure in meeting the specific improvement goals, would develop higher ability perceptions than control group participants who may have only vague goals for their performance. Panel 3 of Table 2 shows that the program had a positive but marginally significant impact on students' ability perceptions.

Discussion

The results of this study show the substantial impact of individualized, improvement-oriented reward and recognition structures on students' grades, and thus on their probability of passing. The program's effectiveness is probably due to a number of factors. All students have a realistic chance of beating their individualized base scores with effort. Even small improvements do not go unrecognized. In addition, by giving students a proximal, concrete goal to strive for on every quiz, the program may motivate more students to perform up to their potential on these quizzes, with beneficial effects on their grades.

Finally, because teachers closely monitor student progress in a regular and systematic way, they become aware of the accomplishments of low-performing students. Teachers now see the small but significant gains that these students are making, and may be more likely than before to pass these students.

Consistent with the predictions of goal-setting theory, there was a significant but modest positive impact of the program on students' self-reported effort. The challenging but reachable goals assigned to students in the program apparently motivate students to increase the duration and intensity of their learning efforts.

Our findings show that the Incentives for Improvement program, in its pilot year, is accomplishing two major objectives in seeking to improve the academic performance of educationally disadvantaged students.

First, it is providing an evaluative process in which educationally disadvantaged students share increased opportunities to experience genuine success in a challenging curriculum by earning recognition for academic improvement and by building upon this improvement to earn better grades and higher passing rates. The current press for the institution of higher and even "world-class" standards in our schools will be counterproductive if it increases the likelihood that the best efforts of educationally disadvantaged students will go unrecognized and unrewarded just because these students are starting out so far behind. On the other hand, if the establishment of higher standards is accompanied by the adoption of evaluation and recognition structures that provide students with specific improvement goals and regular improvement-focused feedback and recognition, then educationally disadvantaged students may have the impetus and support they need to actually reach these higher standards over time. The Incentives for Improvement program is a step in this direction.

Second, teachers' expectations that students will succeed academically -- that all students can learn -- are a vital part of motivating and effectively teaching currently low-achieving students. Teachers are constantly advised that they must have high expectations for all students and must offer all students a high-content curriculum. Unfortunately, when

Page 15

teachers take this advice, current evaluation practices in schools almost pre-ordain the failure of students who start out far behind in their learning. These failures reinforce teacher expectations that educationally disadvantaged students are incapable of mastering advanced-level knowledge and increase the pressure on teachers to return to a watered-down curriculum.

The Incentives for Improvement program changes this picture dramatically by giving educationally disadvantaged students a genuine opportunity for success in a high-content curriculum and by providing teachers with direct evidence, weekly in their own classrooms, that their low-achieving students can indeed improve their academic performance. Thus, teachers' expectations that all students can learn are, in the Incentives for Improvement classroom, reinforced and increased by concrete evidence of academic improvement.

A Helpful Innovation or a "Bad Idea Whose Time Has Come?" Finn (1991, pp. 109-111, 222-224) has called the practice of giving low-achieving children "positive reinforcement and favorable feedback in the form of encouraging teacher comments, upbeat report cards" and so on, "a bad idea whose time has come." He sees this practice as partly responsible for parents' and youngsters' complacency and for their tendency to overrate the youngsters' accomplishments. Finn blames colleges of education and education research institutes for promoting the use of undemanding standards by encouraging the use of evaluation systems that put success within the reach of all students.

When calls to implement a more responsive evaluation system are misunderstood as calls to avoid giving students any negative feedback, these calls are indeed counterproductive. Any evaluation system that rewards low-achieving students regardless of whether they are showing progress deserves to be criticized as something that is more damaging than helpful. On the other hand, it is a grave mistake "to blacken with the same brush" rigorous progress-oriented evaluation systems such as the Incentives for Improvement program. In these systems, success and its rewards are indeed accessible to all students, but only if all students work harder and smarter and thus raise their level of performance.

In the Incentives for Improvement program, students receive realistic information concerning their current level of performance frequently, including clear negative feedback when they fail to reach their assigned improvement goals. Decades of research on student and worker motivation make it clear that it is possible, indeed essential, to implement such evaluation systems that are both responsive (every individual has a realistic chance to achieve success) and challenging (none are likely to succeed unless they work up to their potential).

A recent trend in educational assessment is to replace traditional quizzes and tests with "authentic assessments" or "true tests" that ask students to demonstrate their mastery while showing "that they can produce something of value to themselves and others -- an argument, a report, a plan, an answer or solution; a story, a poem, a drawing, a sculpture, or a performance; that they can conduct an experiment or deliver a persuasive oral presentation" (California Assessment Program, not dated). Does the Incentives for Improvement program require teachers to continue using traditional quizzes and tests? By no means. Several of the participating teachers used richer, more active and realistic types of assessments fairly often. In their feedback to students, these teachers gave each student -- in addition to their comments and ratings of the student's performance on specific dimensions (e.g., organization, content, creativity, handling of questions) -- a summary score (e.g., "summing across all dimensions graded, you received 80% of the total possible points"). These summary scores were used for computing improvement points, figuring new base scores, and determining awards.

One caveat is in order. As assessment tasks become more complex, increased achievement will be associated with increased effort and persistence only for individuals

Page 16

who know effective task strategies (Locke & Latham, 1990). For example, an improvement-oriented evaluation system may prompt students to work hard on an "authentic" task, but this effort is unlikely to pay off unless students have also received effective instruction in how to approach such a task -- for example, direct instruction in helpful, domain-specific metacognitive and problem-solving strategies (Nickerson, 1988).

Page 17

References

Bandura, A. (1986). Social foundations of thought and action: A social cognitive view. Englewood Cliffs, NJ: Prentice Hall.

Becker, L. J. (1978). Joint effect of feedback and goal setting on performance: A field study of residential energy conservation. Journal of Applied Psychology, 63, 428-433.

Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical linear models: Applications and data analysis methods. Newbury Park, CA: Sage.

California Assessment Program. (not dated). What if . . . A Vision of Educational Assessment in California (pamphlet). Sacramento, CA: California State Department of Education.

Csikszentmihalyi, M., & Nakamura, J. (1989). The dynamics of intrinsic motivation: A study of adolescents. In C. Ames and R. Ames (Eds.), Research on motivation in education, Volume 3: Goals and Cognitions. San Diego, CA: Academic Press.

Deci, E. L. (1975). Effects of externally mediated rewards on intrinsic motivation. Journal of Personality and Social Psychology, 18, 105-155.

Earley, P. C. (1986). An examination of the mechanisms underlying the relation of feedback to performance. Academy of Management Proceedings, 214-218.

Epstein, J. L., & Mac Iver, D. J. (1992). Opportunities to learn: Effects on eighth graders of curriculum offerings and instructional approaches (CDS Report 33). Baltimore, MD: Johns Hopkins University Center for Research on Effective Schooling for Disadvantaged Students.

Erez, M., & Zidon, I. (1984). Effect of goal acceptance on the relationship of goal difficulty to performance. Journal of Applied Psychology, 69, 69-78.

Finn, C. E. (1991). We must take charge: Our schools and our future. New York: Free Press.

Fitz-Gibbon, C. T., & Morris, L. L. (1987). How to design a program evaluation. Newbury Park, CA: Sage.

Kruglanski, A. W., Friedman, I., & Zevi, G. (1971). The effects of extrinsic incentives on some qualitative aspects of task performance. Journal of Personality, 39, 606-617.

Latham, G. P., & Lee, T. W. (1986). Goal setting. In E. A. Locke (Ed.), Generalizing from laboratory to field settings. Lexington, MA: Lexington.

Lepper, M. R., Greene, D., & Nisbett, R. E. (1973). Undermining children's intrinsic interest with extrinsic rewards: A test of the "overjustification" hypothesis. Journal of Personality and Social Psychology, 28, 129-137.

Lepper, M. R., & Hodell, M. (1989). Intrinsic motivation in the classroom. In C. Ames and R. Ames (Eds.), Research on motivation in education, Volume 3: Goals and Cognitions. San Diego, CA: Academic Press.

Page 18

Locke, E. A., Frederick, E., Lee, C., & Bobko, P. (1984). Effect of self-efficacy, goals, and task strategies on task performance. Journal of Applied Psychology, 69, 241-251.

Locke, E. A., & Latham, G. P. (1984). Goal-setting: A motivational technique that works. Englewood Cliffs, NJ: Prentice Hall.

Locke, E. A., & Latham, G. P. (1990a). A theory of goal setting and task performance. Englewood Cliffs, NJ: Prentice Hall.

Locke, E. A., & Latham, G. P. (1990b). Work motivation and satisfaction: Light at the end of the tunnel. Psychological Science, 1, 240-246.

Locke, E. A., Latham, G. P., & Erez, M. (1988). The determinants of goal commitment. Academy of Management Review, 13, 23-39.

Mac Iver, D. J., Stipek, D. J., & Daniels, D. H. (1991). Explaining within-semester changes in student effort in junior high school and senior high school courses. Journal of Educational Psychology, 83, 201-211.

Mossholder, K. W. (1980). Effects of externally mediated goal setting on intrinsic motivation: A laboratory experiment. Journal of Applied Psychology, 65, 202-210.

Natriello, G. (1982). Organizational evaluation systems and student disengagement in secondary schools. Final report to the National Institute of Education. St. Louis, MO: Washington University.

Nickerson, R. S. (1988). On improving thinking through instruction. Review of Research in Education, 15, 3-57.

Strang, H. R., Lawrence, E. C., & Fowler, P. C. (1978). Effects of assigned goal level and knowledge of results on arithmetic computation: A laboratory study. Journal of Applied Psychology, 63, 446-450.

Page 19

[Figure 1 -- illustration of how improvement points would be computed for one fictitious set of students; not legible in this reproduction.]

Page 20

[Figure 2 -- quiz score sheet showing the Round 1 quiz scores and improvement point averages of 15 fictitious students; not legible in this reproduction.]

Page 21

[Figure 3 -- a teacher's "Base Score Sheet" for one class period, with columns for Base Scores 1 through 10 and one row per student; the entries are not legible in this reproduction.]

Figure 3. Record of students' base scores up to the start of Round 3.

Page 22

Figure 4. Program Effects on Students' Probability of Passing. (Line graph comparing Experimental and Control students: predicted probability of passing, on a vertical axis running from 0.70 to 0.95, plotted against students' average on report card in the year prior to the program, 70 to 82. The individual curves are not legible in this reproduction.)

Page 23

Table 1

Report Card Grades: Means, γ (HLM) Estimates of Program Effects, and Effect Sizes (N = 28 classes)

Panel 1: Mean "General Average" on Report Card in Year Prior to the Intervention
    Experimental   Control   γ (HLM)   Effect Size
    74             76        -1.6      -.27

Panel 2: Mean Unadjusted Final Grade in Intervention Year
    Experimental   Control   γ (HLM)   Effect Size
    79             74        5.1***    +.53

Panel 3: Mean Adjusted Final Grade in Intervention Year^a
    Experimental   Control   γ (HLM)   Effect Size
    78             72        6.4****   +.66

^a Controlling for differences in students' general averages from the prior year.

*p < .10   **p < .05   ***p < .025   ****p < .005 (one-tailed tests)

Page 24

Table 2

Students' Self-Reported Effort, Intrinsic Value of Subject Matter, and Self-Concept of Ability: Adjusted Means, HLM Estimates of Program Effects, and Effect Sizes (N = [illegible] classes)

Panel 1: Adjusted Mean Fourth-Quarter Effort^a
    Experimental students   Control students   γ (HLM)   Effect Size
    5.51                    5.28               .23**     +.20

Panel 2: Adjusted Mean Fourth-Quarter Perceptions of the Intrinsic Value of the Subject Matter^b
    Experimental students   Control students   γ (HLM)   Effect Size
    5.31                    5.06               .25*      +.18

Panel 3: Adjusted Mean Fourth-Quarter Self-Concept of Ability^c
    Experimental students   Control students   γ (HLM)   Effect Size
    5.40                    5.24               .16*      +.13

^a Controlling for pre-test differences in effort and self-concept of ability.
^b Controlling for pre-test differences in perceptions of intrinsic value and self-concept of ability.
^c Controlling for pre-test differences in self-concept of ability.

*p < .10   **p < .05 (one-tailed tests)

Page 25

Appendix

Questionnaire Items Used to Measure Students' Perceptions

Student's Perception of the Intrinsic Value of the Subject Matter

How excited are you to learn about this subject matter? (not at all excited)...(very excited)

How much do you enjoy learning about this subject? (not much at all)...(very much)

How much do you care about learning a lot about this subject? (don't care at all)...(care very much)

How much do you like working on the assignments in this class? (not at all)...(very much)

Do you do things for fun outside of class that are related to or have something to do with what you are learning about in this class? (never)...(yes, a lot)

Student's Self-Concept of Ability

How good are you in this subject? (not good at all)...(very good)

How good do you think you are in this subject compared to other students in the class? (much worse than other students)...(much better than other students)

How often do you feel smart in this class? (never)...(very often)

How much natural ability do you have in this subject? (no ability at all)...(a lot of ability)

Student's Effort

If a student works to his or her highest potential in a class, then we could say that he or she is putting forth 100% effort to learn the subject matter. How much effort do you usually put forth in this class? (0% -- I am not trying at all)...(100% -- I am working to my highest potential)

How hard are you working to learn about this subject? (not hard at all)...(as hard as I can)

How hard do you study for tests in this class? (just enough to pass)...(whatever it takes to get a good grade)

How hard do you work in this class? (much less than most classes)...(much more than most classes)