A Framework for Evaluating the Technical Quality of Multiple Measures Used in California Community College Placement

Rachel Lagunoff, Hillary Michaels, Patricia Morris, and Pamela Yeagley

February 3, 2012

Copyright © 2012 Chancellor’s Office, California Community Colleges.


California Community Colleges are required by law (Seymour-Campbell Matriculation Act, 1986) to use multiple measures to determine student course placement (Hughes & Scott-Clayton, 2011; Venezia, Bracco, & Nodine, 2010). The term multiple measures refers to the use of measures of student readiness for coursework in addition to a single placement test score. Examples of multiple measures include information on students’ educational background or demographics, collected either from documentation such as transcripts or through student self-report via a written questionnaire or an interview with a counselor.

Each individual community college campus or district selects which placement tests to use, which multiple measures to use, and how to determine student placement in credit-bearing or remedial courses based on the data collected. This approach results in inconsistencies in placement testing and placement decisions across campuses, and in some cases may cause confusion among counselors and students about how to apply multiple measures or interpret the results (Brown & Niemi, 2007; Venezia et al., 2010). At the same time, studies have shown that course placement based on multiple measures, when applied in a valid and reliable manner, can result in more accurate placement and better student success than placement based on a test score alone (Gordon, 1999; Marwick, 2004; Noble, Schiel, & Sawyer, 2004).

In 2010, the California Community Colleges Chancellor’s Office convened a Multiple Measures Workgroup to develop a resource document on effective multiple measures and their application in the assessment and placement process. As one element of the workgroup’s activities, the Chancellor’s Office contracted with WestEd to develop a framework for evaluating multiple measures. This framework is intended to be used by the Chancellor’s Office and individual colleges as a resource for evaluating the technical quality of multiple measures used for California community college placement. Technical quality refers to the quality of evidence related to, for example, a measure’s validity, reliability, and freedom from bias and sensitivity issues. Colleges or districts may need to conduct local validity studies to determine the quality of a particular set of multiple measures that they use for their populations.

This framework includes two comprehensive charts listing current measures or types of measures used in California community colleges to place matriculating students in courses. The measures are organized into two tiers based on their level of technical quality—that is, the degree to which the data from these sources are trustworthy, reliable, and credible, and meet the purposes of community college course placement—as determined by available technical reports and research studies. In order to address the multiple measures approach as a whole, the charts include both standardized tests, such as the ACCUPLACER, and additional quantitative or qualitative measures, such as high school grades or student self-reports of course readiness. The chart entry for each measure includes a description of the measure; a list of resources related to the measure, including technical manuals and research studies regarding the technical adequacy and/or use of the measure; and guidelines or considerations for use of the measure in placing matriculating students in community college courses.

In order to determine which placement tests and multiple measures are currently used by California community colleges, WestEd staff first created a draft list based on various documents that were provided by the Chancellor’s Office, including previous lists or surveys of multiple measures. In November 2011, the Regional Educational Laboratory–West (REL West) at WestEd supported the Chancellor’s Office in administering a survey of multiple measures, which was sent to matriculation officers at the 112 community colleges in California. Of the 112 colleges, 59 (53%) responded. The results of the survey confirmed that the measures listed in the survey and in this framework are all currently being used by community colleges as multiple measures to determine course placement for matriculating students.1

The concept of validation that is assumed in this framework focuses on the ways results of assessments are interpreted and used.2 In this view, a validity study evaluates “the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests” (AERA, APA, & NCME, 1999, p. 9). Thus, neither tests nor test scores are validated; rather, “it is the claims and decisions based on the test results that are validated” (Kane, 2006, p. 60). Validation of multiple measures for college course placement is complex, and ideally incorporates the following three elements:

1. Technical quality of an individual test or other data source (e.g., student report of high school grades)—that is, degree to which the data from these sources are trustworthy, reliable, and credible, and meet the purposes of community college course placement.

2. Validity of a set of selected measures taken together—that is, degree to which the outcomes of combining results from multiple measures in different ways accurately, reliably, and consistently predict students’ success in the courses in which they were placed. (A simplified illustration of evaluating one such combination appears after this list.)

3. Validity of the placement system as a whole—that is, degree to which the placement decisions based on the multiple measures ensure students’ overall success in community college.
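As a deliberately simplified illustration of the second element, the following Python sketch applies a hypothetical rule that adds points to a placement test score when a student reports a strong high school GPA, and then checks the success rate among students placed by that combined rule. The students, cut score, and bonus points are invented for illustration and are not drawn from the cited studies; an actual validity study would use local student records and the statistical procedures described in the references.

# Illustrative sketch only: a hypothetical rule that adds points to a placement
# test score for a strong self-reported high school GPA, followed by a check of
# the success rate among students placed by that combined rule. All values
# (students, cut score, bonus points) are invented.

# Each record: (placement test score, self-reported HS GPA, succeeded in placed course)
students = [
    (72, 3.5, True), (58, 3.8, True), (61, 2.1, False),
    (80, 3.0, True), (55, 2.5, False), (66, 3.2, True),
    (49, 3.9, True), (63, 2.8, False), (70, 2.0, True),
]

CUT_SCORE = 65  # hypothetical cut score for placement into the college-level course

def adjusted_score(test_score, hs_gpa):
    """Add bonus points for a strong high school GPA (illustrative weights only)."""
    return test_score + (5 if hs_gpa >= 3.5 else 0)

# Apply the combined rule, then look only at students it would place into the course.
placed_outcomes = [succeeded for score, gpa, succeeded in students
                   if adjusted_score(score, gpa) >= CUT_SCORE]

if placed_outcomes:
    success_rate = sum(placed_outcomes) / len(placed_outcomes)
    print(f"Success rate among students placed by the combined rule: {success_rate:.0%}")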

1 Measures that are used at community colleges but for which no research or validation studies could be found are not included in this framework. However, these measures could be included in local validation studies. Examples of measures that were listed in the survey and that do not appear in this framework include time of day attending classes, college units completed, veteran status, and student perseverance with academic challenges.

2 See Hughes and Scott-Clayton (2011) for further discussion of validation in the context of community college placement methods.


As previously noted, the measures included in the following charts have been categorized into tiers based on the type and general technical quality of data they provide (that is, degree to which the data from these sources are trustworthy, reliable, and credible, and meet the purposes of community college course placement).

Tier 1 measures include standardized tests such as the ACCUPLACER and the MDTP. These measures are specifically designed to provide placement information for students in mathematics, reading, writing, and English as a second language (ESL). They have been normed on statewide or national samples of students. Technical reports on these measures provide information on content, validity, standardization, item statistics, reliability, scaling and equating, scoring, and standard setting.

Tier 2 measures include two general types: locally designed placement tests and other measures based on cognitive or noncognitive evidence, such as students’ educational background or situational characteristics. Locally designed tests are specifically designed for purposes of determining student placement into college-level courses and may collect data via multiple-choice or constructed-response items or via a writing task scored using a rubric. These tests may not have the same level of documentation of technical adequacy as the standardized placement tests do; however, their content may be better aligned to community college course content. Other student measures provide information on student preparedness through means other than a test score and can be validated using statistical methods for predictive validity,3 such as regressions or correlations (AERA, APA, & NCME, 1999). These include measures based on cognitive evidence, such as students’ previous success in academic content courses, as well as measures based on noncognitive qualitative evidence, such as students’ educational goals or demographic or personal characteristics. Cognitive evidence may be self-reported by students via a written survey or an interview with a counselor (e.g., self-report of high school GPA or highest-level mathematics or English course completed), or collected from an external source (e.g., high school transcripts). Noncognitive evidence must be self-reported, as it is based solely on personal introspection or intention (e.g., choice of major or attitude toward studying).
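As a rough sketch of what such a predictive validity analysis involves, the short Python example below computes a correlation and a simple linear regression between one Tier 2 measure (self-reported high school GPA) and the criterion (grade earned in the course into which the student was placed). The values are invented and the use of the NumPy library is an assumption for illustration; the example shows only the kind of relationship a local validation study would estimate with real data.

# Illustrative sketch only: relating one Tier 2 measure (self-reported high school
# GPA) to the criterion (grade in the placed course) with a correlation and a
# simple linear regression. Values are invented; NumPy is assumed to be installed.
import numpy as np

hs_gpa       = np.array([2.1, 2.8, 3.0, 3.3, 3.5, 3.7, 3.9, 2.5, 3.1, 3.6])
course_grade = np.array([1.0, 2.0, 2.3, 2.7, 3.0, 3.3, 3.7, 1.7, 2.3, 3.0])  # 4.0 scale

r = np.corrcoef(hs_gpa, course_grade)[0, 1]              # Pearson correlation
slope, intercept = np.polyfit(hs_gpa, course_grade, 1)   # least-squares line

print(f"Correlation between HS GPA and course grade: r = {r:.2f}")
print(f"Fitted line: predicted grade = {slope:.2f} x GPA + {intercept:.2f}")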

Especially when considering predictive validity, both Tier 1 and Tier 2 measures need to be locally validated as part of the placement system, since contexts may be different across campuses or districts. For example, different colleges may have student populations that differ demographically, or may offer different types of remedial courses intended to be taken in different sequences.

3 Predictive validity is a type of validity evidence that focuses on how well the measure(s) predict the criterion (e.g., the student’s grade in the course into which the student was placed).


Table 1. Tier 1 Measures

Standardized Tests

ACCUPLACER

Description:
• Provides information on students’ abilities in reading, writing, ESL, and/or mathematics
• Multiple-choice items and written essay
• Online test component is computer adaptive

Resources and research studies: College Board (2009); Deng & Melican (2010); Gordon (1999); Hughes & Scott-Clayton (2011); James (2006); Mattern & Packman (2009); Mellard & Anderson (2007); Pinkerton (2010)

CELSA (Combined English Language Skills Assessment)

Description:
• Provides information on students’ abilities in ESL
• Multiple-choice items

Resources and research studies: Association of Classroom Teacher Testers (2011); Isonio (1992b); Isonio (1993); Mattice (1993); Pinkerton (2010); Thompson (1994)

COMPASS

Description:
• Provides information on students’ abilities in reading, writing, ESL, and/or mathematics
• Multiple-choice items and written essay
• Online test component is computer adaptive

Resources and research studies: ACT (2007a); ACT (2007b); ACT (2008a); ACT (2008b); Goosen (2008); Hughes & Scott-Clayton (2011); Pinkerton (2010)

CTEP (College Tests for English Placement)

Description:
• Provides information on students’ abilities in reading and writing
• Multiple-choice items

Resources and research studies: Mission College (2004); Pinkerton (2010)

Guidelines/Considerations for Use of Tier 1 Measures:

• The content of the tests needs to be analyzed to determine the extent to which the tests cover the same type, range, and complexity of knowledge and skills as the courses students will be placed into.

• Cut scores are recommended to be locally validated at least every 5–7 years (see Morgan [2010] for an overview of procedures for setting cut scores for college placement).

• Different methods may be used to alleviate restriction of score range concerns. Armstrong (2001) recommends allowing students to select the courses in which they enroll and lowering placement test cut scores.

• If information from multiple measures is used to adjust the raw score or cut score (e.g., by adding points to the test score), the total resulting scores need to be validated, and rules that determine placement need to be clear and consistently applied.

• Test results should be analyzed for bias against subgroups (based on, e.g., race/ethnicity, gender, age, or disability), and use of results should be analyzed for disproportionate impact.



MDTP (Mathematics Diagnostic Testing Project)

Description:
• Provides information on students’ abilities in mathematics
• Developed by researchers as a joint project between California State University and the University of California
• Used in both high schools and community colleges
• Multiple-choice items

Resources and research studies: Armstrong (1994); California State University & University of California (n.d.); College of the Canyons (1994a); College of the Canyons (1994b); Gerachis & Manaster (1995); Isonio (1992a); Mathematics Diagnostic Testing Project (2004); Mathematics Diagnostic Testing Project & Quantitative Systems Laboratory (1999); Pinkerton (2010); Slark (1991)

Guidelines/Considerations for Use of Tier 1 Measures (continued):

• ESL students need to take the appropriate test for their needs in English language development (Academic Senate for California Community Colleges, 2004).

• Course content, assessment, and grading methods/decisions need to be consistent across courses if course grades are used as a criterion for successful placement (Armstrong, 2001).

• While predictive validity studies often use the criterion of grade obtained in the course a student is placed in, colleges may want to consider what the best criterion for successful placement might be—e.g., student success in later credit-bearing or transfer-level courses or in completion of college goals (Scott-Clayton, 2011).

Table 2. Tier 2 Measures

Locally Developed Tests

Description:
• Locally developed tests may provide information on student abilities in reading, writing, ESL, and/or mathematics
• Multiple-choice items and/or written essay

Resources and research studies: Brown & Niemi (2007); Behrman & Street (2005); Huot (1990); Matzen & Hoyt (2004); Scharton (1989); Sullivan & Nielsen (2009); Wilson & Tillberg (1994)

Guidelines/Considerations for Use of Measure:

• Technical adequacy should be determined for locally developed tests, including use of validity and reliability studies.

• If results for a locally developed test correlate highly with standardized placement test results (for example, a written essay and a multiple-choice test of writing ability), then both tests do not need to be used for placement decisions.



Student Characteristics: Educational Background

Measures and related resources/research studies:

• Length of time out of school: Armstrong (1994); Lewallen (1994); Spicer (1989)
• General proficiency in reading, writing, and/or mathematics: Frise (1996)
• Highest level of educational attainment: Lewallen (1994)
• High school GPA: Armstrong (1994); Armstrong (1995); Armstrong (2001); Clark (1981); Goosen (2008); Lewallen (1994); Spicer (1989)
• Grade in last mathematics class completed: Armstrong (2001); Clark (1981); Lewallen (1994); Spicer (1989)
• Highest-level mathematics course completed: Armstrong (2001); Lewallen (1994); Spicer (1989)
• Length of time since last mathematics course: Lewallen (1994)
• Grade in last English class completed: Armstrong (2001); Lewallen (1994); Spicer (1989)
• Highest-level English course completed: Spicer (1989)
• Number of years of high school English: Armstrong (1995); Armstrong (2001); Lewallen (1994)

Guidelines/Considerations for Use of Measure:

• Information on student educational background is typically collected via written survey—i.e., student self-report—though it can be independently verified (e.g., via transcripts).

• Self-reported information, while it has the potential of being less accurate, may serve as a valid measure as long as the information is understood to be based on the student’s views. In fact, studies have shown student self-reports and self-assessment of their academic abilities to be predictive (e.g., Armstrong, 2000) and accurate (e.g., Frise, 1996) measures.

• Studies have shown that all of the educational background characteristics listed here, except highest level of educational attainment, are predictive of later student success in college courses.



Student Characteristics: Educational Goals and Plans

Measures and related resources/research studies:

• Educational goals: Armstrong (1999); Spurling (1998)
• Number of units planned: Spurling (1998)
• Number of hours studying/doing homework: Lewallen (1994)

Guidelines/Considerations for Use of Measure:

• Information on student goals and plans can be collected via written survey or in an interview with a counselor.

• Studies have shown that the characteristics related to plans and goals listed here are predictive of later student success in college courses.

Student Characteristics: Personal and Situational Characteristics

Measures and related resources/research studies:

• Importance of college to student: Armstrong (1995); Goosen (2008); Spicer (1989)
• Importance of college to those closest to student: Armstrong (1995); Spicer (1989)
• Number of hours employed: Spicer (1989)
• Time spent reading in English: Spicer (1989)

Guidelines/Considerations for Use of Measure:

• Information on student personal and situational characteristics can be collected via a written survey or in an interview with a counselor.

• Studies have shown that the personal and situational characteristics listed here are predictive of later student success in college courses.

Guidelines/considerations common to all student characteristics measures

• If the information is collected in an interview with a counselor, both the interview procedures and the ways the information is used in placement decisions should be consistent across counselors.

• Measures of student characteristics, if used by colleges, should be locally validated along with placement tests and any other measures used to determine student placement; in other words, the full system of multiple measures should be validated (see Noble and Sawyer [1995] and Wurtz [2008] for examples using logistic regression; a simplified sketch of such an analysis appears after this list).

• Validation should include analysis of disproportionate impact on student subgroups (e.g., based on age or ethnicity). (See Armstrong [1995] for an example with high school GPA.)
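To make the idea of validating the full system more concrete, the Python sketch below fits a logistic regression predicting course success from a placement test score plus two other measures, and then compares placement rates across two subgroups as a rough check related to disproportionate impact. It is a minimal sketch on simulated data: the variables, thresholds, and the use of the NumPy and scikit-learn libraries are illustrative assumptions, not a prescription, and the cited studies should be consulted for defensible procedures.

# Illustrative sketch only: validating a set of multiple measures as a system by
# predicting course success from several measures with logistic regression, then
# comparing placement rates across subgroups as a rough disproportionate-impact
# check. All data are simulated; variable choices and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

test_score = rng.normal(60, 15, n)      # placement test score
hs_gpa     = rng.uniform(1.5, 4.0, n)   # self-reported high school GPA
units_plan = rng.integers(3, 16, n)     # number of units planned

# Simulate a course-success outcome from an assumed underlying relationship.
logit = -6 + 0.05 * test_score + 1.0 * hs_gpa + 0.02 * units_plan
success = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit the placement model on all measures together (cf. the logistic regression
# examples cited in Noble & Sawyer, 1995, and Wurtz, 2008).
X = np.column_stack([test_score, hs_gpa, units_plan])
model = LogisticRegression().fit(X, success)
print("Coefficients (test score, HS GPA, units planned):", model.coef_.round(3))

# Treat a predicted success probability of at least 0.5 as placement into the
# college-level course, then compare placement rates for two invented subgroups;
# large gaps would call for closer local review.
placed = model.predict_proba(X)[:, 1] >= 0.5
subgroup = rng.integers(0, 2, n)        # hypothetical indicator for two student groups
for g in (0, 1):
    print(f"Placement rate, group {g}: {placed[subgroup == g].mean():.0%}")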


References  

Academic Senate for California Community Colleges. (2004). Issues in basic skills assessment and placement in the California community colleges. Retrieved from http://asccc.org/sites/default/files/BasicSkillsIssuesAssessment.pdf

ACT. (2007a). COMPASS guide to effective student placement and retention in mathematics. Iowa City, IA: Author. Retrieved from http://www.act.org/compass/pdf/MathPlacementGuide.pdf

ACT. (2007b). COMPASS guide to successful ESL course placement. Iowa City, IA: Author. Retrieved from http://www.act.org/compass/pdf/ESLGuide.pdf

ACT. (2008a). COMPASS guide to effective student placement and retention in language arts. Iowa City, IA: Author.

ACT. (2008b). COMPASS course placement service interpretive guide. Iowa City, IA: Author. Retrieved from http://www.act.org/compass/pdf/CPS_Guide.pdf

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (AERA, APA, & NCME). (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Armstrong, W. B. (1994). Math placement validation study: A summary of the criterion-related validity evidence and multiple measures data for the San Diego Community College District. San Diego, CA: Research and Planning, San Diego Community College District.

Armstrong, W. B. (1995). Validating placement tests in the community college: The role of test scores, biographical data, and grading variation. Paper presented at the Association for Institutional Research 35th Annual Forum. Retrieved from ERIC database. (ED385324)

Armstrong, W. B. (1999). Explaining community college outcomes by analyzing student data and instructor effects (Doctoral dissertation). Retrieved from ERIC database. (ED426750)

Armstrong, W. B. (2000). The association among student success in courses, placement test scores, student background data, and instructor grading practices. Community College Journal of Research & Practice, 24(8), 681–695.

Armstrong, W. B. (2001). Explaining student course outcomes by analyzing placement test scores, student background data, and instructor effects. Retrieved from ERIC database. (ED454907)

Association of Classroom Teacher Testers. (2011). Combined English Language Skills Assessment: ATB test administrator’s guide for ability to benefit. Montecito, CA: Author. Retrieved from http://www.assessment-testing.com/ATBusers.doc

Behrman, E., & Street, C. (2005). The validity of using a content-specific reading comprehension test for college placement. Journal of College Reading and Learning, 35(2), 5–21.


Brown, R. S., & Niemi, D. N. (2007, June). Investigating the alignment of high school and community college assessments in California (National Center Report No. 07-3). National Center for Public Policy and Higher Education.

California State University & University of California. (n.d.). Mathematics Diagnostic Testing Project [Web page]. Retrieved from http://mdtp.ucsd.edu/

Clark, R. M. (1981). Math courses survey: Math 5a–Math analysis I. Retrieved from ERIC database. (ED211137)

College Board. (2009). ACES placement validity report for ACCUPLACER sample. New York, NY: Author. Retrieved from http://mdtp.ucsd.edu/approvalstatus.shtml

College of the Canyons. (1994a). Experience tables, predictive validity studies, and validation of placement tables for the MDTP placement tests. Santa Clarita, CA: Office of Institutional Development, College of the Canyons. Retrieved from ERIC database. (ED376916)

College of the Canyons. (1994b, October). Monitoring the disproportionate impact of MDTP tests on special populations. Santa Clarita, CA: Office of Institutional Development, College of the Canyons. Retrieved from ERIC database. (ED274858)

Deng, H., & Melican, G. (2010). An investigation of scale drift for arithmetic assessment of ACCUPLACER (Research Report No. 2010-2). New York, NY: College Board.

Frise, D. (1996, August). An analysis of student assessed skills and assessment testing: A matriculation report prepared for College of the Canyons. Santa Clarita, CA: Office of Institutional Development, College of the Canyons.

Gerachis, C., & Manaster, A. (1995). User manual. Mathematics Diagnostic Testing Project, California State University/University of California. Retrieved from http://mdtp.ucsd.edu/approvalstatus.shtml

Goosen, R. A. (2008). Cognitive and affective measures as indicators of course outcomes for developmental mathematics students at a Texas community college (Doctoral dissertation). Retrieved from ProQuest database. (ID No. 2237240941)

Gordon, R. J. (1999, January). Using computer adaptive testing & multiple measures to ensure that students are placed in courses appropriate for their skill levels. Paper presented at the Third North American Conference on the Learning Paradigm, San Diego, CA.

Hughes, K. L., & Scott-Clayton, J. (2011, February). Assessing developmental assessment in community colleges (CCRC Working Paper No. 19). New York, NY: Community College Research Center, Teachers College, Columbia University.

Huot, B. (1990). Reliability, validity, and holistic scoring: What we know and what we need to know. College Composition and Communication, 41(2), 201–213.

Isonio, S. (1992a). Implementation and initial validation of the MDTP tests at Golden West College. Huntington Beach, CA: Golden West College. Retrieved from ERIC database. (ED345782)


Isonio, S. (1992b). Mid-term assessment of English 10 students: A comparison of methods of entry into the course. Huntington Beach, CA: Golden West College. Retrieved from ERIC database. (ED350022)

Isonio, S. (1993). Implementation and initial validation of the combined English language skills assessment (CELSA) at Golden West College. Huntington Beach, CA: Golden West College. Retrieved from ERIC database. (ED353023)

James, C. L. (2006). ACCUPLACER online: Accurate placement tool for developmental programs? Journal of Developmental Education, 30(2), 2–8.

Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17–64). New York, NY: American Council on Education, Macmillan Publishing.

Lewallen, W. C. (1994). Multiple measures in placement recommendations: An examination of variables related to course success. Lancaster, CA: Antelope Valley College. Retrieved from ERIC database. (ED381186)

Marwick, J. D. (2004). Charting a path to success: The association between institutional placement policies and the academic success of Latino students. Community College Journal of Research & Practice, 28(3), 263–280.

Mathematics Diagnostic Testing Project. (2004). Consequential-related validity evidence for MDTP tests: Data submitted in response to California Community Colleges Assessment Standards (March 2001, 4th edition) for renewal of placement test instruments (rev. January 2005). California State University & University of California. Retrieved from http://mdtp.ucsd.edu/pdf/MDTPcccValidity2004.pdf

Mathematics Diagnostic Testing Project and Quantitative Systems Laboratory. (1999). Consequential and criterion related validity evidence for MDTP test: Data submitted in response to California Community Colleges Assessment Standards (3rd edition) for renewal of test instruments. San Diego, CA: California State University/University of California and Quantitative Systems Laboratory, Department of Psychology, University of California, San Diego. Retrieved from http://mdtp.ucsd.edu/pdf/MDTPcccValidity1999.pdf

Mattern, K. D., & Packman, S. (2009). Predictive validity of ACCUPLACER scores for course placement: A meta-analysis (College Board Research Report No. 2009-2). New York, NY: College Board.

Mattice, N. J. (1993, June). Predictive validity studies. Valencia, CA: Office of Institutional Development, College of the Canyons.

Matzen, R. N., Jr., & Hoyt, J. E. (2004). Basic writing placement with holistically scored essays: Research evidence. Journal of Developmental Education, 28(1), 2–34.

Mellard, D. F., & Anderson, G. (2007, December). Challenges in assessing for postsecondary readiness (Policy Brief No. 10). New York, NY: National Commission on Adult Literacy, Council for the Advancement of Adult Literacy.


Mission College. (2004). CTEP assessment validation report. Santa Clara, CA: Assessment Center, Mission College. Retrieved from http://www.missioncollege.org/student_services/assess/documents/CTEPValidationReport.pdf

Morgan, D. L. (2010, September). Best practices for setting placement cut scores in postsecondary education. Paper presented at the NCPR Developmental Education Conference: What Policies and Practices Work for Students?, Teachers College, Columbia University, New York, NY.

Noble, J. P., & Sawyer, R. L. (1995, May). Alternative methods for validating admissions and course placement criteria. Paper presented at the Thirty-fifth Annual Forum of the Association for Institutional Research, Boston, MA.

Noble, J. P., Schiel, J. L., & Sawyer, R. L. (2004). Assessment and college course placement: Matching students with appropriate instruction. In J. E. Wall & G. R. Walz (Eds.), Measuring up: Assessment issues for teachers, counselors, and administrators (pp. 297–311). Greensboro, NC: ERIC Counseling & Student Services Clearinghouse, National Center for Education Statistics.

Pinkerton, K. J. (2010). College persistence of readers on the margin: A population overlooked. Research & Teaching in Developmental Education, 27(1), 24–41.

Scharton, M. A. (1989). Writing assessment as values clarification. Journal of Developmental Education, 13(2), 8–12.

Seymour-Campbell Matriculation Act, California Education Code § 78210–78218 (1986).

Slark, J. (1991). RSC validation of mathematics placement tests (Research, Planning, Resource Development Report). Santa Ana, CA: Department of Institutional Research, Rancho Santiago College. Retrieved from ERIC database. (ED341418)

Spicer, S. L. (1989). Paths to success: Volume one: Steps toward refining standards and placement in the English curriculum. Glendale, CA: Planning & Research Office, Glendale Community College. Retrieved from ERIC database. (ED312021)

Spurling, S. (1998, November). Progress and success of English, ESL and mathematics students at City College of San Francisco (Report No. SS1098). San Francisco, CA: Office of Research, Planning and Grants and Office of Matriculation and Assessment, City College of San Francisco.

Sullivan, P., & Nielsen, D. (2009). Is a writing sample necessary for "accurate placement"? Journal of Developmental Education, 33(2), 4–13.

Thompson, D. E. (1994). Combined English Language Skills Assessment (CELSA): Analysis of disproportionate impact. Huntington Beach, CA: Golden West College. Retrieved from ERIC database. (ED371799)


Venezia, A., Bracco, K. R., & Nodine, T. (2010). One-shot deal?: Students' perceptions of assessment and course placement in California's community colleges. San Francisco, CA: WestEd.

Wilson, K. M., & Tillberg, R. (1994). An assessment of selected validity-related properties of a shortened version of the Secondary Level English Proficiency Test and locally developed writing tests in the LACCD context. Retrieved from ERIC database. (ED381559)

Wurtz, K. (2008). A methodology for generating placement rules that utilizes logistic regression. Journal of Applied Research in the Community College, 16(1), 52–58.