

Functional Validity: Extending the Utility of State Assessments

Eva L. Baker, Li Cai, Kilchan Choi, Ayesha Madni

UCLA/CRESST

Comparing Expectations for Validity Models and for New Assessments: Goals, Approaches, Feasibility, and Impact

Council of Chief State School Officers (CCSSO)

2015 National Conference on Student Assessment

San Diego, California – June 24, 2015


So What’s New? Opting Out

• Salient target
• Evidence of benefit
• Displaced anger
• Transparency


Transparency: Expectations Clear and Sensible?

• Better test transparency and utility for the public
• Specificity in the right places
• Support student learning and persistence


Today: Feature Analysis

• Argue that tests for “summative” purposes can contribute to transparency of findings to improve learning

• By conducting qualitative and quantitative analyses of tests (and interventions), the veil of obscurity can be lifted and what to teach made clear

• Feature analysis (FA) is a key element of data mining


Features for Analysis and Design of Assessments

• Identify features of items and tasks on assessments and interventions that may inform teaching, learning, and performance within and across content requirements and grade levels
• Report to improve teaching
• Use to design and revise items and tasks


How It Works

• Rate components of items/tasks
• Low-inference features
• Features recombined in tasks, items (game levels, episodes)
• Meta-tagged in data
• Performance summaries across individual features or clusters of features (see the sketch below)
• Criteria: significant difficulty, growth, or complexity
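A minimal sketch, in Python, of the meta-tagging and summary step; the item IDs, feature names, and performance values are hypothetical, not CRESST data or tooling:

    from collections import defaultdict

    # Each item carries its meta-tagged features and a performance
    # estimate (here, proportion correct); all values are made up.
    items = [
        {"id": "M4-01", "features": {"multi_step", "fraction_model"}, "p_correct": 0.42},
        {"id": "M4-02", "features": {"single_step"}, "p_correct": 0.81},
        {"id": "M4-03", "features": {"multi_step", "academic_vocab"}, "p_correct": 0.37},
    ]

    def summarize_by_feature(items):
        # Average performance over the items carrying each feature.
        by_feature = defaultdict(list)
        for item in items:
            for feature in item["features"]:
                by_feature[feature].append(item["p_correct"])
        return {f: sum(v) / len(v) for f, v in by_feature.items()}

    # Features with low average performance flag candidates for
    # "significant difficulty" under the criteria above.
    print(summarize_by_feature(items))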


Sample CRESST Features: Content, Cognition, Task, Linguistics

• Knowledge: mapped to standards and prerequisites
  – Content: topics, memory, concepts, procedures, systems
  – Representations
• Cognitive requirements and skills
  – Problem solving components
  – Communication, inferencing
  – Pattern detection, situation awareness


Task Features

• Surface requirements
  – Format
  – Stimulus content, prompts, resources, representations
  – Game mechanic or interaction engine
  – Affordances, accessibility, accommodations
  – Teamwork requirements
  – Narrative or scenario content and structure
• Response requirements (an illustrative tagging schema follows)
  – Answer formats
  – Criteria or scoring rules
  – Actions, or number and types of steps, in a response
  – Essay elements or particular demands
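One way such task-feature tags might be encoded as a data structure; the field names below are assumptions for illustration, not CRESST's actual coding scheme:

    from dataclasses import dataclass, field

    @dataclass
    class TaskFeatures:
        # surface requirements
        format: str
        resources: list[str] = field(default_factory=list)
        # response requirements
        answer_format: str = "selected"
        n_response_steps: int = 1

    item_tags = TaskFeatures(
        format="game_level",
        resources=["diagram", "prompt"],
        answer_format="constructed",
        n_response_steps=3,
    )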


Linguistic Features

• Discourse
  – Complexity or number of ideas in passage or directions
  – Length
  – Literal or inferential comprehension
  – Academic structure, domain-dependent or independent
• Syntax
  – Sentence patterns, type and variation
  – Sentence length
  – Context cues
• Word choice (see the extraction sketch below)
  – Domain-specific academic vocabulary
  – Academic language, type, density
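A simple sketch of extracting low-inference linguistic features like these from an item prompt; the academic word list is a stand-in, not CRESST's:

    import re

    ACADEMIC_WORDS = {"analyze", "estimate", "interpret", "determine"}

    def linguistic_features(text):
        # sentence count, mean sentence length, and academic word density
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[a-zA-Z']+", text.lower())
        return {
            "n_sentences": len(sentences),
            "mean_sentence_length": len(words) / max(len(sentences), 1),
            "academic_density": sum(w in ACADEMIC_WORDS for w in words)
                                / max(len(words), 1),
        }

    print(linguistic_features("Estimate the product. Then interpret your answer."))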


Problem Solving Constraints

Single: Increase Vector’s speed to reach the stars by reducing the amount of friction.

Multiple: Increase Vector’s speed to reach the stars, but not so fast that it hits the dynamite (see the sketch below).
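The single- vs. multiple-constraint distinction can be made concrete in code; the speed thresholds here are made up for illustration:

    STAR_SPEED = 8.0       # minimum speed needed to reach the stars
    DYNAMITE_SPEED = 12.0  # speed at which Vector hits the dynamite

    def single_constraint(speed):
        return speed >= STAR_SPEED

    def multiple_constraints(speed):
        return STAR_SPEED <= speed < DYNAMITE_SPEED

    print(single_constraint(13.0))     # True: one condition to satisfy
    print(multiple_constraints(13.0))  # False: the added constraint fails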


State Assessment Study - 1

Purpose:
• To predict performance from three years of standards use and attribute results
  – Rated features of content, cognition, linguistics, and tasks with high consistency
  – Tagged every test item in English Language Arts (ELA) and math for grades 3 & 4 and 7 & 8, for years 2011, 2012, and 2013, by feature
  – Features accounted for, on average, 50% of the variance in item difficulty (see the regression sketch below)
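The kind of model behind that variance figure can be sketched as a regression of item difficulty on 0/1 feature tags; the data below are simulated for illustration, not the study's:

    import numpy as np

    rng = np.random.default_rng(0)
    n_items, n_features = 200, 12
    X = rng.integers(0, 2, size=(n_items, n_features)).astype(float)  # feature tags
    true_w = rng.normal(0.0, 0.5, n_features)
    y = X @ true_w + rng.normal(0.0, 0.7, n_items)  # item difficulty (e.g., IRT b)

    X1 = np.column_stack([np.ones(n_items), X])  # add an intercept
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    r_squared = 1 - resid.var() / y.var()
    print(f"R^2 = {r_squared:.2f}")  # a value near 0.5 would mirror the result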


Assessment Study - 2

• Math, grades 4, 8, and 11
• Previous features augmented by results of student think-alouds
• A total of 70 features identified and tagged on a sample of math items


Assessment Study - 2 Findings

• 4th grade: 16 features significantly related to difficulty (10 made items harder, 6 easier)
• 8th grade: 10 features significantly related to difficulty (6 harder, 4 easier)
• 11th grade: 12 features related to difficulty (7 harder, 5 easier)


Feature Relationships

• Features across grades
  – Cognitive load
  – Representation type
  – Constructed or multiple responses
  – Guidance
  – Linguistics
• Features across ELA and math
  – Linguistics (amount)
• Able to predict item difficulty by features


Current R&D

• Continuing FA of state-level assessments, refining definitions, protocols, and training
• Sub-group and feature interactions
• FA of interventions: PBS learning games and videos, classroom instructional assignments
• Linking features of interventions and assessments to predict performance
• Developing two ways of automated feature extraction
• Designing assessments and games using features
• Engaging in FA validity studies across projects
• Looking for partners


Summary

• Feature analysis may make “summative” results useful for improvement
• Multiple purposes for tests
• Development implications for tests and for designing and predicting effects of interventions

Copyright © 2014 The Regents of the University of California. Do Not Distribute

Eva L. Baker

eva@ucla.edu


Back Up Slides


State Assessment Functional Validity

• Data to determine year-to-year cohort performance changes – instructional sensitivity?
• Summarized across specified features significantly related to high and low difficulty
• Resulting feature sets accounted, on average, for 50% of the variation in performance
• If confirmed by instructional studies, findings may guide teachers and professional development to improve test performance using invariant features
• Guide procurement for re-designed specifications

Note: Cai, Baker, Choi, & Buschang, 2014; Baker, Cai, & Choi, 2014; Choi & Madni, 2015


How It Is Done – Feature Parsing

• Elements that comprise test items, tasks, or learning requirements are defined and rated (e.g., linguistics, content elements, detailed cognitive processes)
• Each item is re-rated by pairs of trained staff for each feature (see the agreement sketch below)
• More granular and operational level of analysis than many currently used approaches
• Features tagged to items in data
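Because each item is re-rated by pairs of trained staff, rater consistency can be checked with a statistic such as Cohen's kappa; a minimal sketch with made-up binary ratings (1 = feature present):

    def cohens_kappa(r1, r2):
        # observed agreement between the two raters
        n = len(r1)
        p_obs = sum(a == b for a, b in zip(r1, r2)) / n
        # chance agreement from each rater's marginal rates
        p1, p2 = sum(r1) / n, sum(r2) / n
        p_exp = p1 * p2 + (1 - p1) * (1 - p2)
        return (p_obs - p_exp) / (1 - p_exp)

    rater_a = [1, 0, 1, 1, 0, 0, 1, 0]
    rater_b = [1, 0, 1, 0, 0, 0, 1, 0]
    print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # 0.75 here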


Purposes of Assessments

• Beyond accountability
• Policy linking accountability and improvement
• Accountability analyses interfere with guidance supporting teaching and learning
• Can improvement of learning become a useful function of large-scale tests?
• Feature analysis of item and test properties can yield useful instructional information


Mapping Features: Ontologies: Networks of Relationships

[Ontology diagram: a network of relationships linking SEL, Problem Solving, and Content]