Concepts and Measurement

Post on 05-Dec-2014


Concepts

creating explanations

building concepts

measurement

creating explanations

• Independent Variable • Explanatory Variable • Driving Force

• Dependent Variable • Outcome

X → Y

units

• Unit of analysis: the level at which the dependent variable is measured; the case

• Ex: Nation (portion Christian)

• Unit of observation: the unit on which data are being gathered

• Ex: Individual within the nation (data on that individual’s belief)
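The unit-of-analysis vs. unit-of-observation distinction can be sketched in a few lines of Python (the data and names here are hypothetical, not from the slides): individual-level observations are aggregated up to a nation-level measure, the portion Christian.

```python
from collections import defaultdict

# Unit of observation: the individual (data on that individual's belief)
people = [
    {"nation": "A", "christian": True},
    {"nation": "A", "christian": False},
    {"nation": "A", "christian": True},
    {"nation": "B", "christian": False},
    {"nation": "B", "christian": True},
]

# Aggregate to the unit of analysis: the nation (portion Christian)
counts = defaultdict(lambda: [0, 0])  # nation -> [christian count, total]
for p in people:
    counts[p["nation"]][0] += p["christian"]
    counts[p["nation"]][1] += 1

portion_christian = {n: c / t for n, (c, t) in counts.items()}
print(portion_christian)  # A: 2/3, B: 1/2
```

The dependent variable lives at the nation level even though every row of data was gathered about an individual.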

what is a concept?

• Gerring: “an alignment among three intertwined components: the term (a linguistic label comprised of one or a few words), the phenomena to be defined (the referents, extension, or denotation of a concept), and the properties or attributes that define those phenomena (the definition, intension, or connotation of a concept)” (39)

How do we build (background/systematized) concepts?

• process of adjustment to maximize fulfillment of the criteria for a good concept (Gerring, see table)

• “successful definition involves the identification of attributes that provide necessary and sufficient conditions for locating examples of the term”

• avoid homonymy, synonymy, instability

• strategies

• choose a definition from a classic work

• find a causal-explanatory understanding of a term (term defined by what explains it or what it explains)

• intellectual history of the concept (genealogy is a version of this)

• focus on specific definitional attributes, grouping attributes that are similar

“ideology”

from concept to indicator

Measurement: sensitive, valid, reliable

• Sensitivity: extent to which cases are homogeneous within each value of your variable; precision in measures

• Keep variable as sensitive as possible

• But keep in mind limits of measurement method

There is a sensitivity-reliability trade-off in self-reports with rating scales. People can generally distinguish up to about 7 scale positions; offering more than that introduces greater error.

measurement levels

• nominal: variation in kind or type; categories
• Example: married, living with partner, never married, separated, divorced, widowed

• ordinal: variation in degree along a continuum; rank order
• Example: support for Obama (strongly favor, favor, neutral, oppose, strongly oppose)

• interval: variation in degree along a continuum; relative positions on a continuum
• Example: year in which event occurred

• ratio: variation in degree along a continuum; numbers signify absolute position on the continuum and the number zero is meaningful
• Example: number of battlefield deaths; age of a person
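A quick Python sketch (illustrative values of my own, not from the slides) shows which comparisons are meaningful at each level: equality only for nominal, rank order for ordinal, differences for interval, and ratios only when zero is meaningful.

```python
# Nominal: categories -- only equality/inequality is meaningful
marital_a, marital_b = "married", "divorced"
print(marital_a == marital_b)  # False

# Ordinal: rank order is meaningful, but distances between ranks are not
support = {"strongly oppose": 1, "oppose": 2, "neutral": 3,
           "favor": 4, "strongly favor": 5}
print(support["favor"] > support["neutral"])  # True

# Interval: differences are meaningful; the zero point is arbitrary
year_a, year_b = 1989, 1945
gap = year_a - year_b
print(gap)  # 44 years between the two events

# Ratio: zero is meaningful, so ratios make sense
age_a, age_b = 40, 20
print(age_a / age_b)  # one person is twice as old as the other
```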

Measurement: sensitive, valid, reliable

• Valid: extent to which what you measure is what you say you measure

• Face Validity: plausible on its “face”

• Content Validity: extent to which all components of a systematized concept are measured in the indicator; matching a list of attributes

• Criterion-related Validity: extent to which an indicator matches criteria; predictive, concurrent

• Construct Validity: extent to which what you measure behaves as it should within a system of related concepts; an attribute of a measure/indicator

• Convergent/Divergent Tests

• Internal Validity: extent to which a research design yields strong evidence of causality; an attribute of the research design

• External Validity: extent to which a research design yields findings that generalize; an attribute of the research design

Measurement: Construct Validity

• Construct Validity: extent to which what you measure behaves as it should within a system of related concepts; an attribute of a measure/indicator

• Nomological Net
• does not have to be causal, could be correlational
• should be completely non-controversial, established in literature or common sense
• example: GRE scores and first-year grad school GPA

Measurement: convergence and divergence

• Convergent: alternative measures of a concept should be strongly correlated

• Divergent: different concepts should not correlate highly with each other
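Convergent and divergent tests come down to comparing correlations. A minimal sketch, with a hand-rolled Pearson correlation and hypothetical scores (the data and variable names are my own):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two hypothetical alternative measures of the same concept (e.g. liberalism)
measure_a = [1, 2, 3, 4, 5, 6]
measure_b = [2, 2, 4, 4, 6, 5]

# An indicator of a different concept, measured on the same cases
other_concept = [5, 1, 4, 2, 6, 3]

print(pearson_r(measure_a, measure_b))      # high  -> convergent evidence
print(pearson_r(measure_a, other_concept))  # low   -> divergent evidence
```

High correlation between the two alternative measures is convergent evidence; near-zero correlation with the indicator of a different concept is divergent evidence.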

Measurement: sensitive, valid, reliable

• Reliable: extent to which a measure is free from random error; measures are repeatable, consistent, dependable

• Assessments:

• test-retest

• interobserver; intercoder

• split-half

• Cronbach’s Alpha

True score theory & reliability

Observed score = True ability + Random error

X = T + e

• The error term has two components:
• Random Error: the more reliable the measure, the less the random error
• Systematic Error: the more valid the measure, the less the systematic error
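True score theory can be simulated directly: generate true abilities T, add fresh random error e on each administration, and the test-retest correlation falls as the random error grows. This is a sketch with made-up parameters (means, SDs, and sample size are mine):

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sum((x - mx) ** 2 for x in xs) ** 0.5 *
                  sum((y - my) ** 2 for y in ys) ** 0.5)

random.seed(0)
n = 2000
true_ability = [random.gauss(100, 15) for _ in range(n)]  # T

def administer(error_sd):
    """One administration of the test: observed X = T + fresh random error e."""
    return [t + random.gauss(0, error_sd) for t in true_ability]

# Small random error -> high test-retest correlation (reliable measure)
r_reliable = pearson_r(administer(5), administer(5))

# Large random error -> low test-retest correlation (unreliable measure)
r_noisy = pearson_r(administer(30), administer(30))

print(r_reliable, r_noisy)
```

Note that systematic error (a constant bias added to every score) would leave these correlations untouched, which is why reliability alone cannot establish validity.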

Assessing reliability

• test-retest: about consistency of response to a treatment; assumes the characteristic being measured is stable over time

• interobserver; intercoder: two or more observe or code; can be done as a test on a random subset of cases

• alternative-form: measure the same attribute more than once using two different measures of the same concept (e.g. to measure liberalism, use two different sets of questions on the same respondents at two different times)

• split-half: two measures of same concept applied at the same time (e.g. survey of political opinions; ten questions related to liberalism, take two different sets of questions as different measures of liberalism)

• Cronbach’s Alpha: the average of all split-half coefficients across all possible combinations; a statistical technique...
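Cronbach's alpha is straightforward to compute from an item-by-respondent score matrix using the standard variance formula, alpha = k/(k-1) · (1 - Σ item variances / variance of totals). The item responses below are hypothetical:

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    sum_item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - sum_item_vars / statistics.variance(totals))

# Hypothetical responses to four liberalism items (rows = items, cols = respondents)
items = [
    [3, 4, 3, 3, 2, 4, 4, 1],
    [3, 4, 4, 3, 2, 5, 4, 1],
    [3, 5, 4, 2, 2, 4, 5, 1],
    [3, 4, 4, 3, 3, 4, 4, 2],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

Because these items track each other closely, alpha comes out high; items that measured unrelated things would drag it down.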

Elkins example

graded v. dichotomous measures of democracy

Concept & Measurement Exercise

describe each level for your concept

identify a scale (nominal, ordinal, etc.)

identify strategies for considering the reliability and validity of your measure

is your concept embedded in any particular ontological or epistemological understanding of the world?