Factor Analysis


EXPLORATORY FACTOR ANALYSIS (EFA)

Learning Objectives
1. Understand what the factor analysis technique is and its applications in research
2. Discuss exploratory factor analysis (EFA)
3. Run EFA with SPSS and interpret the resulting output
4. Briefly estimate reliability
5. Briefly assess construct validity

The Whole Works: Analyzing the Factor Structure of Multi-Item Data
[Flow diagram: theory; constructs; items linked to constructs; collect data; data cleaning filter; EFA (link items to constructs; label constructs); conduct CFA without CMB; conduct CFA with CMB; modify the measurement model; goodness-of-fit filter; goodness-of-fit & psychometric properties filter; conduct multi-group CFA; build/run structural model; modify the structural model; test structural hypotheses; contribute to theory.]

Family Tree of SEM (Source: PIRE)
- Is the difference between samples on a variable significant? t-test, ANOVA, multi-way ANOVA
- Is the correlation between different variables significant? Bivariate correlation, multiple regression, path analysis
- Multiple variables, overall model, measurement model, etc.: factor analysis, exploratory factor analysis, confirmatory factor analysis, structural equation modeling
- Multiple samples, multiple variables, over time, etc.: repeated-measures designs, growth curve analysis, latent growth curve analysis

SCOPE of Factor Analysis Today
- Factor analysis and principal component analysis
- Carrying out the analyses in SPSS
- Deciding on the number of factors
- Rotating factors
- Producing factor and component scores
- Assumptions and sample size
- Exploratory and confirmatory FA

Types of Measurement Models: Exploratory (EFA), Confirmatory (CFA), Multitrait-Multimethod (MTMM), Hierarchical CFA

EFA vs. CFA

Exploratory Factor Analysis is concerned with how many factors are necessary to explain the relations among a set of indicators and with estimation of factor loadings. It is associated with theory development.

Confirmatory Factor Analysis is concerned with determining whether the number of factors conforms to what is expected on the basis of pre-established theory: do items load as predicted on the expected number of factors? The number of factors is hypothesized beforehand.

End-User Computing Satisfaction (EUCS)
EUCS: an instrument for measuring satisfaction with an information system.
CONTENT: 1. Does the system provide the precise information you need? 2. Does the information content meet your needs? 3. Does the system provide reports that seem to be just about exactly what you need? 4. Does the system provide sufficient information?
ACCURACY: 1. Is the system accurate? 2. Are you satisfied with the accuracy of the system?
FORMAT: 1. Do you think the output is presented in a useful format? 2. Is the information clear?
EASE OF USE: 1. Is the system user friendly? 2. Is the system easy to use?
TIMELINESS: 1. Do you get the information you need in time? 2. Does the system provide up-to-date information?

Factor Analysis

Factor Analysis is a method for identifying a structure (factors, or dimensions) that underlies the relations among a set of observed variables. Factor analysis is a technique that transforms the correlations among a set of observed variables into a smaller number of underlying factors, which contain all the essential information about the linear interrelationships among the original test scores. Factor analysis is a statistical procedure that models the relationship between observed variables (measurements) and the underlying latent factors.

Factor Analysis

Factor analysis is a fundamental component of structural equation modeling. Factor analysis explores the inter-relationships among variables to discover whether those variables can be grouped into a smaller set of underlying factors. Many variables are reduced (grouped) into a smaller number of factors; these variables reflect the causal impact of the latent underlying factors. It is a statistical technique for dealing with multiple variables.

Applications of Factor Analysis
- Explore data for patterns. Often a researcher is unclear whether items or variables have a discernible pattern. Factor analysis can be done in an exploratory fashion to reveal patterns among the inter-relationships of the items.
- Data reduction. Factor analysis can be used to reduce a large number of variables into a smaller and more manageable number of factors, and can create factor scores for each subject that represent these higher-order variables. A large number of variables is thereby reduced to a parsimonious set of a few factors that account better for the underlying variance (causal impact) in the measured phenomenon (a brief code sketch follows this list).

- Confirm a hypothesized factor structure. Factor analysis can be used to test whether a set of items designed to measure a certain variable (or variables) does, in fact, reveal the hypothesized factor structure (i.e., whether the underlying latent factor truly causes the variance in the observed variables and how certain we can be about it). In measurement research, when a researcher wishes to validate a scale with a given or hypothesized factor structure, confirmatory factor analysis is used.
- Theory testing. Factor analysis can be used to test a priori hypotheses about the relations among a set of observed variables.
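To make the data-reduction application concrete outside SPSS, here is a minimal Python sketch on simulated data (the item matrix and the choice of two components are purely illustrative): many correlated items are collapsed into a few component scores per respondent.

```python
# Minimal data-reduction sketch on simulated data: 12 items -> 2 component scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
items = rng.normal(size=(200, 12))           # 200 respondents x 12 survey items (fake)

z = StandardScaler().fit_transform(items)    # standardize items first
pca = PCA(n_components=2)                    # keep two components ("factors")
scores = pca.fit_transform(z)                # one score per respondent per component

print(scores.shape)                          # (200, 2): the reduced data
print(pca.explained_variance_ratio_)         # variance retained by each component
```

With real survey data you would pass in your own respondents-by-items matrix instead of the simulated one.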

How would you group these Items?

Exploratory Factor Analysis
In EFA, the researcher is attempting to explore the relationships among items to determine if the items can be grouped into a smaller number of underlying factors. In this analysis, all items are assumed to be related to all factors.

[Path diagram: observed items V1-V4, each with loadings on two latent factors (Factor 1 and Factor 2).]

Factorial Solution
[Table sketch: items by factors, showing factor loadings and a possible cross-loading.]

Exploratory Factor Analysis: Measured Variables or Indicators
These are the variables that the researcher has observed or measured. In this example, they are the four items on the scale. Note that they are drawn as rectangles or squares.


Exploratory Factor Analysis: Unmeasured or Latent Variables
These variables are not directly measurable; the researcher only has indicators of them. They are often the more interesting, but more difficult, variables to measure (e.g., self-efficacy). In this example, the latent variables are the two factors. Note that they are drawn as ellipses.

Exploratory Factor Analysis: Factor Loadings
Loadings measure the relationship between the items and the factors. Factor loadings can be interpreted like correlation coefficients, ranging between -1.0 and +1.0. The closer the value is to 1.0, positive or negative, the stronger the relationship between the factor and the item. Loadings can be either positive or negative.

Exploratory Factor Analysis: Factor Loadings
Note the direction of the arrows; the factors are thought to influence the indicators, not vice versa. Each item is being predicted by the factors.

Exploratory Factor Analysis: Errors in Measurement
Each of the indicator variables has some error in measurement. The small circles in the diagram indicate the error. The error is composed of 'we know not what', influences that are not measured directly. These errors in measurement relate to the reliability estimates for each indicator variable.

Multi-Indicator Approach

A multiple-indicator approach reduces the overall effect of measurement error of any individual observed variable on the accuracy of the results. A distinction is made between observed variables (indicators) and underlying latent variables or factors (constructs). Together, the observed variables and the latent variables make up the measurement model.

Conceptual Model
This model holds that there are two uncorrelated factors that explain the relationships among the six emotion variables.
[Diagram: the observed variables Joy, Happiness, and Awe load on the latent factor Positive Affect; Fear, Guilt, and Sadness load on the latent factor Negative Affect.]

Measurement Model

Item        Positive Affect (Factor 1)   Negative Affect (Factor 2)
Joy         Loading*                     0
Awe         Loading                      0
Happiness   Loading                      0
Fear        0                            Loading
Guilt       0                            Loading
Sadness     0                            Loading

*The loading is a data-driven parameter that estimates the relationship (correlation) between an observed item and a latent factor.

Assumptions of Factor Analysis
- The data matrix must have a sufficient number of correlations. Variables must be inter-related in some way, since factor analysis seeks the underlying common dimensions among the variables. If the variables are not related, each variable will be its own factor! Rule of thumb: a substantial number of correlations greater than .30.
- Metric variables are assumed, although dummy variables may be used (coded 0, 1).
- The factors or unobserved variables are assumed to be independent of one another.
- All variables in a factor analysis must be measured on at least an ordinal scale. Nominal data are not appropriate for factor analysis.

Quick Quips about Factor Analysis
- How many cases? Rule of 10: 10 cases for every item. Rule of 100: the number of respondents should be the larger of (1) 5 times the number of variables or (2) 100.
- How many variables do I need for FA? The more the better (at least 3).
- Is normality of data required? Nope.
- Is it necessary to standardize the variables before FA? Nope.
- Can you pool data from two samples together in an FA? Yep, but you must show they have the same factor structure.

Tests for Basic Assumptions
Two statistics on the SPSS output allow you to look at some of the basic assumptions: the Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy and Bartlett's Test of Sphericity. The KMO Measure of Sampling Adequacy generally indicates whether or not the variables can be grouped into a smaller set of underlying factors; that is, will the data factor well? KMO varies from 0 to 1 and should be .60 or higher to proceed (a more lenient cut-off of .50 can be used). High values (close to 1.0) generally indicate that a factor analysis may be useful with your data. If the value is less than .50, the results of the factor analysis probably won't be very useful.

Kaiser-Meyer-Olkin (KMO)
.90s       Marvelous
.80s       Meritorious
.70s       Middling
.60s       Mediocre
.50s       Miserable
below .50  Unacceptable

KMO Statistics: Interpreting the Output

In this example, the data support the use of factor analysis and suggest that the data may be grouped into a smaller set of underlying factors. What does Bartlett's Test of Sphericity explore?

Bartlett's Test of Sphericity
Tests the hypothesis that the correlation matrix is an identity matrix (diagonals are ones, off-diagonals are zeros). A significant result indicates that the matrix is not an identity matrix.

Bartlett's Test of Sphericity
Bartlett's Test of Sphericity compares your correlation matrix to an identity matrix. An identity matrix is a correlation matrix with 1.0 on the principal diagonal and zeros in all other correlations. So clearly you want your Bartlett value to be significant, since you are expecting relationships between your variables if a factor analysis is going to be appropriate. A problem with Bartlett's test occurs with large Ns: small correlations tend to be statistically significant, so the test may not mean much!
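For readers working outside SPSS, here is a minimal Python sketch of both checks computed directly from their textbook formulas; the simulated `data` matrix and the function names are illustrative, not part of any SPSS output.

```python
# KMO and Bartlett's test of sphericity, computed from their formulas.
import numpy as np
from scipy.stats import chi2

def kmo(data):
    r = np.corrcoef(data, rowvar=False)                  # correlation matrix
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                                   # partial correlations
    off = ~np.eye(r.shape[0], dtype=bool)
    r2, p2 = (r[off] ** 2).sum(), (partial[off] ** 2).sum()
    return r2 / (r2 + p2)                                # 0..1; want >= .60

def bartlett(data):
    n, p = data.shape
    r = np.corrcoef(data, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(r))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)                       # want a significant p

rng = np.random.default_rng(1)
data = rng.normal(size=(150, 8)) @ rng.normal(size=(8, 8))   # correlated fake items
print("KMO:", round(kmo(data), 3))
print("Bartlett chi-square, p-value:", bartlett(data))
```

With real data, pass in the respondents-by-items matrix in place of the simulated one.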

Two Extraction Methods

Principal Component Analysis considers all of the available variance (common + unique); it places 1s on the diagonal of the correlation matrix. It seeks a linear combination of variables such that maximum variance is extracted, then repeats this step on the remaining variance. Use it when the concern is prediction and parsimony and when specific and error variance are known to be small. It results in orthogonal (uncorrelated) components.

Principal Axis Factoring (PAF), or Common Factor Analysis, considers only common variance; it places communality estimates on the diagonal of the correlation matrix. It seeks the least number of factors that can account for the common variance (correlation) of a set of variables. PAF analyzes only common-factor variability, removing the uniqueness or unexplained variability from the model. PAF is preferred in SEM because it accounts for co-variation, whereas PCA accounts for total variance.

Methods of Factor Extraction
Principal-axis factoring (PAF): the diagonals of the correlation matrix are replaced by estimates of the communalities; the iterative process continues until there are only negligible changes in the communalities.
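The iteration described above is short enough to sketch directly. The following Python function is an illustrative implementation of principal-axis factoring (not SPSS's code): start the communalities at the squared multiple correlations, put them on the diagonal, extract factors, recompute the communalities, and repeat until they stop changing.

```python
# Illustrative principal-axis factoring: iterate communality estimates on the
# diagonal of the correlation matrix until they stabilize.
import numpy as np

def principal_axis(r, n_factors, n_iter=100, tol=1e-6):
    # initial communalities: squared multiple correlations, 1 - 1/diag(R^-1)
    h2 = 1 - 1 / np.diag(np.linalg.inv(r))
    for _ in range(n_iter):
        reduced = r.copy()
        np.fill_diagonal(reduced, h2)              # reduced correlation matrix
        vals, vecs = np.linalg.eigh(reduced)       # eigenvalues, ascending
        vals = vals[::-1][:n_factors]
        vecs = vecs[:, ::-1][:, :n_factors]
        loadings = vecs * np.sqrt(np.clip(vals, 0, None))
        new_h2 = (loadings ** 2).sum(axis=1)       # updated communalities
        if np.max(np.abs(new_h2 - h2)) < tol:      # negligible change: stop
            break
        h2 = new_h2
    return loadings, h2

# usage (with a respondents-by-items matrix `data`):
#   loadings, communalities = principal_axis(np.corrcoef(data, rowvar=False), n_factors=2)
```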

What is a Common Factor?
It is an abstraction, a hypothetical construct that affects at least two of our measurement variables. We want to estimate the common factors that contribute to the variance in our variables. Is this an act of discovery or an act of invention?

What is a Unique Factor?
It is a factor that contributes to the variance in only one variable. There is one unique factor for each variable. The unique factors are unrelated to one another and unrelated to the common factors. We want to exclude these unique factors from our solution.

Comparison of Extraction Models: PCA vs. PAF
Factor loadings and eigenvalues are a little larger with principal components. One may always obtain a solution with principal components. Often there is little practical difference. FYI: other, less-used extraction methods include image, alpha, ML, ULS, and GLS factoring.

Principal Components Extraction

A communality (C) is the extent to which an item correlates with all other items. Thus, in the PCA extraction method, when the initial communalities are set to 1.0, all of the variability of each item is accounted for in the analysis. Of course, some of the variability is explained and some is unexplained. In PCA, with these initial communalities set to 1.0, you are trying to find both the common factor variance and the unique or error variance.

Principal Components Extraction

Statisticians have indicated that assuming that all of the variability of the items, whether explained or unique, can be accounted for in the analysis is flawed, and that this approach definitely should not be used in an exploratory factor model. Some researchers suggest PAF as the appropriate method for factor extraction in EFA. In PAF extraction, the amount of variability each item shares with all other items is determined, and this value is inserted into the correlation matrix, replacing the 1.0 on the diagonal. As a result, PAF analyzes only common-factor variability, removing the uniqueness or unexplained variability from the model.

Factor Rotation: Orthogonal
- Varimax (most common): minimizes the number of variables with high (or low) loadings on a factor, making it possible to identify each variable with a factor.
- Quartimax: minimizes the number of factors needed to explain each variable; tends to generate a general factor on which most variables load with medium-to-high values, which is not helpful for research.
- Equimax: a combination of Varimax and Quartimax.
Q&A: Why use a rotation method? Rotation causes the factor loadings to be more clearly differentiated, which is necessary to facilitate interpretation. A short sketch of varimax follows below.
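To show what rotation actually does to a loading matrix, here is a compact Python sketch of the standard Kaiser varimax algorithm (an illustrative implementation, not SPSS's): it searches for an orthogonal rotation that pushes each item's loadings toward a single factor.

```python
# Compact varimax rotation (Kaiser's algorithm) of an items-by-factors
# loading matrix; returns the rotated loadings.
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        lam = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (lam ** 3 - lam @ np.diag((lam ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt                 # best orthogonal rotation so far
        new_var = s.sum()
        if new_var < var * (1 + tol):     # criterion stopped improving
            break
        var = new_var
    return loadings @ rotation
```

After rotation, each row of the loading matrix should have one clearly dominant entry, which is what makes naming the factors easier.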

Non-orthogonal (Oblique)
The real issue is that you don't have a basis for knowing how many factors there are or what they are, much less whether they are correlated. Researchers assume variables are indicators of two or more factors, a measurement model which implies orthogonal rotation.

- Direct oblimin (DO): factors are allowed to be correlated; diminished interpretability.
- Promax: computationally faster than DO; used for large datasets.

Oblique Rotation
The variables are assessed for the unique relationship between each factor and the variables (removing relationships that are shared by multiple factors). The matrix of unique relationships is called the pattern matrix. The pattern matrix is treated like the loading matrix in orthogonal rotation.

Decisions to Be Made
EXTRACTION: PCA vs. PAF
ROTATION: Orthogonal vs. Oblique (non-orthogonal)

Procedures for Factor Analysis

Multiple statistical procedures exist by which the appropriate number of factors can be identified. These procedures are called "extraction methods." By default, SPSS does PCA extraction. The principal components method is simpler and, until recently, was considered the appropriate method for exploratory factor analysis. Statisticians now advocate a different extraction method due to a flaw in the approach that principal components uses for extraction.

What else?

How many factors do you extract?
- One convention is to extract all factors with eigenvalues greater than 1 (e.g., PCA).
- Another is to extract all factors with nonnegative eigenvalues.
- Yet another is to look at the scree plot.
- Base the number on theory.
- Try multiple numbers and see what gives the best interpretation.

Eigenvalues Greater Than 1

Total Variance Explained (Extraction Method: Principal Axis Factoring)

         Initial Eigenvalues              Extraction Sums of Squared Loadings   Rotation Sums of Squared Loadings
Factor   Total   % of Var.   Cum. %       Total   % of Var.   Cum. %            Total   % of Var.   Cum. %
1        3.513   29.276      29.276       3.296   27.467      27.467            3.251   27.094      27.094
2        3.141   26.171      55.447       2.681   22.338      49.805            1.509   12.573      39.666
3        1.321   11.008      66.455        .843    7.023      56.828            1.495   12.455      52.121
4         .801    6.676      73.132        .329    2.745      59.573             .894    7.452      59.573
5         .675    5.623      78.755
6         .645    5.375      84.131
7         .527    4.391      88.522
8         .471    3.921      92.443
9         .342    2.851      95.294
10        .232    1.936      97.231
11        .221    1.841      99.072
12        .111     .928     100.000
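The "eigenvalues greater than 1" rule and the scree plot both come straight from the eigenvalues of the item correlation matrix, which is easy to reproduce in Python (again on simulated data; the item matrix is illustrative):

```python
# Eigenvalues of the item correlation matrix: the inputs to the Kaiser
# (eigenvalue > 1) rule and the scree plot.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(150, 12)) @ rng.normal(size=(12, 12))   # fake 12-item survey

r = np.corrcoef(data, rowvar=False)
eigenvalues = np.linalg.eigvalsh(r)[::-1]          # sorted largest first
pct = 100 * eigenvalues / eigenvalues.sum()        # % of variance per factor

print("eigenvalues:", np.round(eigenvalues, 3))
print("cumulative %:", np.round(np.cumsum(pct), 1))
print("factors retained by the Kaiser rule:", int((eigenvalues > 1).sum()))
```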

Scree Plot
[Scree plot of eigenvalue against factor number (1-12), suggesting a three-factor solution.]

Criteria for Retention of Factors
- Eigenvalue greater than 1: a single variable has variance equal to 1.
- Plot of total variance (scree plot): the gradual trailing off of the variance accounted for is called the scree.
- Note the cumulative % of variance of the rotated factors.

Interpretation of the Rotated Matrix
- Use loadings of .40 or higher.
- Name each factor based on the 3 or 4 variables with the highest loadings.
- Do not expect a perfect conceptual fit of all variables.

Loading Size Based on Sample Size (from Hair et al. 2010, Table 3-2)

Significant Factor Loadings Based on Sample Size
Sample Size   Sufficient Factor Loading
50            0.75
60            0.70
70            0.65
85            0.60
100           0.55
120           0.50
150           0.45
200           0.40
250           0.35
350           0.30

What else?

How do you know when the factor structure is good? When it makes sense, has a (relatively) simple and clean structure, and the total variance explained is > .60.

How do you interpret factors? Good question; that is where the true art of this comes in.

Why EFA?


Reflective versus Formative
Diet (Reflective): R1. I eat healthy food. R2. I do not eat much junk food. R3. I have a balanced diet.
Health (Formative): F1. I have a balanced diet. F2. I exercise regularly. F3. I get sufficient sleep each night.
[Diagram: in the reflective model, arrows run from the latent construct Diet to the indicators R1-R3, each with its own error term (e1-e3); in the formative model, arrows run from the indicators F1-F3 to the latent construct Health, which carries its own error term.]



Reflective: the direction of causality is from construct to measure; the measures are expected to be correlated; the indicators are interchangeable.

Formative: the direction of causality is from measure to construct; there is no reason to expect the measures to be correlated; the indicators are not interchangeable.

*From Jarvis et al. 2003

Adequacy
- Residuals: 5% or fewer
- KMO: 0.8 or higher is better
- Communalities: 0.5 or higher is better

Validity
- Face validity: do the items make sense?
- Pattern matrix: convergent validity (high loadings) and discriminant validity (no cross-loadings)
- Factor correlations: below .7 is better
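These screening rules are easy to apply mechanically once you have a pattern matrix and a factor correlation matrix exported from the software. Here is a small Python sketch with made-up numbers; the 0.5 primary-loading and 0.3 cross-loading cut-offs are chosen only for illustration.

```python
# Screening a (made-up) pattern matrix and factor correlation matrix against
# simple convergent/discriminant rules of thumb.
import numpy as np

pattern = np.array([            # rows = items, columns = factors
    [0.82, 0.10], [0.75, 0.05], [0.68, 0.22],
    [0.12, 0.79], [0.08, 0.71], [0.25, 0.66],
])
factor_corr = np.array([[1.0, 0.45],
                        [0.45, 1.0]])

abs_load = np.abs(pattern)
primary = abs_load.max(axis=1)                       # each item's strongest loading
secondary = np.sort(abs_load, axis=1)[:, -2]         # its next-strongest loading

convergent_ok = bool((primary >= 0.5).all())         # high loading on one factor
discriminant_ok = bool((secondary < 0.3).all())      # no sizeable cross-loadings
off_diag = factor_corr[~np.eye(len(factor_corr), dtype=bool)]
correlations_ok = bool((np.abs(off_diag) < 0.7).all())  # factors not too correlated

print(convergent_ok, discriminant_ok, correlations_ok)
```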


Reliability
- Split the data and do two EFAs.
- Compute Cronbach's alpha (> .70) for each factor.
- In SPSS: Analyze > Scale > Reliability Analysis.
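Cronbach's alpha is also simple to compute from its definition if you are not in SPSS; here is a minimal Python sketch on simulated items for one factor (the data and the noise level are made up):

```python
# Cronbach's alpha for the items belonging to one factor, from its definition.
import numpy as np

def cronbach_alpha(items):
    """items: respondents-by-items array for a single factor's items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(3)
true_score = rng.normal(size=(100, 1))
items = true_score + 0.5 * rng.normal(size=(100, 4))   # four noisy indicators
print(round(cronbach_alpha(items), 2))                  # should comfortably exceed .70
```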
