
Journal of Applied Psychology, 1996, Vol. 81, No. 5, 575-586. Copyright 1996 by the American Psychological Association, Inc. 0021-9010/96/$3.00

Vroom's Expectancy Models and Work-Related Criteria: A Meta-Analysis

Wendelien Van Eerde, University of Amsterdam
Henk Thierry, University of Tilburg

This meta-analysis integrates the correlations of 77 studies on V. H. Vroom's (1964) original expectancy models and work-related criteria. Correlations referring to predictions with the models and the single components (valence, instrumentality, and expectancy) were included in relation to 5 types of criterion variables: performance, effort, intention, preference, and choice. Within-subjects correlations and between-subjects correlations were included separately. Overall, the average correlations were somewhat lower than reported in previous narrative reviews. In certain categories, moderators pertaining to the measurement of the concepts were analyzed with a hierarchical linear model, but these moderators did not explain heterogeneity. The results show a differentiated overview; the use of the correlational material for the validity of expectancy theory is discussed.

Expectancy theory (Vroom, 1964) has held a major position in the study of work motivation. Vroom's (1964) Valence-Instrumentality-Expectancy Model (VIE model), in particular, has been the subject of numerous empirical studies. It has served as a rich source for theoretical innovations in domains such as organizational behavior (Naylor, Pritchard, & Ilgen, 1980), leadership (House, 1971), and compensation (Lawler, 1971). Reviews on expectancy theory (Mitchell, 1974, 1982; Pritchard & Campbell, 1976; Schwab, Olian-Gottlieb, & Heneman, 1979; Wanous, Keon, & Latack, 1983) addressed several conceptual and empirical problems and gave important suggestions for future research.

Recent publications show a revived interest in expectancy theory as it relates to training motivation (Mathieu, Tannenbaum, & Salas, 1992), turnover (Summers & Hendrix, 1991), productivity loss in group performance (Shepperd, 1993), self-set goals (Tubbs, Boehne, & Dahl, 1993), goal commitment (Klein & Wright, 1994; Tubbs, 1993), and goal level (Mento, Locke, & Klein, 1992). Also, some argue that expectancy theory should be combined with other motivation theories (e.g., Kanfer, 1987; Kernan & Lord, 1990; Klein, 1989; Landy & Becker, 1990). Therefore, it is important to establish the validity of expectancy theory. Does 30 years of research support its main tenets? Is the theory still "promising," though not firmly supported empirically, as earlier reviews seem to conclude? Is it useful to combine expectancy theory with other approaches, and, if so, how should this be done?

Many different interpretations, operationalizations, application purposes, and methods of statistical analysis have been used. To make a comparison and combination of the results possible, we referred to Vroom's basic models and their components. The objective of this article is to analyze the literature on expectancy theory systematically and to integrate the empirical results meta-analytically. We did so in order to establish the relation between expectancy theory and work-related criterion variables.

Wendelien Van Eerde, Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands; Henk Thierry, Department of Human Resource Sciences, University of Tilburg, Tilburg, The Netherlands. We thank Joop Hox, Nathalie Allen, Bob Pritchard, Sabine Sonnentag, and Carsten de Dreu. Correspondence concerning this article should be addressed to Wendelien Van Eerde, Department of Psychology, Work and Organizational Psychology, University of Amsterdam, Roetersstraat 15, 1018 WB Amsterdam, The Netherlands. Electronic mail may be sent via Internet to ao_eerde@macmail.psy.uva.nl.

Landy and Becker (1990) suggested that the key to improving the predictions of the expectancy model might lie in variables such as the number of outcomes, the valence of outcomes, and the particular dependent variable chosen for study. Schwab et al. (1979) examined the relationship between the VIE model and two criterion variables, effort and performance. They included several moderators of this relationship in 32 between-subject studies in a statistical analysis. The current article provides a partial update of their findings. In addition, studies with components of the VIE model, that is, valence, instrumentality, expectancy, and the force model (or EV) and the valence model (or VI) were included, as well as the results of within-subjects correlational analyses and studies on other criterion variables than performance and effort. In total, the effect sizes of 77 studies were integrated meta-analytically.

Measurement of Expectancy Theory Concepts

The models and their components (Vroom, 1964) are abstract and susceptible to different interpretations. In general, researchers disagree on what the constructs mean and how to measure them. Some of the different operationalizations of the concepts are outlined below.

Valence

Vroom (1964) defined this concept as all possible affective orientations toward outcomes, and it is interpreted as the importance, attractiveness, desirability, or anticipated satisfaction with outcomes. Several authors have compared the operationalizations empirically (Ilgen, Nebeker, & Pritchard, 1981; Pecotich & Churchill, 1981; Schwab et al., 1979; Tubbs, Boehne, & Paese, 1991). The results of their studies show that the differences in the operationalizations do not always cause consistent effects. Insofar as the effects are consistent, valence operationalized as attractiveness, desirability, or anticipated satisfaction explains more variance than valence operationalized as importance. In some cases, the valence of performance was measured directly instead of being obtained by questioning the instrumentality of performance in relation to certain outcomes and then weighing the instrumentality by the valence of these outcomes. Often, scales with only positive anchors were used, whereas the theory states that valence can assume negative values as well. In the case of goal-setting studies, valence was sometimes summated for the different levels of effort. Because the theoretical meaning is unclear (cf. Klein, 1991), these studies were removed from the current meta-analysis. We examined the moderating effect of the operationalization of valence by coding it as attractiveness, importance, desirability, and other operationalizations.

Instrumentality

Vroom (1964) defined this concept as an outcome-outcome association, and it has been interpreted not only as a relationship between an outcome and another outcome but also as a probability of obtaining an outcome. The least controversy appears to exist over this construct, and both interpretations appear in the meta-analysis without distinction.

Number of Outcomes

Vroom's (1964) models state that the instrumentality of a number of outcomes, weighted by valence, is to be summed. In most research, these outcomes were selected by the researcher and presented to the subject for rating. This procedure increases the risk that some outcomes are irrelevant to the subject, whereas relevant outcomes may not have been included. In Vroom's view, an irrelevant outcome should have an instrumentality score of 0, and therefore it should have no effect on the relationship with the criterion. However, a large number of outcomes tends to decrease the prediction of the criterion (Mitchell, 1982), possibly because outcomes that have gone unnoticed previously introduce measurement error (Parker & Dyer, 1976). Clearly, there should be a difference between truly irrelevant outcomes and those that a person has not yet considered. One method to ensure that outcomes are in fact relevant is having the subject choose the most relevant outcomes from a list of outcomes (cf. Horn, 1980; Kinicki, 1989; Parker & Dyer, 1976). Only rarely are subjects given the opportunity to name their own perceived outcomes (Matsui & Ikeda, 1976). Schwab et al. (1979) showed that (between-subjects) studies using 10 to 15 outcomes in order to predict effort or performance yield stronger effect sizes than those with either fewer or more outcomes, suggesting a curvilinear relationship between the number of outcomes and effect size. In the current meta-analysis we included the number of outcomes as a moderator.

Expectancy

Vroom (1964) defined expectancy as a subjective probability of an action or effort (e) leading to an outcome or performance (p), expressed as e → p. In practice, expectancy has also been measured as the perceived relation or correlation between an action and an outcome. In addition, expectancy has been interpreted as the subjective probability that effort leads to the outcome of performance or a second-level outcome (o), expressed as e → o. The latter view confounds expectancy with instrumentality (p → o). In order to establish whether the original definition is related to higher effect sizes, we coded expectancy of an action (e → p) and expectancy of second-level outcomes (e → o).

Although Vroom (1964) conceptualized expectancy as having more than one level, we decided to include the measurement of one level of expectancy because this type of measurement was a rule rather than an exception. Summated expectancy scores, however, were not included because we considered these too distant from the original conceptualization.


Measurement of the Criterion

In dispute is how work motivation, as predicted by the VIE model, should be measured. As Vroom (1964) remarked, "The only concept in the model that has been directly linked with potentially observable events is the concept of force [where] behavior on the part of a person is assumed to be the result of a field of forces each of which has direction and magnitude" (p. 20). Force is a metaphor: In the literature, it has been operationalized in terms of effort or intention, or it has been derived from measures of performance or from the engagement in an activity such as participation. Performance has served as a dependent variable as well. The criterion variable for the VI model has usually been operationalized as preference (attractiveness), intention, or choice.

Another issue regarding the criterion measure is whether verbal self-reports are valid measures of force. Perhaps self-reports are most closely related to force, but there is a risk that the relationship between expectancy theory measures and this criterion is spuriously inflated by common method bias and by shared variance in measurement error when measured simultaneously with the VIE variables (Horn, 1980). In the present meta-analysis, we distinguished the following criterion measures:¹ (a) performance, which includes the objective measures of productivity, gain in performance, task performance, grades, performance ratings by supervisors, and self-ratings; (b) effort, which includes objective measures of effort expenditure on a task, such as time spent, effort ratings by supervisors, self-reports of effort spent on a task or applying for a job, and intended effort; (c) intention (either to apply for a job or to turn over in a job); (d) preference, which refers to the attractiveness or preference ratings of jobs, occupations, or organizations; and (e) choice, the actual voluntary turnover, job choice, and organizational choice. Note that the preference measure contains the ratings of options, whereas choice contains real choices. Within these categories, we coded the measurement of the criterion variable, that is, self-ratings, ratings by supervisors, or objective measures.

Analysis of Expectancy Theory Predictions: Between- Versus Within-Subjects Analyses

Typically, when the VIE model is applied, the following method is used: Subjects rate the expectancy of the predicted variable and the instrumentality and valence of the outcomes of reaching the predicted variable, and then the three VIE variables are combined into a force score. A criterion measure is obtained, and the subjects' scores are correlated to it according to a between-subjects analysis. It is important to note that this method is at variance with Vroom's (1964) idea of the model. Vroom referred to an individual's force as one which acts relative to other forces within the individual. As such, a relation between VIE variables and a criterion should be examined according to a within-subjects analysis. The models are individual decision-making models and need to be viewed ipsatively (Mitchell, 1974). Analyzing scores of different individuals as a group only gives information about the amount of variation in the group (cf. Tubbs et al., 1993). It is unclear why so many empirical studies have used the inappropriate between-subjects methodology, although the cumbersomeness of a within-subjects test may have contributed to this. Also, how to predict who is going to perform well, which was apparently the objective of some researchers, is a valid concern. However, it is our view that Vroom's model was originally not meant to be used this way.

Another issue in the between-within debate relates to whether response set bias and other sources of between-subjects variance in a between-subjects analysis would cause the amount of variance explained in a between-subjects analysis to be lower than in a within-subjects analysis (Mitchell, 1974). In response to this, Nickerson and McClelland (1989) showed by simulation that between-subjects analyses can yield larger correlations exactly because of response set bias. In practice, however, the within-subjects analyses have usually yielded somewhat stronger correlations (Mitchell, 1982). In the present meta-analysis, we distinguished between the within-subjects analyses and the between-subjects analyses.
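The two analysis strategies can be contrasted with a toy computation. The matrices below are invented, not data from any study in the meta-analysis; the point is only where the correlation is taken, across options within a person (ipsative) or across persons.

```python
# Contrast of within-subjects and between-subjects correlation.
# Rows are subjects; columns are options (e.g., jobs being considered).
# All numbers are made up for illustration.

import statistics

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

force =     [[0.9, 0.4, 0.1], [0.2, 0.8, 0.5], [0.6, 0.3, 0.7]]
criterion = [[0.8, 0.5, 0.2], [0.3, 0.9, 0.4], [0.5, 0.2, 0.8]]

# Within-subjects: correlate each subject's force scores with that
# subject's criterion scores across options, then average the r's.
within = statistics.mean(pearson(f, c) for f, c in zip(force, criterion))

# Between-subjects: take one score per subject (here, the first option)
# and correlate across subjects.
between = pearson([f[0] for f in force],
                  [c[0] for c in criterion])
```

In this contrived example both correlations are positive, but in general the two can diverge sharply, which is the crux of the between-within debate discussed above.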

Method

Step 1 Procedure

A meta-analytic integration was conducted examining the relation between the expectancy models and work-related criteria.² The major goal of this meta-analysis was to provide a precise summary of the overall strength of the relation between VIE variables and work-related criteria. A second goal was to establish whether certain variables modify this relation.

A search for empirical studies on expectancy theory was conducted with the following databases: PsycLIT of the American Psychological Association (1973-1993), Abstracted Business Information (known as ABI Inform) of University Microfilms (1987-1992), and ERIC, the Educational Resources Information Center (1981-1991). These computer searches were supplemented by the ancestry approach: articles were traced by references.

¹ Authors often did not mention the time between the measurement of the prediction and the criterion variable, but most studies were cross-sectional. In the performance and effort categories, aggregations of criterion measures over time were sometimes made. In the criterion category choice, correlations across time were available and were included in the data set.
² The tables with individual effect sizes of the studies in the meta-analysis can be obtained from the corresponding author.


Table 1
Results of the Meta-Analysis (Step 1)

[Table 1 crosses the predictors (valence, instrumentality, expectancy, and the EV, VI, and VIE models) with the criterion categories (performance, effort, intention, preference, and choice). For each cell it reports, separately for between-subjects and within-subjects analyses, the number of correlations k, the homogeneity statistic χ²(k − 1), the weighted average r, the total N, and the lower and upper bounds of the 95% confidence interval. The columns of the original table were scattered during text extraction, so the individual entries are not reproduced here.]

Note. EV = Expectancy × Valence; VI = Valence × Instrumentality; VIE = Valence-Instrumentality-Expectancy. Boldface type signifies heterogeneous categories containing more than 10 correlations. *p


Table 2
Unweighted Average Correlations: Valence × Instrumentality × Expectancy (VIE) and Criterion Variables

[Table 2 is an 11 × 11 matrix over the variables 1. valence, 2. instrumentality, 3. expectancy, 4. EV model, 5. VI model, 6. VIE model, 7. performance, 8. effort, 9. intention, 10. preference, and 11. choice. The coefficients in the lower triangle and the corresponding numbers of observations in the upper triangle were scattered during text extraction, so the individual entries are not reproduced here.]

Note. EV = Expectancy × Valence; VI = Valence × Instrumentality. Correlations were averaged with a Fisher's z transformation. Numbers in the upper triangle are the number of observations corresponding to the coefficients in the lower triangle.

computed within each effect category (Hedges & Olkin, 1985; Rosenthal, 1991). In the categories with heterogeneous results, disjoint cluster analysis (Mullen, 1989) was used to identify outliers. We decided to include the significant outliers, as the number was sufficiently low. Finally, 95% confidence intervals were computed for the weighted means.
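The averaging and confidence intervals just described can be sketched as follows. The correlations and sample sizes below are invented stand-ins, not the studies in the meta-analysis; the procedure is the standard Fisher's z averaging with n − 3 weights.

```python
# Sketch of weighted averaging of correlations via Fisher's z,
# with a 95% confidence interval transformed back to the r metric.
# The r_i and n_i values are invented for illustration.

import math

def fisher_z(r):
    """Fisher's z transformation of a correlation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher_z(z):
    """Inverse transformation (z back to r)."""
    return math.tanh(z)

def weighted_average_r(rs, ns):
    """Weight each study's z by n - 3 (the inverse of z's sampling
    variance), then return the mean r and its 95% confidence bounds."""
    zs = [fisher_z(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = 1 / math.sqrt(sum(ws))
    return (inv_fisher_z(z_bar),
            inv_fisher_z(z_bar - 1.96 * se),
            inv_fisher_z(z_bar + 1.96 * se))

r_mean, lo, hi = weighted_average_r([0.21, 0.16, 0.26], [150, 90, 210])
```

Because the z transformation is monotone, the interval can be computed in the z metric and mapped back without changing its coverage.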

Step 2 Procedure

Some of the categories established in the meta-analysis were heterogeneous, as indicated by the chi-square test. One of the interpretations of a heterogeneous category is that the effect sizes do not come from the same population of effect sizes and that moderators can explain systematically higher or lower effect sizes. The hierarchical linear model (HLM) procedure by Raudenbush and Bryk (1985) was used to examine moderators (Bryk & Raudenbush, 1992; Hox, 1994; Raudenbush & Bryk, 1985; for another example of HLM applied to a meta-analysis, see Hox & De Leeuw, 1994). The moderators in a meta-analysis represent a "variance known" problem because the sampling variance of the effect sizes (Fisher's zs) is known [1/(n − 3)]. The model separates unsystematic sampling error (within studies, Level 1) as a source of variation from systematic variation (between studies, Level 2) because of moderator influences. The results of the analysis provided information about the percentage of heterogeneity in the effect sizes explained by the moderator variables and about the amount and significance of the residual heterogeneity. We performed the moderator analyses in the heterogeneous categories that contained 10 or more effect sizes. Five categories met these conditions and are shown in Table 1.
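A minimal stand-in for the "variance known" idea can be written as a weighted least squares regression of the z-transformed effect sizes on a dummy-coded moderator, weighting each study by the inverse of its known sampling variance. This is a simplified fixed-effects sketch, not the full HLM machinery used in the article, and the effect sizes below are invented.

```python
# "Variance known" moderator sketch: regress Fisher's z effect sizes on
# a dummy-coded moderator by weighted least squares, where each study's
# weight is the inverse of its known sampling variance, n - 3.
# All data values are invented for illustration.

zs = [0.22, 0.18, 0.35, 0.31]   # Fisher's z per study
ns = [100, 150, 80, 120]        # sample sizes
xs = [0, 0, 1, 1]               # moderator dummy (e.g., self-report = 1)
ws = [n - 3 for n in ns]        # inverse sampling variances

# Closed-form WLS for z = b0 + b1 * x with a single predictor.
sw = sum(ws)
mx = sum(w * x for w, x in zip(ws, xs)) / sw
mz = sum(w * z for w, z in zip(ws, zs)) / sw
b1 = (sum(w * (x - mx) * (z - mz) for w, x, z in zip(ws, xs, zs))
      / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
b0 = mz - b1 * mx
```

The HLM formulation adds a Level 2 residual on top of this, which is what lets it quantify how much between-study heterogeneity the moderator leaves unexplained.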

The following five moderator variables were dummy coded: criterion measure (self-report, supervisor ratings, and objective measure); number of outcomes; type of subjects (blue-collar employees, white-collar employees, and students); operationalization of expectancy (expectancy of a first-level outcome and expectancy of a second-level outcome); and operationalization of valence (attractiveness, importance, desirability, and other). All correlations within a study that differed on these moderators were included after a Fisher's z transformation.

Results

Step 1

The results of the first step in the meta-analysis are shown in Table 1. Table 2 presents the unweighted average correlations of the variables used. Note that the cells in Table 1 are filled unevenly and that most studies reported between-subjects correlations. The correlations range widely, from r = .09 for the average within-subjects correlation between VI and performance to r = .65 for the average within-subjects correlation between VI and preference.

The within-subjects method gave higher average correlations than the between-subjects analyses in 75% of the categories. However, 75% of the confidence intervals of the between-subjects and within-subjects results overlap. Furthermore, Table 1 shows that Vroom's (1964) models do not yield higher correlations than the single constructs valence, instrumentality, or expectancy. Therefore, we decided to collapse the average results of the VIE variables over the criterion category (see Table 1) in order to obtain a weighted average for a single criterion variable within the type of analysis. The within-subjects average correlations are significantly higher than the between-subjects correlations for the criterion measures effort (z = 6.02, p