Common Method Variance in IS Research: A Comparison of Alternative Approaches and a Reanalysis of...



    Publisher: Institute for Operations Research and the Management Sciences (INFORMS)


    Management Science


To cite this article: Naresh K. Malhotra, Sung S. Kim, Ashutosh Patil (2006) Common Method Variance in IS Research: A Comparison of Alternative Approaches and a Reanalysis of Past Research. Management Science 52(12):1865-1883. http://dx.doi.org/10.1287/mnsc.1060.0597


    Copyright 2006, INFORMS


MANAGEMENT SCIENCE
Vol. 52, No. 12, December 2006, pp. 1865-1883
ISSN 0025-1909 | EISSN 1526-5501 | 06 | 5212 | 1865
DOI 10.1287/mnsc.1060.0597
© 2006 INFORMS

Common Method Variance in IS Research:
A Comparison of Alternative Approaches and a Reanalysis of Past Research

Naresh K. Malhotra
College of Management, Georgia Institute of Technology, 800 West Peachtree Street, Atlanta, Georgia 30332, [email protected]

Sung S. Kim
School of Business, University of Wisconsin-Madison, 975 University Avenue, Madison, Wisconsin 53706, [email protected]

Ashutosh Patil
College of Management, Georgia Institute of Technology, 800 West Peachtree Street, Atlanta, Georgia 30332, [email protected]

Despite recurring concerns about common method variance (CMV) in survey research, the information systems (IS) community remains largely uncertain of the extent of such potential biases. To address this uncertainty, this paper attempts to systematically examine the impact of CMV on the inferences drawn from survey research in the IS area. First, we describe the available approaches for assessing CMV and conduct an empirical study to compare them. From an actual survey involving 227 respondents, we find that although CMV is present in the research areas examined, such biases are not substantial. The results also suggest that few differences exist between the relatively new marker-variable technique and other well-established conventional tools in terms of their ability to detect CMV. Accordingly, the marker-variable technique was employed to infer the effect of CMV on correlations from previously published studies. Our findings, based on the reanalysis of 216 correlations, suggest that the inflated correlation caused by CMV may be expected to be on the order of 0.10 or less, and most of the originally significant correlations remain significant even after controlling for CMV. Finally, by extending the marker-variable technique, we examined the effect of CMV on structural relationships in past literature. Our reanalysis reveals that contrary to the concerns of some skeptics, CMV-adjusted structural relationships not only remain largely significant but also are not statistically differentiable from uncorrected estimates. In summary, this comprehensive and systematic analysis offers initial evidence that (1) the marker-variable technique can serve as a convenient, yet effective, tool for accounting for CMV, and (2) common method biases in the IS domain are not as serious as those found in other disciplines.

Key words: common method variance; method biases; marker variable; logit analysis; path analysis
History: Accepted by Ramayya Krishnan, information systems; received December 6, 2005. This paper was with the authors 2 months for 1 revision.

1. Introduction
Common method variance (CMV) refers to the amount of spurious covariance shared among variables because of the common method used in collecting data (Buckley et al. 1990). Such method biases are problematic because the actual phenomenon under investigation becomes hard to differentiate from measurement artifacts (Hufnagel and Conca 1994, Avolio and Bass 1991). Myriad sources, such as social desirability (Ganster et al. 1983), ambiguous wording (Hufnagel and Conca 1994), and scale length (Harrison et al. 1996), can result in common method biases. Recently, Podsakoff et al. (2003) systematically classified such causes of CMV into the following four categories: common rater effects, item characteristic effects, item context effects, and measurement context effects. According to the literature, these types of effects differentially influence how the rater responds to questions, thereby resulting in method biases (Tourangeau et al. 2000).

Self-report surveys are the most common form of data collection in the social sciences, including psychology (Feldman and Lynch 1988), organizational research (Crampton and Wagner 1994), marketing (Malhotra 2004), and information systems (IS) (Hufnagel and Conca 1994). In typical survey studies in which the same rater responds to the items in a single questionnaire at the same point in time, data are likely to be susceptible to CMV (Kemery and Dunlap 1986, Lindell and Whitney 2001). Thus, it does



Malhotra, Kim, and Patil: Common Method Variance in IS Research. Management Science 52(12), pp. 1865-1883, © 2006 INFORMS

not come as a surprise that CMV is one of the most frequently mentioned concerns among social scientists in general (Feldman and Lynch 1988, Williams and Brown 1994) and among IS researchers in particular (Woszczynski and Whitman 2001, Straub et al. 2004).

Although researchers generally agree that CMV has the potential to affect the results of a single-method study, no consensus exists about the seriousness of such biases. For example, Spector (1987) found from a reanalysis of previously published studies that a consideration of method effects did not seriously undermine the validity of the past studies. In contrast, Williams et al. (1989) showed that about one fourth of the variance in the measures examined in the past literature was attributable to method effects, indicating that method biases are more serious than what was initially suggested by Spector (1987). Meanwhile, Crampton and Wagner (1994) found from their meta-analysis that although self-report methods caused biases in some cases, method effects do not have the serious and pervasive consequences that critics have alleged.

As this discussion shows, little consensus has been reached regarding the extent of common method biases. Moreover, if such a consensus were ever reached, it is likely to be specific to a particular research area (Podsakoff et al. 2003). In particular, Crampton and Wagner (1994) demonstrated that method effects varied considerably with research domains and suggested that "domain-specific investigations are required to determine which areas of research are especially susceptible to percept-percept effects" (p. 67). In a similar vein, Cote and Buckley (1987) examined 70 studies and found that method variance in the measured items differed greatly across disciplines: 15.8% in marketing, 23.8% in other business areas (mainly management), 28.9% in psychology and sociology, and 30.5% in education. According to Cote and Buckley (1987), such measurement artifacts depend, at least to some extent, on whether the constructs measured are concrete or abstract. In general, "when measures are difficult or ambiguous, respondents tend to interpret them in a relatively subjective manner, and this may increase random responding or increase the probability that respondents' own systematic response tendencies (e.g., implicit theories, affectivity, central tendency and leniency biases) may come into play" (Podsakoff et al. 2003, p. 883). Accordingly, abstract concepts are likely to be more susceptible to method biases than are concrete concepts (Feldman and Lynch 1988). Consistent with this proposition, Cote and Buckley (1987) showed that concrete constructs such as satisfaction and performance were less associated with method effects (22.5% method variance) than were abstract constructs such as attitudes (40.7% method variance).

IS research often deals with concepts that are relatively specific to an information technology (IT) artifact of interest. Accordingly, the scales used in IS research tend to be more concrete and less ambiguous when compared with those in psychology, sociology, or education. Therefore, to form answers to such IT-related questions, respondents are expected to feel less need to guess or rely on other situational cues. Furthermore, in IS research, the major issues in question are mostly technological objects. As such, respondents are more likely to express their true opinions on such impersonal objects than when they express their opinions on socially sensitive topics (i.e., social desirability) or on someone they know well (i.e., leniency biases). Of course, as with other research disciplines, IS research is subject to biases resulting from item and measurement context effects. Nevertheless, the discussion mentioned earlier generally suggests that, compared with other areas, IS research may be less susceptible to some forms of method effects, such as common rater effects (e.g., social desirability, leniency biases) or item characteristic effects (e.g., item ambiguity). However, few researchers have systematically examined CMV in the IS area, and consequently, IS researchers remain uncertain of the impact that method biases exert on the inferences drawn from their studies (Woszczynski and Whitman 2001).

This study is intended to fill this need for a comprehensive and systematic investigation of the impact of CMV on survey-based IS research. To this end, we briefly describe four alternative approaches to accounting for CMV and conduct an empirical study to compare these analytical techniques. The four alternative approaches are (1) the traditional multitrait-multimethod (MTMM) procedure, (2) the modern MTMM technique using confirmatory factor analysis (CFA), (3) Harman's single-factor test, and (4) the marker-variable technique. Given the efficacy of the marker-variable technique in comparison with other techniques, we used the marker-variable technique to infer the effect of CMV on correlations between variables in previously published studies.

Finally, this study is designed to examine the effect of CMV on structural relationships in prior literature. To the best of our knowledge, this research is one of the first in any discipline to empirically compare the marker-variable technique with other conventional procedures in terms of their ability to detect CMV. Furthermore, this study expands the boundary of knowledge by revealing the extent of CMV and its impact on correlations in past IS research. More important, this study contributes to the literature by showing how CMV biases can be assessed



in structural relationships in previously published causal models.

The organization of this paper is as follows. The next section conceptually compares several techniques used to assess CMV. Section 3 empirically compares the marker-variable technique with alternative procedures in terms of their ability to detect method effects. Section 4 reports the results of our analysis of CMV-corrected correlations in previously published studies. In §5, we examine the impact of CMV on path estimates in some selected studies. In §6, we conclude with a discussion of our research findings and directions for further research.

2. Four Alternative Approaches to Assessing CMV

2.1. Traditional MTMM Procedure
In the MTMM procedure, researchers are expected to measure each of the research variables using multiple methods and use the data collected to create an MTMM matrix. The MTMM matrix is a table of correlations among the combinations of traits and methods (Campbell and Fiske 1959, Straub 1989). In this technique, the extent of CMV is estimated by the difference between the monomethod-heterotrait (MH) correlations and the heteromethod-heterotrait (HH) correlations (Millsap 1990). Accordingly, CMV is assumed to exist if the average MH correlation is considerably greater than the average HH correlation. Likewise, CMV is assumed to be trivial if the average MH correlation is comparable to the average HH correlation.

Although instrumental in assessing method biases, the MTMM procedure has several limitations. First, its results depend heavily on the types of methods employed in a particular study. For example, when measurement methods closely resemble each other, the HH correlations tend to be similar to the MH correlations. In this case, the similarity between HH and MH correlations does not necessarily indicate that the measures are free of CMV (Millsap 1990, Marsh and Hocevar 1988). Second, because the validity of this technique relies on the methods employed, no formal means are available to assess the level of CMV in the study under scrutiny. As a result, the traditional MTMM approach does not allow the researcher to systematically account for method biases (Bagozzi 1980). Finally, this procedure is restrictive in actual application because it requires measurement of each of the traits using at least two methods.
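The MH-versus-HH comparison described above can be sketched in a few lines of code. The correlation matrix, trait labels, and method labels below are hypothetical illustrations, not data from this study; the sketch simply averages the two kinds of heterotrait correlations.

```python
import numpy as np

def average_mh_hh(corr, traits, methods):
    """Estimate CMV in an MTMM matrix as the gap between the average
    monomethod-heterotrait (MH) correlation and the average
    heteromethod-heterotrait (HH) correlation.

    corr: correlation matrix over all trait-method combinations.
    traits/methods: parallel lists giving each row's trait and method."""
    mh, hh = [], []
    n = len(traits)
    for i in range(n):
        for j in range(i + 1, n):
            if traits[i] == traits[j]:
                continue  # heterotrait pairs only
            if methods[i] == methods[j]:
                mh.append(corr[i][j])  # same method, different traits
            else:
                hh.append(corr[i][j])  # different method, different traits
    return float(np.mean(mh)), float(np.mean(hh))

# Hypothetical 2-trait x 2-method design (rows: T1M1, T2M1, T1M2, T2M2).
traits = ["T1", "T2", "T1", "T2"]
methods = ["M1", "M1", "M2", "M2"]
corr = np.array([
    [1.00, 0.60, 0.70, 0.45],
    [0.60, 1.00, 0.45, 0.70],
    [0.70, 0.45, 1.00, 0.55],
    [0.45, 0.70, 0.55, 1.00],
])
mh, hh = average_mh_hh(corr, traits, methods)
# An MH average noticeably above the HH average would suggest CMV.
print(mh, hh)
```

In this made-up matrix the MH average (0.575) exceeds the HH average (0.45), the pattern the procedure reads as evidence of method variance; as noted above, however, the technique yields no formal estimate of the bias itself.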

2.2. CFA-Based MTMM Technique
One of the major limitations of the traditional MTMM technique is the lack of capability to assess accurately the extent to which CMV is present. The CFA technique is known to address this shortcoming of the traditional method (Straub et al. 2004, Podsakoff et al. 2003). Specifically, CFA allows the researcher to model explicitly the variance in a measure as a function of three components, namely, the true score variance, the variance due to method effect, and random error. Thus, unlike the traditional method, the CFA technique makes it possible to estimate how similar or dissimilar the methods adopted in the MTMM study are. Consequently, it allows estimation of the true relationships between latent factors that are free from method biases and random error.

Despite its powerful properties, the CFA-based MTMM technique has some disadvantages. One limitation lies in identification of the CFA model. Evidently, the CFA technique involves complex model specification and thus produces a highly parameterized model. Therefore, the resulting model is said to be often underidentified or to result in invalid parameter estimates (Williams and Anderson 1994, Podsakoff et al. 2003). Another limitation of this approach is essentially intrinsic to MTMM itself (i.e., the employment of multiple methods). As discussed previously, MTMM requires measuring each of the traits using multiple methods, and in many cases the use of multiple methods for data collection is not viable. Therefore, this sophisticated technique does not seem to be a realistic option for evaluating CMV, let alone reanalyzing past studies in which multiple methods were rarely employed.
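The three-way variance decomposition that CFA makes explicit can be illustrated with simple arithmetic on standardized loadings (assuming orthogonal trait and method factors). The loadings below are assumed purely for illustration, not estimates from any study.

```python
# Sketch of the variance decomposition underlying CFA-based MTMM:
# a standardized indicator's variance splits into trait, method, and
# random-error components. Illustrative loadings only.
trait_loading = 0.80   # loading on the substantive (trait) factor
method_loading = 0.30  # loading on the method factor

trait_var = trait_loading ** 2            # share due to the true score
method_var = method_loading ** 2          # share due to the method effect
error_var = 1.0 - trait_var - method_var  # remainder is random error

print(trait_var, method_var, error_var)
```

Here 64% of the indicator's variance is substantive, 9% is method variance, and 27% is error; estimating many such loadings jointly is what makes the full CFA model so heavily parameterized.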

2.3. Harman's Single-Factor Test
Harman's single-factor test is arguably the most widely known approach for assessing CMV in a single-method research design (Podsakoff et al. 2003, Podsakoff and Organ 1986). Typically, in this single-factor test, all of the items in a study are subject to exploratory factor analysis (EFA). Then, CMV is assumed to exist if (1) a single factor emerges from unrotated factor solutions, or (2) a first factor explains the majority of the variance in the variables (Podsakoff and Organ 1986, p. 536). As an alternative to EFA, CFA can be used when implementing Harman's single-factor test. In particular, in the CFA approach, all of the manifested items are modeled as the indicators of a single factor that represents method effects. Method biases are assumed to be substantial if the hypothesized model fits the data (e.g., Mossholder et al. 1998).

Although simple and straightforward, this technique, like the others, has limitations. Foremost among them is its well-known insufficient sensitivity to detect moderate or small levels of CMV effects (Kemery and Dunlap 1986, Podsakoff et al. 2003). This is because as the number of latent factors increases, one factor is less likely to account for the majority of the variance



in the manifested variables. In fact, Kemery and Dunlap (1986) demonstrated in a study based on numerical analysis and simulation that this single-factor technique was extremely unreliable. Because of the limitations discussed earlier, experts recommend use of this test only as a last resort (Lindell and Whitney 2001, Podsakoff and Todor 1985). Thus, it seems reasonable to conclude that this technique does not offer an acceptable means to estimate and control for method effects.
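Under the EFA variant of the test, the check reduces to asking whether the first unrotated factor captures the majority of the variance. The sketch below uses principal components of the item correlation matrix as a stand-in for the unrotated factor solution, applied to simulated two-construct data; all names and values are illustrative assumptions.

```python
import numpy as np

def harman_first_factor_share(data):
    """Harman's single-factor check (sketch): compute the share of
    total variance captured by the first unrotated component of the
    item correlation matrix."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)  # eigenvalues in ascending order
    return eigvals[-1] / eigvals.sum()  # first factor's variance share

# Simulated data: two distinct 3-item constructs plus noise, so no
# single factor should dominate.
rng = np.random.default_rng(0)
f1 = rng.normal(size=(300, 1))
f2 = rng.normal(size=(300, 1))
items = np.hstack([
    f1 + 0.5 * rng.normal(size=(300, 3)),
    f2 + 0.5 * rng.normal(size=(300, 3)),
])
share = harman_first_factor_share(items)
# CMV would be suspected only if one factor explained the majority
# (more than half) of the variance.
print(share)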

2.4. Marker-Variable Technique
Lindell and Whitney (2001) proposed a new approach to address the problems related to Harman's test in a single-method research design. In particular, their marker-variable technique takes advantage of a special variable that is deliberately prepared and incorporated into a study along with the research variables. In this approach, a marker variable is implemented in the study such that the marker variable is theoretically unrelated to at least one variable in the study. Because the marker variable is assumed to have no relationship with one or more variables in the study, CMV can be assessed based on the correlation between the marker variable and the theoretically unrelated variable. In this paper, such a correlation is denoted by r_M, and it is treated as an indicator of CMV.

To acquire a reliable estimate of CMV, a marker variable should be carefully identified before the start of data collection. Alternatively, however, it is possible to estimate r_M in a post hoc fashion without the marker variable identified a priori (Lindell and Brandt 2000). Specifically, Lindell and Whitney (2001) stated that "the smallest correlation among the manifest variables provides a reasonable proxy for CMV" (p. 115). Because an uncorrected correlation is influenced not only by true covariance but also by CMV, the smallest positive value in the correlation matrix, or r_M1, would be a conservative estimate of r_M.[1] Although fairly reasonable, the post hoc approach has the potential to capitalize on chance factors. Therefore, according to Lindell and Whitney (2001), investigators can use the second-smallest positive correlation, r_M2, as a more conservative estimate of r_M.

Within the framework of marker-variable analysis, a method factor is assumed to have a constant correlation with all of the measured items. Under this assumption, a CMV-adjusted correlation between the variables under investigation, r_A, will be computed by partialling out r_M from the uncorrected correlation, r_U.[2] In particular, with a sample size of n, r_A and its t-statistic can be calculated as follows:

    r_A = (r_U − r_M) / (1 − r_M),                          (1)

    t_{α/2, n−3} = r_A / √((1 − r_A²) / (n − 3)).           (2)

[1] Here, an assumption is made that correlations among the variables are positive or at least can be represented as being positive (Lindell and Whitney 2001).

Using Equations (1) and (2), investigators can examine the impact of CMV on the magnitude and significance of a correlation. Furthermore, although it was not explicated in their study, we can extend Lindell and Whitney's (2001) framework to test a causal model. Specifically, to estimate CMV-adjusted path coefficients and explained variance, one should first develop a complete matrix of CMV-adjusted correlations using Equation (1). This correlation matrix can subsequently be used as input for path analysis.
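Equations (1) and (2) are straightforward to apply. The sketch below implements them directly; the input values are illustrative (the sample size of 227 matches this study's survey, but the correlations are made up).

```python
import math

def cmv_adjusted_correlation(r_u, r_m, n):
    """Partial the CMV estimate r_m out of an uncorrected correlation
    r_u, returning the adjusted correlation r_a and its t-statistic
    with n - 3 degrees of freedom."""
    r_a = (r_u - r_m) / (1.0 - r_m)                   # Equation (1)
    t = r_a / math.sqrt((1.0 - r_a ** 2) / (n - 3))   # Equation (2)
    return r_a, t

# Illustrative values: r_U = 0.50, r_M = 0.08, n = 227.
r_a, t = cmv_adjusted_correlation(0.50, 0.08, 227)
print(round(r_a, 3), round(t, 2))
```

With these numbers the adjusted correlation (about 0.46) remains sizable and highly significant, mirroring the paper's broader finding that partialling out a small r_M rarely overturns a substantial correlation.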

As discussed previously, this marker-variable method, unlike the traditional MTMM approach, yields a specific estimate of CMV along with the statistical significance of the CMV-adjusted correlation between the variables. In addition, this technique does not force investigators to use multiple methods. Thus, based on theoretical and practical considerations, the marker-variable method seems to be an appealing alternative to assess CMV biases in general and those biases in previously published studies in particular. However, its validity and efficacy need to be further assessed (Podsakoff et al. 2003).

3. An Empirical Comparison of the Alternative Approaches
This section empirically compares the four techniques. First, we describe the methodology of our empirical study, including the measurement instruments and the data collection process. Second, we assess CMV using each of the four techniques. Finally, we discuss the extent of CMV estimated from the collected data as well as the efficacy of the alternative approaches to assessing CMV.

3.1. Methodology
In the survey, we used two sets of scales that are well established in the IS literature: the technology acceptance model (TAM) and concern for information privacy (CFIP). In particular, TAM specifies that two factors, perceived ease of use (PE) and

[2] If measures are associated with random errors, and these errors are not controlled for, then this equation is generally known to underestimate CMV-adjusted correlations (Lindell and Whitney 2001). Lindell and Whitney showed how to correct for such measurement errors to acquire a more reliable estimate of a CMV-adjusted correlation; however, when the measures are not perfectly reliable, the t-value of the corrected correlation cannot be calculated.



perceived usefulness (PU), determine usage intention (UI) (Davis et al. 1989).[3] In this study, TAM scales were adapted to examine individuals' reactions to a portal site in an academic setting. Meanwhile, CFIP centers on four factors, namely, collection (CO), improper access (IA), secondary use (SU), and errors (ER), as drivers of consumers' behavior in the context of information privacy (Smith et al. 1996). Refusal intention (RI), defined as one's reluctance to cooperate with a marketing firm, is one such privacy-related behavior, and it is conceptualized as a dependent variable in this study (Malhotra et al. 2004). To apply the marker-variable technique, we carefully identified variables that would not relate to the phenomena under investigation. As a result, fashion consciousness (FC), which refers to the importance that one attaches to being fashionably dressed, was chosen as a marker variable for TAM (Wells and Tigert 1971, Lumpkin and Darden 1982). In addition, fantasizing (FA), defined as the extent to which one has a vivid imagination, was selected as a marker variable for CFIP. The constructs and corresponding items are described in the online appendix on the Management Science website at http://mansci.pubs.informs.org/ecompanion.html.

Two versions of the survey questionnaire were prepared: One version was Web-based, and the other was paper-and-pencil (P&P). The Web version was intended to test the efficacy of the single-method techniques, such as Harman's single-factor test and the marker-variable technique. Accordingly, we included all of the items related to TAM, CFIP, and their marker variables in the Web-based questionnaire. Meanwhile, the P&P version was designed exclusively for the multiple-method techniques such as MTMM and CFA-based MTMM. Thus, this questionnaire did not contain the marker variables. In addition, the P&P version included only a selected set of the items, as explicitly indicated in the online appendix. The items in the P&P version were expected to be combined with the other items in the Web version within the context of the multiple-method techniques. We administered both versions of the questionnaire to college students taking summer classes at a major public university in the Southeast. The P&P questionnaire was administered while the students were attending class. The Web-based questionnaire was answered by the same students outside of the classroom setting. As an incentive, extra credit toward a course grade was offered to those who completed both surveys, and a total of 227 such responses were obtained.

[3] Note that IT usage is excluded from our consideration because it is relatively free from CMV. On the other hand, although PE takes causal precedence over PU in TAM, for the sake of simplicity, this study treats PE and PU equally as exogenous variables.

    3.2. Data Analysis

3.2.1. Original Results Without Controlling for CMV. We examined TAM and CFIP without any consideration of method biases, as would be the case in typical circumstances. Initially, we performed CFA on the data set collected through the Web. The results of CFA indicated that the measurement model for TAM fit the data satisfactorily: χ²(24) = 30.43, comparative fit index (CFI) = 0.99, nonnormed fit index (NNFI) = 0.99, and root mean square error of approximation (RMSEA) = 0.034 (Hu and Bentler 1999, Gefen et al. 2000). Similarly, the measurement model for CFIP was found to be reasonable: χ²(125) = 297.93, CFI = 0.944, NNFI = 0.93, and RMSEA = 0.078. We also checked the convergent validity, discriminant validity, and reliability of the scales and found that the scales were satisfactory in terms of psychometric properties (Bagozzi and Yi 1988, Fornell and Larcker 1981). Table 1 shows the estimated correlations in the "Uncorrected estimates" column. Subsequently, we tested each of the structural models using structural equation modeling (SEM). The results, including path estimates and explained variance, are described in the same column of Table 1.[4]

3.2.2. Results of Single-Method Techniques. We attempted to assess and, if possible, control for CMV using single-method techniques. First, the single-factor test was performed via CFA by specifying a hypothesized method factor as an underlying driver of all of the indicators. The results revealed that the fit of the single-factor model was extremely unsatisfactory, indicating that CMV is not the major source of the variations in the observed items; for TAM, χ²(27) = 1051.43, CFI = 0.58, NNFI = 0.44, and RMSEA = 0.410; for CFIP, χ²(135) = 2151.79, CFI = 0.53, NNFI = 0.47, and RMSEA = 0.257. Nevertheless, as discussed earlier, Harman's test is known to be highly conservative in detecting biases. Thus, this finding does not necessarily indicate that TAM and CFIP are completely free from method biases.

We then examined the marker-variable technique. First, a measurement model for the TAM factors, together with the marker variable (i.e., fashion consciousness), was tested. The results of CFA indicated that the model fit the data very well: χ²(48) = 56.61, CFI = 0.99, NNFI = 0.99, and RMSEA = 0.028. Similarly, we tested the measurement model for CFIP, along with its marker variable, and the model was also found to be acceptable: χ²(173) = 362.64, CFI = 0.94, NNFI = 0.93, and RMSEA = 0.069. To assess CMV, we examined the correlations of a marker variable and the other variables. In this study, the average correlation between a marker variable and the

[4] Because the structural models are saturated, fit was found to be identical to that of the measurement models.



Table 1    Parameter Estimates Varying with Different Methods of CMV Adjustments

                                           Adjusted estimates
                        Uncorrected   ---------------------------------
                        estimates     Marker variable   CFA-based MTMM

A. TAM (r_M = 0.06)
Factor correlations
  r(PE, PU)                0.53            0.50              0.52
  r(PE, UI)                0.54            0.51              0.54
  r(PU, UI)                0.58            0.55              0.58
Structural paths (effect: UI)
  (PE)                     0.32            0.31              0.33
  (PU)                     0.41            0.39              0.41
Predictive power
  SMC (UI)                 0.41            0.38              0.41

B. CFIP (r_M = 0.08)
Factor correlations
  r(CO, IA)                0.59            0.55              0.55
  r(CO, SU)                0.49            0.45              0.51
  r(CO, ER)                0.46            0.41              0.43
  r(CO, RI)                0.66            0.63              0.64
  r(IA, SU)                0.88            0.87              0.81
  r(IA, ER)                0.59            0.55              0.45
  r(IA, RI)                0.37            0.32              0.36
  r(SU, ER)                0.53            0.49              0.38
  r(SU, RI)                0.27            0.21              0.31
  r(ER, RI)                0.25            0.18              0.24
Structural paths (effect: RI)
  (CO)                     0.68            0.66              0.65
  (IA)                     0.11            0.18              0.08
  (SU)                     0.13            0.20              0.07
  (ER)                     0.06            0.09              0.05
Predictive power
  SMC (RI)                 0.44            0.41              0.41

Notes. r_M = shared correlation resulting from CMV; PU = perceived usefulness; PE = perceived ease of use; UI = usage intention; CO = collection; IA = improper access; SU = secondary use; ER = errors; RI = refusal intention; SMC = squared multiple correlation.
*p < 0.05. **p < 0.01. ***p < 0.001 (two-tailed).

other variables was treated as an estimate of r_M. We found that the average correlations were 0.06 and 0.08, respectively, for TAM and CFIP. Based on the r_M estimates, we calculated CMV-adjusted correlations using Equation (1), and the results are shown in the "Marker variable" column of Table 1. As shown in Table 1, the differences between the original and CMV-adjusted correlations are relatively small for both TAM and CFIP cases (i.e., Δr ≤ 0.07). To formally compare original and CMV-adjusted correlations, we conducted a chi-square difference test (Bollen 1989). Specifically, we replaced an original correlation value with its CMV-adjusted correlation value and examined whether the substitution significantly deteriorated fit (Δχ²(1) > 3.84, p < 0.05). We found that none of the original correlations were significantly different from their CMV-adjusted counterparts, implying that the biases are not substantial.

    We attempted to extend the original procedure toestimate CMV-adjusted structural relationships andthe explained variance of endogenous variables. In

    particular, given a CMV estimate, CMV-adjusted cor-relations can be calculated using Equation (1). Sub-sequently, path analysis was conducted based on theCMV-adjusted correlations to acquire CMV-adjustedstructural relationships and explained variance. AsTable 1 shows, for both TAM and CFIP cases, theCMV-adjusted structural relationships and explainedvariance are close to the original estimates. We alsoconducted chi-square difference tests to compare thepath estimates, and the results indicated that theuncorrected estimates were not statistically differentfrom the adjusted estimates, which suggests that inthis particular setting, CMV does not seriously distort

    our inferences.
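    The adjustment and its significance test, following Lindell and Whitney (2001), can be sketched as below. This is a minimal illustration, not the authors' code: the function names are ours, the formulas assume the standard marker-variable forms rA = (rU − rM)/(1 − rM) and a t test on rA with n − 3 degrees of freedom, and the inputs come from Table 1 and the reported sample size (n = 227).

    ```python
    import math

    def cmv_adjusted_r(r_u: float, r_m: float) -> float:
        """Partial the marker correlation r_m out of an observed
        correlation r_u (marker-variable adjustment)."""
        return (r_u - r_m) / (1.0 - r_m)

    def t_statistic(r_a: float, n: int) -> float:
        """t statistic for an adjusted correlation, with n - 3 df."""
        return r_a * math.sqrt((n - 3) / (1.0 - r_a ** 2))

    # TAM example from Table 1: r(PE, PU) = 0.53, rM = 0.06, n = 227.
    r_a = cmv_adjusted_r(0.53, 0.06)
    print(round(r_a, 2))                  # 0.5, matching Table 1
    print(t_statistic(r_a, 227) > 1.97)   # True: still significant at 0.05
    ```

    The adjusted value 0.50 matches the "Marker variable" column of Table 1, and the large t statistic shows why the small rM estimates found here leave significance intact.
    
    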

    3.2.3. Results of Multiple-Method Techniques. We first performed traditional MTMM analysis with a focus on the comparison of HH and MH correlations. In the case of TAM, we found that, whereas the average correlation for MH was 0.89, the average for HH was 0.63. The mean values were not statistically the same (p < 0.001), implying that CMV is present in the data. Similarly, in the case of CFIP, the average MH value of 0.68 was greater than the average HH value of 0.50 (p < 0.05), suggesting that the data are not free of CMV. Although the results generally indicated the presence of CMV, this traditional technique does not offer an estimate of such biases. Thus, it is difficult to assess the extent of CMV and its effect on factor relationships.

    To further examine the extent to which CMV affects our inferences, we performed CFA-based MTMM. Specifically, within this rigorous framework, each indicator is assumed to consist of three sources of variance: the trait factor, the method factor, and random error. The results of the CFA-based MTMM indicated that model fit was satisfactory: for TAM, χ²(14) = 32.08, CFI = 0.99, NNFI = 0.97, and RMSEA = 0.076; for CFIP, χ²(106) = 200.85, CFI = 0.95, NNFI = 0.93, and RMSEA = 0.063. The detailed specifications for the models and their results are depicted in Figures 1 and 2. The results of the path analyses are shown in the "CFA-based MTMM" column of Table 1. It should be noted that the factor relationships and their statistical significance based on the CFA-based MTMM were quite comparable to those of the original estimates. We conducted formal tests to examine the statistical differences between the original and corrected estimates. The results revealed that in the case of TAM, no original correlations



    Figure 1    Results of CFA-Based MTMM for TAM

    [Path diagram: indicators (e.g., W-PU1, P-PE1, W-UI1) load both on the trait factors perceived usefulness, perceived ease of use, and usage intention and on the Web and paper-and-pencil method factors; standardized loadings, error terms, and factor correlations are shown in the original figure.]

    *p < 0.05. **p < 0.01. ***p < 0.001 (two-tailed).

    and path estimates differed significantly from CMV-adjusted values. In the case of CFIP, we found that three of the CMV-adjusted correlations were significantly smaller than their respective original correlations, but the CMV-adjusted path estimates did not differ statistically from the original path estimates. Taken together, the results of the multiple-method techniques largely indicated that although CMV was present in the data, the biases resulting from the method effects would not substantially change our inferences.

    3.3. Discussion of the Results
    We found from the traditional MTMM analysis that survey-based research is not free of method biases. Yet, at the same time, the single-factor analysis also reveals that such biases are not substantial. Meanwhile, CFA-based MTMM offers better insights into the issue of method biases. For example, the results in Figures 1 and 2 indicate that the average amount of trait, method, and error variance is 64%, 16%, and 19% for TAM and 45%, 16%, and 40% for CFIP, respectively.5 That is, in both the TAM and CFIP cases, the average amount of method variance is 16%. Interestingly,

    5 The percent of trait variance in a variable is equal to the squared value of the variable's standardized loading on the respective trait factor. Similarly, the percent of method variance can be calculated by squaring the variable's standardized loading on the method factor. Meanwhile, the percent of error variance is directly represented by the standardized error term.

    this estimated level of method variance is almost the same as the average method variance reported in the marketing discipline (i.e., 15.8%), but it is considerably lower than what was found in psychology and sociology (i.e., 28.9%) or education (i.e., 30.5%) (Cote and Buckley 1987). It is probable that the low method variance observed in the IS and marketing areas arises because both disciplines often deal with relatively concrete targets (e.g., systems/information and products/services) as focal issues of research, thereby lowering common rater and item characteristic effects. In any case, the results of this study imply that method biases in the IS domain may not be as high as those in other social science disciplines. Similarly, the rM values for TAM (i.e., 0.06) and CFIP (i.e., 0.08) found in this study are deemed relatively small, given that Cohen (1977) characterized an effect size as small if r = 0.10. In fact, a correlation of 0.10 is not even significant (p = ns) in our study (n = 227) (Malhotra 2004). Overall, our findings suggest that in the case of IS, the inflated correlation resulting from CMV is smaller than that reported in other disciplines.
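    The decomposition described in footnote 5 can be sketched as follows. The loadings below are hypothetical round numbers chosen so the shares roughly match the TAM averages of 64%, 16%, and 19%; they are not values taken from the figures.

    ```python
    def variance_shares(trait_loading: float, method_loading: float,
                        error_term: float) -> tuple:
        """Per footnote 5: trait and method shares are the squared
        standardized loadings; the error share is the standardized
        error term itself."""
        return trait_loading ** 2, method_loading ** 2, error_term

    # Hypothetical indicator: trait loading 0.80, method loading 0.40,
    # standardized error 0.20.
    trait, method, error = variance_shares(0.80, 0.40, 0.20)
    print(round(trait, 2), round(method, 2), round(error, 2))  # 0.64 0.16 0.2
    ```

    For a well-specified indicator the three shares sum to one, which is a quick consistency check when reading loadings off an MTMM diagram.
    
    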

    Meanwhile, the marker-variable technique appears to be quite robust against violation of its major assumption that the indicators are equally influenced by the method factor. For example, Figure 1 shows that the loadings from the Web method factor to the indicators range from 0.12 (p = ns) to 0.34 (p < 0.05) for TAM. Similarly, in the case of CFIP, factor loadings on the Web method factor vary considerably, from 0.02 (p = ns)



    Figure 2    Results of CFA-Based MTMM for CFIP

    [Path diagram: indicators (e.g., W-CO2, P-IA1, W-SU1, P-ER2, W-PI1) load both on the trait factors collection, improper access, secondary use, errors, and refusal intention and on the Web and paper-and-pencil method factors; standardized loadings, error terms, and factor correlations are shown in the original figure.]

    *p < 0.05. **p < 0.01. ***p < 0.001 (two-tailed).

    to 0.65 (p < 0.001). Nevertheless, as discussed previously, the results based on the marker-variable technique were found to be quite consistent with those based on CFA-based MTMM in this study. Thus, our study provides initial evidence that, like other popular statistical tools (e.g., regression, SEM), the marker-variable technique performs quite well in delivering reliable parameter estimates even when its key assumptions are not met exactly (Lindell and Whitney 2001). Overall, our findings imply that the marker-variable technique provides a reasonable means to assess CMV effects, and we therefore selected this technique for reanalyzing the past IS literature.

    4. CMV-Adjusted Correlations in Previously Published IS Studies

    4.1. Selecting Individual Studies
    To understand potential CMV problems in IS research, we used the marker-variable technique to reanalyze previously published IS studies. To identify the studies to which the marker-variable technique was applicable, we examined all issues of Management Science, Information Systems Research, and MIS Quarterly published from January 1993 to December 2003. We chose a study for our analysis if (1) both predictor and criterion variables were collected using a single questionnaire, and (2) the correlations between the research variables were reported. A total of 19 papers met these selection criteria. Within these 19 selected studies, we further identified the correlations to include in our analysis. Because the relationships between independent and dependent variables are of particular interest in most studies, it was important to check whether those relationships that were originally significant became nonsignificant after CMV adjustment. Thus, our analysis included only the correlations between independent and dependent variables that were statistically significant at the 0.05 level (two-tailed). Accordingly, we located a total of 216 significant correlations between independent and dependent variables, each of which was subject to subsequent analysis.

    4.2. Summary of the Results of Marker-Variable Analyses
    As recommended by Lindell and Whitney (2001), we used rM1 as the estimate of rM. However, to minimize the capitalization on chance that rM1 might incur, rM2 was also used as a more conservative estimate of rM. For both estimates of rM, we calculated CMV-adjusted correlations and their significance using Equations (1) and (2), respectively.6 A summary of the results of the data analyses is presented in Table 2. In particular, the results based on rM1 appear in the column labeled CMV1, whereas those based on rM2 are in the column designated CMV2.

    When rM1 was used as the estimate of rM, the average size of rM was found to be 0.081. In addition, the average value of the original (uncorrected) correlations was 0.37, and the average value of the CMV-adjusted correlations was 0.31. As shown in Table 2, of the 216 significant correlations examined, 18.06% became nonsignificant when we adjusted for CMV. On the other hand, when rM2 was used, the average size of the CMV estimates increased to 0.11. Whereas the average value of the original correlations was the same as in the previous case, the average value of the adjusted correlations decreased to 0.29. Consequently, of these 216 significant correlations, 25% became nonsignificant when adjusted for CMV.

    In general, our results indicated that a majority of the originally significant correlations remained significant even after controlling for CMV. Moreover, these results should be regarded as conservative

    6 Note that in our marker-variable analysis, research variables were assumed to have no measurement errors. In general, this treatment of the variables is known to overestimate CMV biases (Lindell and Whitney 2001). However, we took this approach because many studies did not report reliabilities. Thus, the results should be interpreted as a conservative estimate of CMV.

    because, by not accounting for measurement errors, method biases are likely to be overestimated. Interestingly, Crampton and Wagner (1994) also found in their meta-analysis that common method effects inflated correlations in only 38 of 143 cases (27%) and concluded that the inflation of correlations "is a less probable effect of self-report methods than are more neutral effects" (p. 72). Our findings are consistent with their observation in suggesting that method biases are not necessarily omnipresent in single-method research designs.

    4.3. Sensitivity Analysis of the Summary Results
    We previously estimated rM using both the smallest and the second-smallest positive values in a correlation matrix. Although we followed the procedure outlined by Lindell and Whitney (2001), it was still possible that the estimated rM value did not reflect the actual amount of CMV. Thus, we examined how the results of our previous analyses would vary with different estimates of rM. Although a reasonable estimate of rM was expected to be about 0.10, as discussed previously, we also considered more extreme cases in which rM would be as high as 0.30. Accordingly, we conducted a sensitivity analysis by varying rM from 0.05 to 0.30 in increments of 0.05.
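    The sweep can be sketched as follows. This is a minimal illustration applying the Lindell and Whitney (2001) adjustment and t test to the average reported values (rU = 0.37, n = 214); the per-level averages in Table 2 differ slightly because the adjustment was applied to each correlation before averaging.

    ```python
    import math

    T_CRIT = 1.97  # approximate two-tailed 0.05 critical t for ~211 df

    def adjusted_r(r_u: float, r_m: float) -> float:
        # Partial the assumed CMV correlation r_m out of r_u
        return (r_u - r_m) / (1.0 - r_m)

    def is_significant(r_a: float, n: int) -> bool:
        # t test on the adjusted correlation with n - 3 df
        t = r_a * math.sqrt((n - 3) / (1.0 - r_a ** 2))
        return t > T_CRIT

    r_u, n = 0.37, 214  # average correlation and sample size in the reanalysis
    for r_m in (0.05, 0.10, 0.15, 0.20, 0.25, 0.30):
        r_a = adjusted_r(r_u, r_m)
        print(f"rM={r_m:.2f}  rA={r_a:.2f}  significant={is_significant(r_a, n)}")
    ```

    Running the sweep shows the average correlation shrinking toward 0.10 as rM grows and losing significance only at the most extreme assumed level, consistent with the pattern reported below.
    
    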

    The results of the data analysis at these six levels of rM are shown in Table 2. As expected, CMV effects on the correlation coefficients and their significance were not substantial when the estimated rM value was less than or equal to 0.10. As Table 2 shows, when rM = 0.10, the difference between the average of the original correlations and that of the CMV-adjusted correlations was 0.07 (p = ns). Similarly, the probability of a correlation becoming nonsignificant was found to be 26%, suggesting that a majority of the corrected correlations would stay significant within this reasonable range of rM. In the case of rM = 0.15, about one third of the correlations became nonsignificant, but the difference in the correlations between the original and corrected estimates was not significant (p = ns).

    As rM increased to 0.20 or more, the biases resulting from CMV became more evident. For example, when rM = 0.20, the average of the CMV-adjusted correlations decreased to 0.21, and 45% of the correlations became nonsignificant. In fact, given the average sample size (i.e., n = 214), the difference between the average of the original correlations (r = 0.37) and that of the CMV-adjusted correlations (r = 0.21) is statistically significant (p < 0.05). These


    Table 2    Summary of the Results of Marker-Variable Analyses

                                          CMV level
                         CMV1    CMV2          Sensitivity analysis
    rM                   0.08    0.11    0.05    0.10    0.15    0.20    0.25    0.30

    Original studies
      Average n         214.03  214.03  214.03  214.03  214.03  214.03  214.03  214.03
      Average rU          0.37    0.37    0.37    0.37    0.37    0.37    0.37    0.37
    CMV adjustments
      Average rA          0.31    0.29    0.33    0.30    0.26    0.21    0.16    0.10
      Nonsignificant (%)† 18.06   25.00   11.57   25.93   36.11   45.37   52.78   60.65

    Notes. rU = uncorrected correlation; rA = CMV-adjusted correlation; rM = shared correlation resulting from CMV; CMV1 = smallest positive correlation in a correlation matrix (rM1); CMV2 = second-smallest positive correlation in a correlation matrix (rM2); n = sample size. †Percentage of a correlation becoming nonsignificant.

    results imply that most of the inferences drawn from past studies would be untrustworthy under such an extreme condition. For example, in a case in which rM = 0.30, the average-sized correlation (i.e., rU = 0.37) will no longer be statistically significant after controlling for CMV effects when the sample size is about average (i.e., n = 214). Yet, it is important to highlight again that rM is expected to be approximately 0.10 or less; thus, in practical applications, its value is unlikely to be as high as 0.20, let alone 0.30. Furthermore, given that measurement errors could not be taken into account because most of the past studies did not report reliabilities, our reanalysis is likely to overestimate CMV biases.

    4.4. Logit Analysis of the Probability of a Correlation Becoming Nonsignificant
    As alluded to earlier, some researchers have shown that method effects vary with research areas across or within disciplinary boundaries (Cote and Buckley 1987, Crampton and Wagner 1994). Thus, an interesting question that arises from those observations is whether method biases differ across research topics within the IS field. For example, it is possible that the likelihood that a CMV-adjusted correlation will become nonsignificant varies from one topic (e.g., behavioral intention) to another (e.g., user satisfaction). Thus, we examined whether the probability of a correlation becoming nonsignificant varies with the types of endogenous variables in the original 19 studies.

    First, we developed a model of the likelihood of a CMV-adjusted correlation becoming nonsignificant. Specifically, we used a logit model, which is suited to predicting the probability that an event will occur (Hair et al. 1998). Our logit model included the six research topics (classified according to the dependent variables examined) that were most frequently investigated among the 19 selected articles: intention, satisfaction, expectation, self-efficacy, perceived usefulness, and perceived ease of use. All of the 19 studies examined in our research included either one of these six variables or fell into the "others" category, which served as the base category in dummy-variable coding. In the logit model, dummy-variable coding was employed such that a binary variable indicated the presence or absence of each topic in a given study that could potentially influence the likelihood of a CMV-adjusted correlation becoming nonsignificant. In addition, the logit model included the three key parameters of Equations (1) and (2) as control variables (i.e., rU, rM, and n). Accordingly, our key inquiry was whether the types of research topics would offer additional predictive power beyond the three known determinants.

    The specific form of the logit model is

        loge[P/(1 − P)] = Σ_{i=1}^{n} aiXi,    (3)

    where P is the probability of a CMV-adjusted correlation becoming nonsignificant, Xi denotes a determinant variable, and ai is its coefficient. Note that if ai is positive, P will increase as Xi increases; if it is negative, P will decrease as Xi increases; and if it is 0, no change in P will occur regardless of Xi. Based on Equations (1) and (2), the coefficients of rU and n are expected to be negative, and that of rM is expected to be positive. We conducted eight logit analyses, one each for rM1, rM2, and the six hypothetical rM levels used in the sensitivity analysis. The results, including parameter estimates and model fit, are reported in Table 3.
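    To make Equation (3) concrete, the predicted probability is recovered from the linear predictor with the inverse-logit transform. The coefficient values below are hypothetical (the estimated coefficients appear in Table 3); only their signs follow the text: negative for rU and n, positive for rM.

    ```python
    import math

    def logit_probability(coefficients, predictors):
        """Invert Eq. (3): P = 1 / (1 + exp(-sum(a_i * x_i)))."""
        z = sum(a * x for a, x in zip(coefficients, predictors))
        return 1.0 / (1.0 + math.exp(-z))

    # Hypothetical coefficients for (constant, rU, rM, n); signs only
    # are taken from the text, the magnitudes are invented.
    a = (2.0, -8.0, 6.0, -0.003)
    x = (1.0, 0.37, 0.10, 214)   # intercept term, average rU, rM, and n
    p = logit_probability(a, x)
    print(f"P(nonsignificant) = {p:.3f}")
    ```

    Because the rM coefficient is positive, raising the assumed level of CMV raises the predicted probability that an adjusted correlation loses significance, which is exactly the pattern the sensitivity analysis traces.
    
    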

    As indicated by Nagelkerke's pseudo-R² in Table 3, the fit between the logit model and the data was highly satisfactory at all levels of rM. Thus, the corresponding parameter estimates from the logit analysis were considered trustworthy. As expected, we found that the coefficient for the size of the original correlation was negative and significant in all cases. These


    Table 3    Results of Logit Analyses

                                        CMV level
                      CMV1     CMV2           Sensitivity analysis
    rM                0.08     0.11     0.05     0.10     0.15     0.20     0.25     0.30

    Predictive power
      Nagelkerke R²   0.85     0.85     0.84     0.91     0.92     0.98     0.91     0.84
    Parameter estimates
      BI              1.29     0.17     1.19     1.13     2.34     7.50     3.45     1.28
      SAT             3.51     0.74     1.83     3.21     2.59    24.12     3.04     1.23
      EXP             4.68     3.32     8.12     6.62     6.57     7.13     1.64     1.06
      SE              0.35     0.60     4.69     3.10     2.10     9.31     2.08     0.25
      PU              0.99     2.26     1.91     3.30     0.07     7.65     2.50     0.44
      PE              0.73     1.48     0.06     4.00     0.08     3.83     2.14     1.09
      rU            −82.99   −66.92  −124.30  −116.37   −80.33  −253.47   −57.98   −38.74
      rM             60.16    56.62
      n              −0.03    −0.03    −0.05    −0.05    −0.04    −0.10    −0.03    −0.02
      Constant       20.99    14.52    34.60    36.06    31.35   109.45    28.97    19.83

    Notes. BI = behavioral intention; SAT = satisfaction; EXP = expectation; SE = self-efficacy; PU = perceived usefulness; PE = perceived ease of use; rU = uncorrected correlation; rM = shared correlation resulting from CMV; CMV1 = smallest positive correlation in a correlation matrix (rM1); CMV2 = second-smallest positive correlation in a correlation matrix (rM2); n = sample size. *p < 0.05. **p < 0.01. ***p < 0.001 (two-tailed).

    results indicated that an increase in the original correlation (rU) decreased the probability that the correlation would become nonsignificant when adjusted for CMV. As Table 3 shows, the same applies to the sample size n; that is, the larger the sample size, the lower the probability that the correlation will become nonsignificant when adjusted for CMV. Also as expected, the coefficient for the size of the marker-variable correlation was positive and significant for both rM1 and rM2. Note that at each level of sensitivity, this correlation was a constant; therefore, it was not included in the sensitivity analysis. Finally, the results of the logit analysis indicated that once we took into account the uncorrected correlation, the estimate of rM, and the sample size, the research topics represented by the types of endogenous variables had little influence on the extent of CMV biases. In fact, of 48 estimates (i.e., six variables × eight CMV levels), only four were significant (i.e., 8.3%), a number that can be attributed largely to chance. Therefore, it is safe to conclude that CMV does not vary much with research topics in the domains considered.

    5. Path Analysis of Selected IS Studies

    This section focuses on our extension of Lindell and Whitney (2001) to examine, in addition to its effect on correlation coefficients, the extent to which CMV inflates path estimates and explained variance. In this more sophisticated analysis, we considered only those studies that attempt to explain behavioral intention (BI). The rationale is that the explanation of BI has been a dominant focus in behavioral IS research and, as our logit analysis showed, CMV does not seem to vary much with research topics, at least in the IS domain. First, we examine the impact of CMV on studies centering on TAM. Then, we identify IS research that attempts to extend TAM and examine how CMV affects such research. Finally, we apply a similar analysis to other (non-TAM) studies in which BI also served as the domain of investigation.

    5.1. TAM Research

    5.1.1. Study Selection Process. TAM is considered one of the most widely used theories in the IS discipline (Venkatesh and Davis 2000), and it therefore seems essential to check whether its cumulative findings are robust against method biases. Accordingly, we searched for studies that attempted to explain BI based on its two antecedents (i.e., PE and PU). It should also be mentioned that we selected only those articles that reported factor correlations after controlling for measurement errors. This extra criterion was imposed because, when measures are not perfectly reliable, a CMV-adjusted correlation based on Equation (1) would be largely underestimated (Lindell and Whitney 2001). Using factor correlations, however, we could treat each latent factor as a single-item variable without measurement error. In this way, we took a more rigorous approach to the examination of CMV.

    Consequently, three studies were identified for further analysis: Venkatesh (2000), Agarwal and Karahanna (2000), and Plouffe et al. (2001). First, Venkatesh (2000) examined TAM as part of a larger effort to identify the antecedents of PE. Although the variables in Venkatesh's (2000) study


    were measured at three different points in time, we chose to use only the correlations for the first period (n = 246). Second, Agarwal and Karahanna (2000) adopted TAM to understand how the TAM determinants mediate the influence of cognitive absorption on behavioral intention (n = 288). Finally, Plouffe et al. (2001) tested TAM to compare its efficacy with that of the model of perceived characteristics of innovating (PCI) (Moore and Benbasat 1991) (n = 172).

    5.1.2. CMV-Adjusted Correlations, Path Estimates, and Explained Variance. Before assessing the impact of CMV on path estimates and explained variance, we first attempted to replicate the results of the selected TAM studies using path analysis (Jöreskog

    Table 4    Reanalysis of TAM

                                              CMV level
                     Without CMV   With CMV           Sensitivity analysis
    rM                   0.00                  0.05    0.10    0.15    0.20    0.25    0.30

    A. Venkatesh (2000), rM = 0.08
    Factor correlations
      r(PE, PU)          0.33        0.27      0.29    0.26    0.21    0.16    0.11    0.04
      r(PE, BI)          0.34        0.28      0.31    0.27    0.22    0.18    0.12    0.06
      r(PU, BI)          0.52        0.48      0.49    0.47    0.44    0.40    0.36    0.31
    Structural paths
    Effect: BI
      (PE)               0.19        0.16      0.18    0.16    0.13    0.12    0.08    0.05
      (PU)               0.46        0.44      0.44    0.43    0.41    0.38    0.35    0.31
    Predictive power
      SMC (BI)           0.30        0.25      0.27    0.24    0.21    0.17    0.14    0.10

    B. Agarwal and Karahanna (2000), rM = 0.08
    Factor correlations
      r(PE, PU)          0.55        0.51      0.53    0.50    0.47    0.44    0.40    0.36
      r(PE, BI)          0.57        0.53      0.55    0.52    0.49    0.46    0.43    0.39
      r(PU, BI)          0.65        0.62      0.63    0.61    0.59    0.56    0.53    0.50
    Structural paths
    Effect: BI
      (PE)               0.30        0.29      0.30    0.29    0.27    0.26    0.26    0.24
      (PU)               0.48        0.47      0.47    0.47    0.46    0.44    0.43    0.41
    Predictive power
      SMC (BI)           0.49        0.45      0.46    0.43    0.41    0.37    0.34    0.30

    C. Plouffe et al. (2001), rM = 0.05
    Factor correlations
      r(PE, PU)          0.53        0.51      0.51    0.48    0.45    0.41    0.37    0.33
      r(PE, BI)          0.38        0.35      0.35    0.31    0.27    0.23    0.17    0.11
      r(PU, BI)          0.56        0.54      0.54    0.51    0.48    0.45    0.41    0.37
    Structural paths
    Effect: BI
      (PE)               0.12        0.10      0.10    0.08    0.07    0.05    0.02    0.01
      (PU)               0.50        0.49      0.49    0.47    0.45    0.43    0.40    0.37
    Predictive power
      SMC (BI)           0.32        0.30      0.30    0.27    0.23    0.20    0.17    0.14

    Notes. rM = shared correlation resulting from CMV; BI = behavioral intention; PU = perceived usefulness; PE = perceived ease of use; SMC = squared multiple correlation. *p < 0.05. **p < 0.01. ***p < 0.001 (two-tailed).

    and Sörbom 1996). In Table 4, the column "Without CMV" displays the original correlations as they were published in the three studies. We used these correlations as input to the path analysis. The results of the three path analyses are shown in the same column of Table 4. We found that our results based on the original correlations were highly comparable to those reported in the selected studies. Note that the original correlations and path estimates were constructed under the assumption that there were no CMV biases (i.e., CMV = 0).

    In this reanalysis of TAM research, we used only rM2 as a proxy for CMV because, by definition, rM2 is always at least as large as rM1 and thus is a more conservative estimate (Lindell and Whitney


    2001). We examined the correlation matrix of each identified study and selected rM2. These estimated CMV values are shown in the "With CMV" column of Table 4. Based on the CMV estimates, we used Equation (1) to calculate CMV-adjusted correlations and Equation (2) to assess their significance. Then, to estimate CMV-adjusted structural relationships and explained variance, we conducted path analysis on the corrected correlations. The results of this analysis are reported in Table 4.

    As for the correlation coefficients, the original and adjusted values were found to be generally comparable. As shown in Table 4, the differences between the original and adjusted coefficients are less than 0.10. Also, like the original correlations, the CMV-adjusted correlations remain statistically significant. The biggest difference between the original and adjusted path estimates was 0.03, suggesting that the inflation of path estimates resulting from method biases was essentially negligible. Moreover, no changes were observed in the statistical significance of the structural relationships between the two cases. The reduction in squared multiple correlations (SMC) ranged from 2% to 5%, suggesting that there is little difference in the explained variance in BI. In general, our results showed that the conclusions based on the original estimates would be comparable to those from the CMV-adjusted estimates.
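    The replication step can be sketched as follows: with standardized variables, the two path coefficients for BI regressed on PE and PU solve the 2 × 2 normal equations, and SMC is the inner product of the criterion correlations with the paths. Feeding in the uncorrected Venkatesh (2000) correlations from Table 4 reproduces that study's "Without CMV" panel (a minimal sketch of the two-predictor case; the analysis reported in the paper used full path-analysis software).

    ```python
    def two_predictor_paths(r_x1x2: float, r_x1y: float, r_x2y: float):
        """Standardized path estimates and SMC for y regressed on x1 and x2,
        solved from the 2x2 normal equations by hand."""
        det = 1.0 - r_x1x2 ** 2
        b1 = (r_x1y - r_x1x2 * r_x2y) / det   # path x1 -> y
        b2 = (r_x2y - r_x1x2 * r_x1y) / det   # path x2 -> y
        smc = r_x1y * b1 + r_x2y * b2         # explained variance in y
        return b1, b2, smc

    # Venkatesh (2000), uncorrected: r(PE,PU)=0.33, r(PE,BI)=0.34, r(PU,BI)=0.52
    b_pe, b_pu, smc = two_predictor_paths(0.33, 0.34, 0.52)
    print(round(b_pe, 2), round(b_pu, 2), round(smc, 2))  # 0.19 0.46 0.3
    ```

    Rerunning the same function on a column of CMV-adjusted correlations yields the corresponding adjusted paths and SMC, which is how the remaining columns of Table 4 were produced.
    
    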

    5.1.3. Sensitivity Analysis. We also conducted a sensitivity analysis, and the results are shown in Table 4. As far as correlation coefficients are concerned, we found that all of the CMV-adjusted correlations remained significant when rM ≤ 0.20. However, some of the correlations (28%) became nonsignificant when rM ≥ 0.25. To formally compare the original and adjusted correlations, we performed chi-square difference tests. The results indicated that when rM ≤ 0.10, no significant differences existed. Meanwhile, in the case of rM = 0.15, three of the nine adjusted correlations (33%) were found to be smaller than the original correlations. Moreover, in the case of rM ≥ 0.20, all of the CMV-adjusted correlations were found to be significantly different from the original correlations. These results generally showed that correlation estimates would not be reliable under a condition in which rM ≥ 0.20.

    In contrast, the structural path estimates obtained under the assumption that rM = 0 were found to be robust against a reasonable degree of method bias. For example, all of the structural relationships that were originally significant remained significant when rM ≤ 0.20. Even in the extreme condition in which rM = 0.30, only one of the five originally significant paths became nonsignificant (20%). Consistent with these observations, the results of the chi-square difference tests revealed that no differences existed between the original and corrected path estimates, except for two cases in Venkatesh (2000) at rM = 0.30.

    As for explained variance, we found that the differences in SMC between the original and adjusted values were not substantial if rM ≤ 0.15 (i.e., differences ≤ 0.09). However, the differences approached 0.20 at the rM level of 0.30. Put simply, in this extreme case, more than half of the variance originally explained could be attributable to method biases. That is, as with correlations, explained variance could be more sensitive to method biases than structural path estimates are.

    In summary, the results of the sensitivity analysis showed that even under a considerably high level of CMV (e.g., rM = 0.15), the conclusions based on the assumption that CMV = 0 held up reasonably well. Considering that the estimate of rM will fall below 0.15 in most cases, method biases did not seem to contaminate research findings in the TAM domain.

    5.2. Extended TAM Research

    5.2.1. Study Selection Process. IS researchers have attempted to extend the original TAM framework to offer a better explanation of individuals' IT adoption and use. Thus, we attempted to examine the impact of CMV on the results of such extended TAM research. This investigation into extended TAM research is believed to be important because it may show how the marker-variable technique can be extended to test a causal model with mediating variables. We first searched for published articles that drew on extended TAM and reported factor correlations while controlling for measurement errors. As a result, we identified one study, Gefen et al. (2003), that incorporated an additional predictor of BI, trust (TR), besides PE and PU. Designed to explain individuals' online shopping behavior, the model proposed by Gefen et al. (2003) posits that PE and TR will influence PU, and the three factors will jointly determine BI. Based on data collected from 213 college students, Gefen et al. (2003) showed that their model reasonably represented consumers' intentions toward transactions with an online vendor.7

7 Before estimating the structural model, Gefen et al. (2003) rigorously evaluated the reliability, convergent validity, discriminant validity, and unidimensionality of the scale. Whereas reliability, convergent validity, and discriminant validity are often examined, a unidimensionality test has rarely been a part of the scale development process in IS research. Yet, it is important to note that such a test can be especially useful for identifying some forms of method biases, such as negative wording of items and common scale formats and anchors. The topic of unidimensionality is beyond the scope of this paper; thus, we will not elaborate on this issue here. Instead, we refer readers to Gefen (2003) and Segars (1997) for details.


Malhotra, Kim, and Patil: Common Method Variance in IS Research
Management Science 52(12), pp. 1865–1883, 2006 INFORMS

Table 5: Reanalysis of Extended TAM (Gefen et al. 2003)

                       Without   With              CMV level (sensitivity analysis)
                       CMV       CMV
rM                     0.00      0.13      0.05    0.10    0.15    0.20    0.25    0.30

Factor correlations
r(PE, TR)              0.56      0.49      0.54    0.51    0.48    0.45    0.41    0.37
r(PE, PU)              0.70      0.66      0.68    0.67    0.65    0.63    0.60    0.57
r(PE, BI)              0.67      0.62      0.65    0.63    0.61    0.59    0.56    0.53
r(TR, PU)              0.57      0.51      0.55    0.52    0.49    0.46    0.43    0.39
r(TR, BI)              0.62      0.56      0.60    0.58    0.55    0.53    0.49    0.46
r(PU, BI)              0.72      0.68      0.71    0.69    0.67    0.65    0.63    0.60

Structural paths
Effect on PU:
  PE                   0.55      0.54      0.54    0.55    0.54    0.53    0.51    0.49
  TR                   0.26      0.25      0.26    0.24    0.23    0.22    0.22    0.21
Effect on BI:
  PE                   0.25      0.24      0.24    0.23    0.23    0.23    0.23    0.23
  TR                   0.25      0.24      0.24    0.25    0.24    0.25    0.23    0.23
  PU                   0.40      0.40      0.42    0.40    0.40    0.39    0.39    0.38

Predictive power
SMC (PU)               0.54      0.48      0.51    0.49    0.46    0.44    0.40    0.36
SMC (BI)               0.61      0.55      0.59    0.57    0.54    0.52    0.49    0.45

Notes. rM = shared correlation resulting from CMV; BI = behavioral intention; PU = perceived usefulness; TR = trust; PE = perceived ease of use; SMC = squared multiple correlation. *p < 0.05. **p < 0.01. ***p < 0.001 (two-tailed).

5.2.2. CMV-Adjusted Correlations, Path Estimates, and Explained Variance. The original correlations reported in Gefen et al. (2003) are shown in the "Without CMV" column of Table 5. Based on the original correlations, we performed a path analysis of the extended TAM. The results of the path analysis, including path estimates and explained variance, are shown in the same column of Table 5. As expected, under the assumption that rM = 0, the results of path analysis were highly consistent with those in the original study.

To assess the amount of CMV, we inspected the correlation matrix and found that the second-smallest positive correlation (i.e., rM2) was 0.13. Based on the rM value of 0.13, CMV-adjusted correlations and their significance were calculated using Equations (1) and (2), respectively. In addition, CMV-adjusted path estimates and explained variance were derived from a path analysis. These CMV-adjusted values are shown in the "With CMV" column of Table 5. As Table 5 shows, the CMV-adjusted solutions deviate little from those made under the assumption that rM = 0. In particular, the CMV-adjusted correlations were comparable with the original correlations in terms of their magnitude and statistical significance. Furthermore, the path estimates between the original and CMV-adjusted values were almost identical; in fact, the largest gap between the two was a negligible 0.01 (p = ns). Naturally, as with the original estimates, all the CMV-adjusted structural relationships remained statistically significant at a level of 0.001. Finally, the results showed that after controlling for CMV effects, the explained variance in PU and BI decreased, but the drop was not substantial. Thus, our results indicated that the inferences drawn from the study by Gefen et al. (2003), which assumed that rM = 0, would be reasonably acceptable.
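As a concrete illustration, the "With CMV" column of Table 5 can be reproduced by applying the same adjustment to every off-diagonal entry of the factor-correlation matrix. The sketch below uses the original correlations reported in Table 5; the helper name is ours, not the paper's notation.

```python
# Sketch: reproduce the "With CMV" column of Table 5 by adjusting every
# off-diagonal entry of the factor-correlation matrix for r_M = 0.13.
# The adjustment is (r - r_M) / (1 - r_M); the diagonal stays at 1.
import numpy as np

def adjust_matrix(R, r_m):
    """Return a copy of R with CMV partialed out of the off-diagonal."""
    R_adj = (R - r_m) / (1.0 - r_m)  # elementwise; creates a new array
    np.fill_diagonal(R_adj, 1.0)
    return R_adj

# Factor order: PE, TR, PU, BI (original correlations from Table 5)
R = np.array([
    [1.00, 0.56, 0.70, 0.67],
    [0.56, 1.00, 0.57, 0.62],
    [0.70, 0.57, 1.00, 0.72],
    [0.67, 0.62, 0.72, 1.00],
])

print(adjust_matrix(R, 0.13).round(2))  # matches the "With CMV" column
```

Rounded to two decimals, the adjusted entries (0.49, 0.66, 0.62, 0.51, 0.56, 0.68) agree with the "With CMV" column of Table 5.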

5.2.3. Sensitivity Analysis. We conducted a sensitivity analysis of the study by Gefen et al. (2003). The corrected correlations and their significance with respect to potential CMV effects are listed in Table 5. In addition, the corresponding path estimates, their significance, and explained variance are reported in the same table.

Interestingly, we found that, as with the original correlations, the CMV-adjusted correlations remained significant at the level of 0.001 for the entire range of rM considered in our research. These results could be attributable, to some extent, to the strength of the original correlations (i.e., r ≥ 0.56). As Table 5 shows, there was a considerable difference between the original and corrected coefficients (e.g., Δr = 0.16); in fact, under the condition rM ≥ 0.20, the observed difference was statistically significant. Despite that, the CMV-adjusted correlations remained highly significant even in the extreme case that rM = 0.30.

As for structural relationships, all five paths were found to be significant at a level of 0.001 for the whole range of rM. It is worthwhile to note that even


for rM = 0.30, the differences between the original and CMV-adjusted path estimates were less than 0.06 (p = ns). Thus, it seems reasonable to argue that the path estimates and their significance in the study by Gefen et al. (2003) would be robust against even very high levels of potential CMV biases.

Finally, as for the explained variance, the differences between the original and adjusted values were within 0.08 when rM ≤ 0.15, but they grew to as much as 0.18 as rM increased to 0.30. These results indicated that although CMV could affect the interpretation of explained variance in extreme conditions, its effect on explained variance would be modest under realistic circumstances.

Overall, our results provided strong support that the findings of the study by Gefen et al. (2003) retain their robustness when method biases are within a reasonable range (i.e., rM ≤ 0.15).

5.3. Other BI Studies

Similar to the reanalysis conducted for TAM and extended TAM, we examined the impact of CMV on correlations, structural relationships, and explained variance in other studies that were based on intentional models designed to use beliefs, judgments, or attitudes to explain behavioral intentions. We identified six BI studies (i.e., Venkatesh et al. 2003, Venkatesh and Davis 2000, Szajna 1996, Koufaris 2002, Chwelos et al. 2001, Bhattacherjee 2001) and examined them using the same analyses conducted for the TAM-based studies. Although the details are not reported because of space constraints, the results consistently demonstrated that the findings based on intentional models were largely robust against CMV. Taken together with the results based on the four TAM-related studies, these findings provide further support that CMV has not substantively inflated factor relationships in BI studies. We previously found from logit analysis that CMV effects do not vary with dependent variables. Therefore, the findings related to the 10 BI studies here can be cautiously expected to hold for other domains of survey-based IS research.

6. Discussion and Conclusion

The major objective of this research is to understand the effects of CMV on the inferences drawn from IS studies. In an attempt to assess CMV, we initially conducted an empirical study (n = 227) centering on the areas of technology adoption and information privacy. Our findings indicate that method biases are not substantial in the research domains considered. From this study, we also found that the marker-variable technique is as reliable as CFA-based MTMM in terms of assessing and controlling for CMV. Then, we employed the marker-variable technique to infer CMV from prior research in a post hoc manner. Our findings based on 216 correlations in 19 studies indicate that the inflation in correlation resulting from CMV is on the order of 0.10 or less and that most of the significant correlations examined here remain significant even after controlling for CMV. Finally, our reanalysis of selected studies in the IS domain indicates that, contrary to the concerns of some skeptics, CMV-adjusted structural relationships generally remain significant and, moreover, are not statistically differentiable from uncorrected estimates.8 In summary, this comprehensive and systematic analysis offers initial evidence that (1) the marker-variable technique can serve as a convenient, yet effective, tool for accounting for CMV, and (2) method biases in the IS domain are not as serious as those found in other disciplines.

6.1. Theoretical and Methodological Contributions

Research in the marketing discipline is known to be less susceptible to method biases than research in psychology, sociology, and education (Cote and Buckley 1987). Because most IS studies focus on concrete targets (e.g., systems or information), as is also the case with the studies in the marketing discipline (e.g., products or services), it is possible that method biases in the IS area are relatively small. Nevertheless, a systematic investigation into method biases specific to IS research was lacking. Because of this lacuna, IS researchers have had little knowledge of the extent of this potential problem. To shed light on this important issue, we have conducted an empirical study involving 227 respondents and have examined the extent of CMV and its effects on factor relationships. Our findings demonstrate that the inflation in correlation because of CMV is only 0.06 for TAM and 0.08 for CFIP, implying that CMV is minimal in these often-examined research areas. Furthermore, we systematically examined the amount of CMV in previously published IS studies. Our findings suggest that when a survey questionnaire is used to collect data, the correlation between two theoretically unrelated variables typically will not exceed 0.10. Overall, our findings suggest that in IS research, CMV resulting from a single-method procedure is less serious than it is in other areas.

In addition to examining CMV per se, this study also analyzed the influence of CMV on the conclusions drawn from past research. Based on a meta-analysis of 11,710 correlations, Crampton and Wagner (1994) concluded that although self-report methods sometimes resulted in the inflation of correlations

8 It is important to examine whether or not the underlying correlations remain significant, but it is probably more critical to know how much CMV affects a parameter estimate itself (an effect-size orientation). The authors thank the associate editor for bringing this to our attention.


in microorganizational research, this type of inflation would be "more the exception than the rule" (p. 72). Our research is consistent with their study in also indicating that, whereas CMV produces overestimated correlations, its biases are typically less problematic. Therefore, if the IS community as a whole dismisses the validity of past survey research solely out of fear of method biases, this extreme suspicion may result in costly and unproductive outcomes for the discipline.

Originally, Lindell and Whitney (2001) introduced the marker-variable technique as a method to estimate CMV and its effects on the correlations between independent and dependent variables. One of the major contributions of our study lies in its attempt to extend the application of the marker-variable technique to test a causal model. In essence, we have taken a two-step approach in which factor correlations are first adjusted for method biases, and then a structural model is tested based on the corrected correlations. This type of two-step estimation technique has been widely used by researchers in a variety of different forms (Jöreskog 1998, Jonsson 1998, Kim and Malhotra 2005, Lance 1988, Joshi and Stump 1999, Bottomley and Holden 2001). As such, our extension of the marker-variable technique is deemed to be on solid methodological ground. Considering that investigators' conclusions are based mostly on tests of structural relationships, our new application of the marker-variable technique to the examination of causal paths seems particularly important and relevant.
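The two-step procedure can be illustrated with the Table 5 correlations: the corrected matrix is computed first, and standardized path estimates for the recursive model (PE, TR → PU; PE, TR, PU → BI) are then obtained by solving each equation's normal equations. This OLS-on-correlations sketch is only an approximation of the SEM estimation used in the paper, so the numbers agree with Table 5 only to rounding.

```python
# Sketch of the two-step approach: (1) adjust the correlation matrix for an
# assumed CMV level, (2) derive standardized path estimates by solving each
# structural equation's normal equations, R_xx @ beta = r_xy.
import numpy as np

def adjust_matrix(R, r_m):
    """Partial an assumed method correlation out of the off-diagonal of R."""
    R_adj = (R - r_m) / (1.0 - r_m)
    np.fill_diagonal(R_adj, 1.0)
    return R_adj

# Factor order: PE, TR, PU, BI (original correlations from Table 5)
R = np.array([
    [1.00, 0.56, 0.70, 0.67],
    [0.56, 1.00, 0.57, 0.62],
    [0.70, 0.57, 1.00, 0.72],
    [0.67, 0.62, 0.72, 1.00],
])
R_adj = adjust_matrix(R, 0.13)  # step 1: CMV-adjusted correlations

# Step 2a: PU regressed on PE and TR
beta_pu = np.linalg.solve(R_adj[:2, :2], R_adj[:2, 2])
# Step 2b: BI regressed on PE, TR, and PU
beta_bi = np.linalg.solve(R_adj[:3, :3], R_adj[:3, 3])

# Squared multiple correlations (explained variance) for each equation
smc_pu = float(beta_pu @ R_adj[:2, 2])
smc_bi = float(beta_bi @ R_adj[:3, 3])
```

With rM = 0.13, this sketch yields path estimates of roughly 0.54 and 0.24 for PU and 0.24, 0.24, and 0.40 for BI, with SMC values near 0.47 and 0.56, close to the "With CMV" column of Table 5.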

The IS community has devoted a great amount of effort toward building the body of knowledge centering on the TAM framework (Davis et al. 1989, Venkatesh and Davis 2000, Venkatesh et al. 2003). However, the validity of such cumulative knowledge has been challenged because of a concern about method biases. Our research is important because it investigates this healthy suspicion and reveals that the inferences gained from the TAM studies would hold even after taking CMV into account. In fact, our findings were similar even for other studies that were based on intentional models designed to use beliefs, judgments, or attitudes to explain behavioral intentions (Venkatesh et al. 2003, Venkatesh and Davis 2000, Szajna 1996, Koufaris 2002, Chwelos et al. 2001, Bhattacherjee 2001). Considering that the results of the logit analysis did not indicate any topic-specific effects within the IS area, we can cautiously extend the results on the robustness of the structural model to the other studies we have considered (i.e., those involving satisfaction, expectation, self-efficacy, perceived usefulness, and perceived ease of use). Thus, more credibility can be given to our recurring argument that where factor relationships are concerned, method biases pose a less serious problem in IS research than in other disciplines.

The marker-variable technique has recently been proposed as a convenient alternative to a more complicated multiple-method procedure. Although the logic of the marker-variable technique makes intuitive sense, little was known about how the relatively new single-method technique compares with the well-established multiple-method analysis. This study contributes to the IS literature by empirically assessing the efficacy of the marker-variable technique, particularly in comparison with CFA-based MTMM. Although not reported in this paper because of space constraints, we also compared factor correlations estimated with and without accounting for method effects within the context of CFA-based MTMM. From the results, we found that the differences in the correlations were on average 0.03 for TAM and 0.10 for CFIP. Interestingly, these findings are quite consistent with those of the marker-variable technique, which shows that the inflation in correlation ranges from 0.06 (for TAM) to 0.08 (for CFIP). Thus, the findings presented in this paper lead us to believe that in the IS domain, an inflated correlation caused by CMV would be on the order of 0.10 or less. Moreover, it is important to note that even though method factors are found to affect the indicators unequally, the marker-variable technique produced results highly comparable to those of the CFA-based MTMM. This finding implies that the marker-variable technique is relatively robust against deviations from its major assumption. It provides additional confidence for the inferences acquired from the reanalysis of past IS literature. At the same time, it encourages those who may consider using the marker-variable technique as an alternative to a more costly multiple-method technique.

6.2. Limitations of This Study

Caution is needed in interpreting the results because of some limitations of this study. One of the limitations relates to the small number of articles reexamined in our study. As mentioned earlier, our reanalysis included 19 studies for correlations and 10 studies for structural relationships in selected journals. Thus, until the effects of CMV on past research are thoroughly understood, care should be taken in generalizing our findings.

The marker-variable technique is based on an implicit assumption that CMV always inflates relationships between variables. However, theoretical and empirical evidence suggests that CMV not only inflates but also deflates relationships between observed items (Cote and Buckley 1988). Nevertheless, it is important to understand that once measurement errors are taken into account, relationships


between latent factors are inflated because of common method effects (Williams and Brown 1994). Note that the focus of this study is on factor relationships, as opposed to item relationships, and our analysis of structural paths was based on factor correlations after taking measurement errors into account. Thus, another assumption of the marker-variable technique (that relationships between variables are inflated because of CMV) does not seem to pose a problem in our study.

Because a marker variable does not theoretically relate to the research variables, its relationship with the research variables will not be sensitive to such common rater effects as implicit theories or social desirability (Podsakoff et al. 2003). Accordingly, the inability to detect (some) common rater effects can be considered a limitation of the marker-variable technique. Nevertheless, recall that at least in our study, the results based on the marker-variable technique and CFA-based MTMM were quite comparable. Perhaps the observed similarity implies that CMV in the IS area comes mainly from other sources of method biases, such as item context effects and measurement context effects. Alternatively, it may be because CFA-based MTMM itself is not effective in detecting method biases. For example, within the context of CFA-based MTMM, a method factor is assumed to represent a variety of method effects as a whole, rather than controlling for such effects individually. Thus, CFA-based MTMM is also imperfect in identifying CMV, especially when multiple sources of method biases differently affect the survey response process. An in-depth investigation into these potential causes is left for future research. Yet, we have shown that the marker-variable technique is one of the most practical tools available for assessing and controlling for CMV in the IS discipline.

Finally, the marker-variable technique is based on an assumption that individual items are affected equally by a common method (Lindell and Whitney 2001). Although this assumption appears somewhat strict, this type of strict condition is not uncommon in other techniques. For example, SEM, which is one of the most popular tools in IS research, requires multivariate normality of observed variables, a condition that is rarely met in practice. Thus, a key question is whether the assumption approximates reality, if not exactly replicating it, and whether the technique produces reliable estimates under some deviations from the assumption. In this regard, this study empirically demonstrates that although the loadings from the method factors to the items are unequal, the results of the marker-variable technique are similar to those of the multiple-method procedure. Thus, although more research should be conducted to examine the validity of the marker-variable technique, we generally subscribe to the view of Lindell and Whitney (2001) that this new tool is realistic as well as practical when compared with other popular CMV analysis techniques.

6.3. Conclusion

Despite the potential risks associated with method biases, IS researchers have rarely attempted to assess or control for such biases in their studies. As a result, the IS community has remained largely uncertain of the extent to which CMV contaminates the body of knowledge that it has painstakingly built over several decades. To address this state of uncertainty, we systematically examined the effect of CMV on factor relationships. Our findings suggest that, contrary to widespread concerns, method biases in the IS area are not as serious as those observed in other disciplines. Although our initial findings are reassuring, more work certainly needs to be done to establish the credibility of our conclusions. We hope that our research will be helpful for such endeavors.

Acknowledgments

The authors are grateful to Ramayya Krishnan (the department editor) and an associate editor for their valuable guidance throughout the review process. Insightful and constructive comments provided by the two anonymous reviewers are deeply appreciated. The authors also thank Barry Gerhart at the University of Wisconsin and Jeff Stratman at Georgia Institute of Technology for their helpful suggestions that improved the earlier version of this paper. Special thanks to J. Stanford Fisher for his editorial contributions to the presentation of this paper.

References

Agarwal, R., E. Karahanna. 2000. Time flies when you're having fun: Cognitive absorption and beliefs about information technology usage. MIS Quart. 24(4) 665–682.

Avolio, B. J., B. M. Bass. 1991. Identifying common methods variance with data collected from a single source: An unresolved sticky issue. J. Management 17(3) 571–587.

Bagozzi, R. P. 1980. Causal Models in Marketing. John Wiley and Sons, New York.

Bagozzi, R. P., Y. Yi. 1988. On the evaluation of structural equation models. J. Acad. Marketing Sci. 16(1) 74–94.

Bhattacherjee, A. 2001. Understanding information systems continuance: An expectation-confirmation model. MIS Quart. 25(3) 351–370.

Bollen, K. A. 1989. Structural Equations with Latent Variables. John Wiley and Sons, New York.

Bottomley, P. A., S. J. S. Holden. 2001. Do we really know how consumers evaluate brand extensions? Empirical generalizations based on secondary analysis of eight studies. J. Marketing Res. 38(4) 494–500.

Buckley, M. R., J. A. Cote, S. M. Comstock. 1990. Measurement errors in behavioral sciences: The case of personality/attitude research. Educational Psych. Measurement 50(3) 447–474.

Campbell, D. T., D. Fiske. 1959. Convergent and discriminant validation by the multitrait-multimethod matrix. Psych. Bull. 56(2) 81–105.

Chwelos, P., I. Benbasat, A. Dexter. 2001. Research report: Empirical test of an EDI adoption model. Inform. Systems Res. 12(3) 304–321.

Cohen, J. 1977. Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Lawrence Erlbaum Associates, Hillsdale, NJ.

Cote, J. A., M. R. Buckley. 1987. Estimating trait, method, and error variance: Generalizing across 70 construct validation studies. J. Marketing Res. 24(3) 315–318.

Cote, J. A., M. R. Buckley. 1988. Measurement error and theory testing in consumer research: An illustration of the importance of construct validation. J. Consumer Res. 14(4) 579–582.

Crampton, S. M., J. A. Wagner III. 1994. Percept-percept inflation in microorganizational research: An investigation of prevalence and effect. J. Appl. Psych. 79(1) 67–76.

Davis, F. D., R. P. Bagozzi, P. R. Warshaw. 1989. User acceptance of computer technology: A comparison of two theoretical models. Management Sci. 35(8) 982–1003.

Feldman, J. M