
Measuring User Participation, User Involvement, and User Attitude

By: Henri Barki
École des Hautes Études Commerciales
5255 Decelles
Montréal, Quebec H3T 1V6
Canada

Jon Hartwick
Faculty of Management
McGill University
1001 Sherbrooke Street West
Montréal, Quebec H3A 1G5
Canada

Abstract

Defining user participation as the activities performed by users during systems development, user involvement as the importance and personal relevance of a system to its user, and user attitude as the affective evaluation of a system by the user, this study aims to: (1) develop separate measures of user participation, user involvement, and user attitude, (2) identify key dimensions of each construct, and (3) investigate the relationships among them. Responses from users in organizations developing new information systems were used to create an overall scale measuring user participation (along with three subscales reflecting the dimensions of responsibility, user-IS relationship, and hands-on activities), an overall scale measuring user involvement (along with two subscales reflecting the dimensions of importance and personal relevance), and a scale measuring user attitude. Analysis of the data provides evidence for the reliability and validity of the three constructs and their dimensions. User participation has long been considered a key variable in the successful development of information systems. However, past research has failed to clearly demonstrate its benefits. The measures developed in this study provide a useful starting point for deciphering the precise nature of the relationship among user participation, involvement, and attitude during systems implementation.

Keywords: IS implementation, user involvement, user participation, measurement

ISRL Categories: AI04, FD, FD02, FD03

Introduction

Beginning in the 1960s, the practitioner and researcher communities considered user participation in the development of information systems (IS) applications to be critical to IS implementation. Since that time, researchers have studied user participation, convinced of its influence on such key criteria as systems quality, user satisfaction, and systems use (Ives and Olson, 1984). This work continues today. Indeed, a complete issue of Communications of the ACM (June 1993) was recently devoted to a review of current theories and practices in participative systems design.

However, after reviewing past research on user participation, Ives and Olson (1984) conclude, "The benefits of user involvement have not been strongly demonstrated" (p. 600). Meta-analytical reviews suggest a similar conclusion, with average correlations of less than .30 among user participation and outcome criteria (Pettingell, et al., 1988; Straub and Trower, 1988). Two reasons have been provided to explain such results: (1) methodological weaknesses of past studies, and (2) the presence of intervening or moderating variables among user participation and important outcome criteria (Ives and Olson, 1984; Straub and Trower, 1988). Future research into the effects of user participation needs to address both of these issues.

An important step in addressing these issues is the development of a better measure of the user participation construct (Alavi and Joachimsthaler, 1992). In IS, the terms "user participation" and "user involvement" have frequently been used to mean the same thing. However, Barki and Hartwick (1989) claim the concepts of user participation and user involvement are distinct, and thus need to be defined separately. In IS, the term "user involvement" has generally referred to a series of activities or behaviors performed by potential users or their representatives during the systems development process. In contrast, the term "involvement" has been used in other fields to describe a subjective psychological state reflecting the importance and personal relevance of an issue (e.g., Sherif, et al., 1965), of an advertisement or product (e.g., Krugman, 1967), or of an individual's job (e.g., Lawler and Hall, 1970). Thus, in other fields, involvement reflects an individual's beliefs or feelings concerning some object. To align work in IS with that of other disciplines, Barki and Hartwick (1989) make two recommendations: (1) to use the term "user participation" instead of "user involvement" when referring to the assignments, activities, and behaviors that users or their representatives perform during the systems development process, and (2) to use the term "user involvement" to refer to a subjective psychological state reflecting the importance and personal relevance that a user attaches to a given system.

Some recent studies have looked at the separate effects of user participation and involvement, providing initial empirical support for differentiating the two constructs (Jarvenpaa and Ives, 1991; Kappelman and McLean, 1991). In furthering this work, this study has three major goals: (1) to develop separate measures of user participation, user involvement, and user attitude that can be used in future research, (2) to identify key dimensions of each construct, and (3) to investigate the relationships among them.

User Participation

In the organizational behavior literature, there has been little consensus concerning a definition of participation (Locke and Schweiger, 1979; Vroom and Jago, 1988). Vroom and Jago (1988) note that, in everyday terms, participation refers to "taking part." They go on to suggest that, typically, one participates when one has contributed to something. Such participation can take a variety of forms (Locke and Schweiger, 1979; Vroom and Jago, 1988): direct (participation through personal action) or indirect (participation through representation by others); formal (using formal groups, teams, meetings, and mechanisms) or informal (through informal relationships, discussions, and tasks); performed alone (activities done by oneself) or shared (activities performed with others).¹ In addition, participation can also vary in scope, occurring during one or several stages of the problem-solving process (problem identification, evaluation, solution generation, choice, and implementation). Vroom and Jago (1988) also distinguish between actual and perceived participation. Research in participative decision making (PDM) has shown that the motivational effects of participation (for example, its effects on decision satisfaction and implementation) are more closely related to perceived participation. On the other hand, decision quality has been found to be more closely related to actual participation.

Vroom and Jago's (1988) conceptualization of participation implies that, in ISD, one participates when one takes part in, or contributes to, the system being developed. Moreover, to be consistent with research in PDM, participation in ISD needs to include activities that are both direct and indirect, formal and informal, performed alone and with others, and that occur throughout the different stages of ISD. A similar point has been made by Ives and Olson (1984; see also Olson and Ives, 1981), who view participation as a concept that includes a wide variety of behaviors, activities, and responsibilities, carried out by both users and systems developers.

Ives and Olson (1984) provide a review and critique of user participation measures employed in research prior to 1984. In this review, they note one important methodological weakness of past participation measures. Many such measures ask respondents for their general opinions concerning the level or extent of their participation. Unfortunately, general opinions can be biased. As a result, measures based on such opinions are less likely to be accurate or reliable (Cote and Buckley, 1987). On the other hand, specific behaviors, activities, and assignments denote events and facts that are readily observable. Such events tend to be more easily and objectively recognized, remembered, and reported (Cote and Buckley, 1987). Thus, measures assessing a wide variety of specific behaviors, activities, and assignments should be more accurate, reliable, and valid than measures assessing general opinions.

¹ While Locke and Schweiger (1979) exclude activities performed alone from their definition of participation, Vroom and Jago (1988) consider such activities to represent the highest form of participation.

Since the Ives and Olson (1984) review, four studies using multiple-item behavioral measures of user participation have been published. Franz and Robey (1986) assessed participation by asking users to evaluate the extent to which they, rather than the systems analyst, performed six design-related activities and seven implementation-related activities. Both activities, such as clarifying information needs and input/output requirements, and responsibilities, such as directing the planning and design phases of the project, were included. While Franz and Robey (1986) report high reliabilities for their scales, assessments of validity are not reported.

Baroudi, et al. (1986) identify 47 development-related activities: 20 general activities, and 27 activities that are from one of three stages of the development life cycle—systems definition, systems design, and implementation. These activities were used to construct a comprehensive participation scale, which was then used to test alternative theories linking participation to user satisfaction and system use. However, scale items were not reported, nor were any reliability or validity results provided.

Robey, et al. (1989) measured participation during project meetings using two different methods. A three-item scale asked users to assess the amount of time they spent preparing for, the extent to which their opinions were consulted in, and the number of questions they asked during project meetings. Strong evidence for the reliability of the scale is reported. In addition, a count of users' speech acts during meetings was also taken and was used as evidence for the validity of the participation scale.

Doll and Torkzadeh (1989; 1990) propose an eight-item measure of end-user software involvement that asks users to assess the amount of time they spent in each of eight development activities. Examples of such activities include project initiation, determining systems objectives and user information needs, and developing input/output forms. The authors provide impressive evidence of reliability and validity for their measure.

These four studies provide the best of what is available regarding the conceptualization and measurement of user participation. However, only two of the studies provide complete scale information, including assessments of reliability and validity. Unfortunately, in both cases, conceptualization of the participation construct is limited. Robey, et al. (1989) measured participation by observing behaviors that occur only during project team meetings. Thus, the many participatory behaviors that occur outside such meetings were excluded. The measure proposed by Doll and Torkzadeh (1990), while broader, still does not take into account a number of important participatory tasks that have been referred to in the IS literature. For example, Ives and Olson (1984) mention a number of tasks and assignments that seem to characterize a "responsibility" dimension of the user participation construct. Examples include users being responsible for estimating costs and benefits, users having sign-off responsibility at each stage of the development process, users paying for the system, users being responsible for the success of the new system, and a user being the leader of the project team. Ives and Olson (1984) also include participative activities of a passive nature in their conceptualization of the construct. Examples of such participative activities include users being informed by the IS staff of a project's progress and users being presented with a system walk-through. In these instances, users typically receive information, participating through observation and listening behaviors. While the participatory content of such actions may be low, they nonetheless form a type of participation. Not only do the users share information concerning the system or its development, they also spend time on ISD; a user not provided with such information does neither.

In developing a general measure of user participation, all its forms need to be considered. Our review indicates that no such measure currently exists. Thus, there is a need for a reliable and valid measure of participation that reflects existing conceptualizations in PDM and is based upon a wide range of user assignments, activities, and behaviors. Such a measure should include participative activities that are both formal and informal, direct and indirect, active and passive, performed alone or with others, and that occur both overall in, and at specific stages of, the systems development process.


User Involvement and User Attitude

Following a review of the construct of involvement in psychology, organizational behavior, and marketing, Barki and Hartwick (1989) conclude that these disciplines have converged to a definition of involvement "...as a subjective psychological state, reflecting the importance and personal relevance of an object or event" (p. 61). In a systems development context, this suggests that the term user involvement should be used to refer to a psychological state reflecting the importance and personal relevance of a new system to the user.² Barki and Hartwick (1989) also recommend that a context-free measure of involvement, developed by Zaichkowsky (1985), be used as a starting point for the development of a measure of user involvement. Following these recommendations, this paper develops a measure of user involvement reflecting the two dimensions of importance and personal relevance.

The construct of involvement, defined as a psychological state, needs to be differentiated from other psychological states, particularly attitude. While many different definitions of attitude have been proposed over the years, in psychology, attitude is now generally conceptualized as an affective or evaluative judgment of some person, object, or event (Fishbein and Ajzen, 1975; Osgood, et al., 1957; Thurstone, 1931; Zanna and Rempel, 1988). Fishbein and Ajzen (1975) suggest that attitude be measured using a procedure that locates the person's position on a bipolar affective or evaluative dimension (e.g., bad/good); for example, semantic differential scales using bi-polar evaluative endpoints would be appropriate.

² The concept of involvement has had significant influence in the fields of social psychology, consumer behavior, and organizational behavior. A related construct, perceived usefulness, has been proposed in IS by Davis (1989). According to Davis (1989), perceived usefulness is defined as "the degree to which a person believes that using a particular system would enhance his or her job performance" (p. 320). It is measured with items assessing perceived usefulness, as well as perceived increases in productivity, effectiveness, and performance. As such, Davis' construct of perceived usefulness is likely to be highly related to our construct of attitude (which includes an item assessing usefulness). Usefulness is also likely to be related to involvement. The two are, however, distinct constructs. A system may be seen to be useful, but not necessarily important or personally relevant.

In a systems development context, this suggests that the term user attitude should be used to refer to a psychological state reflecting the affective or evaluative feelings concerning a new system. On the other hand, user involvement refers to a belief—the extent to which a user believes that a new system is both important and personally relevant. Thus, in measuring user involvement, the evaluative (i.e., attitude) component needs to be excluded. And, since the involvement measure proposed by Zaichkowsky (1985) includes items pertaining not only to importance and personal relevance, but also to evaluation, researchers should exercise care in adapting the instrument to IS development. It is also important to note that, while user involvement and user attitude represent two distinct psychological constructs, they are likely to be related. Systems deemed to be both important and personally relevant are likely to engender positive affective or evaluative feelings.

Method

Generation of participation questions

After consulting the works of Olson and Ives (1980; 1981), Baroudi, et al. (1986), and Franz and Robey (1986), 59 items depicting specific behaviors, activities, and assignments that users may be engaged in during the systems development process were generated (these items are presented in the Appendix). The 59 items form a comprehensive set, including activities that are both direct and indirect, formal and informal, and performed alone or with others. In addition, given the importance of taking into account the stage of the development process (Ives and Olson, 1984), items were identified from four categories: one category for participative activities of a general and non-stage-specific nature (items 1 to 14 in the Appendix), and three categories for the stage-specific activities that occur during systems definition (items 15 to 29), physical design (items 30 to 44), and implementation (items 45 to 59).

In the questionnaire, respondents were asked to assess each participation item with a yes/no answer. The amount of activity (i.e., high/low) was not assessed. There were two fundamental reasons for assessing specific actions as dichotomies. First, many participation items are, by nature, dichotomous (e.g., whether or not one is a member of the development team, whether or not a formal liaison function between the users and IS exists, whether or not an agreement of work to be done was drawn up, etc.). Such participatory actions have no highs or lows—one either performs them or not. Second, participation is conceptualized as having taken part in or having done things. This is different from participation viewed as frequency (i.e., the number of times one performs a given activity), effort (i.e., the time or energy invested in a given activity), or influence (i.e., the effect of a given activity), all of which have different meanings. Each of these perspectives may provide a valid basis for the assessment of user participation. A continuous participation measure can be developed from each perspective. In the case of dichotomous variables, a continuous total score can be computed by adding the scores from individual items (Ghiselli, et al., 1981). In this study, several continuous measures of participation were created by summing the scores from multiple dichotomously measured participation items.

Two questionnaires were developed in this study, one to be given prior to systems development, and a second one to be given following implementation. Both questionnaires contained the same set of 59 yes/no participation activities. In the pre-development questionnaire, users were asked about their expected participation; in the post-implementation questionnaire, users were asked about their actual participation. Both expected and actual participation are thoughts about participative activities: questions of expected participation ask the person to look ahead and think about the activities he/she expects to perform; similarly, questions of actual participation ask the person to look back and think about the activities he/she has performed. The importance of considering both expected and actual user participation has been demonstrated by Doll and Torkzadeh (1989).

Generation of involvement and attitude questions

User involvement is a psychological state reflecting the importance and personal relevance of a specific system to the user (Barki and Hartwick, 1989). As such, involvement will be related to, but conceptually distinct from, user attitudes concerning the system (that is, how good or bad the system is perceived to be). To measure user involvement, 11 seven-point unipolar scales pertaining to importance and personal relevance (e.g., non-essential/essential) were selected from Zaichkowsky's (1985) instrument (these scales are presented in the Appendix). The other nine items from Zaichkowsky's instrument were not used to measure involvement because, in our opinion, they appeared to assess attitude, not involvement.

Consistent with theorizing in the field of psychology, attitude is conceptualized as an affective or evaluative judgment of some person, object, or event. Attitude can be measured with a procedure that locates the person's position on a bipolar affective or evaluative dimension; for example, semantic differential scales (Fishbein and Ajzen, 1975). Osgood, et al. (1957), in their study of meaning, have identified many evaluative scales. For our study, four seven-point bipolar evaluative scales (e.g., bad/good), based on the works of Osgood, et al. (1957), were used to measure user attitude and are presented in the Appendix. Such scales have been recommended by Ajzen and Fishbein (1980). Of note, two of the four attitude scales were among the nine items discarded from Zaichkowsky's measure of involvement.

These 15 items (11 assessing involvement; four assessing attitude) were included on both questionnaires. In the pre-development questionnaire, users were asked to think about "the new system to be developed." Thus, current thoughts and feelings concerning a future system were assessed. In the post-implementation questionnaire, users were asked about "the new system that has been developed." Thus, thoughts and feelings concerning an existing system were measured.

Sample of respondents

A letter describing the purpose of the study was mailed to 2603 data processing managers who were members of CIPS (the Canadian Information Processing Society). The data processing managers of virtually all Canadian companies that owned a mini or a mainframe computer were therefore contacted. In the letter, respondents were asked whether their organization planned to start the development of a new information system application within the next few months. A total of 460 responses were received to this initial letter (17.7 percent of the initial mailing list). Of these, 210 organizations stated that they were planning to start developing one or more new systems in the near future. Several criteria were employed to select applications for the study. Specifically, we wished to study business-oriented applications that were new systems, not enhancements of existing ones, and that were to be developed in-house. For practical reasons, we also needed systems that were expected to take less than one year to be developed. One hundred thirty organizations met these criteria.

The 130 organizations were contacted by phone to identify the future users of each system that was to be developed. A user was defined as a person who, as part of his or her regular job, either used the system hands-on or made use of the outputs produced by the system. One thousand fifty-nine users were identified. Pre-development questionnaires were mailed to the users identified during the phone survey. Of these, 293 usable questionnaires were returned (27.7 percent of the respondents) and form the pre-development sample. The 293 responses pertain to 116 applications in 86 organizations.

Once pre-development questionnaires were received, we kept in touch with the user and/or the contact person so that the time of project completion would be known. Depending on each project, this period lasted from four to 22 months. Three months after the system was put into use, a post-implementation questionnaire was mailed to each user (or contact person within the organization). One hundred twenty-seven usable questionnaires, pertaining to 74 applications in 60 organizations, were returned and form the post-implementation sample. Of these, 105 questionnaires were from respondents in the pre-development sample; the remaining 22 questionnaires were from new respondents, typically a person replacing a pre-development respondent who had been promoted or who had left the organization. Thus, two partially overlapping samples were used in the study, with items measured at two points in time (expected and actual participation, pre- and post-involvement, pre- and post-attitude).

The overall strategy in data analysis was to explore the underlying structure of user participation, user involvement, and user attitude by conducting and comparing factor analyses from the two samples.³ Replication of factor structures with different samples constitutes evidence of their validity (Kerlinger, 1986). While it is possible that levels of expected participation may differ from levels of actual participation, there is no real reason to expect differences in the way that such thoughts are organized or structured in one's mind. Thus, expected and actual participation should be found to have similar factor structures. Similarly, while pre-development and post-implementation levels of involvement and attitude may differ, there is no real reason to expect differences in the way such thoughts and feelings are mentally organized or structured. Similar pre-development and post-implementation factor structures are therefore expected.

In the factor analyses of user participation, we expected that two dimensions would emerge—one dimension reflecting development-related responsibilities and a second dimension reflecting specific activities. For user involvement and attitude, three dimensions were expected to emerge—two dimensions reflecting the two aspects of involvement (importance and personal relevance) and a third dimension reflecting the separate construct of attitude (affect or evaluation).

³ Given that participation items were each scored dichotomously, an objection can be raised to the use of factor-analytic procedures for such data. While the use of dichotomous variables is not, in a strict sense, compatible with factor analytic models, many experts are favorable toward such practice. For example, Thurstone (1969) states that factor analysis can be pursued with qualitative, all-or-nothing, non-continuous scales and relations. Cattell (1973) states that factors obtained using second-class scores (i.e., dichotomous variables) are "...remarkably immune to distorted distributions or crude coefficients" (p. 328). Kim and Mueller (1978), while more conservative in their recommendation, still claim that phi and point biserial coefficients (i.e., correlations between dichotomous variables and correlations between dichotomous and continuous variables) may be used heuristically as a means of finding general clusterings of variables. Finally, Gorsuch (1983) states that phi and point biserial coefficients may be factor analyzed with no major problems, provided variables with extreme skews are avoided. In this study, an analysis of individual item skews revealed an absence of items with extreme skews. Further, possible problems of instability in the factor solution and loadings are minimized in our application by the use and comparison of factor solutions and loadings drawn from two separate samples.

The factor analyses were also used to select items for scales of user participation, user involvement, and user attitude. Two estimates of item loadings, one from each sample, were employed to select items for these scales. Thus, items were selected from responses assessing both expected and actual participation, and pre-development and post-implementation involvement and attitude. The resulting scales should therefore be both stable and comprehensive, and applicable in a variety of contexts.

In using factor analysis, a number of decisions must be made. In this study, two key decisions concerned the number of factors to be extracted and the choice of items for measures of our three constructs. Our criteria in making these decisions were as follows. First, to choose the number of factors to be extracted:

1. Consider from each factor analysis (pre-development sample, post-implementation sample) those factors with eigenvalues greater than 1.00. If the two factor analyses agree, select that number of factors.

2. If the two factor analyses disagree, use theoretical expectations to select the number of factors. However, theoretical expectations may be used only if (a) the number of factors obtained from one of the factor analyses matches the a priori theoretical expectation, and (b) the specific items that load on each factor also match theoretical expectations.

3. Using theoretical expectations to select the number of factors assumes that the newly obtained factor structures are understandable. If not, retain the two original and different factor structures.

Second, to choose items for the three measures:

1. Select items that load greater than .50 on the same factor of both factor analyses (pre-development sample, post-implementation sample).⁴

2. If more items are needed to describe the dimension, select items that load greater than .50 on one of the factor solutions, load greater than .35 on the second factor solution, and show substantial differentiation in the loadings between different factors of this second solution.⁵ (A sketch of how these two selection rules might be applied appears immediately after this list.)
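
The following Python sketch shows one way the two item-selection rules could be coded, assuming rotated loading matrices (items x factors) are available for each sample; the numeric cutoff used for "substantial differentiation" is our assumption, since the paper does not quantify it:

    import numpy as np

    def select_items(load_pre, load_post, strict=0.50, relaxed=0.35, gap=0.15):
        """Apply the two selection criteria to rotated loading matrices
        (items x factors), one per sample. gap = assumed threshold for
        'substantial differentiation' between factors (not from the paper)."""
        selected = []
        for i in range(load_pre.shape[0]):
            f_pre = int(np.argmax(np.abs(load_pre[i])))
            f_post = int(np.argmax(np.abs(load_post[i])))
            if f_pre != f_post:            # must load on the same factor
                continue
            a = abs(load_pre[i, f_pre])
            b = abs(load_post[i, f_post])
            if a > strict and b > strict:  # criterion 1: > .50 in both
                selected.append(i)
                continue
            # Criterion 2: > .50 in one solution, > .35 in the other,
            # with clear differentiation in the weaker solution.
            weaker = np.abs(load_post[i]) if b < a else np.abs(load_pre[i])
            primary, secondary = np.sort(weaker)[::-1][:2]
            if max(a, b) > strict and min(a, b) > relaxed \
                    and primary - secondary > gap:
                selected.append(i)
        return selected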

Analysis of User Participation

As previously mentioned, the 59 participation items consist of four groups of items: 15 stage-specific items for each of the three systems development stages, and 14 non-stage-specific items. Factor analyzing all 59 items simultaneously would not be appropriate for three reasons. First, since the activities performed during a given stage tend to occur together in time, stages may emerge as dimensions in the factor analysis. To avoid such superficial results, the participative activities pertaining to each stage need to be analyzed separately. Second, seven of the 15 items refer to activities common to all three stages (items 15 to 35 in the Appendix). For example, users were asked if they formally reviewed the work done by the IS staff during the system definition stage (item 27), during the physical design stage (item 28), and during the implementation stage (item 29). Since there is likely to be a tendency for individuals to perform similar activities either at all three stages or at none of the stages, a simultaneous analysis of these items is likely to result in superficial dimensions reflecting each activity. Again, this suggests that stages should be analyzed separately. Third, our sample size, especially in the post-implementation sample (n = 127), is apt to produce an unstable solution if all 59 items are factor analyzed simultaneously.

To avoid these potential problems, the analysis of user participation was conducted in two steps. Step one may be considered a preliminary analysis. Here, factor analyses were performed on subsets of the total item pool. Four subsets were created and are presented in the Appendix (15 system definition items; 15 physical design items; 15 implementation items; and 21 general, non-stage-specific items—the 14 non-stage-specific items plus seven composites generated by averaging across the seven common stage-specific items). The goal of the preliminary analysis was to eliminate ambiguous or idiosyncratic items. The factor analyses were examined, and items not meeting the selection criteria noted above were discarded.⁶ Twenty-five items, presented in Table 1a, were retained for the main analysis.

⁴ As Kerlinger (1986) states: "Unfortunately, there is no generally accepted standard error of factor loadings. . . Some factor analysts in some studies do not bother with loadings less than .30 or even .40" (p. 572). A factor loading of .50 is conservative and is used by many researchers (Ives, et al., 1983; Straub, 1989).

⁵ This is in fact a slight relaxation of the first criterion above, and was introduced in order to deal with items that have their primary loading on the same conceptual factor in both samples, but whose loadings do not satisfy the criterion of .50 in both analyses.

In step two, the 25 participation items were submitted to two factor analyses—one for the pre-development sample and one for the post-implementation sample. The pre-development and post-implementation factor structures were compared, both intuitively and using a correlation measure (to be described below). Factor loadings from the pre-development and post-implementation samples were examined, and items were selected to create a user participation scale.

Analysis of User Involvement and Attitude

The 11 involvement and four attitude items were combined into a single set of 15 items and also submitted to two factor analyses—one for the pre-development sample and one for the post-implementation sample. Again, pre-development and post-implementation factor structures were compared, both intuitively and using a correlation measure. Factor loadings from the pre-development and post-implementation samples were examined, and items were selected to create user involvement and attitude scales.

User participation

Two principal component factor analyses with varimax rotation were conducted for the set of 25 participation items. In the analysis of the pre-development sample, three factors with pre-rotation eigenvalues of 7.54, 2.04, and 1.26 were obtained. The three factors were labeled user-IS relationship, responsibility, and hands-on activities. The first dimension, user-IS relationship, taps participation activities involving a relationship between the users and the IS staff; for example, "Information Systems/Data Processing staff kept me informed," "I formally reviewed work done by Information Systems/Data Processing," and "I formally approved work done by Information Systems/Data Processing." The second dimension, responsibility, refers to managerial assignments or activities that are typically performed by the project leader or manager; for example, "I had responsibility to select hardware/software," "I had responsibility for estimating development costs," "I had responsibility to request additional funds," and "I was the leader of the project team." The third dimension, hands-on activities, reflects hands-on systems development activities that users personally perform; for example, "I defined/helped define report formats," "I created the user procedures manual," and "I designed the user training program."

⁶ A detailed description of the preliminary analysis can be found in Barki and Hartwick (1993b).

Four factors emerged in the analysis of the post-implementation sample, with pre-rotation eigenvalues of 7.20, 2.25, 1.26, and 1.25. The first two factors represent the user-IS relationship and responsibility dimensions obtained with the pre-development sample. The remaining two dimensions both pertain to hands-on activities, one for the physical design stage and one for the implementation stage. Given the three-factor structure obtained in the pre-development sample, a three-factor solution was forced on the post-implementation data. However, the forced solution was uninterpretable. Consequently, the interpretable four-factor solution was kept. Factor loadings and post-rotation eigenvalues for the three-factor (pre-development sample) and four-factor (post-implementation sample) solutions are shown in Table 1a.
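
For readers wishing to reproduce this type of analysis, the following sketch performs principal components with varimax rotation; X is assumed to be a respondents x items matrix of 0/1 answers, and the rotation routine is the standard textbook algorithm, not code from the authors:

    import numpy as np

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        """Standard varimax rotation of a loadings matrix (items x factors)."""
        p, k = loadings.shape
        R = np.eye(k)
        d = 0.0
        for _ in range(max_iter):
            d_old = d
            L = loadings @ R
            u, s, vt = np.linalg.svd(
                loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.diag(L.T @ L))))
            R = u @ vt
            d = s.sum()
            if d_old != 0 and d / d_old < 1 + tol:
                break
        return loadings @ R

    def principal_loadings(X, n_factors):
        """Unrotated principal component loadings and eigenvalues.
        For 0/1 items, np.corrcoef yields phi coefficients (cf. footnote 3)."""
        corr = np.corrcoef(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(corr)
        order = np.argsort(eigvals)[::-1][:n_factors]
        return eigvecs[:, order] * np.sqrt(eigvals[order]), eigvals[order]

    # e.g., force a three-factor solution as described in the text:
    # loadings, eigs = principal_loadings(X, n_factors=3)
    # rotated = varimax(loadings)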

To quantitatively assess whether the three factors of the pre-development analysis corresponded to the four factors of the post-implementation analysis, a correlation analysis of factor scores was performed. Four sets of factor scores were created for each respondent—(1) one set using standardized scoring coefficients from the pre-development factor solution applied to the pre-development user responses, (2) one set using standardized scoring coefficients from the post-implementation factor solution applied to the pre-development user responses, (3) one set using standardized scoring coefficients from the pre-development factor solution applied to the post-implementation user responses, and (4) one set using standardized scoring coefficients from the post-implementation factor solution applied to the post-implementation user responses.

Two correlation analyses were then performed, one analysis on the pre-questionnaire responses (factor scores from (1) and (2) should be correlated if they represent the same dimensions), and one analysis on the post-questionnaire responses (factor scores from (3) and (4) should be correlated if they represent the same dimensions). In essence, this procedure performs a double cross-validation of the factor structures. To the extent that the pre-development scores for a factor correlate with post-implementation scores, that factor may be claimed to be stable across the two samples. If all factors are so correlated, it may be claimed that the factors underlying expected participation are similar to the factors that underlie actual participation.
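
A minimal sketch of this double cross-validation, assuming the standardized responses and the scoring-coefficient matrices from the two factor solutions are at hand (all names here are ours):

    import numpy as np

    def factor_score_correlations(Z, W_a, W_b):
        """Z: standardized responses (respondents x items); W_a, W_b:
        scoring coefficients (items x factors) from two factor solutions.
        Returns correlations between the two sets of factor scores."""
        scores_a = Z @ W_a
        scores_b = Z @ W_b
        k = scores_a.shape[1]
        return np.corrcoef(scores_a.T, scores_b.T)[:k, k:]

    # Run once on the pre-development responses and once on the
    # post-implementation responses; high correlations between matching
    # factors indicate that the same dimensions underlie both solutions.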

Results of this analysis are presented in Table 1b and Table 1c. As can be seen, pre-factor 1 is strongly correlated with post-factor 1 (r's = .97 and .97); similarly, pre-factor 2 is strongly correlated with post-factor 2 (r's = .86 and .87); finally, pre-factor 3 is correlated with both post-factor 3 (r's = .92 and .91) and post-factor 4 (r's = .52 and .37). Thus, the factor structure that underlies users' expected participation, assessed prior to systems development, seems quite similar to the factor structure that underlies actual participation, as reported following the implementation of the system. The one difference is a more differentiated structure for actual participation, reflecting the different hands-on activities performed during physical design and during implementation.

Given the general stability of the two structures, items based upon the loadings of both factor analyses may then be used together in order to select items to measure user participation. Three subscales were created for the three dimensions of user participation: user-IS relationship (consisting of eight items—A2, A3, A4, A5, A6, A7, 39, and 40), responsibility (consisting of six items—A1, 2, 5, 7, 8, and 9), and hands-on activities (consisting of six items—44, 45, 46, 57, 58, and 59). Five items were discarded (items 11, 12, 42, and 47 because they failed to load higher than .50 in either analysis; item 6 because its primary loading was on different factors in the pre-development and post-implementation samples). In addition, a 20-item overall scale of user participation was also created by combining the items from all three subscales.

The factor analyses for the pre-development and post-implementation samples were re-run without the discarded items in order to ensure the replication of the factor structures. With the reduced set of 20 participation items, the three-factor solution of the pre-development sample and four-factor solution of the post-implementation sample were again obtained. Further, all participation items had their primary loading on the same factors. Thus, elimination of the five items affected neither the factor structures nor the individual item loadings.

Developing scales of user involvement and attitude

Two principal component factor analyses, one for the pre-development sample and one for the post-implementation sample, were conducted for the 15 involvement and attitude items. In the pre-development analysis, one factor emerged with an eigenvalue of 8.87. Thus, user involvement and attitude concerning the system to be developed were not distinguished by respondents in the pre-development sample. On the other hand, the anticipated three-factor solution was found in the analysis of post-implementation responses. In this analysis, three factors emerged with pre-rotated eigenvalues of 8.18, 1.25, and 1.08, reflecting the hypothesized dimensions of importance, personal relevance, and attitude. Thus, users' thoughts concerning a system in use are more differentiated than for a system to be developed in the future. However, when the theoretically expected three-factor solution was forced on the pre-development sample, the hypothesized dimensions of importance, personal relevance, and attitude again emerged. Factor loadings and post-rotation eigenvalues for the two three-factor analyses are presented in Table 2a.



Table 1a. Factor Analysis of Participation (General and Stage Items Combined)

                                               Pre-Questionnaire         Post-Questionnaire
Item                                           F1     F2     F3          F1     F2     F3     F4
                                               (REL)  (RES)  (HA)        (REL)  (RES)  (HA)   (HA)
Evaluated requirements analysis (39)—REL        .70    .05    .09         .50    .08    .21    .09
Formally approved IS work (A6)—REL              .71    .38    .10         .73    .19    .10    .31
Approved requirements analysis (40)—REL         .69    .17    .15         .52    .16    .19    .13
Formally reviewed IS work (A5)—REL              .70    .29    .15         .77    .19    .13    .17
Could make changes to agreement (A3)—REL        .66    .18    .15         .66    .21    .13    .16
Formal agreement of work to do (A2)—REL         .69    .06    .04         .55   -.06    .08   -.10
IS kept me informed (A4)—REL                    .57    .25    .10         .64    .02    .24   -.04
Signed off formal agreement (A7)—REL            .49    .31    .08         .59    .26   -.14    .21
Responsibility to select H/S (8)—RES            .21    .67    .16         .02    .64    .21    .14
Responsibility to estimate costs (5)—RES        .03    .63    .17        -.04    .86    .10    .02
Main responsibility for project (A1)—RES        .28    .60    .27         .32    .45    .36    .42
Leader of project team (2)—RES                  .13    .56    .24         .18    .51    .20    .40
Responsibility for overall success (9)—RES      .36    .50    .12         .23    .46    .33    .29
Responsibility to request funds (7)—RES         .28    .52    .06         .10    .69    .07    .20
I helped define report formats (46)—HA          .29    .16    .80         .27    .17    .77   -.02
I helped define screen layouts (45)—HA          .30    .02    .73         .14    .22    .73    .27
I helped define I/O forms (44)—HA               .20    .17    .73         .19    .17    .68    .28
Created user procedures manual (59)—HA          .01    .37    .52         .14    .13    .02    .70
Designed user training program (57)—HA          .06    .43    .51         .12    .17    .22    .73
Trained other users (58)—HA                     .01    .24    .35         .12    .04    .03    .51
Responsibility to estimate benefits (6)         .46    .44    .19         .32    .50    .38    .03
Evaluated cost/benefit analysis (42)            .42    .18    .02         .45    .31   -.07    .05
Department member as liaison (12)               .41   -.03    .15         .37   -.09    .06    .21
IS staff member as liaison (11)                 .39    .02    .16        -.30   -.15    .09    .02
Developed control procedures (47)               .10    .37    .38        -.05    .25    .29    .46
Eigenvalues (before rotation)                  7.54   2.04   1.26        7.20   2.25   1.26   1.25
Eigenvalues (after rotation)                   4.65   3.30   2.89        4.04   3.09   2.42   2.41

REL—User-IS Relationship Subscale
RES—Responsibility Subscale
HA—Hands-on Activity Subscale

Table 1b. Intercorrelations of Factor Scores Obtained With the Pre-Development Sample

              Post-Factor 1  Post-Factor 2  Post-Factor 3  Post-Factor 4
Pre-Factor 1       .97            .19            .19           -.06
Pre-Factor 2       .21            .86            .04            .48
Pre-Factor 3       .04            .00            .92            .52

Table 1c. Intercorrelations of Factor Scores Obtained With the Post-Implementation Sample

              Post-Factor 1  Post-Factor 2  Post-Factor 3  Post-Factor 4
Pre-Factor 1       .97            .03            .15           -.02
Pre-Factor 2       .06            .87            .03            .49
Pre-Factor 3       .03            .07            .91            .37

To quantitatively assess whether the three factors forced in the pre-development analysis corresponded to the three factors that emerged in the post-implementation analysis, a correlation analysis of factor scores, similar to that performed for the participation items, was conducted. Results of this analysis are presented in Table 2b and Table 2c. As can be seen, the two sets of factors are very similar. Pre-factor 1 is correlated with post-factor 2 (r's = .91 and .96) and seems to represent a dimension of Importance; pre-factor 2 is correlated with post-factor 1 (r's = .83 and .90) and represents a dimension of Personal Relevance; pre-factor 3 is correlated with post-factor 3 (r's = .73 and .82) and represents the construct of Attitude.

By taking into account the factor loadings of the two analyses, scales of user involvement and attitude were created. Three involvement scales can be created—two subscales to represent the importance (consisting of five items, labelled "I" in Table 2a) and personal relevance (consisting of four items, labelled "PR" in Table 2a) dimensions, and a nine-item overall scale of user involvement, created by combining items from the two subscales. In addition, a scale of user attitude (consisting of four items, labelled "A" in Table 2a) can also be created. Two items were discarded ("exciting/unexciting" and "of interest to me/of no interest to me") because they failed to load higher than .50 in both of the analyses.

The factor analyses for the pre-development and post-implementation samples were re-run without the discarded items in order to confirm the obtained factor structures. With the reduced set of 13 involvement and attitude items, a one-factor solution emerged for the pre-development sample. As above, when a three-factor solution was forced, the same three factors and primary item loadings were obtained. In the re-analysis of the post-implementation sample, a two-factor solution was obtained: the first factor contained all nine involvement items; the second factor contained the four attitude items. Thus, similar dimensions emerged from the analysis of the reduced set of items.

Construct Validity

Construct validity looks at the extent to which a scale measures a theoretical variable of interest (Cronbach and Meehl, 1955). There are, however, many different aspects of construct validity that have been proposed in the psychometric literature (see, for example, Bagozzi, et al., 1992). Further, the constructs of this study may be assessed at two different levels: at the overall construct level and the specific dimension level. Thus, at one level, the construct validity of user participation, user involvement, and user attitude may be examined. In doing so, the following should be considered: (1) content validity, (2) internal consistency reliability, (3) convergent validity, (4) discriminant validity, and (5) predictive validity. In addition, at another level, the construct validity of the specific dimensions of user participation (user-IS relationship, responsibility, and hands-on activities) and user involvement (importance and personal relevance) may also be examined. In doing so, the following should be considered: (1) internal consistency reliability, (2) factorial validity, (3) convergent/discriminant validity, (4) predictive validity, and (5) nomological validity.

The constructs of user participation, user involvement, and user attitude

Content Validity

Content validity refers to the representativeness and comprehensiveness of the items used to create a scale. It is assessed by examining the process by which scale items are generated (Nunnally, 1978; Straub, 1989).


Table 2a. Factor Analysis of Involvement and Attitude Items

                                               Pre-Questionnaire        Post-Questionnaire
Item                                           F1     F2     F3         F1     F2     F3
                                               (I)    (PR)   (A)        (PR)   (I)    (A)
Essential/nonessential (I)                      .75    .29    .24        .27    .74    .16
Trivial/fundamental (I)                         .69    .27    .36        .26    .74    .14
Significant/insignificant (I)                   .59    .33    .37        .29    .64    .25
Important/unimportant (I)                       .53    .44    .39        .27    .75    .25
Not needed/needed (I)                           .52    .42    .23        .28    .69    .34
Irrelevant to me/relevant to me (PR)            .32    .80    .37        .80    .29    .16
Of no concern to me/of concern to me (PR)       .35    .74    .36        .73    .40    .26
Matters to me/doesn't matter to me (PR)         .33    .63    .47        .84    .31    .31
Means nothing to me/means a lot to me (PR)      .50    .62    .27        .78    .27    .38
Useful/useless (A)                              .29    .35    .73        .15    .23    .83
Good/bad (A)                                    .32    .32    .78        .38    .17    .78
Worthless/valuable (A)                          .45    .43    .61        .21    .31    .84
Terrible/terrific (A)                           .44    .26    .47        .36    .21    .65
Exciting/unexciting                             .44    .29    .41        .48    .28    .38
Of interest to me/of no interest to me          .46    .38    .38        .73    .29    .22
Eigenvalues (before rotation)                  8.95   0.52   0.49       8.18   1.25   1.08
Eigenvalues (after rotation)                   3.50   3.33   3.13       3.96   3.34   3.23

I—Involvement—Importance Subscale
PR—Involvement—Personal Relevance Subscale
A—Attitude Scale

Table 2b. Intercorrelations of Factor Scores Obtained With the Pre-Development Sample

              Post-Factor 1  Post-Factor 2  Post-Factor 3
Pre-Factor 1       .16            .91            .37
Pre-Factor 2       .83            .21            .44
Pre-Factor 3       .44            .40            .73

Table 2c. Intercorrelations of Factor Scores Obtained With the Post-Implementation Sample

              Post-Factor 1  Post-Factor 2  Post-Factor 3
Pre-Factor 1       .07            .96            .11
Pre-Factor 2       .90            .00            .17
Pre-Factor 3       .26            .21            .82


In this research, definitions of user participation, user involvement, and user attitude were initially proposed based on a review of theory and research in IS and other disciplines. In generating items for the user participation scale, a comprehensive conceptualization was employed, including direct and indirect forms of participation, formal and informal activities, activities performed alone and with others, and both general and stage-specific assignments, activities, and behaviors. Several existing measures of user participation (Baroudi, et al., 1986; Franz and Robey, 1986; Olson and Ives, 1980; 1981; Robey, et al., 1989) were consulted to select items. A few items were also added to ensure completeness, resulting in a final list of 59 participation questions. To assess user involvement, 11 items were selected and adapted from Zaichkowsky's (1985) context-free measure of involvement. To assess user attitude, four separate bi-polar evaluative scales, based on the works of Osgood, et al. (1957) and Fishbein and Ajzen (1975), were employed. Together, these procedures enabled a representative and comprehensive sampling of the user participation, involvement, and attitude domains, providing evidence of content validity.

Internal Consistency Reliability

Internal consistency reliability looks at the extent to which the items used to assess a construct reflect a true common score for the construct. In this research, the internal consistency reliability of the 20-item user participation scale was calculated in two different ways. First, correlations between each item and the scale total (minus that item) were calculated. All items were found to correlate significantly (p < .01) with the scale total, with correlations ranging from 0.26 to 0.71. Second, Cronbach alphas for the scale were also found to be high (.91 and .89, respectively, for the pre-development and post-implementation samples).

Internal consistency reliability for the nine-item user involvement scale and the four-item user attitude scale were similarly assessed. Correlations between each of the scale items and the scale totals (minus that item) were found to be significant (p < .001), with correlations ranging from 0.67 to 0.82 for involvement, and from 0.65 to 0.86 for attitude. Cronbach alphas for the user involvement scale were .93 for both the pre-development and post-implementation samples. Alphas for the user attitude scale were .99 for both samples. Thus, all three of our scales were found to achieve high levels of internal consistency.
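
Both reliability statistics are standard computations; a brief sketch, assuming X holds one scale's items as a respondents x items matrix:

    import numpy as np

    def cronbach_alpha(X):
        """Cronbach's alpha for an items matrix (respondents x items)."""
        k = X.shape[1]
        item_variances = X.var(axis=0, ddof=1).sum()
        total_variance = X.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    def corrected_item_total(X):
        """Correlation of each item with the scale total minus that item."""
        total = X.sum(axis=1)
        return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                         for j in range(X.shape[1])])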

Convergent Validity

Convergent validity refers to the extent to which multiple measures of a construct agree with one another (Campbell and Fiske, 1959). In this research, the three user participation subscales of user-IS relationship, responsibility, and hands-on activities might be considered three different measures of the user participation construct. Thus, correlations among the three subscales can be considered evidence of convergent validity (Bagozzi and Yi, 1991). In the study, correlations of .54 and .40 were found between user-IS relationship and responsibility, .41 and .45 between user-IS relationship and hands-on activities, and .50 and .58 between responsibility and hands-on activities, for the pre-development and post-implementation samples, respectively. All correlations were significant (all p's < .001). Similarly, the two user involvement subscales of importance and personal relevance might be considered two different measures of the user involvement construct. Correlations of .79 and .65 were found between importance and personal relevance for the pre-development and post-implementation samples. Both correlations are significant (p's < .001). Thus, evidence for the convergent validity of both the user participation and user involvement scales is provided.

Discriminant Validity

Discriminant validity refers to the extent to which measures of different constructs are distinct (Campbell and Fiske, 1959). Specifically, correlations between distinct constructs should be significantly less than 1.00 (Bagozzi, et al., 1992). In the pre-development sample of our research, a correlation of .21 was found between expected user participation and user involvement, and a correlation of .20 was found between expected user participation and user attitude. In the post-implementation sample, correlations of .28 and .31 were found between actual user participation and the two constructs of user involvement and user attitude, respectively. All correlations are significantly less than 1.00 (p's < .001).⁷ In addition, they are also smaller than the correlations obtained among the three dimensions of user participation and between the two dimensions of user involvement. Together these results provide strong evidence for the distinctiveness of user participation, operationalized behaviorally, and the psychological constructs of user involvement and user attitude.

Correlations of .84 and .64 were found between user involvement and user attitude in the pre-development and post-implementation samples. Of these, only the latter is significantly less than 1.00 (p < .001).[8] Thus, there appears to be no discrimination between these two constructs prior to systems development. On the other hand, while they are still highly related, there is some discrimination between user involvement and user attitude in the post-implementation data. Evidence of discriminant validity between these constructs is therefore weak.

Predictive Validity

Predictive validity considers the extent to which measures of a construct predict measures of other constructs that are expected to be related on the basis of theory. According to Barki and Hartwick (1989), user participation, user involvement, and user attitude represent distinct constructs, with participation hypothesized to be one (of many) antecedents of involvement and attitude. Evidence concerning the distinctiveness of user participation and user involvement has just been presented. However, while distinct, we do believe that user participation and involvement are related.

[7] Correlations between two measures of the same construct may be less than 1.00 because of measurement error. To examine this possibility, correlations corrected for attenuation due to unreliability were examined. In the pre-development sample, correlations of .23 and .21 were found between expected participation and user involvement and user attitude, respectively. In the post-implementation sample, correlations of .31 and .33 were found between actual participation and user involvement and user attitude, respectively. The corrected correlations suggest that user participation is distinct from the psychological constructs of user involvement and user attitude.

[8] Correlations corrected for attenuation due to unreliability were also examined for user involvement and user attitude. Correlations of .88 and .68 were found for the pre-development and post-implementation samples.
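The correction applied in footnotes 7 and 8 is the classical correction for attenuation, which divides an observed correlation by the square root of the product of the two measures' reliabilities. As a check, the reported pre-development participation-involvement figures reproduce footnote 7's value:

```latex
r_{xy}^{*} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}
\qquad\text{e.g.}\qquad
\frac{.21}{\sqrt{(.91)(.93)}} \approx .23
```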


Users who participate in the systems development process are likely to develop beliefs that a new system is good, important, and personally relevant. Supporting this contention, past research using other measures has found that user participation leads to positive user attitudes concerning systems being developed (Pettingell, et al., 1988; Straub and Trower, 1988). There are several reasons why this could occur. Through their participation, users may be able to better communicate their information needs, which, if satisfied, will result in a better system, at least from their point of view (Robey and Farrow, 1982). Moreover, because of their participation, users may perceive that they have had substantial influence on the development process and thereby develop feelings of ownership. Finally, through participation, users are apt to develop a better understanding of how the system can help them in their job. However, participation is likely to be but one of many antecedents of involvement and attitude. Other influences include such factors as personality (e.g., need for achievement, locus of control, and dominance) and experience with information systems (e.g., education, type of systems used in the past, and amount and quality of experience with other systems). Given these other influences, the relationship between user participation and both involvement and attitude is expected to be moderate in magnitude. That is what we observed. As noted in the previous section, actual user participation was found to correlate .28 and .31 with involvement and attitude (both p's < .01).

The Dimensions of User Participation and User Involvement

Internal Consistency Reliability

In this research, the internal consistency reliability of the user participation subscales (user-IS relationship, responsibility, and hands-on activities) was also assessed. Correlations between subscale items and subscale totals (minus that item) were all significant (all p's < .001). Further, Cronbach alphas for each subscale were found to be satisfactory (.90 and .85 for user-IS relationship, .80 and .84 for responsibility, and .82 and .77 for hands-on activities, for the two samples). Internal consistency reliability for the two user involvement subscales (importance and personal relevance) was also assessed. Correlations between each of the subscale items and the subscale totals (minus that item) were all significant (all p's < .001). In addition, Cronbach alphas for both subscales were high (.88 and .89 for importance, and .92 and .94 for personal relevance, for the two samples). Thus, all of our subscales were found to achieve satisfactory levels of internal consistency.

Factorial Validity

Factorial validity refers to the extent to which a factor analytic solution is consistent with a priori theoretical expectations (Comrey, 1988; Kerlinger, 1986). Strong evidence of factorial validity was found in our research. Consider, first, the dimensions of user participation. We expected to find two dimensions of user participation, one reflecting user responsibilities and one reflecting user activities (Olson and Ives, 1981). Factor analyses of user participation were consistent with this expectation, yielding a responsibility factor and two activity factors (user-IS relationship and hands-on activities) in the pre-development analysis, and a responsibility factor and three activity factors (user-IS relationship, hands-on activities in physical design, and hands-on activities in implementation) in the post-implementation analysis.[9]
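A sketch of this kind of exploratory factor analysis follows, using the common eigenvalue-greater-than-one rule to decide how many factors to retain. The data are simulated to mimic three participation dimensions driving blocks of items; nothing here reproduces the study's actual correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 20

# Three latent dimensions driving blocks of items, as in the participation scale.
loadings = np.zeros((k, 3))
loadings[:8, 0] = 0.7    # user-IS relationship items
loadings[8:14, 1] = 0.7  # responsibility items
loadings[14:, 2] = 0.7   # hands-on activity items
scores = rng.normal(size=(n, 3)) @ loadings.T + rng.normal(scale=0.7, size=(n, k))

# Kaiser criterion: retain factors whose eigenvalues exceed 1.0.
eigvals = np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False))[::-1]
print("largest eigenvalues:", np.round(eigvals[:6], 2))
print("factors retained:", int((eigvals > 1.0).sum()))
```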

In the analysis of user involvement and attitude, three factors were expected: importance, personal relevance, and attitude. The hypothesized dimensions clearly emerged in the analysis of users' post-implementation responses. However, factor analysis of users' pre-development responses yielded only one factor. Still, when a three-factor solution was forced on this data, the anticipated dimensions of importance, personal relevance, and attitude were found. It seems, then, that users have only a rough and undifferentiated set of thoughts concerning a system prior to its development. However, by the time it has been implemented and used, a differentiated pattern of thoughts and feelings emerges. Following system implementation, the predicted dimensions of involvement and attitude can be seen.
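Forcing a fixed number of factors, as was done with the pre-development responses, might look like the following. The third-party factor_analyzer package and the varimax rotation are assumptions made for this sketch; the paper does not specify its software or rotation here.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # assumed third-party package

rng = np.random.default_rng(3)
# Stand-in for the 13 involvement (9) and attitude (4) items, driven by a
# single dominant component, as the pre-development responses appeared to be.
items = rng.normal(size=(200, 1)) + rng.normal(scale=0.8, size=(200, 13))

# Forcing a three-factor solution mirrors the analysis described above.
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(items)
print(np.round(fa.loadings_, 2))  # item loadings on the three forced factors
```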

[9] The fact that the differences between the two factor structures were quite minor is especially remarkable given that the pre-development and post-implementation samples assessed expected and actual participation, respectively. These results can also be interpreted as supporting the hypothesis that, at least in terms of structure, people think about expected and actual participation in similar ways.


Convergent/Discriminant Validity

Convergent validity refers to the extent to which multiple measures of a construct agree with one another; discriminant validity refers to the extent to which measures of different constructs are distinct (Campbell and Fiske, 1959). In this research, factor analysis was employed to investigate the convergent and discriminant validity of the three dimensions of user participation and the two dimensions of user involvement (Bagozzi and Yi, 1991; Nunnally, 1978). In factor analysis, factor loadings represent correlations between original item scores and factors. Thus, convergent validity is claimed when scale items load highly on relevant factors, and discriminant validity is claimed when items load more highly on one factor than on others. The selection criteria employed in this study therefore ensure both: items were retained only if they loaded highly on a given factor in both the pre-development and post-implementation analyses (convergent validity), and only if they loaded more highly on that factor than on the others (discriminant validity).
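That selection rule can be stated compactly in code: keep an item only if, in both samples, it loads highly on its intended factor and more highly there than anywhere else. The 0.40 cutoff below is an illustrative assumption, not the paper's stated threshold.

```python
import numpy as np

def select_items(pre, post, own, high=0.40):
    """Indices of items loading >= `high` on their own factor, and more
    highly there than on any other factor, in both loading matrices."""
    keep = []
    for i, f in enumerate(own):           # own[i]: intended factor of item i
        for L in (pre, post):
            row = np.abs(L[i])
            if row[f] < high or row.argmax() != f:
                break                     # fails in this sample: drop item
        else:
            keep.append(i)
    return keep

pre  = np.array([[0.7, 0.1], [0.2, 0.6], [0.5, 0.5]])
post = np.array([[0.6, 0.2], [0.3, 0.7], [0.1, 0.8]])
print(select_items(pre, post, own=[0, 1, 0]))  # [0, 1]: third item dropped
```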

Two additional factor analyses (one for each sample) were conducted to further explore the convergent and discriminant validity of the dimensions of user participation and user involvement. In these analyses, all 33 items selected for our scales (responsibility, 6 items; user-IS relationship, 8 items; hands-on activities, 6 items; importance, 5 items; personal relevance, 4 items; and attitude, 4 items) were analyzed together. In the analysis of pre-development responses, four factors with eigenvalues greater than 1.00 were found, representing involvement/attitude, user-IS relationship, responsibility, and hands-on activities. In the analysis of post-implementation responses, six factors with eigenvalues greater than 1.00 were obtained, representing involvement, attitude, user-IS relationship, responsibility, hands-on activities in physical design, and hands-on activities in implementation. These results provide further support for the convergent/discriminant validity of the construct of user participation, and the two constructs of user involvement and user attitude. In addition, they parallel exactly those obtained when user participation, and user involvement and attitude, were analyzed individually. In doing so, they provide evidence of convergent/discriminant validity for the three dimensions of user participation. However, evidence of discriminant validity concerning user involvement and user attitude was obtained only in the post-implementation sample.

Predictive Validity

As noted earlier, Barki and Hartwick (1989) have hypothesized that user participation is one (of many) antecedent of user involvement and attitude. Users who participate in the systems development process are likely to develop beliefs that a new system is good, important, and personally relevant. A similar prediction could be made for each of the dimensions of user participation. Correlational analyses provide some support for these theoretical expectations. Responsibility (r's = .26 and .33, both p's < .01) and hands-on activities (r's = .25 and .24, both p's < .01) were both significantly correlated with involvement and attitude; user-IS relationship was marginally related to involvement and attitude (r's = .14 and .17, p's < .11 and .07).

The relationship between the dimensions of user participation and user involvement seems to be moderate, at best. Further analysis, investigating separately the effect of participation on the two dimensions of user involvement, sheds some light on this. Participation seems to have little relationship with users' perceptions of the new system's importance (r's = .14, .05, .11, and .17 for overall user participation, user-IS relationship, responsibility, and hands-on activities, respectively; all non-significant). The benefit of user participation seems instead to lie in engendering perceptions of personal relevance (r's = .37, .22, .36, and .29 for overall user participation, user-IS relationship, responsibility, and hands-on activities, respectively; all p's < .01).

Nomological Validity

Nomological validity refers to the extent to which measures of a construct predict measures of other constructs, all embedded in a theoretical network of relationships (Cronbach and Meehl, 1955). Predictive validity and nomological validity differ only as a matter of degree (Bagozzi, et al., 1992): "The former scrutinizes how well the focal construct predicts a single criterion of theoretical interest; the latter investigates how well the focal construct functions within an entire network of hypotheses" (pp. 665-666). Using the nomological network as a criterion, construct validity of a measure is evidenced when the relationships of this measure to other measures are consistent with those expected on the basis of existing theory. Nomological validity for the measures proposed in this study has been assessed in work reported by Hartwick and Barki (1994) and Barki and Hartwick (1993a).[10]

Hartwick and Barki (1994) investigated the influence of user participation, user involvement, and user attitude on the use of a new system. Fishbein and Ajzen's Theory of Reasoned Action (Ajzen and Fishbein, 1977; 1980; Fishbein, 1980; Fishbein and Ajzen, 1975) was employed to describe their influence. Specifically, the three dimensions of user participation (user-IS relationship, responsibility, and hands-on activities) were hypothesized to lead to user involvement and user attitude toward the new system. In turn, involvement and attitude were hypothesized to lead to users' attitudes and norms concerning the use of the new system. Finally, users' attitudes and norms concerning use were hypothesized to lead to intentions to use, and actual use of, the new system. Using a longitudinal design and structural equation analyses, Hartwick and Barki (1994) found strong support for this model. Thus, the distinct nature of the user participation and involvement constructs and the hypothesized mediating role of involvement and attitude (between user participation and system use) were supported empirically. Of additional note, the key role of one participation dimension, responsibility, was identified. It was primarily when users performed ISD assignments and activities that entailed responsibility that feelings of high involvement and positive attitude occurred.
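The causal chain just described can be written down as a path model. The sketch below expresses it in lavaan-style syntax via the third-party semopy package; the variable names are hypothetical composites and the data are random placeholders, so this shows only the model's structure, not Hartwick and Barki's (1994) actual estimation.

```python
import numpy as np
import pandas as pd
import semopy  # assumed third-party SEM package

# Participation dimensions -> involvement and attitude toward the system
# -> attitude/norms concerning use -> intention to use -> use.
model_desc = """
involvement ~ relationship + responsibility + hands_on
sys_attitude ~ relationship + responsibility + hands_on
use_attitude ~ involvement + sys_attitude
use_norms ~ involvement + sys_attitude
intention ~ use_attitude + use_norms
use ~ intention
"""

rng = np.random.default_rng(4)
cols = ["relationship", "responsibility", "hands_on", "involvement",
        "sys_attitude", "use_attitude", "use_norms", "intention", "use"]
df = pd.DataFrame(rng.normal(size=(300, len(cols))), columns=cols)

model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())  # path estimates for each structural relation
```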


"There are two minor variations in the operationalizations of - ^these two studies and those prpposed here. Hartwick and SBarki (1994) and Barki and Hartwick (1993a) both employ a |seven-item subscale of user-IS relationship (omitting the item |"I signed off a formalized agreement of work to be done by 'the Information Systems/Data Processing staff") and a five- ditem subscale of hands-on activities (omitting the item "I 'trained other users to use this'system"). Ail other opera- ,!tionalizations in these two studies are the same as proposed 'in this paper. J


Barki and Hartwick (1993a) investigated the influence of user participation (user-IS relationship, responsibility, and hands-on activities) on interpersonal conflict, using a model proposed by Robey, et al. (1989; 1993). According to Robey and his colleagues, individuals who participate in the systems development process will have greater influence on IS design decisions, be involved in more disagreements or conflicts (with other members of the development group), but be more likely to resolve these conflicts to their satisfaction. Using the three participation subscales developed in this study and adapting questions from Robey, et al. (1989), Barki and Hartwick (1993a) tested these hypotheses using structural equation analyses. Paths were found linking participation to both influence and conflict, as well as a path from user participation through influence to satisfactory conflict resolution. These results largely agree with those of Robey, et al. (1989; 1993), providing further evidence for the nomological validity of the three user participation subscales developed here.

Barki and Hartwick (1989) have suggested that user participation and user involvement represent two distinct constructs. After a review of the psychology, organizational behavior, marketing, and IS literatures, they defined user involvement as a subjective psychological state reflecting the importance and personal relevance that a user attaches to a given system. As such, user involvement is likely to be related to, but distinct from, other subjective psychological states like user attitude, defined as an affective or evaluative judgment. On the other hand, Barki and Hartwick (1989) defined user participation behaviorally. Consistent with the PDM literature, users may be said to participate in ISD when they take part in, or contribute to, the system being developed. Participation can therefore be measured by assessing the specific assignments, activities, and behaviors that users or their representatives perform during the systems development process.

In this study, these conceptual definitions were used to operationalize the three constructs of user participation, user involvement, and user attitude. Of prime importance, a 20-item scale of user participation (along with three subscales containing 8, 6, and 6 items assessing the dimensions of user-IS relationship, responsibility, and hands-on activities, respectively) was created. In addition, a nine-item scale of user involvement (along with two subscales containing 5 and 4 items assessing the dimensions of importance and personal relevance, respectively) and a four-item scale of user attitude were also developed. Reliability and validity of the user participation, user involvement, and user attitude scales, as well as the participation and involvement subscales, were assessed in a number of ways: through an evaluation of internal consistency reliability, as well as content, factorial, convergent, discriminant, predictive, and nomological validities. Evidence for the reliability and construct validity of all scales and subscales was obtained in the study.

As noted above, Barki and Hartwick (1989) suggest that user participation and user involvement are two distinct constructs. Research presented in our study supports this contention. A moderate correlation (r = .28) was found between user participation and user involvement. While related, the two constructs clearly differ. Barki and Hartwick also hypothesized that user participation is one (of many) antecedent of user involvement and user attitude. Users who participate in the development process are likely to develop beliefs that a new system is good, important, and personally relevant. The moderate correlations between user participation and measures of user involvement and attitude support this contention. Further work, reported by Hartwick and Barki (1994), lends additional support. In a longitudinal analysis of the three constructs, they found that initial levels of involvement and attitude had little influence on the amount of user participation that occurred. Thus, one does not participate more if one believes that a new system to be developed will be good, important, or personally relevant. On the other hand, user participation was found to influence subsequent levels of both involvement and attitude. As noted earlier, there are several reasons why this could occur. Through participation, users may be able to influence the design of a new system, satisfying their needs. They may develop feelings of ownership. They may develop a better understanding of the new system and how it can help them in their job.

We believe, then, that these scales provide a basis for future research concerning user participation and involvement. Validation efforts, however, cannot be considered complete. In particular, two needs are apparent. First, there is a need for a cross-validation of the obtained dimensions of participation and involvement, and the items suggested for measuring them, ideally investigating a wide variety of users who are using many different types of information systems. Second, there is a need to enhance nomological validity by developing the theoretical richness of the participation, involvement, and attitude constructs. Empirically, many questions need to be answered. For example, what are the key determinants of user participation? How does user participation influence user involvement and attitude? Are some dimensions of participation more important than others? What additional factors influence levels of involvement and attitude? How are the constructs of participation, involvement, and attitude related to other psychological constructs like perceived usefulness and perceived ease of use? What other consequences does user participation have? It is hoped that future research will employ the proposed measures to investigate a wide variety of conceptual variables, leading to a richer theoretical description of the systems development process and its outcomes.

In addition to the general findings concerning user participation and user involvement, this study has also identified specific dimensions of the behavioral construct of user participation and the psychological constructs of user involvement and attitude. For user participation, we initially expected two dimensions: one containing activities that reflect development-related responsibilities and a second reflecting specific ISD tasks (Ives and Olson, 1984; Olson and Ives, 1981). Factor analyses of user participation responses were largely consistent with expectations, yielding the anticipated responsibility factor, along with two activity factors: one reflecting development activities that involve a relationship between users and IS staff, and a second reflecting hands-on activities performed during physical design and implementation.

It is interesting to compare these dimensions of participation with those found by Doll and Torkzadeh (1990). In their study of end users, three dimensions of participation were observed: "systems analysis," referring to hands-on activities performed during system definition and development; "implementation," referring to hands-on activities performed during physical design and implementation; and "administration," referring to two project management activities. The first two dimensions, then, reflect activities similar to those included in our third factor, hands-on activities. Of note, it is these activities that form Doll and Torkzadeh's recommended eight-item measure. Their third dimension, administration, seems to suggest activities similar to those captured by our responsibility factor. Our remaining factor, user-IS relationship, is not reflected in the Doll and Torkzadeh items. While this factor is not relevant in cases where end users develop their own applications, it is likely to be important in those cases where users and IS staff collaborate in systems development.

Our dimensions of user participation seemed quite stable, underlying both users' expectations of future participation prior to systems development and their reports of actual participation made following implementation. Ultimately, this suggests that a taxonomy of participative assignments, activities, and behaviors might be possible. In creating such a taxonomy, the dimensions of participation identified here (responsibility, user-IS relationship, and hands-on activities) might be crossed with a four-stage analysis (overall, system definition, physical design, and implementation). Assignments, activities, and behaviors that occur within each of the 12 (three by four) categories could then be identified and measured, resulting in an even fuller characterization of user participation.
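The proposed taxonomy is simply the cross product of the two facets. A minimal sketch of its 12 cells, each of which would collect the assignments, activities, and behaviors observed in that combination:

```python
from itertools import product

dimensions = ["responsibility", "user-IS relationship", "hands-on activities"]
stages = ["overall", "system definition", "physical design", "implementation"]

# One empty bucket per (dimension, stage) cell of the taxonomy.
taxonomy = {(d, s): [] for d, s in product(dimensions, stages)}
print(len(taxonomy))  # 12 categories
```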

In the analysis of user involvement and attitude, three factors were expected: importance, personal relevance, and attitude. However, factor analysis of pre-development responses yielded only one factor. Post-implementation responses, on the other hand, yielded the anticipated three dimensions. It seems, then, that our users had only a rough and relatively undifferentiated set of thoughts and feelings concerning the systems prior to their development. It is likely that this state is largely affective (cf. Osgood, et al., 1957), quite weak, and thus susceptible to attitude change attempts. However, by the time the new systems were implemented and used, a differentiated pattern of thoughts and feelings emerged. Being more cognitively complex, it may then be more difficult to change users' thoughts and feelings concerning systems after they are implemented (cf. Wyer, 1975). Being both well-defined and held more confidently, such thoughts and feelings are likely to have greater impact on the use of new systems (cf. Fazio and Zanna, 1981). The influence of user participation on this cognitive differentiation, as well as the relative influence of undifferentiated and differentiated cognitive structures on system use, would appear to be interesting avenues for future research.

Acknowledgements

The authors would like to thank the Social Sciences and Humanities Research Council of Canada for providing funding for this study.

References

Ajzen, I. and Fishbein, M. "Attitude-Behavior Relations: A Theoretical Analysis and Review of Empirical Research," Psychological Bulletin (84:5), 1977, pp. 888-918.
Ajzen, I. and Fishbein, M. Understanding Attitudes and Predicting Social Behavior, Prentice Hall, Englewood Cliffs, NJ, 1980.
Alavi, M. and Joachimsthaler, E.A. "Revisiting DSS Implementation Research: A Meta-Analysis of the Literature and Suggestions for Researchers," MIS Quarterly (16:1), March 1992, pp. 95-116.
Bagozzi, R.P. and Yi, Y. "Multitrait-Multimethod Matrices in Consumer Research," Journal of Consumer Research (17), March 1991, pp. 426-439.
Bagozzi, R.P., Davis, F.D., and Warshaw, P.R. "Development and Test of a Theory of Technological Learning and Usage," Human Relations (45:7), 1992, pp. 659-686.
Barki, H. and Hartwick, J. "Rethinking the Concept of User Involvement," MIS Quarterly (13:1), March 1989, pp. 53-63.
Barki, H. and Hartwick, J. "Participation and Conflict in System Development," Proceedings: Administrative Sciences Association of Canada Annual Conference (4), Lake Louise, Alberta, June 1993a, pp. 74-85.
Barki, H. and Hartwick, J. "Measuring User Participation, User Involvement, and User Attitude," Cahier du GReSI 93-11, Ecole des Hautes Etudes Commerciales, Montréal, Quebec, 1993b.
Baroudi, J.J., Olson, M.H., and Ives, B. "An Empirical Study of the Impact of User Involvement on System Usage and User Satisfaction," Communications of the ACM (29:3), March 1986, pp. 232-238.
Campbell, D.T. and Fiske, D.W. "Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix," Psychological Bulletin (56), March 1959, pp. 81-105.
Cattell, R.B. Factor Analysis: An Introduction and Manual for the Psychologist and Social Scientist, Greenwood, Westport, CT, 1973.
Comrey, A.L. "Factor Analytic Methods of Scale Development in Personality and Clinical Psychology," Journal of Consulting and Clinical Psychology (56), 1988, pp. 754-761.
Cote, J.A. and Buckley, M.R. "Estimating Trait, Method, and Error Variance: Generalizing across 70 Construct Validation Studies," Journal of Marketing Research (24), August 1987, pp. 315-318.
Cronbach, L.J. and Meehl, P.E. "Construct Validity in Psychological Tests," Psychological Bulletin (52), 1955, pp. 281-302.
Davis, F.D. "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology," MIS Quarterly (13:3), September 1989, pp. 319-340.
Doll, W.J. and Torkzadeh, G. "A Discrepancy Model of End-User Computing Involvement," Management Science (35:10), October 1989, pp. 1151-1171.
Doll, W.J. and Torkzadeh, G. "The Measurement of End-User Software Involvement," OMEGA (18:4), 1990, pp. 399-406.
Fazio, R.H. and Zanna, M.P. "Direct Experience and Attitude-Behavior Consistency," in Advances in Experimental Social Psychology (14), L. Berkowitz (ed.), Academic Press, San Diego, CA, 1981.
Fishbein, M. "A Theory of Reasoned Action: Some Applications and Implications," in Nebraska Symposium on Motivation (27), H. Howe and M. Page (eds.), University of Nebraska Press, Lincoln, NE, 1980.
Fishbein, M. and Ajzen, I. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Addison-Wesley, Reading, MA, 1975.
Franz, C.R. and Robey, D. "Organizational Context, User Involvement, and the Usefulness of Information Systems," Decision Sciences (17), July 1986, pp. 329-356.
Ghiselli, E.E., Campbell, J.P., and Zedeck, S. Measurement Theory for the Behavioral Sciences, Freeman, New York, NY, 1981.
Gorsuch, R. Factor Analysis, Erlbaum, Hillsdale, NJ, 1983.
Hartwick, J. and Barki, H. "Explaining the Role of User Participation in Information System Use," Management Science, 1994, in press.
Ives, B. and Olson, M.H. "User Involvement and MIS Success: A Review of Research," Management Science (30:5), May 1984, pp. 586-603.
Ives, B., Olson, M.H., and Baroudi, J.J. "The Measurement of User Information Satisfaction," Communications of the ACM (26:10), October 1983, pp. 785-793.
Jarvenpaa, S.L. and Ives, B. "Executive Involvement and Participation in the Management of Information Technology," MIS Quarterly (15:2), June 1991, pp. 205-227.
Kappelman, L.A. and McLean, E.R. "The Respective Roles of User Participation and User Involvement in Information System Implementation Success," Proceedings of the Twelfth International Conference on Information Systems, New York, NY, December 1991, pp. 339-350.
Kerlinger, F.N. Foundations of Behavioral Research, Holt, Rinehart & Winston, New York, NY, 1986.
Kim, J. and Mueller, C.W. Factor Analysis, Sage, Beverly Hills, CA, 1978.
Krugman, H.E. "The Measurement of Advertising Involvement," Public Opinion Quarterly (30), Winter 1967, pp. 583-596.
Lawler, E.E. and Hall, D.T. "Relationship of Job Characteristics to Job Involvement, Satisfaction, and Intrinsic Motivation," Journal of Applied Psychology (54:4), 1970, pp. 305-312.
Locke, E.A. and Schweiger, D.M. "Participation in Decision Making: One More Look," in Research in Organizational Behavior (1), JAI Press, Greenwich, CT, 1979, pp. 265-339.
Nunnally, J.C. Psychometric Theory, McGraw-Hill, New York, NY, 1978.
Olson, M.H. and Ives, B. "Measuring User Involvement in Information System Development," Proceedings of the First International Conference on Information Systems, Philadelphia, PA, December 1980, pp. 130-143.
Olson, M.H. and Ives, B. "User Involvement in System Design: An Empirical Test of Alternative Approaches," Information & Management (4), 1981, pp. 183-195.
Osgood, C.E., Suci, G.J., and Tannenbaum, P.H. The Measurement of Meaning, University of Illinois Press, Urbana, IL, 1957.
Pettingell, K., Marshall, T., and Remington, W. "A Review of the Influence of User Involvement on System Success," Proceedings of the Ninth International Conference on Information Systems, Minneapolis, MN, December 1988, pp. 227-236.
Robey, D. and Farrow, D. "User Involvement in Information System Development: A Conflict Model and Empirical Test," Management Science (28:1), January 1982, pp. 73-85.
Robey, D., Farrow, D., and Franz, C.R. "Group Process and Conflict in System Development," Management Science (35:10), October 1989, pp. 1172-1191.
Robey, D., Smith, L.A., and Vijayasarathy, L.R. "Perceptions of Conflict and Success in Information Systems Development Projects," Journal of MIS (10:1), Summer 1993, pp. 123-139.
Sherif, C.W., Sherif, M., and Nebergall, R.E. Attitude and Attitude Change, Saunders, Philadelphia, PA, 1965.
Straub, D.W. "Validating Instruments in MIS Research," MIS Quarterly (13:2), June 1989, pp. 147-169.
Straub, D.W. and Trower, J.K. "The Importance of User Involvement in Successful Systems: A Meta-Analytical Appraisal," MISRC-WP-89-01, Curtis L. Carlson School of Management, University of Minnesota, Minneapolis, MN, 1988.
Thurstone, L.L. "The Measurement of Attitudes," Journal of Abnormal and Social Psychology (26), 1931, pp. 249-269.
Thurstone, L.L. Multiple Factor Analysis, University of Chicago Press, Chicago, IL, 1969.
Vroom, V. and Jago, A.G. The New Leadership: Managing Participation in Organizations, Prentice Hall, Englewood Cliffs, NJ, 1988.
Wyer, R.S. Cognitive Organization and Change: An Information Processing Approach, Erlbaum, Potomac, MD, 1975.
Zaichkowsky, J.L. "Measuring the Involvement Construct," Journal of Consumer Research (12), December 1985, pp. 341-352.
Zanna, M.P. and Rempel, J.K. "Attitude: A New Look at an Old Concept," in The Social Psychology of Knowledge, D. Bar-Tal and A. Kruglanski (eds.), Cambridge University Press, New York, NY, 1988.

About the Authors

Henri Barki is associate professor of Information Systems at the Ecole des Hautes Etudes Commerciales in Montreal. He received his Ph.D. in information systems from the School of Business Administration, University of Western Ontario. His research interests concentrate on software development project management, including topics concerning user participation, involvement, and conflict. Journals where his papers have been published include Canadian Journal of Administrative Sciences, INFOR, Information & Management, Journal of MIS, MIS Quarterly, and Technologies de l'information et société.

Jon Hartwick is associate professor of Organizational Behavior at McGill University. He is a member of the Academy of Management and the American Psychological Association and has been published in Journal of Personality and Social Psychology, Journal of Experimental Social Psychology, Advances in Experimental Social Psychology, Journal of Management, Journal of Consumer Research, and MIS Quarterly. His current research interests include attitudes and behavioral prediction, motivation, job satisfaction and performance, and user participation, involvement and conflict.


Appendix

General, Non-Stage-Specific Participation Items

Item  Question
1.  Were you a member of the team that developed this system?
2.  Were you the leader of the project team?
3.  Was the time that you spent on the project team charged to the systems development budget?
4.  Was your performance on the project team evaluated by the management of your own department?
5.  Did you have responsibility for estimating development costs of the new system?
6.  Did you have responsibility for estimating the benefits of the new system?
7.  Did you have responsibility for requesting additional funds to cover unforeseen time/cost overruns?
8.  Did you have responsibility for selecting the hardware and/or software needed for the new system?
9.  Did you have responsibility for the success of the new system?
10. For the development of this system, analysts from the Information Systems/Data Processing Department were assigned to and located in our department.
11. For the development of this system, a member of the Information Systems/Data Processing staff acted as "formal liaison" between my department and Information Systems/Data Processing.
12. For the development of this system, a member of my department acted as "formal liaison" between my department and Information Systems/Data Processing.
13. Evaluation of the Information Systems/Data Processing staff's performance has been or will be influenced by my own personal evaluation of the new system's success.
14. Evaluation of the Information Systems/Data Processing staff's performance has been or will be influenced by my department's evaluation of the new system's success.
A1. I had main responsibility for the development project (during system definition/during physical design/during implementation). [An average of items 15, 16, and 17.]
A2. Information Systems/Data Processing staff drew up a formalized agreement of the work to be done (during system definition/during physical design/during implementation). [An average of items 18, 19, and 20.]
A3. I was able to make changes to the formalized agreement of work to be done (during system definition/during physical design/during implementation). [An average of items 21, 22, and 23.]
A4. The Information Systems/Data Processing staff kept me informed concerning progress and/or problems (during system definition/during physical design/during implementation). [An average of items 24, 25, and 26.]
A5. I formally reviewed work done by Information Systems/Data Processing staff (during system definition/during physical design/during implementation). [An average of items 27, 28, and 29.]
A6. I formally approved work done by the Information Systems/Data Processing staff (during system definition/during physical design/during implementation). [An average of items 30, 31, and 32.]
A7. I signed off a formalized agreement of the work done by the Information Systems/Data Processing staff (during system definition/during physical design/during implementation). [An average of items 33, 34, and 35.]


Participation Items for the System Definition Phase

Item  Question
15. I had main responsibility for the development project during system definition.
18. Information Systems/Data Processing staff drew up a formalized agreement of the work to be done during system definition.
21. I was able to make changes to the formalized agreement of work to be done during system definition.
24. The Information Systems/Data Processing staff kept me informed concerning progress and/or problems during system definition.
27. I formally reviewed work done by Information Systems/Data Processing staff during system definition.
30. I formally approved work done by the Information Systems/Data Processing staff during system definition.
33. I signed off a formalized agreement of the work done by the Information Systems/Data Processing staff during system definition.
36. I was interviewed by the Information Systems/Data Processing staff during the system definition phase.
37. I responded to questionnaires administered by the Information Systems/Data Processing staff during the system definition phase.
38. I developed the information requirements analysis (i.e., the analysis of user needs) for this system.
39. I evaluated an information requirements analysis developed by Information Systems/Data Processing.
40. I approved an information requirements analysis developed by the Information Systems/Data Processing staff.
41. I developed a cost/benefit analysis for this system.
42. I evaluated a cost/benefit analysis developed by the Information Systems/Data Processing staff.
43. I approved a cost/benefit analysis developed by the Information Systems/Data Processing staff.

Participation Items for the Physical Design Phase

Item  Question
16. I had main responsibility for the development project during physical design.
19. Information Systems/Data Processing staff drew up a formalized agreement of the work to be done during physical design.
22. I was able to make changes to the formalized agreement of work to be done during physical design.
25. The Information Systems/Data Processing staff kept me informed concerning progress and/or problems during physical design.
28. I formally reviewed work done by Information Systems/Data Processing staff during physical design.
31. I formally approved work done by the Information Systems/Data Processing staff during physical design.
34. I signed off a formalized agreement of the work done by the Information Systems/Data Processing staff during physical design.
44. For this system, I defined/helped define input/output forms.
45. For this system, I defined/helped define screen layouts.
46. For this system, I defined/helped define report formats.
47. I developed system controls and/or security procedures for this system.
48. I evaluated system controls and/or security procedures developed by Information Systems/Data Processing.
49. I approved system controls and/or security procedures developed by Information Systems/Data Processing.
50. The Information Systems/Data Processing staff developed a prototype of the new system for me.
51. The Information Systems/Data Processing staff presented a detailed walk-through of the system procedures and processes for me.

Participation Items for the Implementation Phase

Item  Question
17. I had main responsibility for the development project during implementation.
20. Information Systems/Data Processing staff drew up a formalized agreement of the work to be done during implementation.
23. I was able to make changes to the formalized agreement of work to be done during implementation.
26. The Information Systems/Data Processing staff kept me informed concerning progress and/or problems during implementation.
29. I formally reviewed work done by Information Systems/Data Processing staff during implementation.
32. I formally approved work done by the Information Systems/Data Processing staff during implementation.
35. I signed off a formalized agreement of the work done by the Information Systems/Data Processing staff during implementation.
52. I developed test data specifications for this system.
53. I reviewed the results of system tests done by the Information Systems/Data Processing staff.
54. I approved the results of system tests done by the Information Systems/Data Processing staff.
55. The Information Systems/Data Processing staff held a "special event" to introduce the system to me.
56. I was trained in the use of this system.
57. I designed the user training program for this system.
58. I trained other users to use this system.
59. I created the user procedures manual for this system.
