
Validating Instruments in MIS Research1

By: Detmar W. Straub
Information and Decision Sciences Department
Curtis L. Carlson School of Management
University of Minnesota
Minneapolis, Minnesota 55455

Abstract

Calls for new directions in MIS research bring with them a call for renewed methodological rigor. This article offers an operating paradigm for renewal along dimensions previously unstressed. The basic contention is that confirmatory empirical findings will be strengthened when instrument validation precedes both internal and statistical conclusion validity and that, in many situations, MIS researchers need to validate their research instruments. This contention is supported by a survey of instrumentation as reported in sample IS journals over the last several years.

A demonstration exercise of instrument validation follows as an illustration of some of the basic principles of validation. The validated instrument was designed to gather data on the impact of computer security administration on the incidence of computer abuse in the U.S.A.

1 This work was carried out under the auspices of International DPMA (Data Processing Management Association). It was supported by grants from IBM, IRMIS (Institute for Research on the Management of Information Systems, Indiana University Graduate School of Business) and the Ball Corporation Foundation. An earlier version of this article appeared as "Instrument Validation in the MIS Research Process," Proceedings of the Annual Administrative Sciences Association of Canada Conference, F. Bergeron (ed.), Whistler, B.C., June 1986, pp. 103-118.

Keywords: MIS research methodology, empirical measurement, theory construction and development, content validity, reliability, construct validity, internal validity

ACM Categories: H.0, J.0

Introduction

Instrument validation has been inadequately addressed in MIS research. Only a few researchers have devoted serious attention to measurement issues over the last decade (e.g., Bailey and Pearson, 1983; Goodhue, 1988; Ives, et al., 1983; Ricketts and Jenkins, 1985), and while the desirability of verifying findings through internal validity checks has been argued by Jarvenpaa, et al. (1984), the primary and prior value of instrument validation has yet to be widely recognized.

There are undoubtedly a number of reasons for this lack of attention to instrumentation. Because of rapid changes in technology, MIS researchers often feel that research issues must be handled with dispatch (Hamilton and Ives, 1982b). Then, too, exploratory, qualitative, and non-empirical methodologies that have dominated MIS research (Farhoomand, 1987; Hamilton and Ives, 1982b) may not require the same level of validation.2

Exactly why, then, should MIS researchers pay closer attention to instrumentation? The question is a fair one and deserves a full answer. In the first place, concerns about instrumentation are intimately connected with concerns about rigor in MIS methodology in general (McFarlan, 1984). McFarlan and other key researchers in the field (e.g., Hamilton and Ives, 1982a; 1982b) believe there is a pressing need to define meaningful MIS research traditions to guide and shape the research effort, lest the field fragment and never achieve its own considerable potential. The sense is that MIS researchers need to concentrate their energies on profitable lines of research -- streams of opportunity -- in both the near and the far term (Jenkins, 1983; 1985; McFarlan, 1984). This should be carried out, whenever possible, in a systematic and programmatic manner (Hunter, et al., 1983; Jenkins, 1983, 1985; Mackenzie and House, 1979), using, where appropriate (McFarlan, 1986), advanced statistical and scientific methods (King, 1965).

2 Cf. Campbell (1975); however, Benbasat, et al. (1987) also deal with this question.

Besides bringing more rigor in general to the scientific endeavor, greater attention to instrumentation promotes cooperative research efforts (Hunter, et al., 1983) in permitting confirmatory, follow-up research to utilize a tested instrument. By allowing other researchers in the stream to use this tested instrument across heterogeneous settings and times, greater attention to instrumentation also supports triangulation of results (Cook and Campbell, 1979). With validated instruments, researchers can measure the same research constructs in the same way, granting improved measurement of independent and dependent variables and, in the long run, helping to relieve the confounding that plagues many streams of MIS literature (cf. Ives and Olson, 1984).

Attention to instrumentation issues also brings greater clarity to the formulation and interpretation of research questions. In the process of validating an instrument, the researcher is engaged, in a very real sense, in a reality check. He or she finds out in relatively short order how well conceptualization of problems and solutions matches with actual experience of practitioners. And, in the final analysis, this constant comparison of theory and practice in the process of validating instruments results in more "theoretically meaningful" variables and variable relationships (Bagozzi, 1980).

Finally, lack of validated measures in confirmatory research raises the specter that no single finding in the study can be trusted. In many cases this uncertainty will prove to be inaccurate, but, in the absence of measurement validation, it lingers.

This call for renewed scientific rigor should not be interpreted in any way as a preference of quantitative to qualitative techniques, or confirmatory3 to exploratory research. In what has been termed the "active realist" view of science (Cook and Campbell, 1979), each has a place in uncovering the underlying meaning of phenomena, as many methodologists have pointed out (Glaser and Strauss, 1967; Lincoln and Guba, 1985). This article focuses on the methodologies and validation techniques most often found on the confirmatory side of the research cycle, as shown in Figure 1. The key point is that confirmatory research calls for rigorous instrument validation as well as quantitative analysis to establish greater confidence in its findings.

3 Strictly speaking, and from the point of view of Strong Inference, no scientific explanation is ever confirmed (Blalock, 1969; Popper, 1968); incorrect explanations are simply eliminated from consideration. The terminology is adopted here for the sake of convenience, however.

Linking MIS Research and Validation Processes

In order to understand precisely what instrument validation is and how it functions, it is placed in the context of why researchers -- MIS researchers in particular -- measure variables in the first place and how attention to measurement issues -- the validation process -- can undergird bases of evidence and inference. In the scientific tradition, social science researchers attempt to understand real-world phenomena through expressed relationships between research constructs (Blalock, 1969). These constructs are not in themselves directly observable, but are believed to be latent in the phenomenon under investigation (Bagozzi, 1979). In sociology, for example, causal constructs like "home environment" are thought to influence outcome constructs like "success in later life." Neither construct can be observed directly, but behaviorally relevant measures can be operationalized to represent or serve as surrogates for these constructs (Campbell, 1960).

In MIS, research constructs in the user involvement and system success literature offer a case in point. In this research stream, user involvement is believed to be a "necessary condition" (Ives and Olson, 1984, p. 586) for system success. As with other constructs in the behavioral sciences (Campbell, 1960), system success, "the extent to which users believe the information system available to them meets their information requirements" (Ives and Olson, 1984, p. 586), is unobservable and unmeasurable; it is, in short, a conceptualization or a mental construct. In order to come to some understanding about system effectiveness, therefore, researchers in this area have constructed a surrogate set of behaviorally relevant measures, termed User Information Satisfaction (UIS), for the system success outcome construct (Bailey and Pearson, 1983; Ives, et al., 1983; Ricketts and Jenkins, 1985).


* Exploratory Research: qualitative, non-empirical techniques; theory-building; grounded theory
* Confirmatory Research: quantitative, empirical techniques; theory-testing
* The two sides of the cycle are linked by conceptual refinements

Figure 1. The Scientific Research Cycle (Based on Mackenzie and House, 1979 and McGrath, 1979)

The place of theory in confirmatory research

Given this relationship between unobserved and observed variables, what is the role of theory in the process? Blalock (1969), Bagozzi (1980), and others argue that theories are invaluable in confirmatory research because they pre-specify the makeup and structure of the constructs and seed the ground for researchers who wish to conduct further studies in the theoretical stream. This groundwork is especially important in supporting programs of research (Jenkins, 1985).

By confining constructs and measures to a smaller a priori domain (Churchill, 1979; Hanushek and Jackson, 1977) and thereby reducing the threat of misspecification, use of theory also greatly strengthens findings. Moreover, selection of an initial set of items for a draft instrument from the theoretical and even non-theoretical literature simplifies instrument development. When fully validated instruments are available, replication of a study in heterogeneous settings is likewise facilitated (Cook and Campbell, 1979).

For MIS research, there is much to be gained by basing instruments on reference discipline theories. Constructs, relationships between constructs, and operations are often already well-specified in the reference disciplines and available for application to MIS. This point has been effectively made by Dickson, et al. (1980). A specific need for strong reference discipline theory in support of the user involvement thesis has been put forth by Ives and Olson (1984).

Difficulties in accurately measuring constructs

Measurement of research constructs is neither simple nor straightforward. Instrumentation that the UIS researcher devises to translate the UIS construct (as perceived by user respondents) into data, for instance, may be significantly affected by choice of method itself (as in interviews versus paper-and-pencil instruments) and components of the chosen method (as in item selection and item phrasing (Ives, et al., 1983)). Bias toward outcomes the researcher is expecting -- in this case a positive relationship between user involvement and system success -- can subtly or overtly infuse the instrument. Inaccuracies in measurement can also be reflected in the instrument when items are ambiguously phrased, length of the instrument taxes respondents' concentration (Ives, et al., 1983), or motivation for answering carefully is not induced. Knowledge about the process of system development, therefore, is only bought with assumptions about the "goodness" of the technique of measurement (Cook and Campbell, 1979; Coombs, 1964).

In a perfectly valid UIS instrument, data measurements completely and accurately reflect the unobservable research construct, system success. Given real-world limitations, however, some inaccurate measurements inevitably obtrude on the translation process. The primary question for MIS researchers is the extent to which these translation difficulties affect findings; in short, we need to have a sense for how good our instruments really are. Fortunately, instrumentation techniques are available that allow MIS researchers to alternately construct and validate a draft instrument that will ultimately result in an acceptable research instrument.

Instrument validation

In the MIS research process, instrument validation should precede other core empirical validities (Cook and Campbell, 1979),4 which are set forth according to the kinds of questions they answer in Figure 2. Researchers and those who will utilize confirmatory research findings first need to demonstrate that developed instruments are measuring what they are supposed to be measuring. Most univariate and multivariate statistical tests, including those commonly used to test internal validity and statistical conclusion validity, are based on the assumption that error terms between observations are uncorrelated (Hair, et al., 1979; Hanushek and Jackson, 1977; Lindman, 1974; Reichardt, 1979). If subjects answer in some way that is more a function of the instrument than the true score, this assumption is violated. Since the applicable statistical tests are generally not robust to violations of this assumption (Lindman, 1974), parameter estimates are likely to be unstable. Findings, in fact, will have little credibility at all.
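To make the preceding point concrete, the following sketch (not from the original article; all variable names and values are hypothetical) simulates what happens when the same instrument artifact contaminates the measures of two variables: the estimated relationship then partly reflects the instrument rather than the true score.

    # Minimal simulation sketch (hypothetical values, not the study's data): a shared
    # "instrument" component is added to both measured variables, so their error terms
    # are correlated and the estimated slope no longer recovers the true relationship.
    import numpy as np

    rng = np.random.default_rng(0)
    true_slope, n, reps = 0.5, 100, 1000

    def estimated_slopes(artifact_sd):
        slopes = []
        for _ in range(reps):
            x_true = rng.normal(size=n)
            y_true = true_slope * x_true + rng.normal(size=n)
            artifact = rng.normal(scale=artifact_sd, size=n)  # response driven by the instrument
            x_obs, y_obs = x_true + artifact, y_true + artifact
            slopes.append(np.polyfit(x_obs, y_obs, 1)[0])     # ordinary least-squares slope
        return np.array(slopes)

    for sd in (0.0, 1.0):
        s = estimated_slopes(sd)
        print(f"instrument artifact sd={sd}: mean estimated slope={s.mean():.2f}")

Under these assumptions the contaminated runs recover a slope near .75 rather than the true .50, which is why instrument validation has to come before internal and statistical conclusion validity.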

An instrument can be deemed invalid on grounds of the content of the measurement items. An instrument valid in content is one that has drawn representative questions from a universal pool (Cronbach, 1971; Kerlinger, 1964). With representative content, the instrument will be more expressive of the true mean than one that has drawn idiosyncratic questions from the set of all possible items. Bias generated by an unrepresentative instrument will carry over into uncertainty of results. A content-valid instrument is difficult to create and perhaps even more difficult to verify because the universe of possible content is virtually infinite. Cronbach (1971) suggests a review process whereby experts in the field familiar with the content universe evaluate versions of the instrument again and again until a form of consensus is reached.

4 External validity, which deals with persons, settings, and times to which findings can be generalized, does precede instrument validation in planning a research project. For the sake of brevity and because this validity can easily be discussed separately, external validity is not discussed here.

Instrument Validation
  Content Validity: Are instrument measures drawn from all possible measures of the properties under investigation?
  Construct Validity: Do measures show stability across methodologies? That is, are the data a reflection of true scores or artifacts of the kind of instrument chosen?
  Reliability: Do measures show stability across the units of observation? That is, could measurement error be so high as to discredit the findings?
Internal Validity: Are there untested rival hypotheses for the observed effects?
Statistical Conclusion Validity: Do the variables demonstrate relationships not explainable by chance or some other standard of comparison?

Figure 2. Questions Answered by the Validities

Construct validity is in essence an operational issue. It asks whether the measures chosen are true constructs describing the event or merely artifacts of the methodology itself (Campbell and Fiske, 1959; Cronbach, 1971). If constructs are valid in this sense, one can expect relatively high correlations between measures of the same construct using different methods and low correlations between measures of constructs that are expected to differ (Campbell and Fiske, 1959). The construct validity of an instrument can be assessed through multitrait-multimethod (MTMM) techniques (Campbell and Fiske, 1959) or techniques such as confirmatory or principal components factor analysis (Long, 1983; Nunnally, 1967).5 Measures (termed "traits" in this style of analysis) are said to demonstrate convergent validity when the correlation of the same trait and varying methods is significantly different from zero and converges enough to warrant "further examination" (Campbell and Fiske, 1959, p. 82). Evidence that it is also higher than correlations of that trait and different traits using both the same and different methods shows that the measure has discriminant validity.6

Since this systematic bias, or common method variance, is by definition unobservable, as are the constructs themselves (Nunnally, 1967), good examples of what can now be termed construct invalidity are difficult to come by. Nevertheless, patterns of correlated errors can occur. For example, take the case where large numbers of students unthinkingly check off entire columns in a course evaluation. If these same students were probed later for comparable information in a circumstance where thoughtfulness and fairness prevailed, there would likely be little agreement across methods, raising doubt about the validity of either instrument.

On the other hand, scores can even be distorted when there is no systematic response within the instrument. A single item can be ambiguous or misunderstood by individuals so that responses on this trait differ among alternative measures of the same trait. The subject has answered in a way that is a function of his or her misunderstanding rather than a variation of the true score. Reliability, hence, is essentially an evaluation of measurement accuracy -- for example, the extent to which the respondent can answer the same or approximately the same questions the same way each time (Cronbach, 1951). High correlations between alternative measures or large Cronbach alphas are usually signs that the measures are reliable.

5 It should be noted that factor analysis has been used to test the factorial composition of a data set (Nunnally, 1967) even when maximally different data collection methods are not used. Most of the validation of User Information Satisfaction instruments has followed this procedural line.

6 Concurrent and predictive validity (Cronbach and Meehl, 1955) are generally considered to be subsumed in construct validity (Mitchell, 1985) and, for this reason, will not be discussed here.
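As a rough illustration of the reliability assessment described above (a sketch, not the article's own computation; the response matrix is invented), Cronbach's alpha can be computed directly from the item variances and the variance of the summed scale:

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x k_items) matrix of scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # variance of each individual item
        total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical responses: five respondents answering three related items
    scores = np.array([[4, 5, 4],
                       [2, 2, 3],
                       [5, 5, 5],
                       [3, 4, 3],
                       [1, 2, 2]])
    print(round(cronbach_alpha(scores), 2))  # values near or above .80 are conventionally read as reliable

The .80 rule of thumb invoked later in the demonstration exercise refers to this coefficient.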

Internal validity

Internal validity raises the question of whether the observed effects could have been caused by or correlated with a set of unhypothesized and/or unmeasured variables. In short, are there viable, rival explanations for the findings other than the single explanation offered by the researcher's hypothesis (or hypotheses)? For general social science research, the subject has been treated at length by psychometricians Cook and Campbell (1979), who argue that causation requires ruling out rival hypotheses as well as finding associative variables. In MIS, the critical importance of internal validity has been argued by Jarvenpaa, et al. (1984).

It is crucial to recognize that internal validity in no way establishes that the researcher is working with variables that truly reflect the phenomenon under investigation. Sample groups, too, can be easily misdefined if the instrumentation is invalid. This point is made clearer through a hypothetical case.

Suppose, for the sake of illustration, that a researcher wished to detect the effect of system response time on user satisfaction in a record-keeping application. Suppose also that the researcher did not include in the methodological design a validation of the instrument by which user satisfaction was being measured. Such an instrument could be utilized in field or laboratory experimentation, quasi-experimentation, or survey research. Suppose, moreover, that had the researcher actually tested the instrument, serious flaws in the representativeness of measures (content validity), the meaningfulness of constructs as measured (construct validity), or the stability of measures (reliability) would have emerged. At least one of these flaws would almost certainly occur, for instance, if the user satisfaction measure was based entirely on how the user felt about EDP applications as a whole.7 In this scenario, extensive experimental and statistical controls placed on these meaningless and poorly measured constructs could seemingly rule out all significant rival hypotheses. Internal validity would thus be assured beyond a reasonable doubt. Findings, however, would be moot because measurement was suspect.

7 Cf. Ricketts and Jenkins (1985) on this point.


Statistical conclusion validity

Statistical conclusion validity is an assessment of the mathematical relationships between variables and the likelihood that this mathematical assessment provides a correct picture of the true covariation (Cook and Campbell, 1979). Incorrect conclusions concerning covariation (Type I and Type II error) are violations of statistical conclusion validity, and factors such as the sample size and reliability of measures can affect this validity.

Another factor used in determining the statistical conclusion validity of a study is statistical power. Power is the probability of correctly rejecting a false null hypothesis. Proper rejection is closely associated with sample size, so that tests with larger sample sizes are less likely to retain a false null hypothesis (Baroudi and Orlikowski, 1989; Cohen, 1969; 1977; Kraemer and Thiemann, 1987). Power is also statistically related to alpha, to the standard error or reliability of the sample results, and to the effect size, or degree to which the phenomenon has practical significance (Cohen, 1969). Non-significant results from tests with low power, i.e., less than an 80 percent probability of correctly rejecting a false null hypothesis (Cohen, 1969), are inconclusive and do not indicate that the effect is truly not present.
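As a hedged illustration of these relationships (using the statsmodels library, which postdates the article, and Cohen's conventional values rather than any figures from the study), the interplay of effect size, sample size, alpha, and the 80 percent power convention can be computed directly:

    # Power analysis sketch for a two-sample t-test; all numbers are illustrative.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Power achieved with a medium effect (Cohen's d = 0.5), alpha = .05, and 30 subjects per group
    achieved = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05)
    print(f"power with n=30 per group: {achieved:.2f}")     # falls well short of the .80 convention

    # Sample size per group required to reach 80 percent power under the same assumptions
    needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
    print(f"n per group for 80% power: {needed:.0f}")

Under these assumptions, roughly 64 subjects per group are needed before a non-significant result can be read as anything other than inconclusive.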

Statistical assumptions made by the technique(s) of choice (e.g., regression, MANCOVA, factor analysis, LISREL) have a bearing on the credibility of the analysis, but conclusions based on these statistics say nothing about the viability of rival hypotheses per se or the meaningfulness of constructs in the first place. Much confirmatory MIS research in the past has utilized only statistical conclusion validity to evaluate results, a situation that can often lead to confounded results because high correlation of cause and effect is only one of the criteria for establishing causality (Cook and Campbell, 1979).

In summary, it is possible to show the overall results of violating order or position in the validation process. Figure 3 highlights the dangers. Evaluating statistical conclusion validity alone establishes that variables covary or have some mathematical association. Without prior validation, it is not possible to rule out the possibility that statistical associations are caused by moderator variables (Sharma, et al., 1981) or misspecifications in the causal model (Blalock, 1969). Preceding statistical conclusion validity with internal validation procedures strengthens findings by allowing the researcher to control effects from moderator variables and rival hypotheses. Even these tests do not establish, however, that measures chosen for the study are directly tied to the molar mental constructs that answer the research questions (Cook and Campbell, 1979). To accomplish this final step, the instrument itself must be validated.

1. Statistical conclusion validity established: Mathematical relationships between the hypothesized study variables do exist; relationships between some untested variables may also exist; variables may or may not be presumed research concepts (constructs).

2. Internal validity established; statistical conclusion validity established: Mathematical relationships are explained by the hypothesized study variables and only these variables; variables may or may not be presumed research concepts (constructs).

3. Instrument validity established; internal validity established; statistical conclusion validity established: The hypothesized study variables do represent the constructs they are meant to represent; mathematical relationships are not a function of the instrumentation and are explained by the hypothesized variables and only those variables.

Figure 3. Outcomes From Omitted Validities

The Need for Instrument Validation in MIS

Before elaborating a demonstration exercise of instrument validation, it is important to show, as contended, that instruments in the MIS literature are, at present, insufficiently validated. To examine this contention, over three years of published MIS empirical research (January 1985-August 1988) have been surveyed. Surveyed journals include MIS Quarterly, Communications of the ACM, and Information & Management. To qualify for the sample, a study had to employ either: (a) correlational or statistical manipulation of variables, or (b) some form of data analysis (even if the data analysis was simply descriptive statistics). Studies utilizing archival data (e.g., citation analysis) or unobtrusive measures (e.g., system accounting measures) were omitted from the sample unless it was clear from the methodological description that key variable relationships being studied could have been submitted to validation procedures.

Background study results

The survey overwhelmingly supports the contention that instrumentation issues are generally ignored in MIS research. With 117 studies in the sample, the survey data indicates that 62 percent of the studies lacked even a single form of instrument validation. Table 1 summarizes other key findings of the survey.

As percentages in Table 1 indicate, MIS researchers rely most frequently (17 percent of the studies) on previously utilized instruments as a primary means of, presumably, validating their instruments. Almost without exception, however, the employment of previously utilized instruments in MIS is problematic from a methodological standpoint. In the first place, many previously utilized instruments were themselves apparently never validated. The only conceivable gain from this procedure, therefore, is to save the time of developing a wholly new instrument. There is no advantage from a validation standpoint.8 In the second place, problems arise in the case where researchers are adapting instruments that have been previously validated. In almost all cases, researchers alter these instruments in significant ways before applying them to the IS environment. However well-validated an instrument may have been in its original form, excising selected items from a validated instrument does not result in a validated derivative instrument. In fact, the more the format, order, wording, and procedural setting of the original instrument is changed, the greater the likelihood that the derived instrument will lack validated qualities of the original instrument.

The remainder of the descriptive statistics in Table 1 paint a similarly disturbing picture. Reliability is the most frequently assessed validity, but even in this case, some 83 percent of the studies do not test this minimal level of validation. Reliability is infrequently coupled with tests of construct validity (less than 16 percent of the studies), and assessments of content validity are almost unheard of (5 percent of the studies) in the literature.

Although the nature and extent of validation varied somewhat from journal to journal, a more revealing finding of this background study showed that experimental researchers were much less likely to validate their instruments than non-experimental researchers. Laboratory and field experiments are often pretested and piloted to ensure that the task manipulates the subjects as intended (manipulation checks), but the instruments used to gather data before and after the treatment are seldom validated. Instruments developed for case studies are also unlikely to be validated.

By comparison with reference disciplines like the administrative sciences, MIS researchers were less inclined to validate their instruments, according to the study data. These ratios, moreover, are generally several orders of magnitude in difference. More than 70 percent of the researchers in the administrative sciences report reliability statistics, compared with 17 percent in MIS. They also validate constructs twice as often as MIS researchers (Mitchell, 1985).

8 A weak argument can possibly be made that some degree of nomological validity can be gained from employing previously utilized instruments. This is a very weak argument, however, because nomological validity usually occurs only in a long and well-established stream of research, a situation that does not apply in this case.


Table 1. Survey of Instrument Validation Use in MIS Literature*

Instrumentation Categories           Response           Percentage
1. Pretest                           15 Yes / 102 No    13% Yes / 87% No
2. Pilot                              7 Yes / 110 No     6% Yes / 94% No
3. Pretest or pilot                  22 Yes /  95 No    19% Yes / 81% No
4. Previously utilized instrument    20 Yes /  97 No    17% Yes / 83% No
5. Content validity                   5 Yes / 112 No     4% Yes / 96% No
6. Construct validity                16 Yes / 101 No    14% Yes / 86% No
7. Reliability                       20 Yes /  97 No    17% Yes / 83% No

* A total of 117 cases were surveyed from three journals (43 from MIS Quarterly; 28 from Communications of the ACM; 46 from Information & Management).


A Demonstration Exercise of Instrument Validation in MIS

Conceptual appreciation of the role of instrument validation in the research process is useful. But the role of instrument validation may be best understood by seeing how validation can be applied to an actual MIS research problem -- validation of an instrument to measure computer abuse. This process exemplifies the variety of ways instruments might be validated, a process that is especially appropriate to confirmatory empirical research.

The research instrument in question was designed to measure computer abuse through a polling of abuse victims in industry, government, and education.9 Computer abuse was operationally defined as misuse of information system assets such as programs, data, hardware, and computer service and was restricted to abuse perpetrated by individuals against organizations (Kling, 1980). There is a growing body of evidence that the problem of computer abuse is serious (ABA, 1984) and that managers are concerned about IS security and control (Brancheau and Wetherbe, 1987; Canning, 1986; Dickson, et al., 1984; Sprague and McNurlin, 1986). Organizations respond to this risk from abuse by attempting to: (a) deter abusers through countermeasures such as strict sanctions against misuse (these programs managed by a security staff), or (b) prevent abusers through countermeasures such as computer security software. The overall project set out to estimate the damage being sustained by information systems from computer abuse and to ascertain which control mechanisms, if any, have been successful in containing losses.

9 This research was supported by grants from IRMIS (Institute for Research on the Management of Information Systems) and the Ball Corporation Foundation. It was accomplished under the auspices of the International DPMA (Data Processing Management Association).

As one of the first empirical studies in the field, the project developed testable propositions from the baseline of the criminological theory of General Deterrence (Nagin, 1978). Causal linkages were postulated between exogenous variables such as deterrent and preventive control measures, and endogenous variables such as loss and severity of impact. Use of theory in this manner strengthens instrument development by permitting the researcher to use prespecified and identified constructs.

Early in the research process, it was determined that the most effective and efficient means of achieving a statistical sample and gathering data was a victimization questionnaire. This type of research instrument has been used extensively in criminology (e.g., the National Crime Survey)10 and in prior computer abuse studies (AICPA, 1984; Colton, et al., 1982; Kusserow, 1983; Parker, 1981) to explore anti-social acts. Paradigms for exploring anti-social behaviors are well established in criminological studies and, because computer abuse appears to be a prototypical white collar crime, research techniques from this reference discipline were appropriate in this study.

10 Cf. Skogan (1981).


An obvious reference discipline for activities that involve a violation of social codes is criminology, which provides a ready behavioral explanation for why deterrents may be effective controls. General Deterrence theory has well-established research constructs and causal relationships. There is a long-standing tradition of research in this area and concurrence by panels of experts on the explanatory power of the theory (Blumstein, et al., 1978; Cook, 1982). Constructs and measures have been developed to test the theory since the early 1960s, and its application to the computer security environment, therefore, seemed timely.

The thrust of most of the theoretic deterrence literature has been on "disincentives" or sanctions against committing a deviant act. Disincentives are traditionally divided into two related, but independent, conceptual components: (1) certainty of sanction and (2) severity of sanction (Blumstein, et al., 1978). The theory holds that under conditions in which risk of being punished is high and penalties for violation of norms are severe, potential offenders will refrain from illicit behaviors.

In the literature, observable commitment of an enforcement group, such as the police, in punishing offenders typically serves as a surrogate for perception of risk or certainty of sanction (Gibbs, 1975). This assumes that potential offenders perceive risk to be in direct proportion to efforts to monitor and uncover illicit behaviors. In other words, people believe that punishment will be more certain when enforcement agents explicitly or implicitly "police" or make their presence felt to potential offenders. In information systems, this is equivalent to security administrators making their presence felt through monitoring, enforcing, and distributing information about organizational policies regarding system usage, or what this article refers to as deterrent countermeasures. When punishment is severe, it is assumed that offenders, especially less motivated potential offenders, are dissuaded from anti-social acts (Straub and Widom, 1984). Table 2 presents the pertinent connections between the conceptual terminology used in this article, the constructs most frequently cited in General Deterrence theory, and the actual items as measured in the final instrument, which appears in the Appendix.

Table 2. Concepts, Constructs, and Measures

Research Concepts    Construct                   Survey Item - Measure Description
Abuse                Damage                      25 - Number of incidents
                                                 39 - Actual dollar loss
                                                 38 - Opportunity dollar loss
                                                 37 - Subjective seriousness index
Deterrents           Disincentives: Certainty    10 - Full-time security staff
                                                 11 - Part-time security staff
                                                 12 - Total security hours/week
                                                 14b - Data security hours/week
                                                 15 - Total security staff salaries
                                                 22 - Subjective deterrent effect
                                                 13 minus 35 & 13 minus 28 minus 36 - Longevity of security (from inception to incident date)
                     Disincentives: Severity     18 - Information about proper use
                                                 19 - Most severe penalty for abuse
                                                 22 - Subjective deterrent effect
Preventives          Preventives                 16 - Use of software access control
                                                 17 - Use of specialized software
Rival Explanations   Environmental-Motivational  30 - Privileged status of offender
                     Factors                     29 - Amount of collusion
                                                 32 - Motivation of offender
                                                 31 - Employee/non-employee status
                                                 24 - Tightness of security
                                                 21 - Visibility of security
                                                 28 minus 35 & 36 - Duration of abuse


Evolving directly from prior instruments, a draft instrument was first constructed to reflect study constructs. Next, instrument validation took place in three well-defined operations. The entire research process was conducted in four phases, as outlined in Table 3.

Phase I: Pretest

In the pretest, the draft instrument was subjected to a qualitative testing of all validities. This phase was designed to facilitate revision, leading to an instrument that could be formally validated.

Personal interviews were conducted with 37 participants in order to locate and correct weaknesses in the questionnaire instrument. The interview schedule was structured so that three full waves of interviews could be conducted. Each version of the instrument reflected improvements suggested by participants up to that point and was continuously re-edited for the next version. The selection of interviewees was designed to get maximum feedback from varied areas of expertise, organizational roles, and geographical regions. Initial interviews included divergent areas of expertise: academic experts in methodology and related criminological areas, academic experts in information systems, practitioner experts in information systems and auditing, and law enforcement officials at the state and federal levels.

Participants came from a variety of organizations in the public and private sectors including banking, insurance, manufacturing, trade, utilities, transportation, education, and health services. A range of key informants were sought, including system managers, operations managers, data security officers, database administrators, and internal auditors.

The in-depth interviews offered insight into the functioning of the entire spectrum of security controls and what respondents themselves indicated were important elements in their deterrent force. Interviews were designed to move progressively from an open-ended general discussion format, to a semi-structured format, and finally to a highly structured item-by-item examination of the draft instrument. Information gathered earlier in the interview did not, thereby, bias responses later when the draft instrument was evaluated. In the beginning of the interview, participants were encouraged to be discursive. They spoke on all aspects of their computer security, including personnel, guidelines and policies, software controls, reports, etc. They were prompted by a simple request for information. Concepts independently introduced by more than three respondents were noted, as was the precise language in which these constructs were perceived by the participants (content validity and reliability). In the second, semistructured segment, questions from the interviewer directed attention to key matters of security raised by other participants but not raised in the current session. Clarification of constructs and the means of operationalizing selected constructs were also undertaken in this segment (construct validity and reliability).

To eliminate ambiguities and further test validities, participants in the third segment of the interview were asked to evaluate a version of the questionnaire item-by-item. Content validity was stressed in this segment by encouraging participants to single out pointless questions and suggest new areas for inquiry. Most participants chose to dialogue in a running commentary format as they reviewed each question. This facilitated preliminary testing of the other validities. Because misunderstanding of questions would contribute to measurement error in the instrument, for instance, particular attention was paid to possible discrepancies or variations in answers (reliability).

Table 3. Phases of the Computer Abuse Measurement Project

Phase   Name                               Validation Tests Performed   Content Validity   Construct Validity   Reliability
I       Pretest                            Qualitative                  X                  X                    X
II      Technical Validation               Cronbach alphas                                                      X
                                           MTMM Analysis                                   X                    X
III     Pilot Test                         Cronbach alphas                                                      X
                                           Factor Analysis                                 X
                                           Qualitative                  X
IV      Full-Scale Victimization Survey

By the final version, the draft instrument had undergone a dramatic metamorphosis. It had collected data in every class of the relevant variables. To provide sufficient data points to assess reliability, several measures of each significant independent variable as well as each dependent variable were included.

Phase II: Technical validation

In keeping with the project plan, technical instrument validation occurred during the fall of 1984 and the winter of 1985 (Phase II). Its purpose was to assess construct validity and reliability. To triangulate on the postulated constructs (construct validity), extremely dissimilar methods (Campbell and Fiske, 1959) were utilized. The purpose in this case is similar to the purpose of triangulation in research in general (Cook and Campbell, 1979). By bringing very different data-gathering methods to bear on a phenomenon of interest, it is possible, by comparing results, to determine the extent to which instrumentation affects the findings, i.e., how robust the findings truly are.

Specifically, personal interviews were conducted with 44 key security informants. The target population was primarily information system managers, computer security specialists, and internal auditors. Their responses were correlated with questionnaire responses made independently by other members of the organization who had equal access to security information. The instrument also included equivalent, "maximally similar" measures (Campbell, 1960, p. 550) to gauge the extent of the random error (reliability).

In Phases II and III the instrument was quantitatively validated along several methodological dimensions. These dimensions and analytical techniques are summarized in Table 3.

Technical Validation of Construct Validity

Tests of construct validity are generally intended to determine if measures across subjects are similar across methods of measuring those variables. In the Computer Abuse Measurement Project, methods were designed to be "maximally different," in accordance with the Campbell and Fiske criteria. Triangulation by dissimilar methods is designed to isolate common method variance and assure that the Campbell and Fiske assumption of independence of methods is not violated. A personal interview was conducted with one participant while a pencil-and-paper instrument (the pretested questionnaire) was given to an equally knowledgeable participant in the same organization. During the interactive interviewing process, the researcher verbally presented the questions and recorded responses in a 1-2 hour timeframe. A limited amount of consultation was permitted, but, in general, respondents were encouraged to keep their own counsel. Questionnaire respondents, on the other hand, had no personal contact with the researcher and responded entirely on their own. This instrument was completed at leisure and returned by mail to the researcher. In all instances, the need for independence of answers from the two types of respondents was stressed.

If measures vary little from the pencil-and-paper instrument to the personal interview, they can be said to be independent of the methods used to gather the data and to demonstrate high construct validity. Conversely, high method variance indicates that measures are more a function of the instrumentation than of underlying constructs. As an analytical technique, Campbell and Fiske's MTMM allows patterns of selected, individual measures to be compared through a correlation matrix of measures by methods, as shown in Figure 4.11 Analysis of the MTMM matrix in Figure 4 shows generally low method variance and high convergent/discriminant validity. This can be seen most directly by comparing the values in the validity diagonal -- the homotrait-heteromethod diagonal encircled in the lower left matrix partition or block -- with values in the same row or column of each trait. Evidence in favor of what is termed "convergent validity" is a relatively high, statistically significant correlation in the validity diagonal. Evidence in favor of "discriminant validity" occurs if that correlation is higher than other values in the same row or column, i.e., r(i,i) > r(i,j) and r(i,i) > r(j,i) where i is not equal to j. The total personnel hours trait (item 11), for example, has a .65 correlation significant at the .05 level and is greater than other entries in its row and column.

11 The matrix in Figure 4 is only a partial matrix of all the correlations evaluated. These matrix elements were chosen simply for purposes of demonstration.

Figure 4. MTMM Matrix for Victimization Instruments
(A multitrait-multimethod correlation matrix crossing the interview and pencil-and-paper methods for eleven selected items. Legend: Item 2, Years experience in IS; Item 8, EDP budget per year; Item 9, Staff members working >20 hours; Item 10, Staff members working <19 hours; Item 11, Total personnel hours; Item 14, Total expenditures per year; Item 19, Opinion: Does your computer security deter abuse?; Item 21, Opinion: Is your computer security tight?; Item 27, Position of perpetrator; Item 29, Motivation of perpetrator; Item 34, Subjective seriousness of incidence. An asterisk marks positive significance at the .05 level.)

In the MTMM test, the test instrument failed to achieve convergent validity on three out of the 11 items (items 2, 19, and 21), as evidenced by low, non-significant correlations in the validity diagonal (r = .25, .21, and .25, respectively). Trait 2, a simple indicator of the number of years of experience of the respondent, however, should demonstrate only random associations, overall, with both itself and other traits in the homomethod or heteromethod blocks. That is, there is no particular reason why two participants from the same organization should have had the same years of experience in information systems, an interpretation borne out by the non-significant validity diagonal value for this trait. By virtue of the validation design, traits 19 and 21 may not be as highly correlated in the validity diagonal as the objective measures (traits 9, 10, 11, and 14), since these are subjective measures from very different sources of information. It is reasonable to expect that two people in the same organization can give very similar objective answers about security efforts (e.g., traits 9, 10, 11, and 14); it is also reasonable to expect them to vary on opinions about the effectiveness of these efforts. And the data unveils this pattern.

Even though the instrument, upon inspection, does pass the first Campbell-Fiske desideratum for convergent validity, there are several violations of discriminant validity. In the interview homomethod block, the correlation r(14,11), for instance, is higher at .84 than the validity diagonal value of .65. Likewise, in the pencil-and-paper homomethod block, r(11,9) exceeds its corresponding validity diagonal value of .65. Interpretations of aberrations such as these can be difficult (Farh, et al., 1984; Marsh and Hocevar, 1983). For one thing, it is well-known that high but spurious correlations will occur by chance alone in large matrices such as the one in Figure 4 (231 elements). Yet, in spite of reasonable or otherwise ingenious explanations that can be offered for off-diagonal high correlations, it may be more straightforward in MTMM analysis to classify departures from the technical criteria simply as violations. As long as violations do not completely overwhelm good fits, the instrument can be said to have acceptable measurement properties.

An analysis of common method variance is the last procedure in evaluating this matrix. Note that the interview-based monomethod correlation between items 11 and 14 (r=.84) is significantly elevated over the parallel heteromethod correlation between items 11 and 14 (r=.39). The elevation of .84 over .39 is an index of the degree of common method variance, which appears to be substantial. Basically, .39 is the correlation between items 11 and 14 with the common method variance due to interviewing removed. Similarly, the pencil-and-paper-based monomethod correlation between items 9 and 11, r=.67, should be compared against the corresponding heteromethod correlation of .57. Examining other high correlations in the monomethod triangles and comparing them to their heteromethod counterparts suggests that items 9, 11, and 14 do demonstrate common method variance.
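The Campbell-Fiske checks just described reduce to a few systematic comparisons over the correlation matrix. The sketch below (all correlation values and trait names are invented for illustration and do not reproduce Figure 4) shows the mechanical form of the convergent, discriminant, and common-method-variance tests:

    import numpy as np

    # Hypothetical heteromethod block: rows are traits measured by interview,
    # columns are the same traits measured by the pencil-and-paper questionnaire.
    traits = ["staff_hours", "expenditures", "deterrent_opinion"]
    hetero = np.array([[0.65, 0.39, 0.10],
                       [0.30, 0.62, 0.05],
                       [0.12, 0.08, 0.21]])

    for i, trait in enumerate(traits):
        diag = hetero[i, i]                                # same trait, different methods
        off = np.concatenate([np.delete(hetero[i, :], i),  # same row, other traits
                              np.delete(hetero[:, i], i)]) # same column, other traits
        convergent = diag > 0.30          # crude stand-in for "significantly above zero"
        discriminant = bool(np.all(diag > off))            # r(i,i) > r(i,j) and r(i,i) > r(j,i)
        print(f"{trait}: convergent={convergent}, discriminant={discriminant}")

    # Common method variance: a monomethod correlation far above its heteromethod
    # counterpart (.84 vs .39 for items 11 and 14 in the article) suggests that shared
    # method, rather than shared constructs, accounts for part of the association.
    mono, hetero_counterpart = 0.84, 0.39
    print(f"common-method inflation for items 11 and 14: {mono - hetero_counterpart:.2f}")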

Perhaps part of the problem is that subjects are confounded with methods in the design. The people who responded to the interview were a different group than those who filled out the pencil-and-paper questionnaire. Thus, there is likely a systematic effect because of "person," apart from the effect of method per se, that accounts for this pattern of findings.12 If a person overestimates the number of staff, it is likely he or she will also overestimate total personnel hours. Thus, one might expect a person's estimate of staff to correlate more highly with their own estimate of personnel hours than with someone else's estimate of staff. If this correlation is a true accounting of the real world, then "person" is a source of shared method variance, i.e., errors that are not statistically independent of each other that are common within a method but not across two methods. Unfortunately, with the existing design, it is not possible to disentangle the effects due to people from the effects due to the instrument per se.

Even though this design does fulfill the Campbell-Fiske (1959) criterion for "maximally different methods," common method variance detracts from the "relative validity" (p. 84) that is generally found throughout the matrix. In this study, if both methods had been administered to each respondent as one solution to evaluating common method variance, a new source of confounding -- a test-retest bias (Cook and Campbell, 1979) -- would have been introduced.

12 The effect here might have been mitigated if the validation sample had been randomly selected from the population of interest. Random selection of subjects or participants from the population as well as random assignment to treatments (in the case of experimental research) reduces the possibility that systematic effects of "person" (as in "individual differences") are present in the data. Random selection affects external validity whereas random assignment affects internal validity (Cook and Campbell, 1979).

Technical Validation of Reliability

Essentially, reliability is a statement about the stability of individual measures across replications from the same source of information (within subjects in this case). For example, a respondent to the questionnaire who indicated that his or her overall organization had sales of $5 billion plus but only 500-749 employees would almost certainly have misinterpreted one or both of the questions. If enough respondents were inconsistent in their answers to these items, for this and other reasons, the items would contain abnormally high measurement error and would, hence, be unreliable measures. Contrariwise, if individual measures are reliable for the most part, the entire instrument can be said to have minimal measurement error. Findings based on a reliable instrument are better supported, and parameter estimates are more efficient.

Coefficients of internal consistency and alternative forms were used to test the instrument. Both are explained in depth in Churchill (1979) and Bailey (1978). Cronbach alphas reported in the diagonal of the MTMM matrix pass the .80 rule-of-thumb test used as a gauge for reliable measures.
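For reference, the internal-consistency coefficient invoked here can be computed directly from raw item scores. The Python sketch below is a minimal illustration; the five-point responses and the grouping of four items into one measure are hypothetical, not the study's data.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of scores for a single construct."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Four hypothetical items scored 1-5 by five respondents.
    scores = np.array([
        [4, 5, 4, 4],
        [2, 2, 3, 2],
        [5, 5, 5, 4],
        [3, 3, 2, 3],
        [4, 4, 4, 5],
    ])

    alpha = cronbach_alpha(scores)
    print(f"alpha = {alpha:.3f}",
          "passes the .80 rule of thumb" if alpha >= 0.80 else "below .80")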

Phase III: Pilot test of reliability and construct validity

To further validate the instrument, a pilot survey of randomly selected DPMA (Data Processing Management Association) members was carried out in January of 1985. Judging from the 170 returned questionnaires, the pilot test once again confirmed that measurement problems in the instrument were not seriously disabling.

The instrument was first tested for reliability using Cronbach alphas and Pearson correlations. The variables presented in the MTMM analysis passed the .80 rule-of-thumb test with coefficients of .938, .934, .982, and .941.

Construct validity is assessed in a pilot instrument by establishing the factorial validity (Allen and Yen, 1979) of constructs. This technique, which has been utilized in the UIS (e.g., Ives, et al., 1983) and IS job satisfaction arenas (e.g., Ferratt and Short, 1986), has become popular as a result of the sophisticated statistical capabilities of factor analysis. Factor analysis allows the researcher to answer questions about which measures covary in explaining the highest percentage of the variance in a dataset. The researcher can approach the question with principal components factor analysis or confirmatory factor analysis.13 Thus, factorial validity helps to confirm that a certain set of measures does or does not reflect a latent construct.

A principal components factor analysis of the same subset of variables illustrated in the MTMM analysis shows that measures of the computer security deterrent construct (items 9, 10, 11, and 14) all contribute heavily to or "load" on a single factor, as shown in Table 4. Once the first factor is extracted, the analytical technique attempts to find other factors or sets of variables that best explain variance in the dataset. After four such extractions (with eigenvalues of at least 1.0), the selected measures load at a .5 cutoff level; together the loadings explain 97 percent of the variance in the dataset. In brief, results of this test support the view that measures of deterrence in the questionnaire are highly interrelated and do constitute a construct.

Table 4. Loadings for Factor Analysis of Pilot Instrument

               Factor 1   Factor 2   Factor 3   Factor 4
    Item 9       .956
    Item 10      .910
    Item 11      .885
    Item 14      .847
    Item 2
    Item 19
    Item 21
    Item 27
    Item 34
    Item 29

(The loadings of items 2, 19, 21, 27, 29, and 34 on Factors 2 through 4 are not recoverable from the source.)

13. One of the most sophisticated measurement model factor analysis tools is LISREL.
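As a concrete illustration of the kind of factorial-validity check described above, the Python sketch below extracts unrotated principal components from a correlation matrix, retains those with eigenvalues of at least 1.0, and reports which items load at or above the .5 cutoff on the first component. The data are simulated (four items sharing one latent factor plus two unrelated items); they are not the pilot data, and the study's own analysis may well have used a rotated solution.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical responses: 200 respondents, 6 items; items 0-3 share a latent factor.
    factor = rng.normal(size=(200, 1))
    data = np.hstack([
        factor + 0.4 * rng.normal(size=(200, 4)),   # four correlated "deterrent" items
        rng.normal(size=(200, 2)),                  # two unrelated items
    ])

    corr = np.corrcoef(data, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(corr)

    # Sort components from largest to smallest eigenvalue and keep those >= 1.0.
    order = np.argsort(eigenvalues)[::-1]
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
    keep = eigenvalues >= 1.0

    # Loadings are item-component correlations: eigenvectors scaled by sqrt(eigenvalue).
    loadings = eigenvectors[:, keep] * np.sqrt(eigenvalues[keep])

    print("retained components:", keep.sum())
    print("share of variance explained:", eigenvalues[keep].sum() / corr.shape[0])
    print("items loading >= .5 on component 1:",
          np.where(np.abs(loadings[:, 0]) >= 0.5)[0])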


Besides their use in validation, pilot tests are also desirable because they provide a testing ground or dry run for final administration of the instrument. In this case, for example, some items were easily identified as problematic by virtue of their lack of variance, or what are frequently called "floor" or "ceiling" effects. Items 8 and 9 were originally scaled in such a way that answers tended to crowd together at the high end of the scale, i.e., the scaling of these items did not allow for discrimination among large organizations.

Other problems that need to be resolved can also surface during pilot testing. A good example in this validation was an item on "Annual cost of computer security insurance." Most respondents in the field interviews assumed that even though their organization did not have specific insurance coverage for violations of computer security, others did. And they also assumed that such costs could be estimated. Given the media attention to security insurance matters in the last several years, these assumptions are quite understandable. The pilot survey data clearly showed that the question was badly misjudged for content; there was virtually no variance at all on this item. Over 95 percent of the respondents left this item blank; some of those that did respond indicated through notations in the margins that they were not sure what the question meant.
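The two kinds of problem item just described, answers crowding at one end of a scale and an item left blank by most respondents, are easy to screen for mechanically once pilot data are coded. A minimal Python sketch follows; the responses and the cutoffs (an endpoint share above 80 percent, a nonresponse rate above 50 percent) are illustrative choices, not thresholds taken from the study.

    import numpy as np

    # Hypothetical pilot responses on a 1-5 scale; np.nan marks a blank item.
    responses = np.array([
        [5, 5, np.nan],
        [5, 4, np.nan],
        [5, 5, np.nan],
        [5, 3, 2.0],
        [5, 4, np.nan],
    ])

    n_respondents, n_items = responses.shape
    for j in range(n_items):
        col = responses[:, j]
        answered = col[~np.isnan(col)]
        blank_rate = 1 - answered.size / n_respondents
        # Share of answers sitting at the top of the scale (a possible ceiling effect).
        ceiling_share = np.mean(answered == 5) if answered.size else 0.0
        flags = []
        if blank_rate > 0.5:
            flags.append("high nonresponse")
        if ceiling_share > 0.8:
            flags.append("possible ceiling effect")
        print(f"item {j + 1}: blank rate {blank_rate:.0%}, "
              f"ceiling share {ceiling_share:.0%}", flags or "ok")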

The overall assessment of these validation tests was that the instrument had acceptable measurement properties. It meant, in essence, that much greater confidence could be placed in later use of the instruments.

Phase IV: Full scale victimization survey

The full scale implementation of the survey instrument resulted in 1,211 responses and 259 separate incidents of computer abuse. As a result of the work undertaken prior to this phase of the research, it was felt that stronger inferences could be made about causal relationships among variables and about theory.

Guidelines for Validating Instruments in MIS Research

Clearly, numerous contingencies affect the collection and evaluation of research data. In a technology-driven field such as MIS, windows of opportunity for gathering data appear and disappear so rapidly that researchers often feel they cannot afford the time required to validate their data collection instruments. Researchers engaging in initial studies or exploratory studies such as case studies may feel that validated instruments are not critical. Experimentalists concentrating on legitimate concerns of internal validity (Jarvenpaa, et al., 1984), moreover, may not realize that their pre- and posttreatment data-gathering instruments are, in fact, sine qua non means of controlling extraneous sources of variation.

Nevertheless, it is desirable for MIS as a field, experimentalists and case researchers not excepted (Campbell, 1975; Fromkin and Streufert, 1976), to apply more rigorous scientific standards to data collection procedures and instruments (Mitchell, 1985). As an initial step, a set of minimal guidelines may be considered as follows:

• Researchers should pretest and/or pilot test instruments, attempting to assess as many validities as possible in this process.

• MIS journal editors should encourage or require researchers to prepare an "Instrument Validation" subsection of the Methodology section; this subsection can include, at the least, reliability tests and factorial validity tests of the final administered instrument.

As instrumentation issues become more internalized in the MIS research process, more stringent standards can be adopted as follows:

• Researchers should use previously validated instruments wherever possible, being careful not to make significant alterations in the validated instrument without revalidating instrument content, constructs, and reliability.

• Researchers should undertake technical validation whenever it is critical to ground central constructs in major MIS research streams.

Validating Instruments for Use in MIS Research Streams

There are numerous research streams in MIS that would gain considerable credibility from more carefully articulated constructs and measures. System success, as probably the central performance variable in MIS, is a prime candidate. It is the effect sought in large scale transactional processing systems (Ives and Olson, 1984), decision support systems (Poole and DeSanctis, 1987), and user-developed applications (Rivard and Huff, 1988). Although instruments measuring certain dependent variables such as UIS have been subjected to validation (e.g., Ives, et al., 1983), there have been few, if any, validations of instruments measuring other components of system success (such as system acceptance or systems usage) or independent variables (such as user involvement) (Ives and Olson, 1984). Varying components of user involvement have been tested, but no validation of the construct of user involvement has yet been undertaken. Recent studies (Baronas and Louis, 1988; Tait and Vessey, 1988; Yaverbaum, 1988) have employed altered forms of previously validated instruments (i.e., the Baroudi, Ives, and Olson instrument and the Job Diagnostic Survey), but the only tests of the user involvement construct have been reliability statistics.

Basic macro-level constructs in the field, constructs like "information" and "information value," are still in need of validation and further refinement. Epstein and King (1982), Larcker and Lessig (1980), O'Reilly (1982), Swanson (1974; 1986; 1987), and Zmud (1978) all deal with important questions surrounding data, information, information attributes, and information value. But until Goodhue (1988), no technical validation effort (including MTMM analysis) had been undertaken to clarify this stream of research.

There are, in fact, whole streams of research in the field where primary research instruments remain unverified. Research programs exploring the value of software engineering techniques (Vessey and Weber, 1984), life cycle enhancements such as prototyping (Jenkins and Byrer, 1987), and factors affecting successful end-user computing (Brancheau, 1987) are all cases in point. MIS research, moreover, suffers from measurement problems in exploring relationships among variables such as information quality, secure systems, user-friendly systems, MIS sophistication, and decision quality.

Conclusion

Instrument validation is a prior and primary process in confirmatory empirical research. Yet, in spite of growing awareness within the MIS field that methodologies need to be more rigorous, most of the empirical studies continue to use largely unvalidated instruments. Even though MIS researchers frequently adopt instruments that have been used in previous studies, a methodological approach that can significantly undergird the research effort, these advantages are lost in most instances either because the adapted instrument itself has not been validated or because the researcher has made major alterations in the validated instrument without retesting it.

It is important for MIS researchers to recognize that valid statistical conclusions by no means ensure that a causal relationship between variables exists. It is also important to realize that, in spite of the need to warrant internal validity, this validation does not test whether the research instrument is measuring what the researcher intended to measure. Measurement problems in MIS can only be resolved through instrument validation.

This article argues that instrument validation at any level can be of considerable help to MIS researchers in substantiating their findings. Specific guidelines for improvements include pre- and pilot testing, formal validation procedures, and, finally, close imitation of previously validated instruments. As demonstrated in the case of the Computer Abuse Measurement Project, instrument pretests can be useful in qualitatively establishing the reliability, construct validity, and content validity of measures. Formal validation, either MTMM analysis or factor analysis, offers statistical evidence that the instrument itself is not seriously interfering with the gathering of accurate data. Pilot tests can permit testing of reliability and construct validity, identify and help correct scaling problems, and serve as dry runs for final administration of the instrument.

Improving empirical MIS research is a two-part process. First, we must recognize that we have methodological problems that need to be addressed, and second, and even more important, we must take action by incorporating instrument validation into our research efforts. Serious attention to the issues of validation can move the field toward meaningfully replicated studies and refined concepts and away from intractable constructs and flawed measures.


References

ABA. "Report on Computer Crime," pamphlet, American Bar Association, Task Force on Computer Crime, Section on Criminal Justice, 1800 M Street, Washington, D.C. 20036, 1984.
AICPA. "Report on the Study of EDP-Related Fraud in the Banking and Insurance Industries," pamphlet, American Institute of Certified Public Accountants, Inc., 1211 Avenue of the Americas, New York, NY, 1984.
Allen, M.J. and Yen, W.M. Introduction to Measurement Theory, Brooks-Cole, Monterey, CA, 1979.
Bagozzi, R.P. "The Role of Measurement in Theory Construction and Hypothesis Testing: Toward a Holistic Model," in Conceptual and Theoretical Developments in Marketing, American Marketing Association, Chicago, IL, 1979.
Bagozzi, R.P. Causal Modelling in Marketing, Wiley & Sons, New York, NY, 1980.
Bailey, K.D. Methods of Social Research, The Free Press, New York, NY, 1978.
Bailey, J.E. and Pearson, S.W. "Development of a Tool for Measuring and Analysing Computer User Satisfaction," Management Science (29:5), May 1983, pp. 530-545.
Baronas, A.K. and Louis, M.R. "Restoring a Sense of Control During Implementation: How User Involvement Leads to System Acceptance," MIS Quarterly (12:1), March 1988, pp. 111-126.
Baroudi, J.J. and Orlikowski, W.J. "The Problem of Statistical Power in MIS Research," MIS Quarterly (13:1), March 1989, pp. 87-106.
Benbasat, I., Goldstein, D.K. and Mead, M. "The Case Research Strategy in Studies of Information Systems," MIS Quarterly (11:3), September 1987, pp. 369-388.
Blalock, H.M., Jr. Theory Construction: From Verbal to Mathematical Formulations, Prentice-Hall, Englewood Cliffs, NJ, 1969.
Blumstein, A., Cohen, J. and Nagin, D. "Introduction," in Deterrence and Incapacitation: Estimating the Effects of Criminal Sanctions on Crime Rates, A. Blumstein, J. Cohen and D. Nagin (eds.), National Academy of Sciences, Washington, D.C., 1978.
Brancheau, J.C. The Diffusion of Information Technology: Testing and Extending Innovation Diffusion Theory in the Context of End-User Computing, unpublished Ph.D. dissertation, University of Minnesota, 1987.
Brancheau, J.C. and Wetherbe, J.C. "Key Issues in Information Systems -- 1986," MIS Quarterly (11:1), March 1987, pp. 23-45.
Campbell, D.T. "Recommendations for APA Test Standards Regarding Construct, Trait, and Discriminant Validity," American Psychologist (15), August 1960, pp. 546-553.
Campbell, D.T. "Degrees of Freedom and the Case Study," Comparative Political Studies (8:2), July 1975, pp. 178-191.
Campbell, D.T. and Fiske, D.W. "Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix," Psychological Bulletin (56), March 1959, pp. 81-105.
Canning, R. "Information Security and Privacy," EDP Analyzer (24:2), February 1986, pp. 1-16.
Churchill, G.A., Jr. "A Paradigm for Developing Better Measures of Marketing Constructs," Journal of Marketing Research (16), February 1979, pp. 64-73.
Cohen, J. Statistical Power Analysis for the Behavioral Sciences, Academic Press, New York, NY, 1969.
Cohen, J. Statistical Power Analysis for the Behavioral Sciences, Revised Edition, Academic Press, New York, NY, 1977.
Colton, K.W., Tien, J.M., Tvedt, S., Dunn, B. and Barnett, A.I. "Electronic Funds Transfer Systems and Crime," interim report in ongoing study on "The Nature and Extent of Criminal Activity in Electronic Funds Transfer and Electronic Mail Systems," supported by Grant 80-BJ-CX-0026, U.S. Bureau of Justice Statistics, 1982. Referenced by special permission.
Cook, P.J. "Research in Criminal Deterrence: Laying the Groundwork for the Second Decade," in Crime and Justice: An Annual Review of Research (2), University of Chicago Press, Chicago, IL, 1982, pp. 211-268.
Cook, T.D. and Campbell, D.T. Quasi-Experimentation: Design and Analytical Issues for Field Settings, Rand McNally, Chicago, IL, 1979.
Coombs, C.H. A Theory of Data, Wiley, New York, NY, 1964.
Cronbach, L.J. "Coefficient Alpha and the Internal Consistency of Tests," Psychometrika (16), September 1951, pp. 297-334.
Cronbach, L.J. "Test Validation," in Educational Measurement, 2nd Edition, R.L. Thorndike (ed.), American Council on Education, Washington, D.C., 1971, pp. 443-507.
Cronbach, L.J. and Meehl, P.E. "Construct Validity in Psychological Tests," Psychological Bulletin (52), 1955, pp. 281-302.
Dickson, G.W., Benbasat, I. and King, W.R. "The Management Information Systems Area: Problems, Challenges, and Opportunities," Proceedings of the First International Conference on Information Systems, Philadelphia, PA, December 8-10, 1980, pp. 1-8.


Dickson, G.W., Leitheiser, R.L., Wetherbe, J.C. and Nechis, M. "Key Information Systems Issues for the '80s," MIS Quarterly (8:3), September 1984, pp. 135-159.
Epstein, B.J. and King, W.R. "An Experimental Study of the Value of Information," Omega (10:3), 1982, pp. 249-258.
Farh, J., Hoffman, R.C. and Hegarty, W.H. "Assessing Environmental Scanning at the Subunit Level: A Multitrait-Multimethod Analysis," Decision Sciences (15), 1984, pp. 197-220.
Farhoomand, A. "Scientific Progress of Management Information Systems," Data Base, Summer 1987, pp. 48-56.
Ferratt, T.W. and Short, L.E. "Are Information Systems People Different: An Investigation of Motivational Differences," MIS Quarterly (10:4), December 1986, pp. 377-388.
Fromkin, H.L. and Streufert, S. "Laboratory Experimentation," in Handbook of Industrial and Organizational Psychology, Marvin D. Dunnette (ed.), Rand McNally, Chicago, IL, 1976.
Gibbs, J. Crime, Punishment, and Deterrence, Elsevier, New York, NY, 1975.
Glaser, B.G. and Strauss, A.L. The Discovery of Grounded Theory: Strategies for Qualitative Research, Aldine, New York, NY, 1967.
Goodhue, D.L. Supporting Users of Corporate Data: The Effect of I/S Policy Choices, unpublished Ph.D. dissertation, Massachusetts Institute of Technology, 1988.
Hair, J.F., Jr., Anderson, R.E., Tatham, R.L. and Grablowsky, B.J. Multivariate Data Analysis, PPC Books, Tulsa, OK, 1979.
Hamilton, J.S. and Ives, B. "MIS Research Strategies," Information and Management (5:6), December 1982a, pp. 339-347.
Hamilton, J.S. and Ives, B. "Knowledge Utilization among MIS Researchers," MIS Quarterly (6:4), December 1982b, pp. 61-77.
Hanushek, E.A. and Jackson, J.E. Statistical Methods for Social Scientists, Academic Press, Orlando, FL, 1977.
Hunter, J.E., Schmidt, F.L. and Jackson, G.B. Meta-Analysis: Cumulating Research Findings Across Studies, Sage, Beverly Hills, CA, 1983.
Ives, B. and Olson, M.H. "User Involvement and MIS Success: A Review of Research," Management Science (30:5), May 1984, pp. 586-603.
Ives, B., Olson, M.H. and Baroudi, J.J. "The Measurement of User Information Satisfaction," Communications of the ACM (26:10), October 1983, pp. 785-793.
Jarvenpaa, S.L., Dickson, G.W. and DeSanctis, G.L. "Methodological Issues in Experimental IS Research: Experiences and Recommendations," Proceedings of the Fifth International Information Systems Conference, Tucson, AZ, November 1984, pp. 1-30.
Jenkins, A.M. MIS Design Variables and Decision Making Performance, UMI Research Press, Ann Arbor, MI, 1983.
Jenkins, A.M. "Research Methodologies and MIS Research," in Research Methods in Information Systems, E. Mumford (ed.), Elsevier Science Publishers B.V. (North-Holland), Amsterdam, 1985, pp. 315-320.
Jenkins, A.M. and Byrer, J. "An Annotated Bibliography on Prototyping," Working Paper W-706, IRMIS (Institute for Research on the Management of Information Systems), Indiana University Graduate School of Business, Bloomington, IN, 1987.
Kerlinger, F.N. Foundations of Behavioral Research, Holt, Rinehart, and Winston, New York, NY, 1964.
King, W.A. "Strategic Planning for Information Systems," research presentation, Indiana University School of Business, March 1985.
Kling, R. "Computer Abuse and Computer Crime as Organizational Activities," Computer Law Journal (2:2), 1980, pp. 186-196.
Kraemer, H.C. and Thiemann, S. How Many Subjects? Statistical Power Analysis in Research, Sage, Newbury Park, CA, 1987.
Kusserow, R.P. "Computer-Related Fraud and Abuse in Government Agencies," pamphlet, U.S. Dept. of Health and Human Services, Washington, D.C., 1983.
Larcker, D.F. and Lessig, V.P. "Perceived Usefulness of Information: A Psychometric Examination," Decision Sciences (11:1), 1980, pp. 121-134.
Lincoln, Y.S. and Guba, E.G. Naturalistic Inquiry, Sage, Beverly Hills, CA, 1985.
Lindman, H.R. ANOVA in Complex Experimental Designs, W.H. Freeman, San Francisco, CA, 1974.
Long, J.S. Confirmatory Factor Analysis, Sage, Beverly Hills, CA, 1983.
Mackenzie, K.D. and House, R. "Paradigm Development in the Social Sciences," in Research in Organizations: Issues and Controversies, R.T. Mowday and R.M. Steers (eds.), Goodyear Publishing, Santa Monica, CA, 1979, pp. 22-38.


Marsh, H.W. and Hocevar, D. "Confirmatory Factor Analysis of Multitrait-Multimethod Matrices," Journal of Educational Measurement (20:3), Fall 1983, pp. 231-248.
McFarlan, F.W. (ed.). The Information Systems Research Challenge, Harvard Business School Press, Boston, MA, 1984.
McFarlan, F.W. "Editor's Comment," MIS Quarterly (10:1), March 1986, pp. i-ii.
McGrath, J. "Toward a 'Theory of Method' for Research on Organizations," in Research in Organizations: Issues and Controversies, R.T. Mowday and R.M. Steers (eds.), Goodyear Publishing, Santa Monica, CA, 1979, pp. 4-21.
Mitchell, T.R. "An Evaluation of the Validity of Correlational Research Conducted in Organizations," Academy of Management Review (10:2), 1985, pp. 192-205.
Nagin, D. "General Deterrence: A Review of the Empirical Evidence," in Deterrence and Incapacitation: Estimating the Effects of Criminal Sanctions on Crime Rates, A. Blumstein, J. Cohen, and D. Nagin (eds.), National Academy of Sciences, Washington, D.C., 1978.
Nunnally, J.C. Psychometric Theory, McGraw-Hill, New York, NY, 1967.
O'Reilly, C.A., III. "Variations in Decision Makers' Use of Information Sources: The Impact of Quality and Accessibility of Information," Academy of Management Journal (25:4), 1982, pp. 756-771.
Parker, D.B. Computer Security Management, Reston Publications, Reston, VA, 1981.
Poole, M.S. and DeSanctis, G.L. "Group Decision Making and Group Decision Support Systems: A 3-Year Plan for the GDSS Research Project," working paper MISRC-WP-88-02, Management Information Systems Research Center, University of Minnesota, Minneapolis, MN, 1987.
Popper, K. The Logic of Scientific Discovery, Harper Torchbooks, New York, NY, 1968. (First German Edition, Logik der Forschung, 1935.)
Reichardt, C. "Statistical Analysis of Data from Nonequivalent Group Designs," in Quasi-Experimentation, T.D. Cook and D.T. Campbell (eds.), Houghton-Mifflin, New York, NY, 1979.
Ricketts, J.A. and Jenkins, A.M. "The Development of an MIS Satisfaction Questionnaire: An Instrument for Evaluating User Satisfaction with Turnkey Decision Support Systems," Discussion Paper No. 296, Indiana University School of Business, Bloomington, IN, 1985.
Rivard, S. and Huff, S.L. "Factors of Success for End-User Computing," Communications of the ACM (31:5), May 1988, pp. 552-561.
Sharma, S., Durand, R.M. and Gur-Arie, O. "Identification and Analysis of Moderator Variables," Journal of Marketing Research (18), August 1981, pp. 291-300.
Skogan, W.G. Issues in the Measurement of Victimization, NCJ-74682, U.S. Department of Justice, Bureau of Justice Statistics, Washington, D.C., 1981.
Sprague, R.H., Jr. and McNurlin, B.C. (eds.). Information Systems Management in Practice, Prentice-Hall, Englewood Cliffs, NJ, 1986.
Straub, D.W. and Widom, C.S. "Deviancy by Bits and Bytes: Computer Abusers and Control Measures," in Computer Security: A Global Challenge, J.H. Finch and E.G. Dougall (eds.), Elsevier Science Publishers B.V. (North-Holland) and IFIP, Amsterdam, 1984, pp. 91-102.
Swanson, E.B. "Management Information Systems: Appreciation and Involvement," Management Science (21:2), October 1974, pp. 178-188.
Swanson, E.B. "A Note on Information Attributes," Journal of MIS (2:3), 1986, pp. 87-91.
Swanson, E.B. "Information Channel Disposition and Use," Decision Sciences (18:1), 1987, pp. 131-145.
Tait, P. and Vessey, I. "The Effect of User Involvement on System Success," MIS Quarterly (12:1), March 1988, pp. 91-110.
Vessey, I. and Weber, R. "Research on Structured Programming: An Empiricist's Evaluation," IEEE Transactions on Software Engineering (SE-10:4), July 1984, pp. 397-407.
Yaverbaum, G.J. "Critical Factors in the User Environment: An Experimental Study of Users," MIS Quarterly (12:1), March 1988, pp. 75-90.
Zmud, R.W. "An Empirical Investigation of the Dimensionality of the Concept of Information," Decision Sciences (9:2), 1978, pp. 187-195.


About the Author

Detmar W. Straub is assistant professor of management information systems at the University of Minnesota where he teaches courses in systems and pursues research at the Curtis L. Carlson School of Management. He joined the Minnesota faculty in September 1986 after completing a dissertation at Indiana University. Professor Straub has published a number of studies in the computer security management arena, but his research interests also extend into information technology forecasting, measurement of key IS concepts, and innovation and diffusion theory testing. His professional associations and responsibilities include: associate director, MIS Research Center, University of Minnesota; associate publisher of the MIS Quarterly; editorial board memberships; and consulting with the defense and transportation industries.


APPENDIX

Section I. Computer Abuse Questionnaire

Personal Information

1. YOUR POSITION:

[ ] President/Owner/Director/Chairman/Partner
[ ] Vice President/General Manager
[ ] Vice President of EDP
[ ] Director/Manager/Head/Chief of EDP/MIS
[ ] Director/Manager of Programming
[ ] Director/Manager of Systems & Procedures
[ ] Director/Manager of Communications
[ ] Director/Manager of EDP Operations
[ ] Director/Manager of Data Administration
[ ] Director/Manager of Personal Computers
[ ] Director/Manager of Information Center
[ ] Data Administrator or Data Base Administrator
[ ] Data/Computer Security Officer
[ ] Senior Systems Analyst
[ ] Systems/Information Analyst
[ ] Chief/Lead/Senior Applications Programmer
[ ] Applications Programmer
[ ] Chief/Lead/Senior Systems Programmer
[ ] Systems Programmer
[ ] Chief/Lead/Senior Operator
[ ] Machine or Computer Operator
[ ] Vice President of Finance
[ ] Controller
[ ] Director/Manager Internal Auditing or EDP Auditing
[ ] Director/Manager of Plant/Building Security
[ ] EDP Auditor
[ ] Internal Auditor
[ ] Consultant
[ ] Educator
[ ] User of EDP
[ ] Other (please specify): ______

2. YOUR IMMEDIATE SUPERVISOR’S POSITION:

[ ] President/Owner/Director/Chairman/Partner
[ ] Vice President/General Manager
[ ] Vice President of EDP
[ ] Director/Manager/Head/Chief of EDP/MIS
[ ] Director/Manager of Programming
[ ] Director/Manager of Systems & Procedures
[ ] Director/Manager of Communications
[ ] Director/Manager of EDP Operations
[ ] Director/Manager of Data Administration
[ ] Director/Manager of Personal Computers
[ ] Director/Manager of Information Center
[ ] Data/Computer Security Officer
[ ] Senior Systems Analyst
[ ] Chief/Lead/Senior Applications Programmer
[ ] Chief/Lead/Senior Systems Programmer
[ ] Chief/Lead/Senior Machine or Computer Operator
[ ] Vice President of Finance
[ ] Controller
[ ] Director/Manager Internal Auditing or EDP Auditing
[ ] Director/Manager of Plant/Building Security
[ ] Other (please specify): ______

3. NUMBER OF TOTAL YEARS EXPERIENCE IN/WITH INFORMATION SYSTEMS?
[ ] More than 14 years
[ ] 11 to 14 years
[ ] 7 to 10 years
[ ] 3 to 6 years
[ ] Less than 3 years
[ ] Not sure

Organizational Information

4. Approximate ASSETS and annual REVENUES of your organization:

                                 ASSETS                    REVENUES
                                 At all      At this       At all      At this
                                 Locations   Location      Locations   Location
    Over 5 Billion                 [ ]         [ ]           [ ]         [ ]
    1 Billion-5 Billion            [ ]         [ ]           [ ]         [ ]
    250 Million-1 Billion          [ ]         [ ]           [ ]         [ ]
    100 Million-250 Million        [ ]         [ ]           [ ]         [ ]
    50 Million-100 Million         [ ]         [ ]           [ ]         [ ]
    10 Million-50 Million          [ ]         [ ]           [ ]         [ ]
    5 Million-10 Million           [ ]         [ ]           [ ]         [ ]
    2 Million-5 Million            [ ]         [ ]           [ ]         [ ]
    1 Million-2 Million            [ ]         [ ]           [ ]         [ ]
    Under 1 Million                [ ]         [ ]           [ ]         [ ]
    Not sure                       [ ]         [ ]           [ ]         [ ]

5. NUMBER OF EMPLOYEES of your organization:

                          At all        At this
                          Locations     Location
    10,000 or more          [ ]           [ ]
    5,000-9,999             [ ]           [ ]
    2,500-4,999             [ ]           [ ]
    1,000-2,499             [ ]           [ ]
    750-999                 [ ]           [ ]
    500-749                 [ ]           [ ]
    250-499                 [ ]           [ ]
    100-249                 [ ]           [ ]
    6-99                    [ ]           [ ]
    Fewer than 6            [ ]           [ ]
    Not sure                [ ]           [ ]

6. PRIMARY END PRODUCT OR SERVICE of your organization at this location:
[ ] Manufacturing and Processing
[ ] Chemical or Pharmaceutical
[ ] Government: Federal, State, Municipal including Military
[ ] Educational: Colleges, Universities, and other Educational Institutions
[ ] Computer and Data Processing Services including Software Services, Service Bureaus, Time-Sharing and Consultants
[ ] Finance: Banking, Insurance, Real Estate, Securities, and Credit
[ ] Trade: Wholesale and Retail
[ ] Medical and Legal Services
[ ] Petroleum
[ ] Transportation Services: Land, Sea, and Air
[ ] Utilities: Communications, Electric, Gas, and Sanitary Services
[ ] Construction, Mining, and Agriculture
[ ] Other (please specify): ______

Are you located at Corporate Headquarters?  Yes [ ]  No [ ]


7. CITY (at this location)? ______  STATE? ______

8. TOTAL NUMBER OF EDP (Electronic Data Processing) EMPLOYEES at this location (excluding data input personnel):
[ ] More than 300
[ ] 250-300
[ ] 200-249
[ ] 150-199
[ ] 100-149
[ ] 50-99
[ ] 10-49
[ ] Fewer than 10
[ ] Not sure

9. Approximate EDP BUDGET per year of your organization at this location:
[ ] Over $20 Million
[ ] $10-$20 Million
[ ] $8-$10 Million
[ ] $6-$8 Million
[ ] $4-$6 Million
[ ] $2-$4 Million
[ ] $1-$2 Million
[ ] Under $1 Million
[ ] Not sure

Computer Security, Internal Audit, and Abuse Incident Information

A Computer Security function in an organization is any purposeful activity that has the objective of protecting assets such as hardware, programs, data, and computer service from loss or misuse. Examples of personnel engaged in computer security functions include data security and systems assurance officers. For this questionnaire, computer security and EDP audit functions will be considered separately.

                                                           Computer Security    EDP Audit
10. How many staff members are working 20 hours per
    week or more in these functions at this location?     ___ (persons)         ___ (persons)

11. How many staff members are working 19 hours per
    week or less in these functions at this location?     ___ (persons)         ___ (persons)

12. What are the total personnel hours per week
    dedicated to these functions?                         ___ (hours/wk)        ___ (hours/wk)

13. When were these functions initiated?                  ___ (month/yr)        ___ (month/yr)

If your answer to the Computer Security part of question 12 was zero, please go directly to question 25. Otherwise, continue.

14. Of these total computer security personnel hours per week (question 12), how many are dedicated to each of the following?
    A. Physical security administration, disaster recovery, and contingency planning  ___ (hours/week)
    B. Data security administration  ___ (hours/week)
    C. User and coordinator training  ___ (hours/week)
    D. Other (please specify): ______  ___ (hours/week)

15. EXPENDITURES per year for computer security at this location:
    Annual computer security personnel salaries  $______
    Do you have insurance (separate policy or rider) specifically for computer security losses?
    [ ] Yes  [ ] No  [ ] Not sure
    If yes, what is the annual cost of such insurance?  $______

16. SECURITY SOFTWARE SYSTEMS available and actively in use on the mainframe(s) [or minicomputer(s)] at this location:

                                                        Number of            Number of
                                                        available systems    systems in use
    Operating system access control facilities          ___                  ___
    DBMS security access control facilities             ___                  ___
    Fourth Generation software access control
    facilities                                          ___                  ___

17. Other than those security software systems you listed in question 16, how many SPECIALIZED SECURITY SOFTWARE SYSTEMS are actively in use? (Examples: ACF2, RACF)
    ___ (number of specialized security software systems actively in use)
    Of these, how many were purchased from a vendor?  ___ (number purchased from a vendor)
    ... and how many were developed in-house?  ___ (number developed in-house)

18. Through what INFORMATIONAL SOURCES are computer system users made aware OF THE APPROPRIATE AND INAPPROPRIATE USES OF THE COMPUTER SYSTEM? (Choose as many as applicable)
    [ ] Distributed EDP Guidelines
    [ ] Administrative program to classify information by sensitivity
    [ ] Periodic departmental memos and notes
    [ ] Distributed statements of professional ethics
    [ ] Computer Security Violations Reports
    [ ] Organizational meetings
    [ ] Computer Security Awareness Training sessions
    [ ] Informal discussions
    [ ] Other (please specify): ______

19. Which types of DISCIPLINARY ACTION do these informational sources mention (question 18) as consequences of purposeful computer abuse? (Choose as many as applicable)
    [ ] Reprimand
    [ ] Probation or suspension
    [ ] Firing
    [ ] Criminal prosecution
    [ ] Civil prosecution
    [ ] Other (please specify): ______

In questions 20-24, please indicate your reactions to the following statements:
(Strongly Agree / Agree / Not Sure / Disagree / Strongly Disagree)

20. The current computer security effort was in reaction in large part to actual or suspected past incidents of computer abuse at this location.

21. The activities of computer security administrators are well known to users at this location.

22. The presence and activities of computer security administrators deter anyone who might abuse the computer system at this location.

23. Relative to our type of industry, computer security is very effective at this location.

24. The overall security philosophy at this location is to provide very tight security without hindering productivity.

25. How many SEPARATE UNAUTHORIZED AND DELIBERATE INCIDENTS OF COMPUTER ABUSE has your organization at this location experienced in the 3 year period, Jan. 1, 1983-Jan. 1, 1986?
    ___ (number of incidents)
    (Please fill out a separate "Computer Abuse Incident Report" [blue-colored Section II] for each incident.)

26. How many incidents do you have reason to suspect other than those numbered above in this same 3 year period, Jan. 1, 1983-Jan. 1, 1986?
    ___ (number of suspected incidents)

27. Please briefly describe the basis (bases) for these suspicions.


Section II. Computer Abuse Incident Report
(covering the 3 year period, Jan. 1, 1983-Jan. 1, 1986)

Instructions: Please fill out a separate report for each incident of computer abuse that has occurred in the 3 year period, Jan. 1, 1983-Jan. 1, 1986.

28. WHEN WAS THIS INCIDENT DISCOVERED?

    Month/year: ___ / ___

29. HOW MANY PEOPLE WERE INVOLVED in committing the computer abuse in this incident?
    ___ (number of perpetrators)

30. POSITION(S) OF OFFENDER(S):                 Main Offender    Second Offender
    Top executive                                    [ ]              [ ]
    Security officer                                 [ ]              [ ]
    Auditor                                          [ ]              [ ]
    Controller                                       [ ]              [ ]
    Manager, supervisor                              [ ]              [ ]
    Systems Programmer                               [ ]              [ ]
    Data entry staff                                 [ ]              [ ]
    Applications Programmer                          [ ]              [ ]
    Systems analyst                                  [ ]              [ ]
    Machine or computer operator                     [ ]              [ ]
    Other EDP staff                                  [ ]              [ ]
    Accountant                                       [ ]              [ ]
    Clerical personnel                               [ ]              [ ]
    Student                                          [ ]              [ ]
    Consultant                                       [ ]              [ ]
    Not sure                                         [ ]              [ ]
    Other (please specify): (Main) ______  (Second) ______

31. STATUS(ES) OF OFFENDER(S) when incident occurred:   Main Offender    Second Offender
    Employee                                                 [ ]              [ ]
    Ex-employee                                              [ ]              [ ]
    Non-employee                                             [ ]              [ ]
    Not sure                                                 [ ]              [ ]
    Other (please specify): (Main) ______  (Second) ______

32. MOTIVATION(S) OF OFFENDER(S):                       Main Offender    Second Offender
    Ignorance of proper professional conduct                 [ ]              [ ]
    Personal gain                                            [ ]              [ ]
    Misguided playfulness                                    [ ]              [ ]
    Maliciousness or revenge                                 [ ]              [ ]
    Not sure                                                 [ ]              [ ]
    Other (please specify): (Main) ______  (Second) ______

33. MAJOR ASSET AFFECTED or involved: (Choose as many as applicable)
    [ ] Unauthorized use of computer service
    [ ] Disruption of computer service
    [ ] Data
    [ ] Hardware
    [ ] Programs

34. Was this a one-time incident or had it been going on for a period of time? (Choose one only)
    [ ] one-time event
    [ ] going on for a period of time
    [ ] not sure

35. If a one-time incident, WHEN DID IT OCCUR?
    Month ___  Year ___

36. If the incident had been going on for a period of time, how long was that?
    ___ years  ___ months

37. In your judgment, how serious a breach of security was this incident? (Choose one only)
    [ ] Extremely serious
    [ ] Serious
    [ ] Of minimal importance
    [ ] Not sure
    [ ] Of negligible importance

38. Estimated $ LOSS through LOST OPPORTUNITIES (if measurable): (Example: $3,000 in lost business because of data corruption)
    $___ (estimated $ loss through lost opportunities)

39. Estimated $ LOSS through THEFT and/or RECOVERY COSTS from abuse: (Example: $12,000 electronically embezzled plus $1,000 in salary to recover from data corruption + $2,000 in legal fees = $15,000)
    $___ (estimated $ loss through theft and/or recovery costs)

40. This incident was discovered... (Choose as many as applicable)
    [ ] by accident by a system user
    [ ] by accident by a systems staff member or an internal/EDP auditor
    [ ] through a computer security investigation other than an audit
    [ ] by an internal/EDP audit
    [ ] through normal systems controls, like software or procedural controls
    [ ] by an external audit
    [ ] not sure
    [ ] other (please specify): ______

41. This incident was reported to... (Choose as many as applicable)
    [ ] someone inside the local organization
    [ ] someone outside the local organization
    [ ] not sure

42. If this incident was reported to someone outside the local organization, who was that? (Choose as many as applicable)
    [ ] someone at divisional or corporate headquarters
    [ ] the media
    [ ] the police
    [ ] other authorities
    [ ] not sure

43. Please briefly describe the incident and what finally happened to the perpetrator(s).
