Survey Instrument Development in OB/HRM Research
Prof. Jiing-Lih Larry Farh, HKUST
IACMR Guangzhou workshop, July 2007

Construct and Measurement Related Problems in Manuscripts

- Too many constructs
- Constructs are poorly defined
- Measures do not match constructs
- Unreliable/invalid measures
- Level of measurement does not match the level of the theory

These are fatal flaws for empirical papers!!!

"The construction of the measuring devices is perhaps the most important segment of any study. Many well-conceived research studies have never seen the light of day because of flawed measures." (Schoenfeldt, 1984)

"The point is not that adequate measurement is 'nice'. It is necessary, crucial, etc. Without it we have nothing." (Korman, 1974, p. 194)

"Validation is an unending process…. Most psychological measures need to be constantly evaluated and reevaluated to see if they are behaving as they should." (Nunnally & Bernstein, 1994, p. 84)

Empirical Research Model

[Figure: conceptual-level variables X′ and Y′ linked by arrow (a); operational-level variables X and Y linked by arrows (c) and (d); vertical links (b1) between X′ and X, and (b2) between Y′ and Y.]

Notes:
1. Independent and dependent variables are identified by X and Y, respectively.
2. The prime symbol (′) designates a variable specified at the conceptual level.
3. Arrows represent the direction of influence or cause.
4. a: conceptual relationship; d: empirical relationship; b1, b2: construct validity; c: internal validity.

From Schwab (1999)

Validity in Research

- Construct validity is present when there is a high correspondence between the scores obtained on a measure and the mental definition of the construct it is designed to represent.
- Internal validity is present when variation in scores on a measure of an independent variable is responsible for variation in scores on a measure of a dependent variable.
- External validity is present when generalizations of findings obtained in a research study, other than statistical generalization, are made appropriately.

Construct Validation

- Involves the procedures researchers use to develop measures and to make inferences about a measure's construct validity
- It is a continual process
- No one method alone will give confidence in the construct validity of your measure

Construct Validation Steps

1. Define the construct and develop conceptual meaning for it
2. Develop/choose a measure consistent with the definition
3. Perform logical analyses and empirical tests to determine if observations obtained on the measure conform to the conceptual definition (content validity, factor analysis, reliability, criterion-related/convergent/discriminant/nomological validity)

From Schwab (1999)

Survey Instrument Development

- Why is it important?
- How to do it?
- What are some of the best practices?

Instrumentation in Perspective

- Selection and application of a technique that operationalizes the construct of interest (e.g., physics: colliders; medicine: MRI; OB: the Job Descriptive Index)
- Instruments are devices with their own advantages and disadvantages; some are more precise than others, and sophistication does not guarantee validity

Survey Instruments

- The three most common types of instrumentation in the social sciences: observation, interview, and survey instrumentation
- Survey instrumentation is the most widely used across disciplines
- It is also the most abused technique: instruments are often designed by people with little training in the area

Why do we do surveys?

- To describe populations: what is going on?
- For theoretical reasons: why is it going on? To develop and test theory
- Theory should always guide survey development and data collection

What construct does this scale measure? (1)

1. Have a job which leaves you sufficient time for your personal or family life. (.86)
2. Have training opportunities (to improve your skills or learn new skills). (-.82)
3. Have good physical working conditions (good ventilation and lighting, adequate work space, etc.). (-.69)
4. Fully use your skills and abilities on the job. (-.63)
5. Have considerable freedom to adapt your own approach to the job. (.49)
6. Have challenging work to do, work from which you can get a personal sense of accomplishment. (.46)
7. Work with people who cooperate well with one another. (.20)
8. Have a good working relationship with your manager. (.20)

Adapted from Heine et al. (2002)

What construct does this scale measure? (2)

- I would rather say "no" directly, than risk being misunderstood. (12)
- Speaking up during a class is not a problem for me. (14)
- Having a lively imagination is important to me. (12)
- I am comfortable with being singled out for praise or rewards. (13)
- I am the same person at home that I am at school. (13)
- Being able to take care of myself is a primary concern for me. (12)
- I act the same way no matter who I am with. (13)
- I prefer to be direct and forthright when dealing with people I have just met. (14)
- I enjoy being unique and different from others in many respects. (13)
- My personal identity, independent of others, is very important to me. (14)
- I value being in good health above everything. (8)

Adapted from Heine et al. (2002)


Example: Computer satisfaction

Construct Definition

Personal computer satisfaction is an emotional response resulting from an evaluation of the speed, durability, and initial price, but not the appearance, of a personal computer. This evaluation is expected to depend on variation in the actual characteristics of the computer (e.g., speed) and on the expectations a participant has about those characteristics. When characteristics meet or exceed expectations, the evaluation is expected to be positive (satisfaction). When characteristics do not come up to expectations, the evaluation is expected to be negative (dissatisfaction).

From Schwab (1999)

Hypothetical Computer Satisfaction Questionnaire

Decide how satisfied or dissatisfied you are with each characteristic of your personal computer using the scale below. Circle the number that best describes your feelings for each statement.

Scale: 1 = Very Dissatisfied; 2 = Dissatisfied; 3 = Neither Satisfied nor Dissatisfied; 4 = Satisfied; 5 = Very Satisfied

My satisfaction with:
1. Initial price of the computer                    1 2 3 4 5
2. What I paid for the computer                     1 2 3 4 5
3. How quickly the computer performs calculations   1 2 3 4 5
4. How fast the computer runs programs              1 2 3 4 5
5. Helpfulness of the salesperson                   1 2 3 4 5
6. How I was treated when I bought the computer     1 2 3 4 5

Construct Validity Challenges

[Figure: two overlapping regions, construct variance and observed score variance. Their overlap is construct-valid variance; the portion of construct variance not captured by the measure is deficiency. Observed score variance splits into systematic variance and unreliability, and the systematic portion comprises construct-valid variance plus reliable contamination.]

From Schwab (1999)

Scale Development Process

Step 1: Item Generation
Step 2: Questionnaire Administration
Step 3: Initial Item Reduction
Step 4: Confirmatory Factor Analysis
Step 5: Convergent/Discriminant Validity
Step 6: Replication

From Hinkin (1998)

Step 1: Item Generation - Deductive Approach

It requires:
(a) an understanding of the phenomenon to be investigated;
(b) a thorough review of the literature to develop the theoretical definition of the construct under examination

From Hinkin (1998)

Step 1: Item Generation - Deductive Approach

- Advantages: with adequate construct definitions, items should capture the domain of interest, helping to assure content validity in the final scale
- Disadvantages: requires the researcher to possess working knowledge of the phenomenon; may not be appropriate for exploratory studies

From Hinkin (1998)

Step 1: Item Generation - Inductive Approach

- Appropriate when the conceptual basis may not result in easily identifiable dimensions for which items can then be generated
- Frequently, researchers develop scales inductively by asking a sample of respondents to provide descriptions of their feelings about their organizations or to describe some aspect of behavior
- Responses are classified into a number of categories by content analysis, based on key words or themes, or by using a sorting process

Step 1: Item Generation - Inductive Approach

- Advantages: effective in exploratory research
- Disadvantages:
  - Without a definition of the construct under examination, it is difficult to develop items that are conceptually consistent
  - Requires expertise in content analysis
  - Relies on factor analysis, which does not guarantee that items loading on the same factor share the same theoretical construct

Characteristics of Good Items

- As simple and short as possible
- Language familiar to the target audience
- Keep items consistent in terms of perspective (e.g., assess behaviors vs. affective responses)
- Each item should address a single issue (no double-barreled items)
- Leading questions should be avoided
- Negatively worded questions should be carefully constructed and placed in the survey

What about these items?

- I would never drink and drive for fear that I might be stopped by the police (yes or no)
- I am always furious (yes or no)
- I often lose my temper (never to always)
- 滿招損,謙受益 (a classical Chinese proverb: "Pride invites loss; humility receives benefit")

Content Validity Assessment

- Basically a judgment call
- But it can be supplemented statistically:
  - Proportion of substantive agreement (Anderson & Gerbing, 1991) (see next slide)
  - Item re-translation (Schriesheim et al., 1990)
  - Content adequacy (Schriesheim et al., 1993)

Content Validation Ratio

CVR = (n_e - N/2) / (N/2), or equivalently CVR = 2 n_e / N - 1

where n_e is the number of Subject Matter Experts (SMEs) rating the selection tool or skill being assessed as essential to the job (i.e., as having good coverage of the KSAs required for the job), and N is the total number of experts.

CVR = 1 when all judges believe the tool/item is essential; CVR = -1 when no judge believes the tool/skill is essential; CVR = 0 means exactly half of the judges believe the tool/item is essential.
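The ratio is easy to compute directly; a minimal sketch (the function name is mine, not from the slides):

```python
def content_validity_ratio(n_essential, n_experts):
    """Lawshe-style CVR = (n_e - N/2) / (N/2), ranging from -1 to 1."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

# All 10 judges rate the item essential -> 1.0; half of them -> 0.0
print(content_validity_ratio(10, 10), content_validity_ratio(5, 10))
```

Items with a CVR near or below zero are candidates for elimination before further analysis.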

How many items per construct?

4 to 6 items suffice for most constructs. For initial item generation, twice as many items should be generated.

Item Scaling

- The scale used should generate sufficient variance among respondents for subsequent statistical analyses
- Likert-type scales are the most frequently used in survey questionnaires. Likert developed the scale to be composed of five equal-appearing intervals with a neutral midpoint
- Coefficient alpha reliability with Likert scales has been shown to increase up to the use of five points, but then it levels off

Step 2: Questionnaire Administration

Sample size: recommendations for item-to-response ratios range from 1:4 to 1:10 for each set of scales to be factor analyzed. For example, if 30 items were retained to develop three measures, a sample of 150 observations should be sufficient for exploratory factor analysis. For confirmatory factor analysis, a minimum sample size of 200 has been recommended.

Step 3: Initial Item Reduction

- Examine interitem correlations first. Items with corrected item-total correlations smaller than .40 can be eliminated
- Exploratory factor analysis: retain items with loadings greater than .40 on the appropriate factor and/or a loading twice as strong on the appropriate factor as on any other factor. Eigenvalues greater than 1, a scree test, and the percentage of variance explained should also be examined
- Be aware of construct deficiency problems when deleting items
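The two screening rules above can be sketched in a few lines of numpy; the helper names are mine, and the thresholds follow the slide (.40 minimum loading, 2:1 primary-to-cross-loading ratio):

```python
import numpy as np

def corrected_item_total(scores):
    """Corrected item-total correlation for each item (column): correlate
    the item with the sum of the *other* items, so the item does not
    inflate its own correlation. `scores` is subjects x items."""
    X = np.asarray(scores, dtype=float)
    total = X.sum(axis=1)
    return np.array([
        np.corrcoef(X[:, j], total - X[:, j])[0, 1]
        for j in range(X.shape[1])
    ])

def retain_by_loadings(loadings, min_loading=0.40, ratio=2.0):
    """Retention rule on an items x factors loading matrix: primary
    loading above min_loading and at least `ratio` times any cross-loading."""
    L = np.abs(np.asarray(loadings, dtype=float))
    keep = []
    for row in L:
        top = row.max()
        second = np.partition(row, -2)[-2] if row.size > 1 else 0.0
        keep.append(top > min_loading and (second == 0 or top >= ratio * second))
    return np.array(keep)

# Hypothetical loadings for three items on two factors:
# only the first item passes both rules.
print(retain_by_loadings([[0.70, 0.10], [0.50, 0.30], [0.35, 0.10]]))
```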

Step 3: Internal Consistency Assessment

- Reliability is the accuracy or precision of a measuring instrument and is a necessary condition for validity
- Use Cronbach's alpha to measure internal consistency; .70 should serve as the minimum for newly developed measures

Coefficient Alpha

The average of all possible split-half reliabilities:

α = (n / (n - 1)) × (1 - Σσᵢ² / σₜ²)

where n is the number of items, σᵢ² is the variance of item i across applicants, and σₜ² is the variance of the total score across applicants.

An Example of Coefficient Alpha

            Subject
Item        A     B     C     Item variance
1           6     5     4     1.00
2           6     4     5     1.00
3           5     3     3     1.33
4           4     4     4     .00
5           4     5     4     .33
Total       25    21    20

Sum of item variances Σσᵢ² = 3.67; variance of total scores σₜ² = 7.00

α = (5/4) × (1 - 3.67/7.00) ≈ .60
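The hand calculation above can be checked in a few lines; a minimal sketch assuming a subjects-by-items score matrix and sample variances (ddof=1), as in the worked example:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha from a subjects x items score matrix,
    using sample variances (ddof=1) as in the hand calculation."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1)      # variance of each item across subjects
    total_variance = X.sum(axis=1).var(ddof=1)  # variance of subjects' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Scores from the slide: 3 subjects (A, B, C) x 5 items
alpha = cronbach_alpha([[6, 6, 5, 4, 4],
                        [5, 4, 3, 4, 5],
                        [4, 5, 3, 4, 4]])
print(round(alpha, 2))  # 0.6, matching the slide's .60
```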

How high does Cronbach's alpha need to be?

- In exploratory research, where hypothesized measures are developed for new constructs, alphas need to exceed .70
- In basic research using well-established instruments for constructs, alphas need to exceed .80
- In applied research, where decisions are made based on measurement outcomes, alphas need to exceed .90

Step 4: Confirmatory Factor Analysis (CFA)

- Items that load clearly in an exploratory factor analysis may demonstrate a lack of fit in a multiple-indicator measurement model due to a lack of external consistency
- It is recommended that a confirmatory factor analysis be conducted using the item variance-covariance matrix computed from data collected from an independent sample
- Then assess the goodness-of-fit indices, t-values, and chi-square

Step 5: Convergent/Discriminant Validity

- Convergent validity: when there is a high correspondence between scores from two or more different measures of the same construct
- Discriminant validity: when scores from measures of different constructs do not converge
- Assessed via the Multitrait-Multimethod Matrix (MTMM), nomological networks (relationships between the construct under measurement consideration and other constructs), and criterion-related validity

Convergent Validity

[Figure: a single construct indicated by two measures, Measure A and Measure B.]

From Schwab (1999)

Step 6: Replication

Find an independent sample and collect more data using the measure. The replication should include confirmatory factor analysis, assessment of internal consistency, and convergent, discriminant, and criterion-related validity assessment.

Elements of an MTMM Matrix

A sample MTMM matrix:

[Figure: a correlation matrix for three traits (SE: self-esteem; SD: self-disclosure; LC: locus of control), each measured by several methods (e.g., a paper-and-pencil self test). Its entries fall into four classes: monotrait-monomethod (the reliability diagonal), monotrait-heteromethod (same trait, different methods), heterotrait-monomethod (different traits, same method), and heterotrait-heteromethod (different traits, different methods).]

Adapted from http://www.socialresearchmethods.net/kb/mtmmmat.htm

Interpreting the MTMM

- Reliabilities (monotrait-monomethod) should be the highest values in the matrix
- Monotrait-heteromethod correlations (convergent validity) must be greater than zero and high
- Monotrait-heteromethod (convergent validity) > heterotrait-monomethod > heterotrait-heteromethod (i.e., convergent validities should exceed the discriminant-validity correlations)
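These ordering checks can be mechanized for a small matrix. An illustrative sketch (the variable order, trait/method labels, and correlation values below are hypothetical, not from the slides):

```python
import numpy as np

def mtmm_summary(R, traits, methods):
    """Average each off-diagonal class of MTMM entry; the diagonal
    holds the reliabilities (monotrait-monomethod)."""
    R = np.asarray(R, dtype=float)
    groups = {"mono_hetero": [], "hetero_mono": [], "hetero_hetero": []}
    for i in range(len(traits)):
        for j in range(i + 1, len(traits)):
            same_t, same_m = traits[i] == traits[j], methods[i] == methods[j]
            if same_t and not same_m:
                groups["mono_hetero"].append(R[i, j])    # convergent validity
            elif not same_t and same_m:
                groups["hetero_mono"].append(R[i, j])    # shared-method correlation
            elif not same_t and not same_m:
                groups["hetero_hetero"].append(R[i, j])
    return {k: float(np.mean(v)) for k, v in groups.items()}

# Two traits (SE, LC) x two methods; diagonal = reliabilities
R = [[0.90, 0.35, 0.60, 0.20],
     [0.35, 0.85, 0.25, 0.55],
     [0.60, 0.25, 0.88, 0.30],
     [0.20, 0.55, 0.30, 0.86]]
s = mtmm_summary(R, ["SE", "LC", "SE", "LC"],
                    ["paper", "paper", "rating", "rating"])
# The expected ordering holds: mono_hetero > hetero_mono > hetero_hetero
print(s)
```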

Inductive Example: Taking Charge (Morrison & Phelps, 1999, AMJ)

- Administered an open-ended survey to 148 MBAs, who described 152 individuals' change efforts, yielding 445 statements
- Reduced the list to 180 statements by eliminating redundant and ambiguous ones, and sorted the statements into 19 groups based on similarity
- Wrote a general statement to reflect each group and compared the content of the statements with the construct, resulting in 10 prototypical activities reflecting the construct
- Pretested the items with 20 MBA students to check for clarity and obtain suggestions for wording improvements
- Pretested the measure with a sample of 152 working MBAs to assess the internal consistency of the items and to check whether the 10 specific behaviors were extra-role activities (77% checked six or more)

Open-ended Survey: Taking Charge (Morrison & Phelps, 1999, AMJ)

Respondents were asked:
- To think of individuals with whom they had worked who had actively tried to bring about improvement within their organization. These change efforts could be aimed at any aspect of the organization, including the person's job, how work was performed within their department, and organizational policies or procedures.
- To focus on efforts that went beyond the person's formal role, i.e., efforts that were not required or formally expected.
- To list specific behaviors that reflected or exemplified the person's change effort.

Sample Items: Taking Charge (Morrison & Phelps, 1999, AMJ)

1. Try to institute new methods that are more effective
2. Try to introduce new structures, technologies, or approaches to improve efficiency
3. Try to change how his/her job is executed in order to be more effective
4. Try to bring about improved procedures for the work unit or department

Theoretical Model: Taking Charge (Morrison & Phelps, 1999, AMJ)

[Figure: top management openness, group norms, self-efficacy, felt responsibility, and expert power → taking charge.]

Deductive Example: Org. Justice (Colquitt, 2001, JAP)

The Dimensionality of Organizational Justice

[Figure: organizational justice comprises distributive justice, procedural justice, interactive justice, and informational justice.]

Sample Items: Org. Justice (Colquitt, 2001, JAP)

- Distributive justice: "Does your outcome reflect the effort you have put into your work?" (Leventhal, 1976)
- Procedural justice: "Have you been able to express your views and feelings during those procedures?" (Thibaut & Walker, 1975)
- Interactive justice: "Has he/she treated you in a polite manner?" (Bies & Moag, 1986)
- Informational justice: "Has he/she communicated details in a timely manner?" (Shapiro et al., 1994)

Theoretical Model: Org. Justice (Colquitt, 2001, JAP)

[Figure: distributive, procedural, interactive, and informational justice → outcome satisfaction, rule compliance, leader evaluation, and collective self-esteem.]

Research in the Chinese Context

Four Types of Scale Development Approaches in Chinese Management Research

                                   Expectations about Cultural Specificity
Source of the scale                Etic orientation        Emic orientation
Use or modify an existing scale    Translation             Adaptation
Develop a new scale                De-contextualization    Contextualization

Farh, Cannella, & Lee (2006, MOR)

Four Types of Scale Development Approaches in Chinese Management Research

Translation approach
- Key assumptions: the target construct is equivalent across cultures in terms of overall definition, content domain, and empirical representations of the content domain; high-quality, culturally unbiased Western scales are available for the target construct
- Major strengths: low developmental time and cost; preserves the possibility of a high level of equivalence; allows direct cross-cultural comparison of research findings
- Major limitations: difficulty in achieving semantic equivalence between the Chinese and Western scales; culturally unbiased Western scales are hard to come by

Adaptation approach
- Key assumptions: the target construct is equivalent between cultures in terms of overall definition and content domain; high-quality Western scales are available for the target construct
- Major strengths: low to moderate developmental time and cost; ease of scholarly exchange of research findings with the Western literature
- Major limitations: difficulty in conducting cross-cultural research; drastic adaptation may create a new scale that requires extensive validation in the Chinese context

Four Types of Scale Development Approaches in Chinese Management Research

De-contextualization approach
- Key assumptions: the target construct is etic (universal, culturally invariant); no high-quality scale for the target construct is available in the literature
- Major strengths: opportunity to develop a universal measure for the target construct; ease of scholarly exchange of research findings with the Western literature
- Major limitations: long developmental time and high cost; items tend to be phrased at a more abstract level, which may limit their informational and practical value

Contextualization approach
- Key assumptions: the target construct is emic (culture-specific); no high-quality emic scale for the target construct is available in the literature
- Major strengths: opportunity to develop scales highly relevant to the Chinese context; opportunity to contribute context-specific knowledge to Chinese management
- Major limitations: long developmental time and high cost; limited generalizability of the new scale; hard to communicate research findings to the Western literature

Should you use well-established scales from the (Western) literature or develop local scales?

Align your measure with your theoretical orientation:
- When you take an etic (universal or culturally invariant) perspective on a research topic, you assume that the Chinese context is largely irrelevant. Here your study is based on general theories, and you should use well-established measures from the literature.
- When you take an emic (culture-specific) perspective on a research topic, you assume that the phenomenon is specific to the Chinese context. Here your study is based on context-embedded theories, and you should consider using measures appropriate for the Chinese context.
- When you do cross-cultural research, you study phenomena common across societies. You model culture explicitly in your theories (either as a main or a moderating effect) and should apply measures that work in multiple cultural contexts.

A Close Look at Item Generation Using the Inductive Approach

Item Generation Process

[Flowchart: Start by asking whether content domain clarity is high or low. If high, proceed directly to domain definition. If low, first collect behavioral incidents (key issues: sampling and method), classify them into categories (key issues: classification and panel testing), and form dimensions from the categories (key issue: empirical vs. conceptual approach). Then: domain definition (key issues: creativity and insight) → item development and refinement (key issue: content validation) → empirical testing.]

Research Project in Focus

Investigate the construct domain of moral leadership in the PRC…

Generate Behavioral Descriptions

(Survey instructions, translated from the Chinese original:) This questionnaire seeks to understand your views on the moral behavior of leaders. Before answering, please first recall one or two supervisors you have encountered at work whose moral conduct was good, and one or two whose moral conduct was poor, and think about what behaviors they exhibited.

Then, based on your recollections, write in the table below the behaviors you believe a moral business leader should exhibit; please list the six most important. After listing the six behaviors, rank them by relative importance from 1 to 6, where 1 is the most important and 6 the least important, and write the rank number in the brackets that follow.

Behaviors a moral business leader should exhibit / Rank

Sample Items (translated from the Chinese originals)

NID 111: Is open and honest with people
NID 112: Handles work matters rather fairly
NID 113: Is approachable to employees
NID 114: Has relevant knowledge and work skills
NID 121: Is fearless, has the courage to challenge superiors, and insists on doing the right thing
NID 122: Cares for and develops subordinates; has a human touch
NID 123: Is candid; encourages internal unity and the full flow of information
NID 124: Works conscientiously
NID 125: Treats subordinates fairly, evaluating them by performance
NID 126: Rewards and recognizes subordinates' outstanding performance
NID 131: Keeps public and private interests separate
NID 132: Words and deeds are consistent
NID 133: Treats everyone equally
NID 134: Is open and aboveboard
NID 142: Is responsible
NID 144: Does not shirk responsibility
NID 145: Keeps public and private interests separate
NID 146: Respects others
NID 151: Is honest and upright

Some of the 44 Categories (translated from the Chinese originals)

#1 Broad-mindedness: not jealous of the able and virtuous; tolerant; does not haggle over trifles; can forgive others' mistakes
#3 Dedication: enthusiastic and proactive at work; enterprising; devoted and hardworking
#8 Honesty and candor: does not conceal, mislead, or deceive; enables employees to get truthful information

How do you define moral leadership to begin with?
How do you consolidate the categories into dimensions?

Content Domain Clarity

- Do an exhaustive literature review
- How do you define your construct? How does it differ from others?
- What about its content domain? Its state? Its level? Its structure?
- The more clearly you can define your construct before you proceed, the greater your chance of success!!!

Collect Behavioral Incidents

Sampling is crucial:
- Try to sample the entire content domain
- Diverse sampling: sample across attributes (e.g., age, gender, education) and across contexts (e.g., position levels, job types, organizations, industries)
- "Adequate" sample size: sample until saturation (no new information yielded by additional sampling); if you plan to do item-level analysis, you need at least 200+ clear incidents

The mode of data collection should match the complexity of the phenomenon:
- Simple listing of events
- Description of complete scenarios or events
- In-depth personal interview
- Focus group
- Participant observation

Classify Incidents into Categories

Classification system:
- Based on content similarity/dissimilarity
- Aim for an "all inclusive" and "mutually exclusive" system
- May need to provide sorters with more guidance
- Must have clear category definitions
- Must have clear classification rules

Panel testing:
- Use subject matter experts as panel members if possible
- Train the judges well
- Check interrater reliability
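Interrater reliability for a sorting task is often summarized with a chance-corrected agreement index. The slides do not prescribe a specific index, so this is one common choice, Cohen's kappa for two sorters, as a minimal sketch:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning the same incidents to
    categories: (observed agreement - chance agreement) / (1 - chance)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(counts_a) | set(counts_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two sorters placing four incidents into hypothetical categories
print(cohens_kappa(["care", "fair", "care", "fair"],
                   ["care", "fair", "care", "fair"]))  # 1.0 (perfect agreement)
```

Values near 1 indicate that the classification rules are clear; low values signal that category definitions or judge training need revisiting.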

From Categories to Concepts

- Empirical approach: factor analysis (Kipnis et al., 1980); Q-sort followed by cluster analysis (Coleman & Borman, 2000)
- Conceptual approach: rely on theoretical insights (Farh et al., 2004)

Construct Re-definition

- Re-define your "constructs" while taking into account the results of the content analysis
- Constructs should be more abstract and broader than categories
- A clear construct definition is a must before developing valid measures
- A challenging but key task!!!

Item Development and Refinement; Content Validation

- Write items based on your construct definition (be aware of contamination and deficiency!!!)
- Be sure to review items of extant scales
- Incident descriptions may not make good survey items (they may be too specific or too ambiguous)
- The Schriesheim et al. (1990) method is useful for multidimensional constructs
- Judgments from a few content experts will do (e.g., MacKenzie et al., 1991)

Summary: Best Practices

- Study the literature and the phenomenon to come up with a broad definition of the construct
- Collect good behavioral incidents (quantity and quality)
- Build a sound classification system
- Conduct a panel test to verify your results
- Use inductive and deductive approaches alternately in the development process

Take-Away Lessons

- Good survey measures must be grounded in sound theory and conceptual definitions
- Developing good survey measures takes much time, many resources, experience, and commitment, but the payoff can be immense!!
- Avoid convenience measurement at all times!!!
- If there is a good, published measure available, use it!!! Don't reinvent the wheel!!!

Questions and Answers

References :1. Anderson, J. C. and Gerbing, D. W. (1991). Predicting performance of measures

in a confirmatory factor analysis with a pretest assessment of their substantive validities. Journal of Applied Psychology, 76, 732- 740.

2. Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86, 386-400.

3. Coleman, V. I. & Borman, W. C. (2000). Investigating the underlying structure of the citizenship performance domain. Human Resource Management Review, 10, 25-44.

3. Farh, J. L., Cannella, A. A. Jr., & Lee, C. (2006). Approaches to scale development in Chinese management research. Management and Organization Review, 2, 301-308.

4. Farh J. L., Zhong, C. B. and Organ, D.W. (2004). Organizational citizenship behavior in the People's Republic of China. Organization Science, 15, 241-253.

5. Heine, S. J., Lehman, D. R., Peng, K. 2002. What’s wrong with cross-cultural comparisons of subjective Likert scales?: The reference-group effect. Journal of Personality and Social Psychology, 82, 903–918.

6. Hinkin, T.K. (1998). A brief tutorial on the development of measures for use in survey

questionnaires. Organizational Research Methods, 1, 104-121.7. Kipnis, D., Schmidt, S. M., & Wilkinson, I. (1980). Intraorganizational influence

tactics: Explorations in getting one's way. Journal of Applied Psychology, 65, 440-452.8. Korman, A. K. (1974). Contingency approaches to leadership: An overview. In J.G.

Hunt and L.L. Larson (Eds.), Contingency Approaches to Leadership (pp. 189-198).

Southern Illinois University Press.

10. MacKenzie, S. B., Podsakoff, P. M., & Fetter, R. (1991). Organizational citizenship behaviors and objective productivity as determinants of managerial evaluations of salespersons' performance. Organizational Behavior and Human Decision Processes, 50, 123-150.
11. Morrison, E. W., & Phelps, C. C. (1999). Taking charge at work: Extrarole efforts to initiate workplace change. Academy of Management Journal, 42, 403-419.
12. Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric Theory (3rd ed.). McGraw-Hill.
13. Schwab, D. P. (1999). Research Methods for Organizational Studies. Mahwah, NJ: Lawrence Erlbaum.
14. Schriesheim, C. A., & Hinkin, T. R. (1990). Influence tactics used by subordinates: A theoretical and empirical analysis and refinement of the Kipnis, Schmidt, and Wilkinson subscales. Journal of Applied Psychology, 75, 246-257.
15. Schriesheim, C. A., Powers, K. J., Scandura, T. A., Gardiner, C. C., & Lankau, M. J. (1993). Improving construct measurement in management research: Comments and a quantitative approach to assessing the theoretical content adequacy of paper-and-pencil survey-type instruments. Journal of Management, 19, 385-417.