Freq_Distribution


Transcript of Freq_Distribution


    Chapter Outline

    1) Overview

    2) Frequency Distribution

3) Statistics Associated with Frequency Distribution

    i. Measures of Location

    ii. Measures of Variability

    iii. Measures of Shape

    4) Introduction to Hypothesis Testing

    5) A General Procedure for Hypothesis Testing


    6) Cross-Tabulations

    i. Two Variable Case

    ii. Three Variable Case

    iii. General Comments on Cross-Tabulations

7) Statistics Associated with Cross-Tabulation

i. Chi-Square

    ii. Phi Correlation Coefficient

    iii. Contingency Coefficient

iv. Cramer's V

    v. Lambda Coefficient

    vi. Other Statistics


    8) Cross-Tabulation in Practice

    9) Hypothesis Testing Related to Differences

    10) Parametric Tests

    i. One Sample

    ii. Two Independent Samples

    iii. Paired Samples

    11) Non-parametric Tests

    i. One Sample

    ii. Two Independent Samples

    iii. Paired Samples

Respondent Number | Sex | Familiarity | Internet Usage | Attitude Toward Internet | Attitude Toward Technology | Usage of Internet: Shopping | Usage of Internet: Banking

    1 1.00 7.00 14.00 7.00 6.00 1.00 1.00

    2 2.00 2.00 2.00 3.00 3.00 2.00 2.00

    3 2.00 3.00 3.00 4.00 3.00 1.00 2.00

    4 2.00 3.00 3.00 7.00 5.00 1.00 2.00

    5 1.00 7.00 13.00 7.00 7.00 1.00 1.00

    6 2.00 4.00 6.00 5.00 4.00 1.00 2.00

    7 2.00 2.00 2.00 4.00 5.00 2.00 2.00

8 2.00 3.00 6.00 5.00 4.00 2.00 2.00

9 2.00 3.00 6.00 6.00 4.00 1.00 2.00

    10 1.00 9.00 15.00 7.00 6.00 1.00 2.00

    11 2.00 4.00 3.00 4.00 3.00 2.00 2.00

    12 2.00 5.00 4.00 6.00 4.00 2.00 2.00

    13 1.00 6.00 9.00 6.00 5.00 2.00 1.00

    14 1.00 6.00 8.00 3.00 2.00 2.00 2.00

    15 1.00 6.00 5.00 5.00 4.00 1.00 2.00

    16 2.00 4.00 3.00 4.00 3.00 2.00 2.00

    17 1.00 6.00 9.00 5.00 3.00 1.00 1.00

18 1.00 4.00 4.00 5.00 4.00 1.00 2.00

19 1.00 7.00 14.00 6.00 6.00 1.00 1.00

    20 2.00 6.00 6.00 6.00 4.00 2.00 2.00

    21 1.00 6.00 9.00 4.00 2.00 2.00 2.00

    22 1.00 5.00 5.00 5.00 4.00 2.00 1.00

    23 2.00 3.00 2.00 4.00 2.00 2.00 2.00

    24 1.00 7.00 15.00 6.00 6.00 1.00 1.00

    25 2.00 6.00 6.00 5.00 3.00 1.00 2.00

    26 1.00 6.00 13.00 6.00 6.00 1.00 1.00

    27 2.00 5.00 4.00 5.00 5.00 1.00 1.00

    28 2.00 4.00 2.00 3.00 2.00 2.00 2.00

    29 1.00 4.00 4.00 5.00 3.00 1.00 2.00

    30 1.00 3.00 3.00 7.00 5.00 1.00 2.00

    Internet Usage Data

    Table 15.1

Frequency Distribution

In a frequency distribution, one variable is considered at a time.

A frequency distribution for a variable produces a table of frequency counts, percentages, and cumulative percentages for all the values associated with that variable.
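A minimal sketch of this computation in Python (assuming only the standard library), using the familiarity ratings keyed in from Table 15.1 with the value 9 treated as the missing-value code, as in Table 15.2:

```python
from collections import Counter

# Familiarity ratings for the 30 respondents in Table 15.1 (9 = missing value)
familiarity = [7, 2, 3, 3, 7, 4, 2, 3, 3, 9, 4, 5, 6, 6, 6, 4, 6, 4, 7, 6,
               6, 5, 3, 7, 6, 6, 5, 4, 4, 3]

MISSING = 9
counts = Counter(familiarity)
n_total = len(familiarity)
n_valid = n_total - counts[MISSING]

print("Value  Freq  Pct   ValidPct  CumPct")
cum = 0.0
for value in range(1, 8):                 # scale values 1 ("not so familiar") to 7 ("very familiar")
    freq = counts.get(value, 0)
    pct = 100.0 * freq / n_total          # percentage of all cases
    valid_pct = 100.0 * freq / n_valid    # percentage of non-missing cases
    cum += valid_pct                      # cumulative (valid) percentage
    print(f"{value:5d}  {freq:4d}  {pct:4.1f}  {valid_pct:8.1f}  {cum:6.1f}")
print(f"Missing: {counts[MISSING]}   Total: {n_total}")
```

Running the sketch reproduces the counts and percentages shown in Table 15.2.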

Frequency Distribution of Familiarity with the Internet

Table 15.2

Value label | Value | Frequency (N) | Percentage | Valid percentage | Cumulative percentage

    Not so familiar 1 0 0.0 0.0 0.0

    2 2 6.7 6.9 6.9

    3 6 20.0 20.7 27.6

    4 6 20.0 20.7 48.3

    5 3 10.0 10.3 58.6

    6 8 26.7 27.6 86.2

    Very familiar 7 4 13.3 13.8 100.0

    Missing 9 1 3.3

    TOTAL 30 100.0 100.0

Frequency Histogram (Figure 15.1)

[Histogram of the familiarity ratings: the x-axis shows Familiarity (values 2 through 7) and the y-axis shows Frequency (0 through 8).]

SPSS DATA ANALYSIS

Statistics Associated with Frequency Distribution: Measures of Location

The mean, or average value, is the most commonly used measure of central tendency. The sample mean, X̄, is given by

X̄ = (Σ Xi) / n,  i = 1, …, n

and, when each value xi occurs with probability p(xi), the mean of the distribution is

μ = Σ xi p(xi)

where
Xi = observed values of the variable X
n = number of observations (sample size)
p(xi) = probability of xi

The mode is the value that occurs most frequently. It represents the highest peak of the distribution. The mode is a good measure of location when the variable is inherently categorical or has otherwise been grouped into categories.

The median of a sample is the middle value when the data are arranged in ascending or descending order. If the number of data points is even, the median is usually estimated as the midpoint between the two middle values, by adding the two middle values and dividing their sum by 2. The median is the 50th percentile.
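A short sketch of these measures of location in Python, assuming the standard-library statistics module and using the familiarity ratings from Table 15.1 (missing value excluded):

```python
import statistics

# Familiarity ratings from Table 15.1, with the missing value (9) excluded; n = 29
familiarity = [7, 2, 3, 3, 7, 4, 2, 3, 3, 4, 5, 6, 6, 6, 4, 6, 4, 7, 6,
               6, 5, 3, 7, 6, 6, 5, 4, 4, 3]

mean = statistics.mean(familiarity)      # sum of the values divided by n
median = statistics.median(familiarity)  # middle value of the sorted data
mode = statistics.mode(familiarity)      # most frequently occurring value

print(f"n = {len(familiarity)}, mean = {mean:.3f}, median = {median}, mode = {mode}")
# Expected output: mean = 4.724, median = 5, mode = 6
```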

Statistics Associated with Frequency Distribution: Measures of Variability

The range measures the spread of the data. It is simply the difference between the largest and smallest values in the sample: Range = X(largest) − X(smallest).

The interquartile range is the difference between the 75th and 25th percentiles. For a set of data points arranged in order of magnitude, the pth percentile is the value that has p% of the data points below it and (100 − p)% above it.

The variance is the mean squared deviation from the mean. The variance can never be negative.

The standard deviation is the square root of the variance. For a sample (not the population), it is

s_x = √[ Σ (Xi − X̄)² / (n − 1) ],  i = 1, …, n

The coefficient of variation is the ratio of the standard deviation to the mean expressed as a percentage, and is a unitless measure of relative variability:

CV = s_x / X̄
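A corresponding sketch for the measures of variability, again assuming only the Python standard library (note that quartile conventions, and hence the interquartile range, can differ slightly between packages):

```python
import statistics

# Familiarity ratings from Table 15.1, with the missing value (9) excluded
x = [7, 2, 3, 3, 7, 4, 2, 3, 3, 4, 5, 6, 6, 6, 4, 6, 4, 7, 6,
     6, 5, 3, 7, 6, 6, 5, 4, 4, 3]

data_range = max(x) - min(x)               # largest value minus smallest value
q1, q2, q3 = statistics.quantiles(x, n=4)  # 25th, 50th and 75th percentiles
iqr = q3 - q1                              # interquartile range
var = statistics.variance(x)               # sample variance (divisor n - 1)
sd = statistics.stdev(x)                   # sample standard deviation
cv = sd / statistics.mean(x)               # coefficient of variation

print(f"range = {data_range}, IQR = {iqr:.2f}, s^2 = {var:.3f}, s = {sd:.3f}, CV = {cv:.3f}")
# s works out to about 1.579, the value used in the one-sample t test later in the chapter
```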

Statistics Associated with Frequency Distribution: Measures of Shape

Skewness is the tendency of the deviations from the mean to be larger in one direction than in the other. It can be thought of as the tendency for one tail of the distribution to be heavier than the other.

Kurtosis is a measure of the relative peakedness or flatness of the curve defined by the frequency distribution. The kurtosis of a normal distribution is zero. If the kurtosis is positive, the distribution is more peaked than a normal distribution. A negative value means that the distribution is flatter than a normal distribution.
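A brief sketch of how these shape measures could be computed, assuming SciPy is available; scipy.stats.kurtosis reports excess kurtosis by default, so a normal distribution gives a value near zero, matching the convention above:

```python
from scipy.stats import skew, kurtosis

# Familiarity ratings from Table 15.1, with the missing value (9) excluded
x = [7, 2, 3, 3, 7, 4, 2, 3, 3, 4, 5, 6, 6, 6, 4, 6, 4, 7, 6,
     6, 5, 3, 7, 6, 6, 5, 4, 4, 3]

# skewness > 0: right tail heavier; skewness < 0: left tail heavier
# kurtosis(..., fisher=True) reports excess kurtosis, so a normal distribution gives 0
print("skewness:", round(float(skew(x)), 3))
print("excess kurtosis:", round(float(kurtosis(x, fisher=True)), 3))
```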

Skewness of a Distribution (Figure 15.2)

[Two curves are shown: (a) a symmetric distribution, in which the mean, median, and mode coincide, and (b) a skewed distribution, in which the mean, median, and mode fall at different points.]

A General Procedure for Hypothesis Testing (Fig. 15.3)

Step 1: Formulate H0 and H1.
Step 2: Select an appropriate test.
Step 3: Choose the level of significance, α.
Step 4: Collect data and calculate the test statistic.
Step 5: Determine the probability associated with the test statistic (or, alternatively, determine the critical value of the test statistic, TSCR).
Step 6: Compare the probability with the level of significance (or determine whether TSCR falls into the rejection or non-rejection region).
Step 7: Reject or do not reject H0.
Step 8: Draw the marketing research conclusion.

A General Procedure for Hypothesis Testing
Step 1: Formulate the Hypothesis

A null hypothesis is a statement of the status quo, one of no difference or no effect. If the null hypothesis is not rejected, no changes will be made.

An alternative hypothesis is one in which some difference or effect is expected. Accepting the alternative hypothesis will lead to changes in opinions or actions.

The null hypothesis refers to a specified value of the population parameter (e.g., μ, σ, π), not a sample statistic (e.g., X̄).

A null hypothesis may be rejected, but it can never be accepted based on a single test. In classical hypothesis testing, there is no way to determine whether the null hypothesis is true.

In marketing research, the null hypothesis is formulated in such a way that its rejection leads to the acceptance of the desired conclusion. The alternative hypothesis represents the conclusion for which evidence is sought. In our example:

H0: π ≤ 0.40
H1: π > 0.40

The test of the null hypothesis is a one-tailed test, because the alternative hypothesis is expressed directionally. If that is not the case, then a two-tailed test would be required, and the hypotheses would be expressed as:

H0: π = 0.40
H1: π ≠ 0.40

(Two-tailed formulations of this kind are generally limited to production measures for quality-control purposes.)

A General Procedure for Hypothesis Testing
Step 2: Select an Appropriate Test

The test statistic measures how close the sample has come to the null hypothesis.

The test statistic often follows a well-known distribution, such as the normal, t, or chi-square distribution.

In our example, the z statistic, which follows the standard normal distribution, would be appropriate:

z = (p − π) / σp,  where σp = √[ π(1 − π) / n ]

A General Procedure for Hypothesis Testing
Step 3: Choose a Level of Significance

Type I error
A Type I error occurs when the sample results lead to the rejection of the null hypothesis when it is in fact true. The probability of Type I error (α) is also called the level of significance.

Type II error
A Type II error occurs when, based on the sample results, the null hypothesis is not rejected when it is in fact false. The probability of Type II error is denoted by β. Unlike α, which is specified by the researcher, the magnitude of β depends on the actual value of the population parameter (proportion).

Power of a Test
The power of a test is the probability (1 − β) of rejecting the null hypothesis when it is false and should be rejected.

Although β is unknown, it is related to α. An extremely low value of α (e.g., α = 0.001) will result in intolerably high β errors. So it is necessary to balance the two types of errors.

Probabilities of Type I and Type II Error (Figure 15.4)

[The figure shows the sampling distributions of the test statistic under π0 = 0.40 and under π = 0.45. Critical values of z are marked at 1.645 (95% of the total area, α = 0.05) and −2.33 (99% of the total area, α = 0.01), illustrating how the choice of α affects the Type I and Type II error probabilities.]

Probability of z with a One-Tailed Test (Fig. 15.5)

[The standard normal curve is shown with z = 1.88 marked: the shaded area to its left is 0.9699, and the unshaded area to its right is 0.0301.]

A General Procedure for Hypothesis Testing
Step 4: Collect Data and Calculate the Test Statistic

The required data are collected and the value of the test statistic is computed.

In our example, the value of the sample proportion is p = 17/30 = 0.567.

The value of σp can be determined as follows:

σp = √[ π(1 − π) / n ] = √[ (0.40)(0.60) / 30 ] = 0.089

The test statistic z can be calculated as follows:

z = (p − π) / σp = (0.567 − 0.40) / 0.089 = 1.88

Here 0.40 is the hypothesized population value, 0.567 (= 17/30) is the sample estimate, and 0.089 is the standard error estimate. In words: our sample value was 1.88 standard errors above the hypothesized value.

A General Procedure for Hypothesis Testing
Step 5: Determine the Probability (Critical Value)

Using standard normal tables (Table 2 of the Statistical Appendix), the probability of obtaining a z value of 1.88 can be calculated (see Figure 15.5).

The shaded area between −∞ and 1.88 is 0.9699. Therefore, the area to the right of z = 1.88 is 1.0000 − 0.9699 = 0.0301.

Alternatively, the critical value of z, which will give an area to the right side of the critical value of 0.05, is between 1.64 and 1.65 and equals 1.645.

Note that, in determining the critical value of the test statistic, the area to the right of the critical value is either α or α/2. It is α for a one-tailed test and α/2 for a two-tailed test.

A General Procedure for Hypothesis Testing
Steps 6 & 7: Compare the Probability (Critical Value) and Make the Decision

If the probability associated with the calculated or observed value of the test statistic (TSCAL) is less than the level of significance (α), the null hypothesis is rejected.

The probability associated with the calculated or observed value of the test statistic is 0.0301. This is the probability of getting a p value of 0.567 when π = 0.40. Since this is less than the level of significance of 0.05, the null hypothesis is rejected.

Alternatively, if the calculated value of the test statistic is greater than the critical value of the test statistic (TSCR), the null hypothesis is rejected.

The calculated value of the test statistic z = 1.88 lies in the rejection region, beyond the value of 1.645. Again, the same conclusion to reject the null hypothesis is reached.

Note that the two ways of testing the null hypothesis are equivalent but mathematically opposite in the direction of comparison: if the probability of TSCAL is smaller than the significance level α, reject H0, whereas if TSCAL is larger than TSCR, reject H0.

A General Procedure for Hypothesis Testing
Step 8: Marketing Research Conclusion

The conclusion reached by hypothesis testing must be expressed in terms of the marketing research problem.

In our example, we conclude that there is evidence that the proportion of Internet users who shop via the Internet is significantly greater than 0.40. Hence, the recommendation to the department store would be to introduce the new Internet shopping service.

A Broad Classification of Hypothesis Tests (Figure 15.6)

[Hypothesis tests divide into tests of association and tests of differences; tests of differences concern distributions, means, proportions, and medians/rankings.]

Cross-Tabulation

While a frequency distribution describes one variable at a time, a cross-tabulation describes two or more variables simultaneously.

Cross-tabulation results in tables that reflect the joint distribution of two or more variables with a limited number of categories or distinct values, e.g., Table 15.3.

Gender and Internet Usage (Table 15.3)

Internet Usage | Male | Female | Row Total
Light (1) | 5 | 10 | 15
Heavy (2) | 10 | 5 | 15
Column Total | 15 | 15

Two Variables Cross-Tabulation

Since two variables have been cross-classified, percentages could be computed either columnwise, based on column totals (Table 15.4), or rowwise, based on row totals (Table 15.5).

The general rule is to compute the percentages in the direction of the independent variable, across the dependent variable. The correct way of calculating percentages is as shown in Table 15.4.

Internet Usage by Gender (Table 15.4)

Internet Usage | Male | Female
Light | 33.3% | 66.7%
Heavy | 66.7% | 33.3%
Column total | 100.0% | 100.0%

Gender by Internet Usage (Table 15.5)

Gender | Light | Heavy | Total
Male | 33.3% | 66.7% | 100.0%
Female | 66.7% | 33.3% | 100.0%

SPSS: CROSSTABS

CROSSTAB RESULTS CLARIFIED

CROSSTAB RESULTS

There appears to be a relationship between Gender and Internet Usage: is it significant?

CROSSTAB SIGNIFICANCE

χ² = Σ over all cells (f_observed − f_expected)² / f_expected
   = { (5 − 7.5)² + (10 − 7.5)² + (10 − 7.5)² + (5 − 7.5)² } / 7.5
   = 3.333

The calculated value must exceed 3.841 to accept the alternative hypothesis. There is therefore NO statistically significant relationship between gender and Internet usage at the 5% level.

Introduction of a Third Variable in Cross-Tabulation (Fig. 15.7)

[Starting from the original two variables, either some association or no association is observed. Introducing a third variable when some association was observed can lead to (1) a refined association between the two variables, (2) no association between the two variables, or (3) no change in the initial pattern. Introducing a third variable when no association was observed can reveal some association between the two variables.]

Three Variables Cross-Tabulation: Refine an Initial Relationship

As shown in Figure 15.7, the introduction of a third variable can result in four possibilities.

As can be seen from Table 15.6, 52% of unmarried respondents fell in the high-purchase category, as opposed to 31% of the married respondents. Before concluding that unmarried respondents purchase more fashion clothing than those who are married, a third variable, the buyer's sex, was introduced into the analysis.

As shown in Table 15.7, in the case of females, 60% of the unmarried fall in the high-purchase category, as compared to 25% of those who are married. On the other hand, the percentages are much closer for males, with 40% of the unmarried and 35% of the married falling in the high-purchase category.

Hence, the introduction of sex (the third variable) has refined the relationship between marital status and purchase of fashion clothing (the original variables). Unmarried respondents are more likely to fall in the high-purchase category than married ones, and this effect is much more pronounced for females than for males.

Purchase of Fashion Clothing by Marital Status (Table 15.6)

Purchase of Fashion Clothing | Married | Not Married
High | 31% | 52%
Low | 69% | 48%
Column total | 100% | 100%

Purchase of Fashion Clothing by Marital Status (Table 15.7)

Sex: Male (Married, Not Married); Female (Married, Not Married)

Purchase of Fashion Clothing | Married | Not Married | Married | Not Married
High | 35% | 40% | 25% | 60%
Low | 65% | 60% | 75% | 40%
Column total | 100% | 100% | 100% | 100%
Number of cases |  | 120 | 300 | 180

Three Variables Cross-Tabulation: Initial Relationship was Spurious

Table 15.8 shows that 32% of those with college degrees own an expensive automobile, as compared to 21% of those without college degrees. Realizing that income may also be a factor, the researcher decided to reexamine the relationship between education and ownership of expensive automobiles in light of income level.

In Table 15.9, the percentages of those with and without college degrees who own expensive automobiles are the same for each of the income groups. When the data for the high-income and low-income groups are examined separately, the association between education and ownership of expensive automobiles disappears, indicating that the initial relationship observed between these two variables was spurious.

Ownership of Expensive Automobiles by Education Level (Table 15.8)

Own Expensive Automobile | College Degree | No College Degree
Yes | 32% | 21%
No | 68% | 79%
Column total | 100% | 100%
Number of respondents | 250 | 750

Ownership of Expensive Automobiles by Education Level and Income Level (Table 15.9)

Income: Low Income (College Degree, No College Degree); High Income (College Degree, No College Degree)

Own Expensive Automobile | College Degree | No College Degree | College Degree | No College Degree
Yes | 20% | 20% | 40% | 40%
No | 80% | 80% | 60% | 60%
Column totals | 100% | 100% | 100% | 100%
Number of respondents | 100 | 700 | 150 | 50

Three Variables Cross-Tabulation: Reveal Suppressed Association

Table 15.10 shows no association between desire to travel abroad and age.

When sex was introduced as the third variable, Table 15.11 was obtained. Among men, 60% of those under 45 indicated a desire to travel abroad, as compared to 40% of those 45 or older. The pattern was reversed for women, where 35% of those under 45 indicated a desire to travel abroad as opposed to 65% of those 45 or older.

Since the association between desire to travel abroad and age runs in the opposite direction for males and females, the relationship between these two variables is masked when the data are aggregated across sex, as in Table 15.10.

But when the effect of sex is controlled, as in Table 15.11, the suppressed association between desire to travel abroad and age is revealed for the separate categories of males and females.

Desire to Travel Abroad by Age (Table 15.10)

Desire to Travel Abroad | Less than 45 | 45 or More
Yes | 50% | 50%
No | 50% | 50%
Column totals | 100% | 100%
Number of cases | 500 | 500

Desire to Travel Abroad by Age and Gender (Table 15.11)

Sex: Male (Age < 45, Age >= 45); Female (Age < 45, Age >= 45)

Desire to Travel Abroad | < 45 | >= 45 | < 45 | >= 45
Yes | 60% | 40% | 35% | 65%
No | 40% | 60% | 65% | 35%
Column totals | 100% | 100% | 100% | 100%
Number of cases | 300 | 300 | 200 | 200

Three Variables Cross-Tabulation: No Change in Initial Relationship

Consider the cross-tabulation of family size and the tendency to eat out frequently in fast-food restaurants, as shown in Table 15.12. No association is observed.

When income was introduced as a third variable in the analysis, Table 15.13 was obtained. Again, no association was observed.

Eating Frequently in Fast-Food Restaurants by Family Size (Table 15.12)

Eat Frequently in Fast-Food Restaurants | Small Family | Large Family
Yes | 65% | 65%
No | 35% | 35%
Column totals | 100% | 100%
Number of respondents | 500 | 500

Eating Frequently in Fast-Food Restaurants by Family Size and Income (Table 15.13)

Income: Low (Small, Large family size); High (Small, Large family size)

Eat Frequently in Fast-Food Restaurants | Small | Large | Small | Large
Yes | 65% | 65% | 65% | 65%
No | 35% | 35% | 35% | 35%
Column totals | 100% | 100% | 100% | 100%
Number of respondents | 250 | 250 | 250 | 250

Statistics Associated with Cross-Tabulation: Chi-Square

To determine whether a systematic association exists, the probability of obtaining a value of chi-square as large as or larger than the one calculated from the cross-tabulation is estimated.

An important characteristic of the chi-square statistic is the number of degrees of freedom (df) associated with it: df = (r − 1) × (c − 1).

The null hypothesis (H0) of no association between the two variables will be rejected only when the calculated value of the test statistic is greater than the critical value of the chi-square distribution with the appropriate degrees of freedom, as shown in Figure 15.8.

Chi-square Distribution (Figure 15.8)

[The chi-square distribution is drawn with the critical value marked on the χ² axis; the area beyond the critical value is the "Reject H0" region and the area below it is the "Do Not Reject H0" region.]

Chi-Square

The chi-square statistic (χ²) is used to test the statistical significance of the observed association in a cross-tabulation.

The expected frequency for each cell can be calculated by using a simple formula:

fe = (nr × nc) / n

where
nr = total number in the row
nc = total number in the column
n = total sample size

For the data in Table 15.3, the expected frequencies for the cells, going from left to right and from top to bottom, are:

(15 × 15) / 30 = 7.50    (15 × 15) / 30 = 7.50
(15 × 15) / 30 = 7.50    (15 × 15) / 30 = 7.50

Then the value of χ² is calculated as follows:

χ² = Σ over all cells (fo − fe)² / fe

For the data in Table 15.3, the value of χ² is calculated as:

χ² = (5 − 7.5)²/7.5 + (10 − 7.5)²/7.5 + (10 − 7.5)²/7.5 + (5 − 7.5)²/7.5
   = 0.833 + 0.833 + 0.833 + 0.833
   = 3.333

The chi-square distribution is a skewed distribution whose shape depends solely on the number of degrees of freedom. As the number of degrees of freedom increases, the chi-square distribution becomes more symmetrical.

Table 3 in the Statistical Appendix contains upper-tail areas of the chi-square distribution for different degrees of freedom. For 1 degree of freedom, the probability of exceeding a chi-square value of 3.841 is 0.05.

For the cross-tabulation given in Table 15.3, there is (2 − 1) × (2 − 1) = 1 degree of freedom. The calculated chi-square statistic had a value of 3.333. Since this is less than the critical value of 3.841, the null hypothesis of no association cannot be rejected, indicating that the association is not statistically significant at the 0.05 level.
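A sketch of this chi-square test in Python, assuming SciPy; correction=False reproduces the uncorrected chi-square computed above for the 2 x 2 table of Table 15.3:

```python
from scipy.stats import chi2, chi2_contingency

# Observed frequencies from Table 15.3 (rows: light/heavy Internet usage; columns: male/female)
observed = [[5, 10],
            [10, 5]]

# correction=False gives the uncorrected chi-square computed above
chi2_stat, p_value, df, expected = chi2_contingency(observed, correction=False)
critical = chi2.ppf(0.95, df)   # critical value at the 0.05 level

print("expected frequencies:", expected.tolist())   # each cell: (15 x 15) / 30 = 7.5
print(f"chi-square = {chi2_stat:.3f}, df = {df}, p = {p_value:.3f}, critical value = {critical:.3f}")
# 3.333 < 3.841, so the null hypothesis of no association is not rejected
```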

Phi Coefficient

The phi coefficient (φ) is used as a measure of the strength of association in the special case of a table with two rows and two columns (a 2 × 2 table).

The phi coefficient is proportional to the square root of the chi-square statistic:

φ = √( χ² / n )

It takes the value of 0 when there is no association, which would be indicated by a chi-square value of 0 as well. When the variables are perfectly associated, phi assumes the value of 1 and all the observations fall just on the main or minor diagonal.

Contingency Coefficient

While the phi coefficient is specific to a 2 × 2 table, the contingency coefficient (C) can be used to assess the strength of association in a table of any size:

C = √( χ² / (χ² + n) )

The contingency coefficient varies between 0 and 1.

The maximum value of the contingency coefficient depends on the size of the table (number of rows and number of columns). For this reason, it should be used only to compare tables of the same size.

Cramer's V

Cramer's V is a modified version of the phi correlation coefficient, φ, and is used in tables larger than 2 × 2:

V = √( φ² / min(r − 1, c − 1) )

or, equivalently,

V = √( (χ²/n) / min(r − 1, c − 1) )
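A small sketch computing these three strength-of-association measures from the chi-square value obtained for Table 15.3, assuming only the Python standard library:

```python
from math import sqrt

chi_square = 3.333   # chi-square value for the 2 x 2 cross-tabulation of Table 15.3
n = 30               # sample size
r, c = 2, 2          # number of rows and columns

phi = sqrt(chi_square / n)                               # phi coefficient (2 x 2 tables only)
C = sqrt(chi_square / (chi_square + n))                  # contingency coefficient
cramers_v = sqrt((chi_square / n) / min(r - 1, c - 1))   # Cramer's V

print(f"phi = {phi:.3f}, C = {C:.3f}, Cramer's V = {cramers_v:.3f}")
# For a 2 x 2 table, Cramer's V reduces to the phi coefficient
```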

Lambda Coefficient

Asymmetric lambda measures the percentage improvement in predicting the value of the dependent variable, given the value of the independent variable.

Lambda also varies between 0 and 1. A value of 0 means no improvement in prediction. A value of 1 indicates that the prediction can be made without error. This happens when each independent variable category is associated with a single category of the dependent variable.

Asymmetric lambda is computed for each of the variables (treating it as the dependent variable).

A symmetric lambda is also computed, which is a kind of average of the two asymmetric values. The symmetric lambda does not make an assumption about which variable is dependent. It measures the overall improvement when prediction is done in both directions.

Other Statistics

Other statistics, such as tau b, tau c, and gamma, are available to measure association between two ordinal-level variables. Both tau b and tau c adjust for ties.

Tau b is the most appropriate with square tables, in which the number of rows and the number of columns are equal. Its value varies between +1 and −1.

For a rectangular table in which the number of rows is different from the number of columns, tau c should be used.

Gamma does not make an adjustment for either ties or table size. Gamma also varies between +1 and −1 and generally has a higher numerical value than tau b or tau c.

Cross-Tabulation in Practice

While conducting cross-tabulation analysis in practice, it is useful to proceed along the following steps.

1. Test the null hypothesis that there is no association between the variables using the chi-square statistic. If you fail to reject the null hypothesis, then there is no relationship.

2. If H0 is rejected, then determine the strength of the association using an appropriate statistic (phi coefficient, contingency coefficient, Cramer's V, lambda coefficient, or other statistics), as discussed earlier.

3. If H0 is rejected, interpret the pattern of the relationship by computing the percentages in the direction of the independent variable, across the dependent variable.

4. If the variables are treated as ordinal rather than nominal, use tau b, tau c, or gamma as the test statistic. If H0 is rejected, then determine the strength of the association using the magnitude, and the direction of the relationship using the sign, of the test statistic.

Hypothesis Testing Related to Differences

Parametric tests assume that the variables of interest are measured on at least an interval scale.

Nonparametric tests assume that the variables are measured on a nominal or ordinal scale.

These tests can be further classified based on whether one, two, or more samples are involved.

The samples are independent if they are drawn randomly from different populations. For the purpose of analysis, data pertaining to different groups of respondents, e.g., males and females, are generally treated as independent samples.

The samples are paired when the data for the two samples relate to the same group of respondents.

A Classification of Hypothesis Testing Procedures for Examining Differences (Fig. 15.9)

Hypothesis Tests

Parametric Tests (Metric Tests)
  One Sample: t test, Z test
  Two or More Samples:
    Independent Samples: Two-Group t test, Z test
    Paired Samples: Paired t test

Non-parametric Tests (Nonmetric Tests)
  One Sample: Chi-Square, K-S, Runs, Binomial
  Two or More Samples:
    Independent Samples: Chi-Square, Mann-Whitney, Median
    Paired Samples: Sign, Wilcoxon, McNemar, Chi-Square

Parametric Tests

The t statistic assumes that the variable is normally distributed, the mean is known (or assumed to be known), and the population variance is estimated from the sample.

Assume that the random variable X is normally distributed, with mean μ and unknown population variance σ², which is estimated by the sample variance s².

Then, t = (X̄ − μ) / s_X̄ is t-distributed with n − 1 degrees of freedom.

The t distribution is similar to the normal distribution in appearance. Both distributions are bell-shaped and symmetric. As the number of degrees of freedom increases, the t distribution approaches the normal distribution.

Hypothesis Testing Using the t Statistic

1. Formulate the null (H0) and the alternative (H1) hypotheses.

2. Select the appropriate formula for the t statistic.

3. Select a significance level, α, for testing H0. Typically, the 0.05 level is selected.

4. Take one or two samples and compute the mean and standard deviation for each sample.

5. Calculate the t statistic assuming H0 is true.

6. Calculate the degrees of freedom and estimate the probability of getting a more extreme value of the statistic from Table 4 (alternatively, calculate the critical value of the t statistic).

7. If the probability computed in step 6 is smaller than the significance level selected in step 3, reject H0. If the probability is larger, do not reject H0. (Alternatively, if the value of the calculated t statistic in step 5 is larger than the critical value determined in step 6, reject H0; if the calculated value is smaller than the critical value, do not reject H0.) Failure to reject H0 does not necessarily imply that H0 is true. It only means that the true state is not significantly different from that assumed by H0.

8. Express the conclusion reached by the t test in terms of the marketing research problem.

One Sample t Test

For the data in Table 15.2, suppose we wanted to test the hypothesis that the mean familiarity rating exceeds 4.0, the neutral value on a 7-point scale. A significance level of α = 0.05 is selected. The hypotheses may be formulated as:

H0: μ ≤ 4.0
H1: μ > 4.0

t = (X̄ − μ) / s_X̄,  where s_X̄ = s / √n

s_X̄ = 1.579 / √29 = 1.579 / 5.385 = 0.293

t = (4.724 − 4.0) / 0.293 = 0.724 / 0.293 = 2.471


One Sample t Test

The degrees of freedom for the t statistic to test the hypothesis about one mean are n − 1. In this case, n − 1 = 29 − 1, or 28.

From Table 4 in the Statistical Appendix, the probability of getting a more extreme value than 2.471 is less than 0.05. (Alternatively, the critical t value for 28 degrees of freedom and a significance level of 0.05 is 1.7011, which is less than the calculated value.) Hence, the null hypothesis is rejected. The familiarity level does exceed 4.0.
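A sketch of this one-sample t test in Python, assuming SciPy for the t-distribution tail probability and using the 29 non-missing familiarity ratings from Table 15.1:

```python
from math import sqrt
import statistics
from scipy.stats import t

# The 29 non-missing familiarity ratings from Table 15.1
x = [7, 2, 3, 3, 7, 4, 2, 3, 3, 4, 5, 6, 6, 6, 4, 6, 4, 7, 6,
     6, 5, 3, 7, 6, 6, 5, 4, 4, 3]

mu0 = 4.0                           # H0: mu <= 4.0, H1: mu > 4.0
n = len(x)
mean = statistics.mean(x)           # 4.724
s = statistics.stdev(x)             # 1.579
se = s / sqrt(n)                    # 0.293
t_stat = (mean - mu0) / se          # about 2.47
p_value = t.sf(t_stat, n - 1)       # one-tailed probability with 28 degrees of freedom

print(f"t = {t_stat:.3f}, df = {n - 1}, one-tailed p = {p_value:.4f}")
print("reject H0" if p_value < 0.05 else "do not reject H0")
```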

One Sample z Test

Note that if the population standard deviation were assumed to be known as 1.5, rather than estimated from the sample, a z test would be appropriate. In this case, the value of the z statistic would be:

z = (X̄ − μ) / σ_X̄

where σ_X̄ = 1.5 / √29 = 1.5 / 5.385 = 0.279

and z = (4.724 − 4.0) / 0.279 = 0.724 / 0.279 = 2.595

From Table 2 in the Statistical Appendix, the probability of getting a more extreme value of z than 2.595 is less than 0.05. (Alternatively, the critical z value for a one-tailed test and a significance level of 0.05 is 1.645, which is less than the calculated value.) Therefore, the null hypothesis is rejected, reaching the same conclusion arrived at earlier by the t test.

The procedure for testing a null hypothesis with respect to a proportion was illustrated earlier in this chapter when we introduced hypothesis testing.

Two Independent Samples: Means

In the case of means for two independent samples, the hypotheses take the following form:

H0: μ1 = μ2
H1: μ1 ≠ μ2

The two populations are sampled and the means and variances are computed based on samples of sizes n1 and n2. If both populations are found to have the same variance, a pooled variance estimate is computed from the two sample variances as follows:

s² = [ Σ (Xi1 − X̄1)² + Σ (Xi2 − X̄2)² ] / (n1 + n2 − 2)

or, equivalently,

s² = [ (n1 − 1) s1² + (n2 − 1) s2² ] / (n1 + n2 − 2)

The standard deviation of the test statistic can be estimated as:

s_(X̄1 − X̄2) = √[ s² (1/n1 + 1/n2) ]

The appropriate value of t can be calculated as:

t = [ (X̄1 − X̄2) − (μ1 − μ2) ] / s_(X̄1 − X̄2)

The degrees of freedom in this case are (n1 + n2 − 2).

Two Independent Samples: F Test

An F test of sample variance may be performed if it is not known whether the two populations have equal variance. In this case, the hypotheses are:

H0: σ1² = σ2²
H1: σ1² ≠ σ2²

Two Independent Samples: F Statistic

The F statistic is computed from the sample variances as follows:

F(n1 − 1), (n2 − 1) = s1² / s2²

where
n1 = size of sample 1
n2 = size of sample 2
n1 − 1 = degrees of freedom for sample 1
n2 − 1 = degrees of freedom for sample 2
s1² = sample variance for sample 1
s2² = sample variance for sample 2

Using the data of Table 15.1, suppose we wanted to determine whether Internet usage was different for males as compared to females. A two-independent-samples t test was conducted. The results are presented in Table 15.14.

Two Independent Samples: t Tests (Table 15.14)

Summary Statistics
Gender | Number of Cases | Mean | Standard Deviation
Male | 15 | 9.333 | 1.137
Female | 15 | 3.867 | 0.435

F Test for Equality of Variances
F value | 2-tail probability
15.507 | 0.000

t Test
Equal Variances Assumed: t value 4.492 | degrees of freedom 28 | 2-tail probability 0.000
Equal Variances Not Assumed: t value -4.492 | degrees of freedom 18.014 | 2-tail probability 0.000
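A sketch reproducing the t values of Table 15.14 in Python, assuming SciPy and using the Internet usage figures from Table 15.1 split by sex:

```python
from scipy.stats import ttest_ind

# Internet usage (hours per month) from Table 15.1, split by sex
male   = [14, 13, 15, 9, 8, 5, 9, 4, 14, 9, 5, 15, 13, 4, 3]
female = [2, 3, 3, 6, 2, 6, 6, 3, 4, 3, 6, 2, 6, 4, 2]

# Pooled-variance t test (equal variances assumed), df = n1 + n2 - 2 = 28
t_eq, p_eq = ttest_ind(male, female, equal_var=True)
# Welch t test (equal variances not assumed)
t_w, p_w = ttest_ind(male, female, equal_var=False)

print(f"equal variances assumed:     t = {t_eq:.3f}, two-tailed p = {p_eq:.4f}")
print(f"equal variances not assumed: t = {t_w:.3f}, two-tailed p = {p_w:.4f}")
# The first line reproduces the t value of 4.492 with 28 df shown in Table 15.14
```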

Two Independent Samples: Proportions

The case involving proportions for two independent samples is also illustrated using the data of Table 15.1, which gives the number of males and females who use the Internet for shopping. Is the proportion of respondents using the Internet for shopping the same for males and females? The null and alternative hypotheses are:

H0: π1 = π2
H1: π1 ≠ π2

A Z test is used as in testing the proportion for one sample. However, in this case the test statistic is given by:

Z = (P1 − P2) / S_(P1 − P2)

In the test statistic, the numerator is the difference between the proportions in the two samples, P1 and P2. The denominator is the standard error of the difference in the two proportions and is given by

S_(P1 − P2) = √[ P(1 − P)(1/n1 + 1/n2) ]

where

P = (n1 P1 + n2 P2) / (n1 + n2)

A significance level of α = 0.05 is selected. Given the data of Table 15.1, the test statistic can be calculated as:

P1 − P2 = (11/15) − (6/15) = 0.733 − 0.400 = 0.333

P = (15 × 0.733 + 15 × 0.400) / (15 + 15) = 0.567

S_(P1 − P2) = √[ 0.567 × 0.433 × (1/15 + 1/15) ] = 0.181

Z = 0.333 / 0.181 = 1.84

Given a two-tailed test, the area to the right of the critical value is α/2 = 0.025. Hence, the critical value of the test statistic is 1.96. Since the calculated value is less than the critical value, the null hypothesis cannot be rejected. Thus, the proportion of users (0.733 for males and 0.400 for females) is not significantly different for the two samples. Note that while the difference is substantial, it is not statistically significant due to the small sample sizes (15 in each group).
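A sketch of this two-proportion z test in Python, assuming SciPy for the normal probabilities; the counts (11 of 15 males and 6 of 15 females shopping via the Internet) come from Table 15.1:

```python
from math import sqrt
from scipy.stats import norm

# From Table 15.1: 11 of 15 males and 6 of 15 females use the Internet for shopping
x1, n1 = 11, 15
x2, n2 = 6, 15

p1, p2 = x1 / n1, x2 / n2                                    # 0.733 and 0.400
p_pooled = (x1 + x2) / (n1 + n2)                             # 0.567
se = sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))     # about 0.181
z = (p1 - p2) / se                                           # about 1.84
p_value = 2 * norm.sf(abs(z))                                # two-tailed probability

print(f"z = {z:.2f}, two-tailed p = {p_value:.3f}")
print("reject H0" if abs(z) > norm.ppf(0.975) else "do not reject H0")   # critical value 1.96
```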

Paired Samples

The difference in these cases is examined by a paired samples t test. To compute t for paired samples, the paired difference variable, denoted by D, is formed and its mean and variance are calculated. Then the t statistic is computed. The degrees of freedom are n − 1, where n is the number of pairs. The relevant formulas are:

H0: μD = 0
H1: μD ≠ 0

t(n−1) = (D̄ − μD) / (s_D / √n)

where
D̄ = Σ Di / n  (the mean of the paired differences)
s_D = the standard deviation of the paired differences

In the Internet usage example (Table 15.1), a paired t test could be used to determine whether the respondents differed in their attitude toward the Internet and attitude toward technology. The resulting output is shown in Table 15.15.

Paired Samples t Test (Table 15.15)

Variable | Number of Cases | Mean | Standard Deviation | Standard Error
Internet Attitude | 30 | 5.167 | 1.234 | 0.225
Technology Attitude | 30 | 4.100 | 1.398 | 0.255

Difference = Internet − Technology
Difference Mean | Standard Deviation | Standard Error | Correlation | 2-tail prob. | t value | Degrees of Freedom | 2-tail Probability
1.067 | 0.828 | 0.1511 | 0.809 | 0.000 | 7.059 | 29 | 0.000
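A sketch of the paired-samples t test in Python, assuming SciPy and using the two attitude columns from Table 15.1; it reproduces the t value of 7.059 reported in Table 15.15:

```python
from scipy.stats import ttest_rel

# Attitude toward the Internet and toward technology for the 30 respondents in Table 15.1
internet   = [7, 3, 4, 7, 7, 5, 4, 5, 6, 7, 4, 6, 6, 3, 5, 4, 5, 5, 6, 6,
              4, 5, 4, 6, 5, 6, 5, 3, 5, 7]
technology = [6, 3, 3, 5, 7, 4, 5, 4, 4, 6, 3, 4, 5, 2, 4, 3, 3, 4, 6, 4,
              2, 4, 2, 6, 3, 6, 5, 2, 3, 5]

# Paired-samples t test on the differences D = internet - technology
t_stat, p_value = ttest_rel(internet, technology)
print(f"t = {t_stat:.3f}, df = {len(internet) - 1}, two-tailed p = {p_value:.4f}")
# Reproduces the t value of 7.059 with 29 degrees of freedom shown in Table 15.15
```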

Non-Parametric Tests

Nonparametric tests are used when the independent variables are nonmetric. Like parametric tests, nonparametric tests are available for testing variables from one sample, two independent samples, or two related samples.

Non-Parametric Tests: One Sample

Sometimes the researcher wants to test whether the observations for a particular variable could reasonably have come from a particular distribution, such as the normal, uniform, or Poisson distribution.

The Kolmogorov-Smirnov (K-S) one-sample test is one such goodness-of-fit test. The K-S test compares the cumulative distribution function for a variable with a specified distribution. Ai denotes the cumulative relative frequency for each category of the theoretical (assumed) distribution, and Oi the comparable value of the sample frequency. The K-S test is based on the maximum value of the absolute difference between Ai and Oi. The test statistic is

K = Max | Ai − Oi |

The decision to reject the null hypothesis is based on the value of K. The larger K is, the more confidence we have that H0 is false. For α = 0.05, the critical value of K for large samples (over 35) is given by 1.36/√n. Alternatively, K can be transformed into a normally distributed z statistic and its associated probability determined.

In the context of the Internet usage example, suppose we wanted to test whether the distribution of Internet usage was normal. A K-S one-sample test is conducted, yielding the data shown in Table 15.16. Table 15.16 indicates that the probability of observing a K value of 0.222, as determined by the normalized z statistic, is 0.103. Since this is more than the significance level of 0.05, the null hypothesis cannot be rejected, leading to the same conclusion. Hence, the distribution of Internet usage does not deviate significantly from the normal distribution.

K-S One-Sample Test for Normality of Internet Usage (Table 15.16)

Test Distribution: Normal
Mean: 6.600
Standard Deviation: 4.296
Cases: 30

Most Extreme Differences
Absolute | Positive | Negative | K-S z | 2-Tailed p
0.222 | 0.222 | -0.142 | 1.217 | 0.103
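A sketch of the K-S one-sample test in Python, assuming SciPy and using the Internet usage values from Table 15.1. The D statistic should be close to the 0.222 of Table 15.16, although the p-value convention differs somewhat from SPSS's z-based approximation:

```python
import statistics
from math import sqrt
from scipy.stats import kstest

# Internet usage (hours per month) for all 30 respondents, Table 15.1
usage = [14, 2, 3, 3, 13, 6, 2, 6, 6, 15, 3, 4, 9, 8, 5, 3, 9, 4, 14, 6,
         9, 5, 2, 15, 6, 13, 4, 2, 4, 3]

mean = statistics.mean(usage)    # 6.600
sd = statistics.stdev(usage)     # about 4.296

# One-sample K-S test against a normal distribution with these parameters
result = kstest(usage, 'norm', args=(mean, sd))
ks_z = sqrt(len(usage)) * result.statistic   # normalized K-S z, as reported by SPSS

print(f"K = {result.statistic:.3f}, K-S z = {ks_z:.3f}, p = {result.pvalue:.3f}")
```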

Non-Parametric Tests: One Sample

The chi-square test can also be performed on a single variable from one sample. In this context, the chi-square serves as a goodness-of-fit test.

The runs test is a test of randomness for dichotomous variables. This test is conducted by determining whether the order or sequence in which observations are obtained is random.

The binomial test is also a goodness-of-fit test for dichotomous variables. It tests the goodness of fit of the observed number of observations in each category to the number expected under a specified binomial distribution.

Non-Parametric Tests: Two Independent Samples

When the difference in the location of two populations is to be compared based on observations from two independent samples, and the variable is measured on an ordinal scale, the Mann-Whitney U test can be used.

In the Mann-Whitney U test, the two samples are combined and the cases are ranked in order of increasing size.

The test statistic, U, is computed as the number of times a score from sample or group 1 precedes a score from group 2. If the samples are from the same population, the distribution of scores from the two groups in the rank list should be random. An extreme value of U would indicate a nonrandom pattern, pointing to the inequality of the two groups.

For samples of less than 30, the exact significance level for U is computed. For larger samples, U is transformed into a normally distributed z statistic. This z can be corrected for ties within ranks.

We examine again the difference in the Internet usage of males and females. This time, though, the Mann-Whitney U test is used. The results are given in Table 15.17.

One could also use the cross-tabulation procedure to conduct a chi-square test. In this case, we will have a 2 x 2 table. One variable will be used to denote the sample, and will assume the value 1 for sample 1 and the value of 2 for sample 2. The other variable will be the binary variable of interest.

The two-sample median test determines whether the two groups are drawn from populations with the same median. It is not as powerful as the Mann-Whitney U test because it merely uses the location of each observation relative to the median, and not the rank, of each observation.

The Kolmogorov-Smirnov two-sample test examines whether the two distributions are the same. It takes into account any differences between the two distributions, including the median, dispersion, and skewness.

Mann-Whitney U / Wilcoxon Rank Sum W Test: Internet Usage by Gender (Table 15.17)

Sex | Mean Rank | Cases
Male | 20.93 | 15
Female | 10.07 | 15
Total | | 30

Corrected for ties:
U | W | z | 2-tailed p
31.000 | 151.000 | -3.406 | 0.001

Note: U = Mann-Whitney test statistic; W = Wilcoxon W statistic; z = U transformed into a normally distributed z statistic.
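A sketch of the Mann-Whitney U test in Python, assuming SciPy and the same usage-by-sex split; SciPy reports U for the first sample, so the complementary value is also computed to match the smaller U of 31.0 reported in Table 15.17:

```python
from scipy.stats import mannwhitneyu

# Internet usage from Table 15.1, split by sex
male   = [14, 13, 15, 9, 8, 5, 9, 4, 14, 9, 5, 15, 13, 4, 3]
female = [2, 3, 3, 6, 2, 6, 6, 3, 4, 3, 6, 2, 6, 4, 2]

result = mannwhitneyu(male, female, alternative='two-sided')

# SciPy reports U for the first sample; the complementary value is n1*n2 - U
n1, n2 = len(male), len(female)
u_first = result.statistic
u_other = n1 * n2 - u_first
print(f"U = {min(u_first, u_other):.1f}, two-tailed p = {result.pvalue:.4f}")
# Table 15.17 reports U = 31.0 with a tie-corrected z of -3.406 (p = 0.001)
```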

Non-Parametric Tests: Paired Samples

The Wilcoxon matched-pairs signed-ranks test analyzes the differences between the paired observations, taking into account the magnitude of the differences.

It computes the differences between the pairs of variables and ranks the absolute differences.

The next step is to sum the positive and negative ranks. The test statistic, z, is computed from the positive and negative rank sums.

Under the null hypothesis of no difference, z is a standard normal variate with mean 0 and variance 1 for large samples.

The example considered for the paired t test, whether the respondents differed in terms of attitude toward the Internet and attitude toward technology, is considered again. Suppose we assume that both these variables are measured on ordinal rather than interval scales. Accordingly, we use the Wilcoxon test. The results are shown in Table 15.18.

The sign test is not as powerful as the Wilcoxon matched-pairs signed-ranks test, as it only compares the signs of the differences between pairs of variables without taking into account the ranks.

In the special case of a binary variable where the researcher wishes to test differences in proportions, the McNemar test can be used. Alternatively, the chi-square test can also be used for binary variables.

Wilcoxon Matched-Pairs Signed-Ranks Test: Internet with Technology (Table 15.18)

(Technology − Internet) | Cases | Mean Rank
− Ranks | 23 | 12.72
+ Ranks | 1 | 7.50
Ties | 6 |
Total | 30 |

z = -4.207, 2-tailed p = 0.0000

A Summary of Hypothesis Tests Related to Differences (Table 15.19)

Sample | Application | Level of Scaling | Test/Comments

One Sample
One sample | Distributions | Nonmetric | K-S and chi-square for goodness of fit; runs test for randomness; binomial test for goodness of fit for dichotomous variables
One sample | Means | Metric | t test, if variance is unknown; z test, if variance is known

A Summary of Hypothesis Tests Related to Differences (Table 15.19, continued)

Two Independent Samples
Two independent samples | Distributions | Nonmetric | K-S two-sample test for examining the equivalence of two distributions
Two independent samples | Means | Metric | Two-group t test; F test for equality of variances
Two independent samples | Proportions | Metric | z test; Nonmetric: chi-square test
Two independent samples | Rankings/Medians | Nonmetric | Mann-Whitney U test is more powerful than the median test

Paired Samples
Paired samples | Means | Metric | Paired t test
Paired samples | Proportions | Nonmetric | McNemar test for binary variables; chi-square test
Paired samples | Rankings/Medians | Nonmetric | Wilcoxon matched-pairs signed-ranks test is more powerful than the sign test

    SPSS Windows

The main program in SPSS is FREQUENCIES. It produces a table of frequency counts, percentages, and cumulative percentages for the values of each variable. It gives all of the associated statistics.

If the data are interval scaled and only the summary statistics are desired, the DESCRIPTIVES procedure can be used.

The EXPLORE procedure produces summary statistics and graphical displays, either for all of the cases or separately for groups of cases. Mean, median, variance, standard deviation, minimum, maximum, and range are some of the statistics that can be calculated.

To select these procedures, click:

Analyze>Descriptive Statistics>Frequencies
Analyze>Descriptive Statistics>Descriptives
Analyze>Descriptive Statistics>Explore

The major cross-tabulation program is CROSSTABS. This program will display the cross-classification tables and provide cell counts, row and column percentages, the chi-square test for significance, and all the measures of the strength of the association that have been discussed.

To select this procedure, click:

Analyze>Descriptive Statistics>Crosstabs

The major program for conducting parametric tests in SPSS is COMPARE MEANS. This program can be used to conduct t tests on one sample or on independent or paired samples. To select these procedures using SPSS for Windows, click:

Analyze>Compare Means>Means
Analyze>Compare Means>One-Sample T Test
Analyze>Compare Means>Independent-Samples T Test
Analyze>Compare Means>Paired-Samples T Test

The nonparametric tests discussed in this chapter can be conducted using NONPARAMETRIC TESTS. To select these procedures using SPSS for Windows, click:

Analyze>Nonparametric Tests>Chi-Square
Analyze>Nonparametric Tests>Binomial
Analyze>Nonparametric Tests>Runs
Analyze>Nonparametric Tests>1-Sample K-S
Analyze>Nonparametric Tests>2 Independent Samples
Analyze>Nonparametric Tests>2 Related Samples