CHAPTER 1: RESEARCH BASICS

Research can be defined as disciplined, systematic, replicable inquiry. At one time or another, most graduate students in Communication will either conduct quantitative research or evaluate quantitative research done by others using one of these methodologies: laboratory experiments, content analysis, and sample surveys. The survey method will be the focus of COM5331.

It may be helpful to look at four ways that we can characterize research done in Communication:

1. Proprietary vs. scholarly

In proprietary research, the research is conducted for a specific client and it is not shared beyond the client (after all, it is the client who is paying for the work and does not want the competition in on the secret!).

Scholarly research is conducted to promote public access to knowledge. There are no secrets -- results are disseminated whenever possible. As the saying goes, publish or perish -- academics want their findings published in journals and books.

2. Basic vs. applied

Basic research examines topics derived from a theoretical base. It refines and extends theories. The goal is to explain a wide array of communication events and processes.

Applied research serves practical needs. Its aim is not to advance theory. It may, however, use theory to solve practical communication problems. Decision makers often solicit applied research to solve immediate problems or serve organizational needs.

3. Primary vs. secondary

Primary research involves the gathering of original data. Student theses and dissertations, as well as faculty research, are typically classified as primary research.

Secondary research uses data gathered by companies that specialize in such services. For instance, those who work in advertising or in television and radio sales depend heavily on data provided by companies such as Nielsen and Arbitron.

4. Quantitative vs. qualitative

In a word: numbers! Quantitative research seeks generalizations. It is deductive in that new knowledge is gained through the analysis of existing theory and testing of hypotheses. Quantification of behavior is achieved through the process of measurement. Measurement generates data which are analyzed to reveal facts that enable us to verify or falsify hypotheses. Quantitative research employs numerical indicators to ascertain the relative size of something. Thus, there can be a high level of measurement precision.
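
To make this concrete, here is a minimal sketch in Python (the scores are hypothetical and purely illustrative; the course itself uses SPSS): measurement yields numeric data, and a statistical test of those data is what verifies or falsifies a hypothesis.

    # Hypothetical example: measurement produces numeric data, and a
    # statistical test of those data verifies or falsifies a hypothesis.
    from scipy import stats

    # Hypothetical measurements for ten respondents.
    viewing_hours = [1, 3, 2, 5, 4, 0, 6, 2, 3, 5]      # TV news exposure
    knowledge_score = [4, 6, 5, 9, 7, 3, 10, 4, 6, 8]   # political knowledge

    # H1: news exposure is positively related to political knowledge.
    r, p = stats.pearsonr(viewing_hours, knowledge_score)
    print(f"r = {r:.2f}, p = {p:.4f}")  # positive r with small p supports H1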

According to James Anderson (Communication Research: Issues and Methods, 1987), in the qualitative approach we systematically reflect upon the experience of the everyday world and create a "research text" which can be critically analyzed. This research text is a mixture of private and public knowledge, personal experience, and objective measures. It is interpretative; its focus is on words and not numbers. It provides a greater depth of information about how people perceive events in the context of the actual situations in which they occur.

Much of this course will focus on primary, applied, quantitative research. More particularly, survey research will serve as the basis for many of the examples used in this reader. It may be helpful, therefore, to review the steps in the survey research process:

1. Defining the problem

2. Developing research questions

3. Backgrounding (acquire knowledge about research issues)

4. Defining and operationalizing concepts

5. Formulating hypotheses

6. Deciding on a survey approach

a. Telephone

b. Mail

c. Face-to-face

7. Writing questions

8. Designing questionnaire

9. Pretesting questionnaire (and revising as necessary)

10. Sampling

11. Collecting data (interviewing or mailing out questionnaires)

12. Data coding and entry (this step may be eliminated if new survey technologies are used)

13. Analyses

14. Interpreting results and verifying or falsifying hypotheses

15. Writing up results (report or article)

16. Presenting/disseminating results

17. Replication/extension

Important concepts

There are a number of concepts that are important to all research methodologies. They include:

Variables

Variables are concepts or characteristics that reflect variation (take on different values) among the things being studied. They may be characteristics of people such as demographics, attitudes, behaviors, and knowledge. We presume there is variation in the concepts of interest. There also is an expectation of measurement -- we will do something to measure the variation.

Hypothesis

A hypothesis is a statement of relationship between two or more variables. Typically, hypotheses derive from theory. We sometimes refer to research as hypothesis testing.

Reliability

The key to reliability is the notion of consistency. If results from a questionnaire are consistent, we say that it is a reliable measurement instrument. Do repeated observations yield similar results? A basic principle of research (including surveys) is the use of multiple measurements to assess the reliability of a measure.

Reliability may take several forms: consistency over time, across forms (e.g., different versions of a questionnaire), across items, and across people.

Consistency over time refers to "test-retest reliability." We might give the same set of questions to the same group of individuals on two separate occasions and compare the results. If the questionnaire is reliable, there should be consistency in the responses. A "coefficient of stability" would indicate the stability over time -- a high, positive coefficient would mean individuals would respond similarly if given the same questions at two points in time.
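
As a rough illustration, consider a short Python sketch with hypothetical scores, where the coefficient of stability is computed as a Pearson correlation between the two administrations:

    # Test-retest reliability: correlate the same respondents' scores from
    # two administrations of the same questionnaire (hypothetical data).
    from scipy import stats

    time1 = [12, 15, 9, 20, 17, 11, 14, 18]   # first administration
    time2 = [13, 14, 10, 19, 18, 10, 15, 17]  # same people, retested later

    stability, _ = stats.pearsonr(time1, time2)
    print(f"coefficient of stability = {stability:.2f}")  # near 1.0 = stable

The same correlation logic applies to the parallel-forms reliability described next; the two score sets simply come from two versions of the instrument rather than from two points in time.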

To measure consistency across forms, sometimes termed equivalence or parallel forms reliability, we correlate the results from two different forms of a measurement instrument given to a single group of individuals. The more consistent the results between two parallel forms, the greater the equivalence or reliability.

To assess reliability across people, we establish inter-rater reliability. The greater the agreement (consistency), the greater the reliability. Inter-rater reliability is especially important in content analysis research. It can be measured in terms of percent of agreement (the percentage of incidents for which two observers agree). Alternatively, a correlation coefficient can be calculated to assess the reliability between coders/observers.
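
A minimal sketch of percent agreement, assuming two coders have categorized the same six incidents (the codes below are hypothetical):

    # Inter-rater reliability as percent agreement: the share of incidents
    # on which two observers assign the same category (hypothetical codes).
    coder_a = ["violent", "neutral", "violent", "neutral", "violent", "neutral"]
    coder_b = ["violent", "neutral", "neutral", "neutral", "violent", "neutral"]

    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    percent_agreement = 100 * agreements / len(coder_a)
    print(f"percent agreement = {percent_agreement:.1f}%")  # 5 of 6 = 83.3%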

Lastly, reliability can be examined in terms of the consistency of items within a measurement instrument. This is referred to as internal consistency, an indicator of how consistently the items measure a concept. We want to determine if the items measure the same content and are thus consistent with each other. One means of arriving at this is referred to as a split-half procedure: the items are divided into two halves and a correlation is computed between the two halves. The most typical approach to assessing internal consistency is Cronbach's coefficient alpha, which is the average of all possible split-half estimates. The process of determining the reliability of a set of items involves item analysis and SPSSWIN's Reliability Analysis (which goes beyond the scope of COM5331).
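
Although SPSSWIN's Reliability Analysis is beyond the scope of COM5331, the alpha statistic itself can be computed directly. A minimal sketch using the standard variance formula and hypothetical respondent-by-item scores:

    # Cronbach's alpha from the standard formula:
    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    import numpy as np

    # Hypothetical data: rows = respondents, columns = items on one scale.
    scores = np.array([
        [4, 5, 4, 4],
        [2, 2, 3, 2],
        [5, 5, 4, 5],
        [3, 4, 3, 3],
        [1, 2, 2, 1],
    ])

    k = scores.shape[1]
    sum_item_vars = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of scale totals
    alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")  # closer to 1 = more consistent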

Validity

Validity can be thought of as the extent to which a measurement scale or variable represents what it is supposed to. In other words, does the variable measure the concept it is meant to measure? If a series of questions is written to measure respondents' satisfaction derived from using on-line newspapers, do those questions in fact measure this concept?

There are several ways of assessing validity. The first, content validity, is defined as the extent to which the content of the measurement instrument reflects what is supposed to be measured. Do the questions written to measure the satisfactions obtained from using on-line newspapers reflect the content of what the researcher seeks to measure? The more expertise one has in the research area, the more confident one can be that the questions meet the test of content validity.

Face validity refers to a questionnaire (or other data collection instrument) being judged valid by those being measured (the respondents). Put more simply, does the measure appear, on the face of it, to be measuring what is intended?

Concurrent validity involves a comparison of the results from a measurement instrument to results from other instruments designed to measure the same thing. If you develop a measure of job satisfaction, the results from that instrument would be compared to other job satisfaction indices. If the two measures are consistent, we consider the new instrument to have concurrent validity. This is sometimes referred to as criterion-related validity. In general, in criterion-related validity, a measure is considered valid to the extent that it enables a researcher to predict a score on some other measure or predict a behavior of interest.
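
As a minimal sketch with hypothetical scores, assessing concurrent validity amounts to correlating the new instrument with an established criterion measured on the same respondents:

    # Concurrent (criterion-related) validity: correlate a new job
    # satisfaction measure with an established index (hypothetical data).
    from scipy import stats

    new_measure = [22, 30, 18, 27, 25, 15, 29, 20]        # new instrument
    established_index = [24, 31, 17, 26, 27, 14, 30, 19]  # existing criterion

    r, _ = stats.pearsonr(new_measure, established_index)
    print(f"validity coefficient = {r:.2f}")  # high r = concurrent validity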