Organization of statistical investigation
Medical Statistics
Commonly the word statistics means the arranging of data into charts, tables, and graphs along with the computations of various descriptive numbers about the data. This is a part of statistics, called descriptive statistics, but it is not the most important part.
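As a quick illustration of descriptive statistics, the sketch below computes the usual descriptive numbers with Python's standard library; the blood-pressure readings are invented for the example.

```python
# Descriptive statistics with Python's standard library.
# The readings below are invented for illustration.
import statistics

# hypothetical systolic blood pressure readings (mmHg) from ten patients
readings = [118, 122, 130, 125, 118, 140, 135, 118, 127, 132]

print("mean   =", statistics.mean(readings))    # arithmetic average
print("median =", statistics.median(readings))  # middle value when sorted
print("mode   =", statistics.mode(readings))    # most frequent value
print("stdev  =", round(statistics.stdev(readings), 1))  # sample standard deviation
```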
Statistical Analysis in a Simple Experiment
1. Define the population of interest
2. Randomly select a sample of subjects to study
3. Half the subjects receive one treatment and the other half another treatment (usually placebo)
4. Measure baseline variables in each group
5. Use statistical techniques to make inferences about the distribution of the variables in the general population and about the effect of the treatment
The most important part
The most important part is concerned with reasoning in an environment where one doesn’t know, or can’t know, all of the facts needed to reach conclusions with complete certainty. One deals with judgments and decisions in situations of incomplete information. In this introduction we will give an overview of statistics along with an outline of the various topics in this course.
The stages of statistical investigation
1st stage – composition of the program and plan of investigation
2nd stage – collection of material
3rd stage – working up (processing) of material
4th stage – analysis of material, conclusions, proposals
5th stage – putting into practice
Survival Analysis
Kaplan-Meier analysis measures the proportion of surviving subjects (or those without an event): the number still event-free divided by the total number of subjects at risk for the event. Every time a subject has an event, the proportion is recalculated. These proportions are then used to generate a curve that graphically depicts the probability of survival.
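The recalculation described above can be sketched in a few lines. This is a minimal illustration with invented follow-up times, not a substitute for a real survival-analysis package; each subject has a follow-up time and an event flag (1 = event, 0 = censored).

```python
# A minimal Kaplan-Meier sketch. Times and event flags are invented.
def kaplan_meier(times, events):
    """Return (time, cumulative survival) pairs at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    idx = 0
    while idx < len(data):
        t = data[idx][0]
        flags = [ev for tt, ev in data if tt == t]  # all subjects at time t
        d = sum(flags)                              # events at time t
        if d:
            # every time an event occurs, the proportion is recalculated
            survival *= (n_at_risk - d) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= len(flags)                     # these subjects leave the risk set
        idx += len(flags)
    return curve

# five hypothetical subjects: events at 5, 8, 12; censored at 8 and 15
print(kaplan_meier([5, 8, 8, 12, 15], [1, 1, 0, 1, 0]))
```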
Cox proportional hazards analysis is similar to logistic regression, with the added advantage that it accounts for time to a binary event in the outcome variable. Thus, one can account for variation in follow-up time among subjects.
Kaplan-Meier Survival Curves
Why Use Statistics?
Cardiovascular Mortality in Males
[Figure: standardized mortality ratio (SMR), 0 to 1.2, by decade from '35-'44 through '75-'84, comparing Bangor and Roseto]
Percentage of Specimens Testing Positive for RSV (respiratory syncytial virus)

Region       Jul  Aug  Sep  Oct  Nov  Dec  Jan  Feb  Mar  Apr  May  Jun
South          2    2    5    7   20   30   15   20   15    8    4    3
North-east     2    3    5    3   12   28   22   28   22   20   10    9
West           2    2    3    3    5    8   25   27   25   22   15   12
Mid-west       2    2    3    2    4   12   12   12   10   19   15    8
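To show how such a table is summarized, the sketch below stores the RSV percentages and reports each region's peak month, a typical descriptive-statistics task.

```python
# The RSV table as a small data structure, summarized by peak month.
months = ["Jul", "Aug", "Sep", "Oct", "Nov", "Dec",
          "Jan", "Feb", "Mar", "Apr", "May", "Jun"]
pct_positive = {
    "South":      [2, 2, 5, 7, 20, 30, 15, 20, 15, 8, 4, 3],
    "North-east": [2, 3, 5, 3, 12, 28, 22, 28, 22, 20, 10, 9],
    "West":       [2, 2, 3, 3, 5, 8, 25, 27, 25, 22, 15, 12],
    "Mid-west":   [2, 2, 3, 2, 4, 12, 12, 12, 10, 19, 15, 8],
}
for region, values in pct_positive.items():
    peak = months[values.index(max(values))]   # first month reaching the maximum
    print(f"{region}: peak {max(values)}% positive in {peak}")
```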
Descriptive Statistics
[Figure: Percentage of specimens testing positive for RSV, 1998-99, by region (South, North-east, West, Mid-west)]
Distribution of Course Grades
[Figure: bar chart of the number of students (0 to 14) receiving each grade, A through F]
The Normal Distribution
Mean = median = mode
Skew is zero
68% of values fall within ±1 SD
95% of values fall within ±2 SDs
[Figure: the normal curve, with the mean, median, and mode at the center and ±1 and ±2 SD marked]
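The 68%/95% rule can be checked empirically by simulation; a minimal sketch using standard-normal random numbers:

```python
# Empirical check of the 68%/95% rule using simulated standard-normal data.
import random

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(100_000)]
within_1sd = sum(abs(x) <= 1 for x in xs) / len(xs)
within_2sd = sum(abs(x) <= 2 for x in xs) / len(xs)
print(within_1sd, within_2sd)  # close to 0.68 and 0.95
```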
SAMPLING AND ESTIMATION
In a Harris survey of adult Americans, one of the questions asked was “Do you try hard to avoid too much fat in your diet?” They reported that 57% of the people responded YES to this question, which was a 2% increase from a similar survey conducted in 1983. The article stated that the margin of error of the study was plus or minus 3%.
Measures of Association
Measures Of Diagnostic Test Accuracy
Sensitivity is defined as the ability of the test to identify correctly those who have the disease.
Specificity is defined as the ability of the test to identify correctly those who do not have the disease.
Predictive values are important for assessing how useful a test will be in the clinical setting at the individual patient level. The positive predictive value is the probability of disease in a patient with a positive test. Conversely, the negative predictive value is the probability that the patient does not have disease if he has a negative test result.
Likelihood ratio indicates how much a given diagnostic test result will raise or lower the odds of having a disease relative to the prior probability of disease.
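All of these measures follow from a 2×2 table of test results against disease status. A minimal sketch with invented counts:

```python
# The measures above computed from a hypothetical 2x2 table (counts invented).
tp, fp = 90, 30     # diseased with positive test, healthy with positive test
fn, tn = 10, 870    # diseased with negative test, healthy with negative test

sensitivity = tp / (tp + fn)               # P(test+ | disease)
specificity = tn / (tn + fp)               # P(test- | no disease)
ppv = tp / (tp + fp)                       # P(disease | test+)
npv = tn / (tn + fn)                       # P(no disease | test-)
lr_plus = sensitivity / (1 - specificity)  # how much a positive test raises the odds
lr_minus = (1 - sensitivity) / specificity # how much a negative test lowers them
print(sensitivity, specificity, ppv, npv, lr_plus, lr_minus)
```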
Expressions Used When Making Inferences About Data
Confidence intervals: the results of any study sample are an estimate of the true value in the entire population. The true value may actually be greater or less than what is observed.
Type I error (alpha) is the probability of incorrectly concluding there is a statistically significant difference in the population when none exists.
Type II error (beta) is the probability of incorrectly concluding that there is no statistically significant difference in a population when one exists.
Power is a measure of the ability of a study to detect a true difference.
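Power can be estimated by simulation: repeat a hypothetical study many times and count how often it reaches significance. A rough sketch, with invented effect size and sample size, using a z-test with known σ (not any particular study from this text):

```python
# Monte Carlo sketch of statistical power: how often a study of n subjects
# per group, with a true difference of 0.5 SD, reaches significance at
# alpha = 0.05. All numbers are invented for illustration.
import math
import random

def simulate_power(n=50, true_diff=0.5, sigma=1.0, trials=2000):
    z_crit = 1.96                          # two-sided alpha = 0.05
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0.0, sigma) for _ in range(n)]        # control group
        b = [random.gauss(true_diff, sigma) for _ in range(n)]  # treated group
        diff = sum(b) / n - sum(a) / n
        se = sigma * math.sqrt(2 / n)      # SE of the difference in means
        if abs(diff) / se > z_crit:
            hits += 1                      # a statistically significant result
    return hits / trials

random.seed(0)
print(simulate_power())  # roughly 0.7 for these settings
```

Increasing n or the true difference raises the power; shrinking either lowers it, which is exactly the trade-off the Type II error describes.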
This is an example of an inference made from incomplete information. The group under study in this survey is the collection of adult Americans, which consists of more than 200 million people. This is called the population.
Group properties of a statistical totality:
- Distribution of the characteristic (criterion: relative sizes)
- Average level of the index (criteria: Mo, the mode; Me, the median; the arithmetic mean)
- Variability of the characteristic (criteria: lim, the limit; am, the amplitude; σ, the standard deviation)
- Representativeness (criteria: mM, the error of average values; m%, the error of relative values)
- Mutual connection between characteristics (criterion: rxy, the coefficient of correlation)
If every individual of this group were to be queried, the survey would be called a census. Yet of the millions in the population, the Harris survey examined only 1,256 people. Such a subset of the population is called a sample.
We shall see that, if done carefully, 1,256 people are sufficient to make reasonable estimates of the opinion of all adult Americans. Samuel Johnson was aware that there is useful information in a sample. He said that you don’t have to eat the whole ox to know that the meat is tough.
The people or things in a population are called units. If the units are people, they are sometimes called subjects. A characteristic of a unit (such as a person’s weight, eye color, or the response to a Harris Poll question) is called a variable.
If a variable has only two possible values (such as a response to a YES or NO question, or a person’s sex) it is called a dichotomous variable. If a variable assigns one of several categories to each individual (such as person’s blood type or hair color) it is called a categorical variable. And if a variable assigns a number to each individual (such as a person’s age, family size, or weight), it is called a quantitative variable.
A number derived from a sample is called a statistic,
whereas a number derived from the population is called a parameter.
Parameters are usually denoted by Greek letters, such as π, for the population percentage of a dichotomous variable, or μ, for the population mean of a quantitative variable. For the Harris study the sample percentage p = 57% is a statistic. It is not the (unknown) population percentage π, which is the percentage that we would obtain if it were possible to ask the same question of the entire population.
Inferences we make about a population based on facts derived from a sample are uncertain. The statistic p is not the same as the parameter π. In fact, if the study had been repeated, even if it had been done at about the same time and in the same way, it most likely would have produced a different value of p, whereas π would still be the same. The Harris study acknowledges this variability by mentioning a margin of error of ± 3%.
Consider a box containing chips or cards, each of which is numbered either 0 or 1. We want to take a sample from this box in order to estimate the percentage of the cards that are numbered with a 1. The population in this case is the box of cards, which we will call the population box. The percentage of cards in the box that are numbered with a 1 is the parameter π.
SIMULATION
In the Harris study the parameter π is unknown. Here, however, in order to see how samples behave, we will make our model with a known percentage of cards numbered with a 1, say π = 60%. At the same time we will estimate π, pretending that we don’t know its value, by examining 25 cards in the box.
We take a simple random sample with replacement of 25 cards from the box as follows. Mix the box of cards; choose one at random; record it; replace it; and then repeat the procedure until we have recorded the numbers on 25 cards. Although survey samples are not generally drawn with replacement, our simulation simplifies the analysis because the box remains unchanged between draws; so, after examining each card, the chance of drawing a card numbered 1 on the following draw is the same as it was for the previous draw, in this case a 60% chance.
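This drawing procedure is easy to simulate; a minimal sketch, assuming π = 60% as above:

```python
# Simulating the box model: 25 draws with replacement from a box in which
# 60% of the cards are numbered 1.
import random

random.seed(42)
pi = 0.60                                      # known population percentage
draws = [1 if random.random() < pi else 0 for _ in range(25)]
p = 100 * sum(draws) / len(draws)              # sample percentage, our estimate of pi
print(draws)
print(f"p = {p}%")
```

Because each draw is replaced, the chance of drawing a 1 is 60% on every draw, exactly as the text describes.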
Let’s say that after drawing the 25 cards this way, we record the results in 5 rows of 5 numbers.
An experiment is a procedure which results in a measurement or observation. The Harris poll is an experiment which resulted in the measurement (statistic) of 57%. An experiment whose outcome depends upon chance is called a random experiment.
ERROR ANALYSIS
On repetition of such an experiment one will typically obtain a different measurement or observation. So, if the Harris poll were to be repeated, the new statistic would very likely differ slightly from 57%. Each repetition is called an execution or trial of the experiment.
Suppose we made three more series of draws, and the random sampling errors were +16%, 0%, and +12%. The random sampling errors of the four simulations would then average out to a small number.
Note that the cancellation of the positive and negative random errors results in a small average. Actually with more trials, the average of the random sampling errors tends to zero.
So in order to measure a “typical size” of a random sampling error, we have to ignore the signs. We could just take the mean of the absolute values (MA) of the random sampling errors: for four errors e1, …, e4, MA = (|e1| + |e2| + |e3| + |e4|)/4.
The MA is difficult to deal with theoretically because the absolute value function is not differentiable at 0. So in statistics, and in error analysis in general, the root mean square (RMS) of the random sampling errors is generally used: for four errors e1, …, e4, RMS = √((e1² + e2² + e3² + e4²)/4).
The RMS is a more conservative measure of the typical size of the random sampling errors in the sense that MA ≤ RMS.
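Both measures, and the inequality MA ≤ RMS, can be checked on any set of errors; a sketch with invented error values (not the ones from the simulations above):

```python
# MA and RMS of a set of invented random sampling errors (in percent).
import math

errors = [-6.0, 3.0, -1.0, 4.0]
ma = sum(abs(e) for e in errors) / len(errors)              # mean absolute error
rms = math.sqrt(sum(e * e for e in errors) / len(errors))   # root mean square
print(ma, rms)  # the inequality MA <= RMS always holds
```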
For a given experiment the RMS of all possible random sampling errors is called the standard error (SE). For example, whenever we use a random sample of size n and its percentage p to estimate the population percentage π, we have SE = √(π(1 − π)/n), with π written as a proportion.
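Assuming the usual formula SE = √(π(1 − π)/n) for a sample percentage, and substituting the observed p for the unknown π (a standard approximation), the Harris numbers from this section give a margin of error close to the reported ±3%:

```python
# SE for the Harris poll: p = 57%, n = 1256. The unknown pi is replaced
# by the observed p, a standard approximation.
import math

n, p = 1256, 0.57
se = math.sqrt(p * (1 - p) / n)   # SE of the sample percentage, as a proportion
margin = 2 * se                   # roughly a 95% margin of error
print(f"SE = {100 * se:.1f}%, margin of error = {100 * margin:.1f}%")
# prints: SE = 1.4%, margin of error = 2.8%
```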