
Chapter 1: Random Events and Probability

Department of Statistics

Huang Xudong, Ph.D.

§1.1 Random Events

1.1.1 Random Experiments

The basic notion in probability is that of a random experiment: an experiment whose outcome cannot be determined in advance, but is nevertheless still subject to analysis.

Examples of random experiments are:

1. tossing a die,

2. measuring the amount of rainfall in Brisbane in January,

3. counting the number of calls arriving at a telephone exchange during a fixed time period,

4. selecting a random sample of fifty people and observing the number of left-handers,

5. choosing at random ten people and measuring their heights.

1.1.2 Sample Space

Definition: The sample space Ω of a random experiment is the set of all possible outcomes of the experiment.

Examples of random experiments with their sample spaces are:

1. Cast two dice consecutively,

2. The lifetime of a machine (in days),

3. The number of arriving calls at an exchange during a specified time interval,

4. The heights of 10 selected people.
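
The sample spaces themselves did not survive in this transcript; natural choices for the four experiments above (an illustrative sketch, not taken verbatim from the slides) are:

\[
\Omega_1 = \{(i, j) : i, j \in \{1, \dots, 6\}\}, \qquad
\Omega_2 = [0, \infty) \ \text{(or } \{0, 1, 2, \dots\} \text{ in whole days)},
\]
\[
\Omega_3 = \{0, 1, 2, \dots\}, \qquad
\Omega_4 = [0, \infty)^{10} \ \text{(one height per person)}.
\]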

Discrete and continuous sample spaces

Definition: A sample space is finite if it has a finite number of elements.

Definition: A sample space is discrete if there are “gaps” between the different elements, or if the elements can be “listed”, even if an infinite list (e.g. 1, 2, 3, . . .).

In mathematical language, a sample space is discrete if it is countable.

Definition: A sample space is continuous if there are no gaps between the elements,so the elements cannot be listed (eg. the interval [0, 1]).

1.1.3 Events

So far, we have introduced the sample space, Ω, which lists all possible outcomes of a random experiment, and might seem unexciting.

However, Ω is a set. It lays the ground for a whole mathematical formulation of randomness, in terms of set theory.

The next concept that you would need to formulate is that of something that happens at random, or an event.

How would you express the idea of an event in terms of set theory?

Definition of events

Definition: An event is a subset of the sample space. That is, any collection of outcomes forms an event.

Events will be denoted by capital letters A,B,C,....

Note:We say that event A occurs if the outcome of the experiment is one of the elements in A.

Note: Ω is a subset of itself, so Ω is an event. The empty set ∅ is also a subset of Ω. This is called the null event, or the event with no outcomes.

Examples of events are:

1. The event that the sum of two dice is 10 or more,

2. The event that a machine lives less than 1000 days,

3. The event that out of fifty selected people, five are left-handed.
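
Written as subsets (an illustrative sketch; the sample spaces here are the natural ones: pairs of dice faces, lifetime in days, and the count of left-handers out of fifty), these events are:

\[
A_1 = \{(i, j) : i + j \ge 10\}, \qquad
A_2 = \{t : 0 \le t < 1000\}, \qquad
A_3 = \{5\} \subset \{0, 1, \dots, 50\}.
\]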

Combining Events

Formulating random events in terms of sets gives us the power of set theory to describe all possible ways of combining or manipulating events. For example, we need to describe things like coincidences (events happening together), alternatives, opposites, and so on.

We do this in the language of set theory.

Example: Suppose our random experiment is to pick a person in the class and see what form(s) of transport they used to get to campus today.

This sort of diagram representing events in a sample space is called a Venn diagram.

1. Alternatives: the union ‘or’ operator

Definition: Let A and B be events on the same sample space Ω: so A ⊂ Ω and B ⊂ Ω.

The union of events A and B is written A ∪ B, and is given by A ∪ B = {ω ∈ Ω : ω ∈ A or ω ∈ B (or both)}.

2. Concurrences and coincidences: the intersection ‘and’ operator

Definition: The intersection of events A and B is written A ∩ B and is given by A ∩ B = {ω ∈ Ω : ω ∈ A and ω ∈ B}.

3. Opposites: the complement or ‘not’ operator

Definition: The complement of event A is written Aᶜ (read “not A”) and is given by Aᶜ = {ω ∈ Ω : ω ∉ A}, the set of outcomes of Ω that are not in A.

Examples:

Experiment: Pick a person in this class at random. Sample space: Ω = all people in class. Let event A = “person is male” and

event B = “person travelled by bike today”.

Suppose I pick a male who did not travel by bike. Say whether the following events have occurred:
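
The list of events to check is missing from the transcript; for the natural combinations from this section (an illustration), the outcome “a male who did not travel by bike” gives:

\[
A, \ \overline{B}, \ A \cup B, \ A \cap \overline{B} \ \text{occurred}; \qquad
B, \ A \cap B, \ \overline{A} \ \text{did not occur}.
\]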

Properties of union, intersection, and complement

Distributive laws
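
The identities shown on the slide are not in the transcript; the standard distributive laws (together with the De Morgan laws usually listed alongside them) are:

\[
A \cap (B \cup C) = (A \cap B) \cup (A \cap C), \qquad
A \cup (B \cap C) = (A \cup B) \cap (A \cup C),
\]
\[
\overline{A \cup B} = \overline{A} \cap \overline{B}, \qquad
\overline{A \cap B} = \overline{A} \cup \overline{B}.
\]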

1.1.4 Partitioning sets and events

Examples:

Partitioning an event A
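
The slide content here is missing from the transcript; the standard statement is as follows. Events B₁, …, Bₖ form a partition of Ω if they are mutually exclusive (Bᵢ ∩ Bⱼ = ∅ for i ≠ j) and their union is Ω. Any event A can then be split into disjoint pieces:

\[
A = (A \cap B_1) \cup (A \cap B_2) \cup \dots \cup (A \cap B_k).
\]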

§1.2 Frequency and probability

1.2.1 Frequency

Consider performing our experiment a large number, n, of times and counting the number of those times when A occurs. The relative frequency of A is then defined to be

\[ f_n(A) = \frac{n_A}{n}, \]

where n_A is the number of times that A occurs.

Properties of frequency:
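
The properties listed on the slide are not in the transcript; the standard ones (which foreshadow the probability axioms below) are:

\[
0 \le f_n(A) \le 1, \qquad f_n(\Omega) = 1, \qquad
f_n(A \cup B) = f_n(A) + f_n(B) \ \text{ whenever } A \cap B = \emptyset.
\]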

1.2.2 Probability: a way of measuring sets

Remember that you are given the job of building the science of randomness. This means somehow ‘measuring chance’.

It was clever to formulate our notions of events and sample spaces in terms of sets: it gives us something to measure. ‘Probability’, the name that we give to our chance-measure, is a way of measuring sets.

Most of this course is about probability distributions.

A probability distribution is a rule according to which probability is apportioned, or distributed, among the different sets in the sample space.

At its simplest, a probability distribution just lists every element in the sample space and allots it a probability between 0 and 1, such that the total sum of probabilities is 1.

Discrete probability distributions
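
As a simple illustration (not taken from the slide), the distribution for one roll of a fair die lists each outcome with probability 1/6:

\[
\Omega = \{1, 2, 3, 4, 5, 6\}, \qquad P(\{i\}) = \tfrac{1}{6} \ \text{for each } i, \qquad \sum_{i=1}^{6} P(\{i\}) = 1.
\]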

Continuous probability distributions

On a continuous sample space Ω, e.g. Ω = [0, 1], we cannot list all the elements and give each an individual probability. We will need the more sophisticated methods detailed later in the course.

However, the same principle applies. A continuous probability distribution is a rule under which we can calculate a probability between 0 and 1 for any set, or event, A ⊆ Ω.
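
For example (an illustration, not from the slide), the uniform distribution on Ω = [0, 1] assigns to any subinterval a probability equal to its length:

\[
P([a, b]) = b - a \quad \text{for } 0 \le a \le b \le 1, \qquad \text{e.g. } P([0.2,\, 0.5]) = 0.3.
\]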

1.2.3 Probability Axioms

For any sample space, discrete or continuous, all of probability theory is based on the following three definitions, or axioms.
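
The axioms themselves are not in the transcript; in their standard (Kolmogorov) form, for a probability P on Ω:

\[
\text{1. } P(A) \ge 0 \ \text{for every event } A; \qquad
\text{2. } P(\Omega) = 1;
\]
\[
\text{3. } P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i) \ \text{ whenever } A_1, A_2, \dots \ \text{are mutually exclusive}.
\]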

If our rule for ‘measuring sets’ satisfies the three axioms, it is a valid probability distribution.

Note: The axioms can never be ‘proved’: they are definitions.

Note: Remember that an EVENT is a SET: an event is a subset of the sample space.

1.2.4 Probabilities of combined events

In §1.1 we discussed unions, intersections, and complements of events. We now look at the probabilities of these combinations. Everything below applies to events (sets) in either a discrete or a continuous sample space.

1. Probability of a union

Let A and B be events on a sample space Ω. There are two cases for the probability of the union A ∪ B:

1. A and B are mutually exclusive (no overlap): i.e. A ∩ B = ∅.

2. A and B are not mutually exclusive: A ∩ B ≠ ∅.
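
The corresponding formulas, in their standard form:

\[
\text{Case 1 (mutually exclusive): } P(A \cup B) = P(A) + P(B);
\]
\[
\text{Case 2 (general): } P(A \cup B) = P(A) + P(B) - P(A \cap B).
\]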

Explanation: when A and B overlap, the outcomes in A ∩ B are counted twice in P(A) + P(B), so we subtract P(A ∩ B) once to correct for the double counting.

2. Probability of an intersection

There is no easy formula for P(A ∩ B). We might be able to use statistical independence (§1.4). If A and B are not statistically independent, we often use conditional probability (§1.3).
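
For reference, the two standard routes (stated here as a sketch; the formal definitions come in §1.3 and §1.4) are:

\[
P(A \cap B) = P(A)\,P(B) \ \text{ if } A \text{ and } B \text{ are independent}, \qquad
P(A \cap B) = P(A \mid B)\,P(B) \ \text{ in general (when } P(B) > 0\text{)}.
\]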

3. Probability of a complement
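
The formula is not in the transcript; since A and its complement partition Ω, it is the standard identity:

\[
P(\overline{A}) = 1 - P(A).
\]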

1.2.5 The Partition Theorem

The Partition Theorem.
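
A standard statement (assuming B₁, …, Bₖ partition Ω, as in §1.1.4): for any event A,

\[
P(A) = \sum_{i=1}^{k} P(A \cap B_i).
\]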

300 Australians were asked about their car preferences in 1998. Of the respondents, 33% had children. The respondents were asked what sort of car they would like if they could choose any car at all. 13% of respondents had children and chose a large car. 12% of respondents did not have children and chose a large car.

Find the probability that a randomly chosen respondent:

(a) would choose a large car;

(b) either has children or would choose a large car (or both).

1.2.6 Examples of basic probability calculations

First formulate events:
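
A worked sketch, with event names chosen here for illustration: let C = “respondent has children” and L = “respondent would choose a large car”, so that P(C) = 0.33, P(C ∩ L) = 0.13 and P(C̄ ∩ L) = 0.12. Then

\[
\text{(a) } P(L) = P(C \cap L) + P(\overline{C} \cap L) = 0.13 + 0.12 = 0.25 \quad \text{(Partition Theorem)};
\]
\[
\text{(b) } P(C \cup L) = P(C) + P(L) - P(C \cap L) = 0.33 + 0.25 - 0.13 = 0.45.
\]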

Respondents were also asked their opinions on car reliability and fuel consumption. 84% of respondents considered reliability to be of high importance, while 40% considered fuel consumption to be of high importance.

Formulate events: R = “considers reliability of high importance”, F = “considers fuel consumption of high importance”.

Probability that respondent considers BOTH reliability AND fuel consumption of high importance.

(f) Find the probability that a respondent considered reliability, but not fuel consumption, of high importance.
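
The working for these parts is not in the transcript, and P(R ∩ F) cannot be found from P(R) = 0.84 and P(F) = 0.40 alone; it needs extra information (for example P(R ∪ F), or an independence assumption). Once P(R ∩ F) is known, part (f) follows by partitioning R:

\[
P(R \cap \overline{F}) = P(R) - P(R \cap F).
\]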

1.2.7 Formal probability proofs: non-examinable


§1.3 Conditional probability

1.3.1 Conditional Probability

Conditioning is another of the fundamental tools of probability: probably the most fundamental tool. It is especially helpful for calculating the probabilities of intersections, such as P(A∩B), which themselves are critical for the useful Partition Theorem.

Additionally, the whole field of stochastic processes is based on the idea of conditional probability. What happens next in a process depends, or is conditional, on what has happened beforehand.

Dependent events

Suppose A and B are two events on the same sample space. There will often be dependence between A and B. This means that if we know that B has occurred, it changes our knowledge of the chance that A will occur.

Example: Toss a die twice (36 equally likely outcomes).

Conditioning as reducing the sample space

Conditioning on B reduces the sample space from the 36 equally likely outcomes to the 3 outcomes in B; of these, 2 also lie in A, so

\[ P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{2/36}{3/36} = \frac{2}{3}. \]

Definition of conditional probability

Conditional probability provides us with a way to reason about the outcome of an experiment, based on partial information.
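
The defining formula, in its standard form:

\[
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad \text{provided } P(B) > 0.
\]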

Conditional Probabilities Satisfy the Three Axioms
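
That is, for a fixed conditioning event B with P(B) > 0, the map A ↦ P(A | B) is itself a probability distribution; a sketch of the three checks:

\[
P(A \mid B) \ge 0, \qquad P(\Omega \mid B) = \frac{P(\Omega \cap B)}{P(B)} = 1, \qquad
P\Big(\bigcup_i A_i \,\Big|\, B\Big) = \sum_i P(A_i \mid B) \ \text{for mutually exclusive } A_i.
\]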

Conditional Probabilities Satisfy General Probability Laws

Simple Example using Conditional Probabilities

The Multiplication Rule
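
Rearranging the definition of conditional probability gives the multiplication rule, and applying it repeatedly gives the chain rule (stated here in its standard form):

\[
P(A \cap B) = P(B)\,P(A \mid B),
\]
\[
P(A_1 \cap A_2 \cap \dots \cap A_n) = P(A_1)\,P(A_2 \mid A_1)\,P(A_3 \mid A_1 \cap A_2) \cdots P(A_n \mid A_1 \cap \dots \cap A_{n-1}).
\]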

1.3.2 Multiplication (Chain) Rule: Example

Example: Three cards are drawn from an ordinary 52-card deck without replacement (drawn cards are not placed back in the deck). We wish to find the probability that none of the three cards is a “heart”.
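
A worked sketch using the chain rule, with Aᵢ = “the i-th card drawn is not a heart” (names chosen here for illustration); there are 39 non-hearts among the 52 cards:

\[
P(A_1 \cap A_2 \cap A_3) = P(A_1)\,P(A_2 \mid A_1)\,P(A_3 \mid A_1 \cap A_2)
= \frac{39}{52} \cdot \frac{38}{51} \cdot \frac{37}{50} \approx 0.41.
\]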

1.3.3 Total Probability Theorem
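
The theorem statement is not in the transcript; in its standard form, for events B₁, …, Bₖ that partition Ω with P(Bᵢ) > 0, and any event A:

\[
P(A) = \sum_{i=1}^{k} P(B_i)\,P(A \mid B_i).
\]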

Example using the Total Probability Theorem

You enter a chess tournament where your probability of winning a game is 0.3 against half the players (call them type 1), 0.4 against a quarter of the players (call them type 2), and a third probability against the remaining quarter (call them type 3). You play a game against a randomly chosen opponent. What is the probability of winning?
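
The winning probability against the type 3 players is missing from the transcript; assuming, purely for illustration, that it is 0.5, the Total Probability Theorem gives

\[
P(\text{win}) = \tfrac{1}{2}(0.3) + \tfrac{1}{4}(0.4) + \tfrac{1}{4}(0.5) = 0.15 + 0.10 + 0.125 = 0.375.
\]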

1.3.4 Bayes’ Theorem: inverting conditional probabilities
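
The standard statement, for a partition B₁, …, Bₖ of Ω and an event A with P(A) > 0:

\[
P(B_j \mid A) = \frac{P(A \mid B_j)\,P(B_j)}{\sum_{i=1}^{k} P(A \mid B_i)\,P(B_i)}.
\]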

Example: The False-Positive Puzzle.
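
The slide's own numbers are not in the transcript; a hypothetical version with illustrative figures runs as follows. Suppose 1 person in 1000 has a disease (D), a test detects it with probability P(+ | D) = 0.95, and it gives a false positive with probability P(+ | D̄) = 0.05. Then by Bayes’ Theorem

\[
P(D \mid +) = \frac{0.95 \times 0.001}{0.95 \times 0.001 + 0.05 \times 0.999} \approx 0.019,
\]

so even after a positive result the chance of actually having the disease is under 2%: the inverted conditional probability P(D | +) is very different from P(+ | D).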

§1.4 Independence of events

1.4.1 Statistical Independence

We can extend the definition to arbitrarily many events:
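
The definitions themselves did not survive in the transcript; in their standard form: events A and B are statistically independent if

\[
P(A \cap B) = P(A)\,P(B),
\]

and events A₁, …, Aₙ are mutually independent if the same product rule holds for every sub-collection:

\[
P\big(A_{i_1} \cap \dots \cap A_{i_m}\big) = P(A_{i_1}) \cdots P(A_{i_m}) \quad \text{for all distinct indices } i_1, \dots, i_m.
\]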

Statistical independence for calculating the probability of an intersection

In §1.2 we said that it is often hard to calculate P(A ∩ B).

We usually have two choices.
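
The two choices were shown as formulas on the slide; as in §1.2 and §1.3, they amount to:

\[
P(A \cap B) = P(A)\,P(B) \quad \text{(if A and B are independent)}, \qquad
P(A \cap B) = P(A \mid B)\,P(B) \quad \text{(in general)}.
\]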

Pairwise independence does not imply mutual independence
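
A standard counterexample (an illustration, not from the slide): toss two fair coins and let A = “first coin is heads”, B = “second coin is heads”, C = “the two coins show the same face”. Each pair of events is independent, but the three events are not mutually independent:

\[
P(A \cap B) = P(A \cap C) = P(B \cap C) = \tfrac{1}{4} = \tfrac{1}{2}\cdot\tfrac{1}{2},
\qquad
P(A \cap B \cap C) = \tfrac{1}{4} \ne \tfrac{1}{8} = P(A)\,P(B)\,P(C).
\]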