Generalised linear models


Page 1: Generalised linear models

Generalised linear models

Gil McVean, Department of Statistics
Thursday 5th November 2009

Page 2: Generalised linear models

Questions to ask…

• How do you add covariates to a model?

• What is a linear model?

• What is a generalised linear model?

• How do you estimate parameters and test hypotheses with GLMs?

• How do you assess model fit with GLMs?


Page 3: Generalised linear models

What is a covariate?

• A covariate is a quantity that may influence the outcome of interest
– Genotype at a SNP
– Age of mice when the measurement was taken
– Batch of chips from which gene expression was measured

• Previously, we have looked at using likelihood to estimate parameters of underlying distributions

• We want to generalise this idea to ask how covariates might influence the underlying parameters

• Much statistical modelling is concerned with considering linear effects of covariates on underlying parameters


Page 4: Generalised linear models

What is a linear model?

• In a linear model, the expectation of the response variable is defined as a linear combination of explanatory variables

• Explanatory variables can include any function of the original data

• But the link between E(Y) and X (or some function of X) is ALWAYS linear and the error is ALWAYS Gaussian

Y_i = β_0 + β_1 X_i + β_2 Z_i + β_3 X_i Z_i + … + ε_i

where:
– Y_i is the response variable
– β_0 is the intercept
– β_1 X_i and β_2 Z_i are linear relationships with explanatory variables
– β_3 X_i Z_i is an interaction term
– ε_i is the Gaussian error

Because explanatory variables can be any function of the original data, a model such as

Y_i = β_0 + β_1 e^{X_i} + β_2 Z_i^2 + … + ε_i

is still a linear model (linear in the coefficients).
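• A minimal sketch of fitting such a model in R (simulated data; the coefficient values are illustrative only):

set.seed(1)
n <- 100
X <- rnorm(n); Z <- rnorm(n)
Y <- 1 + 2*X - 0.5*Z + 0.3*X*Z + rnorm(n)   # linear predictor plus Gaussian error
fit <- lm(Y ~ X + Z + X:Z)                  # equivalently lm(Y ~ X*Z)
summary(fit)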


Page 5: Generalised linear models

What is a GLM?

• There are many settings where the error is non-Gaussian and/or the link between E(Y) and X is not necessarily linear

– Discrete data (e.g. counts in multinomial or Poisson experiments)
– Categorical data (e.g. disease status)
– Highly skewed data (e.g. income, ratios)

• Generalised linear models keep the notion of linearity, but enable the use of non-Gaussian error models

• g is called the link function
– In linear models, the link function is the identity

• The response variable can be drawn from any distribution of interest (the distribution function)

– In linear models this is Gaussian

E(Y_i) = g^{-1}(β_0 + β_1 X_i + β_2 Z_i + β_3 X_i Z_i + …)


Page 6: Generalised linear models

Poisson regression

• In Poisson regression the expected value of the response variable is given by the exponent of the linear term

• The link function is the log

• Note that several distribution functions are possible (normal, Poisson, binomial counts), though in practice Poisson regression is typically used to model count data (particularly when counts are low)

E(Y_i) = exp(β_0 + β_1 X_i + β_2 Z_i + β_3 X_i Z_i + …)
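• A minimal sketch in R (simulated counts; the coefficients are illustrative only):

set.seed(2)
x <- runif(200, 0, 2)
y <- rpois(200, lambda = exp(0.5 + 1.2*x))   # E(Y) = exp(b0 + b1*x)
fit <- glm(y ~ x, family = poisson)          # log link is the default for poisson
coef(fit)                                    # estimates should be close to (0.5, 1.2)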


Page 7: Generalised linear models

Example: Caesarean sections in public and private hospitals


Page 8: Generalised linear models

Boxplots of rates of C sections


Page 9: Generalised linear models

Fitting a model without covariates


> analysis <- glm(d$Caes ~ d$Births, family = "poisson")
> summary(analysis)

Call:
glm(formula = d$Caes ~ d$Births, family = "poisson")

Deviance Residuals:
     Min        1Q    Median        3Q       Max
-2.81481  -0.73305  -0.08718   0.74444   2.19103

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.132e+00  1.018e-01  20.949  < 2e-16 ***
d$Births    4.406e-04  5.395e-05   8.165  3.21e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 99.990 on 19 degrees of freedom
Residual deviance: 36.415 on 18 degrees of freedom
AIC: 127.18

Number of Fisher Scoring iterations: 4

Implies an average of 12.7 per 1000 births
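• One way to recover such a per-birth rate from the fitted model (a sketch; it assumes d$Births holds the number of births per hospital, as in the call above):

rate <- fitted(analysis) / d$Births   # fitted Caesarean counts per birth
mean(rate) * 1000                     # average rate per 1000 births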

Page 10: Generalised linear models

Fitting a model with covariates

• Unexpectedly, this indicates that public hospitals actually have a higher rate of Caesarean sections than private ones


glm(formula = d$Caes ~ d$Births + d$Hospital, family = "poisson")

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-2.3270  -0.6121  -0.0899   0.5398   1.6626

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.351e+00  2.501e-01   5.402 6.58e-08 ***
d$Births    3.261e-04  6.032e-05   5.406 6.45e-08 ***
d$Hospital  1.045e+00  2.729e-01   3.830 0.000128 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Implies an average of 15.2 per 1000 births in public hospitals and 5.4 per 1000 births in private ones

Page 11: Generalised linear models

Checking model fit

• Look at the distribution of residuals and at how well observed values are predicted
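• A sketch of quick checks in R, reusing the fitted object analysis from above:

res <- residuals(analysis, type = "deviance")   # deviance residuals
qqnorm(res); qqline(res)                        # distribution of residuals
plot(d$Caes, fitted(analysis))                  # observed vs fitted counts
abline(0, 1)                                    # reference line of perfect prediction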


Page 12: Generalised linear models

What’s going on?

• Initial summary suggested opposite result to GLM analysis. Why?

• The relationship between the number of births and the number of C-sections does not appear to be linear

• Accounting for this removes most of the apparent differences between hospital types

• There is also one quite influential outlier


            Normal   Caesarean
Private        412          16
Public       21121         285

Relative risk for Private (compared to Public) = 2.8
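• Checking the relative risk from the table directly (births = normal + Caesarean):

(16 / (412 + 16)) / (285 / (21121 + 285))   # ≈ 2.8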

Page 13: Generalised linear models

Simpson’s paradox

• Be careful about adding together observations – this can be misleading

• E.g. Berkeley sex-bias case


         Applicants   % admitted
Men            8442          44%
Women          4321          35%

Major   Men: applicants   % admitted   Women: applicants   % admitted
A                   825          62%                 108          82%
B                   560          63%                  25          68%
C                   325          37%                 593          34%
D                   417          33%                 375          35%
E                   191          28%                 393          24%
F                   272           6%                 341           7%

In aggregate, it appears that women have a lower admission rate.

But within departments women are typically at least as successful as men; they simply apply to more competitive subjects.
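• These data (for the six largest departments) ship with R as the built-in table UCBAdmissions, so the paradox can be checked directly:

margin.table(UCBAdmissions, c(2, 1))                 # counts aggregated over departments
prop.table(margin.table(UCBAdmissions, c(2, 1)), 1)  # overall admission rates by gender
prop.table(UCBAdmissions, c(2, 3))["Admitted", , ]   # rates by gender within each department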

Page 14: Generalised linear models


Page 15: Generalised linear models

Finding MLEs in GLM

• In linear modelling we can use the beautiful compactness of linear algebra to find MLEs and estimates of the variance for parameters

• Consider an n by k+1 data matrix, X, where n is the number of observations and k is the number of explanatory variables, and a response vector Y

– the first column is ‘1’ for the intercept term

• The MLEs for the coefficients (β) can be estimated using

β̂ = (X^T X)^{-1} X^T Y,   with β̂ ~ N(β, σ² (X^T X)^{-1})

• In GLMs, there is usually no such compact analytical expression for the MLEs
– Use numerical methods to maximise the likelihood
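• A minimal sketch of the matrix computation in R (simulated design; values illustrative):

set.seed(3)
n <- 100
X <- cbind(1, rnorm(n), rnorm(n))          # first column of 1s for the intercept
Y <- X %*% c(2, 0.5, -1) + rnorm(n)
beta_hat <- solve(t(X) %*% X, t(X) %*% Y)  # (X'X)^{-1} X'Y
cbind(beta_hat, coef(lm(Y ~ X - 1)))       # agrees with lm()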


Page 16: Generalised linear models

Testing hypotheses in GLMs

• For the parameters we are interested in we typically want to ask how much evidence there is that these are different from zero

• For this we need to construct confidence intervals for the parameter estimates

• We could estimate the confidence interval by finding all parameter values with log-likelihood no more than 1.92 units worse than the MLE (half the 3.84 critical value of the chi-squared distribution with 1 degree of freedom)

• Alternatively, we might use bootstrap techniques to estimate the variance of the parameter estimates

• However, we can also appeal to theoretical considerations of likelihood (again based on the CLT) that show that parameter estimates are asymptotically normal with variance described by the Fisher information matrix

• Informally, the information matrix describes the sharpness of the likelihood curve around the MLE and the extent to which parameter estimates are correlated
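• In R, the estimated inverse information (the variance-covariance matrix of the estimates) is returned by vcov(); a sketch, reusing the fitted object analysis from earlier:

se <- sqrt(diag(vcov(analysis)))                           # standard errors of coefficients
cbind(coef(analysis) - 1.96*se, coef(analysis) + 1.96*se)  # approximate 95% CIs
confint.default(analysis)                                  # the same Wald intervals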


Page 17: Generalised linear models

Logistic regression

• When only two types of outcome are possible (e.g. disease/not-disease) we can model counts by the binomial

• If we want to perform inference about the factors that influence the probability of ‘success’ it is usual to use the logistic model

• The link function here is the logit

E(Y_i) = exp(β_0 + β_1 X_i + β_2 Z_i + …) / (1 + exp(β_0 + β_1 X_i + β_2 Z_i + …))

g(μ) = log( μ / (1 − μ) )
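• In R, qlogis() and plogis() are the logit and inverse-logit functions; a small sketch of the pair:

eta <- c(-2, 0, 2)          # illustrative values of the linear predictor
plogis(eta)                 # exp(eta) / (1 + exp(eta)) = E(Y)
qlogis(plogis(eta))         # the logit link recovers the linear predictor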


Page 18: Generalised linear models

Example: testing for genotype association

• In a cohort study, we observe the number of individuals in a population that get a particular disease

• We want to ask whether a particular genotype is associated with increased risk

• The simplest test is one in which we consider a single coefficient for the genotypic value

Genotype                  AA         Aa          aa
Genotypic value           0          1           2
Frequency in population   (1−f)^2    2f(1−f)     f^2
Probability of disease    p_0        p_1         p_2

p_i = 1 / (1 + exp(−[β_0 + β_1 G_i]))

[Plot: probability of disease p_0, p_1, p_2 against genotypic value 0, 1, 2, for β_0 = −4, β_1 = 2]
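• With the illustrated values β_0 = −4, β_1 = 2, the three penetrances can be computed directly in R:

g <- 0:2               # genotypic value (copies of the risk allele)
plogis(-4 + 2*g)       # p0 ≈ 0.018, p1 ≈ 0.119, p2 = 0.5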


Page 19: Generalised linear models

A note on the model

• Note that each copy of the risk allele contributes in an additive way to the exponent

• This does not mean that each allele ‘adds’ a fixed amount to the probability of disease

• Rather, each allele contributes a fixed amount to the log-odds

• This has the effect of maintaining Hardy-Weinberg equilibrium within both the cases and controls

log-odds = log [ P(disease) / P(not disease) ] = β_0 + β_1 G_i


Page 20: Generalised linear models

Concepts in disease genetics

• Relative risk describes the risk to a person in an exposed group compared to the unexposed group

• The odds ratio compares the odds of disease occurring in one group relative to another

• If the absolute risk of disease is low the two will be very similar


RR = P(disease | exposed) / P(disease | not exposed)

OR = [ P(disease | exposed) / P(not disease | exposed) ] ÷ [ P(disease | not exposed) / P(not disease | not exposed) ]
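• A numerical sketch (hypothetical risks) of why the two agree when absolute risk is low:

p1 <- 0.02; p0 <- 0.01             # risk in exposed / unexposed groups
p1 / p0                            # RR = 2.00
(p1/(1 - p1)) / (p0/(1 - p0))      # OR ≈ 2.02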

Page 21: Generalised linear models

Cont.

• Suppose in a given study we observe the following counts

• We can fit a GLM using the logit link function and binomial probabilities

• We have genotype data stored in the vector gt and disease status in the vector status

• Using R, this is specified by the command– glm(formula = status ~ gt, family = binomial)

Genotype                    0       1      2
Counts with disease        26      39     21
Counts without disease   1298     567     49
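• A sketch of how gt and status could be expanded from these counts before fitting:

gt     <- rep(rep(0:2, 2), times = c(26, 39, 21, 1298, 567, 49))
status <- rep(c(1, 0), times = c(26 + 39 + 21, 1298 + 567 + 49))
fit <- glm(status ~ gt, family = binomial)   # output interpreted on the next slide
summary(fit)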


Page 22: Generalised linear models

Interpreting results

Call:
glm(formula = status ~ gt, family = binomial)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-0.8554  -0.4806  -0.2583  -0.2583   2.6141

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -4.6667     0.2652 -17.598   <2e-16 ***
gt            1.2833     0.1407   9.123   <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 967.36 on 1999 degrees of freedom
Residual deviance: 886.28 on 1998 degrees of freedom
AIC: 890.28

Number of Fisher Scoring iterations: 6

• Estimate: MLE for the coefficients
• Std. Error: standard error of the estimates
• z value: estimate / std. error
• Pr(>|z|): P-value from the normal distribution
• Deviance residuals: contribution of individual observations to overall goodness of fit (for the fitted model)
• Null deviance: goodness of fit of the null model (compared to the saturated model)
• Residual deviance: goodness of fit of the fitted model
• AIC: penalised likelihood used in model choice
• Number of Fisher Scoring iterations: iterations used to find the MLE


Page 23: Generalised linear models

Adding in extra covariates

• Adding in additional explanatory variables in GLM is essentially the same as in linear model analysis

• Likewise, we can look at interactions

• In the disease study we might want to consider age as a potentially important covariate

glm(formula = status ~ gt + age, family = binomial)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -7.00510    0.79209  -8.844   <2e-16 ***
gt           1.46729    0.17257   8.503   <2e-16 ***
age          0.04157    0.01903   2.185   0.0289 *


Page 24: Generalised linear models

Adding model complexity

• In the disease status analysis we might want to generalise the fitted model to one in which each genotype is allowed its own risk

p_i = 1 / (1 + exp(−[β_0 + β_1 I(G_i = 1) + β_2 I(G_i = 2)]))

glm(formula = status ~ g1 + g2 + age, family = binomial)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -5.46870    0.73155  -7.476 7.69e-14 ***
g1TRUE       1.25694    0.26278   4.783 1.72e-06 ***
g2TRUE       3.00816    0.33871   8.881  < 2e-16 ***
age          0.04224    0.01915   2.206   0.0274 *
---
    Null deviance: 684.44 on 1999 degrees of freedom
Residual deviance: 609.09 on 1996 degrees of freedom
AIC: 617.09
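• A sketch of the indicator coding (assuming gt, status and age as before; logical covariates produce the g1TRUE / g2TRUE labels seen above):

g1 <- gt == 1                                  # heterozygotes
g2 <- gt == 2                                  # risk-allele homozygotes
glm(status ~ g1 + g2 + age, family = binomial)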


Page 25: Generalised linear models

Model choice

• It is worth remembering that we cannot simply identify the ‘significant’ parameters and put them in our chosen model

• Significance for a parameter tests the marginal null hypothesis that the coefficient associated with that parameter is zero

• If two explanatory variables are highly correlated then marginally neither may be significant, yet a model containing only one of them may be highly significant

• There are several ways to choose appropriate models from data. These typically involve adding in parameters one at a time and adding some penalty to avoid over-fitting.
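• One common recipe in R is AIC-penalised stepwise selection with step() (a sketch, reusing gt, age and status from the disease example):

full <- glm(status ~ gt + age, family = binomial)
step(full)                                             # drops terms whose removal lowers AIC
null <- glm(status ~ 1, family = binomial)
step(null, scope = ~ gt + age, direction = "forward")  # or build up from the null model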
