
Time Series in R: Forecasting and Visualisation

Forecast evaluation

29 May 2017


Outline

1 Forecasting residuals

2 Evaluating forecast accuracy

3 Forecasting benchmark methods

4 Lab session 7

5 Time series cross-validation

6 Lab session 8


Fitted values

yt|t−1 is the forecast of yt based on observations y1, . . . , yt−1.
We call these “fitted values”.
They are often not true forecasts, since the parameters are estimated on all of the data.
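In the forecast package, the fitted values and residuals of any fitted model can be pulled out with the fitted() and residuals() generics. A minimal sketch (assuming the fpp2 package, which loads forecast and the auscafe data, is attached; ets() is used purely as an example model):

library(fpp2)                        # assumed: provides forecast and the auscafe data

fit  <- ets(auscafe)                 # any fitted forecasting model
fits <- fitted(fit)                  # one-step fitted values y_{t|t-1}
res  <- residuals(fit)               # residuals e_t (relative errors for multiplicative ETS models)
autoplot(auscafe) + autolayer(fits, series = "Fitted")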


Forecasting residuals

Residuals in forecasting: the difference between an observed value and its fitted value: et = yt − yt|t−1.

Assumptions
1 {et} uncorrelated. If they aren’t, then there is information left in the residuals that should be used in computing forecasts.
2 {et} have mean zero. If they don’t, then the forecasts are biased.

Useful properties (for prediction intervals)
3 {et} have constant variance.
4 {et} are normally distributed.


checkresiduals() function

livestock %>% auto.arima %>% checkresiduals(test=FALSE)

[Figure: Residuals from ARIMA(0,1,0) with drift — time plot, ACF, and histogram of the residuals.]


checkresiduals() function

livestock %>% auto.arima %>% checkresiduals(plot=FALSE)

##
##  Ljung-Box test
##
## data:  Residuals from ARIMA(0,1,0) with drift
## Q* = 8.6, df = 9, p-value = 0.5
##
## Model df: 1.   Total lags used: 10
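The Ljung-Box test reported above can also be run directly with Box.test() from base R, passing the same number of lags and the model degrees of freedom. A minimal sketch (assuming the fpp2 package is loaded so livestock is available):

fit <- auto.arima(livestock)               # ARIMA(0,1,0) with drift, as above
Box.test(residuals(fit), lag = 10,         # total lags used: 10
         type = "Ljung-Box", fitdf = 1)    # model df: 1 (the drift coefficient)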


checkresiduals() function

auscafe %>% ets %>% checkresiduals(test=FALSE)

[Figure: Residuals from ETS(M,A,M) — time plot, ACF, and histogram of the residuals.]


checkresiduals() function

auscafe %>% ets %>% checkresiduals(plot=FALSE)

##
##  Ljung-Box test
##
## data:  Residuals from ETS(M,A,M)
## Q* = 64, df = 8, p-value = 7e-11
##
## Model df: 16.   Total lags used: 24


Residuals and forecasting

Autocorrelations left in the residuals suggest the forecast method can be improved (in theory).
Small autocorrelations have little effect, even if significant.
Non-Gaussian residuals can be handled using bootstrapped forecast intervals:

forecast(..., bootstrap=TRUE)


Bootstrapped forecast intervals

auscafe %>% ets %>% forecast %>% autoplot

[Figure: Forecasts from ETS(M,A,M), with 80% and 95% prediction intervals.]


Bootstrapped forecast intervals

auscafe %>% ets %>% forecast(bootstrap=TRUE) %>% autoplot

[Figure: Forecasts from ETS(M,A,M), with bootstrapped 80% and 95% prediction intervals.]


Outline

1 Forecasting residuals

2 Evaluating forecast accuracy

3 Forecasting benchmark methods

4 Lab session 7

5 Time series cross-validation

6 Lab session 8


Training and test sets

[Diagram: observations on a time axis, split into Training data followed by Test data.]

A model which fits the training data well will not necessarily forecast well.
A perfect fit can always be obtained by using a model with enough parameters.
Over-fitting a model to data is just as bad as failing to identify a systematic pattern in the data.
The test set must not be used for any aspect of model development or calculation of forecasts.
Forecast accuracy is based only on the test set.


Forecast errors

Forecast “error”: the difference between an observed value and its forecast.

eT+h = yT+h − yT+h|T,

where the training data is given by {y1, . . . , yT}

Unlike residuals, forecast errors on the test set involve multi-step forecasts.
These are true forecast errors, as the test data is not used in computing yT+h|T.
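A minimal sketch of computing forecast errors on a held-out test set (the split points are illustrative; the same split is reused for the accuracy measures on the following slides):

training <- window(auscafe, end = c(2013, 12))   # y_1, ..., y_T
test     <- window(auscafe, start = c(2014, 1))  # y_{T+1}, ..., y_{T+H}

fc <- forecast(ets(training), h = length(test))  # y_{T+h|T} for h = 1, ..., H
e  <- test - fc$mean                             # forecast errors e_{T+h}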


Measures of forecast accuracy

training <- window(auscafe, end=c(2013,12))
test <- window(auscafe, start=c(2014,1))
training %>% ets %>% forecast(h=length(test)) -> fc
autoplot(fc) + autolayer(test)

[Figure: Forecasts from ETS(M,A,M) fitted to the training data, with 80% and 95% prediction intervals and the test data overlaid.]


Measures of forecast accuracy

Let yt+h|t denote the forecast of yt+h using data up to time t.

Training set measures:

$$\text{MAE} = \frac{1}{T}\sum_{t=1}^{T} |y_t - \hat{y}_{t|t-1}| \qquad \text{MSE} = \frac{1}{T}\sum_{t=1}^{T} (y_t - \hat{y}_{t|t-1})^2$$

$$\text{RMSE} = \sqrt{\frac{1}{T}\sum_{t=1}^{T} (y_t - \hat{y}_{t|t-1})^2} \qquad \text{MAPE} = \frac{100}{T}\sum_{t=1}^{T} |y_t - \hat{y}_{t|t-1}|\,/\,|y_t|$$

MAE, MSE, RMSE are all scale dependent.
MAPE is scale independent but is only sensible if yt ≫ 0 for all t, and y has a natural zero.


Measures of forecast accuracy

Let yt+h|t denote the forecast of yt+h using data up to time t.

Test set measures:

$$\text{MAE} = \frac{1}{H}\sum_{h=1}^{H} |y_{T+h} - \hat{y}_{T+h|T}| \qquad \text{MSE} = \frac{1}{H}\sum_{h=1}^{H} (y_{T+h} - \hat{y}_{T+h|T})^2$$

$$\text{RMSE} = \sqrt{\frac{1}{H}\sum_{h=1}^{H} (y_{T+h} - \hat{y}_{T+h|T})^2} \qquad \text{MAPE} = \frac{100}{H}\sum_{h=1}^{H} |y_{T+h} - \hat{y}_{T+h|T}|\,/\,|y_{T+h}|$$

MAE, MSE, RMSE are all scale dependent.
MAPE is scale independent but is only sensible if yt ≫ 0 for all t, and y has a natural zero.
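The test-set measures above can be computed directly from the forecast errors; a minimal sketch using the training, test, and fc objects created on the earlier slide (the values should agree with accuracy(fc, test) up to rounding):

e <- test - fc$mean                        # e_{T+h} = y_{T+h} - y_{T+h|T}

MAE  <- mean(abs(e))
RMSE <- sqrt(mean(e^2))
MAPE <- 100 * mean(abs(e) / abs(test))
c(MAE = MAE, RMSE = RMSE, MAPE = MAPE)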


Measures of forecast accuracy

Mean Absolute Scaled Error

$$\text{MASE} = \frac{1}{H}\sum_{h=1}^{H} |y_{T+h} - \hat{y}_{T+h|T}|\,/\,Q$$

where Q is a stable measure of the scale of the time series {yt}.

Proposed by Hyndman and Koehler (IJF, 2006). For non-seasonal time series,

$$Q = \frac{1}{T-1}\sum_{t=2}^{T} |y_t - y_{t-1}|$$

works well. Then MASE is equivalent to MAE relative to a naïve method.


Measures of forecast accuracy

Mean Absolute Scaled Error

$$\text{MASE} = \frac{1}{H}\sum_{h=1}^{H} |y_{T+h} - \hat{y}_{T+h|T}|\,/\,Q$$

where Q is a stable measure of the scale of the time series {yt}.

Proposed by Hyndman and Koehler (IJF, 2006). For seasonal time series,

$$Q = \frac{1}{T-m}\sum_{t=m+1}^{T} |y_t - y_{t-m}|$$

works well. Then MASE is equivalent to MAE relative to a seasonal naïve method.
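A minimal sketch of MASE computed by hand for monthly data (m = 12), scaling the test-set MAE by the in-sample seasonal naïve MAE; it reuses the training, test, and fc objects from the earlier slides, and should agree with the MASE column of accuracy(fc, test):

m <- 12                                    # seasonal period for monthly data
Q <- mean(abs(diff(training, lag = m)))    # Q = mean of |y_t - y_{t-m}| over the training data
MASE <- mean(abs(test - fc$mean)) / Q
MASE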


Measures of forecast accuracy

training <- window(auscafe, end=c(2013,12))
test <- window(auscafe, start=c(2014,1))
training %>% ets %>% forecast(h=length(test)) -> fc
accuracy(fc, test)

##                      ME    RMSE     MAE      MPE
## Training set   0.001482 0.03816 0.02761  0.09342
## Test set      -0.121597 0.15318 0.13016 -3.51895
##                MAPE   MASE   ACF1 Theil's U
## Training set  2.056 0.2881 0.2006        NA
## Test set      3.780 1.3583 0.6323    0.7647


Poll: true or false?

1 Good forecast methods should have normally distributed residuals.

2 A model with small residuals will give good forecasts.

3 The best measure of forecast accuracy is MAPE.

4 If your model doesn’t forecast well, you should make it more complicated.

5 Always choose the model with the best forecast accuracy as measured on the test set.


Outline

1 Forecasting residuals

2 Evaluating forecast accuracy

3 Forecasting benchmark methods

4 Lab session 7

5 Time series cross-validation

6 Lab session 8


Forecasting benchmark methods

Average method
Forecasts equal to mean of historical data.

Naïve method
Forecasts equal to last observed value. Consequence of efficient market hypothesis.

Seasonal naïve method
Forecasts equal to last value from the same season.

Drift method
Forecasts equal to last value plus average change. Equivalent to extrapolating a line drawn between the first and last observations.


Forecasting benchmark methods

Mean: meanf(y, h=20)
Naïve: naive(y, h=20)
Seasonal naïve: snaive(y, h=20)
Drift: rwf(y, drift=TRUE, h=20)

Check that your method does better than these standard benchmark methods.


Forecasting benchmark methods

livestock %>% rwf(drift=TRUE) %>% autoplot

[Figure: Forecasts from random walk with drift, with 80% and 95% prediction intervals.]


Forecasting benchmark methods

auscafe %>% snaive %>% autoplot

[Figure: Forecasts from the seasonal naïve method, with 80% and 95% prediction intervals.]


Forecasting benchmark methods

training %>% ets %>% forecast(h=length(test)) -> fc_ets
training %>% snaive(h=length(test)) -> fc_snaive
accuracy(fc_ets, test)

##                      ME    RMSE     MAE      MPE
## Training set   0.001482 0.03816 0.02761  0.09342
## Test set      -0.121597 0.15318 0.13016 -3.51895
##                MAPE   MASE   ACF1 Theil's U
## Training set  2.056 0.2881 0.2006        NA
## Test set      3.780 1.3583 0.6323    0.7647

accuracy(fc_snaive, test)

##                    ME   RMSE     MAE    MPE   MAPE
## Training set  0.08569 0.1226 0.09583  6.529  7.286
## Test set      0.41363 0.4344 0.41363 12.183 12.183
##                MASE   ACF1 Theil's U
## Training set  1.000 0.8425        NA
## Test set      4.317 0.6438     2.165


Outline

1 Forecasting residuals

2 Evaluating forecast accuracy

3 Forecasting benchmark methods

4 Lab session 7

5 Time series cross-validation

6 Lab session 8


Lab Session 7


Outline

1 Forecasting residuals

2 Evaluating forecast accuracy

3 Forecasting benchmark methods

4 Lab session 7

5 Time series cross-validation

6 Lab session 8


Time series cross-validation

Traditional evaluation

[Diagram: observations on a time axis, split into Training data followed by Test data.]


Time series cross-validation
[Diagram: a sequence of expanding training sets, each one observation longer than the last, each followed by a test observation.]


Time series cross-validation

h = 1
[Diagram: rolling training sets, each followed by the test observation one step ahead.]

Forecast accuracy averaged over test sets. Also known as “evaluation on a rolling forecasting origin”.

h = 2, 3, 4, 5
[Diagrams: the same rolling-origin scheme, with the test observation 2, 3, 4, and 5 steps ahead of each training set.]
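Before reaching for the tsCV() helper shown next, the rolling forecasting origin can be coded directly with a loop. A minimal sketch (the naïve method and the minimum training length are illustrative choices):

h <- 1                                      # forecast horizon
k <- 48                                     # illustrative minimum training length
n <- length(auscafe)
e <- rep(NA, n)
for (i in k:(n - h)) {
  train <- window(auscafe, end = time(auscafe)[i])   # first i observations
  fc <- naive(train, h = h)
  e[i + h] <- auscafe[i + h] - fc$mean[h]            # h-step error from origin i
}
sqrt(mean(e^2, na.rm = TRUE))               # cross-validated RMSE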


tsCV function:

e <- tsCV(ts, forecastfunction, h=1, ...)

e1 <- tsCV(auscafe, stlf, etsmodel="AAN", damped=FALSE, lambda=0)

autoplot(e1)

[Figure: time plot of the cross-validation errors e1.]

sqrt(mean((e1/auscafe)^2, na.rm=TRUE))

## [1] 0.0259


tsCV function:

For ets and auto.arima, you need to write a single forecasting function:

fets <- function(x, h, model="ZZZ", damped=NULL, ...) {
  forecast(ets(x, model=model, damped=damped), h=h)
}
e2 <- tsCV(auscafe, fets, model="MAM", damped=FALSE)
autoplot(e2)

[Figure: time plot of the cross-validation errors e2.]

sqrt(mean((e2/auscafe)^2, na.rm=TRUE))

## [1] 0.03301


tsCV function:

Comparison should be over the same observations:

pe1 <- window(100*e1/auscafe, start=1985)
pe2 <- window(100*e2/auscafe, start=1985)
sqrt(mean(pe1^2, na.rm=TRUE))

## [1] 2.571

sqrt(mean(pe2^2, na.rm=TRUE))

## [1] 2.733


Time series cross-validation

A good way to choose the best forecasting model is to find the model with the smallest RMSE computed using time series cross-validation.
Minimizing the AICc is asymptotically equivalent to minimizing the time series cross-validation error with h = 1.
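A minimal sketch of comparing candidate models by cross-validated RMSE, reusing the fets() wrapper defined on the earlier slide; the seasonal naïve benchmark is included purely for illustration:

e_ets    <- tsCV(auscafe, fets, model = "MAM", damped = FALSE, h = 1)
e_snaive <- tsCV(auscafe, snaive, h = 1)

rmse <- function(e) sqrt(mean(e^2, na.rm = TRUE))
c(ETS = rmse(e_ets), SNaive = rmse(e_snaive))   # choose the model with the smaller value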


Outline

1 Forecasting residuals

2 Evaluating forecast accuracy

3 Forecasting benchmark methods

4 Lab session 7

5 Time series cross-validation

6 Lab session 8


Lab Session 8
