Transcript of "Sharing Experiences in Operational Consensus Track Forecasting" (30 pages)

Page 1: Sharing Experiences in Operational Consensus Track Forecasting

Rapporteur: Andrew Burton

Team members: Philippe Caroff, James Franklin, Ed Fukada, T.C. Lee, Buck Sampson, Todd Smith.

Page 2: Consensus Track Forecasting

Single and multi-model approaches

Weighted and non-weighted methods

Selective and non-selective methods

Optimising consensus track forecasting

Practical considerations

Guidance on guidance and intensity consensus

Discussion

Recommendations

Page 3: Consensus Track Forecasting

Consensus methods now relatively widespread, because:

– Clear evidence of improvement (seasonal timescales) over individual guidance
– It's what forecasters naturally do
– Improved objectivity in track forecasting
– Removes the "windscreen wiper" effect

Page 4: Consensus Track Forecasting

Single-model approaches ("EPS")

Page 5: Consensus Track Forecasting

Single-model approaches ("EPS")

– Multiple runs, perturb initial conditions/physics
– Degraded resolution
– Generally not used operationally for direct input to the consensus forecast; generally used qualitatively
– Little work done on long-term verification of ensemble means
– Little work done on 'statistical calibration' of EPS probabilities

Page 6: Consensus Track Forecasting

Page 7: Consensus Track Forecasting

Single vs. multi-model approaches

– Disjoint in how these approaches are currently used operationally.
– Multi-model ensembles: fewer members, but with greater independence between members (?) and higher resolution.

Page 8: Consensus Track Forecasting

Multi-model approaches

– Combining deterministic forecasts of multiple models (not just NWP)
– Fairly widespread use in operations
– Weighted or non-weighted
– Selective or non-selective

Page 9: Consensus Track Forecasting

Multi-model approaches – a 'simple' example

Process (a minimal code sketch follows the list):

• Acquire tracks

• Perform initial position correction

• Interpolate tracks

• Geographically average
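The four steps above map directly onto a short script. The sketch below is illustrative only: the track layout (lists of (forecast hour, lat, lon) tuples), the function names, and the 12-hourly output times are assumptions, not any centre's operational code, and the plain longitude average would need extra care near the dateline.

```python
# Minimal non-selective consensus sketch (assumed data layout, not operational code).
import numpy as np

def correct_initial_position(track, obs_lat, obs_lon):
    """Shift a whole model track so its t=0 position matches the observed fix."""
    dlat = obs_lat - track[0][1]
    dlon = obs_lon - track[0][2]
    return [(t, lat + dlat, lon + dlon) for t, lat, lon in track]

def interpolate_track(track, hours):
    """Linearly interpolate a (hour, lat, lon) track onto common output hours."""
    t = [p[0] for p in track]
    lats = [p[1] for p in track]
    lons = [p[2] for p in track]
    valid = [h for h in hours if t[0] <= h <= t[-1]]   # no extrapolation
    return {h: (np.interp(h, t, lats), np.interp(h, t, lons)) for h in valid}

def simple_consensus(tracks, obs_lat, obs_lon, hours=tuple(range(0, 121, 12))):
    """Acquire -> initial position correction -> interpolate -> average."""
    members = [interpolate_track(correct_initial_position(tr, obs_lat, obs_lon), hours)
               for tr in tracks]
    consensus = {}
    for h in hours:
        pts = [m[h] for m in members if h in m]        # members reaching this hour
        if pts:
            consensus[h] = (float(np.mean([p[0] for p in pts])),
                            float(np.mean([p[1] for p in pts])))
    return consensus
```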

Page 10: Consensus Track Forecasting

Non-selective multi-model consensus

– Low maintenance
– Low training overhead
– Incorporate 'new' models 'on the fly'
– Robust performance
– With many members, less need for a selective approach
– Widely adopted as a baseline approach

Page 11: Consensus Track Forecasting

Multi-model approaches – weighting

– Weighted according to historical performance
– Complex weighting, e.g. FSSE: unequal weights for each forecast parameter, for each model and forecast time
– Can outperform unweighted consensus, provided training is kept up to date (human or computer)
– Maintenance overhead
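One common simple weighting scheme (not FSSE's actual formulation, which the slide notes is more elaborate) is to weight each member by the inverse of its mean historical track error at the given lead time. A hedged sketch with illustrative inputs:

```python
# Inverse-error weighting at one lead time (illustrative scheme and inputs only).
import numpy as np

def weighted_consensus_point(positions, hist_errors_km):
    """positions: member (lat, lon) fixes at one lead time;
    hist_errors_km: each member's mean historical track error at that lead time."""
    w = 1.0 / np.asarray(hist_errors_km, dtype=float)  # better past skill -> more weight
    w /= w.sum()                                       # normalise weights to sum to 1
    lats = np.array([p[0] for p in positions])
    lons = np.array([p[1] for p in positions])
    return float(w @ lats), float(w @ lons)
```

Such weights only help while the training sample stays representative of the current model versions – hence the maintenance overhead noted above.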

Page 12: Consensus Track Forecasting

Selective vs. non-selective approaches

– Subjective selection is commonplace and can add significant value.
– Semi-objective selection (SAFA): implementation encountered hurdles.
– How to identify the cases where a selective approach will add value?

Page 13: Consensus Track Forecasting

Selective (SCON) vs. non-selective (NCON)

How to exclude members?

Page 14: Consensus Track Forecasting

Selective (SCON) vs. non-selective (NCON)

SCON – How to exclude members?

Page 15: Consensus Track Forecasting

Selective (SCON) vs. non-selective (NCON)

SCON – How to exclude members?

Requires knowledge of known model biases (and these change with model updates)

Page 16: Consensus Track Forecasting

Selective (SCON) vs. non-selective (NCON)

SCON – How to exclude members?

Requires knowledge of the model run, e.g. the model analysis differs from the observed position – BEWARE
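One objective screen sometimes used for selection is to drop members whose analysed initial position is too far from the observed fix. The sketch below assumes the same track layout as earlier; the 120 km threshold is purely illustrative, and (per the slide) a poor analysis does not guarantee a poor forecast – hence the BEWARE.

```python
# Screen members on initial-position error (threshold and layout are assumptions).
import math

def gc_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * 6371.0 * math.asin(math.sqrt(a))

def select_members(tracks, obs_lat, obs_lon, max_init_err_km=120.0):
    """Keep only tracks whose t=0 position lies within the threshold of the fix."""
    return [tr for tr in tracks
            if gc_distance_km(tr[0][1], tr[0][2], obs_lat, obs_lon) <= max_init_err_km]
```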

Page 17: Consensus Track Forecasting

Recent performance of a model does not guarantee success/failure next time

Page 18: Consensus Track Forecasting

Recent performance of a model does not guarantee success/failure next time.

Page 19: Consensus Track Forecasting

Position vs. vector motion consensus

Combining short- and long-range members
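When members have different forecast lengths, averaging positions makes the consensus jump as short-range members drop out; averaging motion vectors over the members still available at each step, then integrating from the observed fix, avoids this. A sketch under the same assumed data layout (member -> {hour: (lat, lon)}):

```python
# Vector-motion consensus: average 12-h displacements, then integrate (illustrative).
import numpy as np

def motion_consensus(members, obs_lat, obs_lon, hours=tuple(range(0, 121, 12))):
    lat, lon = obs_lat, obs_lon
    track = {0: (lat, lon)}
    for h0, h1 in zip(hours[:-1], hours[1:]):
        # displacement of every member valid over the whole interval
        steps = [(m[h1][0] - m[h0][0], m[h1][1] - m[h0][1])
                 for m in members if h0 in m and h1 in m]
        if not steps:
            break                              # no guidance left beyond this hour
        lat += float(np.mean([s[0] for s in steps]))
        lon += float(np.mean([s[1] for s in steps]))
        track[h1] = (lat, lon)
    return track
```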

Page 20: Consensus Track Forecasting

Accuracy depends on:
1. Number of models
2. Accuracy of individual members
3. Independence of member errors
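Why independence matters can be made concrete with a standard result (an idealisation added here, not from the talk): if each of $n$ members has error variance $\sigma^2$ and pairwise error correlation $\rho$, the consensus-mean error variance is

\mathrm{Var}(\bar{e}) = \frac{\sigma^2}{n}\left(1 + (n-1)\rho\right)

With independent errors ($\rho = 0$) the variance falls as $1/n$; as $\rho \to 1$, adding members buys nothing – hence the premium on member independence, not just member count.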

Including advisories in the consensus: JTWC, JMA, CMA.

Optimising consensus tracks

Page 21: A Question of Independence

Would you add WBAR to your consensus?

Page 22: Would you add WBAR to your consensus?

[Figure panels: 24 hrs and 48 hrs]

Page 23: Consensus Track Forecasting

Practical Considerations

Access to models?

Where to get them from? (e.g. JMA?)

Can we organise a central repository of global TC tracks?

Standard format and timely!

Page 24: Consensus Track Forecasting

Practical Considerations contd.

Access to software?

Access to model fields

Pre-cyclone phase – fewer tracks

Capture/recurvature/ETT

Page 25: Consensus Track Forecasting

Page 26: Consensus Track Forecasting

Discussion

How many operational centres represented here commonly have access to <5 deterministic runs?

Do you have access to tracks for which you don't have the fields?

How many operational centres represented here use weighted consensus methods as their primary method?

Do forecasters have the skill to be selective? Are the training requirements too great?

Modifications for persistence?

Page 27: Consensus Track Forecasting

Discussion

Are weighted methods appropriate for all NMHSs?

Bifurcation situations? Should a forecaster sit on the fence – in zero-probability space?

Is statistical calibration of EPS guidance a requirement?

How many operational centres are currently looking to produce probabilistic products for external dissemination?

Page 28: Consensus Track Forecasting

Discussion

What modifications should forecasters be allowed to make?

Do you agree that the relevant benchmark for operational centres is the 'simple' consensus of available guidance?

What is an appropriate means of combining EPS and deterministic runs in operational consensus forecasting? (Is it sufficient to include the ensemble mean as a member?)

Page 29: Consensus Track Forecasting

Recommendations?

Page 30: Consensus Track Forecasting