Transcript of 2002 MM5 Model Evaluation 12 vs. 36 km Results

Page 1: 2002 MM5 Model Evaluation 12 vs. 36 km Results

2002 MM5 Model Evaluation: 12 vs. 36 km Results

Chris Emery, Yiqin Jia, Sue Kemball-Cook, and Ralph Morris

ENVIRON International Corporation

Zion Wang

UCR CE-CERT

Western Regional Air Partnership (WRAP)

Regional Modeling Center (RMC)

National RPO Meeting

May 25, 2004

Page 2

2002 MM5 Evaluation Review

• IA/WI 2002 MM5 configuration on National RPO 36 km grid, except:
> Used MM5 v3.6.2
> Invoked Reisner II, disregarded INTERPX

• Evaluation Methodology
> Synoptic evaluation
> Statistical evaluation using METSTAT and surface data (WS, WD, T, RH)
> Evaluation against upper-air obs

• Compared statistical performance against EDAS, VISTAS

Page 3

METSTAT Evaluation Package

• Statistics:
> Absolute bias and error, RMSE, IOA

• Daily and, where appropriate, hourly evaluation

• Statistical performance benchmarks
> Based on an analysis of > 30 MM5 and RAMS runs
> Not meant as a pass/fail test, but to put modeling results into perspective

                      Wind Speed   Wind Direction   Temperature   Humidity
  RMSE                2 m/s        --               --            --
  Mean Bias           ±0.5 m/s     ±10 deg          ±0.5 K        ±1 g/kg
  Index of Agreement  0.6          --               0.8           0.6
  Gross Error         --           30 deg           2 K           2 g/kg
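The statistics in the benchmark table can be sketched in Python. This is a hypothetical reimplementation for illustration only; METSTAT itself is a separate package and its exact formulations may differ in detail.

```python
import numpy as np

def metstat_summary(obs, mod):
    """Compute METSTAT-style statistics for one variable.

    obs, mod: 1-D arrays of paired observed and modeled values
    (e.g., hourly temperatures at all stations in a subdomain).
    """
    obs = np.asarray(obs, dtype=float)
    mod = np.asarray(mod, dtype=float)
    diff = mod - obs
    bias = diff.mean()                      # mean bias
    gross_error = np.abs(diff).mean()       # mean absolute (gross) error
    rmse = np.sqrt((diff ** 2).mean())      # root mean square error
    # Index of Agreement (Willmott): 1 = perfect, 0 = no skill
    denom = ((np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2).sum()
    ioa = 1.0 - (diff ** 2).sum() / denom
    return {"bias": bias, "gross_error": gross_error,
            "rmse": rmse, "ioa": ioa}

# Made-up temperature sample (K), checked against the table's benchmarks
temp_obs = np.array([285.0, 290.0, 288.0, 284.0])
temp_mod = np.array([284.5, 289.2, 287.6, 283.9])
stats = metstat_summary(temp_obs, temp_mod)
meets_benchmarks = (abs(stats["bias"]) <= 0.5
                    and stats["gross_error"] <= 2.0
                    and stats["ioa"] >= 0.8)
```

Each subdomain-day would be summarized this way, then compared against the benchmark values above.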

Page 4

Datasets for Met Evaluation

• NCAR dataset ds472 airport surface met observations

• Twice-daily upper-air profile obs (~120 in US)
> Temperature
> Moisture

• Scatter plots of performance metrics
> Include box for benchmark
> Include historical MM5/RAMS simulation results
> WS RMSE vs. WD gross error
> Temperature bias vs. temperature error
> Humidity bias vs. humidity error
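The benchmark box on these scatter plots implies a simple pass/fail region. A minimal sketch of that test, using the wind benchmarks (RMSE ≤ 2 m/s, gross error ≤ 30 deg) and made-up subdomain statistics rather than actual WRAP results:

```python
# Hypothetical helper mirroring the benchmark box drawn on the wind
# scatter plots: a run is "inside" the box if WS RMSE <= 2 m/s AND
# WD gross error <= 30 deg.
def inside_wind_benchmark(ws_rmse, wd_gross_error):
    return ws_rmse <= 2.0 and wd_gross_error <= 30.0

# Illustrative (made-up) subdomain statistics: (WS RMSE m/s, WD error deg)
subdomains = {
    "PacNW":    (1.8, 28.0),
    "DesertSW": (2.4, 45.0),
}
flags = {name: inside_wind_benchmark(*s) for name, s in subdomains.items()}
```

Points falling inside the box meet both benchmarks simultaneously; points outside fail at least one.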

Page 5

Subdomains for Model Evaluation

1 = Pacific NW

2 = SW

3 = North

4 = Desert SW

5 = CenrapN

6 = CenrapS

7 = Great Lakes

8 = Ohio Valley

9 = SE

10 = NE

11 = MidAtlantic

Page 6

Evaluation of 36-km WRAP MM5 Results

• Model performed reasonably well for eastern subdomains, but not the west (WRAP region)
> General cool moist bias in western US
> Difficulty with resolving western US orography?

• May get better performance with higher resolution
> Pleim-Xiu scheme optimized more for eastern US?
> More optimization needed for desert and rocky ground?

• MM5 performs better in winter than in summer
> Weaker forcing in summer

• July 2002 Desert SW subdomain exhibits low temperature and high humidity bias

Page 7

Comparison: EDAS vs. WRAP MM5

• Is it possible that 36-km MM5 biases may be caused by the analyses used to nudge (FDDA) the model?

• We evaluated EDAS analysis fields to see whether biases exist
> Used METSTAT to look at the EDAS surface fields

• Input EDAS fields do not have the cold moist bias seen in the 36-km MM5 simulation, but the wind speed underestimation bias is present
> Performance issues not due to EDAS analysis fields; must be internally generated by MM5

Page 8

Comparison: VISTAS vs. WRAP MM5

• Evaluate VISTAS 2002 MM5 simulation to see whether similar bias exists
> Different configuration: KF II, Reisner I

• Both MM5 simulations had trouble in western U.S. – same subdomains lie outside the statistical benchmarks

• Both MM5 simulations performed better in winter than in summer

Page 9

Comparison: VISTAS vs. WRAP MM5

• VISTAS:
> Better simulation of PBL temperature and humidity profiles
> Less surface humidity bias in the western U.S.
> Markedly better summer precipitation field

• WRAP:
> Less surface temperature bias than VISTAS during winter

• Overall, VISTAS did better in the west
> Further tests indicate use of KF II has larger effect on performance than Reisner I

Page 10
Page 11

Addition of 12-km WRAP Grid

• IC/BCs extracted from 36-km MM5 fields
• 3-D FDDA fields extracted from 36-km MM5 fields
• Preliminary 5-day run starting 12Z July 1

                        CMAQ          MM5
  Dot points            208 x 187     220 x 199
  Cross points          207 x 186     219 x 198
  SW corner coordinate  -2376, -936   -2248, -1008
  NE corner coordinate  108, 1296     180, 1368
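The CMAQ column of the table can be sanity-checked with simple arithmetic. This sketch assumes a 12-km grid spacing, corner coordinates in km, and the usual MM5/CMAQ convention that cross (cell-center) points are one fewer than dot (cell-corner) points in each direction:

```python
# Sanity check of the 12-km CMAQ grid spec: with 208 x 187 dot points,
# there are 207 x 186 cross points, and (dots - 1) * 12 km must equal
# the extent between the SW and NE corner coordinates.
DX = 12.0  # assumed grid spacing, km
cmaq = {
    "dot_points": (208, 187),
    "sw": (-2376.0, -936.0),   # SW corner, km
    "ne": (108.0, 1296.0),     # NE corner, km
}
nx_dot, ny_dot = cmaq["dot_points"]
cross_points = (nx_dot - 1, ny_dot - 1)
extent_x = cmaq["ne"][0] - cmaq["sw"][0]   # 2484 km
extent_y = cmaq["ne"][1] - cmaq["sw"][1]   # 2232 km
assert cross_points == (207, 186)
assert extent_x == (nx_dot - 1) * DX
assert extent_y == (ny_dot - 1) * DX
```

Both assertions hold for the CMAQ column: the corner coordinates span exactly (n_dot - 1) x 12 km in each direction.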

Page 12

Comparison: 12 vs. 36-km WRAP MM5

• Performance scatter plots prepared
> Directly compare 36-km statistics with 12-km statistics for each western sub-region
> Provides mean stats over July 1-6 preliminary test period

Page 13

WRAP 36km/12km July Wind Performance Comparison

[Scatter plot: Wind Direction Error (deg) vs. Wind Speed RMSE (m/s), with benchmark box; points for 12 km subdomains, 36 km subdomains, and historical MM5/RAMS runs; labeled subdomains: DesertSW, North, SW, PacNW]

Page 14

WRAP 36km/12km July Temperature Performance Comparison

[Scatter plot: Temperature Error (K) vs. Temperature Bias (K), with benchmark box; points for 12 km subdomains, 36 km subdomains, and historical MM5/RAMS runs; labeled subdomains: DesertSW, SW, North, PacNW]

Page 15

WRAP 36km/12km July Humidity Performance Comparison

[Scatter plot: Humidity Error (g/kg) vs. Humidity Bias (g/kg), with benchmark box; points for 12 km subdomains, 36 km subdomains, and historical MM5/RAMS runs; labeled subdomains: DesertSW, North, SW, PacNW]

Page 16

Comparison: 12 vs. 36-km WRAP MM5

• Results:
> No significant or consistent impact on wind speed/direction performance
> Temperature bias dramatically improved for all areas, but gross error is made worse
> Impacts on humidity performance are minor, and worse in the Desert SW

• There appear to be larger issues that 12-km grid resolution does not improve upon
> Remember that all IC/BC and 3-D FDDA are derived from 36-km results
> This issue addressed in 12-km sensitivity tests