Efd vs fc_summary_v3

EMC EFD vs. FC Benchmark Results

Page 1: Efd vs fc_summary_v3

Enterprise Flash Technology Benchmark Summary

Technology/Consulting/Managed Solutions

Page 2: Efd vs fc_summary_v3

EMC CX4-120 EFD IOmeter Test Configuration

Test host: HP DL380 G5
2x4 Core 2.8 GHz CPUs
3GB Memory
Windows 2003 SP2 32-bit
Emulex LP-E11000, Driver Version 5.2.10.7

Storage layout:
6+1 RAID 5 Group (EFD 0–6) presented as LUN 4
6+1 RAID 5 Group (EFD 16–22) presented as LUN 5
LUNs split across SP-A and SP-B

Notes:
SP cache disabled for LUN4 and LUN5
LUN4 presented to IOmeter test host as Physicaldrive6
LUN5 presented to IOmeter test host as Physicaldrive7
IOmeter version 2006.07.27
Documented test results have been aggregated across all 14 EFD drives shown above

Page 3: Efd vs fc_summary_v3

EMC CX3-80 FC IOmeter Test Configuration

Test host: HP DL380 G5
2x4 Core 2.8 GHz CPUs
3GB Memory
Windows 2003 SP2 32-bit
Emulex LP-E11000, Driver Version 5.2.10.7

Storage layout:
24 x 15K 146GB FC drives in six 4-drive RAID 10 groups (RG26, RG27, RG28, RG29, RG54, RG55)
Each RAID group contributes one component LUN to each metalun, so each metalun spans all 24 drives and is presented as a single device
Metalun 208 (SP-A): LUN 10, LUN 11, LUN 12, LUN 13, LUN 14, LUN 15 (Meta208 parts)
Metalun 209 (SP-B): LUN 20, LUN 21, LUN 22, LUN 23, LUN 24, LUN 25 (Meta209 parts)

Notes:
SP R/W cache enabled
Metalun208 presented to IOmeter test host as Physicaldrive6
Metalun209 presented to IOmeter test host as Physicaldrive7
IOmeter version 2006.07.27
Documented test results have been aggregated across all 24 15K FC drives shown above

Page 4: Efd vs fc_summary_v3

IOPS (CX4-120 EFD vs. CX3-80 15K FC)

Workload                          EFD IOps     FC IOps
100% Random 2K Reads              69906.56     6929.36
100% Random 2K Writes             20317.19     6033.35
Random 2K 40% Read / 60% Write    28396.18     5843.31
Random 2K 60% Read / 40% Write    35721.83     5918.02
100% Random 4K Reads              77608.40     6750.73
100% Random 4K Writes             15871.00     5476.01
Random 4K 40% Read / 60% Write    21926.44     5665.46
Random 4K 60% Read / 40% Write    27525.12     5809.82
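
The relative improvement per workload can be derived directly from the IOps values above; a minimal sketch in Python (values transcribed from the chart, ratio is simply EFD aggregate IOps divided by FC aggregate IOps):

```python
# EFD vs. FC aggregate IOps per workload, transcribed from the chart above.
workloads = [
    ("100% Random 2K Reads",           69906.56, 6929.36),
    ("100% Random 2K Writes",          20317.19, 6033.35),
    ("Random 2K 40% Read / 60% Write", 28396.18, 5843.31),
    ("Random 2K 60% Read / 40% Write", 35721.83, 5918.02),
    ("100% Random 4K Reads",           77608.40, 6750.73),
    ("100% Random 4K Writes",          15871.00, 5476.01),
    ("Random 4K 40% Read / 60% Write", 21926.44, 5665.46),
    ("Random 4K 60% Read / 40% Write", 27525.12, 5809.82),
]

for name, efd_iops, fc_iops in workloads:
    # Improvement factor = EFD aggregate IOps / FC aggregate IOps
    print(f"{name}: {efd_iops / fc_iops:.1f}x")
```

For example, 69906.56 / 6929.36 ≈ 10.1x for 100% random 2K reads.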

Page 5: Efd vs fc_summary_v3

Response Time (CX4-120 EFD vs. CX3-80 15K FC)

Workload                          EFD Avg Response Time    FC Avg Response Time
100% Random 2K Reads              3.66                     36.94
100% Random 2K Writes             12.60                    42.43
Random 2K 40% Read / 60% Write    9.01                     43.81
Random 2K 60% Read / 40% Write    7.17                     43.26
100% Random 4K Reads              3.30                     37.92
100% Random 4K Writes             16.13                    46.75
Random 4K 40% Read / 60% Write    11.67                    45.18
Random 4K 60% Read / 40% Write    9.30                     44.06
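
As a worked example from the values above, the FC-to-EFD average response time ratio for 100% random 2K reads is 36.94 / 3.66 ≈ 10.1, while for 100% random 4K writes it is 46.75 / 16.13 ≈ 2.9.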

Page 6: Efd vs fc_summary_v3

Note: FC per-drive IOps exceed the theoretical maximum of 180 IOPS due to the cache benefit (SP R/W cache is enabled on the CX3-80).

IOPS per Drive (CX4-120 EFD vs. CX3-80 15K FC)

Workload                          EFD IOps per Drive    FC IOps per Drive
100% Random 2K Reads              4993.33               288.72
100% Random 2K Writes             1451.23               251.39
Random 2K 40% Read / 60% Write    2028.30               243.47
Random 2K 60% Read / 40% Write    2551.56               246.58
100% Random 4K Reads              5543.46               281.28
100% Random 4K Writes             1133.64               228.17
Random 4K 40% Read / 60% Write    1566.17               236.06
Random 4K 60% Read / 40% Write    1966.08               242.08
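
The per-drive figures follow from dividing the aggregate IOps on page 4 by the drive counts in the two configurations (14 EFDs on page 2, 24 FC drives on page 3); a minimal sanity check for the 100% random 2K read case:

```python
# Per-drive IOps = aggregate IOps / number of drives in the configuration.
EFD_DRIVES = 14  # two 6+1 RAID 5 EFD groups (page 2)
FC_DRIVES = 24   # six 4-drive RAID 10 FC groups (page 3)

# Aggregate 100% random 2K read IOps, transcribed from page 4.
efd_total_iops = 69906.56
fc_total_iops = 6929.36

print(round(efd_total_iops / EFD_DRIVES, 2))  # 4993.33, matches the chart above
print(round(fc_total_iops / FC_DRIVES, 2))    # 288.72, matches the chart above
```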

Page 7: Efd vs fc_summary_v3

CX4-120 Jetstress Benchmark (report sections: Database Sizing and Throughput, Jetstress System Parameters, Disk Subsystem Performance)

CX3-80 Jetstress Benchmark (report sections: Database Sizing and Throughput, Jetstress System Parameters, Disk Subsystem Performance)

Page 8: Efd vs fc_summary_v3

Conclusions

• Reduced storage footprint from 24 drives to 14 drives = ~42% reduction

• ~12x Random Read Performance Improvement

• ~5x Random Write Performance Improvement

• ~8–10x Performance Improvement with Random 60% Read / 40% Write workload
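
As a check on the footprint figure above: (24 − 14) / 24 ≈ 0.417, i.e. roughly a 42% reduction in drive count.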