AMS2500 Performance Report


Page 1: AMS2500 Performance Report

Hitachi AMS 2500 Using 200GB SSD Drives – Scalability Analysis

A Performance Brief

By Alan Benway (Performance Measurement Group, Technical Operations)

Confidential – Hitachi Data Systems Internal and Channel Partner Use Only

August 2010

Page 2: AMS2500 Performance Report

Hitachi Data Systems Internal and Channel Partner Confidential

Executive Summary

The purpose of this testing was to establish a variety of performance comparisons of SSD and SAS drives on the Hitachi Adaptable Modular Storage 2500 (AMS 2500) midrange storage array. Various tests used 5, 10, 15, and 20 200GB SSD or SAS disks, each set configured as RAID-5 (4D+1P) groups. No other RAID type was tested, since our field experience shows that almost all users deploy SSDs in RAID-5 configurations to maximize their price/performance ratio. In some tests (random write, for instance), we know that more IOPS would probably result from RAID-10 configurations. Additionally, some SSD tests were also run with Hitachi Dynamic Provisioning for comparison. The AMS 2500’s Hardware Load Balancing feature was enabled. There were no copy product license keys enabled, so the maximum amount of cache was available. There were 11 categories of tests conducted in all.

The performance results are presented in the charts in the Test Results Summary section of this report. While we attempt to profile a variety of application characteristics, no benchmark can replicate a real-world application as well as the actual applications themselves.

Shown below are result summaries from Test 1 (random) and Test 2 (sequential). These tables show the measured SSD results alongside interpolated results for 146GB 15K RPM SAS drives on an AMS 2500, indicating the number of SAS drives required to match each SSD result. Note that the number of host paths in use varied with the number of LUNs tested. Up to four host paths were used for the SSD tests, and up to 16 were used for the SAS tests. For example, in the SAS tests, 14 LUNs would have been mapped over 14 paths, while 72 LUNs would have been mapped across 16 paths.

As a general rule of thumb, 30 SSDs on the AMS 2500 can replace 360 15K RPM SAS drives for meeting random performance requirements. When reviewing past SAS test results, one can see that the system scalability limit for certain RAID levels and workloads can occur well below 360 SAS disks. As such, one cannot expect to use 30 SSD drives plus 120 SAS disks with heavy concurrent loads that have significant write components. In the SAS sequential results below, when using RAID-5 (4D+1P) the system limit was at about 160 disks. As such, it is expected that heavy sequential use of 30 SSDs in RAID-5 (4D+1P) would consume all of the array’s internal resources.
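As a sanity check on this rule of thumb, the per-drive rates measured in Test 1 can be turned into an equivalence calculation. The sketch below is not part of the original report; it simply reuses the ~3,004 IOPS/SSD and 169 IOPS/HDD random 4KB read figures from the tables that follow:

```python
# Sketch: SAS-equivalent drive count for random 4KB reads, using the
# measured per-drive rates (~3,004 IOPS/SSD at 20 drives, 169 IOPS/HDD).
def sas_equivalent(n_ssd, iops_per_ssd=3004, iops_per_sas=169):
    """SAS drive count delivering the same aggregate random read IOPS."""
    return round(n_ssd * iops_per_ssd / iops_per_sas)

# 20 SSDs (~60,086 IOPS measured) work out to roughly 356 SAS drives,
# consistent with the 360-drive SAS row (60,800 IOPS).
print(sas_equivalent(20))
```

The 30-SSD-for-360-SAS guideline is deliberately more conservative than this raw equivalence, since system scalability limits can be hit well below the computed SAS count.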

100% Random Read Comparison

100% Random Read  SSD  1‐4 Paths       

8 threads/LUN  4KB  RAID‐5 4+1    

Drives  LUNs  Threads  IOPS  RT [msec]  IOPS/SSD 

5  1  8  15,148  0.5  3030 

10  2  16  30,437  0.5  3043 

15  3  24  45,298  0.5  3020 

20  4  32  60,086  0.5  3004 

100% Random Read  SAS 15k  16 Paths       

8 threads/LUN  4KB  RAID‐5 4+1    

Drives  LUNs  Threads  IOPS  RT [msec]  IOPS/HDD 

90  18  144  15,200  9.5  169 

180  36  288  30,400  9.5  169 

270  54  432  45,600  9.5  169 

360  72  576  60,800  9.5  169 

480*  96  96  79,500  9.7  166 

*System performance limit.

Page 3: AMS2500 Performance Report


100% Random Write Comparison

Random Write     SSD   1‐4 Paths       

1 thread/LUN  4KB  RAID‐5 4+1    

Drives  LUNs  Threads  IOPS  RT [msec]  IOPS/SSD 

5  1  1  4,498  0.2  900 

10  2  2  10,198  0.2  1,020 

15  3  3  12,767  0.2  851 

20  4  4  16,687  0.2  834 

 

Random Write     SAS 15k   14‐16 Paths       

1 thread/LUN  4KB  RAID‐5 4+1    

Drives  LUNs  Threads  IOPS  RT [msec]  IOPS/HDD 

70  14  14  4,700  22.0  67 

160  32  32  9,600  27.0  60 

480*  96  96  15,000  48.0  33 


*System performance limit.

100% Sequential Read Comparison

100% Sequential Read  SSD  1‐4 Paths    

1 thread/LUN  256kb  RAID‐5 4+1       

Drives  LUNs  Threads  MB/s  MB/s/SSD 

5  1  1  321.2  64.2 

10  2  2  623.5  62.4 

15  3  3  910.7  60.7 

20  4  4  1224.6  61.2 

100% Sequential Read  SAS 15k  1‐16 Paths    

1 thread/LUN  256kb  RAID‐5 4+1       

Drives  LUNs  Threads  MB/s  MB/s/HDD 

5  1  1  295  59 

10  2  2  590  59 

45  9  9  900  20 

60  12  12  1,200  20 

160*  52  52  2,300  14.5 

Page 4: AMS2500 Performance Report


100% Sequential Write Comparison

100% Sequential Write  SSD  1‐4 Paths    

1 thread/LUN  256kb  RAID‐5 4+1       

Drives  LUNs  Threads  MB/s  MB/s/SSD 

5  1  1  257.9  51.6 

10  2  2  506.6  50.7 

15  3  3  716.7  47.8 

20  4  4  931.7  46.6 

100% Sequential Write  SAS 15k  2‐16 Paths    

1 thread/LUN  256kb  RAID‐5 4+1       

Drives  LUNs  Threads  MB/s  MB/s/HDD 

10  2  2  454.9  45.5 

55  11  11  495  9 

80  16  16  718  9 

130  26  26  910  7 

160*  52  52  1,095  6.8 

*System performance limit.

Page 5: AMS2500 Performance Report


Notices and Disclaimer

Copyright © 2010 Hitachi Data Systems Corporation. All rights reserved.

The performance data contained herein was obtained in a controlled isolated environment. Actual results that may be obtained in other operating environments may vary significantly. While Hitachi Data Systems Corporation has reviewed each item for accuracy in a specific situation, there is no guarantee that the same results can be obtained elsewhere.

All designs, specifications, statements, information and recommendations (collectively, "designs") in this manual are presented "AS IS," with all faults. Hitachi Data Systems Corporation and its suppliers disclaim all warranties, including without limitation, the warranty of merchantability, fitness for a particular purpose and non-infringement or arising from a course of dealing, usage or trade practice. In no event shall Hitachi Data Systems Corporation or its suppliers be liable for any indirect, special, consequential or incidental damages, including without limitation, lost profit or loss or damage to data arising out of the use or inability to use the designs, even if Hitachi Data Systems Corporation or its suppliers have been advised of the possibility of such damages.

Adaptable Modular Storage® is a registered trademark of Hitachi Data Systems, Inc. in the United States, other countries, or both.

Other company, product or service names may be trademarks or service marks of others.

This document has been reviewed for accuracy as of the date of initial publication. Hitachi Data Systems Corporation may make improvements and/or changes in product and/or programs at any time without notice.

No part of this document may be reproduced or transmitted without written approval from Hitachi Data Systems Corporation.

WARNING: This document is HDS internal documentation, for informational purposes only. It is not to be disclosed to or discussed with customers without a proper non-disclosure agreement (NDA).

Page 6: AMS2500 Performance Report


Document Revision Level

Revision Date Description

1.0 July 2010 Initial Release

1.1 Aug 2010 Fixed typo in executive summary table (seq writes, SAS)

Reference

Hitachi AMS 2000 Architecture and Concepts Guide

Hitachi AMS 2500 Dynamic Provisioning Concepts, Performance, and Best Practices Guide

Contributors

The information included in this document represents the expertise, feedback, and suggestions of a number of skilled practitioners. The author would like to recognize and thank the following contributors to and reviewers of this document:

Yusuke Nishihara, Engineer, Disk Array Software Development Dept. III, Storage Systems Development, Disk Array Systems Division, Hitachi, Ltd.

Ian Vogelesang, Performance Measurement Group - Technical Operations

Mel Tungate, Product Management, Midrange

Page 7: AMS2500 Performance Report

Table of Contents

Executive Summary ............................................................................................................................................................. 2 

Purpose of This Testing ....................................................................................................................................................... 9 

Workload Generator Information ........................................................................................................................................ 9 

Test Configurations and Workloads ................................................................................................................................... 9 

Configuration .................................................................................................................................................. 9 

Test Methodologies ...................................................................................................................................... 10 

Tests 1 and 2: Uniform Workloads, RAID Group and Block Size Scalability, Random and Sequential, SSD and SAS ........................................................................................................................................................................ 11 

Tests 3 and 4: Mixed Workloads, RAID Group and Block Size Scalability, Random and Sequential, SSD and SAS ........ 11 

Test 5: Single Workload, Single RAID Group, Thread Scalability, SSD and SAS .................................................. 12 

Tests 6 to 9: Mixed Workloads, RAID Group and Block Size Scalability, Random and Sequential, SSD, HDP and non-HDP ................................................................................................................................................................ 12 

Tests 10 and 11: Mixed Workloads, 4 RAID Groups, Random, SSD and SAS, HDP and non-HDP ...................... 12 

AMS 2500 Test Results Summary ..................................................................................................................................... 13 

Test 1 Results .............................................................................................................................................. 13 

Random Read Summary ........................................................................................................................................ 13 

Random Write Summary ........................................................................................................................................ 14 

Test 2 Results .............................................................................................................................................. 14 

Sequential Read Summary .................................................................................................................................... 14 

Sequential Write Summary ..................................................................................................................................... 15 

Test 3 Results .............................................................................................................................................. 16 

Observations .......................................................................................................................................................... 16 

Test 4 Results .............................................................................................................................................. 17 

Observations .......................................................................................................................................................... 17 

Test 5 Results .............................................................................................................................................. 21 

Test 6 Results (non-HDP) ............................................................................................................................ 22 

Test 7 Results (HDP) ................................................................................................................................... 22 

Test 8 Results (non-HDP) ............................................................................................................................ 23 

Test 9 Results (HDP) ................................................................................................................................... 23 

Test 10 Results ............................................................................................................................................ 23 

Test 11 Results (HDP) ................................................................................................................................. 25 

Conclusions ........................................................................................................................................................................ 26 

APPENDIX A. Test Configuration Details ......................................................................................................................... 28 

Page 8: AMS2500 Performance Report

Test information ........................................................................................................................................... 28 

Host Configuration ....................................................................................................................................... 28 

Storage Configuration .................................................................................................................................. 28 

APPENDIX B. Test-1 Full Results ...................................................................................................................................... 29 

APPENDIX C. Test-2 Full Results ...................................................................................................................................... 33 

APPENDIX D. Test-3 Full Results ...................................................................................................................................... 37 

Random Mixed Workloads ..................................................................................................................................... 37 

APPENDIX E. Test-4 Full Results ...................................................................................................................................... 39 

Sequential Workloads Using Default 256KB RAID Chunk ..................................................................................... 39 

Sequential Workloads Using Optional 64KB RAID Chunk ..................................................................................... 41 

Page 9: AMS2500 Performance Report


Hitachi AMS 2500 Using 200GB SSD Drives – Scalability Analysis

A Performance Brief

By Alan Benway (Performance Measurement Group, Technical Operations)

Purpose of This Testing

The purpose of this testing was to establish a variety of performance comparisons of SSD and SAS drives on the Hitachi Adaptable Modular Storage 2500 (AMS 2500) midrange storage array. Various tests used 5, 10, 15, and 20 200GB SSD or 146GB 15K RPM SAS disks in a RAID-5 (4D+1P) configuration. Additionally, some SSD tests were also run with Hitachi Dynamic Provisioning for comparison. The AMS 2500’s Hardware Load Balancing feature was enabled. There were no copy product license keys enabled, so the maximum amount of cache was available.

These results will help answer questions about the kind of performance capabilities to expect with various workloads when using a 0% cache hit ratio. The performance results are presented in the charts in the Test Results Summary section of this report. While we attempt to profile a variety of application characteristics, no benchmark can replicate a real-world application as well as the actual applications themselves.

Workload Generator Information

Vdbench and IOmeter were used to generate a variety of I/O workloads against raw volumes (no file systems, with their various overheads). Workload parameters such as I/O rates, transfer sizes, thread counts, read/write ratios, and random versus sequential access were controlled by parameter files. By using raw volumes, the tests bypassed the host file system and its cache, more accurately reflecting the I/O performance capabilities of the storage unit.
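The parameter-file approach can be illustrated with a minimal Vdbench example. This is a sketch, not one of the actual files used for these tests; the Windows raw-device name, run length, and reporting interval are assumptions:

```
# Storage definition: one raw volume, 8 outstanding I/Os
sd=sd1,lun=\\.\PhysicalDrive1,threads=8
# Workload definition: 8KB blocks, 75% reads, 100% random
wd=wd1,sd=sd1,xfersize=8k,rdpct=75,seekpct=100
# Run definition: uncapped I/O rate, 5-minute run, 5-second reporting
rd=run1,wd=wd1,iorate=max,elapsed=300,interval=5
```

IOmeter expresses the same controls (outstanding I/Os, transfer size, read/write mix) through its access specifications.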

Test Configurations and Workloads

Configuration

A single Hitachi AMS 2500 midrange storage system was used for these tests, configured with 16GB of cache. A RAID-5 (4D+1P) configuration was used for both SSD and SAS. LUN sizes of 8GB, 133GB, or 362GB were used, depending on the test. In some tests there were two LUNs per RAID Group rather than one.

There was one HP DL585 G2 server used, with 4 x 3GHz Opteron dual-core processors, 16GB of RAM, and four QLogic QLE2462 PCIe 4Gb/sec Fibre Channel HBAs, with up to eight 4Gb/sec paths used for the tests. The operating system used was Microsoft Windows Server 2003 with Service Pack 2.

Table 1 shows the general locations of the SAS and SSD drives for each RAID Group. Four disk trays were used, with five empty drive slots per tray. From one to four host ports (1 or 2 per controller) were used for these tests.

Page 10: AMS2500 Performance Report


Table 1. AMS 2500 RAID Group Layout – RAID-5 (4D+1P)

Tray‐3  RG 3 SAS  RG 3 SSD 

Tray‐2  RG 2 SAS  RG 2 SSD 

Tray‐1  RG 1 SAS  RG 1 SSD 

Tray‐0  RG 0 SAS  RG 0 SSD 

Slot #  0  1  2  3  4  5  6  7  8  9  10  11  12  13  14 

Test Methodologies

There were eleven types of tests performed on SSD drives, with six of these tests also run on SAS disks. The details of these tests are shown below in Tables 2 and 3. Note that tests 6, 8 and 11 also used HDP. While SAS results are shown later in this report, one needs to examine previous AMS 2500 SAS scalability test results to see how many SAS disks are needed to reach levels comparable with these SSD results. Also note that these tests do not explore the use of 20 SSD drives alongside a scaled number of SAS disks to find where the internal bandwidth of the controllers is exhausted.

Table 2. Test Configuration Overview

Test Set #  Test Name  HDD/SSD  RAID‐5 4D+1P Groups  LUN / RG  LU Size  HDP 

1  Basic Performance  HDD, SSD  1, 2, 3, 4  1  8GB  ‐ 

2     HDD, SSD  1, 2, 3, 4  1  8GB  ‐ 

3     HDD, SSD  1, 2, 3, 4  1  8GB  ‐ 

4     HDD, SSD  1, 2, 3, 4  1  8GB  ‐ 

5     HDD, SSD  1  1  362GB  ‐ 

6  HDP Performance  SSD  4 (4RG/1pool)  2  133GB  yes 

7     SSD  4  2  133GB  no 

8     SSD  4 (4RG/1pool)  2  133GB  yes 

9     SSD  4  2  133GB  no 

10  Response Time  HDD, SSD  4  2  133GB  no 

11  Performance  SSD  4  2  133GB  yes 

Table 3. Workload Details by Test

Test Set #  Threads / LU  Workload  Block Size (KB)  Read %  Tool 

1  R:8,32 / W:1,8  Random  .5, 4, 16, 64, 256, 1024  0, 100%  IOmeter 

2  R:1,8 / W:1,8  Sequential  .5, 4, 16, 64, 256, 1024  0, 100%  IOmeter 

3  8  Random  2, 4, 8, 16  0, 25, 50, 75, 100%  Vdbench 

4  1  Sequential  64, 128, 256, 512, 1024  0, 25, 50, 75, 100%  Vdbench 

5  1‐256  Random  8  75%  Vdbench 

6  4  Random  8  0, 25, 50, 75, 100%  Vdbench 

7  4  Random  8  0, 25, 50, 75, 100%  Vdbench 

Page 11: AMS2500 Performance Report


8  4  Sequential  1024  0, 25, 50, 75, 100%  Vdbench 

9  4  Sequential  1024  0, 25, 50, 75, 100%  Vdbench 

10  8 (16,32,64,128)  Random  4  0, 70, 100%  Vdbench 

11  8 (16,32,64,128)  Random  4  0, 70, 100%  Vdbench 

Tests 1 and 2: Uniform Workloads, RAID Group and Block Size Scalability, Random and Sequential, SSD and SAS

Test 1 measured the performance of 100% random reads and 100% random writes on 1-4 RAID Groups of SSD and SAS disks using various block sizes and thread counts (per LUN). Test 2 was the same except for sequential workloads.

The initial step was to configure 4 RAID Groups (20 disks) using RAID-5 (4D+1P) and then create a single 8GB LUN per RAID Group for both SSD and SAS disks. These 4 LUNs were evenly assigned to the four AMS 2500 ports (0A, 1A, 0E, 1E) in use. The AMS 2500 had its internal Hardware Load Balancing enabled. LUNs were driven by workloads on the controllers that managed them.

For Random workloads, IOmeter was used to drive the workloads on the HP server against raw volumes. The workload mixes included 100% Read and 100% Write, using block sizes of .5KB, 4KB, 16KB, 64KB, 256KB, and 1024KB. For reads, there were tests with 8 or 32 threads per LUN, and for writes there were tests with 1 or 8 threads per LUN. Tests were run against 1, 2, 3, and 4 LUNs (or 5, 10, 15, and 20 disks) using 1, 2, 3, or 4 ports.

For Sequential workloads, IOmeter was used to drive the workloads on the HP server against raw volumes. The workload mixes included 100% Read and 100% Write, using block sizes of .5KB, 4KB, 16KB, 64KB, 256KB, and 1024KB with 1 and 8 threads per LUN for reads and writes. A special set of tests was run using a 256KB block size and 1 thread per LUN for reads and writes. Tests were run against 1, 2, 3, and 4 LUNs (or 5, 10, 15, and 20 disks) using 1, 2, 3, or 4 ports.

Tests 3 and 4: Mixed Workloads, RAID Group and Block Size Scalability, Random and Sequential, SSD and SAS

Test 3 measured the performance of 1-4 RAID Groups of SSD and SAS disks using mixed random workloads with several block sizes and 8 threads per LUN. Test 4 was the same except for sequential workloads, larger block sizes, and only 1 thread per LUN.

The initial step was to configure 4 RAID Groups (20 disks) using RAID-5 (4D+1P) and then create a single 8GB LUN per RAID Group for both SSD and SAS disks. These 4 LUNs were evenly assigned to the four AMS 2500 ports (0A, 1A, 0E, 1E) in use. The AMS 2500 had its internal Hardware Load Balancing enabled. LUNs were driven by workloads on the controllers that managed them.

For Random workloads, Vdbench was used to drive the workloads on the HP server against raw volumes. The workload mixes included 100%, 75%, 50%, and 25% Read and 100% Write, using block sizes of 2KB, 4KB, 8KB and 16KB. All tests used 8 threads per LUN. Tests were run against 1, 2, 3, or 4 LUNs (or 5, 10, 15, and 20 disks) using 1, 2, 3, or 4 ports.

For Sequential workloads, Vdbench was used to drive the workloads on the HP server against the raw volumes. The workload mixes included 100%, 75%, 50%, and 25% Read and 100% Write, using block sizes of 64KB, 128KB, 256KB, 512KB, and 1024KB with 1 thread per LUN. A special set of tests was run using the optional 64KB RAID chunk size instead of the default 256KB chunk. Tests were run against 1, 2, 3, or 4 LUNs (or 5, 10, 15, and 20 disks) using 1, 2, 3, or 4 ports.

Page 12: AMS2500 Performance Report


Test 5: Single Workload, Single RAID Group, Thread Scalability, SSD and SAS

Test 5 measured the performance of one RAID Group of SSD or SAS drives with one 362GB LUN, using a single 75% random read workload with an 8KB block size and a thread count scaling from 1 to 256.

The initial step was to configure 1 RAID Group (5 disks) using RAID-5 (4D+1P) and then create a single 362GB LUN for both SSD and SAS disks. This one LUN was assigned to one AMS 2500 port (0A). The AMS 2500 had its internal Hardware Load Balancing enabled.

For this random workload, Vdbench was used to drive the workload on the HP server against one raw volume. The workload was 75% random read using a block size of 8KB. The tests scaled from 1 to 256 threads on this LUN using one port.

Tests 6 to 9: Mixed Workloads, RAID Group and Block Size Scalability, Random and Sequential, SSD, HDP and non-HDP

Test 6 measured the non-HDP performance of 4 RAID Groups of SSD drives using mixed random workloads with 8 133GB LUNs, an 8KB block size and 16 threads per LUN. Test 7 was the same except for mixed sequential workloads and a 1024KB block size. Test 8 measured the HDP performance of 4 RAID Groups of SSD drives using mixed random workloads with 8 133GB LUNs, an 8KB block size and 16 threads per LUN. Test 9 was the same except for mixed sequential workloads and a 1024KB block size.

The initial step was to configure 4 RAID Groups (20 disks) using RAID-5 (4D+1P) and then create two 133GB LUNs per RAID Group for both SSD and SAS drives. These 8 LUNs were evenly assigned to the four AMS 2500 ports (0A, 1A, 0E, 1E) in use. The AMS 2500 had its internal Hardware Load Balancing enabled. LUNs were driven by workloads on the controllers that managed them.

For Random workloads, Vdbench was used to drive the workloads on the HP server against raw volumes. The workload mixes included 100%, 75%, 50%, and 25% Read and 100% Write, using a block size of 8KB. All tests used 16 threads per LUN. Tests were run against 8 LUNs (20 drives) on four ports.

For Sequential workloads, Vdbench was used to drive the workloads on the HP server against raw volumes. The workload mixes included 100%, 75%, 50%, and 25% Read and 100% Write, using a block size of 1024KB. All tests used 16 threads per LUN. Tests were run against 8 LUNs (20 drives) on four ports.

Tests 10 and 11: Mixed Workloads, 4 RAID Groups, Random, SSD and SAS, HDP and non-HDP

Test 10 measured the performance of 4 RAID Groups of SSD and SAS disks using mixed random workloads with a single 4KB block size and various thread counts per LUN. There were two 133GB LUNs created per RAID Group, or eight in all. Test 11 was the same except that all 4 RAID Groups were used as Pool Volumes in an HDP configuration, with 8 DP-VOLs created from the Pool.

Unlike any of the previous test sets, Tests 10 and 11 included a base set of tests that scaled the load on the drives, where the aggregate percent busy rate for the RAID Groups was 10%, 50%, 70%, 80%, 90%, and 100%. All of these were run using 8 threads per LUN or DP-VOL. Another series of tests was run where the percent busy rate was held at 100% and the thread counts varied, using 16, 32, 64, and 128 threads per LUN or DP-VOL. This was to gauge the effects of load versus response time.

For Test 10, the initial step was to configure 4 RAID Groups (20 disks) using RAID-5 (4D+1P) and then create two 133GB LUNs per RAID Group for both SSD and SAS disks. These 8 LUNs were evenly assigned to the four AMS 2500 ports (0A, 1A, 0E, 1E) in use. In Test 11, the system was reconfigured to

Page 13: AMS2500 Performance Report


have the four SSD RAID Groups used as an HDP Pool, with 8 133GB DP-VOLs created from that Pool. There were no similar tests performed on SAS disks.

The AMS 2500 had its internal Hardware Load Balancing enabled. LUNs were driven by workloads on the controllers that managed them.

For these random workloads, Vdbench was used to drive the workloads on the HP server against raw volumes. The workload mixes included 100% Read, 70% Read, and 100% Write, using a block size of 4KB. The base tests used 8 threads per LUN. Tests were run against 8 LUNs (20 drives) or 8 DP-VOLs.

AMS 2500 Test Results Summary

Test 1 Results

Random Read Summary

Tables 4 and 5 are summaries of random read results with only the 4KB block size for either 8 or 32 threads per LUN. The tables for all read results are included in Appendix B. Again, these tests used block sizes of .5, 4, 16, 64, 256 and 1024KB in a 100% random read workload against one, two, three, or four LUNs (one per RAID Group). Column 3 (“Threads”) shows the total threads in use during the test. Columns 4 and 5 show the SSD values, columns 6 and 7 show the matching SAS values, and column 8 (“SSD : SAS”) shows the ratio of SSD to SAS performance. Note the large overall increase in IOPS (but also in the response times) when increasing the workload from 8 to 32 threads per LUN. For these workloads SSD drives are about 12x faster than SAS disks.

Table 4. Random Read Results with 8 Threads per LUN

Random 100% Read                      

8 threads/LU  4KB  RAID‐5 4+1  SSD  SAS  SSD : SAS 

Drives  LUNs  Threads  IOPS  RT [msec]  IOPS  RT [msec]  X:1 

5  1  8  15,148  0.53  1,278  6.26  11.8 

10  2  16  30,437  0.52  2,562  6.24  11.9 

15  3  24  45,298  0.53  3,851  6.23  11.8 

20  4  32  60,086  0.53  5,128  6.24  11.7 

Table 5. Random Read Results with 32 Threads per LUN

Random 100% Read                      

32 threads/LU  4KB  RAID‐5 4+1  SSD  SAS  SSD : SAS 

Drives  LUNs  Threads  IOPS  RT [msec]  IOPS  RT [msec]  X:1 

5  1  32  24,191  1.32  1,969  16.25  12.3 

10  2  64  48,501  1.32  3,952  16.19  12.3 

15  3  96  72,262  1.33  5,931  16.18  12.2 

20  4  128  96,125  1.33  7,893  16.21  12.2 
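The Threads, IOPS, and RT columns above are tied together by Little's law (outstanding I/Os = IOPS × response time). The following sketch is an illustration added here, not part of the report, checking the 5-drive SSD rows of Tables 4 and 5:

```python
# Little's law: concurrency = IOPS * response_time.
def outstanding_ios(iops, rt_msec):
    """Average number of I/Os in flight implied by throughput and latency."""
    return iops * rt_msec / 1000.0

# Table 4: 15,148 IOPS at 0.53 ms with 8 threads -> ~8 I/Os in flight.
print(round(outstanding_ios(15148, 0.53), 1))
# Table 5: 24,191 IOPS at 1.32 ms with 32 threads -> ~32 I/Os in flight.
print(round(outstanding_ios(24191, 1.32), 1))
```

Quadrupling the thread count raised SSD IOPS by only about 60% while response time grew by about 2.5x, the trade-off noted above.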

Page 14: AMS2500 Performance Report


Random Write Summary

Tables 6 and 7 are summaries of random write results with the 4KB block size for either 1 or 8 threads per LUN. The tables for all write results are included in Appendix B. Again, these tests used block sizes of .5, 4, 16, 64, 256 and 1024KB in a 100% random write workload against one, two, three, or four LUNs (one per RAID Group). Column 3 (“Threads”) shows the total threads in use during the test. Columns 4 and 5 show the SSD values, columns 6 and 7 show the matching SAS values, and column 8 (“SSD : SAS”) shows the ratio of SSD to SAS performance. Note the large overall increase in IOPS (but also in the response times) when increasing the workload from 1 to 8 threads per LUN. For these workloads SSD drives are about 7x (1 thread per LUN) to 9x (8 threads per LUN) faster than SAS disks.

Table 6. Random Write Results with 1 Thread per LUN

Random 100% Write                   

1 thread/LU  4KB  RAID‐5  SSD  SAS  SSD : SAS 

Drives  LUNs  Threads  IOPS  RT [msec]  IOPS  RT [msec]  X:1 

5  1  1  4,498  0.20  623  1.60  7.2 

10  2  2  10,198  0.21  1,253  1.60  8.1 

15  3  3  12,767  0.22  1,903  1.58  6.7 

20  4  4  16,687  0.22  2,533  1.58  6.6 

Table 7. Random Write Results with 8 Threads per LUN

Random 100% Write                   

8 threads/LU  4KB  RAID‐5  SSD  SAS  SSD : SAS 

Drives  LUNs  Threads  IOPS  RT [msec]  IOPS  RT [msec]  X:1 

5  1  8  6,127  1.06  618  12.93  9.9 

10  2  16  12,269  1.07  1,260  12.70  9.7 

15  3  24  17,626  1.16  1,914  12.53  9.2 

20  4  32  23,026  1.20  2,536  12.62  9.1 

Test 2 Results

Sequential Read Summary

Tables 8 and 9 are summaries of sequential read results with only the 256KB block size for either 1 or 8 threads per LUN. The tables with all read results are included in Appendix C. Again, these tests used block sizes of .5, 4, 16, 64, 256 and 1024KB in a 100% sequential read workload against one, two, three, or four LUNs (one per RAID Group).

These results show that the use of 8 threads per LUN instead of 1 thread per LUN slightly increased total throughput, but at the cost of a large increase in response time. Note that response time is not usually a consideration for sequential workloads, but it does illustrate the effect of overdriving the LUNs. Also note the fairly small difference (5-15%) between SSDs and SAS drives with 1 thread per LUN in this workload. Column 8 shows the ratio of the SSD result divided by the SAS result.

Page 15: AMS2500 Performance Report

Hitachi Data Systems Internal and Channel Partner Confidential  Page 15 

Table 8. Sequential Read Results with 1 Thread per LUN

Sequential Read                      

1 thread/LUN  256KB RAID‐5 4+1  SSD  SAS  SSD : SAS 

Drives  LUNs  Threads  MB/s  RT [msec]  MB/s  RT [msec]  X:1 

5  1  1  321.2  0.8  278.7  0.9  1.2 

10  2  2  623.5  0.8  576.6  0.9  1.1 

15  3  3  910.7  0.8  868.0  0.9  1.0 

20  4  4  1224.6  0.8  1,171.6  0.9  1.0 

Table 9. Sequential Read Results with 8 Threads per LUN

Sequential Read                      

8 threads/LUN  256KB RAID‐5 4+1  SSD  SAS  SSD : SAS 

Drives  LUNs  Threads  MB/s  RT [msec]  MB/s  RT [msec]  X:1 

5  1  8  380.5  5.3  293.7  6.8  1.3 

10  2  16  761.4  5.3  624.2  6.4  1.2 

15  3  24  1142.0  5.3  951.9  6.3  1.2 

20  4  32  1535.9  20.8  1,273.9  6.3  1.2 

Sequential Write Summary

Tables 10 and 11 summarize the sequential write results at the 256KB block size for 1 and 8 threads per LUN; the full write results are included in Appendix C. Again, these tests used block sizes of 0.5, 4, 16, 64, 256 and 1024KB in a 100% sequential write workload against one, two, three, or four LUNs (one per RAID Group).

These results show that using 8 threads per LUN instead of 1 slightly increased total throughput, but at the cost of a large increase in response time. Response time is not usually a consideration for sequential workloads, but it does illustrate the effect of overdriving the LUNs. Also note the small difference (7-13%) between SSDs and SAS drives in this workload. Column 8 shows the ratio of the SSD result to the SAS result.

Table 10. Sequential Write Results with 1 Thread per LUN

Sequential Write                      

1 thread/LUN  256KB RAID‐5 4+1  SSD  SAS  SSD : SAS 

Drives  LUNs  Threads  MB/s  RT [msec]  MB/s  RT [msec]  X:1 

5  1  1  257.9  1.0  227.4  1.1  1.1 

10  2  2  506.6  1.0  454.9  1.1  1.1 

15  3  3  716.7  1.0  665.2  1.1  1.1 

20  4  4  931.7  1.1  865.8  1.2  1.1 


Table 11. Sequential Write Results with 8 Threads per LUN

Sequential Write                      

8 threads/LUN  256KB RAID‐5 4+1  SSD  SAS  SSD : SAS 

Drives  LUNs  Threads  MB/s  RT [msec]  MB/s  RT [msec]  X:1 

5  1  8  265.3  7.5  232.9  8.5  1.1 

10  2  16  528.7  7.5  468.2  8.4  1.1 

15  3  24  754.8  7.9  683.9  8.7  1.1 

20  4  32  965.9  8.2  889.7  8.9  1.1 
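The 7-13% spread quoted above can be rederived from the MB/s columns of Tables 10 and 11. A small illustrative Python sketch (the computed values span roughly 8-14%, matching the quoted band within rounding):

```python
# SSD advantage (percent) for 256KB sequential writes, from Tables 10 and 11.
pairs = [(257.9, 227.4), (506.6, 454.9), (716.7, 665.2), (931.7, 865.8),   # 1 thread/LUN
         (265.3, 232.9), (528.7, 468.2), (754.8, 683.9), (965.9, 889.7)]   # 8 threads/LUN

for ssd_mb_s, sas_mb_s in pairs:
    print(f"SSD advantage: {100 * (ssd_mb_s / sas_mb_s - 1):.1f}%")
```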

Test 3 Results Observations

There is a lot of detailed data presented in Appendix D. However, the following summary may give the best overall picture of small block random workloads on SSD. It shows the results of the 5, 10, 15 and 20 drive tests using just the 8KB block size with the default 256KB RAID chunk size.

8KB Block Size

As can be seen below, performance increased nearly linearly as the test scaled from 5 to 10, 15 and then 20 drives using RAID-5 (4D+1P). There was no real difference in the response times.

Chart 1. IOPS Results (5-20 SSD, 8KB block size: IOPS versus random read percentage, with one series each for 5, 10, 15 and 20 SSDs)


Chart 2. Response Times (5-20 SSD, 8KB block size: response time in ms versus random read percentage, with one series each for 5, 10, 15 and 20 SSDs)

Test 4 Results Observations

There is a lot of data presented in Appendix E for these tests. However, the following summary may give the best overall picture of sequential workloads on SSD and of the effect of reducing the RAID chunk size from the default of 256KB to 64KB. Normally, response time is not considered for sequential workloads, but here it provides some interesting insight into the change in behavior between the two RAID chunk sizes. These three summaries show the results of the 20 drive tests using block sizes of 64KB, 256KB, and 512KB with RAID chunk sizes of 64KB and 256KB (default).

64KB Block Size

As can be seen below, the different RAID chunk sizes have essentially no effect on performance or response time at this block size (all differences are within about 10%).

Table 12. Sequential Results with 64KB Block Size

Sequential  20 SSD 

R5 4d+1p  4 Threads 

   64KB Block 

   256KB Chunk  64KB Chunk 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  971.9  0.3  972.9  0.3 

75  626.0  0.4  566.7  0.4 

50  503.9  0.5  484.1  0.5 

25  446.7  0.6  412.5  0.6 

0  714.9  0.3  755.5  0.3  



Chart 3. Throughput by RAID Chunk Size (20 SSD, 64KB block size: MB/sec versus sequential read percentage for the 256KB and 64KB chunks)

Chart 4. Response Times (20 SSD, 64KB block size: response time in ms versus sequential read percentage for the 256KB and 64KB chunks)

256KB Block Size

As can be seen below, the smaller 64KB RAID chunk size has a large advantage over the default 256KB chunk, except in the 100% read and 100% write cases, where the two are roughly equal. Mixed read/write workloads give a large performance and response time advantage to the smaller chunk size.



Table 13. Sequential Results with 256KB Block Size

Sequential  20 SSD 

R5 4d+1p  4 Threads 

   256KB Block 

   256KB Chunk  64KB Chunk 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  1218.8  0.8  1231.7  0.8 

75  241.0  4.2  789.4  1.3 

50  117.9  8.5  596.1  1.7 

25  97.6  10.4  486.3  2.1 

0  938.9  1.1  1040.0  1.0 
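From Table 13, the mixed-workload advantage of the 64KB chunk can be quantified directly. A short illustrative Python sketch:

```python
# Speedup of the 64KB chunk over the 256KB chunk (Table 13, 256KB block sequential).
rows = {100: (1218.8, 1231.7), 75: (241.0, 789.4), 50: (117.9, 596.1),
        25: (97.6, 486.3), 0: (938.9, 1040.0)}   # read % -> (256KB chunk, 64KB chunk) MB/s

for read_pct, (chunk256, chunk64) in rows.items():
    print(f"{read_pct}% read: 64KB chunk delivers {chunk64 / chunk256:.1f}x the 256KB chunk")
```

At 50% reads the smaller chunk delivers about 5x the throughput, while at 100% read or 100% write the two are nearly even.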

Chart 5. Throughput by RAID Chunk Size (20 SSD, 256KB block size: MB/sec versus sequential read percentage for the 256KB and 64KB chunks)

Chart 6. Response Times (20 SSD, 256KB block size: response time in ms versus sequential read percentage for the 256KB and 64KB chunks)



512KB Block Size

As can be seen below, the smaller 64KB RAID chunk size again has a large advantage over the default except in the 100% read and 100% write cases. Mixed read/write workloads give a large performance and response time advantage to the smaller chunk size.

Table 14. Sequential Results with 512KB Block Size

Sequential  20 SSD 

R5 4d+1p  4 Threads 

   512KB Block 

   256KB Chunk  64KB Chunk 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  1257.4  1.6  1282.2  1.6 

75  347.7  5.8  851.2  2.3 

50  189.7  10.6  631.2  3.2 

25  144.2  13.9  502.0  4.0 

0  960.3  2.1  1032.0  1.9 

Chart 7. Throughput by RAID Chunk Size (20 SSD, 512KB block size: MB/sec versus sequential read percentage for the 256KB and 64KB chunks)



Chart 8. Response Times (20 SSD, 512KB block size: response time in ms versus sequential read percentage for the 256KB and 64KB chunks)

Test 5 Results

These tests used a single RAID Group of either SSD or SAS drives. There was a single 362GB LUN, and a 75% random read workload with an 8KB block size was used, scaling the thread count from 1 to 256 as shown below. The tables also include the controller percent busy rate (for the single controller in use).

The SAS drive tests did not cause much CPU usage until a high thread count of 64 was reached for that LUN. The 5 SSD drive tests showed heavy CPU usage from a thread count of 4 and up. Note that for SSD at the 16 thread level, the CPU busy rate was 68%. Yet in Test 6, the 75% read test with 16 threads showed a CPU busy rate of only 58% with 20 drives and three times the IOPS (28,191 versus 9,570). So the CPU busy rates with SSDs do not track the number of drives or the load, and should only be used as a rough guide relative to SAS drives.

Table 15. SSD 5-Disk Thread Scaling

Random Read 75%          

8KB block  RAID‐5  4d+1p  5 SSD  362GB LUN 

Threads  IOPS  MB/s  RT [msec]  CPU usage 

1  1555  12.2  0.6  1% 

2  2723  21.3  0.7  12% 

4  4420  34.5  0.9  28% 

8  6739  52.7  1.2  49% 

16  9570  74.8  1.7  68% 

32  12230  95.5  2.6  80% 

64  13753  107.4  4.7  95% 

128  11405  89.1  11.2  100% 

256  8874  69.3  28.8  90% 
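The thread counts in Table 15 can be cross-checked against the IOPS and response-time columns via Little's Law (outstanding I/Os = throughput x response time). A small illustrative Python check using the SSD rows above:

```python
# Little's Law check on Table 15: outstanding I/Os ~= IOPS * RT (RT converted to seconds).
rows = [(1, 1555, 0.6), (2, 2723, 0.7), (4, 4420, 0.9), (8, 6739, 1.2),
        (16, 9570, 1.7), (32, 12230, 2.6), (64, 13753, 4.7),
        (128, 11405, 11.2), (256, 8874, 28.8)]   # (threads, IOPS, RT in msec)

for threads, iops, rt_ms in rows:
    concurrency = iops * rt_ms / 1000.0   # implied outstanding I/Os
    print(f"{threads:>3} threads -> implied concurrency {concurrency:.1f}")
```

The implied concurrency tracks the configured thread count closely at every step, which suggests the workload generator kept the target queue depth pinned throughout.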



Table 16. SAS 5-Disk Thread Scaling

Random Read 75%          

8KB block  RAID‐5  4d+1p  5 SAS  362GB LUN 

Threads  IOPS  MB/s  RT [msec]  CPU usage 

1  165  1.3  6.1  1% 

2  272  2.1  7.4  1% 

4  400  3.1  10.0  1% 

8  534  4.2  15.0  1% 

16  661  5.2  24.2  1% 

32  778  6.1  41.1  3% 

64  861  6.7  74.4  82% 

128  976  7.6  131.1  94% 

256  1008  7.9  254.0  95% 

Test 6 Results (non-HDP)

Random Mixed Workloads Using 4 SSD RAID Groups, 8 133GB LUNs, and 16 Threads

Note the rapid drop in IOPS once a write element is present in the workload. Also note how well the response time holds up at all write levels.

Table 17. SSD Mixed Random Workload Results with 20 Drives

Random 8KB  Non-HDP          

SSD (20)  R5 4d+1p  8 133GB LUNs  16 threads    

Read %  IOPS  MB/s  RT [msec]  CPU Usage 

100  44,674  349.0  0.7  55% 

75  28,191  220.2  1.1  58% 

50  25,714  200.9  1.2  73% 

25  26,376  206.1  1.2  92% 

0  22,452  175.4  1.4  98% 

Test 7 Results (HDP)

Random Workloads Using HDP with 4 SSD RAID Groups, 8 133GB DPVOLs, and 16 Threads

These workloads used 8 DPVOLs instead of the 8 LUNs above.

Table 18. SSD Mixed Random Workload Results with 20 Drives and HDP

Random 8KB  HDP          

SSD (20)  R5 4d+1p  8 133GB DPVOLs  16 threads    

Read %  IOPS  MB/s  RT [msec]  CPU Usage 

100  39,688  310.1  0.8  65% 

75  25,415  198.6  1.3  69% 

50  22,708  177.4  1.4  83% 

25  21,603  168.8  1.5  95% 

0  17,438  136.2  1.8  99% 


Test 8 Results (non-HDP)

Sequential Workloads Using 4 SSD RAID Groups, 8 133GB LUNs, and 16 Threads

Throughput with 50% to 0% sequential reads stayed around 1GB/s. The 100% read test indicates substantial read caching in the server: 2,386.7 MB/s across four paths works out to 596 MB/s per path, about 57% above the roughly 380 MB/sec that a 4Gbit/s FC path can deliver, so a sizable share of the reads must have been satisfied from the server's cache.

Table 19. SSD Mixed Sequential Workload Results with 20 Drives

Sequential 1024k     Non‐HDP    

SSD (20)  R5 4d+1p  8 133GB LUNs  16 threads 

Read %  MB/s  RT [msec]  CPU Usage 

100  2386.7  13.4  33% 

75  1470.3  21.8  46% 

50  1088.2  29.4  57% 

25  957.7  33.4  69% 

0  1084.2  29.5  98% 
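The per-path arithmetic behind the cache-hit observation can be reproduced from the 100% read row of Table 19. A short illustrative Python sketch (the 380 MB/sec path limit is the approximate figure quoted in the text, not a measured value):

```python
# Per-path throughput for the 100% read row of Table 19 versus the FC path limit.
total_mb_s = 2386.7      # Table 19, 100% sequential read
paths = 4                # host paths in use
fc_path_limit = 380.0    # approximate practical max of a 4Gbit/s FC path (per text)

per_path = total_mb_s / paths
excess = 100 * (per_path / fc_path_limit - 1)
print(f"{per_path:.1f} MB/s per path, {excess:.0f}% above the FC path limit")
```

Anything above the wire limit must have been served from the host side, which is why the 100% read row overstates what the array alone delivered.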

Test 9 Results (HDP)

Sequential Workloads Using HDP with 4 SSD RAID Groups, 8 133GB DPVOLs, and 16 Threads

These workloads used 8 DPVOLs instead of the 8 LUNs above. The 100% read result again indicates some read caching in the server: the average 415 MB/s per path is roughly 10% above the 4Gbit/s FC path limit.

Table 20. SSD Mixed Sequential Workload Results with 20 Drives and HDP

Sequential 1024k     HDP    

SSD (20)  R5 4d+1p  8 133GB DPVOLs  16 threads 

Read %  MB/s  RT [msec]  CPU Usage 

100  1661.2  19.3  24% 

75  1129.7  28.4  38% 

50  883.3  36.3  54% 

25  836.5  38.3  71% 

0  1096.2  29.2  100% 

Test 10 Results

Random Workloads Using 4 RAID Groups, 4KB Blocks and 8 133GB LUNs

This set of non-HDP tests has two parts, looking at load scaling as opposed to LUN scaling. The first part uses up to 8 threads per LUN, but throttles the overall number of dispatched threads so as to produce a target aggregate SSD drive busy rate; the tests stepped through 10%, 50%, 70%, 80%, and 90%. The second part ran at a constant 100% load but increased the threads per LUN from 8 through 16, 32, 64 and 128. One set of tests used 100% random read as the workload, the second used 100% random write, and the last used 70% random read plus 30% sequential read in a mixed workload.

In the 100% random read tests, there was a steady performance gain as the workload increased until the 64 threads per LUN point (512 threads overall), when the aggregate controller busy rates likely hit 99% (this data was not captured).


Table 21. Random Read Results

100% Random Read 4KB             

Non‐HDP  SSD (20)  R5 4d+1p  8 LUNs  133GB LUNs 

Disk % Busy  Threads/LUN  IOPS  MB/s  RT [msec] 

10%  8  6,197  24.2  0.7 

50%  8  30,608  119.6  0.6 

70%  8  42,794  167.2  0.6 

80%  8  48,891  191.0  0.5 

90%  8  55,090  215.2  0.5 

100%  8  60,424  236.0  0.5 

100%  16  88,981  347.6  0.7 

100%  32  111,285  434.7  1.1 

100%  64  125,238  489.2  1.6 

100%  128  123,714  483.3  2.6 

In the 100% random write tests, there was a steady performance gain as the workload increased until the 100% busy test with 8 threads per LUN (64 overall), when the ability of the SSD drives to absorb writes hit its limit.

Table 22. Random Write Results

100% Random Write 4KB             

Non‐HDP  SSD (20)  R5 4d+1p  8 LUNs  133GB LUNs 

Disk % Busy  Threads/LUN  IOPS  MB/s  RT [msec] 

10%  8  2,599  10.2  0.2 

50%  8  12,995  50.8  0.3 

70%  8  18,100  70.7  0.4 

80%  8  20,694  80.8  0.6 

90%  8  23,318  91.1  0.7 

100%  8  25,623  100.1  1.2 

100%  16  23,398  91.4  2.7 

100%  32  21,337  83.4  6.0 

100%  64  16,407  64.1  15.6 

100%  128  11,480  44.8  44.6 

In the 70/30% tests, there was a steady performance gain as the workload increased until the 100% busy test with 32 threads per LUN (256 overall), when the combined random and sequential read load hit the drives' limit.


Table 23. Mixed Random and Sequential Results

70% Random Read, 30% Sequential Read 4KB       

Non‐HDP  SSD (20)  R5 4d+1p  8 LUNs  133GB LUNs 

IOPS/Max IOPS  Threads/LUN  IOPS  MB/s  RT [msec] 

10%  8  2,900  11.3  0.5 

50%  8  14,508  56.7  0.7 

70%  8  20,321  79.4  0.9 

80%  8  23,227  90.7  1.0 

90%  8  26,108  102.0  1.0 

100%  8  28,523  111.4  1.1 

100%  16  40,252  157.2  1.6 

100%  32  51,715  202.0  2.5 

100%  64  32,438  126.7  7.9 

100%  128  26,495  103.5  19.3 

Test 11 Results (HDP)

Random Workloads Using HDP with 4 RAID Groups, 4KB Blocks and 8 133GB DPVOLs

This set of tests is the same as above but uses 8 HDP DPVOLs rather than 8 standard LUNs (2 per RAID Group). The four RAID Groups were placed in a single HDP Pool.

In the 100% random read tests, there was a steady performance gain as the workload increased until the 64 threads per LUN point (512 threads overall), when the aggregate controller busy rates hit 99%.

Table 24. Random Read Results with HDP

100% Random Read 4KB                

HDP  SSD (20)  R5 4d+1p  8 LUNs  133GB LUNs    

Disk % Busy  Threads/LUN  IOPS  MB/s  RT [msec] % CPU usage 

10%  8  5,407  21.1  0.8  1 

50%  8  26,699  104.3  0.6  41 

70%  8  37,302  145.7  0.6  60 

80%  8  42,583  166.3  0.6  68 

90%  8  47,990  187.5  0.6  75 

100%  8  53,002  207.0  0.6  81 

100%  16  74,747  292.0  0.8  93 

100%  32  88,647  346.3  1.4  97 

100%  64  96,295  376.2  2.6  99 

100%  128  96,096  375.4  5.2  99 

In the 100% random write tests, there was a steady performance gain as the workload increased until the performance knee at the 100% busy test with 8 threads per LUN (64 overall), when the ability of the SSD drives to absorb writes hit its limit.


Table 25. Random Write Results with HDP

100% Random Write 4KB             

HDP  SSD (20)  R5 4d+1p  8 LUNs  133GB LUNs 

IOPS/Max IOPS  Threads/LUN  IOPS  MB/s  RT [msec] 

10%  8  1,894  7.4  0.6 

50%  8  9,094  35.5  0.6 

70%  8  12,822  50.1  0.7 

80%  8  14,635  57.2  0.8 

90%  8  16,437  64.2  0.9 

100%  8  18,256  71.3  1.1 

100%  16  18,648  72.8  3.4 

100%  32  15,130  59.1  8.5 

100%  64  18,868  73.7  13.6 

100%  128  17,179  67.1  29.9 

In the 70/30% tests, there was a steady performance gain as the workload increased until the 100% busy test with 32 threads per LUN (256 overall), when the combined random and sequential read load hit the drives' limit.

Table 26. Mixed Random and Sequential Results with HDP

70% Random Read, 30% Sequential Read 4KB       

HDP  SSD (20)  R5 4d+1p  8 LUNs  133GB LUNs 

IOPS/Max IOPS  Threads/LUN  IOPS  MB/s  RT [msec] 

10%  8  2,701  10.6  0.9 

50%  8  13,513  52.8  1.0 

70%  8  18,916  73.9  1.0 

80%  8  21,495  84.0  1.1 

90%  8  24,211  94.6  1.1 

100%  8  26,631  104.0  1.2 

100%  16  36,132  141.1  1.8 

100%  32  42,615  166.5  3.0 

100%  64  41,799  163.3  6.1 

100%  128  31,613  123.5  16.1 

Comparing the performance knees for each test between the non-HDP and HDP configurations, there was in general a performance advantage of 19% to 30% for the non-HDP configuration. However, these tests created uniform workloads on all 8 LUNs or DPVOLs, while HDP is primarily intended to smooth out RAID Group hot spots caused by heavily skewed host loads, as is usual on production systems.
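The knee rows of Tables 21/23 (non-HDP) and 24/26 (HDP) allow the non-HDP advantage to be computed for the two read-dominated workloads. An illustrative Python sketch (the random write comparison is omitted here because the HDP write results do not show a single clean knee):

```python
# Non-HDP advantage at the performance knees (Tables 21/23 versus 24/26).
knees = {"100% random read, 64 thr/LUN": (125238, 96295),
         "70/30 mixed read, 32 thr/LUN": (51715, 42615)}   # (non-HDP, HDP) IOPS

for workload, (non_hdp, hdp) in knees.items():
    print(f"{workload}: non-HDP is {100 * (non_hdp / hdp - 1):.0f}% faster")
```

These two workloads come out at roughly 30% and 21%, consistent with the 19-30% range cited above.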

Conclusions

In closing, a few general observations can be made when evaluating the performance of SSD drives on the AMS 2500.

For random workloads, as the write component was introduced there was a clear fall-off in performance, due to the large disparity between the read and write performance of SSD technology in general. As expected, the difference with random workloads between equal numbers of SSD drives and SAS drives was also very large. However, the array can only support a small number of SSD drives, and hence a small total usable capacity.

For large block sequential workloads, there was a fairly small advantage for the SSD drives. Given their high cost and limited capacity, SSD drives should not be used in place of SAS drives for predominantly sequential workloads.

Assuming a relatively small SSD capacity, the use of HDP on an SSD pool would likely not be the preferred approach. Since the SSD capacity is small, the administrator will have to take steps to isolate the most active workloads onto the smallest storage footprint. SSD also appears not to suffer from the traditional hot-spot degradation problem until it is at the outer edges of its performance envelope. Deploying SSD therefore does not appear able to take advantage of HDP's benefit trifecta: space savings, ease of provisioning, and avoidance of hot spots.

The other issue to consider is the rate at which SSDs consume the internal bandwidth of the array. A good rule of thumb is that each SSD drive uses array bandwidth at a ratio of about 12-to-1 over a SAS drive in 4KB block random read environments. For random writes, this ratio varies considerably with block size, with a 4KB block showing about a 9-to-1 ratio of SSD to SAS. These results suggest that 30 SSD drives in a mostly random read environment displace 360 SAS drives, or 270 SAS drives for random writes. One cannot, for example, configure an array with 30 SSDs and 300 SAS drives and expect both drive types to be driven hard simultaneously.
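The drive-displacement arithmetic above follows directly from these rule-of-thumb ratios. A minimal illustrative Python sketch:

```python
# SAS-equivalent array bandwidth consumption from the rule-of-thumb ratios.
ssd_drives = 30
ratios = {"4KB random read": 12, "4KB random write": 9}   # SSD-to-SAS bandwidth ratios

for workload, ratio in ratios.items():
    print(f"{workload}: {ssd_drives} SSDs consume the bandwidth of "
          f"{ssd_drives * ratio} SAS drives")
```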


APPENDIX A. Test Configuration Details

Test Information

Table 1: Test Details

Test Period  Report Date  Location  Tester 

Nov-Dec 2009  February 2010  RSD Japan  Yusuke Nishihara 

Host Configuration

Table 2: Test Server Configuration

Server  Operating System  CPU  Memory  HBA 

HP DL585 G2  Windows 2003 Server SP2  8 x 3GHz Opteron 8222se  16GB RAM  2 QLE2462 HBAs 

Storage Configuration

Table 3: AMS 2500 Configuration

Storage  Microcode  Number of host paths  Total Cache Size  License keys enabled?  Load Balancing Enabled 

AMS 2500  0846/B-H  1-4 4Gbit/s  16GB  No  Yes 

Type of Disk  Size of Disk  # of Disks  RPM 

SAS  146GB  20  15,000 

SSD  200GB  20  ‐ 

RAID Level  RAID Configuration  Number of RAID Groups  Number of LUNs  Size of each LUN  Chunk size  Spares 

RAID 5  4D + 1P  4  4, 8  8, 133, 362GB  256KB  0 


APPENDIX B. Test-1 Full Results

5 Drives

Random Read  R5 4+1 x1  8 threads  SSD  5 drives   |   Random Read  R5 4+1 x1  32 threads  SSD  5 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  22,840  11.2  0.35  1739%  0.5  36,995  18.1  0.86  1806% 

4  15,148  59.2  0.53  1185%  4  24,191  94.5  1.32  1228% 

16  8,512  133.0  0.94  728%  16  12,347  192.9  2.59  698% 

64  3,958  247.4  2.02  442%  64  4,757  297.3  6.73  369% 

256  1,307  326.7  6.12  266%  256  1,430  357.5  22.38  229% 

1024  375  375.2  21.32  213%  1024  388  387.8  82.52  202% 

Random Read  R5 4+1 x1  8 threads  SAS  5 drives   |   Random Read  R5 4+1 x1  32 threads  SAS  5 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  1,313  0.6  6.09  5.7%  0.5  2,048  1.0  15.62  5.5% 

4  1,278  5.0  6.26  8.4%  4  1,969  7.7  16.25  8.1% 

16  1,169  18.3  6.84  13.7%  16  1,769  27.6  18.09  14.3% 

64  896  56.0  8.93  22.6%  64  1,289  80.6  24.82  27.1% 

256  490  122.6  16.31  37.5%  256  626  156.4  51.13  43.8% 

1024  176  176.4  45.34  47.0%  1024  192  192.2  166.49  49.6% 

Random Write  R5 4+1 x1  1 thread  SSD  5 drives   |   Random Write  R5 4+1 x1  8 threads  SSD  5 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  5,058  2.5  0.20  762%  0.5  7,553  3.7  1.06  1137% 

4  4,498  17.6  0.22  722%  4  6,127  23.9  1.31  992% 

16  3,209  50.1  0.31  618%  16  3,397  53.1  2.35  652% 

64  1,174  73.4  0.85  383%  64  1,191  74.4  6.72  395% 

256  322  80.5  3.10  256%  256  297  74.3  26.92  239% 

1024  19  18.9  53.03  60%  1024  99  98.8  80.95  154% 

Random Write  R5 4+1 x1  1 thread  SAS  5 drives   |   Random Write  R5 4+1 x1  8 threads  SAS  5 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  664  0.3  1.51  13.1%  0.5  665  0.3  12.02  8.8% 

4  623  2.4  1.60  13.9%  4  618  2.4  12.93  10.1% 

16  519  8.1  1.92  16.2%  16  521  8.1  15.36  15.3% 

64  307  19.2  3.26  26.1%  64  301  18.8  26.54  25.3% 

256  126  31.5  7.93  39.1%  256  124  31.1  64.25  41.9% 

1024  32  31.5  31.73  167.1%  1024  64  64.1  124.70  64.8% 


10 Drives

Random Read  R5 4+1 x2  16 threads  SSD  10 drives   |   Random Read  R5 4+1 x2  64 threads  SSD  10 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  45,910  22.4  0.35  1739%  0.5  74,256  36.3  0.86  1810% 

4  30,437  118.9  0.52  1188%  4  48,501  189.5  1.32  1227% 

16  16,951  264.9  0.94  721%  16  24,732  386.4  2.59  695% 

64  7,922  495.1  2.02  439%  64  9,521  595.1  6.72  367% 

256  2,612  652.9  6.13  266%  256  2,858  714.6  22.39  227% 

1024  751  750.7  21.31  212%  1024  774  774.3  82.65  201% 

Random Read  R5 4+1 x2  16 threads  SAS  10 drives   |   Random Read  R5 4+1 x2  64 threads  SAS  10 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  2,640  1.3  6.06  5.8%  0.5  4,103  2.0  15.60  5.5% 

4  2,562  10.0  6.24  8.4%  4  3,952  15.4  16.19  8.1% 

16  2,351  36.7  6.81  13.9%  16  3,557  55.6  17.99  14.4% 

64  1,803  112.7  8.87  22.8%  64  2,591  161.9  24.70  27.2% 

256  982  245.4  16.29  37.6%  256  1,261  315.1  50.75  44.1% 

1024  354  353.6  45.24  47.1%  1024  385  385.0  166.18  49.7% 

Random Write  R5 4+1 x2  2 threads  SSD  10 drives   |   Random Write  R5 4+1 x2  16 threads  SSD  10 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  10,198  5.0  0.21  758%  0.5  15,145  7.4  1.07  1115% 

4  9,098  35.5  0.22  726%  4  12,269  47.9  1.30  974% 

16  6,405  100.1  0.31  606%  16  6,802  106.3  2.35  642% 

64  2,358  147.4  0.85  390%  64  2,391  149.5  6.69  399% 

256  654  163.6  3.06  256%  256  600  150.1  26.64  243% 

1024  38  37.8  52.77  56%  1024  200  199.8  80.07  163% 

Random Write  R5 4+1 x2  2 threads  SAS  10 drives   |   Random Write  R5 4+1 x2  16 threads  SAS  10 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  1,346  0.7  1.49  13.2%  0.5  1,358  0.7  11.78  9.0% 

4  1,253  4.9  1.60  13.8%  4  1,260  4.9  12.70  10.3% 

16  1,057  16.5  1.89  16.5%  16  1,059  16.5  15.12  15.6% 

64  604  37.8  3.31  25.6%  64  599  37.4  26.71  25.0% 

256  256  64.0  7.80  39.1%  256  247  61.8  64.70  41.2% 

1024  67  67.3  29.70  178.1%  1024  122  122.4  130.56  61.3% 


15 Drives

Random Read  R5 4+1 x3  24 threads  SSD  15 drives   |   Random Read  R5 4+1 x3  96 threads  SSD  15 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  67,592  33.0  0.35  1704%  0.5  108,783  53.1  0.88  1769% 

4  45,298  176.9  0.53  1176%  4  72,262  282.3  1.33  1218% 

16  25,293  395.2  0.95  716%  16  37,095  579.6  2.59  695% 

64  11,879  742.4  2.02  438%  64  14,213  888.3  6.75  366% 

256  3,903  975.7  6.15  264%  256  4,249  1,062.2  22.59  225% 

1024  1,123  1,122.5  21.38  211%  1024  1,161  1,161.4  82.66  201% 

Random Read  R5 4+1 x3  24 threads  SAS  15 drives   |   Random Read  R5 4+1 x3  96 threads  SAS  15 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  3,968  1.9  6.05  5.9%  0.5  6,149  3.0  15.61  5.7% 

4  3,851  15.0  6.23  8.5%  4  5,931  23.2  16.18  8.2% 

16  3,532  55.2  6.79  14.0%  16  5,339  83.4  17.98  14.4% 

64  2,711  169.4  8.85  22.8%  64  3,884  242.8  24.71  27.3% 

256  1,476  369.0  16.26  37.8%  256  1,891  472.7  50.76  44.5% 

1024  532  532.4  45.07  47.4%  1024  578  577.7  166.14  49.7% 

Random Write  R5 4+1 x3  3 threads  SSD  15 drives   |   Random Write  R5 4+1 x3  24 threads  SSD  15 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  14,490  7.1  0.22  710%  0.5  21,311  10.4  1.16  1041% 

4  12,767  49.9  0.23  671%  4  17,626  68.9  1.36  921% 

16  9,327  145.7  0.32  581%  16  9,612  150.2  2.49  586% 

64  3,470  216.9  0.86  381%  64  3,514  219.6  6.83  388% 

256  909  227.1  3.30  238%  256  863  215.6  27.80  229% 

1024  57  57.3  51.69  58%  1024  293  293.5  81.76  151% 

Random Write  R5 4+1 x3  3 threads  SAS  15 drives   |   Random Write  R5 4+1 x3  24 threads  SAS  15 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  2,040  1.0  1.47  14.1%  0.5  2,046  1.0  11.72  9.6% 

4  1,903  7.4  1.58  14.9%  4  1,914  7.5  12.53  10.9% 

16  1,605  25.1  1.87  17.2%  16  1,640  25.6  14.62  17.1% 

64  910  56.9  3.30  26.2%  64  907  56.7  26.47  25.8% 

256  381  95.3  7.85  42.0%  256  376  94.0  63.80  43.6% 

1024  99  99.1  29.84  173.0%  1024  194  193.9  123.62  66.1% 


20 Drives

Random Read  R5 4+1 x4  32 threads  SSD  20 drives   |   Random Read  R5 4+1 x4  128 threads  SSD  20 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  89,877  43.9  0.36  1700%  0.5  143,314  70.0  1.00  1752% 

4  60,086  234.7  0.53  1172%  4  96,125  375.5  1.33  1218% 

16  33,819  528.4  0.95  717%  16  49,479  773.1  2.59  696% 

64  15,854  990.9  2.02  439%  64  18,920  1,182.5  6.76  365% 

256  5,195  1,298.6  6.16  264%  256  5,644  1,411.0  22.68  224% 

1024  1,495  1,494.9  21.40  211%  1024  1,548  1,548.5  82.66  201% 

Random Read  R5 4+1 x4  32 threads  SAS  20 drives   |   Random Read  R5 4+1 x4  128 threads  SAS  20 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  5,286  2.6  6.05  5.9%  0.5  8,182  4.0  15.64  5.7% 

4  5,128  20.0  6.24  8.5%  4  7,893  30.8  16.21  8.2% 

16  4,714  73.7  6.79  13.9%  16  7,109  111.1  18.00  14.4% 

64  3,612  225.8  8.86  22.8%  64  5,185  324.0  24.69  27.4% 

256  1,966  491.6  16.27  37.9%  256  2,524  630.9  50.71  44.7% 

1024  709  709.4  45.10  47.5%  1024  770  769.7  166.27  49.7% 

Random Write  R5 4+1 x4  4 threads  SSD  20 drives   |   Random Write  R5 4+1 x4  32 threads  SSD  20 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  18,833  9.2  0.22  691%  0.5  27,573  13.5  1.20  1008% 

4  16,687  65.2  0.24  659%  4  23,026  89.9  1.39  908% 

16  12,209  190.8  0.33  573%  16  12,566  196.3  2.55  584% 

64  4,574  285.9  0.87  379%  64  4,643  290.2  6.89  388% 

256  1,187  296.6  3.37  234%  256  1,152  288.0  27.78  230% 

1024  74  73.9  54.07  59%  1024  389  389.2  82.11  145% 

Random Write  R5 4+1 x4  4 threads  SAS  20 drives   |   Random Write  R5 4+1 x4  32 threads  SAS  20 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  2,724  1.3  1.47  14.5%  0.5  2,735  1.3  11.70  9.9% 

4  2,533  9.9  1.58  15.2%  4  2,536  9.9  12.62  11.0% 

16  2,130  33.3  1.88  17.4%  16  2,152  33.6  14.86  17.1% 

64  1,208  75.5  3.31  26.4%  64  1,197  74.8  26.73  25.8% 

256  508  126.9  7.87  42.8%  256  502  125.4  63.79  43.5% 

1024  124  124.5  31.79  168.4%  1024  268  267.9  119.42  68.8% 


APPENDIX C. Test-2 Full Results

5 Drives

Sequential Read  R5 4+1 x1  1 thread  SSD  5 drives   |   Sequential Read  R5 4+1 x1  8 threads  SSD  5 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  14,472  7.1  0.07  119%  0.5  74,769  36.5  0.11  114% 

4  11,276  44.0  0.09  103%  4  50,824  198.5  0.16  108% 

16  8,693  135.8  0.11  111%  16  19,447  303.9  0.41  109% 

64  3,820  238.7  0.26  108%  64  5,612  350.7  1.43  119% 

256  1,285  321.2  0.78  115%  256  1,522  380.5  5.26  130% 

1024  337  336.6  2.97  116%  1024  385  384.8  20.79  130% 

Sequential Read  R5 4+1 x1  1 thread  SAS  5 drives   |   Sequential Read  R5 4+1 x1  8 threads  SAS  5 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  12,195  6.0  0.08  84.3%  0.5  65,518  32.0  0.12  87.6% 

4  10,979  42.9  0.09  97.4%  4  47,094  184.0  0.17  92.7% 

16  7,805  122.0  0.13  89.8%  16  17,874  279.3  0.45  91.9% 

64  3,551  221.9  0.28  93.0%  64  4,709  294.3  1.70  83.9% 

256  1,115  278.7  0.90  86.8%  256  1,175  293.7  6.81  77.2% 

1024  290  290.2  3.44  86.2%  1024  295  294.9  27.13  76.7% 

Sequential Write  R5 4+1 x1  1 thread  SSD  5 drives   |   Sequential Write  R5 4+1 x1  8 threads  SSD  5 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS   |   Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  9,899  4.8  0.10  109%  0.5  39,175  19.1  0.20  101% 

4  8,202  32.0  0.12  93%  4  31,220  122.0  0.26  100% 

16  6,293  98.3  0.16  105%  16  16,887  263.9  0.47  117% 

64  2,889  180.6  0.35  96%  64  4,252  265.7  1.88  113% 

256  1,032  257.9  0.97  113%  256  1,061  265.3  7.51  114% 

1024  266  266.2  3.76  113%  1024  266  265.5  30.13  114% 

Sequential Write, R5 4+1 x1, 1 thread, SAS, 5 drives  |  Sequential Write, R5 4+1 x1, 8 threads, SAS, 5 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  9,062  4.4  0.11  91.5%  0.5  38,688  18.9  0.21  98.8% 

4  8,788  34.3  0.11  107.1%  4  31,314  122.3  0.25  100.3% 

16  5,978  93.4  0.17  95.0%  16  14,426  225.4  0.55  85.4% 

64  3,006  187.9  0.33  104.0%  64  3,749  234.3  2.09  88.2% 

256  910  227.4  1.10  88.2%  256  931  232.9  8.45  87.8% 

1024  235  235.5  4.25  88.4%  1024  233  233.0  34.31  87.8% 


10 Drives

Sequential Read, R5 4+1 x2, 2 threads, SSD, 10 drives  |  Sequential Read, R5 4+1 x2, 16 threads, SSD, 10 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  25,882  12.6  0.08  105%  0.5  137,228  67.0  0.12  115% 

4  23,041  90.0  0.09  106%  4  101,538  396.6  0.16  107% 

16  16,213  253.3  0.12  105%  16  38,907  607.9  0.41  103% 

64  7,706  481.6  0.26  106%  64  11,240  702.5  1.42  113% 

256  2,494  623.5  0.80  108%  256  3,046  761.4  5.25  122% 

1024  678  677.6  2.95  110%  1024  770  770.1  20.78  123% 

Sequential Read, R5 4+1 x2, 2 threads, SAS, 10 drives  |  Sequential Read, R5 4+1 x2, 16 threads, SAS, 10 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  24,719  12.1  0.08  95.5%  0.5  119,421  58.3  0.13  87.0% 

4  21,808  85.2  0.09  94.7%  4  94,643  369.7  0.17  93.2% 

16  15,441  241.3  0.13  95.2%  16  37,625  587.9  0.42  96.7% 

64  7,291  455.7  0.27  94.6%  64  9,986  624.1  1.60  88.8% 

256  2,306  576.6  0.87  92.5%  256  2,497  624.2  6.41  82.0% 

1024  618  617.6  3.24  91.1%  1024  626  625.6  25.58  81.2% 

Sequential Write, R5 4+1 x2, 2 threads, SSD, 10 drives  |  Sequential Write, R5 4+1 x2, 16 threads, SSD, 10 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  18,634  9.1  0.11  104%  0.5  77,896  38.0  0.20  102% 

4  16,735  65.4  0.12  104%  4  62,248  243.2  0.26  100% 

16  12,060  188.4  0.17  103%  16  33,896  529.6  0.47  117% 

64  5,885  367.8  0.34  102%  64  8,468  529.3  1.89  112% 

256  2,026  506.6  0.99  111%  256  2,115  528.7  7.55  113% 

1024  530  529.9  3.77  112%  1024  529  529.2  30.23  113% 

Sequential Write, R5 4+1 x2, 2 threads, SAS, 10 drives  |  Sequential Write, R5 4+1 x2, 16 threads, SAS, 10 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  17,965  8.8  0.11  96.4%  0.5  76,372  37.3  0.21  98.0% 

4  16,153  63.1  0.12  96.5%  4  61,942  242.0  0.26  99.5% 

16  11,669  182.3  0.17  96.8%  16  28,970  452.7  0.55  85.5% 

64  5,752  359.5  0.35  97.7%  64  7,587  474.2  2.10  89.6% 

256  1,819  454.9  1.10  89.8%  256  1,873  468.2  8.38  88.5% 

1024  474  474.0  4.22  89.5%  1024  468  467.7  34.20  88.4% 


15 Drives

Sequential Read, R5 4+1 x3, 3 threads, SSD, 15 drives  |  Sequential Read, R5 4+1 x3, 24 threads, SSD, 15 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  36,057  17.6  0.08  98%  0.5  189,013  92.3  0.13  104% 

4  33,238  129.8  0.09  100%  4  149,562  584.2  0.16  109% 

16  23,063  360.4  0.13  100%  16  58,222  909.7  0.41  102% 

64  11,409  713.1  0.26  103%  64  16,856  1,053.5  1.42  111% 

256  3,643  910.7  0.82  105%  256  4,568  1,142.0  5.25  120% 

1024  1,010  1,010.1  2.97  107%  1024  1,152  1,152.5  20.82  121% 

Sequential Read, R5 4+1 x3, 3 threads, SAS, 15 drives  |  Sequential Read, R5 4+1 x3, 24 threads, SAS, 15 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  36,911  18.0  0.08  102.4%  0.5  182,233  89.0  0.13  96.4% 

4  33,280  130.0  0.09  100.1%  4  137,689  537.8  0.17  92.1% 

16  23,009  359.5  0.13  99.8%  16  57,147  892.9  0.42  98.2% 

64  11,069  691.8  0.27  97.0%  64  15,206  950.4  1.58  90.2% 

256  3,472  868.0  0.86  95.3%  256  3,808  951.9  6.30  83.4% 

1024  940  940.3  3.19  93.1%  1024  952  951.8  25.21  82.6% 

Sequential Write, R5 4+1 x3, 3 threads, SSD, 15 drives  |  Sequential Write, R5 4+1 x3, 24 threads, SSD, 15 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  26,409  12.9  0.11  99%  0.5  113,652  55.5  0.21  99% 

4  24,276  94.8  0.12  101%  4  90,075  351.9  0.27  101% 

16  17,075  266.8  0.18  100%  16  48,347  755.4  0.50  114% 

64  8,529  533.0  0.35  101%  64  12,125  757.8  1.97  110% 

256  2,867  716.7  1.05  108%  256  3,019  754.8  7.90  110% 

1024  757  756.5  3.96  109%  1024  756  756.2  31.74  110% 

Sequential Write, R5 4+1 x3, 3 threads, SAS, 15 drives  |  Sequential Write, R5 4+1 x3, 24 threads, SAS, 15 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  26,613  13.0  0.11  100.8%  0.5  114,751  56.0  0.21  101.0% 

4  24,145  94.3  0.12  99.5%  4  88,794  346.8  0.27  98.6% 

16  17,005  265.7  0.18  99.6%  16  42,588  665.4  0.56  88.1% 

64  8,477  529.8  0.35  99.4%  64  11,041  690.1  2.16  91.1% 

256  2,661  665.2  1.13  92.8%  256  2,736  683.9  8.71  90.6% 

1024  694  693.8  4.32  91.7%  1024  686  686.3  34.96  90.8% 


20 Drives

Sequential Read, R5 4+1 x4, 4 threads, SSD, 20 drives  |  Sequential Read, R5 4+1 x4, 32 threads, SSD, 20 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  49,664  24.2  0.08  95%  0.5  243,842  119.1  0.13  116% 

4  43,282  169.1  0.09  96%  4  195,664  764.3  0.16  110% 

16  31,090  485.8  0.13  98%  16  77,671  1,213.6  0.41  102% 

64  15,062  941.4  0.26  102%  64  22,479  1,404.9  1.42  110% 

256  4,898  1,224.6  0.82  105%  256  6,094  1,523.5  5.25  120% 

1024  1,337  1,336.9  2.99  106%  1024  1,536  1,535.9  20.83  120% 

Sequential Read, R5 4+1 x4, 4 threads, SAS, 20 drives  |  Sequential Read, R5 4+1 x4, 32 threads, SAS, 20 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  52,399  25.6  0.08  105.5%  0.5  211,105  103.1  0.15  86.6% 

4  45,233  176.7  0.09  104.5%  4  178,339  696.6  0.18  91.1% 

16  31,769  496.4  0.13  102.2%  16  76,141  1,189.7  0.42  98.0% 

64  14,758  922.4  0.27  98.0%  64  20,363  1,272.7  1.57  90.6% 

256  4,686  1,171.6  0.85  95.7%  256  5,096  1,273.9  6.28  83.6% 

1024  1,259  1,259.0  3.18  94.2%  1024  1,276  1,276.1  25.08  83.1% 

Sequential Write, R5 4+1 x4, 4 threads, SSD, 20 drives  |  Sequential Write, R5 4+1 x4, 32 threads, SSD, 20 drives

Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SSD/SAS IOPS

0.5  35,735  17.4  0.11  102%  0.5  147,353  71.9  0.22  100% 

4  31,769  124.1  0.13  102%  4  116,121  453.6  0.27  101% 

16  22,720  355.0  0.18  101%  16  62,085  970.1  0.51  112% 

64  11,124  695.3  0.36  101%  64  15,512  969.5  2.06  108% 

256  3,727  931.7  1.07  108%  256  3,863  965.9  8.23  109% 

1024  968  968.2  4.13  107%  1024  969  969.1  33.02  108% 

Sequential Write, R5 4+1 x4, 4 threads, SAS, 20 drives  |  Sequential Write, R5 4+1 x4, 32 threads, SAS, 20 drives

Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS  |  Block (KB)  IOPS  MB/s  RT [msec]  SAS/SSD IOPS

0.5  35,146  17.2  0.11  98.4%  0.5  146,653  71.6  0.22  99.5% 

4  31,126  121.6  0.13  98.0%  4  115,288  450.3  0.28  99.3% 

16  22,507  351.7  0.18  99.1%  16  55,581  868.4  0.58  89.5% 

64  11,013  688.3  0.36  99.0%  64  14,380  898.7  2.22  92.7% 

256  3,463  865.8  1.15  92.9%  256  3,559  889.7  8.91  92.1% 

1024  904  904.1  4.42  93.4%  1024  895  895.1  35.74  92.4% 


APPENDIX D. Test-3 Full Results: Random Mixed Workloads

All of the detailed test results are shown below in four sets of tables, one per block size (2KB, 4KB, 8KB, and 16KB).

2KB Block

Random 2KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  8 Threads  8 Threads  16 Threads  16 Threads 

Read %  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec] 

100  18,776  0.4  1,323  6.0  37,670  0.4  2,649  6.0 

75  8,214  1.0  1,000  8.0  16,504  1.0  2,001  8.0 

50  7,136  1.1  907  8.8  14,318  1.1  1,816  8.8 

25  7,191  1.1  874  9.2  14,483  1.1  1,759  9.1 

0  7,277  1.1  702  11.4  14,600  1.1  1,410  11.3 

Random 2KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  24 Threads  24 Threads  32 Threads  32 Threads 

Read %  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec] 

100  55,569  0.4  3,975  6.0  73,470  0.4  5,293  6.0 

75  24,515  1.0  2,996  8.0  32,349  1.0  3,985  8.0 

50  21,115  1.1  2,728  8.8  27,675  1.2  3,636  8.8 

25  20,810  1.1  2,646  9.1  26,996  1.2  3,538  9.0 

0  20,616  1.2  2,114  11.4  26,628  1.2  2,818  11.4 
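To put the random-I/O gap these tables quantify into concrete terms, here is a short Python sketch (the dictionary layout is our own; the IOPS values are copied from the 2KB, 5-drive, 8-thread tables above) that computes the SSD-to-SAS ratio at each read mix:

```python
# "Random 2KB", 5 drives, 8 threads: read percentage -> (SSD IOPS, SAS IOPS)
iops = {
    100: (18776, 1323),
    75:  (8214, 1000),
    50:  (7136, 907),
    25:  (7191, 874),
    0:   (7277, 702),
}

for read_pct in sorted(iops, reverse=True):
    ssd, sas = iops[read_pct]
    print(f"{read_pct:3d}% read: SSD delivers {ssd / sas:.1f}x the SAS IOPS")
```

At this block size the advantage ranges from roughly 8x on mixed workloads to about 14x on pure reads, which is consistent with the SAS-equivalence tables in the Executive Summary.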

4KB Block

Random 4KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  8 Threads  8 Threads  16 Threads  16 Threads 

Read %  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec] 

100  14,510  0.5  1,362  5.9  29,143  0.5  2,728  5.9 

75  7,670  1.0  1,014  7.9  15,397  1.0  2,021  7.9 

50  6,777  1.2  915  8.7  13,655  1.2  1,833  8.7 

25  6,860  1.2  887  9.0  13,826  1.2  1,794  8.9 

0  6,786  1.2  708  11.3  13,681  1.2  1,421  11.3 

Random 4KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  24 Threads  24 Threads  32 Threads  32 Threads 

Read %  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec] 

100  43,329  0.5  4,098  5.9  57,539  0.6  5,451  5.9 

75  22,901  1.0  3,036  7.9  30,250  1.1  4,038  7.9 

50  20,118  1.2  2,736  8.8  26,512  1.2  3,648  8.8 

25  19,988  1.2  2,680  9.0  26,096  1.2  3,572  9.0 

0  19,635  1.2  2,131  11.3  25,486  1.3  2,842  11.3 


8KB Block

Random 8KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  8 Threads  8 Threads  16 Threads  16 Threads 

Read %  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec] 

100  9,628  0.8  1,397  5.7  19,358  0.8  2,791  5.7 

75  6,447  1.2  1,025  7.8  12,957  1.2  2,047  7.8 

50  5,936  1.3  914  8.8  11,906  1.3  1,820  8.8 

25  6,007  1.3  892  9.0  12,103  1.3  1,795  8.9 

0  5,535  1.4  714  11.2  11,163  1.4  1,437  11.1 

Random 8KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  24 Threads  24 Threads  32 Threads  32 Threads 

Read %  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec] 

100  28,929  0.8  4,184  5.7  38,519  0.8  5,581  5.7 

75  19,377  1.2  3,071  7.8  25,614  1.2  4,081  7.8 

50  17,712  1.4  2,725  8.8  23,446  1.4  3,637  8.8 

25  17,837  1.3  2,696  8.9  23,557  1.4  3,589  8.9 

0  16,620  1.4  2,696  11.2  22,062  1.4  2,865  11.2 

16KB Block

Random 16KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  8 Threads  8 Threads  16 Threads  16 Threads 

Read %  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec] 

100  6,679  1.2  1,338  6.0  13,372  1.2  2,677  6.0 

75  4,298  1.9  991  8.1  8,611  1.9  1,982  8.1 

50  3,910  2.0  881  9.1  7,887  2.0  1,758  9.1 

25  4,139  1.9  912  8.8  8,317  1.9  1,830  8.7 

0  4,028  2.0  746  10.7  8,114  2.0  1,494  10.7 

Random 16KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  24 Threads  24 Threads  32 Threads  32 Threads 

Read %  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec]  IOPS  RT [msec] 

100  20,038  1.2  4,005  6.0  26,672  1.2  5,334  6.0 

75  12,905  1.9  2,979  8.1  17,205  1.9  3,956  8.1 

50  11,766  2.0  2,639  9.1  15,643  2.0  3,517  9.1 

25  12,386  1.9  2,744  8.7  16,458  1.9  3,653  8.8 

0  12,098  2.0  2,240  10.7  16,130  2.0  2,986  10.7 


APPENDIX E. Test-4 Full Results: Sequential Workloads Using the Default 256KB RAID Chunk

These tests used mixed sequential workloads with block sizes of 64KB, 128KB, 256KB, 512KB, and 1024KB and the default RAID formatting chunk size of 256KB. Tests were run on 5, 10, 15, and 20 drives using 1, 2, 3, or 4 LUNs. Both SSD and SAS drive results are listed.

64KB Block

Sequential 64KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  1 Thread  1 Thread  2 Threads  2 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  249.3  0.2  254.7  0.2  493.2  0.3  481.4  0.3 

75  161.6  0.4  91.0  0.7  320.8  0.4  212.7  0.6 

50  127.4  0.5  95.7  0.7  255.1  0.5  190.1  0.7 

25  113.7  0.5  90.9  0.7  228.9  0.5  185.9  0.7 

0  186.6  0.3  189.7  0.3  374.1  0.3  366.2  0.3 

Sequential 64KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  3 Threads  3 Threads  4 Threads  4 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  718.4  0.3  698.2  0.3  971.9  0.3  946.2  0.3 

75  477.8  0.4  335.8  0.6  626.0  0.4  421.1  0.6 

50  378.4  0.5  284.8  0.7  503.9  0.5  381.6  0.7 

25  338.6  0.6  274.1  0.7  446.7  0.6  361.9  0.7 

0  539.9  0.3  527.5  0.4  714.9  0.3  697.4  0.4 

128KB Block

Sequential 128KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  1 Thread  1 Thread  2 Threads  2 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  273.2  0.5  275.6  0.5  546.4  0.5  566.8  0.4 

75  171.0  0.7  146.9  0.8  342.1  0.7  288.2  0.9 

50  111.2  1.2  109.7  1.2  210.8  1.2  223.2  1.1 

25  57.3  2.5  48.9  3.0  93.5  3.0  127.9  2.1 

0  221.2  0.6  215.9  0.6  441.7  0.6  438.8  0.6 

Sequential 128KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  3 Threads  3 Threads  4 Threads  4 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  787.1  0.5  803.1  0.5  1055.9  0.5  1061.3  0.5 

75  491.4  0.8  423.0  0.9  652.7  0.8  565.4  0.9 

50  325.0  1.2  322.6  1.2  399.0  1.3  426.6  1.2 

25  173.8  2.3  179.6  2.1  183.9  2.8  212.0  2.5 

0  635.6  0.6  625.0  0.6  823.6  0.6  807.8  0.6 


256KB Block

Sequential 256KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  1 Thread  1 Thread  2 Threads  2 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  313.4  0.8  319.6  0.8  626.7  0.8  604.5  0.8 

75  65.3  3.9  58.4  4.4  128.9  4.0  151.9  3.4 

50  32.7  7.8  31.9  7.9  68.8  7.3  69.7  7.3 

25  26.5  9.8  30.8  9.0  51.0  9.9  55.4  9.4 

0  255.5  1.0  229.7  1.1  509.8  1.0  457.1  1.1 

Sequential 256KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  3 Threads  3 Threads  4 Threads  4 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  891.7  0.8  865.6  0.9  1218.8  0.8  1179.4  0.8 

75  180.2  4.2  187.0  4.1  241.0  4.2  238.7  4.2 

50  91.5  8.2  93.5  8.1  117.9  8.5  119.8  8.4 

25  69.8  10.8  74.9  10.3  97.6  10.4  102.6  10.0 

0  716.6  1.0  665.5  1.1  938.9  1.1  866.2  1.2 

512KB Block

Sequential 512KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  1 Thread  1 Thread  2 Threads  2 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  328.5  1.5  315.2  1.6  657.1  1.5  642.1  1.6 

75  97.7  5.2  91.1  5.6  192.9  5.2  185.0  5.5 

50  51.4  10.1  50.9  9.9  106.2  9.5  101.8  10.0 

25  38.7  13.0  38.9  13.3  78.7  12.9  86.8  12.0 

0  263.1  1.9  242.8  2.1  524.4  1.9  477.0  2.1 

Sequential 512KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  3 Threads  3 Threads  4 Threads  4 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  946.9  1.6  907.9  1.6  1257.4  1.6  1210.6  1.6 

75  269.4  5.6  255.7  5.9  347.7  5.8  343.8  5.9 

50  149.3  10.1  140.8  10.7  189.7  10.6  183.0  11.0 

25  110.3  13.6  110.1  13.7  144.2  13.9  143.7  14.0 

0  747.6  2.0  684.7  2.2  960.3  2.1  887.1  2.3 


1024KB Block

Sequential 1024KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  1 Thread  1 Thread  2 Threads  2 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  334.7  3.0  328.3  3.0  667.8  3.0  647.8  3.1 

75  241.2  4.1  198.8  5.0  481.4  4.2  404.2  4.9 

50  182.9  5.5  151.2  6.6  367.2  5.4  292.5  6.8 

25  149.2  6.7  120.1  8.3  299.5  6.7  243.7  8.2 

0  266.1  3.8  236.7  4.2  529.8  3.8  473.9  4.2 

Sequential 1024KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  3 Threads  3 Threads  4 Threads  4 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  942.4  3.2  916.0  3.3  1285.8  3.1  1244.8  3.2 

75  706.7  4.2  596.7  5.0  938.7  4.3  782.6  5.1 

50  539.8  5.6  433.4  6.9  720.9  5.5  583.0  6.9 

25  443.8  6.8  361.7  8.3  586.2  6.8  478.4  8.4 

0  756.4  4.0  690.8  4.3  967.5  4.1  898.6  4.4 
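The tables above also let us check how cleanly sequential throughput scales with drive count. A minimal Python sketch (the 100%-read, 1024KB-block SSD figures are copied from the default-chunk tables above; the efficiency metric is our own framing) compares each configuration against linear scaling from the 5-drive baseline:

```python
# 100% sequential read, 1024KB block, default 256KB chunk: drive count -> MB/s (SSD)
mbps = {5: 334.7, 10: 667.8, 15: 942.4, 20: 1285.8}

per_drive = mbps[5] / 5  # MB/s per drive at the 5-drive baseline
for drives in sorted(mbps):
    # 1.0 means perfectly linear scaling relative to the 5-drive result
    efficiency = mbps[drives] / (per_drive * drives)
    print(f"{drives:2d} drives: {mbps[drives]:7.1f} MB/s, {efficiency:.2f} of linear")
```

Scaling is essentially linear out to 10 drives and remains above 90% of linear at 15 and 20 drives, suggesting the drives, not the controllers or host paths, remain the bottleneck across these configurations.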

Sequential Workloads Using the Optional 64KB RAID Chunk

These tests used mixed sequential workloads with block sizes of 64KB, 128KB, 256KB, 512KB, and 1024KB and the optional RAID formatting chunk size of 64KB. Tests were run on 5, 10, 15, and 20 drives using 1, 2, 3, or 4 LUNs. Both SSD and SAS drive results are listed.

64KB Block, 1 Thread / LUN, 5, 10, 15, 20 drives

Sequential 64KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  1 Thread  1 Thread  2 Threads  2 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  251.5  0.2  245.9  0.3  504.3  0.2  467.0  0.3 

75  142.7  0.4  106.3  0.6  285.4  0.4  200.2  0.6 

50  120.9  0.5  79.4  0.8  243.6  0.5  163.3  0.8 

25  107.3  0.6  66.0  1.0  211.1  0.6  142.9  0.9 

0  196.3  0.3  200.3  0.3  394.8  0.3  392.8  0.3 

Sequential 64KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  3 Threads  3 Threads  4 Threads  4 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  723.8  0.3  690.4  0.3  972.9  0.3  940.0  0.3 

75  420.2  0.4  305.4  0.6  566.7  0.4  400.9  0.6 

50  361.9  0.5  232.0  0.8  484.1  0.5  324.9  0.8 

25  319.4  0.6  206.5  0.9  412.5  0.6  281.3  0.9 

0  561.1  0.3  551.0  0.3  755.5  0.3  751.2  0.3 


128KB Block, 1 Thread / LUN, 5, 10, 15, 20 drives

Sequential 128KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  1 Thread  1 Thread  2 Threads  2 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  286.1  0.4  281.8  0.4  569.0  0.4  521.2  0.5 

75  166.5  0.7  117.2  1.1  334.8  0.7  234.0  1.1 

50  136.6  0.9  82.9  1.5  265.0  0.9  167.2  1.5 

25  110.0  1.1  69.4  1.8  219.5  1.1  150.0  1.7 

0  239.3  0.5  248.9  0.5  473.9  0.5  434.0  0.6 

Sequential 128KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  3 Threads  3 Threads  4 Threads  4 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  824.5  0.5  792.0  0.5  1102.0  0.5  1059.7  0.5 

75  495.1  0.8  332.9  1.1  663.2  0.8  451.3  1.1 

50  385.0  1.0  254.4  1.5  526.2  0.9  326.7  1.5 

25  325.4  1.2  222.7  1.7  437.6  1.1  298.7  1.7 

0  674.2  0.6  651.8  0.6  887.0  0.6  829.7  0.6 

256KB Block, 1 Thread / LUN, 5, 10, 15, 20 drives

Sequential 256KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  1 Thread  1 Thread  2 Threads  2 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  317.9  0.8  303.6  0.8  646.5  0.8  572.9  0.9 

75  197.7  1.3  129.6  1.9  399.5  1.2  262.3  1.9 

50  150.5  1.7  88.1  2.8  306.1  1.6  183.9  2.7 

25  122.1  2.0  69.3  3.7  247.5  2.0  151.3  3.3 

0  269.1  0.9  216.1  1.2  536.2  0.9  429.3  1.2 

Sequential 256KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  3 Threads  3 Threads  4 Threads  4 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  908.2  0.8  839.4  0.9  1231.7  0.8  1160.4  0.9 

75  590.5  1.3  379.3  2.0  789.4  1.3  525.5  1.9 

50  445.7  1.7  272.2  2.8  596.1  1.7  366.8  2.7 

25  362.2  2.1  237.7  3.2  486.3  2.1  299.5  3.3 

0  770.5  1.0  575.4  1.3  1040.0  1.0  954.9  1.0 


512KB Block, 1 Thread / LUN, 5, 10, 15, 20 drives

Sequential 512KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  1 Thread  1 Thread  2 Threads  2 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  335.1  1.5  327.4  1.5  668.9  1.5  600.8  1.7 

75  213.8  2.3  150.3  3.3  431.6  2.3  303.5  3.3 

50  155.0  3.2  120.3  4.2  312.5  3.2  204.7  4.9 

25  126.9  3.9  81.1  6.2  251.1  4.0  167.3  6.0 

0  271.9  1.8  237.4  2.2  537.2  1.9  298.9  3.4 

Sequential 512KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  3 Threads  3 Threads  4 Threads  4 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  954.4  1.6  908.4  1.6  1282.2  1.6  1232.1  1.6 

75  641.0  2.3  420.5  3.6  851.2  2.3  594.3  3.4 

50  463.5  3.2  330.1  4.6  631.2  3.2  446.3  4.5 

25  372.2  4.0  269.6  5.6  502.0  4.0  347.6  5.8 

0  787.1  1.9  542.8  2.9  1032.0  1.9  831.8  2.4 

1024KB Block, 1 Thread / LUN, 5, 10, 15, 20 drives

Sequential 1024KB  5 SSD  5 SAS  10 SSD  10 SAS 

R5 4d+1p  1 Thread  1 Thread  2 Threads  2 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  335.3  3.0  325.4  3.1  673.8  3.0  647.6  3.1 

75  209.8  4.8  158.7  6.3  419.6  4.8  317.5  6.3 

50  146.9  6.8  95.9  10.5  295.3  6.8  193.8  10.4 

25  126.9  7.9  75.1  13.4  251.4  8.0  159.7  12.6 

0  270.2  3.7  206.3  5.3  536.0  3.7  420.6  4.9 

Sequential 1024KB  15 SSD  15 SAS  20 SSD  20 SAS 

R5 4d+1p  3 Threads  3 Threads  4 Threads  4 Threads 

Read %  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec]  MB/sec  RT [msec] 

100  972.9  3.1  937.1  3.2  1310.7  ‐  1247.1  3.2 

75  617.7  4.9  493.6  6.1  840.6  ‐  630.7  6.4 

50  433.2  6.9  292.6  10.3  582.9  ‐  410.0  9.8 

25  375.0  8.0  248.1  12.1  504.1  ‐  328.3  12.2 

0  815.4  3.7  547.0  5.6  1048.8  ‐  650.8  6.2 


Corporate Headquarters 750 Central Expressway, Santa Clara, California 95050-2627 USA Contact Information: + 1 408 970 1000 www.hds.com / [email protected]

Asia Pacific and Americas 750 Central Expressway, Santa Clara, California 95050-2627 USA Contact Information: + 1 408 970 1000 www.hds.com / [email protected]

Europe Headquarters Sefton Park, Stoke Poges, Buckinghamshire SL2 4HD United Kingdom Contact Information: + 44 (0) 1753 618000 www.hds.com / [email protected]

Hitachi is a registered trademark of Hitachi, Ltd., and/or its affiliates in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries.

Microsoft is a registered trademark of Microsoft Corporation.

Hitachi Data Systems has achieved Microsoft Competency in Advanced Infrastructure Solutions.

All other trademarks, service marks, and company names are properties of their respective owners.

Notice: This document is for informational purposes only, and does not set forth any warranty, express or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data Systems being in effect, and that may be configuration-dependent, and features that may not be currently available. Contact your local Hitachi Data Systems sales office for information on feature and product availability.

Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited warranties. To see a copy of these terms and conditions prior to purchase or license, please go to http://www.hds.com/corporate/legal/index.html or call your local sales representatives to obtain a printed copy. If you purchase or license the product, you are deemed to have accepted the terms and conditions.

© Hitachi Data Systems Corporation 2008. All Rights Reserved WHP-###-## July 2010