HUAWEI OceanStor Enterprise Unified Storage System
RAID 2.0+ Technical White Paper
Issue 01
Date 2014-06-04
HUAWEI TECHNOLOGIES CO., LTD.
Issue 01 (2014-06-04) Huawei Proprietary and Confidential
Copyright © Huawei Technologies Co., Ltd.
Copyright © Huawei Technologies Co., Ltd. 2014. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without prior
written consent of Huawei Technologies Co., Ltd.
Trademarks and Permissions
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Huawei Technologies Co., Ltd.
Address: Huawei Industrial Base
Bantian, Longgang
Shenzhen 518129
People's Republic of China
Website: http://enterprise.huawei.com
Contents
1 Overview
1.1 RAID Technology Evolution
1.2 Introduction to Huawei RAID 2.0+
2 Working Principle
2.1 Basic Principle of RAID 2.0+
2.2 RAID 2.0+ Implementation Framework
2.3 Logical Objects Involved in RAID 2.0+
3 Technical Features
3.1 Secure and Trusted
3.1.1 Automatic Load Balancing, Decreasing the Overall Failure Rate
3.1.2 Fast Thin Reconstruction, Reducing Dual-Disk Failure Probability
3.1.3 Fault Detection and Self-Healing, Ensuring System Reliability
3.2 Flexible and Efficient
3.2.1 Pool Virtualization, Simplifying Storage Planning and Management
3.2.2 One LUN Across More Disks, Improving Performance of a Single LUN
3.2.3 Dynamic Space Distribution, Flexibly Adapting to Service Changes
4 Configuration
4.1 Configuring RAID 2.0+
4.1.1 Configuring Disk Domains and Storage Pools
4.1.2 Configuring LUNs and LUN Groups
4.1.3 Configuring Hosts and Host Groups
4.1.4 Configuring Mapping Views
A FAQs
B Related Resources
C Acronyms and Abbreviations
1 Overview
1.1 RAID Technology Evolution
The term redundant array of independent disks (RAID) was first defined by the University of
California, Berkeley in 1987. The basic idea of RAID is to combine multiple independent
physical disks based on a certain algorithm to form a virtual logical disk that provides a larger
capacity, higher performance, or better data error tolerance.
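The parity principle behind most RAID levels can be shown in a few lines of Python (an illustrative sketch, not any vendor's implementation): parity is the XOR of the data blocks, so any single lost block can be recovered by XORing the survivors.

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together (the RAID 5 parity operation)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A tiny 4D+1P stripe: four data blocks and one parity block.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_blocks(data)

# The "disk" holding data[1] fails; recover it from the survivors plus parity.
recovered = xor_blocks([data[0], data[2], data[3], parity])
assert recovered == b"BBBB"
```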
As a mature and reliable data protection standard, RAID has served as a foundational
technology in storage systems since its inception. However, with the rapid growth of data
storage needs and the emergence of high-performance applications in recent years, traditional
RAID has gradually revealed its shortcomings.
IDC predicts that the storage market will maintain an average annual growth rate of at least
10% over the next five years, with global storage capacity potentially reaching 16,840 PB. To
meet this growth, disk manufacturers keep adopting more advanced technologies to increase
the storage density of disks. Today, 4 TB large-capacity disks and 900 GB high-performance
SAS disks are common in the enterprise and consumer markets. However, the data
reconstruction triggered by the failure of a large-capacity disk exposes the disadvantages of
traditional RAID.
For example, traditional RAID 5 (8D+1P) needs about 40 hours to reconstruct the data of a
7.2k rpm 4 TB disk. The reconstruction consumes system resources and degrades the overall
performance of the application system. If a user lowers the reconstruction priority to preserve
application response times, the reconstruction takes even longer. Moreover, during the lengthy
reconstruction, heavy access load may cause another disk in the RAID group to fail, greatly
increasing the probability of a dual-disk failure and data loss.
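A back-of-the-envelope check of that 40-hour figure (numbers derived from the text, not measured on any array): the implied sustained write rate to the hot spare disk is only a few tens of MB/s.

```python
# 4 TB disk rebuilt in 40 hours -> implied sustained write rate.
capacity_mb = 4 * 1024 * 1024          # 4 TB expressed in MB
rebuild_seconds = 40 * 3600
rate_mb_per_s = capacity_mb / rebuild_seconds
print(f"Implied rebuild rate: {rate_mb_per_s:.1f} MB/s")  # roughly 29 MB/s
```

That rate is far below a 7.2k rpm disk's sequential bandwidth, reflecting the competition between rebuild I/O and foreground service I/O.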
On the other hand, traditional RAID is constrained by the number of disks in a RAID group.
In an era of soaring data growth, it fails to meet enterprises' needs for unified and flexible
resource scheduling. Moreover, as disk capacity increases, disk-based data management
becomes increasingly inefficient.
To resolve the preceding issues of traditional RAID and follow the virtualization trend, many
storage vendors adopt substitutes for traditional RAID as follows:
LUN virtualization: Based on traditional RAID, some storage vendors such as EMC and
HDS divide RAID groups into more fine-grained units and combine these units to form
storage space accessible to hosts.
Block virtualization: Some storage vendors such as Huawei and HP 3PAR divide the
space of disks that belong to a storage pool into small-granularity data blocks and create
RAID groups based on these data blocks. This approach allows data to be evenly
distributed onto all disks in the storage pool and enables resources to be managed in the
form of data blocks.
Figure 1-1 RAID technology evolution
1.2 Introduction to Huawei RAID 2.0+
HUAWEI OceanStor enterprise unified storage systems are brand-new storage systems
designed around the current state of storage products and storage technology trends.
Featuring virtualization, hybrid cloud, thin IT, and low carbon, they are intended for
medium- and large-sized data centers and focus on the core services of medium- and
large-sized enterprises.
OceanStor enterprise unified storage systems employ an innovative Smart Matrix
all-switching hardware architecture. With the use of a dedicated storage operating system
called eXtreme Virtual Engine (XVE), OceanStor enterprise unified storage systems meet
various storage requirements of large-sized data centers.
RAID 2.0+ is a brand-new RAID technology developed by Huawei to overcome the
disadvantages of traditional RAID and keep in line with the storage architecture virtualization
trend. RAID 2.0+ implements two-layer virtualized management instead of the traditional
fixed management. Based on underlying disk management that employs block virtualization
(Virtual for Disk), RAID 2.0+ uses Smart series efficiency improvement software to
implement efficient resource management that features upper-layer virtualization (Virtual for
Pool).
2 Working Principle
2.1 Basic Principle of RAID 2.0+
Huawei RAID 2.0+ employs two-layer virtualized management, namely, underlying disk
management plus upper-layer resource management. In a storage system, the space of each
disk is divided into data blocks with a small granularity, and RAID groups are created based
on data blocks so that data is evenly distributed onto all disks in a storage pool. In addition,
using data blocks as the smallest management unit greatly improves the efficiency of resource
management.
1. OceanStor enterprise unified storage systems support SSDs, SAS disks, and NL-SAS
disks, which compose disk domains. In a disk domain, disks of the same type compose
disk groups (DGs) based on certain rules.
2. The space of each disk in a DG is divided into chunks (CKs) of a fixed size. OceanStor
enterprise unified storage systems select chunks from different disks at random to
compose chunk groups (CKGs) based on a certain RAID algorithm.
3. Each CKG is divided into logical storage spaces of a fixed size called extents. Extents
are the basic units that compose thick LUNs (also called fat LUNs). In terms of thin
LUNs, extents are further divided into grains that have a smaller granularity, and grains
are mapped to thin LUNs.
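The hierarchy above (disk → DG → CK → CKG → extent) can be sketched with toy data structures. The sizes follow the text; the class names and the simplified nD+mP policy parsing are illustrative assumptions, not the array's actual metadata.

```python
from dataclasses import dataclass

CK_SIZE_MB = 64       # chunk (CK) size, per the text
EXTENT_SIZE_MB = 4    # default extent size

@dataclass
class Chunk:
    """CK: a fixed-size slice of one physical disk."""
    disk_id: int
    offset_mb: int

@dataclass
class ChunkGroup:
    """CKG: CKs from different disks combined under one RAID policy."""
    raid_policy: str
    chunks: list

    def parity_count(self):
        # Simplified parsing of "nD+mP" policies (does not cover RAID 10).
        return int(self.raid_policy.split("+")[1].rstrip("P"))

    def extent_count(self):
        """Extents yielded by the CKG's usable (data) capacity."""
        data_cks = len(self.chunks) - self.parity_count()
        return data_cks * CK_SIZE_MB // EXTENT_SIZE_MB

# An 8D+1P CKG built from 9 chunks on 9 different disks.
ckg = ChunkGroup("8D+1P", [Chunk(disk_id=d, offset_mb=0) for d in range(9)])
assert ckg.extent_count() == 128   # 8 data CKs x 64 MB / 4 MB extents
```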
2.2 RAID 2.0+ Implementation Framework
The following figure shows the implementation framework of RAID 2.0+ employed by
OceanStor enterprise unified storage systems.
A disk domain in OceanStor enterprise unified storage systems consists of disks from
one or multiple storage tiers. Each storage tier supports disks of a specific type: SSDs
compose the high-performance tier, SAS disks compose the performance tier, and
NL-SAS disks compose the capacity tier.
The disks on each storage tier are divided into CKs with a fixed size of 64 MB.
CKs on each storage tier compose CKGs based on a user-defined RAID policy. Users are
allowed to define a specific RAID policy for each storage tier of a storage pool.
OceanStor enterprise unified storage systems divide CKGs into small-sized extents. An
extent is the smallest granularity for data migration and is the basic unit of a thick LUN.
When creating a storage pool, users can set the extent size on the Advanced page. The
default extent size is 4 MB.
Multiple extents compose a volume that is externally presented as a LUN (which is a
thick LUN) accessible to hosts. A LUN implements space application, space release, and
data migration based on extents. For example, when creating a LUN, a user can specify a
storage tier from which the capacity of the LUN comes. In this case, the LUN consists of
the extents on the specified storage tier. After services start running, the storage system
migrates data among the storage tiers based on data activity levels and data migration
policies. (This function requires a SmartTier license.) In this scenario, data on the LUN is
distributed across the storage tiers of the storage pool at extent granularity.
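Extent-granularity tiering of the kind described can be sketched as ranking extents by access heat and keeping the hottest on the fastest tier. The function name, heat values, and slot count below are illustrative, not the SmartTier algorithm itself.

```python
def plan_tier0(extent_heat, tier0_slots):
    """Choose the hottest extents for the high-performance tier.
    extent_heat maps extent id -> access count over the monitoring window."""
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)
    return set(ranked[:tier0_slots])

heat = {"e1": 500, "e2": 20, "e3": 900, "e4": 5}
assert plan_tier0(heat, tier0_slots=2) == {"e3", "e1"}  # the two hottest extents
```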
When a user creates a thin LUN, OceanStor enterprise unified storage systems divide
extents into grains and map grains to the thin LUN. In this way, fine-grained
management of storage capacity is implemented.
2.3 Logical Objects Involved in RAID 2.0+
This section describes the major logical objects and key concepts related to RAID 2.0+.
Disk Domain
A disk domain is a combination of multiple disks (or all the disks of a storage system). After
disks are consolidated and a certain amount of hot spare space is reserved, a disk domain
provides storage resources for storage pools in a unified manner.
One or more disk domains can be created in an OceanStor enterprise unified storage
system.
Multiple storage pools can be created in a disk domain.
A disk domain can consist of SSDs, SAS disks, or NL-SAS disks.
Disk domains are isolated from one another, including performance, storage resources,
and faults.
Storage Pool and Tier
A storage pool is a storage resource container. The storage resources used by application
servers are all from storage pools. Created based on a specific disk domain, a storage pool
dynamically allocates CK resources of the disk domain and enables these CKs to compose
CKGs based on the RAID policy defined for each storage tier. Then, the CKGs provide
applications with storage resources that have RAID protection.
A storage tier is a collection of storage media providing the same level of performance in a
storage pool. Different storage tiers manage storage media with different performance levels
and provide storage space for applications that have different performance requirements.
Based on disk type, a storage pool can be divided into multiple tiers. OceanStor enterprise
unified storage systems support the following storage tiers and disk types:
Tier 0 (high-performance tier), SSD: SSDs provide high performance but are expensive.
Therefore, SSDs are suitable for storing the most frequently accessed data.
Tier 1 (performance tier), SAS: SAS disks provide moderate performance and are
inexpensive. Therefore, SAS disks are suitable for storing less frequently accessed data.
Tier 2 (capacity tier), NL-SAS: NL-SAS disks provide low performance but a large capacity
per disk at a low cost. Therefore, NL-SAS disks are suitable for storing large amounts of
data that is seldom accessed.
When creating a storage pool, a user is allowed to specify storage tiers allocated from the
corresponding disk domain and define a RAID policy and a capacity for each tier.
OceanStor enterprise unified storage systems support RAID 5, RAID 6, and RAID 10.
The supported RAID policies are as follows:
RAID 5: 4D+1P or 8D+1P
RAID 6: 4D+2P or 8D+2P
RAID 10: 2D+2D or 4D+4D, automatically selected by the storage system
The capacity tier consists of large-capacity NL-SAS disks. It is recommended that RAID
6 (a double-parity RAID level) be used as the RAID policy for this tier.
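The usable-capacity fraction implied by each policy can be computed directly. This sketch assumes the nD+mP naming above; RAID 10 mirrors its data, so half the raw space is usable.

```python
def usable_fraction(policy):
    """Usable-capacity fraction for the policies listed above.
    'nD+mP' -> n / (n + m); mirrored 'nD+nD' (RAID 10) -> 0.5."""
    left, right = policy.split("+")
    if right.endswith("D"):               # RAID 10 mirror halves the space
        return 0.5
    d = int(left.rstrip("D"))
    p = int(right.rstrip("P"))
    return d / (d + p)

for policy in ("4D+1P", "8D+1P", "4D+2P", "8D+2P", "2D+2D"):
    print(policy, f"{usable_fraction(policy):.0%}")
```

The trade-off is visible at a glance: wider stripes (8D+2P vs 4D+2P) recover capacity while keeping double-parity protection.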
Disk Group (DG)
A DG is a group of disks of the same type and from the same disk domain. There are three
disk types: SSD, SAS, and NL-SAS. Based on the number of disks of each type in each disk
domain, OceanStor enterprise unified storage systems automatically allocate disks to one or
more DGs.
One DG consists of disks of only one type.
CKs in a CKG come from different disks that belong to the same DG.
DGs are internal objects automatically configured by OceanStor enterprise unified
storage systems and typically used for fault isolation. DGs are not presented externally.
Logical Drive (LD)
LDs are logical disks managed by an OceanStor enterprise unified storage system. Each LD
corresponds to a physical disk.
CK
Each CK is a 64 MB block of physical space divided from a disk in a storage pool. The CK is
the basic unit of a RAID group.
CKG
A CKG is a logical storage unit that consists of CKs from different disks in the same DG
based on a RAID algorithm. A storage pool allocates resources from a disk domain by taking
CKG as the smallest unit.
All CKs in a CKG come from disks in the same DG.
A CKG has RAID properties, which are configured for the corresponding storage tier.
CKs and CKGs are internal objects automatically configured by OceanStor enterprise
unified storage systems. They are not presented externally.
Extent
An extent is a logical storage space with a fixed size divided from a CKG. The size ranges
from 512 KB to 64 MB; the default size is 4 MB. An extent is the smallest unit (granularity)
for data migration and hotspot data statistics collection. It is also the smallest unit for space
application and release in a storage pool.
One extent belongs to one volume or LUN.
A user can set the extent size while creating a storage pool. After that, the extent size
cannot be changed.
Extents in one storage pool may have a different size from those in another storage pool.
However, the extents in the same storage pool have the same size.
Grain
In thin LUN mode, extents are divided into 64 KB grains. A thin LUN allocates storage space
by grains. Logical block addresses (LBAs) in a grain are consecutive.
Grains are mapped to thin LUNs. A thick LUN does not involve grains.
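Grain-level allocation for thin LUNs can be sketched as an on-demand mapping table. This is a toy model of allocate-on-first-write, not the actual metadata layout.

```python
GRAIN_KB = 64   # grain size, per the text

class ThinLUN:
    """Toy thin LUN: a grain is backed only when first written."""
    def __init__(self):
        self.grain_map = {}    # grain index -> backing grain id
        self.next_grain = 0

    def write(self, offset_kb):
        idx = offset_kb // GRAIN_KB
        if idx not in self.grain_map:      # allocate on first touch
            self.grain_map[idx] = self.next_grain
            self.next_grain += 1
        return self.grain_map[idx]

    def allocated_kb(self):
        return len(self.grain_map) * GRAIN_KB

lun = ThinLUN()
lun.write(0)     # touches grain index 0 -> allocates
lun.write(100)   # offset 100 KB falls in grain index 1 -> allocates
lun.write(10)    # grain index 0 again -> no new allocation
assert lun.allocated_kb() == 128   # only two 64 KB grains are backed
```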
Volume and LUN
A volume is an internal management object in OceanStor enterprise unified storage systems.
A volume organizes all extents and grains of a LUN and applies for and releases extents to
increase and decrease the actual space occupied by the volume.
A LUN is a storage unit that can be directly mapped to a host for data read and write. A LUN
is the external representation of a volume.
3 Technical Features
Based on two-layer virtualized management, RAID 2.0+ overcomes the inherent defects of
traditional RAID and greatly improves storage system reliability and resource management
efficiency. By leveraging RAID 2.0+, OceanStor enterprise unified storage systems provide
truly secure, trusted, flexible, and efficient enterprise storage. This chapter describes the
technical features of RAID 2.0+ under two themes: secure and trusted, and flexible and
efficient.
3.1 Secure and Trusted
3.1.1 Automatic Load Balancing, Decreasing the Overall Failure Rate
A traditional RAID-based storage system typically contains multiple RAID groups, each
consisting of up to a dozen or so disks. Because the RAID groups work under different loads,
the overall load is unbalanced and hotspot disks appear. According to statistics collected by
the Storage Networking Industry Association (SNIA), hotspot disks are more vulnerable to
failures. In the following figure, Duty Cycle indicates the ratio of disk working time to total
disk power-on time, and AFR indicates the annualized failure rate. At high duty cycles, the
AFR is roughly 1.5 to 2 times that in low-duty-cycle scenarios.
RAID 2.0+ implements block virtualization to enable data to be automatically and evenly
distributed onto all disks in a storage pool, preventing unbalanced loads. This approach
decreases the overall failure rate of a storage system.
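The load-balancing effect of distributing chunks pseudo-randomly across all disks can be illustrated with a small simulation. The disk and chunk counts are arbitrary, and the model deliberately ignores the different-disk constraint within a CKG.

```python
import random

random.seed(42)                 # deterministic for the example
NUM_DISKS = 24
NUM_CKS = 24_000

load = [0] * NUM_DISKS
for _ in range(NUM_CKS):
    # Block-virtualization-style placement: each chunk lands on a random disk.
    load[random.randrange(NUM_DISKS)] += 1

mean = NUM_CKS / NUM_DISKS
spread = (max(load) - min(load)) / mean
print(f"Per-disk load stays within {spread:.1%} of the mean")
assert spread < 0.3             # loads remain close to uniform
```

With thousands of chunks per disk, statistical spreading keeps every disk near the mean load, which is exactly why no single disk becomes a hotspot.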
3.1.2 Fast Thin Reconstruction, Reducing Dual-Disk Failure Probability
Over the past decade of disk development, capacity growth has outpaced performance
improvement. Today, 4 TB disks are commonly seen in the enterprise and consumer markets,
and 5 TB disks are expected in the second quarter of 2014. Even high-performance SAS disks
specific to the enterprise market now provide up to 1.2 TB per disk.
Rapid capacity growth confronts traditional RAID with a serious issue: reconstruction of a
single disk, which required only dozens of minutes a decade ago, now takes a dozen or even
dozens of hours. The ever-longer reconstruction time means that a storage system that
encounters a disk failure must stay in a degraded, non-fault-tolerant state for a long time,
exposed to a serious data loss risk. Data loss commonly occurs when a storage system is
under the dual stress of service load and data reconstruction.
Based on underlying block virtualization, RAID 2.0+ overcomes the performance bottleneck
seen in target disks (hot spare disks) that are used by traditional RAID for data reconstruction.
As a result, the write bandwidth provided for reconstructed data flows is no longer a
reconstruction speed bottleneck, greatly accelerating reconstruction, decreasing dual-disk
failure probability, and improving storage system reliability.
The following figure compares the reconstruction principle of traditional RAID with that of
RAID 2.0+.
In the schematic diagram of traditional RAID, HDDs 0 to 4 compose a RAID 5 group,
and HDD 5 serves as a hot spare disk. If HDD 1 fails, an XOR algorithm is used to
reconstruct data based on HDDs 0, 2, 3, and 4, and the reconstructed data is written onto
HDD 5.
In the schematic diagram of RAID 2.0+, if HDD 1 fails, its data is reconstructed at CK
granularity, and only the allocated CKs (highlighted in the figure) are reconstructed. All
disks in the storage pool participate in the reconstruction, and the reconstructed data is
distributed across multiple disks (HDDs 4 and 9 in the figure).
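The many-to-many advantage can be approximated with a toy model: rebuild time scales inversely with the number of disks receiving reconstructed data. The 30 MB/s per-disk rebuild bandwidth is an assumed figure, not a specification.

```python
def rebuild_hours(data_tb, target_disks, mb_per_s_per_disk=30):
    """Toy rebuild-time model: data volume over aggregate write bandwidth.
    The per-disk bandwidth is an assumption, not a measured value."""
    total_mb = data_tb * 1024 * 1024
    return total_mb / (target_disks * mb_per_s_per_disk * 3600)

traditional = rebuild_hours(1, target_disks=1)   # single hot spare disk
raid20 = rebuild_hours(1, target_disks=20)       # spare space spread over 20 disks
print(f"{traditional:.1f} h vs {raid20:.1f} h")  # roughly 9.7 h vs 0.5 h
```

Thin reconstruction shrinks `data_tb` further by skipping unallocated CKs, compounding the speedup.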
RAID 2.0+'s fine-grained and efficient fault handling also accelerates reconstruction. In
addition to the original bad sector repair and disk failure reconstruction, RAID 2.0+ provides
bad block repair that reconstructs only used space at CK granularity. By efficiently
identifying used space, RAID 2.0+ implements thin reconstruction upon a disk failure to
further shorten the reconstruction time, mitigating data loss risks.
With great advantages in reconstruction, RAID 2.0+ enables OceanStor enterprise unified
storage systems to outperform traditional storage systems in terms of reconstruction. The
following figure compares the time that a traditional storage system and an OceanStor
enterprise unified storage system spend in reconstructing 1 TB data in an NL-SAS
large-capacity disk environment.
3.1.3 Fault Detection and Self-Healing, Ensuring System Reliability
OceanStor enterprise unified storage systems employ a multi-level error tolerance design for
disks and provide various measures to ensure reliability, including online disk diagnosis, disk
health analyzer (DHA), bad sector background scanning, and bad sector repair. Based on a hot
spare policy, RAID 2.0+ automatically reserves a certain amount of hot spare space in a disk
domain. If an OceanStor enterprise unified storage system detects an uncorrectable media
error in an area of a disk or finds that an entire disk fails, the OceanStor enterprise unified
storage system automatically reconstructs the affected data blocks and writes the
reconstructed data to the hot spare space of other disks, implementing quick self-healing.
Compared with traditional RAID, RAID 2.0+ improves hot sparing and reconstruction as
follows:
Hot spare configuration: Traditional RAID requires manually configured global or local hot
spare disks; RAID 2.0+ automatically provides distributed hot spare space.
Reconstruction mode: Traditional RAID performs many-to-one reconstruction, writing
reconstructed data serially to a single hot spare disk; RAID 2.0+ performs many-to-many
reconstruction, writing reconstructed data to multiple disks in parallel.
Load and duration: Traditional RAID suffers from hotspot disks and long reconstruction;
RAID 2.0+ balances the load across disks and shortens reconstruction.
3.2 Flexible and Efficient
3.2.1 Pool Virtualization, Simplifying Storage Planning and Management
Nowadays, mainstream storage systems typically contain hundreds of or even thousands of
disks of different types. If such storage systems employ traditional RAID, administrators need
to manage a lot of RAID groups and must carefully plan performance and capacity for each
application and RAID group. In an era of constant change, it is almost impossible to
accurately predict service development trends over an IT system's lifecycle and the
corresponding data growth. As a result, administrators often face management issues such as
uneven allocation of storage resources, which greatly increase management complexity.
OceanStor enterprise unified storage systems employ advanced virtualization technologies to
manage storage resources in the form of storage pools. Administrators only need to maintain a
few storage pools. All RAID configurations are automatically completed during the creation
of storage pools. In addition, OceanStor enterprise unified storage systems automatically
manage and schedule system resources in a smart way based on user-defined policies,
significantly simplifying storage planning and management.
3.2.2 One LUN Across More Disks, Improving Performance of a Single LUN
Since the turn of the 21st century, server computing capabilities have improved greatly and
the number of host applications (such as databases and virtual machines) has increased
sharply, driving the need for higher storage performance, capacity, and flexibility. Restricted
by the number of its disks, a traditional RAID group provides only a small capacity, moderate
performance, and poor scalability, and therefore cannot meet service requirements. When a
host accesses a LUN intensively, only a limited number of disks are actually accessed, easily
creating disk access bottlenecks and turning those disks into hotspots.
RAID 2.0+ supports a storage pool that consists of dozens of or even hundreds of disks.
LUNs are created based on a storage pool, thereby no longer subject to the limited number of
disks supported by a RAID group. The wide striping technology distributes data of a single
LUN onto many disks, preventing disks from becoming hotspots and enabling the
performance and capacity of a single LUN to improve significantly. If the capacity of an
existing storage system does not meet the needs, a user can dynamically expand the capacity
of a storage pool and that of a LUN by simply adding disks to disk domains. This approach
improves disk capacity utilization.
3.2.3 Dynamic Space Distribution, Flexibly Adapting to Service Changes
RAID 2.0+ is implemented based on industry-leading block virtualization. Data and service
loads in a volume are automatically and evenly distributed onto all physical disks in a storage
pool. By leveraging Smart series efficiency improvement software, OceanStor enterprise
unified storage systems automatically schedule resources in a smart way based on factors such
as the amount of hot and cold data and the performance and capacity required by a service. In
this way, OceanStor enterprise unified storage systems adapt to rapid changes in enterprise
services.
4 Configuration
This chapter describes RAID 2.0+ configuration and configuration management.
4.1 Configuring RAID 2.0+
You can configure RAID 2.0+ in the resource allocation flowchart of OceanStor
DeviceManager.
4.1.1 Configuring Disk Domains and Storage Pools
A disk domain in OceanStor enterprise unified storage systems consists of disks from one or
multiple storage tiers. A storage pool provides available storage space for users. The following
describes how to configure disk domains and storage pools in the resource allocation
flowchart.
Click Create Disk Domain. In the dialog box that is displayed, enter the basic information of
the new disk domain, and select the number and type of available disks for the disk domain.
Then click OK.
Click Create Storage Pool. In the dialog box that is displayed, enter the basic information of
the new storage pool. Select the disk domain to which the storage pool belongs, a disk type,
and a RAID policy for the storage pool, and then enter the capacity of the storage pool. In the
Advanced Property Settings dialog box that is displayed, set Capacity Alarm Threshold and
Data Migration Granularity. Then click OK.
4.1.2 Configuring LUNs and LUN Groups
You can configure LUNs and LUN groups only after disk domains and storage pools have been configured. LUNs are the units in which storage space is provided to users. A LUN group consists of multiple LUNs, and a host can detect a LUN only after the LUN is added to a LUN group.
Click Create LUN. In the dialog box that is displayed, enter a name and description for the LUN. You can set the LUN capacity, the number of LUNs to be created, and the storage pool to which the LUN belongs. In the Advanced dialog box, you can set the owning controller for the LUN and related policies (the default settings are suitable if you have no special requirements). Then click OK.
Click Create LUN Group. In the dialog box that is displayed, enter a name and description for the LUN group and add some or all of the created LUNs to it. Then click OK.
4.1.3 Configuring Hosts and Host Groups
Before a host can read and write a LUN, you must configure the host and set up a communication channel between the host and the storage system.
Click Create Host. In the dialog box that is displayed, enter the host name, operating system, and description, and ensure that a communication link exists between the host and the storage system. You can leave IP Address blank and enter the device location. Then click Next.
In the Configure the Initiator dialog box, add an initiator for the host. Then click Next until the wizard finishes.
Click Create Host Group. In the dialog box that is displayed, add the created host to the host group. Then click OK.
4.1.4 Configuring Mapping Views
After the preceding configurations are complete, the LUNs must be presented to the host through a mapping view.
Click Create Mapping View. In the dialog box that is displayed, enter a name and description for the mapping view, and select the LUN group and host group to add to it. Then click OK.
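The chain of objects configured above can be summarized in a few lines of code. The following Python sketch is an illustrative model with assumed names (not DeviceManager code) showing how a mapping view determines which LUNs a host detects.

```python
# Illustrative model of the LUN -> LUN group -> mapping view -> host group
# -> host chain (dictionary keys and names are assumptions, not DeviceManager code).

def visible_luns(host, mapping_views):
    """Return the set of LUNs the given host can detect."""
    luns = set()
    for view in mapping_views:
        if host in view["host_group"]:
            luns |= view["lun_group"]   # every LUN in the mapped LUN group
    return luns

views = [{"host_group": {"hostA", "hostB"}, "lun_group": {"LUN0", "LUN1"}}]
print(sorted(visible_luns("hostA", views)))  # ['LUN0', 'LUN1']
print(sorted(visible_luns("hostC", views)))  # []
```

A host outside every host group in every mapping view detects no LUNs, which is why the mapping view is the final, mandatory configuration step.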
A FAQs
Q1. Must all disks of an OceanStor enterprise unified storage system reside in the same
storage pool?
Answer: No. An OceanStor enterprise unified storage system manages disks based on disk
domains and storage pools. A user can create one or more disk domains that are isolated from
one another in terms of resources, performance, and faults. One or more storage pools can be
created in a disk domain. Each storage pool uses various types of disks to provide storage
space.
Q2. How does an OceanStor enterprise unified storage system treat data in the event of
capacity expansion and disk failure?
Answer: After a disk is added to a storage pool, the OceanStor enterprise unified storage
system automatically moves a certain amount of data to the newly added disk space based on
disk usage for capacity balancing so that all disks in the same storage pool have similar space
utilization.
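The balancing behavior described above can be sketched as a greedy loop. The following Python fragment is only an illustrative approximation; the 1% tolerance, 1 GB migration unit, and all names are assumptions, not Huawei's actual algorithm.

```python
# Illustrative sketch of capacity balancing after adding a disk (the 1%
# tolerance, 1 GB migration unit, and all names are assumptions, not
# Huawei's actual algorithm).

def balance(used, capacity, tolerance=0.01):
    """Greedily move 1 GB extents until disk utilization is nearly even."""
    def util(disk):
        return used[disk] / capacity[disk]
    while True:
        fullest = max(used, key=util)
        emptiest = min(used, key=util)
        if util(fullest) - util(emptiest) <= tolerance:
            return used
        used[fullest] -= 1   # migrate one extent off the fullest disk
        used[emptiest] += 1  # ...onto the emptiest (typically the new) disk

used = {"d0": 400, "d1": 380, "new": 0}        # GB in use per disk
capacity = {"d0": 600, "d1": 600, "new": 600}  # GB total per disk
balance(used, capacity)
print(used)   # all three disks end up near 43% utilization
```

The total amount of stored data never changes; only its placement does, which is what keeps space utilization similar across all disks in the pool.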
If a disk fails, CKGs related to the failed disk automatically perform data reconstruction, and
the reconstructed data is evenly written to the hot spare space of other functional disks. Users
do not need to specify hot spare space. The OceanStor enterprise unified storage system
automatically selects hot spare space based on disk usage. For details about the reconstruction
process, see section 3.1.2 "Fast Thin Reconstruction, Reducing Dual-Disk Failure
Probability."
Q3. Compared with traditional RAID, in what aspects does RAID 2.0+ demonstrate its high
reliability?
Answer: RAID 2.0+ demonstrates its high reliability in the following aspects:
Load balancing: RAID 2.0+ enables disks to work in a balanced manner, preventing
some disks from being overstressed, which may occur in traditional RAID. For details,
see section 3.1.1 "Automatic Load Balancing, Decreasing the Overall Failure Rate."
Robust reconstruction: RAID 2.0+ enables more disks to share reconstruction loads, reducing the workload on each disk and thereby minimizing the risk of a disk failing during reconstruction.
Rapid reconstruction: RAID 2.0+ significantly shortens the reconstruction window to
help an OceanStor enterprise unified storage system regain the error tolerance capability
as soon as possible, thereby improving system reliability. For details, see section 3.1.2
"Fast Thin Reconstruction, Reducing Dual-Disk Failure Probability."
Thin reconstruction: Based on metadata, RAID 2.0+ detects allocated space in use. In the
event of reconstruction, RAID 2.0+ reconstructs only the used space to reduce the
workload and shorten the reconstruction time, lowering the reconstruction failure risk.
Self-healing: RAID 2.0+ uses distributed hot spare space. If an OceanStor enterprise
unified storage system detects a fault, reconstruction automatically starts as long as there
are free CKs in disks, thereby improving reliability and cutting management costs. For
details, see section 3.1.3 "Fault Detection and Self-Healing, Ensuring System
Reliability."
Minimized data loss amount: If a traditional RAID group fails, all data in the RAID group is affected. With RAID 2.0+, if multiple disks fail, only the data related to those failed disks is affected, and other data remains accessible. Therefore, the amount of lost data is much smaller.
Based on the Markov model, the following table lists the data loss risks of traditional RAID
and RAID 2.0+, with data loss probability and data loss amount taken into consideration.
System Configuration | RAID 2.0+ Configuration | Traditional RAID Configuration | Data Loss Risk (Traditional RAID/RAID 2.0+)
40 x 600 GB SAS disks (disk failure rate: 1%) | RAID 5 (4+1 disks), 40 disks per DG | Eight RAID 5 (4+1 disks) groups | 16.09
40 x 2 TB SATA disks (disk failure rate: 2%) | RAID 6 (8+2 disks), 40 disks per DG | Four RAID 6 (8+2 disks) groups | 69.29
40 x 600 GB SAS disks (disk failure rate: 1%) | RAID 10 (10 disks), 40 disks per DG | Four RAID 10 (10 disks) groups | 39.15
Q4. Is disk utilization of OceanStor enterprise unified storage systems low?
Answer: OceanStor enterprise unified storage systems employ a two-layer virtualization software architecture based on RAID 2.0+. To manage data flexibly and efficiently, a certain amount of capacity is reserved. This reservation leads some to conclude that disk utilization is low, but that view overlooks overall storage efficiency. With this innovative architecture, Huawei storage delivers the highest storage efficiency in the industry: block-based resource management ensures global load balancing and improves the reconstruction speed by 20 times, and the upper-layer virtualization leverages Smart series software to allocate system resources intelligently, notably improving resource utilization.
If SAS disks are used to create RAID 5 (8D+1P) groups, the capacity utilization is about 83.42%, roughly 5 percentage points lower than the capacity utilization of traditional RAID.
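As a rough cross-check of these figures (the exact size of the reserved capacity is inferred from the quoted percentages, an assumption): RAID 5 (8D+1P) parity alone yields 8/9 ≈ 88.89% utilization, and the gap down to 83.42% corresponds to the reservation.

```python
# Rough cross-check of the quoted utilization figures (the exact size of
# the reserved capacity is inferred from the percentages, an assumption).

parity_util = 8 / 9                       # RAID 5 (8D+1P): 8 data CKs of 9
print(round(parity_util * 100, 2))        # 88.89 -> traditional RAID utilization
reservation = parity_util * 100 - 83.42   # gap to the quoted RAID 2.0+ figure
print(round(reservation, 2))              # 5.47 -> the "about 5%" difference
```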
Q5. Will data be lost if two disks concurrently fail in an OceanStor enterprise unified storage
system?
Answer: The essence of this question is about the error tolerance capability of RAID. RAID is
the basis of data storage protection. RAID 5 tolerates a concurrent failure of only one disk (for
traditional RAID) or CK (for RAID 2.0+). RAID 6 tolerates a concurrent failure of two disks
or CKs. Therefore, if you employ a double-parity RAID level (such as RAID 6) regardless of
traditional RAID or RAID 2.0+, a concurrent failure of two disks will not cause data loss.
If RAID 5 is employed, a concurrent failure of two disks will cause data loss in traditional
RAID scenarios. For an OceanStor enterprise unified storage system that employs RAID 2.0+,
however, as long as each CKG does not contain two failed CKs, data will not be lost.
The following figure shows a storage pool that consists of 20 disks, where storage space is
provided for hosts in the form of LUNs, and the RAID policy is RAID 5 (4D+1P).
If HDDs 7 and 9 concurrently fail, only CKs related to the two disks are affected. In the
preceding figure, CKs 71 to 76 and CKs 101 to 106 are affected, whereas CKs 77 to 79 and
CKs 107 to 109 are not affected because no data is stored in these idle CKs. Accordingly,
CKGs 0, 1, 2, 4, 8, 11, 12, 13, 17, 19, 21, and 23 (the red CKGs with an underscore) are
affected. CKGs adopt the RAID 5 (4D+1P) policy, and each affected CKG contains only one
failed CK. Therefore, data provided by each of these CKGs is still available. In terms of hosts,
the corresponding LUNs are still accessible and services are not interrupted.
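The survival condition in this example can be expressed directly in code. The following Python sketch is illustrative only; the CKG-to-disk layout shown is hypothetical, not taken from the figure.

```python
# Illustrative data-loss check for concurrent disk failures under RAID 2.0+
# (the CKG layout below is hypothetical, not taken from the figure).

def data_lost(ckgs, failed_disks, tolerance=1):
    """RAID 5 tolerates one failed CK per CKG; pass tolerance=2 for RAID 6."""
    return any(
        len(member_disks & failed_disks) > tolerance
        for member_disks in ckgs.values()
    )

# Each RAID 5 (4D+1P) CKG stores its five CKs on five different disks.
ckgs = {
    "CKG0": {1, 7, 12, 15, 18},
    "CKG1": {2, 5, 9, 14, 19},
    "CKG2": {3, 6, 10, 13, 16},
}
print(data_lost(ckgs, {7, 9}))    # False: no CKG holds CKs from both failed disks
ckgs["CKG3"] = {4, 7, 9, 11, 17}  # a CKG that spans both failed disks
print(data_lost(ckgs, {7, 9}))    # True: CKG3 has lost two of its five CKs
```

As in the figure, data survives as long as no single CKG loses more CKs than its RAID policy tolerates, regardless of how many disks are involved overall.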
B Related Resources
1. RAID 2.0+ Technology Video, helping you easily learn about the advantages and principles of RAID 2.0+
http://3ms.huawei.com/mm/video/videoMaintain.do?method=showVideoDetail&f_id=1421965
2. Logical Objects and Key Concepts Related to RAID 2.0+
http://3ms.huawei.com/mm/docMaintain/mmMaintain.do?method=showMMDetail&f_id=STR13072905120079
http://3ms.huawei.com/mm/docMaintain/mmMaintain.do?method=showMMDetail&f_id=STR13072425080025
C Acronyms and Abbreviations
Table 4-1 Acronyms and abbreviations
Acronym and Abbreviation | Full Spelling
RAID | redundant array of independent disks
RPM | revolutions per minute
LUN | logical unit number
XVE | eXtreme Virtual Engine
CK | chunk
CKG | chunk group
DG | disk group
LD | logical drive
SNIA | Storage Networking Industry Association
AFR | annual failure rate
DHA | Disk Health Analyzer