FAST VP Deep Dive
Transcript of FAST VP Deep Dive
Slide 1
EMC CONFIDENTIAL—INTERNAL USE ONLY.
FAST VP Deep Dive
Version 1.0, Jan 2014
Kevin Wang
An introduction to the latest cutting-edge technology in tiered storage
Advanced FAST VP Feature Introduction
Slide 2
Contents
Component – Slide #
• Documentation – 3
• FAST Specific Errors – 4
• FAST VP Concepts Review – 5
• VP Compression and Time to Compress – 11
• FAST VP with FTS – 27
• FAST VP Allocation by Policy – 37
• FAST VP SRDF Coordination – 42
• Case Study – 47
• Q & A – 59
Slide 3
Documentation
• Detailed documents and whitepapers on FAST VP can be found on support.emc.com; the following slides reference several of them, including:
– FAST VP for EMC Symmetrix VMAX Theory and Best Practices for Planning and Performance
– Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays
– EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide (latest version)
– Best Practices for Fast, Simple, Capacity Allocation with EMC Symmetrix Virtual Provisioning
• For other training material, refer to the FAST VP Solution Support Session: FAST VP Step by Step
Slide 4
FAST Specific Errors
• Ucode:
– General VP errors: 7F10, 7F3F, 7F43
– Errors sent by the engine: 24AF, 20AF, 04DA
• Engine:
– The engine can go into degraded mode if it cannot perform some function.
– When the engine goes into degraded mode, the GUI on the SP will show that it is in this state.
– symfast –sid xxx list –state will also show this state.
Slide 5
FAST VP Concepts Review
Slide 6
Two Variations on FAST
• FAST (also referred to as FAST DP) supports disk group provisioning for Symmetrix VMAX:
– Full-LUN movement of disk group provisioned thick devices
– Supports FBA and CKD devices
– Introduced in Enginuity 5874
– Not applicable to VMAX 10K arrays
• FAST VP supports virtual provisioning for Symmetrix VMAX:
– Sub-LUN movement of thin devices
– Introduced in Enginuity 5875 with support for FBA devices
– Enginuity 5876 added support for CKD devices
Slide 7
When to Use FAST and FAST VP
• Workloads with a higher skew will benefit more from FAST or FAST VP:
– Workloads with skew above 80/20 are considered good candidates.
– Unbalanced workloads direct a higher percentage of I/O to a small percentage of the storage allocated.
– Heavily utilized devices are moved to faster technologies to reduce response time.
– Underutilized devices are moved to less expensive technologies to reduce cost.
• Workloads with a lower skew may not benefit:
– Workloads with a skew closer to 50/50 (a uniform workload) are less likely to contain candidates for promotion/demotion.
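The 80/20 candidacy test above can be sketched as a small calculation. This is an illustrative sketch (not EMC code, and the per-device counts are hypothetical): it measures what fraction of total I/O lands on the busiest fifth of the devices.

```python
# Illustrative sketch (not EMC code): estimate workload skew from
# per-device I/O counts to judge FAST / FAST VP candidacy.

def skew(io_counts, capacity_fraction=0.2):
    """Return the fraction of total I/O served by the busiest
    `capacity_fraction` of devices (assumes equal-sized devices)."""
    total = sum(io_counts)
    busiest = sorted(io_counts, reverse=True)
    top_n = max(1, round(len(busiest) * capacity_fraction))
    return sum(busiest[:top_n]) / total

# A workload where 20% of the devices drive >= 80% of the I/O
# is a good candidate; a near-uniform workload is not.
counts = [900, 50, 20, 15, 10, 3, 1, 1]  # hypothetical per-device I/O
print(skew(counts) >= 0.8)               # True: heavy skew (95% on top 20%)
print(skew([1] * 10) >= 0.8)             # False: uniform 50/50-style workload
```

A real assessment would use extent-level statistics rather than whole devices, but the candidacy rule is the same comparison.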
Slide 8
Sample Performance Data (94% >= 80/20)
• Heavy skew – 95% of I/O on 5% of data (~12% of workloads); capacity mix: EFD 3% / FC 0% / SATA 97%; 30% more performance, 80% less footprint, 20% lower costs
• Moderate skew – 90% of I/O on 10% of data (~45% of workloads); capacity mix: EFD 3% / FC 15% / SATA 82%; 40% more performance, 60% less footprint, 15% lower costs
• Low skew – 80% of I/O on 20% of data (~37% of workloads); capacity mix: EFD 3% / FC 27% / SATA 70%; 20% more performance, 50% less footprint, same costs
Slide 9
FAST VP for Symmetrix
• Without FAST VP, a thin device is bound to a pool containing disks of the same technology, RAID protection, and rotational speed.
• With FAST VP, busier thin device extents are moved to pool(s) in a faster storage tier, though the thin device stays bound to its original pool.
[Diagram: untiered VP storage with busy and less busy thin device extents residing in the same pool, versus tiered virtually provisioned storage with the busier extents on faster tiers – thin pools on Tier 0 (EFD), Tier 1 (FC), and Tier 2 (SATA)]
Slide 10
Elements of FAST
• Symmetrix Tier – a shared storage resource with common
technologies
• FAST Policy – manages data placement and movement across
Storage Types to achieve service levels for one or more Storage
Groups
• Storage Group – logical grouping of standard devices for common
management
[Diagram: thin devices in storage groups ThinProd1_SG, ThinProd2_SG, and ThinDev_SG, associated via FAST VP policies – Production (25% / 50% / 25%) and Development (25% / 100%) – with FAST VP tiers EFD R5 Thin Tier (R53_EFD_Pool), FC R6 Thin Tier (R66_FC_Pool), and SATA R6 Tier (R614_SATA_Pool)]
Slide 11
VP Compression and Time to Compress
Slide 12
VP Compression
• Saves space within a thin pool
• Works with all TDEVs:
– Fixed Block Architecture (FBA), including D910 on IBM i
– Count Key Data (CKD)
• Supported with local and remote replication products:
– TimeFinder
– Symmetrix Remote Data Facility (SRDF)
• Supported with internal data movement products:
– Virtual LUN VP mobility (VLUN)
– FAST for Virtual Pools (FAST VP)
Slide 13
VP Compression Details
• Requires Enginuity 5876 code (Seine) and SE 7.5+
• Pools are enabled for VP compression at creation, or by setting the attribute on an existing pool
• Once enabled, a background task reserves capacity in the pool to temporarily uncompress data:
– This capacity is called the Decompress Read Queue (DRQ)
– Capacity ranges between 76 and 3000 MB, depending on pool size
• Compression can be initiated:
– Manually, using SYMCLI or Unisphere
– Automatically, by FAST VP, which compresses infrequently used data
Slide 14
Considerations When Using VP Compression
• Limit of 10 terabytes of compressed data per VMAX engine
• Compression can be disabled when no longer needed
• Disabling compression does not uncompress data:
– Data must be uncompressed before disabling compression
– Space reserved for the DRQ is returned to the pool
• Allocated but unwritten space will be reclaimed
• Persistent allocations cannot be compressed
• FTS-encapsulated devices cannot be compressed
Slide 15
Data Access
Read:
• Uncompresses the track into a reserved area in the pool
• Space in the reserved area is controlled by a Least Recently Used (LRU) algorithm
• LRU ensures that space is always available to uncompress a track
• Recompression is not required
Write:
• Written in uncompressed form to the thin device
• If under FAST control, data will be compressed based on time of last access
• Can be manually compressed
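The LRU behavior of the reserved read area can be sketched as follows. This is an assumption-laden illustration of the idea, not Enginuity internals: a fixed number of DRQ slots are recycled in least-recently-used order, so a read of a compressed track can always find a slot.

```python
# Illustrative sketch (not Enginuity internals): the DRQ as a fixed-size
# set of slots recycled by a Least Recently Used policy, so space is
# always available to uncompress a track for a read.
from collections import OrderedDict

class DecompressReadQueue:
    def __init__(self, n_slots):
        self.n_slots = n_slots
        self.slots = OrderedDict()           # extent_id -> uncompressed data

    def read(self, extent_id, uncompress):
        if extent_id in self.slots:          # already staged: refresh LRU order
            self.slots.move_to_end(extent_id)
        else:
            if len(self.slots) >= self.n_slots:
                self.slots.popitem(last=False)   # evict least recently used
            self.slots[extent_id] = uncompress(extent_id)
        return self.slots[extent_id]

drq = DecompressReadQueue(n_slots=2)
drq.read("C", lambda e: f"data-{e}")
drq.read("E", lambda e: f"data-{e}")
drq.read("A", lambda e: f"data-{e}")   # evicts "C", the least recently used
print("C" in drq.slots)                # False
```

Note that eviction here discards only the staged uncompressed copy; the compressed original stays in the pool, which is why recompression is never required on the read path.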
Slide 16
Migration
• Source compression enabled, target compression enabled:
– Compressed tracks are migrated to the target as compressed tracks.
– Target pool: utilized space increases by the compressed size; free space decreases by the compressed size.
– Source pool: utilized space decreases by the compressed size; free space increases by the compressed size.
• Source compression enabled, target compression disabled:
– A compressed device cannot be migrated to a pool with compression disabled.
– The compressed device must be uncompressed before it can be migrated.
• Source compression disabled, target compression enabled:
– Uncompressed tracks are migrated to the target pool as uncompressed tracks.
– Target pool: utilized space increases by the uncompressed size; free space decreases by the uncompressed size.
– Source pool: utilized space decreases by the uncompressed size; free space increases by the uncompressed size.
• Source compression disabled, target compression disabled:
– Uncompressed tracks are migrated to the target pool.
– Target pool: utilized space increases by the uncompressed size; free space decreases by the uncompressed size.
– Source pool: utilized space decreases by the uncompressed size; free space increases by the uncompressed size.
Slide 17
Enabling/Disabling VP Compression – SYMCLI
• To create a new pool with compression enabled:
symconfigure –sid 78 –cmd "create pool 101_SATAR6, type = thin, vp_compression = Enable;" commit
• To enable compression on an existing pool:
symconfigure –sid 78 –cmd "set pool 101_SATAR6, type = thin, vp_compression = Enable;" commit
• To disable compression on an existing pool:
symconfigure –sid 78 –cmd "set pool 101_SATAR6, type = thin, vp_compression = Disable;" commit
Slide 18
Manual Compression – SYMCLI
• SYMCLI syntax:
symdev –sid 265 –file archive.txt compress
symdev –sid 265 –devs 025:02A compress –stop
symsg –sid 265 –sg ESXsg compress
symdg –sid 265 –g ESXdg uncompress
symcg –sid 265 –cg VMcg uncompress –stop
• Stopping the compress action does not uncompress data that has already been compressed:
– Manual intervention is required
Slide 19
Enabling Compression
[Diagram: a TDEV with extents A–E; when the command is issued to enable compression for the thin pool, extents are allocated for the Decompress Read Queue (DRQ), which initially holds no data]
Slide 20
Compression Flow
1. The user issues a command to compress the TDEV.
2. A compressed extent is allocated (with a directory header).
3. Extent A is evaluated; its compressed data is stored, pointers are updated, and the uncompressed extent A is reclaimed.
4. Extent B is evaluated; it contains all-zero data, so it is zero-reclaimed.
5. Extent C is evaluated; its compressed data is stored, pointers are updated, and the uncompressed extent C is reclaimed.
6. Extent D is evaluated and skipped (less than 50% compressible).
7. Extent E is evaluated; its compressed data is stored, pointers are updated, and the uncompressed extent E is reclaimed.
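The per-extent decisions in this flow (zero-reclaim, skip if under 50% compressible, otherwise compress) can be sketched with a stand-in compressor. This is a hypothetical helper, not EMC code, and zlib merely stands in for whatever algorithm Enginuity actually uses:

```python
# Illustrative sketch of the per-extent decisions in the compression flow
# (hypothetical helper, not EMC code): all-zero extents are reclaimed,
# extents under 50% compressible are skipped, the rest are compressed.
import zlib

def classify_extent(data: bytes) -> str:
    if not any(data):
        return "zero-reclaim"                  # all-zero: free the extent
    compressed = zlib.compress(data)
    if len(compressed) > len(data) // 2:       # < 50% compressible: skip
        return "skip"
    return "compress"

print(classify_extent(bytes(4096)))            # zero-reclaim
print(classify_extent(b"ab" * 2048))           # compress (highly repetitive)
print(classify_extent(bytes(range(256))))      # skip (no redundancy to exploit)
```

The 50% cutoff mirrors the "less than 50% compressible" rule on extent D above: compressing data that barely shrinks would cost CPU and pointer churn for little pool-space gain.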
Slide 21
Read Flow
1. The host requests data from extent D. Extent D is uncompressed, so the data is returned as usual.
2. The host requests data from extent C. Extent C is compressed, so its data is uncompressed into an unused – or the least recently used – extent in the DRQ.
3. Extent C's uncompressed data is returned to the host from the DRQ.
Note: extent C's data remains compressed, in the event the extent allocated from the DRQ is required to service another read from a compressed extent.
Slide 22
Write Flow
1. The host writes to extent D. Extent D is uncompressed, so the write flow is handled normally.
2. The host writes to extent A. Extent A is compressed, so a new extent must be allocated to decompress the data.
3. After extent A is decompressed, pointers are updated to reflect the data's new location.
4. The write to extent A continues as normal.
NOTE: Extent A will not be automatically recompressed.
Slide 23
Time to Compress
• VP Compression was introduced in 5876 code (Seine).
• FAST VP's implementation of the feature automates VP Compression at the sub-LUN level for thin devices that are under FAST control.
• The Time to Compress control parameter is what enables/disables the feature.
• The feature is disabled by default; the parameter defaults to "Never" (= disabled).
• To enable the feature, Time to Compress is set to a time value. Any FAST extents that are idle for longer than this value are candidates for automatic compression.
• Even if extents qualify for compression, the data will only be compressed if the pool is enabled for VP Compression.
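The candidacy rule above can be sketched as a simple filter. This is an illustrative sketch with a hypothetical data model (per-extent last-access timestamps), not Enginuity code; it captures both gating conditions: the parameter must not be "Never", and the pool must have VP Compression enabled.

```python
# Illustrative sketch (not Enginuity code): select automatic-compression
# candidates by comparing each extent's idle time with Time to Compress.
import time

def compression_candidates(last_access, time_to_compress_days,
                           pool_compression_enabled, now=None):
    """last_access maps extent id -> last access time (epoch seconds).
    time_to_compress_days=None models the default "Never" (= disabled)."""
    if time_to_compress_days is None:
        return []
    if not pool_compression_enabled:      # pool must allow VP compression
        return []
    now = time.time() if now is None else now
    idle_limit = time_to_compress_days * 86400
    return [ext for ext, t in last_access.items() if now - t > idle_limit]

day = 86400
accesses = {"A": 100 * day, "B": 50 * day, "C": 99 * day}  # hypothetical
print(compression_candidates(accesses, 40, True, now=100 * day))  # ['B']
```

Only extent B has been idle longer than the 40-day threshold; A and C were touched too recently to qualify.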
Slide 24
Time to Compress
• For customers, Time to Compress can be set to a minimum of 40 days and a maximum of 400 days.
• For testing, Time to Compress can be set to much lower values.
• Every FAST performance move now decompresses the data before moving it (even if compression is not active on the system).
• For more detailed information on Time to Compress, see pages 45–46 of "Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays".
Slide 25
Time to Compress
• Enabling compression on a pool:
– When creating the pool, via symconfigure: "create pool xxx, type=thin, vp_compression=ENABLE"
– If the pool is already present, via symconfigure: "set pool xxx, type=thin, vp_compression=ENABLE"
• Setting the Time to Compress:
– symfast –sid xxx set –control_parms –time_to_compress <NumDays>
– Customers are not allowed to set it below 40 days.
– In-house, it can be set to a minimum of 1 day via the CLI, but the following must be put into the API options file: SYMAPI_MIN_TIME_TO_COMPRESS = 1
– This variable should be entered into the options file on both the host and the Symmetrix Service Processor.
Slide 26
Compression Rate
• The FAST VP Compression Rate determines the aggressiveness with which data is compressed.
• It can be configured between 1 and 10, with 5 as the default. The lower the value, the more aggressive the rate of compression.
• To set via SYMCLI:
– symfast –sid xxx set –control_parms –fast_compression_rate <value>
Slide 27
FAST VP with FTS
Slide 28
Federated Tiered Storage Overview
• FTS allows external storage arrays to be used as back-end disks for Symmetrix VMAX arrays.
• LUNs on external arrays can be used by the Symmetrix as:
– Raw storage space for the creation of Symmetrix devices
– Data sources that can be encapsulated, with the information made available to a host accessing the Symmetrix
• The Symmetrix presents the external storage as unprotected volumes:
– Data protection is provided by the external array
• FTS is a free Enginuity feature.
Slide 29
Components for FTS – 1
• DX Directors:
– DX stands for "DA eXternal"; a DX behaves just like a DA.
– Handles external LUs as though they are Symmetrix drives.
– Runs on fiber-optic SLICs, just like FA and RF emulations.
[Diagram: a DX director pair (ports 8E0, 8E1 and 7E0, 7E1) on a VMAX 40K engine]
Slide 30
Components for FTS – 2
• eDisks:
– Associated with an external SCSI logical unit
– Accessible through the SAN
– Belong to virtual, unprotected RAID groups
– Also referred to as "external spindles"
• External Disk Group:
– Virtual groups created to contain eDisks
– Group numbers start at 512
• Virtual RAID Group:
– Created for each eDisk
– Not locally protected in the Symmetrix
– Relies on protection provided by the external array
Slide 31
FTS Virtualization – 1
• Two modes of operation for external storage:
– External provisioning uses the storage as raw capacity; existing data is lost.
– Encapsulation preserves the data on the external storage:
• Standard Encapsulation
• Virtual Provisioning Encapsulation
• External provisioning:
– An external disk (spindle) is created and used as raw capacity for new Symmetrix devices.
– External disk groups have numbers starting at 512.
– External disks are displayed as unprotected drives; RAID protection is expected to be provided by the remote array.
– Virtual Provisioning data devices (TDATs) can be created on external disks.
Slide 32
FTS Virtualization – 2
• Standard Encapsulation:
– Creates an eDisk (spindle) for each external LUN and adds it to an external disk group.
– A Symmetrix device is also created at the same time.
– Access to user data on the device is permitted through the Symmetrix device.
• Virtual Provisioning Encapsulation:
– Creates an eDisk (spindle) for each external LUN and adds it to an external disk group.
– A data device and a fully allocated thin device are also created.
– This thin device can be used for data migration using VLUN migration for Virtual Provisioning.
Slide 33
FAST VP with FTS
• FAST VP (and VLUN) are fully supported with FTS.
• Prior to 5876.159.102 and SE 7.4, an FTS tier was considered the lowest storage tier regardless of its actual technology and performance (a FAST policy can have 4 tiers: EFD -> FC -> SATA -> external FTS tier).
• Starting with that release (and SE 7.5), an FTS tier can be any tier in a FAST VP policy – a.k.a. a user-defined FTS tier.
• When an external tier is created, a technology type (EFD, FC, or SATA) can be specified in addition to the external location. By specifying a technology type, a related performance expectation is associated with the tier. This then affects the tier's ranking among the other tiers when it is added to a FAST VP policy.
• For the 6 possible tier types that can be included in a FAST VP policy, the rankings, in descending order of performance, are as follows:
Internal EFD -> External EFD -> Internal FC -> External FC -> Internal SATA -> External SATA
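The six-way ranking above amounts to sorting first by technology, then internal before external within a technology. A minimal sketch, with a hypothetical tuple model rather than the SYMCLI object model:

```python
# Illustrative sketch (hypothetical data model, not SYMCLI) of the six-way
# tier ranking: technology first, internal before external within it.
TECH_RANK = {"EFD": 0, "FC": 1, "SATA": 2}

def rank_tiers(tiers):
    """tiers: list of (name, technology, is_external) tuples.
    Returns the tiers in descending order of expected performance."""
    return sorted(tiers, key=lambda t: (TECH_RANK[t[1]], t[2]))

policy = [
    ("Ext_FC1", "FC", True),
    ("Int_SATA", "SATA", False),
    ("Int_EFD", "EFD", False),
    ("Ext_EFD", "EFD", True),
]
print([name for name, _, _ in rank_tiers(policy)])
# ['Int_EFD', 'Ext_EFD', 'Ext_FC1', 'Int_SATA']
```

This also shows why redeclaring an external tier's technology moves it within the ranking: only the declared type, not the measured performance, feeds the sort key.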
Slide 34
FAST VP with FTS
• After an external tier has been created, its technology type can be modified. If the type is changed, the ranking of the tier changes within its policy; an external tier can thus be upgraded or downgraded within a policy. (Note: the technology type of a tier can only be modified for an external tier.)
• Enginuity executes an initial performance discovery of the tier when it is first added to a FAST VP policy. This is done to ensure the performance of the external tier is in line with the expectations of the declared technology (EFD = 3 ms / FC = 14 ms / SATA ~ 20 ms+). Subsequent lighter performance discoveries are done periodically to validate or incrementally adjust the previously discovered performance.
• If the performance of an external tier falls below expectations, an event is triggered alerting users to this (event ID 1511). Users can resolve the discrepancy either by addressing the cause of the degraded performance on the external tier, or by lowering the expectations of the tier.
• symaudit list -sid <Box#> -text will show a user whether a tier is underperforming (below, FTS tier Ext_FC1 is an external tier with FC drives but defined as EFD; FTS tier Ext_FC2 is defined as FC):
– 03/14/13 18:15 Fast Other SE29b FAST Tier (Ext_FC1) performing worse than expected (LOW) Actual Response Time: 28.09 ms Expected Response Time: 3 ms (or less)
– 03/14/13 18:25 Fast Other SE29d FAST Tier (Ext_FC2) performing worse than expected (LOW) Actual Response Time: 38.9 ms Expected Response Time: 14 ms (or less)
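The discovery check behind those audit entries can be sketched as a comparison against the expected response time for the declared technology. This is an illustration only (the alert text is modeled on the symaudit output above; the helper and thresholds-as-a-dict are assumptions, not Enginuity code):

```python
# Illustrative sketch (not Enginuity code): compare a tier's measured
# response time with the expectation for its declared technology
# (EFD ~3 ms, FC ~14 ms, SATA ~20 ms, per the slide above).
EXPECTED_MS = {"EFD": 3.0, "FC": 14.0, "SATA": 20.0}

def underperforming(declared_tech, actual_ms):
    """Return an alert string (cf. event ID 1511) if the tier is slower
    than its declared technology implies, else None."""
    expected = EXPECTED_MS[declared_tech]
    if actual_ms > expected:
        return (f"FAST Tier performing worse than expected (LOW) "
                f"Actual Response Time: {actual_ms} ms "
                f"Expected Response Time: {expected:g} ms (or less)")
    return None

# An external FC tier declared as EFD gets flagged, as in the audit log:
print(underperforming("EFD", 28.09) is not None)   # True
print(underperforming("FC", 8.5))                  # None: within expectation
```

As the slide notes, the fix is either to improve the external tier or to lower the declared technology so the expectation matches reality.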
Slide 35
FAST VP with FTS
• FTS devices are added to the bin as group number 512.
• A7 for an eDisk TDAT FTS device.
• We can create a virtually provisioned pool of TDATs using the devices. The mirror type for FTS devices is "NORMAL" (unprotected); only FTS devices can have an unprotected mirror/RAID type. Only externally provisioned data devices can be added to FAST VP tiers (encapsulated data devices cannot).
• To add the pool to a tier, we use:
– symtier -sid <Box #> create -name <Tier Name> -external -tgt_unprotected -technology FC -vp -pool <Pool Name>
– 8D,,,FAST,LIST,TIER (Tier 8 / Pool A = External (Y), set as FC and unprotected)
• CLI commands such as symtier list, symfast list –fp –v, and symcfg show –pool <Pool Name> -thin detail similar information.
Slide 36
FAST VP with FTS
• 98,FAST,SUMM (or 8D,,,FAST,LIST,POOL,RESP) shows that external Tier 8 / Pool A is set as FC (and is color-coded accordingly: white = EFD, blue = FC, purple = SATA; red plus white, blue, or red for FTS).
• To modify a tier's technology (and performance expectation):
– symtier -sid <Box #> modify -tier_name <Tier Name> -technology <EFD|FC|SATA>
– Here Tier 8 has been changed from FC, as above, to EFD (note: its actual technology on the external array is FC).
• 98,FAST,MOVE,READ,<SG Number> shows the movement policy and the associated tier rankings in the policy. Here the ranking is:
EFD -> External EFD -> FC -> SATA.
Our external tier is ranked second because we defined it as EFD (above the internal FC and SATA tiers), although it actually is FC on the remote FTS array.
Slide 37
FAST VP Allocation by Policy
Slide 38
Allocation by Policy
• Allows thin devices to be allocated from any of the pools under the FAST policy with which the thin device is associated.
• Introduced in 5876 (Seine).
• If preferred, all the thin devices under FAST control can be bound to one pool in the policy. When that pool fills up, allocations automatically spill over into the other pools in the policy. The criteria for choosing the pool to allocate from are:
– If performance metrics are available for the extent, allocate from a pool in the appropriate tier.
– If performance metrics are not available, allocate from the bound pool.
– If the bound pool is full, choose the tier that has the lowest capacity in the policy.
• Compliance is honored unless all other pools are full.
• A detailed description of the feature can be found on pages 41–43 of "Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays".
Slide 39
Allocation by Policy
• The feature is disabled by default.
• To enable via SYMCLI:
– symfast –sid xxx set –control_parms –vp_allocation_by_fp ENABLE
Slide 40
Allocation Flow
1. An allocation request is received.
2. If Allocation by Policy is not enabled, allocation is attempted from the bound pool; if that fails, the request fails.
3. If the extent has a valid tier and is in compliance, allocation is attempted from all pools in the extent's assigned tier.
4. Otherwise – or if all pools in the assigned tier failed – tiers in the policy are selected from smallest to largest, and allocation is attempted from all pools in each tier.
5. If all pools in all tiers fail, the request fails.
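The allocation flow above can be sketched in code. This is an illustrative sketch with a hypothetical pool model (a `has_free` flag stands in for an actual allocation attempt), not Enginuity's implementation:

```python
# Illustrative sketch (hypothetical pool model, not Enginuity code) of the
# Allocation by Policy flow: bound pool when disabled; assigned tier when
# in compliance; otherwise policy tiers from smallest capacity to largest.

def allocate(extent, bound_pool, policy_tiers, abp_enabled):
    """policy_tiers: list of (tier_capacity, [pools]); each pool is a dict
    like {'name': ..., 'has_free': ...} standing in for a real alloc try."""
    def try_pool(pool):
        return pool["name"] if pool["has_free"] else None

    if not abp_enabled:                          # legacy path: bound pool only
        return try_pool(bound_pool)

    if extent.get("tier") is not None and extent.get("in_compliance"):
        for pool in extent["tier"]:              # pools of the assigned tier
            got = try_pool(pool)
            if got:
                return got

    # Fall back: walk policy tiers from smallest capacity to largest.
    for _, pools in sorted(policy_tiers, key=lambda t: t[0]):
        for pool in pools:
            got = try_pool(pool)
            if got:
                return got
    return None                                  # all pools in all tiers failed

efd = [{"name": "EFD_Pool", "has_free": False}]
fc = [{"name": "FC_Pool", "has_free": True}]
sata = [{"name": "SATA_Pool", "has_free": True}]
tiers = [(10, efd), (40, fc), (200, sata)]
ext = {"tier": None, "in_compliance": False}     # no metrics for this extent
print(allocate(ext, fc[0], tiers, abp_enabled=True))   # 'FC_Pool'
```

With no metrics for the extent, the walk starts at the smallest tier (EFD, full here) and spills over to FC, matching the smallest-to-largest rule on the flow chart.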
Slide 41
FAST VP Allocation by Policy – Misc.
• FAST VP-controlled allocations will not obey PRC (Pool Reserved Capacity):
– Allocations will violate PRC, since PRC exists only to protect against FAST movements and is not designed to block host allocations.
– It is therefore possible to exhaust some of the higher-performing tiers (such as EFD) if heavy new allocations occur on a system that has a 100% capacity-to-policy match.
Slide 42
FAST VP SRDF Coordination
Slide 43
FAST VP SRDF Coordination
• By default, FAST VP will act completely independently on each side
of an SRDF link. Typically, the R1 and R2 devices in an SRDF
pairing will undergo very different workloads - read/write mix for the
R1 and writes only for the R2. As a result, decisions regarding data
placement on each side of the link could also, potentially, differ.
• Enginuity 5876 introduces SRDF awareness for FAST VP, allowing
performance metrics to be periodically transmitted from R1 to R2,
across the SRDF link. These R1 metrics are merged with the R2
metrics, allowing FAST VP promotion/demotion decisions for R2 data
to account for R1 workload.
• SRDF coordination can be enabled or disabled per storage group,
with the default being disabled.
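The metric merge described above can be sketched as follows. This is an illustration with an assumed statistics shape (per-extent read/write counters) and an assumed merge rule, not the actual SRDF wire format: R1 reads are added to what the R2 observes, while writes are not double-counted since the R2 already sees the replicated writes.

```python
# Illustrative sketch (assumed data shapes, not the SRDF implementation):
# merge R1 performance metrics into the R2 statistics so that R2
# promotion/demotion decisions account for the R1 workload.

def merge_metrics(r1, r2):
    """r1/r2: per-extent dicts like {'reads': n, 'writes': n}. The R2 side
    normally sees only SRDF writes; merging adds the R1 read workload."""
    merged = {}
    for extent in set(r1) | set(r2):
        a, b = r1.get(extent, {}), r2.get(extent, {})
        merged[extent] = {
            "reads": a.get("reads", 0) + b.get("reads", 0),
            # Writes are replicated, so take the max rather than the sum
            # to avoid counting the same write on both sides (assumption).
            "writes": max(a.get("writes", 0), b.get("writes", 0)),
        }
    return merged

r1 = {"ext0": {"reads": 500, "writes": 40}}
r2 = {"ext0": {"reads": 0, "writes": 40}}   # R2 sees only replicated writes
print(merge_metrics(r1, r2)["ext0"]["reads"])  # 500
```

Without the merge, the R2 extent looks write-only and cold; with it, the R1 read traffic makes the same extent a promotion candidate on both sides.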
Slide 44
FAST VP SRDF Coordination
• Example: RDF device 1B40.
• 98,FAST,STAT,MOVE,1B40,1,SHOW (showing movement policy scores) on the R1 side.
• 98,FAST,STAT,MOVE,1B40,1,SHOW on the R2 side (noting that the scores here differ from the R1).
• 98,FAST,STAT,PROF,1B40,1,SHOW (displays I/O profiles and counts for the device), here for the R1.
Slide 45
FAST VP SRDF Coordination
• 98,FAST,STAT,PROF,1B40,1,SHOW (the R2 device has I/O profiles and counts that differ from the R1 above).
• To enable RDF coordination (issue on the R1 SG):
symfast -sid <Box#> modify -sg <SG Name> -rdf_coordination ENABLE -fp_name <Policy Name>
• The CLI command symfast -sid <Box#> show -association -sg <SG Name> will show whether RDF coordination is currently enabled or disabled. This can also be verified at inlines with 8D,,,FAST,LIST,ASSN (a flag of "R" indicates RDF coordination).
Slide 46
FAST VP SRDF Coordination
• Later, the I/O profiles and counts from the R2 side look like those of the R1 side (R2 screenshot below). Also note the addition of an RDF profile for each extent.
• Now the R2 movement policy scores look like those of the R1 side (R2 screenshot below).
• In this mode, the tier allocations of R1 devices and their respective R2 devices look very similar (as seen with symfast -sid <Box#> list -association -sg <SG Name> -demand): allocation of the data for the R2 devices will be much closer to that of the R1 data. Because the policies may be different on each array, FAST VP may not match the allocations completely.
Slide 47
Case Study
Slide 48
Case Study – Westpac Banking, SR 58924478
• Symmetrix VMAX 20K.
• The customer was forced to use FAST VP to move data from pool VP_GREEN to pool VP_BLUE because of a known bug with VLUN migration (KB 92545), and was told to use FAST VP as a workaround until an upgrade to code 5876.229 could be scheduled. FAST VP did not move the data as they wanted, so remote support was engaged.
• The PSE Lab was engaged and worked for more than a month to help the customer with this issue.
Slide 49
Case Study – Westpac Banking, SR 58924478
• 23 GB of data is out of compliance (OoC) and cannot be moved into the FT_BLUE tier (only one thin pool, VP_BLUE, is in this tier).
• The storage group used in this FAST VP configuration is I_tiermove_sg, and device 1D55 is the only member of that SG.
Slide 50
Case Study – Westpac Banking, SR 58924478
• Preliminary analysis (why FAST VP cannot move data in this scenario):
Slide 51
Case Study – Westpac Banking, SR 58924478
• Initially, FAST was implemented to move extents from one FC tier, VP_GREEN, to another FC tier, VP_BLUE. This was not suitable, as FAST does not cater for this type of movement between disks of the same type and technology.
• Preliminary conclusion and suggestion:
• The reason FAST VP cannot move the data is that FAST does not cater for this type of movement between disks of the same type and technology.
• EMC's suggestion was to create a new FAST VP policy in an effort to move data from the existing FC tier VP_GREEN to the EFD tier VP_RED and FC tier VP_BLUE, and then from those two tiers back to the destination FC tier VP_BLUE.
Slide 52
Case Study – Westpac Banking, SR 58924478
• The customer followed the suggestion, but a new problem was found:
• A single FAST policy was created per EMC's suggestion. FAST was enabled with just this single policy, with the tier percentages set to 100/100/0. The customer wanted all the LUN extents located in FC pool VP_GREEN, in storage group I_tiermove_sg, moved to the EFD pool VP_RED and then back down to VP_BLUE by changing the policy later. They did this because data cannot be moved between VP_GREEN and VP_BLUE via FAST VP, as both are FC disk technology.
• FAST VP did not move all the data to the target tier under the new policy, so we had to find out why.
Slide 53
Case Study – Westpac Banking, SR 58924478
As we can see, device 1D55 still has some extents located in pool VP_GREEN.
Slide 54
Case Study – Westpac Banking, SR 58924478
• Further analysis (why FAST VP stopped moving data again):
• The PSE dialed into the box and determined that not all extents had been moved because the response time of the EFD disks was, on average, not better than 50% of the response time they were getting from their FC tiers; hence some of the moves were blocked.
• FAST VP will not promote data to the next pool unless the response time to be gained is sufficient, according to these rules:
– EFD <= 50% of FC
– EFD <= 30% of SATA
– FC <= 50% of SATA
• Here you can see that pool 1, the EFD pool, is giving a 4.8 ms response time and FC pool 2 is giving an 8.1 ms response time: EFD > 50% of FC.
• As such, FAST VP will not move data into the EFD pool.
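The promotion rules quoted above can be sketched as a threshold check. This is a hypothetical helper for illustration, not Enginuity code; the numbers reproduce this case's arithmetic:

```python
# Illustrative sketch of the quoted promotion rules (hypothetical helper,
# not Enginuity code): promote only if the target tier's response time is
# at or below the stated fraction of the source tier's response time.
PROMOTION_RULES = {
    ("EFD", "FC"): 0.50,    # EFD must be <= 50% of FC
    ("EFD", "SATA"): 0.30,  # EFD must be <= 30% of SATA
    ("FC", "SATA"): 0.50,   # FC must be <= 50% of SATA
}

def promotion_allowed(target_tech, source_tech, target_rt_ms, source_rt_ms):
    limit = PROMOTION_RULES[(target_tech, source_tech)]
    return target_rt_ms <= limit * source_rt_ms

# Westpac case: EFD at 4.8 ms vs FC at 8.1 ms -> 4.8 > 0.5 * 8.1 (= 4.05),
# so the promotion is blocked.
print(promotion_allowed("EFD", "FC", 4.8, 8.1))   # False
```

In other words, with the FC pool already at 8.1 ms, the EFD pool would need to deliver 4.05 ms or better for the move to be worthwhile under the rule, and at 4.8 ms it did not.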
Slide 55
Case Study – Westpac Banking, SR 58924478
• Solution for the response-time checking issue:
• As a workaround, the PSE disabled the response-time check so that it could not block any remaining moves. The extents located in VP_GREEN then moved as expected to the EFD pool VP_RED, and then 100% back to the FC pool VP_BLUE, by adjusting the FAST policy to 0/100/0.
• Finally, all the data tracks of device 1D55 were moved to pool VP_BLUE.
• The FC/EFD response time is only one of the parameters that FAST uses to determine which extents get moved. Here the policy was basically being used to act like a VLUN migration, which is hardly a normal FAST workload. The response-time check would need to be blocking FAST movement in a real-world implementation (i.e., a typical workload) to warrant action; as such, engineering would not approve drive swaps purely on the basis that Gen2 has a longer response time than Gen3.
• The reason the customer could not perform a VLUN migration in this case:
– Device 1D55 started off in the VP_GREEN pool. Using VLUN migrate to move the data to VP_BLUE without any modification hits the bug listed in KB 92545.
– They tried to create a FAST tier, assign the TARGET pool to it, and retry the VLUN migrate, but it failed because device 1D55 was already bound to VP_BLUE, not VP_GREEN.
Slide 56
Case Study – Capital Group, SR 59368480
• Symmetrix VMAX 20K.
• The customer reported that after removing thin pool FC_T2_P1_49 from FAST VP and running symmigrate to move data out of this pool into FC_T2_P1, he still saw new allocations in pool FC_T2_P1_49 from TDEVs that are not bound to this pool.
• The PSE Lab and SSG were engaged for this issue.
Slide 57
Slide 58
Case Study – Capital Group, SR 59368480
• Root cause:
• This could be related to the fact that any new device allocation task for the pool is put in the task queue. If the pool is disassociated from the tier while device allocation tasks are still in the queue, FAST will complete those tasks regardless of whether the pool is still under FAST control. That is why there were new allocations to the pool after it was removed from the tier. However, there should be no more new allocations to the pool after the tasks in the queue have been completed.
Slide 59
Q&A
Slide 60
THANK YOU