
HPE 3PAR Online Import for HDS Storage
Best practices for data migration from Hitachi Data Systems arrays to HPE 3PAR StoreServ Storage

Technical white paper


Contents

Introduction
Assumptions
Concepts
  Example migration flow
  Data migration phases
  Zoning steps
  Life of an I/O
  Consistency groups
  Pathing evolution for online migration
Requirements
The HPE 3PAR OIU console
Features
  Volume support
  LUN ID conflict resolution
  Peer links
  Data migration at 16 Gbps
  Multipathing
  SCSI reservations
  Encryption
  Replication
  Scripting
Monitoring and reporting
  On the source HDS system
  On the destination HPE 3PAR StoreServ system
  On the OIU server
Best practices
  Installing the HPE 3PAR Online Import Utility
  Migration preparation
  Host, volumes, and migration
  Peer links and Peer volumes
  Managing Peer link throughput
  Post migration
  Miscellaneous
Troubleshooting
  General
  Clean out the OIU objects created
  Information collection when contacting support
Licensing
Delivery model
Typography, terminology, and abbreviations


Introduction

Migrating an organization’s data to a new storage technology can be a complex, time-consuming, and risky undertaking. Numerous technical, business, and human-related decisions have to be worked out during the planning phase, the actual migration, and the post-migration activities.

On the technical side, understanding and mapping the relationships between servers, storage arrays, virtualization appliances, routers and switches, access control policies, and supplementary security layers is a prerequisite for a successful migration. Changing array vendors complicates the migration further.

On the business side, migrations are now planned on an application-by-application basis rather than on an array basis. Application downtime is increasingly resisted, as customers expect 24x7x365 application availability. To satisfy this requirement, the migration has to happen online or with minimal disruption, meaning access to the data is provided while the actual data transfer is ongoing. Application owners make the exact migration starting time more a business decision than a technical one. Consequently, IT staff can be involved in individual application migrations for months on end when evacuating a larger storage system.

Another item high on the business requirements list is zero data loss in case of a migration failure, combined with rapid failback to the original configuration. Hardware appliances, software licenses, and the need for expensive professional services all increase the price per GB migrated, so the cost of migration to new storage arrays has to be factored in when building the business case for a storage refresh cycle.

From this, it is clear that a migration project has to be planned carefully, with all stakeholders identified. What is needed is an uncomplicated yet complete tool that migrates volumes from the source to the destination array while complying with the requirements listed earlier.

HPE 3PAR Online Import software overcomes these data migration challenges. HPE 3PAR Online Import is a simple and concise “do-it-yourself” tool for data migration orchestration, operated from the console of the HPE 3PAR Online Import Utility (OIU). The OIU tool migrates volumes in an online, minimally disruptive, or offline mode. The data under migration can be accessed while the transfer is ongoing. Volumes are selected individually or per application, host, Volume Set, Remote Copy Group, or cluster. Rollback takes minutes and incurs no data loss. The tool comes with a free, one-year license for a new HPE 3PAR StoreServ Storage system.

HPE 3PAR Online Import for Hitachi Data Systems (HDS) systems shares the framework of the OIU tools for EMC and IBM systems, which were successfully released to market in 2014 and 2015 and which many customers use for data migration from their legacy EMC arrays to HPE 3PAR StoreServ Storage. For the list of supported HDS systems and the corresponding support matrix, consult the HPE Single Point of Connectivity Knowledge1 (SPOCK).

This paper expands on topics discussed in the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide2 and covers additional material on features, best practices, monitoring, reporting, and troubleshooting.

Assumptions

It is expected that the reader has absorbed the material in the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide.

This paper has been prepared using HPE 3PAR OS 3.2.2 MU3 on the HPE 3PAR StoreServ and HPE 3PAR OIU version 1.5.0.

While this paper mainly uses the singular form of the word “host,” the content also applies to clusters of hosts unless explicitly stated otherwise.

Throughout the paper, it is assumed that the Online Import Utility is installed in the default installation directory at C:\Program Files (x86)\Hewlett-Packard\hp3paroiu. If the tool is installed in a different directory, substitute your directory path for the default one.

1. hpe.com/storage/spock
2. h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04783896


Concepts

The HPE 3PAR OIU for HDS Storage orchestrates the migration of block data from selected HDS Storage systems to HPE 3PAR StoreServ Storage. The data transfer is array-to-array; the tool does not use host-based mirroring or an external hardware appliance. Online Import is a “pull type” migration where the destination array autonomically requests the data from the source HDS array. The data transfers over the Peer links, the dedicated dual-path Fibre Channel (FC) interconnect between the source and the destination array; the same concept is used in HPE 3PAR Online Import for EMC and IBM systems.

The OIU application offers a command line interface with a small set of high-level commands by which the storage administrator selects the volumes to migrate and submits the task for the physical data transfer from the source to the destination storage system. OIU manages the required changes in pathing, masking, and unmasking of the migrating volumes on the source HDS array by sending Storage Management Initiative Specification3 (SMI-S) commands to the SMI-S provider integrated into the prerequisite Hitachi Command Suite (HCS) software. SMI-S is a standard developed by the Storage Network Industry Association (SNIA) intended to facilitate the management of storage devices from multiple vendors in storage area networks (SANs).

LUNs from one or multiple hosts can be migrated concurrently between the same HDS and HPE 3PAR StoreServ pair. Concurrent migrations between multiple, independent HDS and HPE 3PAR StoreServ pairs can be executed from the same OIU environment. Concurrent migrations from different source brands (EMC, IBM, or HDS) to the same HPE 3PAR StoreServ are not supported.

Example migration flow

The OIU console offers 18 intuitive commands. A migration can be executed with as few as eight commands in the order shown in figure 1:

Figure 1. Online Import Utility commands to complete a migration

3. snia.org/


The addsource and adddestination commands tell OIU the details of the source and destination systems taking part in the migration. The createmigration command assembles the list of volumes for the migration along with their provisioning type and landing Common Provisioning Group (CPG) on the destination HPE 3PAR StoreServ. Repeated execution of the showmigration command informs the user of the progress of the createmigration command. When it has completed, the import of the data from the source system to the destination is triggered with the startmigration command. showmigrationdetails displays the progress of the migration as a percentage on a per-volume and combined basis. When a migration is completed, its details can be removed from OIU using removemigration. When all desired volumes from all hosts have been migrated successfully, the OIU setup can be dismissed using removedestination and removesource. Consult the help in the OIU console and the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide for the options and values of each command.
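As an illustration, the sequence below sketches a complete migration session in the OIU console. The addsource and createmigration lines follow examples shown later in this paper; the option names on the remaining lines, the migration ID, and all addresses and credentials are hypothetical placeholders, so consult the console help for the exact syntax:

addsource -type HDS -mgmtip 10.1.1.2 -uid 48274 -user <username> -password <password>
adddestination -mgmtip 10.1.1.3 -user <username> -password <password>
createmigration -sourceuid 48274 -migtype MDM -srcvolmap [{"00:00:27",full,FC_r1}] -destcpg FC_r5 -destprov thin -persona WINDOWS_2008_R2
showmigration
startmigration -migrationid 1400254873916
showmigrationdetails -migrationid 1400254873916
removemigration -migrationid 1400254873916
removedestination -uid <destination node WWN>
removesource -uid 48274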

Data migration phases

Data migration using HPE 3PAR Online Import can be broken down into three phases.

Pre-migration phase

At the start of the pre-migration phase, two FC SAN zones are created for the Peer links between an HDS and an HPE 3PAR StoreServ system. Next, the administrator starts the HPE 3PAR OIU application and adds the details of one or more source and destination storage systems in the OIU console. The Utility communicates via SMI-S with the HCS instance managing the HDS system to validate the array type and firmware level of the systems. It also checks for the presence of a valid Online Import or Peer Motion license on the destination HPE 3PAR StoreServ system(s). Next, the administrator zones the host to the HPE 3PAR StoreServ and verifies in the HPE 3PAR StoreServ Management Console (SSMC) or the HPE 3PAR CLI that the expected number of paths from the host to the HPE 3PAR StoreServ is present, as sketched below.
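For example, the path count can be verified with the showhost command in the HPE 3PAR CLI; the host name winhost01 is a hypothetical placeholder:

cli% showhost -d winhost01

In the detailed output, each pairing of a host HBA WWN with an HPE 3PAR StoreServ port corresponds to one path; the number of pairings should match the number of zoned paths.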

Next, the administrator spells out the migration definition by issuing the createmigration command with its options. These include the five-digit serial number of the HDS source system (for example, 48274), the Node WWN of the destination HPE 3PAR StoreServ (if multiple destinations are added to OIU), and the full or partial list of volumes to be migrated. It also includes the HPE 3PAR persona value for the host operating system, and more.

For online and minimally disruptive migrations, an algorithm in OIU adds to the migration definition any other volumes presented to the same host or to any other host in the cluster. This implicit addition is based on the objects present in the HCS database. Next, OIU creates two new Host Groups on the HDS system named HCMDxxxxx, where each x is a hexadecimal digit, for example, HCMD00F0B and HCMD08F0A. Each of these contains the WWN of one of the Peer ports on the destination HPE 3PAR StoreServ system.

Next, all volumes to be migrated are presented by OIU to these newly created Host Groups over the HDS FC ports that are the end points of the Peer links. With the Peer ports being initiators, the HPE 3PAR StoreServ system becomes a host to the HDS system. Next, the definition of the migrating host and a host set named OIU_HOST_SET_X (where X is a whole number) containing the migrating host are created on the destination HPE 3PAR StoreServ.

To end the pre-migration phase, OIU creates one volume per migrating volume on the destination HPE 3PAR StoreServ, in RAID 0 protection, with the “Peer” provisioning type and the same size and WWN ID as on the HDS source system. If the migration is of the online type, these Peer volumes are now exported to the migrating host. After a SCSI bus rescan, this export causes host application I/O to flow to the source HDS storage through the destination HPE 3PAR StoreServ and the Peer links. This complements the I/O flowing from the host to the source directly. In the case of a minimally disruptive migration (MDM), the export of the Peer volumes to the host happens with the startmigration command. In both cases, the LUN ID used on the source HDS system is preserved on the destination HPE 3PAR StoreServ unless a conflict occurs.


Data migration phase

In preparation for the actual migration, the host multipathing solution must be reviewed. If the Hitachi Dynamic Link Manager (HDLM) multipathing solution is present, it should be removed from the host now; this requires a reboot of the host.

For an online migration, install and configure the host OS native multipathing solution and verify that the application I/O flows in part over the paths to the HPE 3PAR StoreServ. When verified, you can safely unzone the host from the source HDS system.

For MDM and offline migrations, stop all applications on the host that use the volumes to be migrated. Next, install and configure the multipathing solution for HPE 3PAR volumes on the host, unzone the host from the source HDS system, and shut it down.

For all migration types, the actual data migration is now started by issuing the startmigration command. For an online migration, the migrating volumes are unpresented from the host on the source HDS system. For MDM, the migrating volumes are now exported to the host on the destination system. For both migration types, the host now accesses the migrating volumes on the HDS system via the destination HPE 3PAR StoreServ system and over the Peer links. For MDM, the host can be powered on once the individual import tasks for the migration of the volumes have been created on the destination HPE 3PAR StoreServ; this can be verified with the showmigrationdetails command.

After the reboot of the host, the application(s) can be started and validated. For an offline migration, the host stays down until the migration of all volumes has completed. After the migration has started, OIU acts as a monitoring tool for assessing migration progress through the showmigrationdetails command. For all three migration types, the data migration phase ends when all volumes in the migration definition have been transferred.

Post-migration phase

At completion of the migration, the presentation of the HDS volumes to the HCMDxxxxx Host Groups is removed; the Host Groups stay in place for later reuse. The migration definition can now be removed from OIU. If no more OIU migrations from the source HDS system are planned, you can remove the source system from the OIU environment and unzone the Peer links. If no more migrations to the HPE 3PAR StoreServ system are to be executed, the destination system can be removed from the OIU environment and the Peer ports can be reconfigured into host ports. With the whole migration setup now removed, the HCMDxxxxx Host Groups can be deleted manually from the HDS source system.

Zoning steps

Figures 2–4 elaborate on the FC zoning changes and host downtime for an online migration, an MDM, and an offline migration. For an online migration, the required zoning changes are executed while the host is up and all applications stay online. During an MDM, the host has to go down for the changes in the zoning and for reconfiguring multipathing on the host; this is indicated by the shaded host box in the figures. The host can be rebooted and the applications restarted after the import tasks are created by HPE 3PAR OS on the destination HPE 3PAR StoreServ.

In MDM, the application downtime is reduced to about 15 minutes, while the actual data import can take hours. For an offline migration, the host and all its applications must stay down during the entire migration; the required zoning changes and the reconfiguration of multipathing on the host take place after the data migration is completed.


Figure 2. Zoning steps for an online migration from an HDS source system to an HPE 3PAR StoreServ destination

Figure 3. Zoning steps for an MDM migration from an HDS source system to an HPE 3PAR StoreServ destination

Figure 4. Zoning steps for an offline migration from an HDS source system to an HPE 3PAR StoreServ destination


Life of an I/O

At the end of the pre-migration phase of an online migration, the migrating host is connected and zoned to the source HDS and the destination HPE 3PAR StoreServ Storage. All application reads and writes take the path from the host to the source HDS system. The Peer volumes are created and exported to the host but receive no I/O, hence contain no data in this phase. A rescan of the HBAs on the host activates the paths between the host and the destination HPE 3PAR StoreServ. With the Peer volumes and the zones for the Peer links in place, the destination system acts as a proxy for I/O to the source volumes on the HDS system. As a result, application reads and writes now also pass over the new paths through the HPE 3PAR StoreServ and the Peer links to the source volumes.

For a minimally disruptive migration, the migrating host is never connected to the source and the destination system at the same time. To this end, the Peer volumes are created but not exported from the HPE 3PAR StoreServ to the host at the end of the pre-migration phase. At this moment, all application reads and writes still take the path from the host to the source HDS system. The cutover for application I/O from the source to the destination happens during the shutdown of the host: the host is unzoned from the source system and was already zoned to the destination.

For an offline migration, the host application using the volumes under migration is halted, after which the host is shut down and rezoned from the source to the destination system.

For the online migration and MDM, at the onset of the actual migration, the presentation of the volumes under migration to the host is removed on the source system, disabling I/O over the original paths. After this, the transfer of data to the Peer volumes is initiated. During this import, data migration traffic flows from the HDS source system to the destination HPE 3PAR StoreServ over the Peer links while application I/O travels in the opposite direction over the same Peer links.

While the data transfer is ongoing, the source volumes are updated with application writes traveling to them from the host through the destination HPE 3PAR StoreServ and over the Peer links. These application writes are also applied to the destination Peer volumes if the landing region for the write has already been migrated from the source. Once a volume is entirely migrated, write updates happen only on the destination volume and no longer on the source volume, which means the source volume becomes stale right afterwards.

This concept leads to an increasing amount of “double writes” as the migration progresses. As an example, when a volume has been migrated for 60 percent, every application write has a 6-in-10 chance of being written to both the source and the destination system, adding incremental latency. Updating the source volumes for the duration of the migration of a volume is a safety mechanism: it ensures that a migrating volume stays current on the source for the period of the migration. It also eases rollback in case the migration does not complete: reverting to the initial situation requires only a zoning operation and a re-presentation of the migrating volumes from the source system to the host.

For destination systems on HPE 3PAR OS 3.2.1 and earlier, application reads always come from the source HDS system, incurring extra latency and reducing the bandwidth available for the import traffic. For a destination system on HPE 3PAR OS 3.2.2, an application read is served from the destination HPE 3PAR StoreServ system if the region it targets has already been migrated. This saves the latency of a hop over the SAN to read the data on the source system and frees up bandwidth for the import traffic.

Consistency groups

HPE 3PAR Online Import from HDS systems to an HPE 3PAR StoreServ creates an import task for every volume under migration. Up to nine volumes migrate simultaneously; the other tasks are in the standby state and start executing when one of the nine running tasks completes. All running import tasks execute with equal priority.

Because smaller volumes are transferred in less time than larger ones, a situation can arise where, for example during a database migration, the small database control and log files are completely migrated to the destination array while the migration of the large database record files is still ongoing. If the migration fails at that point due to a hardware or software error, the database logs are stale on the source system while the database records are current thanks to the double writes. Restarting the database on the source system using the stale log files is suboptimal; rollback requires copying the control and log files from the destination to the source system.

To remove the need for this copy operation, the concept of Consistency Groups (CG) is implemented in HPE 3PAR Online Import. Application I/O issued during a migration to volumes that are members of a CG keeps being mirrored to the source array, even after a volume has migrated completely, until all members of the CG have been migrated to the destination array. This way, the small and the large volumes under migration stay current on the source system during the entire migration, making rollback in the case of an unfinished migration straightforward and simple. Note that the Consistency Groups in HPE 3PAR Online Import bear no relationship to the objects with the same name used in TrueCopy replication between HDS systems.


Defining CGs happens at createmigration time. Multiple CGs can be specified, each with two or more members. A CG can also include members that were not explicitly listed in the createmigration command but are added implicitly. One of the options of createmigration conveniently places all explicitly and implicitly selected volumes into a single CG, which significantly reduces the typing effort and the risk of mistakes; a sketch follows below.
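As a sketch, a migration definition with a consistency group could look like the following line. The -consistencygroup option name and the volume IDs are purely illustrative placeholders; consult the console help or the user guide for the actual option syntax:

createmigration -sourceuid 48274 -migtype online -srcvolmap [{"00:00:27",full,FC_r1},{"00:00:28",full,FC_r1}] -destcpg FC_r5 -destprov thin -persona WINDOWS_2008_R2 -consistencygroup ["00:00:27","00:00:28"]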

Pathing evolution for online migration

From figures 2–4, we see that the FC pathing from the host to the volume under migration is changed over time by OIU and by manual operations. As an example, this section elaborates on the evolution of the pathing on a Red Hat® Enterprise Linux® (RHEL) 6.6 host for a 50 GB volume during the different migration steps of an online migration from an HDS source system to an HPE 3PAR StoreServ. The output in this section is produced with the command multipath -ll on the Linux host.

In the initial situation, the volume is presented to the host over two paths from the HDS system with serial number 53094 (0xcf66, a value visible inside the volume WWN). These paths are registered on the host with device file names /dev/sdh and /dev/sdw. The native Linux device-mapper multipathing software on the host bundles these paths into one device named /dev/mapper/mpatheb. The LUN ID for the volume is 7. The SCSI Inquiry string governing the I/O by the host to the volume on the source HDS system is HITACHI,OPEN-V. Here is the output of multipath -ll for this 50 GB volume:

mpatheb (360060e8006cf66000000cf66000000a0) dm-2 HITACHI,OPEN-V
size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 0:0:1:7 sdh 8:112  active ready running
  `- 1:0:1:7 sdw 65:96  active ready running
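On a RHEL host, the SCSI bus rescan that makes the new paths visible can be performed as in the sketch below; host numbering varies per system, and distribution tools such as rescan-scsi-bus.sh achieve the same result:

# Scan all SCSI hosts for new devices, then reload and list the multipath maps
for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done
multipath -r
multipath -ll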

The next output shows the pathing situation after createmigration completed and a SCSI bus rescan was executed on the host. The two original paths are complemented by two more with device file names /dev/sdaq and /dev/sdav:

mpatheb (360060e8006cf66000000cf66000000a0) dm-2 HITACHI,OPEN-V
size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 0:0:1:7 sdh  8:112   active ready running
  |- 1:0:1:7 sdw  65:96   active ready running
  |- 0:0:3:7 sdaq 66:160  active ready running
  `- 1:0:4:7 sdav 66:240  active ready running

These new paths connect the host to two controller nodes on the destination HPE 3PAR StoreServ. Note that the LUN ID for the volume is identical over these new paths. With the Peer links between the HDS source system and the destination HPE 3PAR StoreServ in place, all four paths carry application traffic.

After it is verified that I/O flows over all four paths, the original paths with device names /dev/sdh and /dev/sdw are removed manually, resulting in this output:

mpatheb (360060e8006cf66000000cf66000000a0) dm-2 3PARdata,VV
size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 0:0:3:7 sdaq 66:160  active ready running
  `- 1:0:4:7 sdav 66:240  active ready running

Application I/O now travels from the host through the destination HPE 3PAR StoreServ Storage and over the Peer links to the volume on the HDS system. Note that the SCSI Inquiry string shown in the output of multipath -ll has now changed to the one for HPE 3PAR StoreServ. The SAN zoning from the host to the HDS source system is now removed. At the onset of startmigration, OIU unexports the volume from the host on the HDS system and the import of data commences.
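For reference, the manual removal of the two original paths on the Linux host can be done by deleting the corresponding SCSI devices, as sketched below for the device names used in this example:

# Delete the stale SCSI devices that pointed at the HDS system, then reload the maps
echo 1 > /sys/block/sdh/device/delete
echo 1 > /sys/block/sdw/device/delete
multipath -r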


Requirements

The hardware environment to execute an HPE 3PAR StoreServ Online Import Utility (OIU) migration from an HDS source system to an HPE 3PAR StoreServ consists of:

• A supported model of HDS Storage system with a supported microcode version

• A Windows® or Linux server installed with a supported version of Hitachi Command Suite (HCS)

• A valid “Core CLI/SMI-S” license installed on HCS

• One or more LUNs on the HDS system presented to a server running a supported operating system and supported multipathing solution; the LUNs should not be in replication with a second HDS array

• A supported model of HPE 3PAR StoreServ with a supported HPE 3PAR OS version

• A valid Online Import or Peer Motion license on the destination HPE 3PAR StoreServ

• A server with a supported Microsoft® Windows operating system running the OIU server software

• A server with a supported Microsoft Windows operating system running the OIU client software; the client software can run on the system running the server portion of OIU as well

• A single or dual FC fabric to create the SAN interconnection between the source HDS and destination HPE 3PAR StoreServ

Consult the HPE SPOCK website for the list of arrays and host operating systems supported with HPE 3PAR OIU. Any HPE 3PAR StoreServ system currently under support can be the destination of an Online Import migration.

The OIU client software communicates with the OIU server through a REST API. If the OIU server is on a different system than the OIU client, TCP ports 2370 and 2371 need to be open between them. The OIU server sends SMI-S commands over TCP port 5989 to the CIMOM repository built into the HCS software to present and unpresent LUNs and to create and modify Host Groups on the HDS source array. This port needs to be open between the OIU server and the server running HCS.
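On recent Windows versions, the reachability of these ports can be checked with PowerShell; the host names below are hypothetical placeholders:

Test-NetConnection -ComputerName oiuserver.example.com -Port 2370   # OIU REST API
Test-NetConnection -ComputerName oiuserver.example.com -Port 2371   # OIU REST API
Test-NetConnection -ComputerName hcs.example.com -Port 5989         # SMI-S/CIMOM on the HCS server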

Host connectivity to the HDS and the HPE 3PAR StoreServ systems is supported for FC and Fibre Channel over Ethernet (FCoE). In the latter case, an FCoE switch for converting FCoE to FC for array connectivity is required.

The host to which the HDS LUNs are presented stays the same throughout the migration, which means its operating system, its FC HBA or CNA, and their firmware versions have to be supported by the destination HPE 3PAR StoreServ system. This has to be investigated during the planning phase of the OIU migration operation.

The frame-based license called “Core CLI/SMI-S” for the SMI-S provider within HCS is included with the purchase of an HDS array and is sufficient to conduct an Online Import migration from HDS Storage to HPE 3PAR StoreServ Storage. This license can be installed from the splash screen of the HiCommand GUI and shows up in the overview of licenses in the GUI. When it is installed, verify with the following HCS CLI command that the value of the license parameter equals 100:

HiCommandCLI http://hcs.example.com:2001/service GetServerInfo -u xxx -p yyy
...
license=100
...

The HiCommandCLI command is part of the Command Line Interface for managing an HDS system and is required for some of the HiCommand operations described in this section. This interface can be downloaded from the Tools -> Download menu on the home page of the HiCommand GUI. Alternatively, it can be installed from C:\Program Files\HiCommand\DeviceManager\HiCommandTools\cli. The Hitachi Device Manager license is optional; if it is installed, it includes the “Core CLI/SMI-S” license. No specific license is required for HPE 3PAR Online Import on the HDS Storage source system.

The HDS array under migration needs to be added to the HCS. As an example, to add an HDS USP_V model with a Service Processor at IP address 10.10.10.1 to the HCS instance running on system hcs.example.com, the following command can be used:

HiCommandCLI http://hcs.example.com:2001/service AddStorageArray ipaddress=10.10.10.1 family=USP_V displayfamily=USP_V userid=xxx arraypasswd=yyy


The supported values for family and displayfamily in the previous command are listed in the output of this command:

HiCommandCLI http://hcs.example.com:2001/service GetServerInfo -u xxx -p yyy

Multiple HDS arrays can be added to the HCS environment.

The HPE 3PAR OIU console

All commands for an HDS to HPE 3PAR StoreServ Online Import migration are executed from within the HPE 3PAR OIU console. The Utility client portion can be on a different computer than the server portion. Consult the HPE SPOCK website for the list of supported operating systems for the OIU client and server.

The commands and their options specified in the HPE 3PAR OIU client are case insensitive. However, option values such as the names of the host, volume, and CPG are strictly case sensitive and should be spelled using the capitalization in use on the HDS and HPE 3PAR StoreServ systems. An extensive explanation of every command and its options is available in the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide or by appending -help to the command in the Utility console. Scripts for the unattended migration of volumes can be created using these commands; see the Scripting section of this paper for more information.

Features

Volume support

Volumes with the following characteristics can be migrated from a supported HDS Storage system to an HPE 3PAR StoreServ:

• Open Systems volumes of any emulation type

• Logical Unit Size Expansion (LUSE) volumes

• Volumes with a size between 256 MiB and 16 TiB, including sizes defined on the HDS system as a number of cylinders or 512-byte blocks

• Volumes in a Dynamic Tiering configuration; they will land in the CPG specified in createmigration regardless of their tiering policy on the HDS source system

Volumes on External Storage and in local (ShadowImage) or remote replication (TrueCopy) cannot be migrated using HPE 3PAR Online Import. Volumes with a mainframe emulation type are not supported for migration.

LUN ID conflict resolution

HPE 3PAR Online Import retains a LUN’s configuration during migration: its name, size, WWN, and export host remain identical. While OIU tries to keep the LUN ID the same on the destination HPE 3PAR StoreServ as well, this is not possible if the ID of the migrating volume on the source is already in use on the destination for the particular host. HPE 3PAR OIU discovers this LUN ID conflict and by default automatically assigns a new, unused LUN number, starting from zero, to the incoming volume on the destination array; the user has no control over the assigned number. This may cause an issue for applications that require data volumes to be presented with a particular LUN number. The option -autoresolve false in createmigration causes the command to fail instead, with the following message in the output of showmigration:

preparationfailed(-NA)(:OIUERRDST0008:Admit has failed. Failed to export <volume> to <host>: Error: LUN <id> is already taken;)

The customer can then resolve the conflict on the source or the destination system; the manipulated volume requires downtime.
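A quick way to list the LUN IDs a host already uses on the destination is the showvlun command in the HPE 3PAR CLI; the host name winhost01 is a hypothetical placeholder:

cli% showvlun -host winhost01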

Peer links

HPE 3PAR OIU uses in-array technology for the data migration: no hardware appliance or external server software is deployed. The source and the destination array are physically interconnected by the Peer links over FC SAN switches; direct FC interconnect is not supported.

For the Peer links, two FC ports in target mode, located on CHIP boards in different clusters of the HDS source system, are selected. These ports can be shared with other hosts. On the destination HPE 3PAR StoreServ system, two unused FC host ports are selected and configured into the Peer connection mode; this configuration is executed by the storage administrator outside of OIU and before the Utility is started, as sketched below. The Peer connectivity between the two storage systems is constructed by zoning one HDS target port into the same SAN zone as one HPE 3PAR Peer port. Each zone should contain only two ports, one from each storage system.
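Converting an HPE 3PAR host port into a Peer port is done with the controlport command in the HPE 3PAR CLI. The sequence below is a sketch for the hypothetical port 1:2:1; verify the exact steps in the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide before applying them:

cli% controlport offline 1:2:1                # take the port offline
cli% controlport config peer -ct point 1:2:1  # set the connection mode to Peer, point-to-point
cli% controlport rst 1:2:1                    # reset the port to activate the new configuration
cli% showport 1:2:1                           # verify the port reports the peer mode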


Note that after becoming a Peer port, the WWN of the host port changes from 2x:xx:00:02:AC:xx:xx:xx to 2x:xx:02:02:AC:xx:xx:xx; the change is in the third byte (00 becomes 02). This changed WWN must be used in the SAN zoning for the Peer links. The SAN zones for the Peer links are set up outside of OIU using an appropriate SAN switch management tool. The physical FC ports for the Peer links must be on HBAs located in different, partner controller nodes (for example, nodes 2/3 or 6/7). No dedicated HBA is required for the Peer ports on the destination array.

Data migration at 16 Gbps

Although the impact of data migration upon hosts is negligible, most customers prefer to conduct a data migration in the least possible time. HPE 3PAR Online Import supports 16 Gbps FC ports for the Peer links on HPE 3PAR OS 3.2.2 eMU2 and later. These ports are integrated into the controller nodes of the HPE 3PAR StoreServ 8000 Storage and HPE 3PAR StoreServ 20000 Storage; they are also available on recent units of the HPE 3PAR StoreServ 7000 Storage and on a plug-in card for the controllers of all the mentioned systems. An end-to-end 16 Gbps SAN fabric improves the data import throughput compared to 8 Gbps, which reduces the transfer time significantly and delivers the benefits of the destination platform to migrating workloads sooner. The 16 Gbps Peer ports are compatible with 8 and 4 Gbps FC ports on HDS source systems.

Multipathing

All modern host operating systems include a native multipathing solution. Storage array vendors often complement these with software delivering additional, array-specific features. Hitachi Dynamic Link Manager (HDLM) is Hitachi’s multipathing software. Versions of it exist for all host operating systems supported by HPE 3PAR Online Import except HP-UX 11i v3, where Hitachi relies on the native OS multipathing software. HPE 3PAR Online Import does not support the presence of HDLM when migrating volumes, so it needs to be uninstalled prior to initiating the migration. While the removal of HDLM is transparent on Windows, it can cause issues when mounting file systems on VMware®, Linux, and UNIX® systems after the reboot following the removal. On Linux, the device file names for the multipathed devices change from /dev/sddlm[aa-pop][1-15] to /dev/mapper/mpathX for use with device-mapper, forcing a review of the /etc/fstab file. This requires careful preparation, in particular when migrating a boot-from-SAN volume. Review the section covering multipathing in the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide before removing HDLM; a device-mapper configuration sketch follows below.
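For reference, a device-mapper stanza for HPE 3PAR volumes on a RHEL 6 host resembles the sketch below; treat the parameter values as illustrative and verify them against the current HPE implementation guide for your OS version:

# /etc/multipath.conf device entry for HPE 3PAR StoreServ volumes (illustrative)
device {
    vendor               "3PARdata"
    product              "VV"
    path_grouping_policy multibus
    path_selector        "round-robin 0"
    path_checker         tur
    hardware_handler     "0"
    failback             immediate
    rr_weight            uniform
    rr_min_io            100
    no_path_retry        18
}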

The selection of the LUNs managed by a particular multipathing solution is controlled by the LUN’s SCSI Inquiry string. A blacklist specifies the devices that will be ignored and exceptions to the blacklist can be added. The SCSI Inquiry string identifies the type and make of the device; each array vendor owns one or more of these strings. See figure 5 for the listing of SCSI Inquiry strings known to HDLM in a Windows environment.

Figure 5. SCSI Inquiry strings installed by HDLM on a Windows Server®


When migrating a LUN from an HDS system to an HPE 3PAR StoreServ, the SCSI Inquiry string for the LUN changes. This change requires a reboot of a Windows host, forcing Online Import for this operating system to be of the MDM type. The reboot is not necessary for VMware, Linux, and UNIX systems, as they handle the change to a new SCSI Inquiry string and multipathing context gracefully.

SCSI reservations

At the start of the actual migration from HDS Storage to HPE 3PAR StoreServ Storage, OIU places a SCSI reservation on each migrating volume on the HDS array that is presented to the HCMDxxxxx Host Groups. These reservations can be examined from the Remote Web Console (RWC) and from the Command Control Interface (CCI). In figure 6, we see the reservations in RWC on the SVP marked as PGR/Key in the “Status” column of the LUN details for the four volumes under migration. Make sure to select “LUN Status” in the pull-down menu for “View” on the screen to enable the “Status” column. The user should not release these reservations.

Figure 6. Viewing the SCSI reservation of an HDS LUN under migration

After the successful migration of the volume from HDS Storage to HPE 3PAR StoreServ Storage, these reservations are cleared automatically to allow OIU to remove their presentation from the HCMDxxxxx Host Groups on the HDS system. In case the actual data migration starts but does not complete, these reservations remain present and must be removed manually if the migration is to be restarted. See Appendix E in chapter 22 of the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide for how to remove these SCSI reservations.

Encryption

Both HDS and HPE 3PAR StoreServ Storage systems support data-at-rest encryption, although the implementation is different. HDS encrypts incoming data at the storage system level with encryption engines integrated into the controllers. Decryption takes place when a host requests the data, resulting in unencrypted information leaving the HDS system. The disk drives in use do not differ from those in the non-encryption solution.

HPE 3PAR StoreServ systems support data-at-rest encryption using self-encrypting disk drives (SED). Each SED drive in the HPE 3PAR StoreServ contains an ASIC handling the encryption and decryption of data; these operations are not executed in the HPE 3PAR OS. Decryption takes place when a host requests the data, resulting in unencrypted information leaving the HPE 3PAR StoreServ. The disk drives supporting data-at-rest encryption differ from those that do not.


The net effect of both implementations is that data enters and leaves each array unencrypted; the data-at-rest encryption only exists inside the storage system. This means HPE 3PAR Online Import is transparently compatible with a source HDS system that may or may not encrypt data and with an HPE 3PAR StoreServ destination system that may or may not contain SED drives and have encryption enabled.

Replication

Volumes on the HDS system that are in local replication (ShadowImage) or remote replication (TrueCopy), or that are members of a pool for these features, cannot be migrated using HPE 3PAR OIU. The error message in createmigration for both types of replication is:

preparationfailed(-NA-)(ERROR: OIUERRMC10016 One or more of the volumes, selected explicitly or implicitly, are ineligible for migration due to a remote copy relation. OIURSLMC10016 Remove the remote copy relation for the volume(s) selected for migration.;)

Note that the statement that the ineligible volume is in a “remote copy relation” is inaccurate for ShadowImage volumes and pool members. The presence of volumes on the source HDS system that are in replication but not inside the migration definition does not affect OIU.

The size of an LDEV on HDS systems is defined at creation time in megabytes (MB), blocks of 512 bytes, or cylinders. Although volume sizes in HPE 3PAR OS are defined on boundaries of 256 MiB, OIU can create a Peer volume on the destination HPE 3PAR StoreServ for an HDS volume whose size is not an integral multiple of 256 MiB. Figure 7 shows four Peer volumes, each related to an LDEV on an HDS system whose size was specified in cylinders. The size of each volume is not on a 256 MiB boundary, but Online Import managed to create the Peer volume anyway.

Figure 7. Peer volumes with “broken” values for their size; see the VSize column

Interestingly, neither the HPE 3PAR StoreServ Management Console (SSMC) nor the HPE 3PAR CLI offers the possibility of creating a virtual volume on an HPE 3PAR StoreServ with such a “broken” size. OIU imports these volumes properly, and they become base volumes with the predefined provisioning type. Volumes with broken sizes can be the subject of operations such as snapshots, HPE 3PAR Physical Copy, and replication.

Of particular interest is the situation with replication. To protect the migrated volumes with HPE 3PAR Remote Copy (RC), HPE 3PAR SSMC or the HPE 3PAR CLI can be used as an interface to set up replication to a secondary array. The target volume on the RC destination must be of exactly the same size as the source volume. When setting up RC, you have the option in both interfaces to select pre-created volumes on the target system or to have the user interface create them for you. If the source volume has a “broken” size because of an earlier migration from an HDS system, you must let the RC interface create the target volumes for you. The resulting target volumes will have the same broken size as the source volumes. With these volumes, RC works as usual in any transfer mode.

Scripting

Given the ever-increasing quantity and complexity of their work, storage administrators combine lengthy, involved, and repetitive sequences of commands into a script that is executed with a single, simple command. Scripts speed up execution and greatly reduce human error in times of pressure, for example, during the creation of hundreds of volumes or when recovering from a disaster.

Using the commands available in the HPE 3PAR OIU client, the migration of a standalone host, cluster, or volumes in an online, minimally disruptive, and offline fashion can be scripted. The examples in the next sections run inside a Command Line window on the Windows system executing the client component of OIU.


The following commands log in to the HPE 3PAR OIU server environment installed on a Microsoft Windows system with IP address 10.1.1.1 and then add a source HDS system to it using the specified credentials:

SET DIR1="C:\users\administrator\OIU"

CD "C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\CLI"

TYPE %DIR1%\addsource.data | OIUCli.bat -IPADDRESS 10.1.1.1 -USERNAME <username> -PASSWORD <password> > %DIR1%\addsource.out 2>&1

Enter the TYPE command and its redirection as a single line. The username and password specified are for a Lightweight Directory Access Protocol (LDAP), Active Directory (AD), or local Windows account on the OIU server. The TYPE command feeds the contents of the text file addsource.data to the Utility environment. The file contains what one would type during an interactive OIU session to add the HDS source system to the environment. Here are example contents for this file:

addsource -type HDS -mgmtip 10.1.1.2 -uid 48274 -user <username> -password <password>

The username and password specified are for logging into the HCS software with “Administrator” rights. The syntax and the values for the different parameters of this command are validated in the OIU server. The output of the command is directed to the file addsource.out for parsing. After a successful execution, the output file contains information similar to this:

CLI Version: 1.5.0

Connected to server version: 1.5.0.307.022416

>Successfully logged in

>

>SUCCESS: Added source storage system.

Parsing this file for the string “>SUCCESS: Added source storage system.” determines whether the source system was added successfully. A similar input file with the adddestination command, sent with a similar command line, registers the destination HPE 3PAR StoreServ to the OIU environment. The migration definition is entered in the same way by feeding an input file with the desired content into the Utility server. Following is example content for this input file:

createmigration -sourceuid 48274 -migtype MDM -srcvolmap [{"00:00:27",full,FC_r1}] -destcpg FC_r5 -destprov thin -persona WINDOWS_2008_R2

If the command is syntactically correct, the OIU server scans the array to check the eligibility of the explicitly and implicitly added volumes. When finished, the resulting output contains a unique 13-digit migration ID that should be parsed:

>SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command.

Migration id: 1400254873916

Next, the Utility server starts the preparation work to admit the volumes and creates their Peer counterparts on the HPE 3PAR StoreServ destination. This can take a few minutes depending on the number of LDEVs and Host Groups on the HDS source systems and the number of LUNs to be migrated. Its completion is signaled by the presence of the expression preparationcomplete (100%) in the output of the showmigration command. A loop in the script checks for this expression in the output of showmigration every 30 seconds. When completed, the script creates a file with this content:

startmigration -migrationid xyz

with xyz being the 13-digit identifier from the output of createmigration, and feeds the contents of this file into the Utility server as shown earlier. The actual migration now starts and takes minutes to hours depending on the number and size of the volumes to be migrated from the HDS system and the level of activity on the HDS and HPE 3PAR StoreServ systems. The script then enters a loop that parses the output of showmigration every 30 seconds for the word success; this word marks the end of the data transfer.
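As an illustration, here is a minimal batch sketch of this loop. The input files startmigration.data and showmigration.data, holding the respective OIU commands, are hypothetical names following the pattern of addsource.data above:

ECHO OFF
SET DIR1="C:\users\administrator\OIU"
CD "C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\CLI"
REM Send the startmigration command to the OIU server
TYPE %DIR1%\startmigration.data | OIUCli.bat -IPADDRESS 10.1.1.1 -USERNAME <username> -PASSWORD <password> > %DIR1%\startmigration.out 2>&1
:poll
REM Wait roughly 30 seconds before querying the migration status again
PING -n 30 localhost > NUL
TYPE %DIR1%\showmigration.data | OIUCli.bat -IPADDRESS 10.1.1.1 -USERNAME <username> -PASSWORD <password> > %DIR1%\showmigration.out 2>&1
REM FINDSTR sets ERRORLEVEL to 0 when the word success is found
FINDSTR /I /C:"success" %DIR1%\showmigration.out > NUL
IF ERRORLEVEL 1 GOTO poll
ECHO Data transfer finished at %time%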

When the migration is completed, have the script create the file for removemigration with the 13-digit migration id. Feed this file and the ones for removesource and removedestination, each created upfront with their appropriate parameters, to the OIU environment as explained earlier. This cleans up the migration.


Monitoring and reporting

Hewlett Packard Enterprise encourages active monitoring of the data transfer throughput on the source and destination storage systems and in the HPE 3PAR OIU. The following sections explain how to monitor the migration and how to report on it.

On the source HDS system

The Performance Monitor that is part of the Hitachi Performance Manager Suite delivers statistics about the usage of physical hard disks, volumes, processors, cache, ports, and other resources. It also provides usage statistics on traffic between hosts and the source HDS Storage system. This licensed tool can identify the root cause if issues arise on the source HDS system during the Online Import transfer of data from the HDS Storage system to the HPE 3PAR StoreServ. Of particular interest is the traffic across the ports that are the end points of the Peer links on the HDS side, if these ports are shared with other hosts.

On the destination HPE 3PAR StoreServ system

The graphical capabilities of the HPE 3PAR SSMC can show the rate of ingest of data in real time over the Peer links into the destination HPE 3PAR StoreServ system.

For an online migration, the Peer links carry half of the application read and write traffic once the volumes under migration have been admitted and the host has rescanned to pick up the new paths to the destination HPE 3PAR StoreServ. This traffic can be monitored in the HPE 3PAR SSMC before the migration starts. The chart for this traffic can be used to determine a good starting time with low I/O for an online migration. For an MDM and offline migration, no data flows over the Peer links yet when createmigration has completed.

During both an online and MDM data transfer, the Peer link graph displays the Online Import traffic from the HDS Storage to the HPE 3PAR StoreServ system combined with the host application reads and writes traveling in the opposite direction. Figure 8 shows this graph for both Peer ports over the course of a few minutes. The vertical axis in the figure shows the averaged data points in KB/s. The granularity of the data points is 5 seconds by default and can be changed to a larger value at creation time of the chart. The data points are averages over the polling interval. For an offline migration, figure 8 would only show the volume import traffic from the source HDS to the destination HPE 3PAR StoreServ.

Figure 8. Graphical representation of the historical throughput for the Peer ports on the destination HPE 3PAR StoreServ system

Every volume migration runs as a separate task inside the HPE 3PAR OS on the destination system. The data transfer progress for the entire migration can be monitored in the SSMC from the “Peer Motions” submenu on the Federations page. The horizontal blue bar in the “Progress” column of the “Activity” screen tracks the progress per volume; see figure 9. At completion, figure 9 shows the end time for the migration and its duration. The “Task Detail” option shows in detail the migration of the volume per region and its pre- and post-processing activities.


Figure 9. Graphical representation of the progress of an Online Import migration for a specific volume

The HPE 3PAR CLI command statport -peer delivers this same information in a numerical format and includes information about the queue length for the Peer ports, their service time, the number of IOPS, and their I/O size (256 MiB for Online Import). This information updates by default every two seconds; its update frequency can be changed with the -d parameter at the start of the command. Figure 10 shows a screenshot of the output of this command.

Figure 10. Output of the HPE 3PAR CLI command statport -peer showing detailed information on the Peer ports traffic
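For example, to refresh the statistics every 10 seconds instead of every two:

statport -peer -d 10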


OIU throughput information can also be obtained by monitoring the ports on the SAN switches in the data path of the Peer links. For Brocade switches, use the CLI command portperfshow <port number>; for Cisco switches, use the CLI command show interface <port number>. These switch vendors also make port statistics available through a GUI.

Information about the progress of the region moves per import task can be found in the output of the CLI command showtask in the Step column; see figure 11. Notice that all import tasks migrate with medium priority; this cannot be changed.

Figure 11. Region move progress information from the output of the HPE 3PAR CLI command showtask

Detailed information about past migrations by the OIU can be filtered out of the HPE 3PAR Event Log by executing the CLI command showeventlog -min <minutes> -msg import. The -min parameter, expressed in minutes, indicates how far to go back in time to find past migrations. Figure 12 shows output for this command:

Figure 12. Extracting information from the HPE 3PAR Event Log on Online Import migrations


The output of the command reveals the name of the migrated volume(s) (00_14_08/9/A/B), the CPG they were created in (FC_r5) on the destination system, and their provisioning type (thin). The command showeventlog -min <minutes> -msg <volume> -oneline shows detailed information about the individual steps executed by the OIU to accomplish the migration for a particular volume; see figure 13.

Figure 13. Extracting detailed information per migrated volume from the HPE 3PAR Event Log

More detailed information can be obtained with showtask -d <task number>; the task number for the migration is found in the output of the showtask command as shown in figure 11.

The data migration continues even when any of the monitoring interfaces discussed earlier is closed or while other work is executed in them. You can reopen either application at a later stage to view the progress of the migration.

On the OIU server

The command showmigration in the OIU console lists the percentage value for the overall progress of the migration. The command showmigrationdetails -migrationid xyz, with xyz the 13-digit migration identifier, shows the progress percentage per volume.

The XML files at C:\Program Files\Hewlett-Packard\hp3paroiu\OIUData\data for the HPE 3PAR Online Import Utility contain information about the current or most recent migration, but they disappear after the removemigration command is issued. If you need a written record of the details of the migration (source array, destination array, migration type, volumes, CPGs, provisioning types, Consistency Groups, etc.) for reporting reasons, save the XML files to a different name or directory at the end of the migration before removing the migration ID, the source, and the destination system in the Utility client.
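A short batch sketch that preserves these files before cleanup; the archive directory name and the migration ID in it are examples:

SET SRC="C:\Program Files\Hewlett-Packard\hp3paroiu\OIUData\data"
SET DST="C:\OIU-archive\migration-1400254873916"
REM Create the archive directory and copy the XML migration records into it
MKDIR %DST% 2> NUL
XCOPY %SRC%\*.xml %DST% /Y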

The activities of the OIU server are captured in detail in the OIU log files. These log files are located on the OIU server at C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\OIUTools\tomcat\32-bit\apache-tomcat-7.0.37\logs. The current activities of the OIU server are written to hpoiu.log. When this log file reaches its maximum size of 20 MB, it is copied to hpoiu.log.1 and a fresh, empty hpoiu.log is created. This log rotation moves hpoiu.log.1 to hpoiu.log.2 when the hpoiu.log file is full again. Nine log files of 20 MB each are kept on disk; the oldest file, hpoiu.log.9, is deleted when hpoiu.log becomes full.

When a large number of LUNs on a large HDS Storage system are migrated, you may want to rename the log files or save them in a different location to prevent loss of log files and hence loss of information about the migration. Following is a Windows Command Line script that checks roughly every 30 seconds for the presence of the file hpoiu.log.1 and copies it to a new file with a unique time stamp each time it is created after hpoiu.log becomes full.

ECHO OFF
SET DIR="C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\OIUTools\tomcat\32-bit\apache-tomcat-7.0.37\logs"
:loop
REM Build a time stamp; replace a leading space with a zero for hours below 10
SET STAMP=%time: =0%
REM Copy and delete the rotated log only when it actually exists
IF EXIST %DIR%\hpoiu.log.1 (
COPY %DIR%\hpoiu.log.1 %DIR%\hpoiu.log.1-%STAMP:~0,2%h%STAMP:~3,2%m%STAMP:~6,2%s.log > NUL
DEL %DIR%\hpoiu.log.1 > NUL
)
ECHO ---- %time%
REM 30 pings to localhost pause the loop for roughly 29 seconds
PING -n 30 localhost > NUL
GOTO loop


We recommend starting this script on the system running the server component of the HPE 3PAR OIU right before opening the OIU client to add the source and destination array and define the migration.

Best practices

Installing the HPE 3PAR Online Import Utility

The HDS Storage to HPE 3PAR StoreServ Online Import software is free and downloadable from HPE Software Depot.4 An HPE Passport ID is required for this download.

At completion, the HPE 3PAR OIU installer shows the screen in figure 14. The check box near the bottom for showing the Windows Installer log is not ticked by default. Hewlett Packard Enterprise recommends saving this log file at the end of the installation in case any troubleshooting is required later on. To save the log file, tick the box for “Show the Windows Installer log” and click “Finish”. The Windows Installer log then opens in Windows Notepad as a text file named EOIU_Install.log. Its default location, when saved, is C:\Users\Administrator\Documents. If the box is not ticked when clicking the “Finish” button, the Installer log is lost.

Figure 14. Saving the Windows Installer log at the end of the HPE 3PAR OIU installation

At installation time, the OIU installer allocates TCP ports 2370 and 2371 for use with the OIU Server. Port 2370 is the Server Port by which the OIU client and server communicate. Port 2371 is the Shutdown Port for the Apache Tomcat web server. If either of the two ports is in use by other software, the OIU installer pops up the window shown in figure 15, prompting the administrator to supply one or two other port numbers.

Figure 15. Changing the Server and/or Shutdown Port in the OIU installer from the default values

4 h20392.www2.hpe.com/portal/swdepot/displayProductsList.do?category=3PAR


Only the port number that has to be changed has an active text box in figure 15. After making the change(s), click the “Next” button to continue with the installation. If the Server Port was changed, consult the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide for the changes needed to force OIU to use this new port number.

When only the client or only the server component of OIU is installed, you cannot add the missing component afterward while retaining the installed one.

When reinstalling the OIU software immediately after removal, the installer may report that TCP port 2370 is taken, prompting you to choose another port value. Before supplying a new port, verify with netstat -a which state this port is currently in. Here is example output for this command:

C:\Users\Administrator>netstat -a | find "2370"

TCP 127.0.0.1:2370 nodename:64304 TIME_WAIT

The output above shows the port is in the TIME_WAIT state, not in the usual LISTENING state seen when the OIU Server is active. The TIME_WAIT status indicates that the OIU server has closed the connection but any delayed packets can still be handled appropriately. The connection is removed after a timeout of a few minutes. Execute the command once more after a few minutes to verify that port 2370 no longer shows up. Next, click the “Back” button in figure 15 and proceed to install the OIU software. The installer will no longer issue the warning that TCP port 2370 is taken.
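A small batch sketch that polls until port 2370 has disappeared from the netstat output before continuing with the installation:

:wait
REM FIND sets ERRORLEVEL to 0 while the port still appears in the netstat output
NETSTAT -a | FIND "2370" > NUL
IF ERRORLEVEL 1 GOTO done
PING -n 30 localhost > NUL
GOTO wait
:done
ECHO Port 2370 is free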

At installation, two Windows groups called “HPE Storage Migration Admins” and “HPE Storage Migration Users” are created (unless they exist already) on the Windows system where the Server part of the OIU is installed. Members of the “HPE Storage Migration Admins” group can issue every OIU command to set up and execute a migration; members of the “HPE Storage Migration Users” group can only execute “view” commands such as showconnection and showmigration. These authorization levels separate user classes with different responsibilities. The members of these groups can be defined locally on the Windows system running the OIU Server component or they can be members of an LDAP or AD setup.

HPE 3PAR OIU retrieves information about the HDS storage system by interfacing with the CIMOM subsystem integrated in HCS. The communication protocol used is SMI-S over TCP port 5989 in secure mode (the default) and 5988 in insecure mode. Some tools for managing SAN and network switches, HBAs, and servers, and for metering storage performance, may allocate these ports explicitly or silently. HCS itself has the module “Hitachi Tuning Manager—Agent for SAN Switch” that makes use of the default SMI-S ports. When these ports are in use, OIU commands such as addsource fail, with a message in the log file that the wrong SMI-S namespace was supplied. The general cause of this error is that the CIMOM server of another management tool responds first to the SMI-S query from OIU. Either disable or remove those management tools, change their port number, or change the port number on the HCS side. Details on the latter can be found in the HCS Administrator Guide.5 With a changed port number, specify the -port option with the new port number for addsource and adddestination.
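For example, if the SMI-S port on the HCS side was changed to the hypothetical value 5990, the addsource command shown earlier becomes:

addsource -type HDS -mgmtip 10.1.1.2 -uid 48274 -user <username> -password <password> -port 5990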

Migration preparation

HPE 3PAR OIU is meant for migrations with inter-array distances within a typical data center. Over longer distances between the source and destination, the application latency of the host reaching out to the destination system may increase unacceptably.

HPE 3PAR OIU does not support the presence of Hitachi’s HDLM multipathing software on a host under migration for any host operating system. HDLM must be uninstalled before zoning the host to the destination HPE 3PAR StoreServ Storage; this uninstall requires a reboot of the host. If the host has LUNs exported to it from another HDS Storage, you can manage their paths with the multipathing software native to the host operating system after the HDLM removal.

During an OIU createmigration and startmigration operation, the HDS array's Remote Web Console (RWC) should not be in “Resource locked” mode and the Service Processor (SVP) application on the HDS system should not be in “Modify” mode. Either state prevents the creation of the Host Groups HCMDxxxxx during the pre-migration stage of OIU and, during the migration, prevents OIU from unpresenting the LUNs on the HDS Storage from the host. Do not make any changes to the LUNs under migration on the HDS Storage during the pre-migration or the actual data transfer.

5 knowledge.hds.com/Documents/Command_Suite


HPE 3PAR OIU retrieves its information on volumes on the HDS system by making SMI-S calls to the Device Manager database located in the HCS application. That database is not updated periodically in an automated way, meaning it falls out of sync with the HDS Storage when changes are made to volumes or hosts using RWC, the SVP, or CCI. Always refresh the Device Manager database before starting an OIU migration to ensure the createmigration command uses information that is current on the HDS Storage.

A volume presented to the host over a single path to an HDS Storage can be migrated to HPE 3PAR StoreServ using HPE 3PAR OIU. The volume will be presented to both HCMDxxxxx Host Groups on the HDS system and migrate to the HPE 3PAR StoreServ, where the definition for the host is created by OIU with the one path known to the system. If the host already exists on the destination HPE 3PAR StoreServ with one or two paths, createmigration may fail with the message:

OIUERRPREP1023: Failed to present volumes to host representing destination at source HCMDxxxxx

To resolve this situation, remove both HCMDxxxxx hosts on the HDS source system, remove the migration definition with removemigration, refresh the HCS Device Manager database, and restart the command.

Some types of host clustering software coordinate access to shared volumes using so-called SCSI-3 Persistent Group Reservations (PGR). A reservation governs the path between an HBA in the host and a volume on the storage array. For Windows clusters the Cluster Service needs to be stopped before executing the createmigration command to release the SCSI-3 reservations. This means the applications that use shared volumes have to be halted on a Windows cluster while createmigration is ongoing.

Evidently, the destination HPE 3PAR StoreServ must have the free space to accommodate the migrating volumes. The space for the volumes is allocated on the destination HPE 3PAR StoreServ after they are unpresented from the host on the HDS system; this happens early, after the startmigration command is executed. For volumes landing in full provisioning on the HPE 3PAR StoreServ, the required space is allocated entirely; for thin and dedupe provisioned volumes, the space is allocated as needed while the migration progresses. The provisioning type of the volumes on the HDS system does not matter in this space allocation on the HPE 3PAR StoreServ.

During the preparation of the migration and its actual execution, application writes to thin LUNs continue to consume space from the Dynamic Provisioning (DP) pool. If the free capacity of a DP pool is exhausted, its volumes become “protected” on an HDS system, causing host access disruption. Volumes in this protected state cannot be migrated using HPE 3PAR Online Import. Take care that the DP pool does not fill up while the migration is ongoing. To reduce this risk, you can execute the migration during a moment of low write traffic for the application, stop the application, execute the migration offline, or increase the DP pool size upfront to arrange a safe buffer of free space. A DP pool can be decreased in size after the migration on the HDS VSP platform.

HPE 3PAR OIU creates the definition of the host under migration on the destination HPE 3PAR StoreServ if it does not exist yet. The name of the host created on the HPE 3PAR StoreServ is the name of the HDS Host Group. For a host with two or more HBAs, it is recommended to have an identical name for the Host Groups on both HDS clusters so that OIU creates one host with two or more WWNs on the destination HPE 3PAR StoreServ. An HDS cluster in this context is one unit of the duplexed, redundant set of hardware for frontend and backend adapters and cache.

On HDS Storage systems, it is feasible to embed the WWNs of all HBAs in one SAN fabric for a cluster of hosts into one Host Group per HDS cluster. For this construct, OIU creates a single host on the destination HPE 3PAR StoreServ with the WWNs of all HBAs of all clustered hosts in it. To avoid this scenario, it is recommended to create a Host Group in each HDS cluster for every HBA present in the clustered hosts. This change on the HDS system can be done without downtime for the applications.

When the options -vvset XXX and -hostset YYY are specified in createmigration, OIU automatically brings the clustered hosts into a host set on the HPE 3PAR StoreServ and places the migrated volumes in a volume set. The -hostset option cannot be used without -vvset. If the customer does not make use of VVsets, choose an MDM type of migration and, during the downtime after createmigration completes, remove the exports of the LUNs from the individual hosts and remove the VVset. Next, create a host set manually from the hosts, and export the volumes under migration to the host set. After these manipulations and the rezoning work, the hosts can be restarted. This way the hosts are in a host set but the volumes exported to the hosts are not in a VVset.

A Host Group name on an HDS system can be up to 64 characters long; it has to be shortened to a maximum of 31 characters for the HPE 3PAR StoreServ to accommodate it. If the Host Group name is longer than 31 characters, createmigration fails with this message:

preparationfailed(-NA-)(ERROR: OIUERRMS10003 Source host not found. OIURSLMS10003 Ensure that the unique name specified for the source host is valid.;)


On the HPE 3PAR StoreServ, all alphanumeric characters plus the hyphen, period, and underscore are allowed. On HCS, a few more characters such as [ ] and { } can be used in the name of the Host Group. Verify the Host Group under migration in HCS and rename it if it contains characters that are not allowed on the HPE 3PAR StoreServ, as this causes createmigration to fail with the same message as mentioned earlier.

The CPG specified for -destcpg must exist on the destination HPE 3PAR StoreServ before the createmigration command is issued; the CPG is not created by OIU if it is not present. Volumes cannot be migrated to a CPG that is part of an HPE 3PAR StoreServ domain. You need to move the CPG to the default domain, or land the migrating volumes in a CPG in the default domain and move the data to the intended CPG using Dynamic Optimization.

The Online Import software on the destination HPE 3PAR StoreServ issues sequential reads of 256 MiB to the source HDS system over the Peer links to transfer the data. The impact on application performance during an online and minimally disruptive migration due to these intensive reads is hard to predict. If the host ports of Peer links on the HDS side are shared with other hosts, the impact of the migration may be non-negligible.

Hewlett Packard Enterprise recommends recording a performance baseline for the storage systems, SAN switches, and hosts involved in the week preceding the migration. This activity collects data points on application latency, on IOPS, and throughput on the HDS host ports and SAN switches, and for the I/O subsystems on the HDS and the HPE 3PAR StoreServ. The Performance Monitor in the Hitachi Performance Manager Suite, HPE 3PAR System Reporter, and SAN switch performance collectors can generate this baseline. With this information, the customer can determine periods of low activity for a particular application and hence the best time for migrating it. The increase in I/O, system, and SAN load over the recorded baseline can be attributed to the Online Import activities.

Host, volumes, and migration

Names of volumes on the HDS system are of the type CU:LDEV or LDKC:CU:LDEV, with LDKC either 0 or 1, and CU and LDEV varying between 00 and FF. OIU retains the name of the volume on the destination HPE 3PAR StoreServ, but the colon “:” in the name is not supported on the HPE 3PAR StoreServ. The colons in the name of the HDS volume are converted by OIU into underscores “_”. As an example, volume 00:A2:29 on the HDS system lands as 00_A2_29 on the HPE 3PAR StoreServ. HPE 3PAR OS supports volume names of up to 31 characters. We suggest, as a post-migration activity, changing the name of every migrated volume to a more meaningful one such as “Brussels_Marketing_DB_Vol3”, which is more revealing than two or three groups of hexadecimal characters.

For online and minimally disruptive migrations, volumes or hosts may be added implicitly to the explicitly given one(s), following an algorithm in OIU explained in the HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide. This obviates the need to list explicitly all the volumes exported to a particular host in the -srcvolmap option of the createmigration command, a potentially lengthy activity that may lead to typing mistakes.

The complete list of explicit and implicit volumes selected by the createmigration command is shown in the output of showmigrationdetails -migrationid <id>, with <id> the 13-digit identification of the migration. This command can be executed after showmigration indicates 6 percent completion of the migration preparation. The list can be written to a file using the -filewrite option of showmigrationdetails. Hewlett Packard Enterprise recommends creating this file for easy inspection of the list of volumes scheduled for migration. Review this list with the customer to help ensure that all volumes that must migrate are in the list and that volumes that have to stay on the source system are not. Alternatively, you may use the -volmapfile option, which accepts a file as a parameter. Use double instead of single backslash characters to refer to a particular directory location on the Windows file system for the file. The file contains one HDS volume per line, specified with the colon separators in the volume name. The optional CPG and provisioning type for each volume go on the same line, separated from the volume name and from each other by a comma. If a particular parameter is absent for a volume, it is taken from the values for -destcpg and -destprov. Note that the 64-character alphanumeric LDEV name identifying a volume in HCS or RWC cannot be used in createmigration to specify a volume for migration.
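As an illustration, a hypothetical volmap file could look like this; the first line carries its own CPG and provisioning type, the second only a CPG, and the third inherits both values from -destcpg and -destprov:

00:14:04,FC_r5,thin
00:14:05,FC_r1
00:14:06

The matching createmigration call, with the double backslashes in the Windows path, would then be:

createmigration -sourceuid 48274 -migtype MDM -volmapfile C:\\users\\administrator\\OIU\\volmap.txt -destcpg FC_r5 -destprov thin -persona WINDOWS_2008_R2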

By default, the LUN ID of the migrated volume on the destination is the same as the one on the source system. When the LUN number of a source volume is already in use on the destination system for the host under migration, createmigration resolves this LUN ID conflict by selecting an unused ID, starting from zero, for the destination volume. This behavior can be overruled with the option -autoresolve false for createmigration. With this option, createmigration throws an error when a LUN ID conflict is found. In this case, the LUN number in use has to be changed either on the source or on the destination system; that is disruptive to I/O. When applications depend on finding their data on volumes with a prescribed LUN ID, it is recommended to set the -autoresolve parameter to false.

The size of a migrated volume is the same as on the source system; the volume can be expanded on the destination system after its migration ends. This expansion is non-disruptive to I/O to the volume on the HPE 3PAR StoreServ but may require a disruptive manipulation on the host operating system to allow the volume to use the additional space.


LUSE volumes on an HDS Storage are constructed from a head volume with one or more Expansion LDEVs or other LUSE volumes attached to it. OIU supports the migration of a LUSE volume. In createmigration, the ID of the head LDEV of the LUSE construct is supplied. On the destination HPE 3PAR StoreServ, a volume with the combined size of the head and Expansion LDEVs is created.

The WWN of a migrated volume on the destination array is identical to the one on the source array. The command showvv -d shows the WWN of each LUN. Here is example output on the destination HPE 3PAR StoreServ for LUN 00_14_04 that was migrated from an HDS array with serial number 53094 (CF66):

cli% showvv -d 00_14_04

Id Name Rd Mstr Prnt Roch Rwch PPrnt PBlkRemain -------------VV_WWN------------- ------CreationTime------ Udid

5885 00_14_04 RW 1/0/- --- --- --- --- -- 60060E8006CF66000000CF6600001404 2016-06-15 11:07:21 CEST 5885

The WWN of LUN 00_14_04 in column VV_WWN is a 128-bit quantity and lists the value CF66, the serial number of the HDS system in hexadecimal form, twice. Volumes created natively on an HPE 3PAR StoreServ have the serial number of the array in hexadecimal form in the last five characters of the WWN. For a volume migrated from an HDS system, this is clearly not the case: the last six characters of the volume's WWN are 001404 and represent the LDEV ID on the HDS system. Having a different quantity in that location compared to a natively created volume is not a problem from a technical or operational perspective. You can change the WWN of the imported volumes to a native one of the destination system containing the serial number of the destination array, but that operation is disruptive to I/O. To avoid problems, change the WWN during planned downtime.

HPE 3PAR Online Import cannot migrate volumes on the HDS system that are smaller than 256 MiB. The createmigration command fails with this error message:

preparationfailed(-NA-)(OIUERRPREP1002: Ineligible volume 00:05:02;)

Filtering the log file hpoiu.log for the volume ID reveals the following entries for the mentioned volume:

Volume: string ElementName = "00:05:02"; size is unsupported with volume size: 101

...

Volume is ineligible for migration 00:05:02 XP_UNSUPPORTED_SIZE

From this, it is clear that volume 00:05:02 is smaller than 256 MiB (it is 101 MiB in size) and hence cannot be accommodated on an HPE 3PAR StoreServ. HDS Command Devices are typically smaller than 256 MiB and, for this reason, they should be unpresented from the host before attempting createmigration. Note that the LDEV mentioned in the error message is only the first of possibly multiple volumes in the migration definition offending the minimum size. It is recommended to look up all volumes with a size below 256 MiB presented to the Host Group and unpresent them before attempting createmigration again. The presence of volumes smaller than 256 MiB on the array but not under migration poses no problem to OIU.
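To find every undersized volume in one pass, you can filter the OIU log for the eligibility keyword shown above; a sketch assuming the default log location:

CD "C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\OIUTools\tomcat\32-bit\apache-tomcat-7.0.37\logs"
FINDSTR /C:"XP_UNSUPPORTED_SIZE" hpoiu.log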

If the Command Device is larger than 256 MiB or if the LDEV is an HDS System Disk, createmigration fails with this error message:

preparationfailed(-NA-)(ERROR: OIUERRMC10016 One or more of the volumes, selected explicitly or implicitly, are ineligible for migration due to a remote copy relation. OIURSLMC10016 Remove the remote copy relation for the volume(s) selected for migration.;)

Search the OIU log file for the string CommandDevice and unpresent those that are inside the migration definition. Selecting an HDS Command Device for migration to an HPE 3PAR StoreServ serves no purpose since it only contains HDS management data. The same holds for HDS System Disks. If an LDEV with the System Disk attribute was selected, createmigration emits the same error message as mentioned earlier and inspection of the log file shows the attribute of the LDEV.


Volumes on the source HDS system that are planned for migration to the destination HPE 3PAR StoreServ must not have snapshots. This means they cannot be in a pair relationship with a volume from the snapshot pool. Suspending the pair relationship is not enough; both volumes must be in Simplex mode, meaning no pair relationship exists. Additionally, all Hitachi Online Remote Copy Manager (HORCM) instances for them in Hitachi's Command Control Interface (CCI) should be shut down and their Command Device should be unpresented from the host.
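As an illustration, assuming the pairs are managed by a hypothetical CCI device group oradb and HORCM instances 0 and 1, a sketch of this cleanup on a Windows host could be:

REM Dissolve the pair so both volumes return to Simplex (SMPL) state
pairsplit -g oradb -S
REM Shut down the HORCM instances
horcmshutdown 0 1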

The OIU server validates the syntax of all commands issued through the OIU client. The createmigration and startmigration commands return the prompt before the actual work is finished. The work done by createmigration can be viewed by executing the showmigration command repeatedly; the progress is shown as a percentage after the string preparing, as in preparing(6%), in the STATUS(PROGRESS)(MESSAGE) column. Following is an explanation of the work done by OIU at a few of the stages:

(0%) -> Discovering all volumes and ports in the source HDS array

(4%) -> Creating host on destination 3PAR

(6%) -> Creating host on source HDS array representing destination 3PAR

(14%) -> Presenting volumes under migration to the 3PAR host on the source array; creating Peer volumes on the destination 3PAR

(20%) -> Admitting Peer volumes to the 3PAR; exporting admitted volumes to the host on the destination 3PAR (only in the case of an online migration)

(100%) -> Preparation Complete

To validate the volumes, hosts, and Host Groups explicitly defined in createmigration and implicitly added, OIU scans over all LDEVs, ports, Host Groups, and hosts present in the HDS system. This can take some time for a system with thousands of objects of these kinds. Hewlett Packard Enterprise recommends refraining from presenting or unpresenting volumes to the HDS source system during the entire createmigration step.

To improve the performance of the participating arrays in an online or minimally disruptive migration (MDM), Hewlett Packard Enterprise recommends migrating the hosts with the least number of volumes or the smallest total volume size first. This frees up controller resources for the migration of larger hosts and volumes.

OIU does not inform the storage administrator executing the migration when a SAN zone change, if any, has to be carried out. This requires a good understanding of the OIU mechanism. The HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide provides the information on when exactly the zoning change has to happen.

Assume an OIU Consistency Group (CG) with some small and some large volumes in it. The continued writes by the host application(s) to the already migrated smaller volumes on the source HDS system run opposite to the migration traffic of the large volumes over the Peer links. This leads to an increasing amount of “double writes” as the migration progresses. In a write-intensive environment, this can progressively reduce the transfer throughput once the smaller volumes have been migrated. In this situation, Hewlett Packard Enterprise recommends executing the OIU migration during a period of low application activity and adding volumes to a CG only when they really bear a relationship to each other. For destination systems on HPE 3PAR OS 3.2.2 and later, a read by the host application(s) is served from the destination HPE 3PAR StoreServ if the region for it was already migrated from the source HDS Storage. This saves on application read traffic over the Peer links to the source system, compensating wholly or in part for the increased double writes when CGs are used.

As part of the preparation for an online migration, the host is zoned to the destination HPE 3PAR StoreServ. After createmigration ends, the Peer volumes are exported to the HPE 3PAR host on the source system and to the migrating host. Next, a rescan is executed on the host to activate the new paths to the Peer volumes on the destination HPE 3PAR StoreServ. After the rescan, half of the application reads and writes will take the original path from the host to the HDS system and half of them will travel over the HPE 3PAR StoreServ and over the Peer links to the HDS system. At that moment, applications making use of these volumes may experience a slight increase in latency, potentially compensated by the extra set of paths in an I/O-intensive environment. Although this setup can exist for a longer time, we recommend initiating the actual data migration shortly after the completion of the createmigration command and the host rescan to minimize the effect of the situation on the application(s).
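On a Windows host, for instance, this rescan can be scripted; a minimal sketch (other operating systems use their own rescan mechanisms):

ECHO rescan | DISKPART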

In a minimally disruptive or offline migration, the disks may stay offline for the host OS after the host reboot and even a rescan until brought online manually; this is, in particular, the case for Windows with the SAN policy set to “offline.” This means that applications that start at boot time of the host will not find their SAN LUNs and fail their startup. This potentially results in a major alert to a log host, triggering action by the application support team. To avoid this, it is recommended to disable the automatic startup of the application(s) on a host whose volumes will be migrated using the minimally disruptive or offline method. When the host is rebooted and the disks are brought online, the applications can be restarted manually.


At the start of every migration, OIU creates a host set on the destination HPE 3PAR StoreServ. This host set is called OIU_HOST_SET_x, with x an integer starting at one and incrementing by one with every migration. The host or hosts under migration are placed in this host set. After the migration ends, the host set stays in place on the destination HPE 3PAR StoreServ. It can be removed manually, if desired, after the migration ends.
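For example, assuming the first migration created OIU_HOST_SET_1, the empty host set can be removed in the HPE 3PAR CLI with:

cli% removehostset OIU_HOST_SET_1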

In the pre-migration phase, OIU creates two Host Groups on the source HDS system with Host Mode 00[Standard]. The name of these Host Groups is HCMDxxxxx, with x a hexadecimal number. One cannot influence the name of these Host Groups and they should not be renamed or removed. These Host Groups are created on the HDS FC ports that are the end points of the Peer links at the HDS side. Each of these Host Groups contains one WWN, namely that of a Peer port on the destination HPE 3PAR StoreServ. If possible, these HDS FC ports should carry minimal or no traffic from other hosts during the time planned for the OIU migration. Near the end of the pre-migration phase, OIU presents the Peer volumes created on the destination HPE 3PAR StoreServ to these newly created Host Groups. To view these Host Groups, their details, and the LUNs exported to them, you can use HCS, RWC, or this HDS CLI command, in which hcs.example.com is the name of the system running the Hitachi Command Suite (HCS) instance:

HiCommandCLI http://hcs.example.com:2001/service subtarget=HostStorageDomain hsdsubinfo=path,wwn -u xxx -p yyy serialnum=XXX model=YYY

Search for the string HCMD in the output of the above command. These Host Groups are not removed after a migration but are reused at the next one reducing the migration setup time. However, they can be removed after each migration if desired.

For an online migration, the migrated volumes get unpresented from the source HDS system by OIU when the actual data migration starts. In case the migration does not complete successfully and rollback to the initial situation is needed, these presentations from the host to the source HDS system have to be recreated. The presentation of the migrating volumes to the HCMDxxxxx Host Groups needs to be removed for a rollback as well.

You can take advantage of the Online Import migration process to convert fully provisioned volumes on the source HDS system to thin or deduped ones on the destination HPE 3PAR StoreServ. OIU is not thin aware: thin provisioned volumes on the HDS source system are imported by transferring all blocks of their “full” size. Blocks of zeroes in volumes on the source system are intercepted by the HPE 3PAR ASIC on the destination system and are not written to disk for thin destination volumes. For migration to thin volumes, you need the HPE 3PAR Thin Provisioning Software license. It is equally possible to convert thin HDS volumes to full or deduped ones on the HPE 3PAR StoreServ during an Online Import migration.

If the host under migration does not exist yet on the destination HPE 3PAR StoreServ system, OIU creates the host or hosts on the destination based on information it extracts from the Host Group(s) on the source HDS system. The name of the host created is the name of the Host Group on the source HDS system. The WWNs for the hosts are captured from the Host Group(s). The persona value for the host is retrieved from the -persona parameter in the createmigration command. This parameter is mandatory for online and MDM types of migrations and should not be present for an offline migration. If the host already exists on the destination, its persona value remains unchanged even if the parameter's value in the createmigration command differs. Changing the persona value for a host may be disruptive to I/O.

If the names of the Host Groups to which a LUN is exported differ across the HDS clusters, OIU creates a separate host on the HPE 3PAR StoreServ from each Host Group and embeds the WWNs of the Host Group in it. As an example, an HDS system may contain Host Group capri_7B in HDS Cluster 1 and capri_8B in Cluster 2. Both point to the same physical host and contain one or more WWNs. When migrating an LDEV presented to these Host Groups, OIU creates two hosts on the destination HPE 3PAR StoreServ with the host names capri_7B and capri_8B, each containing the WWN(s) of the corresponding HDS Host Group. While not harmful to HPE 3PAR operations, it is unusual to have a single host split over multiple names. This can be fixed by unexporting the volumes from the example host capri_7B and adding its WWNs to host capri_8B. Removing host capri_7B and renaming capri_8B to the generic capri finishes the operation. While this can be completed online, the volumes are exported over only half the number of paths for a short time.

If one or more of the WWNs in a Host Group on the source HDS system are allocated to a host on the HPE 3PAR StoreServ destination with a different name, the createmigration command fails with error:

preparationfailed(-NA-)(OIUERRPREP1014: Error creating host on destination;)

The solution is to either change the name of the Host Group on the source HDS system to that of the host on the HPE 3PAR StoreServ or make the change on the HPE 3PAR StoreServ. If a host is connected to the source HDS system by two paths but by only one path to the destination HPE 3PAR StoreServ, createmigration hangs at preparing (6%), looping over the following two lines in hpoiu.log every 5 seconds:


2016-06-17 16:08:59,005 [com.hp.oiu-2] INFO com.hp.oiu.source.srcxp.api.XPApi - Waiting for the createHost Job to be completed...

2016-06-17 16:08:59,005 [com.hp.oiu-2] INFO com.hp.oiu.source.srcxp.api.XPApi - Current createHost OperationalStatus: 11

This happens regardless of the presence or absence of the host on the HPE 3PAR StoreServ. Zoning in a second path from the host will end the looping but lead to a volume admit problem at the end of createmigration. Rerun the createmigration command and the operation will complete properly.

The XML files at C:\Program Files\Hewlett-Packard\hp3paroiu\OIUData\data containing the status of a migration persist across logging out or shutting down the OIU server and powering it up again: the startmigration command can be executed with the migration definition intact.

Peer links and Peer volumes

HPE 3PAR OIU requires exactly two physical FC ports configured in “Peer” connection mode and “point” connection type on the destination system to set up the Peer links. The configuration of these ports into Peer mode must be handled outside of the HPE 3PAR OIU; you can use the HPE 3PAR SSMC or the HPE 3PAR CLI for this. The Peer links must be operational before starting work in the OIU. The Peer ports and the SAN zoning for the Peer links should stay in place until the data transfer has finished and the migration definition cleanup is completed. You can reuse the Peer ports after completing the migration by changing their type back to host ports.

The Peer links between the arrays are dedicated to the Online Import migration operation and run over a single or redundant pair of SAN switches; direct FC connectivity between the source and the destination systems is not supported. Peer links over FCoE and iSCSI, directly or over switches, are unsupported as well.

Having the Peer ports and the zoning for the Peer links in place days or weeks before the data migration takes place does no harm and induces no performance penalty to applications; they carry no traffic until the OIU operation is started. Hewlett Packard Enterprise recommends setting up the Peer links and verifying their operational status a few days before the migration starts, to allow time to correct SAN zoning or other issues, if any. You can use the following CLI commands on the destination HPE 3PAR StoreServ to test their status:

showpeer

showtarget

showportdev ns n:s:p

(n:s:p is the node:slot:port identification for each of the Peer ports)

Here is example output for these commands for Peer ports on 0:2:1 and 1:2:1:

cli% showpeer

----Name/WWN---- Type Vendor ------WWN------- Port

50060E8006CF6665 Non-InServ HDS 50060E8006CF6665 0:2:1

50060E8006CF6675 Non-InServ HDS 50060E8006CF6675 1:2:1

cli% showtarget

Port ----Node_WWN---- ----Port_WWN---- ------Description------

0:2:1 50060E8006CF6665 50060E8006CF6665 reported_as_scsi_target

1:2:1 50060E8006CF6675 50060E8006CF6675 reported_as_scsi_target


cli% showportdev ns 0:2:1

PtId LpID Hadr ----Node_WWN---- ----Port_WWN---- ftrs svpm bbct flen -----vp_WWN----- -SNN-

0x326e00 0x06 0x00 50060E8006CF6665 50060E8006CF6665 0x0000 0x0000 0x0000 0x0000 20210202AC0039AD n/a

0x32f500 0x01 0x00 2FF70202AC0039AD 20210202AC0039AD 0x8800 0x0032 n/a 0x0800 20210202AC0039AD n/a

cli% showportdev ns 1:2:1

PtId LpID Hadr ----Node_WWN---- ----Port_WWN---- ftrs svpm bbct flen -----vp_WWN----- -SNN-

0x526e00 0x06 0x00 50060E8006CF6675 50060E8006CF6675 0x0000 0x0000 0x0000 0x0000 21210202AC0039AD n/a

0x52f400 0x01 0x00 2FF70202AC0039AD 21210202AC0039AD 0x8800 0x0032 n/a 0x0800 21210202AC0039AD n/a

In the example output, 50060E8006CF66xx are FC port WWNs on the HDS source system; CF66 is the serial number of the HDS Storage in hexadecimal format. The WWNs 2xxx0202AC0039AD are on the destination HPE 3PAR StoreServ, with xxx encoding the Peer ports 0:2:1 and 1:2:1. As an extra test, you can run the command showconnection in the OIU console after having added the source and destination systems. Here is example output for this command:

>showconnection

SOURCE_NAME SOURCE_UNIQUE_ID DESTINATION_NAME DESTINATION_UNIQUE_ID DESTINATION_PEER_PORT SOURCE_HOST_PORT

P9500.53094 53094 split 2FF70002AC0039AD 2021-0202-AC00-39AD(0:2:1) 5006-0E80-06CF-6665(CL7-F)

P9500.53094 53094 split 2FF70002AC0039AD 2121-0202-AC00-39AD(1:2:1) 5006-0E80-06CF-6675(CL8-F)

Verify this output against the values selected for the Peer ports and for the HDS ports (CL7-F and CL8-F). Exactly two lines of output should be produced and the word |unknown| should not appear. If the output of any of these commands deviates from what is expected, check the Peer link zoning in the SAN. Verify that the correct WWNs for the Peer ports are used, with the section 0202AC in them.

The Peer volumes are created on the HPE 3PAR StoreServ with the Peer provisioning type and RAID 0 protection. This can be verified in the HPE 3PAR SSMC and HPE 3PAR CLI but is no cause for concern. Before the actual migration starts, the Peer volumes do not contain any data. During the migration, the data lands properly on the HPE 3PAR StoreServ in the desired provisioning type and with the intended RAID protection level. At the end of the migration of a volume, the correct provisioning type and RAID level become visible in the SSMC and the CLI.

Managing Peer link throughput

For an online and minimally disruptive migration, applications on the host remain operational during the actual data transfer. During the entire duration of a volume's migration, application data is written to the source system to ensure source volume consistency in case a rollback is needed. The data written to the source system flows over the Peer links in the opposite direction of the migrating data, reducing the throughput for Online Import. When an application writes heavily to the migrating volumes, managing the throughput of imported data over the Peer links will benefit the latency of the application.

The throughput of the Peer links can be managed with one of the following methods:

1. Managing the Peer link throughput can be initiated from the HDS source array. The HDS USP and HDS VSP offer the Hitachi Server Priority Manager6 subsystem, configurable in RWC or through RAIDcom commands in the Hitachi Command Control Interface. Server Priority Manager allows upper-limit control for IOPS (single I/O granularity) and throughput (in KB/s) on the path between an HDS port and a single WWN on a host. In the case of Online Import, the WWN is that of the Peer port on the destination HPE 3PAR StoreServ, which is of the initiator type.

6 support.hds.com/download/epcra/rd702012.pdf


2. All major SAN switch vendors market models that support FC port rate limiting under the name of Quality of Service (QoS) and Ingress Rate Limiting. These subsystems effectively prioritize traffic over the path from one WWN to another one. Granularity varies from coarse (three levels) to a percentage value (100 levels).

3. Reducing the speed of the SFPs in use for the Peer links on the source and/or destination systems to below their nominal data rate effectively reduces the throughput for the Peer links.

These three methods share the drawback that they affect the bandwidth for the reads and writes by the application to the source system during the migration. When method (3) is implemented on the source system and the Peer link host ports are shared, the bandwidth available to the hosts making use of these ports is affected adversely. For these reasons, these approaches are less appealing because they may increase the latency of host applications.

An alternative method uses the -subsetvolmap option of startmigration. This option accepts volume names in the OIU destination format, with underscores replacing the colons. Here is an example of the command:

startmigration -migrationid xxx -subsetvolmap {"00_14_04","00_14_05"}

The volumes specified explicitly are migrated, while the others defined in the migration definition, as shown by showmigrationdetails -migrationid xxx, do not migrate. You cannot specify the CPG or provisioning type here; they were already defined, explicitly or implicitly, in the createmigration command. The net effect of this option is a staged import of volumes, giving the customer granular control over when and how many volumes are migrated and reducing the impact of the data transfer on applications. In another use case, the weekend's migration window for the volumes exported to the host may be too short to migrate all of them in one step. With this option, the first set of volumes is migrated this weekend while the second set is handled the week after.

Hours or days can pass between two migrations with the -subsetvolmap option. Notice that all volumes that are inside the migration definition but not yet migrated are accessed on the source system through the destination HPE 3PAR StoreServ and over the Peer links. This may incur a small latency increase even if no volume import is ongoing. The migration of the specified subset of volumes has to finish before another one can be issued, possibly with another subset of volumes. Without the option, startmigration migrates all remaining volumes in the migration definition. All volumes inside the migration definition must be moved over before another createmigration command can be issued.

The migration progress percentage values in the output of the showmigration and showmigrationdetails commands must be interpreted correctly when a subset of the volumes was migrated previously. Assume one volume of five was migrated earlier using the -subsetvolmap option of startmigration. When the migration of the four remaining volumes is underway, the progress of the entire migration may read 28 percent at a moment when the individual migrating volumes are at only 6, 9, 11, and 14 percent. The explanation is that the overall figure averages the already migrated volume, which counts as 100 percent, with the volumes migrating now: (100 + 6 + 9 + 11 + 14) / 5 = 28 percent.

For maximum throughput and the shortest migration duration, Hewlett Packard Enterprise recommends executing an Online Import migration of the online or MDM type at times of low application activity.

Post migration
Volumes migrated off the HDS system in online or MDM mode are unpresented from the host at the start of the migration. During the actual migration, they receive the application's write updates over the Peer links. When the migration completes, no more updates take place, leaving the source volumes stale. No SCSI reservation or key remains on these migrated volumes, so they can be presented again to the same or another host, for example, for tape backup. This should be done with great care since the information on them is no longer current once the migration has ended. No warranty is provided for application data consistency on the source volumes after the mirroring of the data stopped at the end of the migration. Having the migrating volumes in a CG does not help in providing application data consistency. Most applications can repair their data volumes to a good extent, recovering all or nearly all data when restarted from the source volumes.

The files containing the details of the current migration configuration are at C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\OIUData\data. The HPE 3PAR OIU does not provide a persistent location with the list of all past migrations performed and their details. The files disappear when the migration definition and the source and destination arrays are removed from the Utility configuration. To keep records of the migration activities, save the XML-formatted files to a different directory at the completion of every migration and before executing the remove* commands.
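A minimal batch sketch for saving these records, assuming the default installation path and a hypothetical archive directory D:\OIU_archive; pick a fresh target directory name per migration:

xcopy "C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\OIUData\data" "D:\OIU_archive\migration-2017-01-15" /e /i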


Migrated volumes retain the WWN they had on the source HDS system. To use the HPE 3PAR Recovery Manager products, the WWN of a migrated volume needs to be changed to a native one for the destination HPE 3PAR StoreServ. This change with setvv -wwn is disruptive for I/O to the volume.
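A sketch of the change, assuming a hypothetical migrated volume named myvol; stop all I/O to the volume first. The auto keyword, which lets the array generate a native WWN, is an assumption here, so verify the setvv syntax in the HPE 3PAR Command Line Interface Reference for your HPE 3PAR OS version:

setvv -wwn auto myvol

The new WWN can be checked afterward with showvv -d myvol.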

The command tunevv fails on volumes whose size is not a multiple of 256 MiB with this error message:

error: VV needs to be a multiple of 256 MiB

The solution is to use the growvv command to grow the volume to a 256 MiB boundary.
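A worked example, assuming a hypothetical volume myvol of 10,241 MiB: the next 256 MiB boundary is 41 × 256 = 10,496 MiB, so the volume needs 255 MiB more. The size argument of growvv is assumed to be in MiB by default; verify in the HPE 3PAR Command Line Interface Reference:

showvv -s myvol

growvv myvol 255

Once the size is a multiple of 256 MiB, tunevv succeeds on the volume.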

Miscellaneous
The HPE 3PAR OIU client and server software should not be installed on a Windows host whose volumes will be migrated with the tool. As part of an MDM, the Windows host must be shut down, precluding access to the OIU client and making it impossible to issue OIU commands to the HDS array via the HCS software.

Once the data transfer has started, an OIU migration cannot be stopped. A started migration must complete before a new one can be created and started; the Online Import Utility server does not accept a second migration definition while the first one is still ongoing.

The Utility server is a stateless, RESTful API-based engine that processes requests from a Windows Utility client. Do not connect more than one Utility client to the Utility server to issue OIU commands.

If the createmigration command fails, you need to remove the migration definition twice with the command removemigration -migrationid xxx. If createmigration and startmigration were successful, removemigration needs to be executed just once.

When the createmigration command was successful and the output of the showmigration command lists preparationcomplete(100%), the actual data migration can be started. If for some reason the migration is not to be performed after all, execute the following steps to return to the original situation:

1. Stop the application(s) making use of the migrating volumes

2. Execute the command removemigration; this will remove the Peer volumes created on the HPE 3PAR StoreServ

3. Remove the presentation of the migrating volumes to the HCMDxxxxx Host Groups on the HDS system

4. Refresh the HCS database

5. Restart the application(s)

When all volumes planned for migration are moved over from the HDS system to the HPE 3PAR StoreServ system, the entry for the HDS system in HCS can be removed. To remove HDS model XXX with serial number YYY from the HCS instance running on system hcs.example.com, the following command can be used:

HiCommandCLI http://hcs.example.com:2001/service DeleteStorageArray -u xxx -p yyy model=XXX serialnum=YYY

The region mover subsystem in HPE 3PAR OS is common between tasks executing Dynamic Optimization, Adaptive Optimization, tunevv, tunesys, snapshot promotions, Peer Motion, and Online Import. This means that every instance of these tasks executing during an Online Import operation reduces the maximum number of nine simultaneous OIU volume import tasks by one on the destination array, lowering the migration throughput. This is shown in figure 16, where two active tune_vv tasks leave room for only seven import_vv tasks. The medium priority of the import tasks moves four regions per time slice for every volume under active migration; the region move rate for the tune tasks is one region or less per time slice.
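To watch the competing region-mover tasks on the destination array during an import, the active task list can be inspected from the HPE 3PAR CLI; a quick sketch:

showtask -active

The task type column distinguishes the import tasks from the tune tasks shown in figure 16.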


Figure 16. The presence of tune tasks forces the number of migrating import tasks below nine simultaneous ones.

The result of tunesys is compromised when Online Import runs against a CPG that tunesys is working on: regions dropped “late” by Online Import into the CPG under tunesys stay in that “wrong” CPG, forcing a second tunesys run after Online Import has ended.

Troubleshooting
The HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide contains an extensive troubleshooting section at the end of that document covering a number of situations that can occur while executing Online Import. Refer to that section for help during the setup or execution of the various OIU commands and try the steps listed there to remedy the problem. The next section deals with some of the error messages and how you can recover from them.

General
If the showmigration command lists any of the following errors after createmigration returned with the prompt and a migration ID:

preparationfailed(-NA-)(:OIUERRPREP1004: Implicit hosts have multiple operating systems;)

preparationfailed(-NA-)(:OIUERRDST0008:Admit has failed. PM Rescan failed to fetch the PD list;)

preparationfailed(-NA-)(:OIUERRDST0008:Admit has failed. Volume 60060E800589E200000089E20000002C is only accessible through one peer port;)

use RWC, the SVP, the HDS CLI, or HCS to verify on the HDS system that no paths exist from the volumes to be migrated to the Host Groups HCMDxxxxx. If such paths exist, remove them, refresh the HCS database, and retry the command. If the problem persists, remove both Host Groups named HCMDxxxxx from the HDS source system and retry the createmigration command.
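If you prefer the CCI over the GUIs, here is a heavily hedged sketch of removing one such path, assuming a hypothetical LDEV 00:14:04 presented to host group number 5 on port CL1-A; the port, host group number, and LDEV are placeholders, and the exact raidcom syntax should be verified in the CCI Command Reference for your microcode level:

raidcom delete lun -port CL1-A-5 -ldev_id 0x1404

Refresh the HCS database afterward, as described above.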

If the output for the showconnection command shows |unknown| for the HDS source details like this:

>showconnection

SOURCE_NAME SOURCE_UNIQUE_ID DESTINATION_NAME DESTINATION_UNIQUE_ID DESTINATION_PEER_PORT SOURCE_HOST_PORT

|unknown| |unknown| Test3PAR 2FF70002AC0039AD 2122-0202-AC00-39AD(1:2:2) 5006-0E80-06CF-6665(|unknown|)

|unknown| |unknown| Test3PAR 2FF70002AC0039AD 2022-0202-AC00-39AD(0:2:2) 5006-0E80-06CF-6675(|unknown|)

it could mean that the Peer link zoning is not correct or that the HDS source system is no longer available. Review the SAN zoning and make sure you are using the modified WWNs of the Peer ports in the zones. Check with the showsource command whether the source array is still defined in the Online Import console environment. If it is still listed, you may want to remove it and add it again. If more than two lines appear below the header in the output of showconnection, the SAN zones for the Peer links contain WWNs of FC ports that are not involved in connecting the HDS system to the HPE 3PAR StoreServ one. The spurious WWNs should be removed to obtain just two lines of output. After remediation, showconnection may not change its output until you log out of the OIU and reconnect.

OIU from HDS Storage to HPE 3PAR StoreServ Storage does not require virtual Peer ports for any migration method or host operating system. If createmigration fails with the following messages in showmigration:

ERROR: OIUERRCS1003 There is no one-to-one mapping between the peer ports/NPIV port and the source host ports

OIURSLCS1002 Ensure that there is a one-to-one mapping between the peer ports, NPIV port and the source host ports, and that network connectivity between the host, source array and destination array is proper

OIU discovered the presence of Virtual Peer ports. Verify this with showport -peer and remove them with the controlport command.

The createmigration command shows no more than one error message per execution, although multiple errors may exist. For example, createmigration fails if a volume listed explicitly or added implicitly is in a ShadowImage, TrueCopy, or snapshot pair relationship. When that is corrected, the command may still fail because the same or another volume in the list is smaller than 256 MiB, because the destination CPG does not exist, or for yet another reason. The log file does not contain an exhaustive list of all offending items.

If during a createmigration operation the showmigration command lists preparing(12%) in the “Status” column for a long time, verify that RWC and the SVP application are not in “Resource Locked” or “Modify” mode. Close them if necessary and the createmigration will move on to 20 percent and finally finish. If during the startmigration operation the showmigration command lists unpresenting(0%) in the “Status” column for a long time, verify that RWC and the SVP application are not in “Resource Locked” or “Modify” mode. Close them if necessary and the data transfer will start immediately.

The error message from createmigration when a volume with a mainframe emulation type is specified for migration is:

ERROR: OIUERRMS10004 Could not fetch source volume details. OIURSLMS10004 Ensure that the volume you are trying to migrate is present in the source array. Wait for some time and retry.

Note that mainframe volumes are not supported for migration.

If the startmigration command fails because of a cause outside of OIU, the command can be submitted again after resolving the issue. As an example, if the volumes under migration on the source HDS system failed to get unpresented because RWC is in locked mode, startmigration will time out and fail. When RWC is unlocked, re-entering the startmigration command will execute the migration.

Clean out the OIU objects created
If the data migration did not complete for some reason (for example, a SAN issue leading to broken Peer links), use the following steps to clean up all objects created by OIU:

1. On the HPE 3PAR StoreServ destination array:

a. Unexport all Peer volumes and remove them (a CLI sketch follows at the end of this section)

b. Remove the host set named OIU_HOST_SET_X with X a whole number

2. On the HDS source array:

a. Remove any presentations of LDEVs to both Host Groups named HCMDxxxxx

b. Remove any lingering SCSI reservations on the volumes under migration—see the SCSI reservations section and figure 6

c. Remove both Host Groups named HCMDxxxxx (optional)

3. On the Windows system running the OIU server:

a. Remove the “database” of OIU by entering the following commands into a Windows Command Line batch file and executing it:

rem Stop the OIU server service
net stop HPOIUESERVER

rem Wait about 20 seconds for the service to stop
ping -n 20 localhost > NUL

rem Verify that the service is in the STOPPED state
sc query HPOIUESERVER

rem Delete the OIU database directory
rmdir /s /q "C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\OIUData"

ping -n 3 localhost > NUL

rem Restart the service; it recreates an empty OIUData directory
net start HPOIUESERVER

rem Wait and verify that the service is in the RUNNING state
ping -n 20 localhost > NUL

sc query HPOIUESERVER

pause

You can also execute these commands one by one by hand. Wiping the OIU database in the OIUData directory removes your source and destination system definitions, so you need to issue the addsource and adddestination commands again after logging in to the OIU console. Next, make sure the showconnection command shows the correct output before executing the createmigration command again.
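For step 1 above, a minimal HPE 3PAR CLI sketch, assuming a hypothetical Peer volume 00_14_04 exported at LUN 4 to host myhost and a host set named OIU_HOST_SET_0; take the actual names from the output of showvlun and showhostset on your array:

removevlun 00_14_04 4 myhost

removevv 00_14_04

removehostset OIU_HOST_SET_0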

Information collection when contacting support
When contacting HPE Support for an Online Import issue, you can proactively collect information and attach it to your request for help. The steps for collecting the information are outlined here. The default installation directory for OIU, C:\Program Files (x86)\Hewlett-Packard\hp3paroiu, is assumed; if OIU was installed in a different directory, change the paths in the following steps accordingly.

1. On the Windows Server running the OIU console software:

a. Move the data and logs directories aside to get clean, short logs:

I. Stop the HPE 3PAR Online Import service: net stop HPOIUESERVER

II. Navigate to the OIUData directory: cd C:\Program Files (x86)\Hewlett-Packard\hp3paroiu

III. Move the OIUData directory: move OIUData OIUData.old

IV. Navigate to the OIU logs directory: cd C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\OIUTools\tomcat\32-bit\apache-tomcat-7.0.59

V. Move the logs directory: move logs logs.old

VI. Restart the HPE 3PAR Online Import service: net start HPOIUESERVER

At the restart of the service, the directories that were moved are recreated with the same name in the same location.

b. Open the OIU console and log in.

c. Execute the following commands and capture the command with its options and its output in a file:

I. addsource, adddestination and showconnection

II. createmigration

III. showmigration when createmigration fails

IV. showmigrationdetails -migrationid xxx with xxx the 13-digit migration ID when createmigration fails

d. Gather the logs and the OIU database files:

I. Stop the HPE 3PAR Online Import service: net stop HPOIUESERVER

II. Navigate to the OIUData directory: cd C:\Program Files (x86)\Hewlett-Packard\hp3paroiu

III. Copy the OIUData directory including its subdirectories: xcopy OIUData OIUData_new /e /i

IV. Compress the copied files and send the output (a PowerShell sketch follows this list)

V. Navigate to the OIU logs directory: cd C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\OIUTools\tomcat\32-bit\apache-tomcat-7.0.59

VI. Copy the logs directory: xcopy logs logs_new /e /i

VII. Compress the copied files and send the output

VIII. Restart the HPE 3PAR Online Import service: net start HPOIUESERVER
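For the compression in steps IV and VII, the Compress-Archive cmdlet available in PowerShell 5.0 and later can be used; a sketch, assuming the default paths and C:\temp as the output location:

powershell -Command "Compress-Archive -Path 'C:\Program Files (x86)\Hewlett-Packard\hp3paroiu\OIUData_new' -DestinationPath C:\temp\OIUData_new.zip"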


2. On the CLI for the destination HPE 3PAR StoreServ:

a. Copy to a file the output of the following HPE 3PAR CLI commands:

I. showsys -d

II. showversion -a -b

b. After createmigration has failed, copy the output of the following commands to a file:

I. showportdev ns n:s:p with n:s:p the location of each of the Peer ports

II. showtarget

III. showtarget -lun all

3. On the server running the HCS software:

a. Collect the logs for HCS by the HDM Trouble Information Acquisition (TIA) tool. This tool is started by executing the batch file TIA.bat at C:\Program Files\HiCommand\DeviceManager\SupportTools\CollectTool or another, non-default directory. The output location of the tool is defined in TIA.properties in the same directory.
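For illustration, running the tool from its default location looks like this:

cd "C:\Program Files\HiCommand\DeviceManager\SupportTools\CollectTool"

TIA.bat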

Compress the files obtained in steps 1, 2, and 3 and send them to the HPE Support group.

Licensing
The destination HPE 3PAR StoreServ Storage must have a valid HPE 3PAR Online Import or HPE 3PAR Peer Motion license installed. A frame-based Online Import license, valid for one year, comes with every newly ordered HPE 3PAR StoreServ 8000 and 20000. Consult your HPE representative or HPE partner for more licensing information. No license is required on the source HDS Storage system.

Delivery model
Hewlett Packard Enterprise has designed the HPE 3PAR Online Import Utility with ease of use in mind. As a result, customers can execute the pre-migration steps, the actual data transfer, and the post-migration steps themselves. Assistance with the migration from HPE Technical Services Consulting is available as well.

An Online Import migration can be part of a packaged data migration or a custom data migration service. Each type of service will bring in expertise, best practices, and automation to deliver a successful end-to-end migration solution. Consult your HPE representative or HPE partner and this web page7 for more information about migration services.

Typography, terminology, and abbreviations

Note
In this white paper, text in the SimplePro font is what a storage administrator types in a computer terminal session or the output of such an action.

The following terminology and abbreviations are used in this white paper.

AD Active Directory—a directory service that Microsoft developed for Windows domain networks

CCI Command Control Interface—host-based software from Hitachi Data Systems to perform configuration and data management operations on HDS Storage systems

CIMOM Common Information Model Object Manager—a central component of the WBEM server that is responsible for the communication between clients and information providers

CLI Command Line Interface

CPG Common Provisioning Group—a template to create HPE 3PAR StoreServ volumes

Destination system The storage system that receives the data that are migrated

7 www8.hp.com/us/en/business-services/it-services/storage-services.html



FC Fibre Channel—a computer network technology that interconnects a server and a storage system

FCoE Fibre Channel over Ethernet—a computer network technology that encapsulates Fibre Channel frames over an Ethernet network

HBA Host Bus Adaptor—an expansion card in a server that provides connectivity to external devices such as storage systems

HDLM Hitachi Dynamic Link Manager—Hitachi’s multipathing software solution

HDS Hitachi Data Systems—a disk array vendor

HORCM Hitachi Online Remote Copy Manager—a framework to create and manage snapshots, ShadowImage, and TrueCopy setups

Host The server whose volume(s) are under replication from the source to the destination system

LDAP Lightweight Directory Access Protocol—an open, vendor-neutral, industry-standard application protocol for accessing and maintaining distributed directory information services over an IP network

LDEV Logical Device—a volume on an HDS Storage system

Link The logical interconnection between the source and the destination system under migration

LUSE Logical Unit Size Expansion—a feature on an HDS disk array to configure a volume by combining several smaller ones

LVM Logical Volume Manager—a method for allocating space on disk storage devices that offers striping across multiple drives and RAID protection levels

MDM Minimally Disruptive Migration—one of the three migration types in HPE 3PAR Online Import

PGR Persistent Group Reservations—a way to coordinate access to shared volumes between multiple hosts

QoS Quality of Service—a tool to manage IOPS and bandwidth from a host to a storage device

RC Remote Copy—the replication solution for HPE 3PAR StoreServ systems

RHEL Red Hat Enterprise Linux—a distribution of the Linux operating system developed for the business market

RWC Remote Web Console—an interface for managing HDS Storage systems

SED Self-encrypting disk—a hard drive with a circuit built into the disk drive controller that encrypts all data stored to magnetic media and decrypts them automatically

SMI-S Storage Management Initiative Specification—a standardized tool for storage management

Source system The storage system that contains the data to be migrated

SSMC HPE 3PAR StoreServ Management Console—a graphical user interface for managing the HPE 3PAR StoreServ

Storage Area Network, SAN A high-speed special-purpose network that interconnects different kinds of storage devices with servers running applications

SFP Small Form-factor Pluggable—a compact, hot-pluggable transceiver used for data communication over FC or FCoE

S-VOL Secondary volume—on an HDS system the volume in a pair relationship with the P-VOL or primary volume in ShadowImage, TrueCopy, or snapshot

SVP Service Processor—a laptop physically located inside an HDS Storage system to manage the array

VLUN Virtual LUN—an HPE 3PAR volume that is exported to a host

WWN World Wide Name—a 64-bit quantity that uniquely identifies an HBA interface

Zoning operation Creates, modifies, or deletes a logical connection in the SAN between an FC HBA port on the server and an FC port on the storage system

Learn more at hpe.com/3par
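© Copyright 2016–2017 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. UNIX is a registered trademark of The Open Group. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other third-party trademark(s) is/are property of their respective owner(s).

4AA6-7835ENW, January 2017, Rev. 1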