
IBM PowerHA SystemMirror cluster migration
IBM Power Systems high availability

Kunal Langer ([email protected]), Technical Consultant, IBM

04 September 2013

IBM® PowerHA® SystemMirror is an application that makes a system fault resilient and reduces downtime of applications or databases. This article helps customers to plan for and successfully accomplish cluster migration.

Introduction

The purpose of this article is to provide a step-by-step guide for migrating an existing PowerHA cluster (at PowerHA 6.1.0) to PowerHA SystemMirror 7.1.2. It explains how to plan for and accomplish a successful migration, gives an overview of the cluster variants in PowerHA 7.1.2, and describes the PowerHA migration process, the available migration methodologies, and the requirements for migration. I will discuss migration limitations and prerequisites along with the planning process, and also introduce the clmigcheck utility, which checks the current cluster configuration for any unsupported elements and collects additional information required for the migration. The actual migration steps are presented in detail so that customers can seamlessly migrate their two-node PowerHA 6.1 (single-site) clusters to PowerHA 7.1.2.

Cluster Aware AIX

Cluster Aware AIX (CAA) is a built-in clustering capability of the IBM AIX® operating system. Using CAA, administrators can create a cluster of AIX nodes and take advantage of its clustering capabilities, some of which are listed below:

• Cluster-wide event management
• Communication and storage events such as node up and down, network adapter up and down, network address changes, and disk up and down
• Predefined and user-defined events
• Cluster-wide storage naming service
• Cluster-wide command distribution
• Commands and application programming interfaces (APIs) to create clusters across a set of AIX systems: kernel-based heartbeats and messages provide a robust cluster infrastructure and, by default, use multichannel communication between nodes over the network and storage area network (SAN) physical links

Cluster repository disk

A cluster repository disk is a storage device shared across all the cluster nodes. This disk is used as a central repository. You can have only one cluster repository disk. In PowerHA 7.1.2, you can define a backup repository disk, which can be used in case the primary repository disk fails. For a linked cluster (a true XD cluster), each PowerHA site has its own repository. The repository disk cannot be mirrored using AIX Logical Volume Manager (LVM), and therefore, plan to have Redundant Array of Independent Disks (RAID) mirroring for the disk. The minimum space required for a cluster repository disk is 1 GB. Refer to the PowerHA SystemMirror Admin Guide for information on how to define a backup repository disk.
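
As a quick planning check, the following commands are one way to confirm that a candidate shared disk is unassigned and meets the 1 GB minimum; hdisk3 is a hypothetical disk name.

# lspv | grep -i none
# bootinfo -s hdisk3

The first command lists physical volumes that do not belong to a volume group; the second reports the size of the candidate disk in MB (it must be at least 1024).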

Multicast IP addresses

CAA uses multicast addresses for cluster communication between the nodes in the cluster. It is mandatory to have multicast enabled in your cluster network infrastructure.

Differences between clcomdES and clcomd

Starting with AIX 6.1 TL6 and AIX 7.1, the cluster communication daemon has been integrated into AIX as part of the CAA infrastructure. Some of the differences between the clcomdES subsystem (used by previous versions of PowerHA) and the new clcomd daemon of CAA and PowerHA 7.1 and later are provided in this section.

• Install: The clcomdES subsystem is part of the PowerHA SystemMirror installation media, whereas clcomd is part of AIX, installed with Base AIX Enterprise Edition (delivered with the bos.cluster.rte file set).

• Name: The subsystem name of the traditional cluster communication daemon is clcomdES; the new subsystem name is clcomd.

• Run ability: The clcomdES daemon is always running on the nodes installed with PowerHA SystemMirror (run from /etc/inittab). The clcomd daemon is always running on nodes even if PowerHA SystemMirror is not installed (it is run from /etc/inittab as well).

• Port: The clcomdES subsystem uses port 6191 (/etc/services). The clcomd daemon uses port 16191 (/etc/services); it also uses the clcomdES port 6191 if a PowerHA SystemMirror migration is detected.

• Cluster definition: The clcomdES subsystem uses the /usr/es/sbin/cluster/etc/rhosts file for the initial cluster definition. It can be populated with IP addresses for all available adapters on the node. In contrast, clcomd uses /etc/cluster/rhosts for the initial cluster definition. The /etc/cluster/rhosts file should be populated with the IP addresses of the cluster nodes, one per line. Then, refresh clcomd using the refresh -s clcomd command (see the example after this list).

• Definition query: The clcomdES subsystem gets the cluster definition from the PowerHA SystemMirror configuration data, whereas clcomd queries the definition of the cluster using a kernel API (making use of the CAA infrastructure).
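
For example, on a hypothetical two-node cluster, the /etc/cluster/rhosts contents and the subsequent refresh might look like the following (the addresses are illustrative only):

# cat /etc/cluster/rhosts
10.10.20.11
10.10.20.12
# refresh -s clcomd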

Differences between PowerHA 6.1 and PowerHA 7.1 and later

With the introduction of the CAA feature in AIX 7.1 and AIX 6.1 TL6, PowerHA SystemMirror has undergone a lot of architectural changes. Because of these changes and the advent of CAA, PowerHA 7.1 and later expects the communication path for each cluster node to be set to the IP address mapped to the host name. Some of the differences between PowerHA 6.1 and PowerHA 7.1 and later are listed below (a short command sketch follows the list):

• PowerHA 7.1 and later releases are based upon CAA, where monitoring and event management is built into the AIX kernel, providing a robust foundation that is not prone to job scheduling issues. In the previous releases, PowerHA monitored soft and hard errors within the cluster from various event sources using Reliable Scalable Cluster Technology (RSCT).

• In PowerHA 6.1 and lower releases, the main communication path goes from PowerHA to group services (the grpsvcs subsystem of RSCT), then to topology services (the topsvcs subsystem of RSCT), and back. In PowerHA 7.1 and later releases, the main communication path goes from PowerHA to group services (cthags) and then to CAA.

• With PowerHA 7.1, event management is handled by using a new pseudo file system architecture called Autonomic Health Advisor File System (AHAFS). This is used by CAA as its monitoring framework.

• PowerHA 7.1 uses the cluster repository disk, Fibre Channel (FC)/SAN adapters, and multicasting for heartbeating. Heartbeat is performed by sending and receiving special gossip packets across the network using the multicast protocol. The gossip packets are always replied to by other nodes. In older releases of PowerHA, IP and non-IP networks participated in heartbeats and in the detection or diagnosis of network, node, or network adapter failures. Those heartbeat packets were never acknowledged.

• PowerHA 7.1 and later releases use a special gossip protocol over the multicast address to determine node information and implement scalable reliable multicast. Older releases use the traditional cluster communication daemon (the clcomdES subsystem), which gets information from the PowerHA Object Data Manager (ODM) and uses the heartbeat mechanism provided by RSCT for node information processing.

• PowerHA 7.1 and later releases introduced system events, which are handled by the clevmgrdES subsystem. The root volume group (rootvg) system event allows monitoring of loss of access to rootvg. Loss of access to rootvg results in a log entry in the system error log and a system reboot. Older releases of PowerHA do not handle rootvg failures.
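
On a node that is already running PowerHA 7.1 or later, one informal way to see some of the subsystems mentioned above is to query the System Resource Controller; the exact set of active subsystems depends on the installed level:

# lssrc -a | egrep "clcomd|cthags|clevmgrdES|clstrmgrES"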

PowerHA 7.1.2 cluster variants

PowerHA SystemMirror 7.1.2 allows customers to configure three different styles of clusters, namely local, stretched, and linked clusters.

Local cluster – It is a simple, multinode, single-site or local cluster configured using nodes or logical partitions (LPARs) within a single data center. This is the most typical cluster configuration, providing for local PowerHA cluster fallover. Local fallover provides a faster transition onto another machine than a fallover going to a geographically dispersed site. Local clusters can benefit from advanced functions such as IBM PowerVM Live Partition Mobility (LPM) between machines within the same site. This combination of IBM PowerVM functions and IBM PowerHA SystemMirror clustering is useful for helping to avoid any service interruption for a planned maintenance event while protecting the environment in the event of an unforeseen outage.


Stretched cluster – The term denotes a cluster that has sites defined within the same geographic location. This provides for a campus-style disaster recovery and high availability cluster with cluster nodes separated by a shorter distance. The sites can be near enough to have shared logical unit numbers (LUNs) in the same SAN. The key aspect of a stretched cluster is that it uses a shared repository disk. Stretched clusters can support cross-site LVM mirroring, IBM HyperSwap®, and Geographic Logical Volume Manager (GLVM). Extended distance sites with IP-only connectivity are not possible with this configuration.

Figure 1. Example of a stretched cluster

A stretched cluster configuration can also be used with PowerHA 7.1.2 Standard Edition with the use of LVM cross-site mirroring. The stretched cluster is capable of using all three levels of cluster communication (TCP/IP, SAN heartbeat, and repository disk). The distance can be up to 15 km with direct SAN links and up to 120 km with dense wavelength division multiplexing (DWDM), coarse wavelength division multiplexing (CWDM), or other SAN extenders. This provides for synchronous replication or mirroring.

Linked cluster – The term denotes a cluster that has sites defined across geographic locations, allowing configuration of a traditional extended distance cluster between two sites, for example Brisbane and Singapore. The key aspect of a linked cluster that makes it different from extended distance clusters in previous versions is the use of SIRCOL in CAA. This means that each site has its own CAA repository disk, which is replicated automatically between sites by CAA. Linked cluster sites communicate with each other using unicast, unlike stretched or normal clusters, which use multicast. However, each site internally uses multicast, and therefore, multicast still must be enabled in the network at each site.

Figure 2. Illustration of a linked cluster


All the interfaces are defined in this type of configuration as CAA gateway addresses. CAA maintains the repository information automatically across sites through unicast address communication.

Migration overview

Unlike previous PowerHA SystemMirror migration methods, there will be some cases where the migration has to be done manually by the customer, resulting in a complete cluster outage. These conditions are detected when /usr/sbin/clmigcheck is run. There are three supported migration paths from PowerHA SystemMirror 6.1 to PowerHA SystemMirror 7.1.2. Each one requires an AIX upgrade: migration to AIX 6.1 TL8 SP1 or later, or to AIX 7.1 TL2 SP1. Migration to PowerHA SystemMirror 7.1.2 is therefore a two-phase process (a quick check of the current levels is shown after the list):

• Phase I: AIX migration or upgrade, based on the current AIX level
• Phase II: PowerHA SystemMirror migration
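
Before planning either phase, it helps to record the current AIX and PowerHA levels on every node. The following commands are one simple way to do that; cluster.es.server.rte is the PowerHA server file set:

# oslevel -s
# lslpp -l cluster.es.server.rte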

AIX migration

Refer to the AIX information center for steps on how to migrate or upgrade AIX.

PowerHA migration

PowerHA SystemMirror provides the following three different migration options.

• Offline migration: As the name suggests, this type of migration involves bringing down the entire PowerHA cluster, installing PowerHA SystemMirror 7.1.2, and then restarting cluster services one node at a time.

• Rolling migration: During a rolling migration, the workload is moved from the node where it is currently running to another node in the cluster. This is followed by the installation of PowerHA 7.1.2 and the starting of cluster services. These steps are repeated on all the remaining nodes.

• Snapshot migration: This really is not a migration at all. Customers remove the previous version of PowerHA SystemMirror and install the newer PowerHA SystemMirror 7.1.2. They then use the PowerHA SystemMirror 7.1.2 configuration interface, either the Director GUI, the System Management Interface Tool (SMIT), or the command line, to restore the configuration they previously had, that is, restore from a cluster snapshot.

Migration requirements

Before you start migrating the cluster nodes, ensure that the following tasks are completed:

1. Back up all the application and system data.
2. Create a back-out or reversion plan. A back-out plan allows for easy restoration of the cluster and AIX configuration if the migration runs into problems. System backups should be created using the mksysb and savevg utilities (see the example after this list).
3. Ensure that the Communication Path to Node option in the PowerHA cluster nodes is set to the IP address mapping to the host name.
4. Save the existing cluster configuration. Also, save any user-provided scripts, most commonly custom events, pre- and post-event scripts, notification scripts, and application controller scripts.
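
As an illustration of items 1, 2, and 4, the commands below sketch one way to capture a rootvg backup, a data volume group backup, and a cluster snapshot before starting. The file names, the volume group name datavg, and the snapshot name pre_migration are examples only, and the clsnapshot flags shown are one commonly used form; the snapshot can equally be created through the SMIT snapshot configuration panels.

# mksysb -i /backup/node1_rootvg.mksysb
# savevg -i -f /backup/node1_datavg.savevg datavg
# /usr/es/sbin/cluster/utilities/clsnapshot -c -n pre_migration -d "before 7.1.2 migration"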

Some migration requirements are as follows:

1. All cluster nodes must have one shared disk, at least 1 GB in size, to be used as the cluster repository. The following FC and SAS adapters are supported for connection to the repository disk:

• 4 GB Single-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 1905; CCIN 1910)
• 4 GB Single-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 5758; CCIN 280D)
• 4 GB Single-Port Fibre Channel PCI-X Adapter (FC 5773; CCIN 5773)
• 4 GB Dual-Port Fibre Channel PCI-X Adapter (FC 5774; CCIN 5774)
• 4 Gb Dual-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 1910; CCIN 1910)
• 4 Gb Dual-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 5759; CCIN 5759)
• 8 Gb PCI Express Dual Port Fibre Channel Adapter (FC 5735; CCIN 577D)
• 8 Gb PCI Express Dual Port Fibre Channel Adapter 1Xe Blade (FC 2B3A; CCIN 2607)
• 3 Gb Dual-Port SAS Adapter PCI-X DDR External (FC 5900 and 5912; CCIN 572A)

For the most current list of supported storage adapters, refer to the IBM PowerHA SystemMirror for AIX web page.

2. Ensure that the current network infrastructure supports multicast. Enable multicast traffic on all network switches connected to the cluster nodes.
3. Ensure that the /etc/cluster/rhosts file is properly filled with the host names or IP addresses of all cluster nodes (IP addresses mapping to the host name); otherwise, cluster communication will fail and the migration will not take place.

4. Ensure that all cluster nodes have the requisite version of AIX installed. Refer to the following table.

PowerHA version    AIX version required
PowerHA 7.1.0      AIX 6.1 TL6 SP1 or AIX 7.1 TL0 SP1
PowerHA 7.1.1      AIX 6.1 TL7 SP2 or AIX 7.1 TL1 SP2
PowerHA 7.1.2      AIX 6.1 TL8 SP1 or AIX 7.1 TL2 SP1

5. Ensure that Virtual I/O Server (VIOS) 2.2.0.1-FP24-SP01 or later is installed.

6. The following additional file sets are required:
• bos.cluster
• bos.ahafs
• bos.clvm.enh
• devices.common.IBM.storfwork (required for SAN heartbeat)

7. RSCT version (a sample lslpp check follows this list):
• rsct.core.rmc 3.1.4.0
• rsct.basic 3.1.4.0
• rsct.compat.basic.hacmp 3.1.4.0
• rsct.compat.clients.hacmp 3.1.4.0
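
A quick way to confirm that the required file sets and RSCT levels are present on a node is to query them with lslpp; the output will vary with the installed levels:

# lslpp -l "bos.cluster*" bos.ahafs bos.clvm.enh "devices.common.IBM.storfwork*"
# lslpp -l rsct.core.rmc "rsct.basic*" "rsct.compat*"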


Migration limitations

There are certain limitations in migrating to PowerHA 7.1.2 because of the structural changes. The limitations are listed below (a command for reviewing the existing topology follows the list):

• Not all configurations can be migrated.
• Configurations with FDDI, ATM, X.25, or Token Ring cannot be migrated and must be removed before migration.
• Configurations with IP Address Takeover (IPAT) using replacement or Hardware Address Takeover (HWAT) cannot be migrated and must be removed from the configuration.
• Configurations with heartbeat over IP aliasing must be removed before migration.
• Non-IP networking is accomplished differently. PowerHA 7.1.2 (and the underlying CAA) uses multicast, FC/SAN, and the cluster repository disk for heartbeating. Traditional non-IP networks such as rs232, diskhb, mndhb, tmscsi, and tmssa are not supported. These will be removed during migration.
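
One way to review the existing topology for elements that must be removed (non-IP networks such as rs232 or diskhb, IPAT via replacement, and so on) is to list the configured networks and interfaces with the cllsif utility of the existing PowerHA 6.1 installation:

# /usr/es/sbin/cluster/utilities/cllsif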

clmigcheck utility

The clmigcheck utility is part of base AIX, included with AIX 6.1 TL6 or later. It is an interactive tool that verifies the current cluster configuration, checks for unsupported elements, and collects additional information required for migration. You must run this command on all cluster nodes, one node at a time, before installing PowerHA 7.1.2. The initial screen is as follows:

----------[PowerHA System Mirror Migration Check]-------------

Please select one of the following options:
    1 = Check ODM configuration.
    2 = Check snapshot configuration.
    3 = Enter repository disk and multicast IP addresses.

Select one of the above, "x" to exit or "h" for help:

Note that at any prompt, you can type h for help about that data entry prompt.

• Option 1 checks the SystemMirror configuration data (/etc/es/objrepos) and provides errors and warnings if there are any elements in the configuration that must be removed manually. In that case, the flagged elements must be removed, the cluster configuration verified and synchronized, and this command re-run until the SystemMirror configuration data check completes without errors.

• Option 2 checks a snapshot (present in /usr/es/sbin/cluster/snapshots) and provides error information if there are any elements in the configuration that will not migrate. Because PowerHA SystemMirror provides no tools to edit a snapshot, any errors found when checking the snapshot mean that it cannot be used for migration. In this case, the customer might have to apply the snapshot on the back-level PowerHA SystemMirror, update the configuration manually, save a new snapshot, and start the procedure all over again.

• Option 3 queries the customer for the additional configuration needed and saves it in a file in /var on every node in the cluster. When option 3 is selected from the main screen, you are prompted for the repository disk and the multicast dotted decimal IP address. This data is stored in a file (/var/clmigcheck/clmigcheck.txt) on every node in the cluster. When PowerHA SystemMirror 7.1.2 is installed, this file is read and the SystemMirror configuration data is populated. The customer must use either option 1 or option 2 successfully before running option 3, which collects and stores the configuration data.

When the /usr/sbin/clmigcheck command is run on the last node of the cluster before installing PowerHA SystemMirror 7.1.2, the CAA infrastructure is started. This can be verified by running the /usr/sbin/lscluster -m command.

FC/SAN based heartbeat mechanism

Cluster communication in PowerHA 7.1 and later (and CAA) is achieved by communicating over multiple redundant paths. This includes the important process of sending and processing the cluster heartbeats by each participating node. The following redundant paths provide a robust clustering foundation:

• TCP/IP (basically using the multicast address)
• Optional SAN or FC adapters
• Repository disk

The SAN-based path is a redundant, high-speed path of communication established between the hosts by using the SAN fabric that exists in any data center between hosts. Discovery-based configuration reduces the burden of configuring the links. PowerHA 7.1.2 supports SAN-based heartbeat within a site. It is not mandatory to set up the FC or SAN-based heartbeat path; if configured, SANComm (sfwcomm, as seen in the lscluster -i output) provides an additional heartbeat path for redundancy.

The SAN heartbeat infrastructure can be accomplished in several ways:

• Using real adapters on the cluster nodes and enabling the storage framework capability (the sfwcomm device) of the host bus adapters (HBAs). Currently, FC and SAS technologies are supported. The Setting up cluster storage communication link provides more details about supported HBAs and the required steps to set up the storage framework communication.

• In a virtual environment using N-Port ID Virtualization (NPIV) or virtual Small Computer System Interface (vSCSI) with a VIOS instance, enabling the sfwcomm interface requires activating the target mode (the tme attribute) on the real adapter in the VIOS instance and defining a private virtual LAN (VLAN) (ID 3358) for communication between the partition containing the sfwcomm interface and the VIOS. The real adapter on the VIOS must be a supported HBA.

The target mode enabled (tme) attribute for a supported adapter is only available when the minimum AIX level for CAA is installed. The configuration steps are as follows:

1. Configure the FC adapters for SAN heartbeat on the VIOS instances. Use the chdev command to enable the tme attribute (see the verification sketch after this list):

# chdev -l fcsX -a tme=yes -perm

2. Run the chdev command to enable dynamic tracking and fast failure recovery on all fscsi adapters:

# chdev -l fscsiX -a dyntrk=yes -a fc_err_recov=fast_fail


3. Restart the VIOS instances.
4. On the Hardware Management Console (HMC), create a new virtual Ethernet adapter for each cluster LPAR and VIOS. Set the VLAN ID to 3358 (no other VLAN ID is allowed).
5. On the VIOS, run the cfgmgr command and check for the virtual Ethernet adapter and the sfwcomm device using the lsdev command.

# lsdev -C | grep sfwcomm
sfwcomm0 Available 01-00-02-FF Fibre Channel Storage Framework Comm
sfwcomm1 Available 01-01-02-FF Fibre Channel Storage Framework Comm

6. On the cluster nodes, run the cfgmgr command and check for the virtual Ethernet adapter and the sfwcomm device using the lsdev command.
7. No other configuration is required in PowerHA. When the cluster is up and running, you can check the status of the SAN heartbeat using the lscluster -i command.
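
After steps 1 and 2, the attribute changes can be confirmed on the VIOS before the restart, and once the cluster is active, the sfwcomm interface state can be checked from any node. The device names fcs0 and fscsi0 below are hypothetical:

# lsattr -El fcs0 -a tme
# lsattr -El fscsi0 -a dyntrk -a fc_err_recov
# lscluster -i | grep -p sfwcomm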

You can run clras from /usr/lib/cluster, as shown below, to check whether sfwcomm and dpcomm are working or not.

(0) root @ <nodename>: /usr/lib/cluster
# ./clras sancomm_status
+------------------------------------------------------------------------+
| NAME                   | UUID                                 | STATUS |
+------------------------------------------------------------------------+
| servr2.abcdefg.xxx.com | 6c3af126-d8d4-11e2-9c7a-00145ee770e9 | UP     |
+------------------------------------------------------------------------+

(0) root @ <nodename>: /usr/lib/cluster
# ./clras dpcomm_status
+------------------------------------------------------------------------+
| NAME                   | UUID                                 | STATUS |
+------------------------------------------------------------------------+
| servr1.abcdefg.xxx.com | 54119a46-d8d4-11e2-ac6b-00145ee770e9 | UP     |
+------------------------------------------------------------------------+
| servr2.abcdefg.xxx.com | 6c3af126-d8d4-11e2-9c7a-00145ee770e9 | UP     |
+------------------------------------------------------------------------+

(0) root @ <nodename>: /usr/lib/cluster #

PowerHA migration

Before migrating to PowerHA 7.1 and later, test whether the nodes in your environment support multicast-based communication. To test end-to-end multicast communication for all nodes used to create the cluster on your network, run the mping command, which is part of the CAA framework of AIX. You can run mping with a specific multicast address; otherwise, the command uses a default address. The following is an example of the mping command for the multicast address 228.168.101.43, where nodeA is the receiver and nodeB is the sender. You must run the commands from both nodes at the same time:

1. From nodeA, run mping -r -v -c 5 -a 228.168.101.43
2. From nodeB, run mping -s -v -c 5 -a 228.168.101.43

Repeat the steps, this time reversing the sender and receiver.

Offline migration

You can choose to stop cluster services on all nodes and then install PowerHA 7.1. After all the checks are successful, the clconvert utility runs from installp to convert the configuration represented in the back-level PowerHA configuration data classes to the PowerHA 7.1 and later version. This includes running mkcluster to create the CAA version of the cluster, in addition to adding any discovered interfaces that were not in the previous version of PowerHA (such as the SAN/FC heartbeat interface sfwcomm). After AIX has been migrated, follow these steps to migrate the PowerHA level to PowerHA 7.1 and later.

1. Stop cluster services on all cluster nodes. Use the smitty clstop command and select the Bring a Resource Group Offline option.
2. Ensure that the cluster services have been stopped. Use the lssrc -ls clstrmgrES command to check the cluster state. It should be ST_INIT (a sample state check is shown after this list).
3. Run /usr/sbin/clmigcheck on the first node and select option 1.
4. If the cluster cannot be migrated, the clmigcheck utility will indicate that in error messages. Remove the unsupported elements. If no errors are reported, skip step 5.
5. Perform a verification and then synchronize.
6. Run clmigcheck once again and select option 1. The clmigcheck command reports The ODM has no unsupported elements, as shown in the following figure.
7. Now select option 3 to enter the repository disk information and optionally provide the multicast IP address. The data is saved in the /var/clmigcheck/clmigcheck.txt file on each node. You need to enter this information only on the first node.
8. Populate the /etc/cluster/rhosts file on this node with the IP addresses of all the cluster nodes (addresses corresponding to the hostname command).
9. Refresh the clcomd daemon by running the refresh -s clcomd command.
10. Install PowerHA 7.1 and later on the first node.
11. Run the following steps on all remaining nodes, one at a time.
   a. Run /usr/sbin/clmigcheck. It prompts you to install the new version of PowerHA, as shown in the message in the following figure.
   b. Add the IP addresses of all the cluster nodes in the /etc/cluster/rhosts file and refresh the clcomd daemon.
   c. Install PowerHA 7.1.2.
12. /usr/sbin/clmigcheck detects the last node when it runs and creates a cluster-aware infrastructure, that is, a CAA cluster on all the nodes. This can be verified by running the /usr/sbin/lscluster -m command.
13. Update the /etc/cluster/rhosts file, refresh clcomd, and install PowerHA SystemMirror 7.1.
14. Start the cluster services, one node at a time, and ensure that each node successfully joins the cluster. After the last node has joined the cluster, your migration is successful.
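
Steps 2, 12, and 14 all involve confirming the node and cluster state; one way to do that from any node is shown below (treat it as a quick sanity check rather than a formal verification):

# lssrc -ls clstrmgrES | grep -i state
# /usr/sbin/lscluster -m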

Rolling migration

In a rolling migration, the newer version of PowerHA is installed (one node at a time), while the remaining nodes continue to run cluster services and host the workload. In this mixed-version state, PowerHA continues to respond to cluster events. In a rolling migration, you stop cluster services on the target node with the Move Resource Groups option. There is a brief interruption while the application moves to the backup or fallover node, and a second interruption while the application moves back to the primary or home node after it has been migrated. The steps to migrate are as follows:

1. Run /usr/sbin/clmigcheck on the first node, and select option 1.
2. If the cluster cannot be migrated, error messages will be displayed. In that case, remove all unsupported elements.
3. Verify and synchronize the corrected cluster definition from the first node.
4. Populate the /etc/cluster/rhosts file on this node, and refresh the clcomd daemon.
5. Run clmigcheck again to verify that there are no further unsupported elements. The message The ODM has no unsupported elements is displayed, as shown in the following figure.
6. Stop the cluster services with the Move Resource Groups option.
7. Run clmigcheck again and select option 3. Enter the shared repository disk and, optionally, provide the multicast IP address. The information is saved in /var/clmigcheck/clmigcheck.txt.
8. Install the newer version of PowerHA on this node.
9. After the installation is complete, start the cluster services.
10. On the remaining nodes, follow these steps, one node at a time, after stopping the cluster services with the Move Resource Groups option.
   a. Populate the /etc/cluster/rhosts file with the IP addresses of all cluster nodes.
   b. Refresh the clcomd daemon.
   c. Run /usr/sbin/clmigcheck. A message, as shown in the following figure, is displayed.
   d. Install PowerHA 7.1.2 on the node.
   e. Start cluster services.
11. /usr/sbin/clmigcheck will detect the last node when it runs and will create a CAA cluster on all the nodes. Run the /usr/sbin/lscluster -m command to verify this.


12. Start cluster services on the last node. After the last node joins the cluster, your migration is complete.

Snapshot migration

The snapshot migration path requires cluster services to be down on all the nodes, thus calling for a cluster outage or application downtime. To migrate a cluster using this path, you need to perform the following steps.

1. Create a cluster snapshot. By default, the snapshot is saved in the /usr/es/sbin/cluster/snapshots directory. Save a copy of it in /tmp or some other location.
2. Stop the cluster services on all nodes using the Bring Resource Groups Offline option.
3. Run /usr/sbin/clmigcheck on the first node, and then select option 2. Enter the snapshot name.
4. If the utility reports errors for unsupported elements, the snapshot cannot be migrated. In this case, remove all unsupported elements reported by clmigcheck. If no errors are reported, go to step 7.
5. Take a new cluster snapshot and save a copy of it in /tmp.
6. Run /usr/sbin/clmigcheck again with option 2 to ensure that there are no unsupported elements.
7. Choose option 3 in /usr/sbin/clmigcheck to enter the shared disk (repository disk) and, optionally, the multicast address.
8. Remove the existing version of the PowerHA software on all cluster nodes.
9. In the /etc/cluster/rhosts file, fill in the IP addresses of all cluster nodes (IP addresses corresponding to the host name command).
10. Refresh clcomd using the refresh -s clcomd command.
11. Install the newer version of PowerHA.
12. Convert the snapshot using the clconvert_snapshot command:

# /usr/es/sbin/cluster/conversion/clconvert_snapshot -v 6.1.0 -s <snapshot file name>

13. Restore the converted snapshot. Use the path: smitty sysmirror -> Cluster Nodes and Networks -> Manage the Cluster -> Snapshot Configuration -> Restore the Cluster Configuration from a Snapshot.
14. After the restoration is done, run verification and synchronization. This creates and enables the CAA infrastructure. You can verify this using the lscluster -m command.
15. Start the cluster services, one node at a time. After the last node joins the cluster, the migration is complete.

Resources

Learn

• What's New in PowerHA
• PowerHA version compatibility matrix
• Cluster Aware AIX
• PowerHA Information Center
• How to test multicast


• AIX 7.1 migration
• AIX 6.1 migration
• Stay current with IBM developerWorks technical events and webcasts focused on a variety of IBM products and IT industry topics.
• Refer to the Planning a two-node IBM PowerHA SystemMirror cluster - Six must-know items tutorial for advice and guidance on building a PowerHA cluster.

Get products and technologies

• Find and download service packs from Fix Central.

Discuss

• Participate in the discussion forum
• Get involved in the My developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis.


About the author

Kunal Langer

Kunal Langer works as a Power Systems Technical Consultant in Systems and Technology Group Lab Based Services (LBS), based out of India. He has more than six years of experience in AIX and PowerHA development, testing, and support, and demonstrated expertise in PowerHA SystemMirror installation, configuration, administration, testing, and development. He has experience in interacting with customers and handling customer-critical situations. You can contact Kunal at [email protected].

© Copyright IBM Corporation 2013 (www.ibm.com/legal/copytrade.shtml)
Trademarks (www.ibm.com/developerworks/ibm/trademarks/)