© Copyright IBM Corporation, 2011.
Disaster recovery using Veritas Storage Foundation Enterprise HA with IBM Storwize V7000 Metro Mirror and
Global Mirror
Solution installation and configuration
IBM Systems and Technology Group ISV Enablement
April 2011
Table of contents
Abstract
Introduction
  About high availability
  About disaster recovery
  About IBM
  About Symantec
  About Veritas Storage Foundation HA
  About IBM Storwize V7000 system
    Storwize V7000 product highlights
Overview
Test system configuration
Installing and configuring Storwize V7000
  Fabric zoning
  Storwize V7000 software installation
  Storwize V7000 storage configuration
  Setting up the Storwize V7000 remote copy partnership
  Creating Storwize V7000 Metro Mirror, Global Mirror, and FlashCopy consistency groups
  Creating Storwize V7000 Metro Mirror and Global Mirror relationships and FlashCopy mappings
Installing Veritas Storage Foundation
  Symantec product licensing
  Supported AIX operating systems
  Database requirements
  Disk space
  Environment variables
  Virtual IP address
  Prerequisites for local and remote cluster installation
  Mounting a software disk
  Installing SFHA 5.1 SP1 PR1 using the Veritas product installer
  Installing Veritas Storage Foundation HA using the web installer interface
  Installing VCS agents
Installing the VCS agent for Oracle
Installing the VCS agent for IBM SAN Volume Controller copy services
Installing and configuring Oracle
Configuring applications for disaster recovery
  Quick setup
  General configuration steps
  Setting up the Storwize V7000 cluster partnership
  Configuring the Storwize V7000 Metro Mirror, Global Mirror, and FlashCopy relationships
  Setting up the VCS Global Cluster
  Configuring the VCS application service group
  Adding the VCS SAN Volume Controller copy services resource
    Before you configure the agent for SVCCopyServices
    Adding the agent resource type
    SVCCopyServices resource type definition
    Attribute definitions for the SVCCopyServices agent
    Required attributes
  Setting up Fire Drill
    About the Fire Drill resource
    SVCCopyServicesSnap resource type definition
    Attribute definitions for the SVCCopyServicesSnap agent
    Required attributes
    Configure the Fire Drill service group
    Enable the Fire Drill attribute
Failover scenarios
  Application host failover
  Disaster recovery in a Global Cluster configuration
  Checking failover readiness with Fire Drill
Summary
Appendix A: OpenSSH client configuration between AIX and Storwize V7000
Appendix B: Setting up the database applications
  Setting up the Oracle database application
Appendix C: Sample main.cf file - isvp_sfha_clusterA
Appendix D: Veritas Software file sets listing
Trademarks and Special Notices
Abstract
This paper describes how Symantec and IBM installed, configured, and validated high availability (HA) and disaster recovery (DR) configurations for Oracle with IBM Storwize V7000 systems. The validation covers a local HA configuration using Veritas Storage Foundation and Veritas Cluster Server (VCS). That configuration was then extended to a DR configuration using IBM Storwize V7000 Metro Mirror for synchronous replication and Storwize V7000 Global Mirror for asynchronous replication, with the VCS agent for IBM SVCCopyServices and the VCS Global Cluster Option (GCO) providing alternate-site failover and failback capability.
Introduction
Infrastructure for mission-critical applications must be able to meet the organization's recovery time
objective (RTO) and recovery point objective (RPO) for resuming operation in the event of a site disaster.
This solution addresses environments where the RPOs and RTOs are in the range of minutes to a few
hours. While backup is the foundation for any DR plan, typical RTOs for tape-based backup are well
beyond these objectives. Also, replication alone is not enough as having the application data at a DR site
is of limited use without also having the ability to start the correct sequence of database management
systems, application servers, and business applications.
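The way these objectives constrain a replication design can be sketched numerically. A minimal illustration follows; all figures here are invented for the example and are not measurements from this validation.

```python
def worst_case_rpo_minutes(cycle_minutes, apply_lag_minutes):
    """Worst-case data loss for an asynchronous mirror: up to one full
    replication cycle of updates plus in-flight data not yet applied."""
    return cycle_minutes + apply_lag_minutes

def meets_objective(actual_minutes, objective_minutes):
    """True when the achieved figure is within the stated objective."""
    return actual_minutes <= objective_minutes

rpo = worst_case_rpo_minutes(cycle_minutes=5, apply_lag_minutes=2)
print(rpo, meets_objective(rpo, objective_minutes=15))  # 7 True
```

The same comparison applies to RTO: the measured failover time (detection, storage role reversal, application restart) must fit within the stated objective, which is what motivates the automated failover described next.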
Symantec's DR solutions, Metro Clustering and Global Clustering, are extensions of local HA clustering
using Veritas Storage Foundation and Veritas Cluster Server. This validated and documented solution is
an example of Global Clustering, which is a collection of two or more VCS clusters at separate locations
linked together with VCS Global Cluster option to enable wide-area failover and disaster recovery. Each
local cluster within the global cluster is connected to its own shared storage. Local clustering provides
local failover for each site. IBM® Storwize® V7000 storage system Metro Mirror replicates data between
sites to maintain synchronized copies of storage at the two sites. For a disaster that affects an entire site,
the customer makes a decision on whether or not to move operations to the disaster recovery site. When
that decision to move the operations is made, the application is automatically migrated to a system at the
DR site. IBM Storwize V7000 Global Mirror replicates data asynchronously between sites, making the
replicated data available for recovery at the DR site.
About high availability
The term high availability or HA refers to a state where data and applications are highly available because
software or hardware is in place to maintain the continued functioning in the event of computer failure. HA
can refer to any software or hardware that provides fault tolerance, but generally the term has become
associated with clustering. Local clustering provides HA through database and application failover.
Veritas Storage Foundation Enterprise HA (SF/HA) includes Veritas Storage Foundation and Veritas
Cluster Server and provides the capability for local clustering. The Storwize V7000 disk system includes a
wide range of HA features as well.
About disaster recovery
Wide-area disaster recovery provides the ultimate protection for data and applications in the event of a
disaster. With an appropriate disaster recovery solution in place, if a disaster affects a local or
metropolitan area, data and critical services can be failed over to a site hundreds or even thousands of
miles away. IBM Storwize V7000 Metro Mirror and Global Mirror, combined with Veritas Storage
Foundation Enterprise HA/DR provide the capability for implementing disaster recovery.
About IBM
IBM is one of the world's largest information technology companies, with over 80 years of leadership in
helping businesses innovate by delivering a wide range of solutions and technologies that enable
customers, large and small, to deliver more efficient and effective services. IBM's comprehensive server,
storage, software, and services portfolio is designed to help you create new business insight by
integrating, analyzing, and optimizing information on demand. From its foundations of virtualization,
openness, and innovation through collaboration, IBM can optimize management of information through
technology innovations and infrastructure simplification to help achieve maximum business productivity.
You can visit IBM at ibm.com.
About Symantec
Symantec is a global leader in infrastructure software, enabling businesses and consumers to have
confidence in a connected world. The company helps customers protect their infrastructure, information,
and interactions by delivering software and services that address risks to security, availability,
compliance, and performance. Headquartered in Cupertino, California, Symantec has operations in more
than 40 countries. You can visit Symantec at symantec.com.
About Veritas Storage Foundation HA
Veritas Storage Foundation HA is a comprehensive solution that delivers data and application availability
by bringing together two industry-leading products: Veritas Storage Foundation and Veritas Cluster
Server.
Veritas Storage Foundation provides a complete solution for heterogeneous online storage management.
Based on the industry-leading Veritas Volume Manager and Veritas File System, it provides a standard
set of integrated tools to centrally manage explosive data growth, maximize storage hardware
investments, provide data protection, and adapt to changing business requirements. Unlike point
solutions, Storage Foundation enables IT organizations to manage their storage infrastructure with one
tool. With advanced features, such as centralized storage management, nondisruptive configuration and
administration, dynamic storage tiering, dynamic multipathing, data migration, and local and remote
replication, Storage Foundation enables organizations to reduce operational costs and capital
expenditures across the data center.
Veritas Cluster Server is an industry leading clustering solution for reducing both planned and unplanned
downtime. By monitoring the status of applications and automatically moving them to another server in
the event of a fault, Veritas Cluster Server can dramatically increase the availability of an application or
database. Veritas Cluster Server can detect faults in an application and all its dependent components,
including the associated database, operating system, network, and storage resources. When a failure is
detected, Veritas Cluster Server gracefully shuts down the application, restarts it on an available server,
connects it to the appropriate storage device, and resumes normal operations. Veritas Cluster Server can
temporarily move applications to a standby server when routine maintenance, such as upgrades or
patches, requires that the primary server be taken offline.
About IBM Storwize V7000 system
The IBM Storwize V7000 system is a large midrange storage platform that, in addition to new functionality,
leverages proven virtualization technologies available previously in other popular IBM storage products.
The IBM Storwize V7000 system was introduced in the fall of 2010 to provide a cost-effective, flexible
storage platform designed to help users reduce the costs and complexity associated with storage, and
improve efficiency. Similar to the earlier IBM System Storage® SAN Volume Controller product, the IBM
Storwize V7000 system is capable of pooling storage volumes from other IBM and non-IBM storage
systems into reservoirs of capacity for centralized management. One of the key differentiators between
the Storwize V7000 system and the IBM SAN Volume Controller is that the Storwize V7000 ships with
serial-attached SCSI (SAS) drives, solid-state drives (SSDs), or a combination of the two in the primary
storage engine, and can be scaled further in size using expansion enclosures containing additional drives.
Storwize V7000 product highlights
The Storwize V7000 system is designed to:
• Provide midrange customers with a cost-effective, scalable storage platform that offers advanced
  features typically available only with more expensive enterprise-class products
• Deliver a new, easy-to-use graphical interface that makes exploiting the features of this platform
  simple
• Introduce a new automated tiering capability called IBM Easy Tier™ technology which, when
  enabled, can reallocate the busiest disk extents to faster SSDs, maximizing the utilization of
  storage resources
• Provide multiprotocol support for Fibre Channel (FC) and Internet Small Computer Systems
  Interface (iSCSI) attachment
• Deliver the same key virtualization features made popular with IBM SAN Volume Controller. The
  Storwize V7000 system can also:
  − Combine storage capacity from multiple disk systems into a single reservoir of capacity
    that can be better managed as a business resource and not as separate boxes
  − Help increase storage utilization by providing host applications with more flexible access
    to capacity
  − Help improve productivity of storage administrators by enabling management of
    heterogeneous storage systems using a common interface
  − Support improved application availability by insulating host applications from changes to
    the physical storage infrastructure
  − Enable a tiered storage environment in which the cost of storage can be better matched
    to the value of data
  − Support advanced copy services from higher- to lower-cost devices and across storage
    systems from multiple vendors
Find the IBM Storwize V7000 on the web at: ibm.com/systems/storage/disk/storwize_v7000/index.html
Overview
This whitepaper illustrates the steps involved in installing and configuring Veritas Storage Foundation HA
and IBM Storwize V7000 Metro Mirror (synchronous copy) or IBM Storwize V7000 Global Mirror
(asynchronous copy) for a host failover and disaster recovery readiness scenario.
The host failover scenario simulates a fault in a node of the two-node Veritas Cluster Server (VCS) cluster
at Site A. After the application service group fails over to the other node, a fault is injected into that node,
causing the application to fail over to a node of the two-node cluster at Site B.
The disaster recovery readiness scenario simulates a disaster by introducing a fault that affects both
nodes of the cluster and the storage at Site A, causing the application to fail over to a node of the
two-node cluster at Site B. VCS configured with the Global Cluster Option (GCO) brings up the
application global service group on that node at Site B. As soon as Site A is restored and becomes
available, the application is failed back to Site A. The VCS IBM Storwize V7000 Copy Services agent
manages the state of the Storwize V7000 remote copy relationship required for failover and failback. The
“Failover scenarios” section provides procedural details.
Two sets of volume groups are used for the configuration described in this section: one set for Metro
Mirror and another for Global Mirror. Optionally, the VCS Fire Drill is configured to test DR readiness; a
fire drill procedure verifies the fault readiness of a disaster recovery configuration. VCS Fire Drill uses
IBM Tivoli® Storage FlashCopy® Manager to create copies of the Metro Mirror or Global Mirror target
volumes at Site B, and a third volume group set is used for FlashCopy.
This paper provides instructions for installing and configuring a test setup and then provides procedures
for the scenarios.
Test system configuration
The cluster configuration for the test setup is shown in Figure 1. This configuration includes:
• A cluster at Site A consisting of two IBM POWER7® processor-based servers running IBM AIX®,
  which are virtual I/O (VIO) client logical partitions (LPARs) configured as a two-node VCS cluster,
  with a Storwize V7000 system providing FC-attached storage volumes to the AIX VIO server and
  then through to the client LPARs.
• A cluster at Site B consisting of two IBM POWER7 processor-based servers running IBM AIX,
  which are VIO client LPARs configured as a two-node VCS cluster, with a Storwize V7000
  system providing FC-attached storage volumes to the AIX VIO server and then through to the
  client LPARs.
• GCO enabled in VCS on each cluster to allow failover of resource groups between the two
  clusters.
[Figure 1 diagram: cluster isv_sfha_clusterA at Site A, with nodes isvp12 and isvp13 (AIX 7.1, SFHA 5.1 SP1 PR1) on an IBM p790 and an IBM Storwize V7000 at code level 6.1.0.0; cluster isv_sfha_clusterB at Site B, with nodes isvp14 and isvp15 (AIX 7.1, SFHA 5.1 SP1 PR1) on an IBM p790 and an IBM Storwize V7000 at 6.1.0.0; the sites are interconnected through IBM 2498-B24 (SAN24B-4) switches.]
Figure 1: VCS cluster and IBM Storwize V7000 Metro Mirror and Global Mirror for disaster recovery
The high-level configuration setup built for this white paper (refer to Figure 1) consists of four application
hosts. The configuration and hardware and software components are listed in Table 1 and Table 2. The
hosts are VIO client LPARs to a single VIO server on each of the two physical IBM Power® 790 servers.
The four LPARs are configured to form two 2-node Storage Foundation for High Availability (SFHA)
clusters. Figure 2 shows a detailed test system configuration.
The two clusters represent two sites, Site A and Site B. The cluster at Site A is the active cluster and the
cluster at Site B is the standby cluster. Table 5 shows the Storwize V7000 logical unit number (LUN)
mapping. As this is a VIO server configuration, all volumes are mapped to the VIO server on the
Power 790 server at each site, and then passed through to a virtual Small Computer System Interface
(VSCSI) adapter on the VIO client LPAR (the cluster nodes).
In the test configuration, there is only one interswitch link (ISL) shown between the remote and local
cluster. This is adequate for demonstration but the design of a production configuration should ensure
adequate provisioning of intercluster bandwidth. More than two ISLs might be required and they might
need to be trunked. In this case, 900 Mbps of bandwidth has been allocated for the intercluster
connectivity.
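That allocation can be sanity-checked against the expected peak write rate of the replicated applications. A rough sketch follows; the 900 Mbps link figure is from this setup, but the 60 MB/s write rate is a hypothetical workload, not a measurement:

```python
def isl_headroom_mb_per_s(link_mbps, peak_write_mb_per_s):
    """Spare replication capacity on the intercluster link.
    link_mbps is in megabits per second; writes are in megabytes per second.
    A positive result means replication can keep up with the write rate."""
    return link_mbps / 8 - peak_write_mb_per_s

print(isl_headroom_mb_per_s(900, 60))  # 52.5
```

A sustained negative result would mean Global Mirror cannot keep pace with host writes, which is when additional or trunked ISLs become necessary.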
[Figure 2 diagram: at Site A, Storwize V7000 isv7k4 (storage pool isvp, 2 TB, built from mdisk3 and mdisk6 through mdisk11) presents the source volumes sfha_gmdisk001/002 (Global Mirror) and sfha_mmdisk001/002 (Metro Mirror) to cluster isv_sfha_clusterA (nodes isvp12 and isvp13, IBM p790, AIX 7.1, Symantec SFHA 5.1 SP1 PR1). At Site B, Storwize V7000 isv7kd10 (storage pool mdiskgrp0, 8.1 TB) presents the target volumes sfha_dr_gmdisk001/002 and sfha_dr_mmdisk001/002, plus the Fire Drill FlashCopy volumes sfha_FD_dr_gmdisk001/002 and sfha_FD_dr_mmdisk001/002, to cluster isv_sfha_clusterB (nodes isvp14 and isvp15, IBM p790, AIX 7.1, Symantec SFHA 5.1 SP1 PR1). The sites connect through IBM 2498-B24 (SAN24B-4) switches SAN01 through SAN04.]
Figure 2: Test cluster storage configuration with Storwize V7000
Application hosts (VCS cluster servers, IBM POWER7 processor-based LPARs)
  Cluster sites: Site A (Primary) / Site B (Secondary)
  VCS cluster names: isvp_sfha_clusterA / isvp_sfha_clusterB
  Cluster node names: isvp12, isvp13 / isvp14, isvp15
  Number of processors: 2 per node
  Processor clock speed: 3000 MHz (isvp12, isvp13) / 3300 MHz (isvp14, isvp15)
  Processor type: 64-bit; kernel type: 64-bit
  Memory size: 8192 MB per node
  Physical machine type/model: 8233-E8B
  Activated platform level: 86
Storwize V7000 configuration
  Cluster sites: Site A (Master) / Site B (Auxiliary)
  Hardware model: 2076-124
  Basic I/O System (BIOS) version: American Megatrends v.28
  Code level: 6.1.0.0 (build 49.0.1010130001)
  System names: isv7k4.storage.tucson.ibm.com / isv7kd10.storage.tucson.ibm.com
  Cluster node names: node1 (configuration node) and node2 on each system
  Host bus adapter (HBA) ports: 4 per system
  Capacity: 15.5 TB (3.5 TB assigned) / 8.1 TB (1.8 TB assigned)
  I/O group: io_grp0
Table 1: Hardware configuration
SAN
  Switch model: IBM 2498-B24 (four switches)
  Firmware version: V6.4.0a
  Ports: 24 each

Vendor, software, and version:
  IBM: AIX 7.1, 7.1.0.0
  Oracle Corporation: Oracle 11gR2
  Symantec: Veritas Storage Foundation Enterprise, 5.1 SP1 PR1
  Symantec: Veritas High Availability Agent for Oracle by Symantec, 5.1 SP1 PR1
  Symantec: Veritas Clustering Support for IBM SVCCopyServices (VRTSvcssvc.rte), 5.0.6.0
Table 2: Software configuration
Table 3 lists the product documentation set that is required for installing, configuring, and
troubleshooting the setup.
Product: Storwize V7000
  Storwize V7000 Information Center
  http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp
Product: Veritas Storage Foundation Enterprise HA
  Veritas Storage Foundation and High Availability Installation Guide
  https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/sf_install_51sp1pr1_aix.pdf
Product: Veritas Cluster Server
  Veritas Cluster Server Installation Guide
  https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/vcs_install_51sp1pr1_aix.pdf
  Veritas Cluster Server Administrator's Guide
  https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/vcs_admin_51sp1pr1_aix.pdf
Product: Veritas Volume Manager
  Veritas Volume Manager Administrator's Guide
  https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/vxvm_admin_51sp1pr1_aix.pdf
Product: Veritas Cluster Server agents
  Veritas Cluster Server Agent for Oracle Installation and Configuration Guide
  https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/vcs_oracle_agent_51sp1pr1_aix.pdf
  Veritas Cluster Server Agent for IBM SVCCopyServices Installation and Configuration Guide
  https://sort.symantec.com/agents/download_docs/2192/vcs_svc_install
Table 3: Required documents
Installing and configuring Storwize V7000
Included here are some general guidelines for configuration of the Storwize V7000 in preparation for the
Storage Foundation for High Availability installation.
Fabric zoning
For more details on zoning requirements for Storwize V7000, review the sections on Zoning details and
Zoning examples available in the Storwize V7000 Information Center at
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp. Table 4 shows the zones created for this
configuration.
Zone configuration, fabrics 1 and 2 (SAN01, SAN03 / SAN02, SAN04):
  ISV7K4: includes aliases for two Storwize V7000 ports and one port each from ISVP12 and ISVP13
  ISV7KD10: includes aliases for two Storwize V7000 ports and one port each from ISVP14 and ISVP15
  ISV7KD10_ISV7K4_Mirror: includes one Storwize V7000 port from each disk system to allow formation
  of the intercluster link for remote copy (Global Mirror and Metro Mirror)
Table 4: Fabric zoning
Storwize V7000 Software installation
You can refer to the IBM Storwize V7000 Information Center at
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp to obtain detailed planning, installation, and
software configuration instructions.
From the Storwize V7000 GUI or command-line interface (CLI), ensure that the FlashCopy, Metro Mirror,
and Global Mirror licenses are activated.
Storwize V7000 storage configuration
This section assumes that the Storwize V7000 storage subsystems are up and running. Provided you
have configured the Secure Shell (SSH) client on each participating node in your cluster, you can
SSH/login to the Storwize V7000 CLI from any VCS cluster node or you can connect to the Storwize
V7000 integrated GUI console using the IP address of the controller canister. You can find detailed
instruction on configuring SSH between AIX hosts and the Storwize V7000 disk system in “Appendix A:
OpenSSH client configuration between AIX and Storwize V7000”.
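With key-based SSH in place, scripts on a cluster node can drive the Storwize V7000 CLI non-interactively. A minimal sketch follows; the host name is the one used in this setup, while the admin user and the exact command string are illustrative assumptions:

```python
import subprocess

def svc_cli_argv(host, command, user="admin"):
    """Build the argv list for running one SVC CLI command over SSH.
    The returned list can be inspected or handed to subprocess.run."""
    return ["ssh", f"{user}@{host}", command]

argv = svc_cli_argv("isv7k4.storage.tucson.ibm.com", "svcinfo lsvdisk -delim :")
print(argv)
# On a configured cluster node, the call would be:
# subprocess.run(argv, check=True, capture_output=True, text=True)
```

Keeping the command construction in one helper makes it easy to log every CLI invocation the agent scripts issue against the disk systems.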
This section demonstrates using the Storwize V7000 CLI to configure volumes from the managed disks
(MDisks) available to the Storwize V7000 storage system. With Storwize V7000, MDisks can be either
LUNs from an external SAN-attached storage device or, alternatively (as in this case), SAS drives or
SSDs inside the Storwize V7000 disk system.
In this configuration, the storage is Storwize V7000 physical disk. The volumes were created from a storage pool containing seven MDisks, each of which represents an individual RAID5 array spanning 7 or 8 x 300 GB SAS drives, delivering a usable capacity of 1.6 TB or 1.9 TB, respectively.
Typical disk configuration consists of initial disk discovery (in the case of externally-attached SAN disks),
or configuration of existing physical disks in the Storwize V7000 disk enclosures into MDisks/arrays of a
particular RAID type, then adding these to one or more storage pools. Volumes are then created from the
available capacity in the storage pools and finally are mapped to the hosts.
The volume and host mappings required for this setup are shown in this section. Table 5 shows the volumes configured for Metro Mirror and Global Mirror in this configuration. The following commands provide an example of creating storage pools (formerly managed disk groups), creating volumes, and mapping volumes to hosts.
Creating a storage pool (formerly known as managed disk group):
IBM_2076:ISV7K4:admin> svctask detectmdisk
IBM_2076:ISV7K4:admin> svcinfo lsmdiskcandidate
IBM_2076:ISV7K4:admin> svcinfo lsmdisk -delim : -filtervalue mode=unmanaged
IBM_2076:ISV7K4:admin> svctask mkmdiskgrp -name isvp -ext 256 -mdisk mdisk3
IBM_2076:ISV7K4:admin> svctask addmdisk -mdisk mdisk6:mdisk7:mdisk8:mdisk9:mdisk10:mdisk11 isvp
IBM_2076:ISV7K4:admin> svcinfo lsmdiskgrp
Creating volumes (formerly known as virtual disks or VDisks; this sample shows creation of the Metro Mirror volumes):
IBM_2076:ISV7K4:admin> svctask mkvdisk -name sfha_mmdisk001 -iogrp io_grp0 -mdiskgrp isvp -vtype striped -size 100 -unit gb
IBM_2076:ISV7K4:admin> svctask mkvdisk -name sfha_mmdisk002 -iogrp io_grp0 -mdiskgrp isvp -vtype striped -size 100 -unit gb
IBM_2076:ISV7K4:admin> svcinfo lsvdisk
Mapping VDisks to hosts:
IBM_2076:ISV7K4:admin> svctask mkvdiskhostmap -host <hostname1> -scsi 0 -force 1
IBM_2076:ISV7K4:admin> svctask mkvdiskhostmap -host <hostname2> -scsi 0 -force 2
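As a quick check after mapping, the CLI can list the mappings from both directions (a sketch; <hostname1> is a placeholder for your host object name):

```shell
# List the volumes mapped to a given host object
svcinfo lshostvdiskmap <hostname1>

# List the hosts to which a given volume is mapped
svcinfo lsvdiskhostmap sfha_mmdisk001
```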
Volumes                      Master cluster (Site A)   Auxiliary cluster (Site B)
Volumes for Metro Mirror     sfha_mmdisk001-002        sfha_DR_mmdisk001-002
Volumes for Global Mirror    sfha_gmdisk001-002        sfha_DR_gmdisk001-002
Volumes for Fire Drill       -                         sfha_FD_dr_mmdisk001-002, sfha_FD_dr_gmdisk001-002

Relationship                 Name        Master volume        Auxiliary volume
Metro Mirror relationship    v7k_mm_DR   sfha_mmdisk001-002   sfha_dr_mmdisk001-002
Global Mirror relationship   v7k_gm_DR   sfha_gmdisk001-002   sfha_dr_gmdisk001-002

FlashCopy mappings
Copy type       Name                   Fire Drill (FD) volume     Auxiliary volume
Metro Mirror    FC_mm_fcmap0001-0002   sfha_FD_dr_mmdisk001-002   sfha_dr_mmdisk001-002
Global Mirror   FC_gm_fcmap0001-0002   sfha_FD_dr_gmdisk001-002   sfha_dr_gmdisk001-002

FlashCopy consistency group (Metro Mirror volumes): FD_mm_fcconsistgrp
FlashCopy consistency group (Global Mirror volumes): FD_gm_fcconsistgrp

Metro Mirror and Global Mirror consistency groups
Name        State                     Copy type   Master cluster   Auxiliary cluster   Primary   Relationship count
v7k_mm_DR   Consistent synchronized   Metro       isv7k4           isv7kd10            Master    2
v7k_gm_DR   Consistent synchronized   Global      isv7k4           isv7kd10            Master    2

Cluster partnerships
Partnership                        Cluster name   Location   State              Bandwidth   Link tolerance (s)   Inter-cluster delay simulation (ms)   Intra-cluster delay simulation (ms)
Metro Mirror cluster partnership   isv7k4         Remote     Fully configured   900         300                  20                                    0
Global Mirror cluster partnership  isv7kd10       Remote     Fully configured   900         300                  20                                    0

Table 5: Storwize V7000 disk configuration
Setting up the Storwize V7000 remote copy partnership
Log in to the Storwize V7000 through SSH from any VCS cluster node or, alternatively, connect to the Storwize V7000 GUI console.
# ssh -l admin -i /.ssh/id_dsa isv7k4.storage.tucson.ibm.com
IBM_2076:ISV7K4:admin>
To establish a fully functional remote copy partnership between the Storwize V7000 systems, issue these commands on both disk systems. This step is a prerequisite for creating Metro Mirror or Global Mirror relationships between volumes on the Storwize V7000 systems.
IBM_2076:ISV7K4:admin> svctask mkpartnership -bandwidth 900 ISV7KD10
IBM_2076:ISV7KD10:admin> svctask mkpartnership -bandwidth 900 ISV7K4
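After issuing the command on both systems, you can verify that the partnership has reached the fully configured state (a sketch; at this 6.1 code level the partnership status appears in the cluster listing):

```shell
# Run on either system; the remote cluster entry should no longer show
# a partially configured partnership state
svcinfo lscluster
```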
Update the intercluster delay settings for the Global Mirror configuration. To configure the parameters, refer to Chapter 15 of the IBM System Storage SAN Volume Controller and Storwize V7000 Command-Line Interface User’s Guide at:
ftp://ftp.software.ibm.com/storage/san/sanvc/V6.1.0/SVC_and_V7000_Command_Line_Interface_Users_
Guide.pdf.
Available intercluster delay parameters include:
-gmlinktolerance link_tolerance:
The length of time, in seconds, for which an inadequate intercluster link is tolerated for Global Mirror operation. Accepts values from 60 to 86400 seconds in steps of 10 seconds. The default is 300 seconds. The link tolerance can be disabled by entering the value 0 for this parameter.
-gminterdelaysimulation inter_cluster_delay_simulation:
The intercluster delay simulation, which simulates the Global Mirror round-trip delay between the two clusters, in milliseconds. The default is 0; the valid range is 0 to 100 milliseconds.
-gmintradelaysimulation intra_cluster_delay_simulation:
The intracluster delay simulation, which simulates the Global Mirror round-trip delay within a cluster, in milliseconds. The default is 0; the valid range is 0 to 100 milliseconds.
You can get the cluster alias and apply the settings using the following commands:
IBM_2076:ISV7K4:admin> svcinfo lscluster ISV7K4
IBM_2076:ISV7K4:admin> svctask chcluster -alias 00000200A0C000A0 -name ISV7K4 -gmlinktolerance 300 -gminterdelaysimulation 20 -gmintradelaysimulation 0
Creating Storwize V7000 Metro Mirror, Global Mirror, and FlashCopy consistency
groups
You can create Metro Mirror and Global Mirror consistency groups by specifying a name and the remote Storwize V7000 name as shown below. Make sure that the two Storwize V7000 systems are up and in communication throughout the create process. The new consistency group does not contain any relationships and is initially in the empty state. A consistency group ensures that a number of relationships are managed together so that, in the event of a disconnection of the relationships, the data on all volumes within the group is in a consistent state. This can be important in a database application where
data files and log files are held on separate volumes and consequently are managed by separate relationships. In the event of a disaster, the primary and secondary sites might become disconnected. If the relationships associated with the volumes are not in a consistency group, then when the disconnection happens and the Mirror relationships stop copying data from the primary to the secondary site, there is no assurance that updates to the two separate secondary volumes stop in a consistent manner.
IBM_2076:ISV7K4:admin> svctask mkrcconsistgrp -name v7k_mm_DR -cluster ISV7K4
IBM_2076:ISV7K4:admin> svctask mkrcconsistgrp -name v7k_gm_DR -cluster ISV7K4
The VCS Fire Drill function uses FlashCopy to create and maintain the snapshot of the application resource. Therefore, create a FlashCopy consistency group on the Storwize V7000 at Site B. You need to ensure that the Storwize V7000 FlashCopy license is enabled. In this case, the test team creates one FlashCopy consistency group for the Metro Mirror volumes and another for the Global Mirror volumes.
IBM_2076:ISV7KD10:admin> svctask chlicense -flash on
IBM_2076:ISV7KD10:admin> svctask mkfcconsistgrp -name FD_mm_fcconsistgrp
FlashCopy Consistency Group, id [1], successfully created
IBM_2076:ISV7KD10:admin> svctask mkfcconsistgrp -name FD_gm_fcconsistgrp
FlashCopy Consistency Group, id [2], successfully created
Creating Storwize V7000 Metro Mirror, Storwize V7000 Global Mirror relationship,
and FlashCopy mappings
Now create the Metro Mirror and Global Mirror relationships using the following commands. A relationship persists until it is deleted. The auxiliary volume must be identical in size to the master volume, the master and auxiliary cannot already be in an existing relationship, and neither volume can be the target of a FlashCopy mapping. The copy type defaults to Metro Mirror if -global is not specified.
IBM_2076:ISV7K4:admin> svctask mkrcrelationship -master 6 -aux 11 -sync -cluster ISV7KD10 -name v7k_mmrel001 -consistgrp v7k_mm_DR
IBM_2076:ISV7K4:admin> svctask mkrcrelationship -master 7 -aux 12 -sync -cluster ISV7KD10 -name v7k_mmrel002 -consistgrp v7k_mm_DR
IBM_2076:ISV7K4:admin> svctask mkrcrelationship -master 8 -aux 13 -sync -cluster ISV7KD10 -name v7k_gmrel001 -consistgrp v7k_gm_DR -global
IBM_2076:ISV7K4:admin> svctask mkrcrelationship -master 9 -aux 14 -sync -cluster ISV7KD10 -name v7k_gmrel002 -consistgrp v7k_gm_DR -global
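A hedged sanity check after creating the relationships (run on either system):

```shell
# List the remote copy relationships and their states
svcinfo lsrcrelationship

# Confirm both relationships joined their consistency groups
svcinfo lsrcconsistgrp
```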
Next, you need to create FlashCopy mappings. The mkfcmap command creates a new FlashCopy mapping that maps a source volume to a target volume for subsequent copying. Here, the test team has a FlashCopy target volume on the Storwize V7000 for every Metro Mirror and Global Mirror volume, to be used to support Fire Drill testing. The test team is creating a total of four FlashCopy mappings and adding
them to one of the FlashCopy consistency groups created in the “Creating Storwize V7000 Metro Mirror, Global Mirror, and FlashCopy consistency groups” section.
IBM_2076:ISV7KD10:admin> svctask mkfcmap -source 13 -target 17 -consistgrp 2 -name FC_gm_fcmap0001 -copyrate 100 -incremental -cleanrate 100
FlashCopy Mapping, id [1], successfully created
IBM_2076:ISV7KD10:admin> svctask mkfcmap -source 14 -target 18 -consistgrp 2 -name FC_gm_fcmap0002 -copyrate 100 -incremental -cleanrate 100
FlashCopy Mapping, id [2], successfully created
IBM_2076:ISV7KD10:admin> svctask mkfcmap -source 11 -target 15 -consistgrp 1 -name FC_mm_fcmap0001 -copyrate 100 -incremental -cleanrate 100
FlashCopy Mapping, id [3], successfully created
IBM_2076:ISV7KD10:admin> svctask mkfcmap -source 12 -target 16 -consistgrp 1 -name FC_mm_fcmap0002 -copyrate 100 -incremental -cleanrate 100
FlashCopy Mapping, id [4], successfully created
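To review the result, the mappings and their consistency groups can be listed (a sketch):

```shell
# Show each FlashCopy mapping with its source, target, and group
svcinfo lsfcmap

# Show the two Fire Drill consistency groups and their states
svcinfo lsfcconsistgrp
```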
Installing Veritas Storage Foundation
This section provides a walk-through of the installation prerequisites, as well as detailed command-line output resulting from the installation of the first SFHA cluster.
Symantec product licensing
Symantec SFHA licensing is typically done during product installation. During the installation process, both of the Symantec installer options provide an opportunity to enter the product license key.
The VRTSvlic package enables product licensing. After VRTSvlic is installed, the following commands
and their manual pages are available on the system:
vxlicinst installs a license key for a Symantec product
vxlicrep displays currently installed licenses
vxlictest retrieves features and their descriptions encoded in a license key
You need to make sure that you have activated the Veritas Storage Foundation Enterprise HA/DR
AIX 5.1 license key.
Supported AIX operating systems
This release of Veritas Storage Foundation (5.1 SP1 PR1) operates on:
IBM POWER7 processor running
AIX 7.1 TL0 or higher
AIX 6.1 TL5 with SP1 or higher
AIX 5.3 in POWER6 or IBM POWER6+™ compatibility mode, at TL11 with Service Pack 2 or higher, or
TL10 with Service Pack 4 or later
IBM POWER6® processors, at one of the following levels:
AIX 7.1, TL0 or higher
AIX 6.1, TL2 or higher
AIX 5.3 at one of the following levels:
− TL7 with Service Pack 6 or higher
− TL8 with Service Pack 4 or higher
Both of the Symantec product installer interfaces verify the required update levels. The installation
process terminates if the target systems do not meet maintenance-level requirements.
For any Veritas cluster product, all nodes in the cluster need to have the same operating system version
and update level.
Database requirements
The Oracle database version supported by Veritas high availability release 5.1 SP1 PR1 is Oracle 11gR2, but generally the VCS agent supports a particular Oracle release as long as Oracle supports that release on the particular AIX level.
Disk space
Use the Perform a Preinstallation Check (P) option from the product installer to determine whether
there is sufficient space.
Environment variables
Most of the commands used in the installation are in the /sbin or /usr/sbin directory. However, there are
additional variables that are required to use a Veritas Storage Foundation product after installation.
You need to add the following directories to your PATH environment variable:
If you are using Bourne or Korn shell (sh or ksh), use the commands:
$ PATH=$PATH:/usr/sbin:/opt/VRTSvxfs/sbin:/opt/VRTSob/bin:/opt/VRTSvcs/bin:/opt/VRTS/bin
$ MANPATH=/usr/share/man:/opt/VRTS/man:$MANPATH
$ export PATH MANPATH
If you are using a C shell (csh or tcsh), use the commands:
% set path = ( $path /usr/sbin /opt/VRTSvxfs/sbin /opt/VRTSvcs/bin /opt/VRTSob/bin /opt/VRTS/bin )
% setenv MANPATH /usr/share/man:/opt/VRTS/man:$MANPATH
Note: The nroff versions of the online manual pages are not readable using the man command if the bos.txt.tfs file set is not installed; however, the VRTSvxvm and VRTSvxfs packages install ASCII versions in the /opt/VRTS/man/catman* directories that are readable without the bos.txt.tfs file set.
Virtual IP address
This configuration needs several IP addresses, depending on the products you are enabling. You need at least six virtual IP addresses allocated across the two clusters. The following table shows the virtual IPs required for this configuration: in this case, a virtual address on each cluster for Oracle failover and for the Global Cluster Option (GCO).
Purpose           isvp_sfha_clusterA   isvp_sfha_clusterB
Oracle failover   10.10.0.152          10.10.0.153
GCO               10.10.0.150          10.10.0.151
Prerequisites for local and remote cluster installation
Establishing communication between nodes is required to install Veritas software from a remote system
or to install and configure a cluster.
The node from which the installation utility is running must have permissions to run remote shell (rsh) or
SSH utilities as root on all cluster nodes or remote systems. For both of the cluster installations, the SFHA
product code was unpacked on a single node and deployed to the remaining hosts using the Symantec
installer invoked on that initial host (isvp12).
You need to make sure that the hosts to be configured as cluster nodes have two or more network
interface cards (NICs) and are connected for heartbeat links. Refer to the Veritas Cluster Server
Installation Guide at:
https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/vcs_install_51sp1pr1_
aix.pdf for more details.
Mounting a software disk
If you are using DVD as an installation media, you must have superuser (root) privileges to mount the
Veritas disks.
To mount the Veritas software disk:
1. Log in as superuser.
2. Place the Veritas software disk into a DVD drive connected to your system.
3. Mount the disk by determining the device access name of the DVD drive.
The format for the device access name is cdX where X is the device number. After inserting the
disk, type the following commands:
# mkdir -p /cdrom
# mount -V cdrfs -o ro /dev/cdX /cdrom
Installing SFHA 5.1 SP1PR1 using the Veritas product installers
Note: Veritas products are installed under the /opt directory on the specified host systems. Ensure that
the directory /opt exists and has write permissions for root before starting an installation procedure.
The Veritas product installer is the recommended method to license and install the product. There are
command-line-driven and web-based versions of the product installer available. Both versions enable
you to verify preinstallation requirements, configure the product, and view the product's description.
You can use the product installer to install Veritas Storage Foundation and Veritas Storage Foundation
Enterprise HA. Usually, during an installation, you can type b (back) to return to a previous section of the
installation procedure. The back feature of the installation scripts is context-sensitive, so it returns to the
beginning of a grouped section of questions. If an installation procedure hangs, press Ctrl+C to stop and
exit the program. There is a short delay before the script exits.
To install a Storage Foundation product, run the following steps from one node in each cluster.
1. If the installation file sets are on a DVD media, make sure the disk is mounted. Refer to
the “Mounting a software disk” section. This installation scenario uses downloaded
compressed tarfiles that have been unpacked into the /tmp directory on one of the cluster
nodes.
2. To invoke the common installer, run the installer command on the disk as shown in this example:
# /tmp/installer (to invoke the menu-based installation interface)
or, alternatively
# /tmp/webinstaller (to invoke the web-based installation interface)
3. Enter I to install a product and press Enter to begin.
4. When the list of available products is displayed, select the product you want to install and enter the
corresponding number and press Enter. The product installation begins automatically.
5. Enter the Storage Foundation Enterprise HA/DR product license information.
Enter a product_name license key for isvp_sfha_clusterA: [?] XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X successfully registered on isvp_sfha_clusterA
Do you want to enter another license key for isvp_sfha_clusterA? [y,n,q,?] (n)

Enter a product_name license key for isvp_sfha_clusterB: [?] XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X successfully registered on isvp_sfha_clusterB
Do you want to enter another license key for isvp_sfha_clusterB? [y,n,q,?] (n)
Enter n if you have no further license keys to add for a system.
You are then prompted to enter the keys for the next system.
Note: Each system requires a product license before installation. License keys for additional product
features should also be added at this time.
6. Choose to install all file sets.
SF can be installed without optional filesets to conserve disk space. Additional
filesets are typically installed to simplify future upgrades.
1) Required Veritas Storage Foundation filesets - 928 MB required
2) All Veritas Storage Foundation filesets - 1063 MB required
Select the filesets to be installed on all systems? [1-2,q,?] (2)
7. At the installed product list page, enter y or press Enter to configure the Storage Foundation and VCS
products.
It is possible to install SF filesets without performing configuration.
It is optional to configure SF now. If you choose to configure SF later, you
can do so manually or run the installsf -configure command.
Are you ready to configure SF? [y,n,q] (y)
Do you want to configure VCS on these systems at this time? [y,n,q] (y)
8. The installer prompts for details for configuring the VCS cluster for Storage Foundation (SF). Enter the unique cluster name and cluster ID number.
Enter the unique cluster name: [?]isvp_sfha_clusterA
Enter the unique Cluster ID number between 0-65535: [b,?] 0
The installer discovers the NICs available on the first system and reports them:
Discovering NICs on isvp_sfha_clusterA...discovered en0 en1 en2
9. Enter private heartbeat NIC information for each host.
Enter the NIC for the first private heartbeat link on isvp12:[b,?] en1
Would you like to configure a second private heartbeat link?[y,n,q,b,?] (y) y
Enter the NIC for the second private heartbeat link on isvp12:[b,?] en2
Would you like to configure a third private heartbeat link?[y,n,q,b,?] (n) n
Do you want to configure an additional low priority heartbeat link? [y,n,q,b,?]
(n) n
Are you using the same NICs for private heartbeat links on all systems?
[y,n,q,b,?] (y) y
Note: When answering y, be sure that the same NICs are available on each system; the installer
does not verify this.
Notice that in this example, en0 is not selected for use as a private heartbeat NIC because it is
already in use as the public network interface.
10. A summary of the information you entered is given. When prompted, confirm that the
information is correct.
Is this information correct? [y,n,q] (y)
If the information is correct, press Enter. If the information is not correct, enter n. The installer
prompts you to enter the information again.
11. When prompted to configure the product to use Veritas Security Services, enter y or n to configure.
Note: Before configuring a cluster to operate using Veritas Security Services, another system must
already have Veritas Security Services installed and be operating as a Root Broker. Refer to the
Veritas Cluster Server Installation Guide for more information on configuring a VxSS Root Broker.
Would you like to configure product_name to use Veritas Security Services? [y,n,q]
(n) n
12. A message is displayed notifying you of the information required to add users. When prompted, set the username and/or password for the Admin user.
Do you want to set the username and/or password for the Admin user (default
username = 'admin', password='password')? [y,n,q] (n)
13. Enter n if you want to decline. If you enter y, you are prompted to change the password. You are
prompted to add another user to the cluster.
Do you want to add another user to the cluster? [y,n,q] (n)
Enter n if you want to decline, enter y if you want to add another user. You are prompted to
verify the user.
Is this information correct? [y,n,q] (y)
Enter y or n to verify if this information is correct.
14. You are prompted to configure the cluster management console. Enter y or n to configure the cluster
management console.
Do you want to configure the Cluster Management Console [y,n,q] (n) n
15. You are prompted to configure the cluster connector. Enter y or n to configure the cluster
connector.
Do you want to configure the cluster connector [y,n,q] (n)
16. When prompted to configure SMTP notification, enter n to decline SMTP configuration.
Do you want to configure SMTP notification? [y,n,q] (n)
17. When prompted to configure SNMP notification, enter n to decline SNMP configuration.
Do you want to configure SNMP notification? [y,n,q] (n)
18. When prompted to set up a default disk group for each system, enter n to decline.
Do you want to set up a default disk group for each system? [y,n,q,?] (n)
19. You are prompted to confirm the fully qualified hostname of system isvp12. Enter y to accept isvp12.domain_name.
Is the fully qualified hostname of system "isvp12" ="isvp12.domain_name"?
[y,n,q] (y)
20. You are prompted to confirm the fully qualified hostname of system isvp13. Enter y to accept isvp13.domain_name.
Is the fully qualified hostname of system "isvp13" ="isvp13.domain_name"?
[y,n,q] (y)
21. You are prompted to enable Storage Foundation Management Server Management. Enter n to decline.
Enable Storage Foundation Management Server Management? [y,n,q] (n)
22. You are prompted to start Storage Foundation.
Do you want to start Veritas Storage Foundation processes now? [y,n,q]
(y)...Startup completed successfully on all systems
You declined to set up the name of the default disk group for isvp12.
You declined to set up the name of the default disk group for isvp13.
Installation log files, summary file, and response file are saved at:
/opt/VRTS/install/logs/installsf-7ai12i
When installsf installs the software, some software might be applied rather than committed. It is the
responsibility of the system administrator to commit the software, which can be performed later with the -c
option of the installp command.
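A hedged example of committing the applied filesets later from an AIX shell (the -g flag also commits requisite filesets):

```shell
# Commit all filesets currently in the APPLIED state
installp -c -g all

# Review the state of the Veritas filesets afterward
lslpp -l "VRTS*"
```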
Installing Veritas Storage Foundation HA using webinstaller interface
In addition to the command-line installer, Symantec provides a simple-to-use web-based interface. The installer can be run from the root directory of the installation media or from the directory created by unpacking the SFHA installation bundle.
In this test case, the installer is run from the directory /tmp/dvd1-aix:
isvp12> ./webinstaller &
Installing VCS agents
This section is intended to provide some configuration guidance for the VCS agents for Oracle and for Storwize V7000 replication (the SVCCopyServices agent).
Installing VCS agent for Oracle
For complete details refer to the Veritas Cluster Server Agent for Oracle Installation and Configuration
Guide at:
https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/vcs_oracle_agent_51s
p1pr1_aix.pdf
You must install the Oracle agent on each node in the cluster. In global cluster environments, install the agent on each node in each cluster. These instructions assume that you have already installed Veritas Cluster Server. Perform the following steps to install the agent.
1. Make sure the disk is mounted. Refer to the “Mounting a software disk” section for
more information on it.
2. Run the following command to navigate to the location of the agent packages:
# cd /cdrom/cluster_server_agents/oracle_agent/pkgs
3. Run the following command to add the file sets for the software:
# installp -ac -d VRTSvcsor.rte.bff VRTSvcsor
Installing VCS agent for IBM SAN Volume Controller copy
services
The VCS agent used to support remote copy services on IBM Storwize V7000 system is the same agent
that has been used in previous VCS configurations with IBM SAN Volume Controller. Despite the agent name (IBM SVCCopyServices), the required resource attributes are applicable to both SAN Volume Controller and Storwize V7000 disk systems.
For complete details refer to the Veritas Cluster Server Agent for IBM SVCCopyServices Installation and
Configuration Guide at:
https://sort.symantec.com/agents/download_docs/2192/vcs_svc_install
You must install the IBM SAN Volume Controller copy services agent on each node in the cluster. In
global cluster environments, install the agent on each node in each cluster. These instructions assume
that the Veritas Cluster Server is already installed. Perform the following steps to install the agent.
1. Make sure the disk is mounted. Refer to the “Mounting a software disk” section.
2. Run the following command to navigate to the location of the agent packages:
# cd /cdrom/aix/replication/svccopyservices_agent/version/pkgs
The variable version represents the version of the agent. In this scenario, version 5.0.3.0 has been installed.
3. Run the following command to add the file sets for the software:
# installp -ac -d VRTSvcssvc.rte.bff VRTSvcssvc
All the required software components have now been installed. You should be able to list the filesets mentioned in “Appendix D: Veritas Software file sets listing” on each application host.
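For example, a quick way to list the installed Veritas filesets on an AIX host (a sketch; fileset names may vary by release):

```shell
# All Veritas filesets begin with the VRTS prefix
lslpp -l | grep -i vrts
```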
Installing and configuring Oracle
Installing and configuring Oracle involves the following tasks:
Installation of Oracle software
Creation of an Oracle instance
Creation of the database
First, you need to install Oracle on all nodes in both clusters. Make sure that the installation prerequisites
are met and are identical on all nodes, especially the user and group ID, passwords, owner and group
permissions, and listener port ID. Refer to the “Appendix B: Setting up the database applications” section for instructions to set up the database.
In this configuration, a database representing the testmm schema is built. A database workload application is used to populate the database and simulate a Transaction Processing Performance Council OLTP (TPC-C) workload. You will need a workload application to exercise the database load.
Configuring applications for disaster recovery
Having installed and configured the base system software and applications, the test team is now ready to configure the required applications for high availability and disaster recovery. Most clustered applications can be adapted to a disaster recovery environment by:
Setting up the Storwize V7000 remote copy partnership
Creating Storwize V7000 Metro Mirror, Storwize V7000 Global Mirror relationship, and FlashCopy mappings
Setting up VCS Global Cluster
Configuring VCS application service group
Adding the VCS SAN Volume Controller copy services resource
Setting up
To quickly set up a similar test configuration, follow the steps in the “Quick setup” section. You can also
follow the procedure in the “General configuration steps” section and refer to the documents mentioned in
those sections for detailed configuration steps.
Quick setup
The following section lists the steps to configure a new SFHA cluster using the provided sample VCS
configuration file from “Appendix C: Sample main.cf file - isvp_sfha_clusterA”.
1. Make sure that all of the objects mentioned in the VCS configuration file are created and available.
2. Halt the cluster server from any node in the clusters in Site A and Site B:
# /opt/VRTSvcs/bin/hastop -all
3. Copy the VCS configuration file (main.cf) from “Appendix C: Sample main.cf file - isvp_sfha_clusterA” to the /etc/VRTSvcs/conf/config directory as shown here:
On cluster nodes isvp12 and isvp13 in Site A as: main.cf.siteA
On cluster nodes isvp14 and isvp15 in Site B as: main.cf.siteB
4. Modify the values of hostnames, IP addresses, mount points, volumes and disk group resources, cluster names, passwords, and so on to match your site-specific configuration.
5. Run the following commands to copy the VCS agent type definition files, if they are not present in the destination directory:
# cp /etc/VRTSagents/ha/conf/Oracle/OracleTypes.cf /etc/VRTSvcs/conf/config/
# cp /etc/VRTSvcs/conf/SVCCopyServicesTypes.cf /etc/VRTSvcs/conf/config/
6. Run the following commands to overwrite the existing main.cf file on the primary node of each of the respective clusters, then use remote shell or SSH to copy that file to its local partner. For example, for main.cf on Site A cluster node 1 (isvp12):
# cd /etc/VRTSvcs/conf/config
#cp main.cf.siteA main.cf
#rcp main.cf isvp13:/etc/VRTSvcs/conf/config/main.cf On Site B cluster node 1 (ispv14) #cd /etc/VRTSvcs/conf/config #cp main.cf.siteB main.cf
#rcp main.cf isvp15:/etc/VRTSvcs/conf/config/main.cf 7. Run the following command to verify that the main.cf does not have any errors and fix it if there
are any issues.
#/opt/VRTSvcs/bin/hacf –verify If there are no errors, command returns with exit code 0.
8. Run the following commands to start the cluster on each node in the clusters in Site A and Site B.
#/opt/VRTSvcs/hastart –all #/opt/VRTSvcs/hastart –all
9. Start the cluster manager from any node in the cluster Site A. Login to one of the nodes with the
administrator userID as admin and password as password. 10. On the first node at Site A, login to the cluster manager and bring the app_grp1 service group
online if it is not already online. 11. Now you are ready to manage the clusters from the cluster manager GUI. To test HA and DR
scenarios, proceed with the instructions in the “Failover scenarios” section.
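The sequence above can be condensed into a short shell sketch for the Site A primary node. The host names and paths are this paper's sample values; the run() wrapper echoes each command (a dry run) so the sketch is safe to execute outside the cluster. Remove the echo to run the commands for real.

```shell
# Dry-run sketch of the cluster configuration steps on isvp12 (Site A).
run() { echo "$@"; }                      # dry-run wrapper; remove echo on a real node

CONF=/etc/VRTSvcs/conf/config

run /opt/VRTSvcs/bin/hastop -all          # step 2: halt VCS on all nodes
run cp $CONF/main.cf.siteA $CONF/main.cf  # step 6: install the Site A configuration
run rcp $CONF/main.cf isvp13:$CONF/main.cf
run /opt/VRTSvcs/bin/hacf -verify $CONF   # step 7: syntax-check main.cf
run /opt/VRTSvcs/bin/hastart              # step 8: start VCS (repeat on each node)
```

The Site B nodes follow the same pattern with main.cf.siteB and isvp15 as the partner.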
General configuration steps
In this section, the steps mentioned in the setup are explained in detail. Refer to the documents listed in
Table 3 for additional details of configuration procedures. You will need the following guides:
Veritas Cluster Server User’s Guide
Veritas Cluster Server Bundled Agents Reference Guide
Veritas Cluster Server Agent for Oracle Installation and Configuration Guide
Veritas Cluster Server Agent for IBM SVCCopyServices Installation and Configuration Guide
Setting up Storwize V7000 cluster partnership
Refer to “Setting up the Storwize V7000 remote copy partnership” section for more information.
Configuring Storwize V7000 Metro Mirror, Global Mirror, and FlashCopy relationships
For more details, refer to the following sections: “Creating Storwize V7000 Metro Mirror, Global Mirror,
and FlashCopy consistency groups” and “Creating Storwize V7000 Metro Mirror, Storwize V7000 Global
Mirror relationship, and FlashCopy mappings”.
Setting up VCS Global Cluster
Linking clusters
Refer to the section "Linking clusters" in the Veritas Cluster Server User's Guide. The Remote Cluster Configuration wizard provides an easy interface to link clusters. Before linking clusters, verify that the virtual IP address for the ClusterAddress attribute of each cluster is set. Use the same IP address as the one assigned to the IP resource in the ClusterService group. Have the following information ready: the active host name or IP address of each cluster in the global configuration and of the cluster being added to the configuration, and the administrator login name and password for each cluster in the configuration.
The VCS cluster management GUI (Java™ Console) is no longer packaged in the latest 5.1 release of
VCS. If you attempt to issue the command used to manage previous installations, you will see the
following message:
isvp12> /opt/VRTSvcs/bin/hagui &
[1] 13369510
isvp12> VCS Single Cluster Manager (Java Console) is no longer packaged with VCS.
Symantec recommends use of the Veritas Operations Manager (VOM) to manage, monitor and report on multi-cluster environments.
You can download VOM at http://go.symantec.com/vom.
If you wish to continue using the VCS Single Cluster Manager, you can get it at no charge at the http://go.symantec.com/vcsm_download website.
If you elect to download and use the VCS single-cluster manager, you can still click Edit > Add/Delete Remote Cluster; alternatively, use the Veritas Operations Manager (VOM).
Configuring global cluster
From any node in the clusters in Site A and Site B, run the GCO Configuration wizard to create or update
the ClusterService group. The wizard verifies your configuration and validates it for a global cluster setup.
#/opt/VRTSvcs/bin/gcoconfig
The wizard discovers the NIC devices on the local system and prompts you to enter the device to be used
for the global cluster.
Specify the name of the device and press Enter. If you do not have NIC resources in your configuration,
the wizard asks you whether the specified NIC will be the public NIC used by all systems. Enter y if it is
the public NIC; otherwise enter n. If you entered n, the wizard prompts you to enter the names of NICs on
all systems.
Enter the virtual IP to be used for the global cluster, which you have already identified. If you do not have
IP resources in your configuration, the wizard prompts you for the netmask associated with the virtual IP.
The wizard detects the netmask; you can accept the suggested value or enter another value.
The wizard starts running the commands to create or update the ClusterService group. Various
messages indicate the status of these commands. After running these commands, the wizard brings the
ClusterService group online.
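A quick way to confirm the wizard's work is to check the ClusterService group state and the cluster address it configured. This is a hedged sketch using standard VCS commands; run() echoes each command as a dry run, so remove the echo on a real node.

```shell
# Dry-run checks after gcoconfig completes.
run() { echo "$@"; }

run /opt/VRTSvcs/bin/hagrp -state ClusterService   # should be ONLINE on one node
run /opt/VRTSvcs/bin/haclus -value ClusterAddress  # the virtual IP entered in the wizard
```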
Configuring VCS application service group
The VCS service groups can be set up after the VCS agents have been installed. In this setup, Oracle is installed on all the cluster nodes.
Figure 3 shows the dependency graph of the VCS resources within each service group. There are two service groups: the application service group, test_shared_DG, and the ClusterService group, which is created when you enable the Global Cluster Option. The application group has been configured as a global service group, which means it can fail over to local and remote cluster nodes, if required. Note from the resource dependency view that the Oracle database resource depends on the mount, volume, disk group, and SVCCopyServices resources being available.
Refer to the Veritas Cluster Server Agent for Oracle Installation and Configuration Guide for adding and
configuring Oracle resource types.
Configure the application service group on the primary and secondary clusters. It involves the following
steps:
1. Start the VCS Single Cluster Manager (Java Console), or alternatively use the Veritas Operations
Manager (VOM) and log on to the cluster.
2. If the agent resource type (Oracle or SVCCopyServices) is not added to your configuration, add it. From the Cluster Manager File menu, click Import Types and select the appropriate type definition file for the agent from the following paths: /etc/VRTSagents/ha/conf/Oracle/OracleTypes.cf, /etc/VRTSvcs/conf/SVCCopyServicesTypes.cf
3. Click Import.
4. Save the configuration.
5. Create the service groups. In this case, you can create the test_shared_DG service group. Refer
to chapter 4 of the Veritas Cluster Server Agent for Oracle Installation and Configuration guide to
create the service group.
6. Add the resources mentioned in the test_shared_DG service group in main.cf for the respective
cluster.
7. Configure the service group as a global group using the Global Group Configuration wizard. Refer
to the Veritas Cluster Server User’s Guide for more information.
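For readers who prefer the command line to the GUI, steps 5 and 6 can be approximated with VCS ha commands. This is a sketch, not the paper's verified procedure: the Oracle resource name app_oracle is an assumption, and run() echoes each command as a dry run.

```shell
# Dry-run sketch: create the application service group from the CLI.
run() { echo "$@"; }
HABIN=/opt/VRTSvcs/bin

run $HABIN/haconf -makerw                               # open the configuration
run $HABIN/hagrp -add test_shared_DG                    # create the service group
run $HABIN/hagrp -modify test_shared_DG SystemList isvp12 0 isvp13 1
run $HABIN/hares -add app_oracle Oracle test_shared_DG  # hypothetical resource name
run $HABIN/haconf -dump -makero                         # save and close
```

The remaining resources (mount, volume, disk group, SVCCopyServices) are added the same way, matching the entries in main.cf.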
Figure 3: Sample VCS dependency tree for service group resources
Adding the VCS SAN Volume Controller copy services resource
Refer to chapter 3 of the Veritas Cluster Server Agent for IBM SVCCopyServices Installation and Configuration Guide for detailed information on the configuration of the Fire Drill resource.
The VCS agent for IBM SVCCopyServices manages replication relationships and consistency groups that
are defined on SAN Volume Controller clusters or IBM Storwize V7000 subsystems.
Before you configure the agent for SVCCopyServices
Before configuring the agent, review the following information:
Set up the SSH identity file on the VCS hosts before configuring the service group. If required, generate a public and private key pair using the ssh-keygen utility and copy the public key to the Storwize V7000 system. For detailed instructions on generating the key pair for AIX, refer to the "Appendix A: OpenSSH client configuration between AIX and Storwize V7000" section.
Review the configuration concepts, which describe the agent's type definition and attributes.
Verify that you have installed the agent on all the nodes in the cluster.
Verify the hardware setup for the agent.
Make sure that the cluster has an effective heartbeat mechanism in place.
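Before configuring the agent, it is worth confirming that key-based SSH to the Storwize V7000 works non-interactively, because the agent depends on it. A sketch, using the identity file path and the masked cluster IP that appear as sample values elsewhere in this paper; run() echoes the command as a dry run.

```shell
# Dry-run SSH connectivity check to the Storwize V7000.
run() { echo "$@"; }

# The agent authenticates as SVCUserName using SSHPathToIDFile; this check
# uses the same key and user. lsrcconsistgrp should list the remote copy
# consistency groups without prompting for a password.
run ssh -i /home/root/.ssh/id_rsa admin@9.11.xxx.xxx svcinfo lsrcconsistgrp
```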
Adding the agent resource type
Perform the following steps on every node in the cluster. Adding and configuring the agent manually in a global cluster involves the following tasks. The next section describes the resource type definition and attributes in detail.
1. Start the VCS Single Cluster Manager (Java Console), or alternatively use the VOM and log on to
the cluster.
2. If the agent resource type is not added to your configuration, add it. From the Cluster Manager
File menu, click Import Types and select /etc/VRTSvcs/conf/SVCCopyServicesTypes.cf
3. Click Import.
4. Save the configuration.
Now you can configure the SVCCopyServices resource by:
Creating a service group for SVCCopyServices, which you have already done in the previous section by creating the test_shared_DG service group.
Adding a resource of type SVCCopyServices at the bottom of the service group. Use the Java Console GUI, VOM, or the command line; following the resource names from the example, create the v7k_agent resource:
#hares -add v7k_agent SVCCopyServices test_shared_DG
Configuring the attributes of the SVCCopyServices resource. In this example, add the GroupName and SVCClusterIP attribute values:
#hares -modify v7k_agent GroupName v7k_mm_DR
#hares -modify v7k_agent SVCClusterIP XXX.XXX.XXX
SVCCopyServices resource type definition
The IBM SVCCopyServices agent is represented by the SVCCopyServices type in VCS.
type SVCCopyServices (
str GroupName
int IsConsistencyGroup = 1
str SSHBinary = "/usr/bin/ssh"
str SSHPathToIDFile
str SVCClusterIP
str SVCUserName = admin
int StopTakeover = 0
int DisconnectTakeover = 0
static str ArgList[] = {GroupName, IsConsistencyGroup, SSHBinary,
SSHPathToIDFile, SVCClusterIP, SVCUserName, StopTakeover,
DisconnectTakeover}
)
Attribute definitions for the SVCCopyServices agent
Review the description of the agent attributes.
Required attributes
You need to assign values to the required attributes.
GroupName
Name of the replication relationship or consistency group that is managed by the agent.
Type-dimension: string-scalar

IsConsistencyGroup
Indicates whether the value specified in the GroupName attribute is the name of a single replication relationship or of a consistency group consisting of several replication relationships. The attribute value is either 0 or 1. Default is 1.
Type-dimension: integer-scalar

SSHBinary
Contains the absolute path to the SSH binary. SSH is the mode of communication with the SAN Volume Controller cluster that is connected to the node. Default is /usr/bin/ssh.
Type-dimension: string-scalar

SSHPathToIDFile
Contains the absolute path to the identity file used for authenticating the host with the SAN Volume Controller cluster. The corresponding public key must be uploaded to the SAN Volume Controller cluster so that the SAN Volume Controller cluster can correctly authenticate the host.
Type-dimension: string-scalar

SVCClusterIP
The IP address of the SAN Volume Controller cluster in dot notation. The agent uses this IP address to communicate with the SAN Volume Controller cluster.
Type-dimension: string-scalar

SVCUserName
The user name that authenticates the SSH connection with the SAN Volume Controller cluster. Default is admin.
Type-dimension: string-scalar

StopTakeover
Determines whether the agent makes read-write access available to the host when the replication is in a stopped state (that is, consistent_stopped). The status of the replication goes into a stopped state when the user issues the stoprcrelationship or the stoprcconsistgrp command. Thus, no replication occurs between the primary and secondary SAN Volume Controller clusters. The attribute value is either 0 or 1. The default value is 0. If it is set to 1, there is a possibility of data loss if, after the replication was stopped, the application continued to write data on the primary cluster; when the agent then enables read-write access on the secondary SAN Volume Controller cluster, the secondary cluster does not have up-to-date data on it. The possible stopped states are inconsistent_stopped and consistent_stopped. When the state of the replication is consistent_stopped and StopTakeover = 1, the agent enables read-write access for the SAN Volume Controller cluster. When the state of the replication is inconsistent_stopped, the agent does not enable read-write access for the SAN Volume Controller cluster.
Type-dimension: integer-scalar

DisconnectTakeover
Determines whether the agent makes read-write access available to the host when the replication is in a disconnected state (that is, consistent_disconnected). The status of the replication goes into a disconnected state when the primary and secondary SAN Volume Controller clusters lose communication with each other. Thus, no replication occurs between the primary and secondary SAN Volume Controller clusters. The attribute value is either 0 or 1. Default is 0. The possible disconnected states are idling_disconnected, inconsistent_disconnected, and consistent_disconnected. When the state of the replication is consistent_disconnected and DisconnectTakeover = 1, the agent enables read-write access for the SAN Volume Controller cluster. When the state of the replication is idling_disconnected, the agent does not enable read-write access for the SAN Volume Controller cluster.
Type-dimension: integer-scalar

Table 6: SVCCopyServices agent attribute properties
A resource of type SVCCopyServices might be configured as follows in main.cf. This example is a copy of the resource as configured by the test team.
SVCCopyServices v7k_agent (
TriggerResStateChange = 1
SSHPathToIDFile = "/home/root/.ssh/id_rsa"
GroupName = v7k_mm_DR
SVCClusterIP = "9.11.xxx.xxx"
)
Setting up Fire Drill
Refer to Chapter 5 in the Veritas Cluster Server Agent for IBM SVCCopyServices Installation and Configuration Guide for the detailed configuration procedure of the Fire Drill resource.
You can obtain the SVCCopyServices agent v.5.0.06.0 for AIX from the following URL:
https://sort.symantec.com/agents/detail/2245
About Fire Drill resource
A Fire Drill procedure verifies the fault readiness of a disaster recovery configuration. This procedure is done without stopping the application at the primary site or disrupting user access.
A Fire Drill is performed at the secondary site using a special service group. The Fire Drill service
group is identical to the application service group, but uses a Fire Drill resource in place of the
replication agent resource. The Fire Drill service group uses a copy of the data that is used by the
application service group. In clusters employing IBM SVCCopyServices, the SVCCopyServicesSnap
resource manages the replication relationship during a Fire Drill.
Bringing the Fire Drill service group online demonstrates the ability of the application service group to
come online at the remote site when a failover occurs.
The SVCCopyServicesSnap agent is the Fire Drill agent for IBM SVCCopyServices. It is included in the VCS agent package for IBM SVCCopyServices. The agent manages the replication relationship
between the source and target arrays when running a Fire Drill. You can configure the
SVCCopyServicesSnap resource in the Fire Drill service group, in place of the SVCCopyServices
resource. VCS supports the Gold, Silver and Bronze Fire Drill configurations for the agent. Refer to the
Veritas Cluster Server Agent for IBM SVCCopyServices Installation and Configuration Guide mentioned
in Table 3 for more details.
SVCCopyServicesSnap resource type definition
The IBM SVCCopyServicesSnap agent is represented by the SVCCopyServicesSnap type in VCS.
type SVCCopyServicesSnap (
static int MonitorInterval = 300
static int OpenTimeout = 180
static int NumThreads = 1
static int OfflineMonitorInterval = 0
static int OnlineTimeout = 6000
static int RestartLimit = 1
static str ArgList[] = { TargetResName, MountSnapshot, UseSnapshot,
RequireSnapshot, FCMapGroupName }
str TargetResName
int MountSnapshot
int UseSnapshot
int RequireSnapshot
str FCMapGroupName
temp str Responsibility
)
Attribute definitions for the SVCCopyServicesSnap agent
Review the description of the agent attributes. You can find additional details on the
SVCCopyServicesSnap agent in the Veritas Cluster Server Agent for IBM SVCCopyServices Installation
and Configuration Guide, mentioned in Table 3.
Required attributes
You must assign values to required attributes.
TargetResName
Name of the resource managing the LUNs of which you want to take a snapshot. Set this attribute to the name of the SVCCopyServices resource if you want to take a snapshot of replicated data. Set this attribute to the name of the DiskGroup resource if the data is not replicated. For example, in a typical Oracle setup, you might replicate data files and redo logs, but you might choose to avoid replicating temporary tablespaces. The temporary tablespace must still exist at the DR site and might be part of its own disk group.
Type-dimension: string-scalar

UseSnapshot
Specifies whether the SVCCopyServicesSnap resource takes a local snapshot of the target array. Set this attribute to 1 for the Gold and Silver configurations. For Bronze, set this attribute to 0.
Type-dimension: integer-scalar

RequireSnapshot
Specifies whether the SVCCopyServicesSnap resource must take a snapshot before coming online. Set this attribute to 1 if you want the resource to come online only after it succeeds in taking a snapshot. Set this attribute to 0 if you want the resource to come online even if it fails to take a snapshot. Setting this attribute to 0 creates the Bronze configuration. Note: Set this attribute to 1 only if UseSnapshot is set to 1.
Type-dimension: integer-scalar

MountSnapshot
Specifies whether the resource uses the snapshot to bring the service group online. Set this attribute to 1 for the Gold configuration. For the Silver and Bronze configurations, set the attribute to 0. Note: Set this attribute to 1 only if UseSnapshot is set to 1.
Type-dimension: integer-scalar

Responsibility
Do not modify. For internal use only. Used by the agent to keep track of desynchronizing snapshots.
Type-dimension: temporary string

FCMapGroupName
Name of the FlashCopy mapping or FlashCopy consistency group. If the target SVCCopyServices resource contains a consistency group, set FCMapGroupName to a FlashCopy consistency group. If the target SVCCopyServices resource contains a relationship, set FCMapGroupName to a FlashCopy mapping. This attribute is optional for Bronze configurations.
Type-dimension: string-scalar

Table 7: SVCCopyServicesSnap attribute definitions
A resource of type SVCCopyServicesSnap may be configured as follows in main.cf. Note that this example is in the Gold configuration.
SVCCopyServicesSnap FD_app_svccpsnap (
TargetResName = app_svccp
MountSnapshot = 1
UseSnapshot = 1
RequireSnapshot = 1
FCMapGroupName = FD_gmappgrp1cg
)
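The same resource can also be created from the command line. A dry-run sketch mirroring the main.cf example above (the fire drill group name firedrill_appgrp1 is taken from the "Failover scenarios" section; run() echoes each command):

```shell
# Dry-run sketch: add the SVCCopyServicesSnap resource from the CLI.
run() { echo "$@"; }
HABIN=/opt/VRTSvcs/bin

run $HABIN/hares -add FD_app_svccpsnap SVCCopyServicesSnap firedrill_appgrp1
run $HABIN/hares -modify FD_app_svccpsnap TargetResName app_svccp
run $HABIN/hares -modify FD_app_svccpsnap UseSnapshot 1      # Gold: take a snapshot
run $HABIN/hares -modify FD_app_svccpsnap RequireSnapshot 1  # Gold: snapshot mandatory
run $HABIN/hares -modify FD_app_svccpsnap MountSnapshot 1    # Gold: mount the snapshot
run $HABIN/hares -modify FD_app_svccpsnap FCMapGroupName FD_gmappgrp1cg
```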
Configure the Fire Drill service group
On one of the nodes of the secondary VCS cluster, perform the following tasks to configure the fire
drill service group.
1. In Cluster Explorer, click the Service Group tab in the left pane.
2. Click the fire drill service group in the left pane and click the Resources tab in the right pane.
3. Right-click the SVCCopyServices resource and click Delete.
4. Add a resource of type SVCCopyServicesSnap and configure its attributes.
5. Right-click the resource to be edited and click View > Properties View. If a resource to be edited does not appear in the pane, click Show All Attributes.
6. Edit attributes to reflect the configuration at the remote site. For example, change the Mount
resources so that they point to the volumes that are used in the Fire Drill service group.
Enable the Fire Drill attribute
To enable the Fire Drill attribute:
1. In Cluster Explorer, click the Types tab in the left pane, right-click the type to be edited, and click View > Properties View.
2. Click Show All Attributes.
3. Double-click Fire Drill.
4. In the Edit Attribute dialog box, enable Fire Drill as required, and click OK.
5. Repeat the process of enabling the Fire Drill attribute for all required resource types.
6. Bring the Fire Drill service group online while the application is running. This action validates that your disaster recovery service group can fail over to the secondary site in the event of a disaster at the primary site.
Failover scenarios
This section describes the procedures for testing some common failover scenarios. This setup contains
one service group containing Oracle resource types. The example uses a Storwize V7000 Metro Mirror
setup.
To use the Storwize V7000 Global Mirror configuration, replace the SVCCopyServices GroupName attribute value v7k_mm_DR with v7k_gm_DR and, if Fire Drill support is being used, modify the SVCCopyServicesSnap attributes accordingly:
For the Metro Mirror configuration, the SVCCopyServices GroupName attribute is v7k_mm_DR.
For the Global Mirror configuration, the SVCCopyServices GroupName attribute is v7k_gm_DR.
For the Metro Mirror configuration, the SVCCopyServicesSnap FCMapGroupName attribute is FD_mm_fcconsistgrp.
For the Global Mirror configuration, the SVCCopyServicesSnap FCMapGroupName attribute is FD_gm_fcconsistgrp.
You need to make sure that the switch zoning configuration requirements are met for Global Mirror.
Before you start the scenarios, make sure that both the clusters in Site A and Site B are up and running. Log in to a cluster node at Site A and at Site B and issue the following command:
#/opt/VRTSvcs/bin/hastatus
Start the VCS Single Cluster Manager (Java Console), or alternatively use the VOM and log on to the
cluster.
Make sure that the ClusterService group is online on a node at each site and that the test_shared_DG service group is online on a cluster node at Site A.
While the application is running, from the Storwize V7000 console, ensure that the Metro Mirror
consistency group is in the consistent_synchronized state and the primary is the master cluster
(isvp_sfha_clusterA).
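These pre-flight checks can be gathered into one short dry-run script. The names are this setup's sample values, and run() echoes each command; remove the echo to execute on a cluster node.

```shell
# Dry-run pre-flight checks before running the failover scenarios.
run() { echo "$@"; }
HABIN=/opt/VRTSvcs/bin

run $HABIN/hastatus -sum                           # overall cluster summary
run $HABIN/hagrp -state ClusterService             # online at each site?
run $HABIN/hagrp -state test_shared_DG             # online at Site A?
# Replication state on the Storwize V7000 (expect consistent_synchronized):
run ssh admin@9.11.xxx.xxx svcinfo lsrcconsistgrp
```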
Application host failover
In this scenario, a node in the cluster at Site A where the application is online is lost. The application fails
over to the second node in the cluster. Next, that node is also lost. As all the nodes in the cluster are
down, the application fails over to a cluster node at Site B.
To test the host failure across nodes and clusters, perform the following steps:
1. On the switch, disable a host port associated with the cluster node at Site A where the application is online. This action introduces a fault. The service group fails over to the second node of the cluster at Site A. Wait until the service group comes online on the second node at Site A, and then verify that the DB application data is consistent. While the application is running, from the Storwize V7000 console or CLI, ensure that the Metro Mirror consistency group is in the consistent_synchronized state and the primary is the master cluster (ISV7K4).
IBM_2076:ISV7K4:admin>svcinfo lsrcconsistgrp
2. Bring the Fire Drill service group at Site B offline, if it is online.
3. Disable the host port on the switch of the second cluster node at Site A.
4. A cluster down alert appears and gives the administrator an opportunity to fail over the service
group manually to one of the cluster nodes (for example, isvp14) at Site B.
5. Wait until the service group comes online on isvp14, and then verify that the DB application data is consistent. While the application is running, from the IBM Storwize V7000 console, ensure that the Metro Mirror consistency group is in the consistent_synchronized state and the primary is the auxiliary cluster (ISV7KD10).
6. Follow step 1 to fail over the application to isvp15 (the second node at Site B) and verify the
status.
7. Enable the switch ports of the two cluster nodes at Site A and ensure that the cluster comes up
and the service group is offline.
8. From the active node at Site B, move the service group to its original host (in this example,
isvp12). In the Service Groups tab of the Cluster Manager Configuration tree, right-click the
service group. Click Switch To and click isvp12 on which the service group was originally online.
9. The service group comes online on isvp_sfha_clusterA. From the Storwize V7000 console, verify that the Storwize V7000 at Site A becomes the primary again. Wait until the service group comes online on isvp_sfha_clusterA, and check that the DB application data is consistent.
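Step 8's GUI switch has a CLI equivalent. The exact flags vary by VCS release, so treat this as a sketch; run() echoes the command as a dry run.

```shell
# Dry-run: move the global service group back to the Site A cluster.
run() { echo "$@"; }

# In VCS 5.x, -clus names the target cluster for a global group and -any
# lets VCS choose a node there (assumption: syntax as in VCS 5.1).
run /opt/VRTSvcs/bin/hagrp -switch test_shared_DG -any -clus isvp_sfha_clusterA
```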
Disaster recovery in a Global Cluster configuration
You need to test cluster failover in case of a disaster and failback after restoration of the cluster at the
disaster site (Site A). In this case, simulate a disaster by introducing fault to all the nodes in the cluster
and storage at Site A simultaneously.
To test the cluster failover and failback, perform the following steps:
1. Make sure that the application is up and running on one of the cluster nodes, for example, isvp12
in Site A.
2. Bring the Fire Drill service group offline at Site B, if it is online.
3. Disable the host ports and storage switch ports of cluster Site A.
4. A cluster down alert appears and gives the administrator an opportunity to fail over the service
group manually to one of the cluster nodes (for example, isvp14) at Site B. The Storwize V7000
Metro Mirror consistency group lists primary as the auxiliary cluster (ISV7KD10). You can also
check the status from the VCS log at /var/VRTSvcs/log/engine_A.log.
5. Enable the disabled switch ports of cluster Site A. Ensure that the cluster comes up and the
ClusterService group goes online. The application service group is offline.
6. Manually resynchronize the data between the primary and secondary sites.
If the current primary is retained as the primary for the replication relationship:
Stop the replication manually if it is not already in a stopped state, using the stoprcrelationship or the stoprcconsistgrp command without the -access option.
Use the startrcrelationship or the startrcconsistgrp command with the -force option on the Storwize V7000 disk system that you want to retain as primary.
Alternatively, use the update action entry point with no arguments specified.
If you change the primary for the replication relationship:
Stop the replication manually (if it is not stopped already).
Enable read/write access on both Storwize V7000 disk systems by using the stoprcrelationship or the stoprcconsistgrp command with the -access option.
Use the startrcrelationship or the startrcconsistgrp command with both the -force and -primary options. Alternatively, use the update action entry point with exactly one argument, which is the Storwize V7000 disk system that you retain as primary.
7. After the resynchronization is complete, switch the application back to the cluster at Site A.
8. The service group comes online on isvp12. Now, from the Storwize V7000 CLI or web interface,
verify that the subsystem at Site A becomes the primary again. Wait until the service group
comes online on isvp12, and check that the DB application data is consistent.
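The manual resynchronization in step 6 can be sketched for the Metro Mirror consistency group v7k_mm_DR used in this paper. Both cases are shown; the commands run on the Storwize V7000 CLI over SSH, the masked address is this paper's sample value, and run() echoes each command as a dry run.

```shell
# Dry-run sketch of the step 6 resynchronization alternatives.
run() { echo "$@"; }
V7K=admin@9.11.xxx.xxx    # masked sample address from this paper

# Case 1: keep the current primary (the auxiliary cluster after the failover)
run ssh $V7K svctask stoprcconsistgrp v7k_mm_DR          # no -access option
run ssh $V7K svctask startrcconsistgrp -force v7k_mm_DR

# Case 2: make the master cluster the primary again
run ssh $V7K svctask stoprcconsistgrp -access v7k_mm_DR  # read/write on both sides
run ssh $V7K svctask startrcconsistgrp -force -primary master v7k_mm_DR
```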
Checking failover readiness with Fire Drill
In this example, the Fire Drill service group is in the Gold configuration.
Refer to Chapter 5 in the Veritas Cluster Server Agent for IBM SVCCopyServices Installation and Configuration Guide for details on the Bronze, Silver, and Gold configurations for the Fire Drill resource.
A local snapshot of the target LUN is taken, and the Fire Drill service group is brought online by mounting the replication target LUN, based on the Fire Drill configuration. VCS creates a lock file to indicate that the resource is online.
To check the failover readiness with Fire Drill:
1. Ensure that the application service group, test_shared_DG, is offline at cluster Site B.
2. Bring the fire drill service group, firedrill_appgrp1, online on a node (isvp14) at Site B.
#hagrp -online firedrill_appgrp1 -sys isvp14 -localclus
This action validates your disaster recovery configuration. The production service group can fail
over to the secondary site in the event of an actual failure (disaster) at the primary site. If the fire
drill service group does not come online, review the VCS engine log for more information.
3. Take the fire drill offline after its functioning has been validated.
#hagrp -offline firedrill_appgrp1 -sys isvp14 -localclus
Note: Failing to take the Fire Drill service group offline might cause failures in your environment. For example, if the application service group fails over to the node hosting the Fire Drill service group, there might be resource conflicts that result in both service groups faulting; based on the failover policy for the application service group, it can then fail over to another node in the cluster.
Summary
Clustering software, such as Veritas Cluster Server, has for many years been one of the standard
approaches to protect against failures of individual hardware or software components. As more and more
organizations look to add robust disaster recovery capabilities to their mission-critical systems, merely
shipping backup tapes to an offsite location is not adequate. This white paper has shown how a local HA
cluster can be extended with DR capabilities.
The IBM Storwize V7000 Metro Mirror and Global Mirror features are utilized to add data replication
capabilities to the solution. VCS wizards are used to convert two independent clusters (at two different
locations) into a global cluster with automated failover capability between locations in the event of a site
disaster.
The result is a robust DR environment capable of meeting stringent recovery time objectives.
Appendix A: OpenSSH client configuration between AIX and
Storwize V7000
First, ensure that the OpenSSH client bundle is installed on the AIX LPAR:
isvp12> lslpp -l|grep ssh
openssh.base.client 5.4.0.6100 COMMITTED Open Secure Shell Commands
openssh.base.server 5.4.0.6100 COMMITTED Open Secure Shell Server
openssh.man.en_US 5.4.0.6100 COMMITTED Open Secure Shell
openssh.msg.en_US 5.4.0.6100 COMMITTED Open Secure Shell Messages -
openssh.base.client 5.4.0.6100 COMMITTED Open Secure Shell Commands
openssh.base.server 5.4.0.6100 COMMITTED Open Secure Shell Server
If not, refer back to your AIX 7.1 installation media, and use it as the source for a smitty install bundle:
Figure 4: Smitty install bundle used for OpenSSH
You should be able to see the OpenSSH client package at the end of the list:
Figure 5: The openssh client file sets
After installing, you can find the SSH client binary in /bin, with the configuration files residing in /etc/ssh/.
Generally, you should not need to make any modifications to the SSH client configuration file.
isvp12> cd /etc/ssh
isvp12> ls
moduli ssh_host_key ssh_prng_cmds
ssh_config ssh_host_key.pub sshd_config
ssh_host_dsa_key ssh_host_rsa_key
ssh_host_dsa_key.pub ssh_host_rsa_key.pub
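The client defaults generally suffice. If you prefer to avoid typing the -l and -i flags on every invocation (as in the session commands later in this appendix), a per-host stanza in a user-level SSH configuration file can supply them. This is a hypothetical convenience for this setup, not something the solution requires; the host alias isv7k4 and the key path are taken from the examples in this paper.

```
Host isv7k4
    User admin
    IdentityFile /home/root/.ssh/id_rsa
```

With this stanza in place, ssh isv7k4 would behave like the longer command shown later in this appendix.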
Now that the SSH client code is installed on the LPAR, you need to generate a public and private key pair
(which will be stored in the user's home directory).
Figure 6: Generating an SSH key pair
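The key generation shown in Figure 6 amounts to a single ssh-keygen invocation. The following is a minimal sketch assuming OpenSSH defaults; a scratch directory stands in for the documented /home/root/.ssh path so the sketch can be rerun safely.

```shell
# Sketch: generate an RSA key pair with an empty passphrase.
# The real setup writes to /home/root/.ssh; a scratch directory
# stands in here so the commands are safe to repeat.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$KEYDIR/id_rsa"
ls "$KEYDIR"   # id_rsa (private) and id_rsa.pub (public)
```

The id_rsa.pub file is the one to copy off the server in the next step.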
After generating the keys, use FTP to copy the public key file from the server to your local
workstation. This public key will be used with a new user ID, which will be created on the Storwize V7000
system. This user ID, once configured with the server's public SSH key, will be the means by which
the server communicates with the Storwize V7000 system. In the Storwize V7000 GUI (shown in Figure
7), you can see the isvp14_vcs_agent user ID, which was previously created specifically for the host to
manage communication with the Storwize V7000 system for the purpose of managing remote copy
relationships. You can create any meaningful user name you would like.
Figure 7: A sample Storwize V7000 user ID for SSH access
Figure 8: Defining a new user on Storwize V7000 for SSH-only access
You will not require a password for these IDs, but you will need to upload the public SSH key that you saved
locally in the previous step. This key needs to match the private key on the LPAR (that is, the two must have
been generated together as a pair).
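A quick way to confirm that a public key and a private key were generated together is to compare their fingerprints with ssh-keygen -l. This is a hedged sketch; a throwaway pair stands in for the real /home/root/.ssh keys.

```shell
# Sketch: matching public and private keys produce identical fingerprints.
DIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$DIR/id_rsa"
PRIV_FP=$(ssh-keygen -lf "$DIR/id_rsa" | awk '{print $2}')
PUB_FP=$(ssh-keygen -lf "$DIR/id_rsa.pub" | awk '{print $2}')
[ "$PRIV_FP" = "$PUB_FP" ] && echo "key pair matches"
```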
After creating the user ID on the Storwize V7000 storage system, you can open a password-less SSH
session between the client LPAR and the storage subsystem. Be sure to use the -l flag to specify admin
as the user, because this is the only user that Storwize V7000 will accept for sessions without a password
prompt. This communication is required for the SVCCopyServices agent to function correctly in SFHA
and manage replication on Storwize V7000.
isvp12> ssh -l admin -i /home/root/.ssh/id_rsa isv7k4
IBM_2076:ISV7K4:admin>
Appendix B: Setting up the database applications
This section shows the output collected by the test team during creation of the testmm Oracle instance.
It covers group and user creation, creation of the Oracle home directory, and the commands used in the
environment to create the database instance.
Setting up the Oracle database application
You can use the following commands to create groups.
mkgroup -'A' id='1000' adms='root' oinstall
mkgroup -'A' id='2000' adms='root' dba
mkgroup -'A' id='4000' adms='root' oper
You can use the following command to create users.
mkuser id='1101' pgrp='oinstall' groups='dba,oper' home='/home/oracle' oracle
You can use the following commands to create ORACLE_HOME.
mkdir -p /u01/app/112/dbhome
chown -R oracle:oinstall /u01/app/112/dbhome
Database installation
On each system, install Oracle 11g. Mount the Oracle software disk or have access to the Oracle
software repository, and install Oracle as the oracle user. Edit the .profile file and set the following
environment variables, then follow the instructions in the installer GUI to complete the installation.
# su - oracle
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:/u01/app/oracle/product/11.2.0/dbhome_1/bin:.
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
ORACLE_SID=testmm
export PATH
export DISPLAY
export ORACLE_HOME
export ORACLE_BASE
export ORACLE_SID
$ . ./.profile
$ cd <oracle software disk path>
$ ./Disk1/runInstaller -ignoreSysPrereqs
Database creation
The test team used the Oracle dbca tool to create the database. The scripts that dbca uses to create the
database are located in the /u01/app/oracle/admin/testmm/scripts directory.
$ ls
CloneRmanRestore.sql   lockAccount.sql        testmm.sh
cloneDBCreation.sql    postDBCreation.sql     testmm.sql
init.ora               postScripts.sql
inittestmmTemp.ora     rmanRestoreDatafiles.sql
$ cat testmm.sh
#!/bin/sh
OLD_UMASK=`umask`
umask 0027
mkdir -p /u01/app/oracle/admin/testmm/adump
mkdir -p /u01/app/oracle/admin/testmm/dpdump
mkdir -p /u01/app/oracle/admin/testmm/pfile
mkdir -p /u01/app/oracle/cfgtoollogs/dbca/testmm
mkdir -p /u01/app/oracle/product/11.2.0/dbhome_1/dbs
mkdir -p /v7k_mm_testmount/testmm
umask ${OLD_UMASK}
ORACLE_SID=testmm; export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH; export PATH
echo You should Add this entry in the /etc/oratab: testmm:/u01/app/oracle/product/11.2.0/dbhome_1:Y
/u01/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus /nolog @/u01/app/oracle/admin/testmm/scripts/testmm.sql
$ cat testmm.sql
set verify off
ACCEPT sysPassword CHAR PROMPT 'Enter new password for SYS: ' HIDE
ACCEPT systemPassword CHAR PROMPT 'Enter new password for SYSTEM: ' HIDE
host /u01/app/oracle/product/11.2.0/dbhome_1/bin/orapwd file=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtestmm force=y
@/u01/app/oracle/admin/testmm/scripts/CloneRmanRestore.sql
@/u01/app/oracle/admin/testmm/scripts/cloneDBCreation.sql
@/u01/app/oracle/admin/testmm/scripts/postScripts.sql
@/u01/app/oracle/admin/testmm/scripts/lockAccount.sql
@/u01/app/oracle/admin/testmm/scripts/postDBCreation.sql
$ cat CloneRmanRestore.sql
SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/testmm/scripts/CloneRmanRestore.log append
startup nomount pfile="/u01/app/oracle/admin/testmm/scripts/init.ora";
@/u01/app/oracle/admin/testmm/scripts/rmanRestoreDatafiles.sql;
spool off
CreateDBFiles.sql
connect SYS/&&sysPassword as SYSDBA
set echo on
spool /oracle/orahome/assistants/dbca/logs/CreateDBFiles.log
CREATE TABLESPACE "USERS1" LOGGING
DATAFILE '/oradata/&&DBNAME/mnt2/users01.dbf' SIZE 5M REUSE
AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
CREATE TABLESPACE "USERS2" LOGGING
DATAFILE '/oradata/&&DBNAME/mnt3/users02.dbf' SIZE 5M REUSE
AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
CREATE TABLESPACE "USERS3" LOGGING
DATAFILE '/oradata/&&DBNAME/mnt4/users03.dbf' SIZE 5M REUSE
AUTOEXTEND ON NEXT 1280K
MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
spool off
$ cat cloneDBCreation.sql
SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/testmm/scripts/cloneDBCreation.log append
Create controlfile reuse set database "testmm"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
Datafile
'/v7k_mm_testmount/testmm/system01.dbf',
'/v7k_mm_testmount/testmm/sysaux01.dbf',
'/v7k_mm_testmount/testmm/undotbs01.dbf',
'/v7k_mm_testmount/testmm/users01.dbf'
LOGFILE GROUP 1 ('/v7k_mm_testmount/testmm/redo01.log') SIZE 51200K,
GROUP 2 ('/v7k_mm_testmount/testmm/redo02.log') SIZE 51200K,
GROUP 3 ('/v7k_mm_testmount/testmm/redo03.log') SIZE 51200K RESETLOGS;
exec dbms_backup_restore.zerodbid(0);
shutdown immediate;
startup nomount pfile="/u01/app/oracle/admin/testmm/scripts/inittestmmTemp.ora";
Create controlfile reuse set database "testmm"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
Datafile
'/v7k_mm_testmount/testmm/system01.dbf',
'/v7k_mm_testmount/testmm/sysaux01.dbf',
'/v7k_mm_testmount/testmm/undotbs01.dbf',
'/v7k_mm_testmount/testmm/users01.dbf'
LOGFILE GROUP 1 ('/v7k_mm_testmount/testmm/redo01.log') SIZE 51200K,
GROUP 2 ('/v7k_mm_testmount/testmm/redo02.log') SIZE 51200K,
GROUP 3 ('/v7k_mm_testmount/testmm/redo03.log') SIZE 51200K RESETLOGS;
alter system enable restricted session;
alter database "testmm" open resetlogs;
exec dbms_service.delete_service('seeddata');
exec dbms_service.delete_service('seeddataXDB');
alter database rename global_name to "testmm";
ALTER TABLESPACE TEMP ADD TEMPFILE '/v7k_mm_testmount/testmm/temp01.dbf' SIZE 20480K REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED;
select tablespace_name from dba_tablespaces where tablespace_name='USERS';
select sid, program, serial#, username from v$session;
alter database character set INTERNAL_CONVERT WE8MSWIN1252;
alter database national character set INTERNAL_CONVERT AL16UTF16;
alter user sys account unlock identified by "&&sysPassword";
alter user system account unlock identified by "&&systemPassword";
alter system disable restricted session;
$ cat postScripts.sql
SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/testmm/scripts/postScripts.log append
@/u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/dbmssml.sql;
execute dbms_datapump_utl.replace_default_dir;
commit;
connect "SYS"/"&&sysPassword" as SYSDBA
alter session set current_schema=ORDSYS;
@/u01/app/oracle/product/11.2.0/dbhome_1/ord/im/admin/ordlib.sql;
alter session set current_schema=SYS;
connect "SYS"/"&&sysPassword" as SYSDBA
connect "SYS"/"&&sysPassword" as SYSDBA
execute ORACLE_OCM.MGMT_CONFIG_UTL.create_replace_dir_obj;
$ cat lockAccount.sql
SET VERIFY OFF
set echo on
spool /u01/app/oracle/admin/testmm/scripts/lockAccount.log append
BEGIN
FOR item IN ( SELECT USERNAME FROM DBA_USERS WHERE ACCOUNT_STATUS IN
('OPEN', 'LOCKED', 'EXPIRED') AND USERNAME NOT IN (
'SYS','SYSTEM') )
LOOP
dbms_output.put_line('Locking and Expiring: ' || item.USERNAME);
execute immediate 'alter user ' ||
sys.dbms_assert.enquote_name(
sys.dbms_assert.schema_name(
item.USERNAME),false) || ' password expire account lock' ;
END LOOP;
END;
/
spool off
$ cat postDBCreation.sql
SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/testmm/scripts/postDBCreation.log append
select 'utl_recomp_begin: ' || to_char(sysdate, 'HH:MI:SS') from dual;
execute utl_recomp.recomp_serial();
select 'utl_recomp_end: ' || to_char(sysdate, 'HH:MI:SS') from dual;
execute dbms_swrf_internal.cleanup_database(cleanup_local => FALSE);
commit;
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
create spfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletestmm.ora' FROM
pfile='/u01/app/oracle/admin/testmm/scripts/init.ora';
shutdown immediate;
connect "SYS"/"&&sysPassword" as SYSDBA
startup ;
spool off
Appendix C: Sample main.cf file - isvp_sfha_clusterA
This is the simple main.cf file used in the test team's SFHA configuration.
include "OracleASMTypes.cf"
include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SVCCopyServicesTypes.cf"
include "SybaseTypes.cf"
cluster isvp_sfha_clusterA (
UserNames = { admin = gJKcJEjGKfKKiSKeJH }
ClusterAddress = "10.10.0.150"
Administrators = { admin }
)
remotecluster isvp_sfha_clusterB (
ClusterAddress = "10.10.0.151"
)
heartbeat Icmp (
ClusterList = { isvp_sfha_clusterB }
Arguments @isvp_sfha_clusterB = { "10.10.0.151" }
)
system isvp12 (
)
system isvp13 (
)
group ClusterService (
SystemList = { isvp12 = 0, isvp13 = 1 }
AutoStartList = { isvp12, isvp13 }
OnlineRetryLimit = 3
OnlineRetryInterval = 120
)
Application wac (
StartProgram = "/opt/VRTSvcs/bin/wacstart"
StopProgram = "/opt/VRTSvcs/bin/wacstop"
MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
RestartLimit = 3
)
IP gcoip (
Device = en0
Address = "10.10.0.150"
NetMask = "255.255.254.0"
)
NIC gconic (
Device = en0
Protocol = IPv4
NetworkHosts = { "9.11.82.177", "9.11.82.178", "9.11.82.179",
"9.11.82.180" }
)
gcoip requires gconic
wac requires gcoip
// resource dependency tree
//
// group ClusterService
// {
// Application wac
// {
// IP gcoip
// {
// NIC gconic
// }
// }
// }
group test_shared_DG (
SystemList = { isvp12 = 0, isvp13 = 1 }
ClusterList = { isvp_sfha_clusterB = 1, isvp_sfha_clusterA = 0 }
Authority = 1
)
DiskGroup Diskgroup (
DiskGroup = v7k_mm_dg
)
Mount Mount (
MountPoint = "/v7k_mm_testmount"
BlockDevice = "/dev/vx/dsk/v7k_mm_dg/v7k_mm_vol"
FSType = vxfs
FsckOpt = "-y"
CreateMntPt = 2
VxFSMountLock = 0
)
Oracle oracle (
Sid = testmm
Owner = oracle
Home = "/u01/app/oracle/product/11.2.0/dbhome_1"
)
SVCCopyServices v7k_agent (
TriggerResStateChange = 1
SSHPathToIDFile = "/home/root/.ssh/id_rsa"
GroupName = v7k_mm_DR
SVCClusterIP = "9.11.82.251"
)
Volume Volume (
Volume = v7k_mm_vol
DiskGroup = v7k_mm_dg
)
Diskgroup requires v7k_agent
Mount requires Volume
Volume requires Diskgroup
oracle requires Mount
// resource dependency tree
//
// group test_shared_DG
// {
// Oracle oracle
// {
// Mount Mount
// {
// Volume Volume
// {
// DiskGroup Diskgroup
// {
// SVCCopyServices v7k_agent
// }
// }
// }
// }
// }
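The requires statements in this service group imply the order in which VCS brings resources online: a resource's dependencies come online first, as the commented dependency tree shows. Outside VCS, the same ordering can be previewed with the standard tsort utility. This is an illustrative sketch of the dependency logic, not how VCS itself computes the order.

```shell
# Sketch: each "X Y" pair below means resource X must be online before
# resource Y (i.e., Y requires X). tsort prints a valid bring-up order.
ORDER=$(tsort <<'EOF'
v7k_agent Diskgroup
Diskgroup Volume
Volume Mount
Mount oracle
EOF
)
echo "$ORDER"   # v7k_agent, Diskgroup, Volume, Mount, oracle (one per line)
```

Because the test_shared_DG dependencies form a single chain, the order is unique: replication control (v7k_agent) first, the Oracle database last.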
Appendix D: Veritas Software file sets listing
This section lists the Veritas Storage Foundation and VCS agent packages installed by following the
procedure described in this white paper.
VRTSamf        5.1.110.0  COMMITTED  Veritas AMF by Symantec
VRTSaslapm     5.1.110.0  COMMITTED  Array Support Libraries and
VRTSat.client  5.0.32.0   COMMITTED  Symantec Product
VRTSat.server  5.0.32.0   COMMITTED  Symantec Product
VRTScps        5.1.110.0  COMMITTED  Veritas Co-ordination Point
VRTSdbed       5.1.110.0  COMMITTED  Storage Management Software
VRTSfssdk      5.1.110.0  COMMITTED  Veritas Libraries and Header
VRTSgab        5.1.110.0  COMMITTED  Veritas Group Membership and
VRTSllt        5.1.110.0  COMMITTED  Veritas Low Latency Transport
VRTSob         3.4.290.0  COMMITTED  Veritas Enterprise
VRTSodm        5.1.110.0  COMMITTED  Veritas Extension for Oracle
VRTSperl       5.10.0.9   COMMITTED  Perl 5.10.0 for Veritas
VRTSsfmh       3.1.429.0  COMMITTED  Veritas Storage Foundation
VRTSspt        5.5.0.5    COMMITTED  Veritas Support Tools by
VRTSvcs        5.1.110.0  COMMITTED  Veritas Cluster Server by
VRTSvcsag      5.1.110.0  COMMITTED  Veritas Cluster Server Bundled
VRTSvcsea      5.1.110.0  COMMITTED  Veritas High Availability
VRTSvcssvc.rte 5.0.6.0    COMMITTED  VERITAS Clustering Support for
VRTSveki       5.1.110.0  COMMITTED  Veritas Kernel Interface by
VRTSvlic       3.2.52.0   COMMITTED  Symantec License Utilities
VRTSvxfen      5.1.110.0  COMMITTED  Veritas I/O Fencing by
VRTSvxfs       5.1.110.0  COMMITTED  Veritas File System by
VRTSvxvm       5.1.110.0  COMMITTED  Veritas Volume Manager by
VRTSamf        5.1.110.0  COMMITTED  Veritas AMF by Symantec
VRTSaslapm     5.1.110.0  COMMITTED  Array Support Libraries and
VRTSat.server  5.0.32.0   COMMITTED  Symantec Product
VRTScps        5.1.110.0  COMMITTED  Veritas Co-ordination Point
VRTSdbed       5.1.110.0  COMMITTED  Storage Management Software
VRTSgab        5.1.110.0  COMMITTED  Veritas Group Membership and
VRTSllt        5.1.110.0  COMMITTED  Veritas Low Latency Transport
VRTSob         3.4.290.0  COMMITTED  Veritas Enterprise
VRTSodm        5.1.110.0  COMMITTED  Veritas Extension for Oracle
VRTSperl       5.10.0.9   COMMITTED  Perl 5.10.0 for Veritas
VRTSsfmh       3.1.429.0  COMMITTED  Veritas Storage Foundation
VRTSvcs        5.1.110.0  COMMITTED  Veritas Cluster Server by
VRTSvcsea      5.1.110.0  COMMITTED  Veritas High Availability
VRTSvcssvc.rte 5.0.6.0    COMMITTED  VERITAS Clustering Support for
VRTSveki       5.1.110.0  COMMITTED  Veritas Kernel Interface by
VRTSvxfen      5.1.110.0  COMMITTED  Veritas I/O Fencing by
VRTSvxfs       5.1.110.0  COMMITTED  Veritas File System by
VRTSvxvm       5.1.110.0  COMMITTED  Veritas Volume Manager by
Trademarks and Special Notices
© Copyright IBM Corporation 2011. All rights Reserved.
References in this document to IBM products or services do not imply that IBM intends to make them
available in every country.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked
terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these
symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information
was published. Such trademarks may also be registered or common law trademarks in other countries. A
current list of IBM trademarks is available on the Web at "Copyright and trademark information" at
www.ibm.com/legal/copytrade.shtml.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or
its affiliates.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
SET and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.
DIA Data Integrity Assurance®, Storewiz®, Storwize®, and the Storwize® logo are trademarks or
registered trademarks of Storwize, Inc., an IBM Company.
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
All customer examples described are presented as illustrations of how those customers have used IBM
products and the results they may have achieved. Actual environmental costs and performance
characteristics may vary by customer.
Information concerning non-IBM products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement of
such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly
available information, including vendor announcements and vendor worldwide homepages. IBM has not
tested these products and cannot confirm the accuracy of performance, capability, or any other claims
related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the
supplier of those products.
All statements regarding IBM future direction and intent are subject to change or withdrawal without
notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller
for the full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a
definitive statement of a commitment to specific levels of performance, function or delivery schedules with
respect to any future products. Such commitments are only made in IBM product announcements. The
information is presented here to communicate IBM's current investment and development activities as a
good faith effort to help with our customers' future planning.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled
environment. The actual throughput or performance that any user will experience will vary depending
upon considerations such as the amount of multiprogramming in the user's job stream, the I/O
configuration, the storage configuration, and the workload processed. Therefore, no assurance can be
given that an individual user will achieve throughput or performance improvements equivalent to the
ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.