TechBook: IMS on z/OS Using EMC Symmetrix Storage Systems


This EMC Engineering TechBook provides a general description of EMC products that can be used for IMS administration on z/OS. Using EMC products to manage IMS environments can reduce database and storage management administration, reduce CPU resource consumption, and reduce the time required to clone, back up, or recover IMS systems.


IMS on z/OS Using EMC Symmetrix Storage Systems

Version 2.0

• IMS Performance and Layout

• Backup, Recovery, and Disaster Recovery

• Cloning and Replication

Paul Pendle


Copyright © 2002, 2003, 2007, 2009, 2010, 2013 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Online Support.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part number h903.5


Contents

Preface

Chapter 1: IMS on z/OS
    Overview
    IMS components
    IMS subsystems
        IMS datasets
    IMS data sharing

Chapter 2: EMC Foundation Products
    Introduction
    Symmetrix hardware and Enginuity features
        Symmetrix VMAX platform
        Enginuity operating environment
        Symmetrix features for mainframe
    ResourcePak Base for z/OS
        Features
    SRDF family of products for z/OS
        SRDF Host Component for z/OS
        SRDF mainframe features
        Concurrent SRDF and SRDF/Star
        Multi-Session Consistency
        SRDF/AR
        EMC Geographically Dispersed Disaster Restart
        SRDF Enterprise Consistency Group for z/OS
        Restart in the event of a disaster or non-disasters
    EMC AutoSwap
        AutoSwap highlights
        Use cases
    TimeFinder family products for z/OS
        TimeFinder/Clone for z/OS
        TimeFinder Utility for z/OS
        TimeFinder/Snap for z/OS
        TimeFinder/Mirror for z/OS
        TimeFinder/CG
        Consistent dataset snap
    IMS considerations for ConGroup
    EMC z/OS Storage Manager
    Symmetrix Management Console
    Symmetrix Performance Analyzer
    Unisphere for VMAX
    Virtual Provisioning
    Fully Automated Storage Tiering
    Data at Rest Encryption

Chapter 3: Advanced Storage Provisioning
    Virtual Provisioning
        Terminology
        Overview
        Virtual Provisioning operations
        Host allocation considerations
        Balanced configurations
        Pool management
        Thin device monitoring
        Considerations for IMS components on thin devices
        Replication considerations
        Performance considerations
    Fully Automated Storage Tiering
        Introduction
        Best practices for IMS and FAST VP
        Unisphere for VMAX
        IMS and SMS storage groups
        IMS and HSM

Chapter 4: IMS Database Cloning
    IMS database cloning
    Replicating the data
        Cloning IMS data using TimeFinder/Mirror
        Cloning IMS databases using TimeFinder/Clone
        Cloning IMS databases using dataset snap

Chapter 5: Backing Up IMS Environments
    Overview
    IMS commands for TimeFinder integration
    Creating a TimeFinder/Mirror backup for restart
        Creating a TimeFinder/Mirror backup
        TimeFinder/Mirror using remote consistent split
        TimeFinder/Mirror using ConGroups
        TimeFinder/Mirror backup methods comparisons
    TimeFinder/Mirror backup for recovery
        TimeFinder/Mirror backup for recovery
        TimeFinder/Mirror backup for remote recovery
        Creating IMS image copies
    Backing up IMS databases using dataset snap
    Creating multiple split mirrors
    Backing up a TimeFinder copy
        Backup using DFDSS
        Backup using FDR
    Keeping track of dataset placement on backup volumes

Chapter 6: IMS Database Recovery Procedures
    Overview
    Database recovery from an IMS image copy
    Database recovery using DFDSS or FDR
    Restoring a non-IMS copy
        Restoring using TimeFinder/Mirror backup
        Restoring using TimeFinder/Clone backup
        Restoring using dataset snap backup
        Restoring IMS using DFDSS
        Restoring IMS from an FDR dump
    Recovering IMS from a non-IMS copy
        Recovering using TimeFinder/Mirror
        Recovering using TimeFinder/Clone
        Restoring IMS using a dataset snap backup

Chapter 7: Disaster Recovery and Disaster Restart
    Overview
    Disaster restart versus disaster recovery
    Definitions
        Dependent-write consistency
        Database restart
        Database recovery
        Roll-forward recovery
    Considerations for disaster recovery/restart
        Recovery point objective
        Recovery time objective
        Operational complexity
        Source LPAR activity
        Production impact
        Target host activity
        Number of copies of data
        Distance for solution
        Bandwidth requirements
        Federated consistency
    Tape-based solutions
        Tape-based disaster recovery
        Tape-based disaster restart
    Remote replication challenges
        Propagation delay
        Bandwidth requirements
        Network infrastructure
        Method and frequency of instantiation
        Method of re-instantiation
        Change rate at the source site
        Locality of reference
        Expected data loss
        Failback considerations
    Array-based remote replication
    Planning for array-based replication
    SRDF/S single Symmetrix array to single Symmetrix array
    SRDF/S and consistency groups
        Rolling disaster
        Protecting against rolling disasters
        SRDF/S with multiple source Symmetrix arrays
        ConGroup considerations
        Consistency groups and IMS dual WADS
    SRDF/A
        Preparation for SRDF/A
        SRDF/A using multiple source Symmetrix
        How to restart in the event of a disaster
    SRDF/AR single-hop
        How to restart in the event of a disaster
    SRDF/Star
    SRDF/Extended Distance Protection
    High-availability solutions
        AutoSwap
        Geographically Dispersed Disaster Restart
    IMS Fast Path Virtual Storage Option

Chapter 8: Performance Topics
    Overview
    The performance stack
        Importance of I/O avoidance
        Storage system layer considerations
    Traditional IMS layout recommendations
        General principles for layout
        IMS layouts and replication considerations
    RAID considerations
        RAID recommendations
    Flash drives
        Magnetic disk history
        IMS workloads best suited for Flash drives
        Flash drives and storage tiering
    Data placement considerations
        Disk performance considerations
        Hypervolume contention
        Maximizing data spread across the back end
        Minimizing disk head movement
    Other layout considerations
        Database layout considerations with SRDF/S
        TimeFinder targets and sharing spindles
        Database clones using TimeFinder/Snap
    Extended address volumes

Appendix A: References
    References

Appendix B: Sample Skeleton JCL
    Modified image copy skeleton sample JCL

Glossary

Index


Figures

1  IMS processing architecture overview
2  IMS in a sysplex configuration
3  Symmetrix hardware and software layers
4  Symmetrix VMAX logical diagram
5  z/OS SymmAPI architecture
6  SRDF family for z/OS
7  Classic SRDF/Star support configuration
8  SRDF Consistency Group using SRDF-ECA
9  AutoSwap before and after states
10 TimeFinder family of products for z/OS
11 EMC z/OS Storage Manager functionality
12 Thin devices, pools, and physical disks
13 Thin device pool with devices filling up
14 Thin device pool rebalanced with new devices
15 FAST VP storage groups, policies, and tiers
16 IMS metadata and physical cloning processes
17 IMS databases cloning using TimeFinder/Mirror
18 IMS databases cloning using TimeFinder/Clone
19 IMS database cloning using dataset snap
20 TimeFinder/Mirror backup using consistent split
21 TimeFinder/Mirror backup using remote consistent split
22 TimeFinder/Mirror backup using ConGroup
23 TimeFinder/Mirror backup for recovery
24 TimeFinder/Mirror backup for recovery using remote split
25 IMS image copy from a BCV
26 IMS image copy from a dataset snap
27 Backing up IMS databases using dataset snap
28 IMS configured with three sets of BCVs
29 IMS recovery using DFDSS or FDR
30 Restoring an IMS database to a point in time using BCVs


31 Restoring an IMS database to a point in time using STDs
32 Single Symmetrix array to single Symmetrix array
33 Rolling disaster with multiple production Symmetrix arrays
34 Rolling disaster with SRDF Consistency Group protection
35 SRDF/S with multiple Symmetrix arrays and ConGroup protection
36 IMS ConGroup configuration
37 IMS ConGroup configuration
38 Case 1—Failure of the SRDF links
39 Case 2—Channel extension fails first
40 SRDF/Asynchronous replication configuration
41 SRDF/AR single-hop replication configuration
42 SRDF/Star configuration
43 SRDF/EDP block diagram
44 GDDR-managed configuration with SRDF/Star and AutoSwap
45 EMC foundation products
46 The performance stack
47 RAID 5 (3+1) layout detail
48 Anatomy of a RAID 5 random write
49 Magnetic disk capacity and performance history
50 Partitioning on tiered storage
51 Disk performance factors


Tables

1  Virtual Provisioning terms
2  Thick-to-thin host migrations
3  Thick-to-thin array-based migrations
4  Backup methods and categories
5  TimeFinder/Mirror backup methods
6  Backup and restart/restore methods


Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

This document provides a comprehensive treatment of EMC Symmetrix technologies that can be used to enhance IMS z/OS databases.

Audience

This TechBook is part of the Symmetrix documentation set and is intended for use by database and system administrators, systems integrators, and members of EMC Global Services who support and work with IMS for z/OS.

Readers of this document are expected to be familiar with the following topics:

◆ IMS for z/OS
◆ z/OS operating system
◆ Symmetrix fundamentals

Organization

This TechBook is divided into the following eight chapters and two appendices:

Chapter 1, “IMS on z/OS,” provides a high-level overview of IMS components in a z/OS operating environment.

Chapter 2, “EMC Foundation Products,” describes EMC products used to support the management of IMS z/OS environments.


Chapter 3, “Advanced Storage Provisioning,” provides information on storage provisioning enhancements that are new with Enginuity 5876.

Chapter 4, “IMS Database Cloning,” provides guidance on how to clone IMS systems and databases using TimeFinder technology.

Chapter 5, “Backing Up IMS Environments,” describes how EMC Symmetrix technology can be used to enhance an IMS backup strategy.

Chapter 6, “IMS Database Recovery Procedures,” is a follow-on to Chapter 5 and describes how recovery processes can be facilitated using the backups described in that chapter.

Chapter 7, “Disaster Recovery and Disaster Restart,” describes the difference between traditional IMS recovery techniques and EMC restart solutions.

Chapter 8, “Performance Topics,” describes the best practices for IMS performance on Symmetrix systems.

Appendix A, “References,” provides the reader a list of references for additional material available from either EMC or IBM.

Appendix B, “Sample Skeleton JCL,” provides sample skeleton JCL for TimeFinder/Mirror and TimeFinder/Clone snap operations.

Conventions used in this document

EMC uses the following conventions for special notices.

Note: A note presents information that is important, but not hazard-related.


Typographical conventions

EMC uses the following type style conventions in this document:

Normal: Used in running (nonprocedural) text for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, utilities
• URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold: Used in running (nonprocedural) text for:
• Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
Used in procedures for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• What the user specifically selects, clicks, presses, or types

Italic: Used in all text (including procedures) for:
• Full titles of publications referenced in text
• Emphasis (for example, a new term)
• Variables

Courier: Used for:
• System output, such as an error message or script
• URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold: Used for:
• Specific user input (such as commands)

Courier italic: Used in procedures for:
• Variables on the command line
• User input variables

< >  Angle brackets enclose parameter or variable values supplied by the user

[ ]  Square brackets enclose optional values

|    Vertical bar indicates alternate selections - the bar means “or”

{ }  Braces indicate content that you must specify (that is, x or y or z)

...  Ellipses indicate nonessential information omitted from the example


The author of this TechBook

This TechBook was written by Paul Pendle, an EMC employee working for the Symmetrix Mainframe Engineering team based at Hopkinton, Massachusetts. Paul Pendle has over 35 years of experience in databases, hardware, software, and operating systems, both from a database administrator perspective and from a systems administrator/systems programming perspective.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your opinions of this document to:

[email protected]


Chapter 1: IMS on z/OS

This chapter presents these topics:

◆ Overview
◆ IMS components
◆ IMS subsystems
◆ IMS data sharing


Overview

IBM’s IMS database management system (DBMS) was developed and released in the mid-1960s, becoming one of the first commercial DBMSs available in the marketplace. The IMS database server is still the DBMS of choice for business environments that dictate performance, availability, and reliability, as attested to by its wide acceptance in the marketplace.

Traditionally, IMS has been used for large databases requiring high-volume transaction and/or batch processing. IMS has been the cornerstone database for many industries, such as retail point of sale, the financial marketplace, the banking community, and airline reservation systems. IMS installations in these industries require a high number of concurrent transactions coupled with high availability for the database and operational environments in general.

The evolution of IMS during the past four decades has introduced many features, each designed to enhance the data center's ability to reach its reliability and availability goals. IMS continues the evolution by providing improved performance, scalability, concurrency, operational, and high-availability features. Version 10 provides support for Java, XML, and Service-Oriented Architecture (SOA).


IMS components

There are two components to IMS:

◆ IMS DB (IMS Database Manager)—A database management system

◆ IMS TM (IMS Transaction Manager)—A data communications system

Figure 1 is an overview of the IMS processing architecture.

Figure 1 IMS processing architecture overview

IMS subsystems

IMS has several address spaces with various functions. The core of IMS is the control region. It has a number of dependent regions that provide additional services to it or in which IMS applications can run. These regions and other terms are discussed in this section.


Control region—This is the address space that provides many database management services, such as logging, archiving, restart and recovery, and shutdown. The terminals, message queues, and logs are attached to this region. The control region uses cross-memory services to communicate with the other regions.

There are three types of control regions, depending on the components being used:

◆ DB/DC—Control region with both transaction manager and database manager components installed. This started task runs continuously and is normally started by a z/OS START command (a minimal start-up example follows this list). The control region starts the DBRC region during initialization. DL/I can be included as part of the control region. In Figure 2, the DL/I region runs in its own address space.

◆ DBCTL—The DBCTL controller has no network or message queues and merely controls access to its databases by way of its clients.

◆ DCCTL—Control region with an interface to the Virtual Telecommunications Access Method (VTAM) that can manage its own VTAM network of LTERMs as well as its own message queues for transaction processing. Database access is not implemented in this region.
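
For illustration, the following is a minimal sketch of how the control region might be started and queried from the z/OS console and the IMS master terminal. The started-task name IMSA is hypothetical; the actual procedure name and command recognition character are installation-specific.

S IMSA
   (z/OS START command; begins the DB/DC control region started task)
/DIS ACTIVE REGION
   (IMS command; lists the dependent regions known to the control region)
/DIS OLDS
   (IMS command; shows the status of the online log datasets)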

Database Recovery Control Region (DBRC)—The DBRC automates database and system recovery by tracking image copies, log changes, and database allocation records, as well as performing other tasks. DBRC stores this information in the RECON datasets. The DBRC controls logging and database recovery operations. It also controls access to databases.

DL/I—This address space performs most of the dataset access functions for the IMS database component (except for Fast Path databases). This region is not present in a DCCTL environment.

Common Queue Server (CQS)—This address space is used for sysplex sharing of the IMS message queues. It is present in IMS DCCTL and DB/DC environments.

Application-dependent regions—Address spaces that are used for the execution of application programs that use IMS services. The types of dependent regions are:


◆ Message Processing Regions (MPR)—These regions are assigned transaction classes that correspond to classes assigned to transaction codes. Transactions come into MPRs by way of IMS TM.

◆ Batch Message Processing (BMP)—Batch programs run in this mode against IMS databases without stopping the databases for online transactions (a sample BMP execution JCL sketch follows this list). Two types of applications can run in BMP regions: message-driven programs that read and process the IMS message queue, and non-message-driven programs that are initiated by batch jobs.

◆ IMS Fast Path (IFP) program region—These regions process transactions against Fast Path databases.

◆ Java Message Processing (JMP) region—These regions are similar to MPR regions. JMP programs are executed through transactions invoked through terminals or other applications.

◆ Java Batch Processing (JBP) region—JBP applications perform batch-type processing online and can access IMS message queues for output processing. These applications are started from TSO or by submitting a batch job.

◆ DBCTL thread (DBT)

◆ Utility type regions (BMH, FPU)
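
As a minimal sketch of how a non-message-driven BMP application might be submitted, the JCL below uses the IMS-supplied IMSBATCH procedure. The job name, program name (PGMX), PSB name (PSBX), and any library names are hypothetical, and the procedure keywords should be verified against the installation's IMS procedure library.

//BMPJOB   JOB (ACCT),'IMS BMP',CLASS=A,MSGCLASS=X
//* Execute a BMP region with the IMS-supplied IMSBATCH procedure.
//* MBR= names the application program and PSB= names the program
//* specification block; both values are placeholders here.
//BMP      EXEC IMSBATCH,MBR=PGMX,PSB=PSBX
//* Application input, output, and report DD statements would follow.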

Batch application regions—Applications that only use database manager functions can run in a separate z/OS address space not connected to an IMS control region.

Internal Resource Lock Manager (IRLM)—Manages (serializes) concurrent database update requests from applications. IRLM can act globally among external IMS systems and is required for sysplex data sharing.

IMS datasets

IMS databases are physical structures used to store data in an organized manner. IMS uses a hierarchical model to store its data. It uses VSAM or OSAM access methods at the dataset layer. These are some of the critical recovery datasets used by IMS:

◆ Online Log datasets (OLDS)—IMS active log datasets.


◆ Write Ahead dataset (WADS)—Dataset used to write critical log records required at system restart prior to their externalization to the OLDS.

◆ System Log dataset (SLDS)—A copy of the OLDS at archive time. It is used during an IMS emergency restart and IMS cold start, and may be used if an RLDS is unreadable.

◆ Recovery Log dataset (RLDS)—A subset of the SLDS that is created at OLDS archival time. It contains only the database change records used during the execution of change accumulation or an image copy restore, including logs.

◆ RECON—KSDS datasets containing logging information and events that might affect the recovery of databases. DBRC uses and updates this information.
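
DBRC tracks these recovery assets in the RECONs. As an illustration, the following is a minimal sketch of a batch job that runs the DBRC recovery control utility, DSPURX00, to list RECON status; the dataset names shown are hypothetical.

//DBRCLIST JOB (ACCT),'LIST RECON',CLASS=A,MSGCLASS=X
//* Run the DBRC recovery control utility to report on RECON content.
//LIST     EXEC PGM=DSPURX00
//STEPLIB  DD DISP=SHR,DSN=IMS.SDFSRESL
//IMS      DD DISP=SHR,DSN=IMS.DBDLIB
//RECON1   DD DISP=SHR,DSN=IMS.RECON1
//RECON2   DD DISP=SHR,DSN=IMS.RECON2
//RECON3   DD DISP=SHR,DSN=IMS.RECON3
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 LIST.RECON STATUS
/*

The same utility is used to issue other DBRC commands, such as LIST.LOG and the GENJCL commands.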


IMS data sharing

IMS data sharing configurations allow multiple IMS systems to access the same databases while spreading the workload across multiple LPARs. There are performance improvements that can be gained by enabling data sharing, especially in very busy systems that have heavy update/logging activity.

Figure 2 is an example of an IMS data sharing configuration in a sysplex.

Figure 2 IMS in a sysplex configuration


Chapter 2: EMC Foundation Products

This chapter introduces the EMC foundation products discussed in this document that are used in a combined Symmetrix, IMS, and z/OS environment:

◆ Introduction
◆ Symmetrix hardware and Enginuity features
◆ ResourcePak Base for z/OS
◆ SRDF family of products for z/OS
◆ EMC AutoSwap
◆ TimeFinder family products for z/OS
◆ IMS considerations for ConGroup
◆ EMC z/OS Storage Manager
◆ Symmetrix Management Console
◆ Symmetrix Performance Analyzer
◆ Unisphere for VMAX
◆ Virtual Provisioning
◆ Fully Automated Storage Tiering
◆ Data at Rest Encryption


Introduction

EMC® provides many hardware and software products that support application environments on Symmetrix® systems. The following products, which are highlighted and discussed, were used and/or tested with IMS on z/OS. This chapter provides a technical overview of the EMC products used in this document.

EMC Symmetrix—EMC offers an extensive product line of high-end storage solutions targeted to meet the requirements of mission-critical databases and applications. The Symmetrix product line includes the DMX™ Direct Matrix Architecture® and the VMAX® Virtual Matrix Family. EMC Symmetrix is a fully redundant, high-availability storage processor, providing nondisruptive component replacements and code upgrades. The Symmetrix system features high levels of performance, data integrity, reliability, and availability.

EMC Enginuity™ Operating Environment—Enginuity enables interoperation between current and previous generations of Symmetrix systems and enables them to connect to a large number of server types, operating systems, and storage software products, and a broad selection of network connectivity elements and other devices, including Fibre Channel, ESCON, FICON, GigE, iSCSI, directors, and switches.

EMC Mainframe Enabler—Mainframe Enabler is a package that contains all the Symmetrix API runtime libraries for all EMC mainframe software. These software packages can be used to monitor device configuration and status, and to perform control operations on devices and data objects within a storage complex.

EMC Symmetrix Remote Data Facility (SRDF®)—SRDF is a business continuity software solution that replicates and maintains a mirror image of data at the storage block level in a remote Symmetrix system. The SRDF Host Component (HC) is a licensed feature product, and when licensed in the SCF address space, it provides command sets to inquire on and manipulate remote Symmetrix relationships.

EMC SRDF Consistency Groups—An SRDF consistency group is a collection of related Symmetrix devices that are configured to act in unison to maintain data integrity. The devices in consistency groups can be spread across multiple Symmetrix systems.


EMC TimeFinder®—TimeFinder is a family of products that enable volume-based replication within a single Symmetrix system. Data is copied from Symmetrix devices using array-based resources without using host CPU or I/O. The source Symmetrix devices remain online for regular I/O operations while the copies are created. The TimeFinder family has three separate and distinct software products: TimeFinder/Mirror, TimeFinder/Clone, and TimeFinder/Snap:

◆ TimeFinder/Mirror enables users to configure special devices, called business continuance volumes (BCVs), to create a mirror image of Symmetrix standard devices. Using BCVs, TimeFinder creates a point-in-time copy of data that can be repurposed. The TimeFinder/Mirror component extends the basic API command set of Mainframe Enabler to include commands that specifically manage Symmetrix BCVs and standard devices.

◆ TimeFinder/Clone or, more fully, TimeFinder/Clone Mainframe Snap Facility (TFCMSF) enables users to make copies of data from source volumes to target volumes without consuming mirror positions within the Symmetrix. The data is available to a target's host immediately upon activation, even if the copy process has not completed. Data may be copied from a single source device to as many as 16 target devices. A source device can be either a Symmetrix standard device or a BCV device. TFCMSF can also replicate datasets using the dataset snap feature.

◆ TimeFinder/Snap enables users to utilize special devices in the Symmetrix array called virtual devices (VDEVs) and save area devices (SAVDEVs). These devices can be used to make pointer-based, space-saving copies of data simultaneously on multiple target devices from a single source device. The data is available to a target's host immediately upon activation. Data may be copied from a single source device to as many as 128 VDEVs. A source device can be either a Symmetrix standard device or a BCV device. A target device is a VDEV. A SAVDEV is a special device, without a host address, that is used to hold the changing contents of the source or target device after the snap is activated.
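
To illustrate the dataset snap capability mentioned above, the following is a minimal sketch of a batch dataset snap. The program name (EMCSNAP), DD names, keywords, and dataset names are assumptions based on typical TimeFinder batch syntax and should be verified against the TimeFinder product guide for the installed release.

//DSSNAP   JOB (ACCT),'DATASET SNAP',CLASS=A,MSGCLASS=X
//* Dataset-level snap of one IMS database dataset to a new name.
//SNAP     EXEC PGM=EMCSNAP
//STEPLIB  DD DISP=SHR,DSN=EMC.MFE.LINKLIB
//QCOUTPUT DD SYSOUT=*
//QCINPUT  DD *
 SNAP DATASET (SOURCE(IMSP.DB01.DBDS) TARGET(IMST.DB01.DBDS) REPLACE(YES))
/*

Full-volume operations (TimeFinder/Mirror ESTABLISH and SPLIT, and TimeFinder/Clone volume snaps) are covered in the cloning and backup chapters.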

EMC Change Tracker—EMC Symmetrix Change Tracker software measures changes to data on a Symmetrix volume or group of volumes. Change Tracker software is often used as a planning tool in the analysis and design of configurations that use the EMC TimeFinder or SRDF components to store data at remote sites.


Symmetrix hardware and Enginuity features

Symmetrix hardware architecture and the Enginuity operating environment are the foundation for the Symmetrix storage platform. This environment consists of the following components:

◆ Symmetrix hardware

◆ Enginuity operating functions

◆ Symmetrix application program interface (API) for mainframe

◆ Symmetrix applications

◆ Host-based Symmetrix applications

◆ Independent software vendor (ISV) applications

Figure 3 shows the relationship between these software layers and the Symmetrix hardware.

Figure 3 Symmetrix hardware and software layers

Symmetrix VMAX platform

The EMC Symmetrix VMAX array with Enginuity is the latest entry to the Symmetrix product line. Built on the strategy of simple, intelligent, modular storage, it incorporates a scalable fabric interconnect design that allows the storage array to grow seamlessly from an entry-level configuration into the world's largest storage system. The Symmetrix VMAX array provides improved performance and scalability for demanding enterprise storage environments, while maintaining support for EMC's broad portfolio of platform software offerings.

The 5876 Enginuity operating environment for Symmetrix systems is a feature-rich Enginuity release supporting Symmetrix VMAX storage arrays. With the release of Enginuity 5876, Symmetrix VMAX systems deliver software capabilities that improve capacity utilization, ease of use, business continuity, and security.

The Symmetrix VMAX subsystem also maintains customer expectations for high-end storage in terms of availability. High-end availability is more than just redundancy. It means nondisruptive operations and upgrades, and being always online. Symmetrix VMAX arrays provide:

◆ Nondisruptive expansion of capacity and performance at a lower price point

◆ Sophisticated migration for multiple storage tiers within the array

◆ The power to maintain service levels and functionality as consolidation grows

◆ Simplified control for provisioning in complex environments

Many of the features provided by the EMC Symmetrix VMAX platform can reduce operational costs for customers deploying IMS on z/OS solutions, as well as enhance functionality to enable greater benefits. This TechBook details those features that provide significant benefits to customers utilizing IMS on z/OS.

Figure 4 illustrates the architecture and interconnection of the major components in the Symmetrix VMAX storage system.


Figure 4 Symmetrix VMAX logical diagram

Enginuity operating environment

Symmetrix Enginuity is the operating environment for all Symmetrix systems. Enginuity manages and ensures the optimal flow and integrity of data through the different hardware components. It also manages Symmetrix operations associated with monitoring and optimizing internal data flow. This ensures the fastest response to the user's requests for information, along with protecting and replicating data. Enginuity provides the following services:

◆ Manages system resources to intelligently optimize performance across a wide range of I/O requirements.

◆ Ensures system availability through advanced fault monitoring, detection, and correction capabilities and provides concurrent maintenance and serviceability features.

◆ Offers the foundation for specific software features available through EMC disaster recovery, business continuance, and storage management software.

◆ Provides functional services for both array-based functionality and for a large suite of EMC storage application software.


◆ Defines priority of each task, including basic system maintenance, I/O processing, and application processing.

◆ Provides uniform access through APIs for internal calls, and provides an external interface to allow integration with other software providers and ISVs.

Symmetrix features for mainframe

This section discusses supported Symmetrix features for mainframe environments.

I/O support features

Parallel Access Volume (PAV)—Parallel Access Volumes are implemented within a z/OS environment. PAV allows one I/O to take place for each base unit control block (UCB), and one for each statically or dynamically assigned alias UCB. These alias UCBs allow parallel I/O access for volumes. Current Enginuity releases provide support for static, dynamic, and HyperPAVs. HyperPAVs allow fewer aliases to be defined within a logical control unit. With HyperPAVs, aliases are applied to the base UCBs (devices) that require them the most at the time of need.
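
As an illustration, the PAV configuration of an IMS volume can be checked from the z/OS console with the standard DISPLAY command; the device number below is hypothetical.

D M=DEV(0F00)

Among other things, the resulting display reports the device's channel path status and, where PAV or HyperPAV is in use, information about the alias configuration for that device's logical control unit.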

Multiple Allegiance (MA)—While PAVs facilitate multiple parallel accesses to the same device from a single LPAR, Multiple Allegiance (MA) allows multiple parallel nonconflicting accesses to the same device from multiple LPARs. Multiple Allegiance I/O executes concurrently with PAV I/O. The Symmetrix storage system treats them equally and guarantees data integrity by serializing writes where extent conflicts exist.

Host connectivity options—Mainframe host connectivity is supported through Fibre Channel, ESCON, and FICON channels. Symmetrix storage systems appear to mainframe operating systems as any of the following control units: IBM 3990, IBM 2105, and IBM 2107. The physical storage devices can appear to the mainframe operating system as any mixture of different sized 3380 and 3390 devices.

ESCON support—Enterprise Systems Connection (ESCON) is a fiber-optic connection technology that interconnects mainframe computers, workstations, and network-attached storage devices across a single channel and supports half-duplex data transfers. ESCON may also be used for handling Symmetrix Remote Data Facility (SRDF) remote links.

FICON support—Fiber Connection (FICON) is a fiber-optic channel technology that extends the capabilities of its previous fiber-optic channel standard, ESCON. Unlike ESCON, FICON supports full-duplex data transfers and enables greater throughput rates over longer distances. FICON uses a mapping layer based on technology developed for Fibre Channel and multiplexing technology, which allows small data transfers to be transmitted at the same time as larger ones. With Enginuity 5670 and later, Symmetrix storage systems support FICON ports. With Enginuity service release 5874.207, a VMAX supports 8 Gb FICON connectivity.

zHPF support—System z10 High Performance FICON (zHPF) represents the latest enhancement to the FICON interface architecture aimed at offering an improvement in the performance of online transaction processing (OLTP) workloads. Customers that are presently channel-constrained running heavy IMS workloads using a 4K page size will reap the greatest benefit from this feature. zHPF is a chargeable, licensable feature with Enginuity service release 5874.207.

Fibre Channel support—Fibre Channel is a supported option in SRDF environments.

GigE support—GigE is a supported option in SRDF environments. Symmetrix GigE directors in an SRDF environment provide direct, end-to-end TCP/IP connectivity for remote replication solutions over extended distances with built-in compression. This removes the need for costly FC-to-IP converters and helps utilize the existing IP infrastructures without major disruptions.

Data protection options

Symmetrix subsystems incorporate many standard features that provide a higher level of data availability than conventional Direct Access Storage Devices (DASD). These options ensure an even greater level of data recoverability and availability. They are configurable at the logical volume level so different protection schemes can be applied to different classes of data within the same Symmetrix controller on the same physical device. Customers can choose from the following data protection options to match their data requirements:

◆ Mirroring (RAID 1) or RAID 10


◆ RAID 6 (6+2) and RAID 6 (14+2)

◆ RAID 5 (3+1) and RAID 5 (7+1)

◆ Symmetrix Remote Data Facility (SRDF)

◆ TimeFinder

◆ Dynamic Sparing

◆ Global Sparing

Other features

Other IBM-supported compatibility features include:

◆ Channel Command Emulation for IBM ESS 2105/2107

◆ Concurrent Copy

◆ Peer to Peer Remote Copy (PPRC)

◆ PPRC/XRC Incremental Resync

◆ Extended Remote Copy (XRC) with multireader

◆ Dynamic Channel Path Management (DCM)

◆ Dynamic Path Reconnection (DPR) support

◆ Host data compression

◆ Logical Path and Control Unit Address support (CUADD)

◆ Multi-System Imaging

◆ Partitioned dataset (PDS) Search Assist

◆ FlashCopy Version 1 and 2

◆ High Performance FICON (zHPF) and multitrack zHPF

◆ Extended Address Volumes (EAV)

ResourcePak Base for z/OS

EMC ResourcePak® Base for z/OS is a software facility that makes communication between mainframe-based applications (provided by EMC or ISVs) and a Symmetrix storage subsystem more efficient. ResourcePak Base is designed to improve performance and ease of use of mainframe-based Symmetrix applications.

ResourcePak Base delivers EMC Symmetrix Control Facility (EMCSCF) for IBM and IBM-compatible mainframes. EMCSCF provides a uniform interface for EMC and ISV software products, where all products are using the same interface at the same function level. EMCSCF delivers a persistent address space on the host that facilitates communication between the host and the Symmetrix subsystem, as well as other applications delivered by EMC and its partners.

Figure 5 logically depicts the relationships between the SCF address space and the other software components accessing the Symmetrix system.

Figure 5 z/OS SymmAPI architecture

ResourcePak Base is the delivery mechanism for EMC Symmetrix Applications Programming Interface for z/OS (SymmAPI™-MF). ResourcePak Base provides a central point of control by giving software a persistent address space on the mainframe for SymmAPI-MF functions that perform tasks such as the following:

◆ Maintaining an active repository of information about EMC Symmetrix devices attached to z/OS environments and making that information available to other EMC products.

◆ Performing automation functions.

◆ Handling inter-LPAR (logical partition) communication through the Symmetrix storage subsystem.

ResourcePak Base provides faster delivery of new Symmetrix functions by EMC and ISV partners, along with easier upgrades. It also provides the ability to gather more meaningful data when using tools such as TimeFinder Query because device status information is now cached along with other important information.


ResourcePak Base for z/OS is a prerequisite for EMC mainframe applications, like the TimeFinder Product Set for z/OS or SRDF Host Component for z/OS, and is included with these products.

With EMC Mainframe Enabler V7.0 and later, ResourcePak Base, TimeFinder, and SRDF are shipped in a single distribution.

Features

ResourcePak Base provides the following functionality with EMCSCF:

◆ Cross-system communication

◆ Nondisruptive SymmAPI-MF refreshes

◆ Save Device Monitor

◆ SRDF/A Monitor

◆ Group Name Service (GNS) support

◆ Pool management (save, DSE, thin)

◆ SRDF/AR resiliency

◆ SRDF/A Multi-Session Consistency

◆ SWAP services

◆ Recovery services

◆ FlashCopy emulation (Enginuity 5772 or earlier)

◆ Licensed feature code management

Cross-system communication

Inter-LPAR communication is handled by the EMCSCF cross-system communication (CSC) component. CSC uses a Symmetrix storage subsystem to facilitate communications between LPARs. Several EMC Symmetrix mainframe applications use CSC to handle inter-LPAR communications.

Nondisruptive SymmAPI-MF refreshes

EMCSCF allows the SymmAPI-MF to be refreshed nondisruptively. Refreshing SymmAPI-MF does not impact currently executing applications that use SymmAPI-MF, for example, Host Component or TimeFinder.


Save Device Monitor

The Save Device Monitor periodically examines the consumed capacity of the device pool (SNAPPOOL) used by TimeFinder/Snap with the VDEV licensed feature code enabled. The Save Device Monitor also checks the capacity of the device pool (DSEPOOL) used by SRDF/A.

The Save Device Monitor function of EMCSCF provides a way to:

◆ Automatically check space consumption thresholds.

◆ Trigger an automated response that is tailored to the specificneeds of the installation.

SRDF/A Monitor

The SRDF/A Monitor in ResourcePak Base is designed to:

◆ Find EMC Symmetrix controllers that are running SRDF/A.

◆ Collect and write SMF data about those controllers.

After ResourcePak Base is installed, the SRDF/A Monitor is started as a subtask of EMCSCF.

Group Name Service support

ResourcePak Base includes support for Symmetrix Group Name Service (GNS). GNS allows you to define a device group once, and then use that single definition across multiple EMC products on multiple platforms. This means that you can use a device group defined through GNS with both mainframe and open systems-based EMC applications. GNS also allows you to define group names for volumes that can then be operated upon by various other commands.

Pool management

With ResourcePak Base version 5.7 and later, generalized device pool management is a provided service. Pool devices are a predefined set of devices that provide a pool of physical space. Pool devices are not host-accessible. The CONFIGPOOL commands allow management of SNAPPOOLs or DSEPOOLs with CONFIGPOOL batch statements.

SRDF/AR resiliency

SRDF/AR can recover from internal failures without manual intervention. Device replacement pools for SRDF/AR (or SARPOOLs) are provided to prevent SRDF/AR from halting due to device failure. In effect, SARPOOLs are simply a group of devices that are unused until SRDF/AR needs one of them.


SRDF/A Multi-Session Consistency

SRDF/A Multi-Session Consistency (MSC) is a task in EMCSCF that ensures remote R2 consistency across multiple Symmetrix subsystems running SRDF/A. MSC provides the following:

◆ Coordination of SRDF/A cycle switches across systems.

◆ Up to 24 SRDF groups in a multi-session group.

◆ One SRDF/A session per Symmetrix array, and one SRDF/A group per Symmetrix system when using Enginuity 5x70.

◆ With Enginuity level 5x71 and later, SRDF/A groups are dynamic and are not limited to one per Symmetrix. Group commands of ENABLE, DISPLAY, DISABLE, REFRESH, and RESTART are now available.

SWAP services

ResourcePak Base deploys a SWAP service in EMCSCF. It is used by AutoSwap™ for planned outages with the ConGroup Continuous Availability Extensions (CAX).

Recovery services

Recovery service commands allow you to perform recovery on local or remote devices (if the links are available for the remote devices).

FlashCopy support in SCF

FlashCopy version 1 and version 2 support is enabled in EMCSCF through an LFC when using Enginuity 5772 or earlier. FlashCopy support is enabled in Enginuity 5773 and later and does not require a license key in EMCSCF.

Licensed feature code management

EMCSCF manages licensed feature codes (LFCs) to enable separately chargeable features in EMC software. These features require an LFC to be provided during the installation and customization of EMCSCF. LFCs are available for:

◆ Symmetrix Priority Control

◆ Dynamic Cache Partitioning

◆ AutoSwap (ConGroup with AutoSwap Extensions)—Separate LFCs required for planned and unplanned swaps

◆ EMC Compatible Flash (Host Software Emulation)

◆ EMC z/OS Storage Manager (EzSM)


◆ SRDF/Asynchronous (MSC)

◆ SRDF/Automated Replication

◆ SRDF/Star

◆ TimeFinder/Clone MainFrame Snap Facility

◆ TimeFinder/Consistency Group

◆ TimeFinder/Snap (VDEV)

SRDF family of products for z/OS

At the conceptual level, SRDF is mirroring (RAID level 1) one logical disk device (the primary source/R1 within a primary Symmetrix subsystem) to a second logical device (the secondary target/R2, in a physically separate secondary Symmetrix subsystem) over ESCON, Fibre Channel, or GigE high-speed communication links. The distance separating the two Symmetrix subsystems can vary from a few feet to thousands of miles. SRDF is the first software product for the Symmetrix storage subsystem. Its basic premise is that a remote mirror of data (data in a different Symmetrix subsystem) can serve as a valuable resource for:

◆ Protecting data using geographical separation.

◆ Giving applications a second location from which to retrieve data should the primary location become unavailable for any reason.

◆ Providing a means to establish a set of volumes on which to conduct parallel operations, such as testing or modeling.

SRDF has evolved to provide different operation modes (synchronous, adaptive copy—write pending mode, adaptive copy—disk mode, and asynchronous mode). More advanced solutions have been built upon it, such as SRDF/Automated Replication, SRDF/Star, Cascaded SRDF, and SRDF/EDP.

Control of the SRDF family of products has remained, throughout these evolutionary stages, with the mainframe-based application called SRDF Host Component. SRDF Host Component is a control mechanism through which all SRDF functionality is made available to the mainframe user. EMC Consistency Group for z/OS is another useful feature for managing dependent-write consistency across inter-Symmetrix links with one or more mainframes attached.


Figure 6 SRDF family for z/OS

Figure 6 indicates that the modules on the right plug in to one of the modules in the center as an add-on function. For example, SRDF Consistency Group is a natural addition for customers running SRDF in synchronous mode.

SRDF Host Component for z/OS

SRDF Host Component for z/OS, along with ResourcePak Base for z/OS (API services module), is delivered when ordering a member of the SRDF product family. For more information about SRDF technology in general, visit:

http://www.emc.com/products/detail/software/srdf.htm

SRDF mainframe features

SRDF mainframe features include the following:

◆ Ability to deploy SRDF solutions across the enterprise: SRDF Host Component can manage data mirroring for both CKD and FBA format disks. In these deployments, both a mainframe and one or more open systems hosts are attached to the primary side of the SRDF relationship. Enterprise SRDF deployments can be controlled either by mainframe hosts or by open systems hosts, though the toolsets are different in each environment.


◆ Support for either ESCON or FICON host channels regardless of the SRDF link protocol employed: SRDF is a protocol that connects Symmetrix systems, mirroring data on both sides of the communications link. Host connectivity to the Symmetrix storage system (ESCON versus FICON) is independent of the protocols used to move data over the SRDF links. SRDF supports all the standard link protocols: ESCON, Extended ESCON, Fibre Channel, and GigE.

◆ Software support for taking an SRDF link offline: SRDF Host Component has a software command that can take an SRDF link offline independent of whether it is taking the target volume offline. This feature is useful if there are multiple links in the configuration and only one is experiencing issues, for example too many bounces (sporadic link loss) or error conditions. In this case, it is unnecessary to take all links offline when taking the one in question offline is sufficient.

◆ SRDF Host Component additional interfaces: Besides the console interface, all features of SRDF Host Component can be employed using REXX scripting and/or the Stored Procedure Executive (SPE), a powerful tool for automating repeated processes.

Concurrent SRDF and SRDF/Star

SRDF/Star is built upon several key technologies:

◆ Dynamic SRDF, Concurrent SRDF

◆ ResourcePak Base for z/OS

◆ SRDF/Synchronous

◆ SRDF/Asynchronous

◆ SRDF Consistency Group

◆ Certain features within Enginuity

SRDF/Star provides advanced multi-site business continuity protection that augments Concurrent SRDF and SRDF/A operations from the same primary volumes with the ability to incrementally establish an SRDF/A session between the two remote sites in the event of a primary site outage. This capability is only available through SRDF/Star software.


SRDF/Star is a combination of mainframe host software and Enginuity functionality that operates concurrently. Figure 7 depicts a classic three-site SRDF/Star configuration.

Figure 7 Classic SRDF/Star support configuration

The concurrent configuration option of SRDF/A provides the ability to restart an environment at long distances with minimal data loss, while simultaneously providing a zero data loss restart capability at a local site. Such a configuration provides protection for both a site disaster and a regional disaster, while minimizing performance impact and loss of data.

In a concurrent SRDF/A configuration without SRDF/Star functionality, the loss of the primary A site would normally mean that the long-distance replication would stop, and data would no longer propagate to the C site. Data at C would continue to age as production was resumed at site B. Resuming SRDF/A between sites B and C would require a full resynchronization to re-enable disaster recovery protection. This consumes both time and resources and results in a prolonged period without normal DR protection.

SRDF/Star provides a rapid re-establishment of cross-site protection in the event of a primary site (A) failure. Rather than a full resynchronization between sites B and C, SRDF/Star provides a differential B-C synchronization, dramatically reducing the time to


remotely protect the new production site. SRDF/Star also provides a mechanism for the user to determine which site (B or C) has the most current data in the event of a rolling disaster affecting site A. In all cases, the choice of which site to use in a failure is left to the customer's discretion.

Multi-Session Consistency

In SRDF/A environments, consistency across multiple Symmetrix systems for SRDF/A sessions is provided by the Multi-Session Consistency (MSC) task that executes in the EMCSCF address space. MSC provides consistency across as many as 24 SRDF/A sessions and is enabled by a Licensed Feature Code.

SRDF/AR

SRDF/Automated Replication (SRDF/AR) is an automated solution that uses both SRDF and TimeFinder to provide periodic asynchronous replication of a restartable data image. In a single-hop SRDF/AR configuration, the magnitude of controlled data loss could depend on the cycle time chosen. However, if greater protection is required, a multi-hop SRDF/AR configuration can provide long-distance disaster restart with zero data loss using a middle or bunker site.

EMC Geographically Dispersed Disaster Restart

EMC Geographically Dispersed Disaster Restart (GDDR) is a mainframe software product that automates business recovery following both planned outages and disasters, including the total loss of a data center. EMC GDDR achieves this goal by providing monitoring, automation, and quality controls for many EMC and third-party hardware and software products required for business restart.

Because EMC GDDR restarts production systems following disasters, it does not reside on the same LPAR that it protects. EMC GDDR resides on a separate logical partition (LPAR) from the host system that is running application workloads.


In a three-site SRDF/Star with AutoSwap configuration, EMC GDDR is installed on a control LPAR at each site. Each EMC GDDR node is aware of the other two EMC GDDR nodes through network connections between each site. This awareness allows EMC GDDR to:

◆ Detect disasters

◆ Identify survivors

◆ Nominate the leader

◆ Recover business at one of the surviving sites

To achieve the task of business restart, EMC GDDR automation extends well beyond the disk level (on which EMC has traditionally focused) and into the host operating system. It is at this level that sufficient controls and access to third-party software and hardware products exist, enabling EMC to provide automated recovery services.

EMC GDDR’s main activities include:

◆ Managing planned site swaps (workload and DASD) between the primary and secondary sites and recovering the SRDF/Star with AutoSwap environment.

◆ Managing planned site swaps (DASD only) between the primary and secondary sites and recovering the SRDF/Star with AutoSwap environment.

◆ Managing the recovery of the SRDF environment and restarting SRDF/A in the event of an unplanned site swap.

◆ Actively monitoring the managed environment and responding to exception conditions.

SRDF Enterprise Consistency Group for z/OS

An SRDF Consistency Group is a collection of devices logically grouped together to provide consistency. Its purpose is to maintain data integrity for applications that are remotely mirrored, particularly those that span multiple RA groups or multiple Symmetrix storage systems. The protected applications may comprise multiple heterogeneous data resource managers spread across multiple host operating systems. It is possible to span mainframe LPARs, UNIX, and Windows servers. These heterogeneous platforms are referred to as hosts.


If a primary volume in the consistency group cannot propagate data to its corresponding secondary volume, EMC software suspends data propagation from all primary volumes in the consistency group. The suspension halts all data flow to the secondary volumes and ensures a dependent-write consistent secondary volume copy at the point in time that the consistency group tripped.

The dependent-write principle concerns the logical dependency between writes that is embedded in the logic of an application, operating system, or database management system (DBMS). The notion is that a write will not be issued by an application until a prior, related write has completed (a logical dependency, not a time dependency). Some aspects of this notion are:

◆ It is inherent in all DBMSs.

◆ An IMS block write is a dependent write based on a successful log write.

◆ Applications can also use this technology.

◆ Power failures create dependent-write consistent images.

DBMS restart transforms a dependent-write consistent data state into a transactionally consistent data state.
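To make the dependent-write principle concrete, the following minimal Python sketch (a conceptual illustration only, not EMC code; all names in it are hypothetical) models a DBMS-style committed update in which the data-block write is issued only after its predecessor log write completes. Because the writes are ordered this way, any image that captures completed writes in order can contain a data block only if its log record is also present, which is exactly what makes such an image restartable.

# Conceptual illustration of the dependent-write principle (not EMC code;
# all names here are hypothetical).  A "volume" simply records completed writes.

class Volume:
    def __init__(self, name):
        self.name = name
        self.blocks = {}                # block id -> data

    def write(self, block, data):
        self.blocks[block] = data       # the write completes before the caller continues


def committed_update(log_vol, data_vol, txn_id, payload):
    """A DBMS-style update: the data write is issued only after the
    predecessor log write has completed (a logical, not a time, dependency)."""
    log_vol.write(("log", txn_id), "log record for " + payload)
    # Only now may the dependent write be issued:
    data_vol.write(("data", txn_id), payload)


log_vol, data_vol = Volume("Z (logs)"), Volume("X (database)")
committed_update(log_vol, data_vol, txn_id=1, payload="update to DB block 42")

# Any image that captures completed writes in order can hold the data block
# only if the predecessor log record is also present.
for txn_id in [1]:
    if ("data", txn_id) in data_vol.blocks:
        assert ("log", txn_id) in log_vol.blocks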

When the amount of data for an application becomes very large, the time and resources required for host-based software to protect, back up, or execute decision-support queries on these databases become critical. In addition, the time required to shut down those applications for offline backup is no longer acceptable, and alternative implementations are required. One alternative is SRDF Consistency Group technology, which allows users to remotely mirror the largest data environments and automatically create dependent-write consistent, restartable copies of applications in seconds without interruption to online services.

Disaster restart solutions that use consistency groups provide remote restart with short recovery time objectives. SRDF synchronous configurations provide zero data loss solutions using consistency groups. Zero data loss implies that all transactions completed at the beginning of a disaster will be available at the target storage system after restart.

An SRDF consistency group has two methodologies to preserve a dependent-write consistent image while providing a synchronous disaster restart solution with a zero data loss scenario. These two methodologies are described in the following two sections.


ConGroup using IOS and PowerPath

This methodology preserves a dependent-write consistent image using IOS on mainframe hosts and PowerPath® on open systems hosts. This method requires that all hosts have connectivity to all involved Symmetrix storage systems, either through direct connections or indirectly through one or more SAN configurations. These hosts are not required to have the logical devices visible; a path (gatekeeper) to each involved Symmetrix storage system is sufficient. The consistency group definition, software, and licenses must reside on all hosts involved in the consistency group. Both read and write I/Os are held with the IOS and PowerPath methodology.

ConGroup using SRDF/ECA

This is the preferred methodology with ConGroup. It preserves a dependent-write consistent image using SRDF Enginuity Consistency Assist (SRDF-ECA). This method requires a minimum of one host having connectivity to all involved Symmetrix storage systems, either through direct connections or indirectly through one or more storage network configurations. EMC recommends having at least two such hosts for redundancy purposes. In the event of a host failure, the second host can automatically take over control of the consistency functions. These hosts are referred to as control hosts, and are the only hosts required to have the consistency group definition, software, and licenses. During a ConGroup trip, SRDF-ECA defers writes to all involved logical volumes in the consistency group. Subsequent read I/Os are held per logical volume once the first write on that volume is deferred. This is done only for a short period of time while the consistency group suspends transfer operations to the secondary volumes.

EMC recommends that SRDF-ECA mode be configured when using a consistency group in a mixed mainframe and open systems environment with both CKD and FBA (fixed block architecture) devices.

A consistency group trip can occur either automatically or manually. Scenarios in which an automatic trip would occur include:

◆ One or more primary volumes cannot propagate writes to their corresponding secondary volumes.

◆ The remote device fails.

◆ The SRDF directors on either the primary or secondary Symmetrix storage systems fail.


In an automatic trip, the Symmetrix storage system completes the write to the primary volume, but indicates that the write did not propagate to the secondary volume. EMC software, combined with Symmetrix Enginuity, intercepts the I/O and instructs the Symmetrix storage system to suspend all primary volumes in the consistency group from propagating any further writes to the secondary volumes. Once the suspension is complete, writes to all primary volumes in the consistency group continue normally, but are not propagated to the target side until normal SRDF mirroring resumes.

An explicit trip occurs when a susp-cgrp (suspend ConGroup) command is invoked using SRDF Host Component software. Suspending the consistency group creates an on-demand, restartable copy of the database at the secondary site. BCV devices synchronized with the secondary volumes are then split after the consistency group is tripped, creating a second dependent-write consistent copy of the data. During the explicit trip, SRDF Host Component issues the command to create the dependent-write consistent copy, but may require assistance from either IOSLEVEL or SRDF-ECA by way of ConGroup software if I/O is received on one or more of the primary volumes, or if the Symmetrix commands issued are abnormally terminated before the explicit trip.

An SRDF consistency group maintains consistency within applications spread across multiple Symmetrix storage systems in an SRDF configuration by monitoring data propagation from the primary volumes in a consistency group to their corresponding secondary volumes. Consistency groups provide data integrity protection during a rolling disaster. The loss of an SRDF communication link is an example of an event that could be a part of a rolling disaster.

Figure 8 depicts a dependent-write I/O sequence where a predecessor log write happens before an IMS block write from a database buffer pool. The log device and data device are on different Symmetrix storage systems with different replication paths. Figure 8 also demonstrates how SRDF Consistency Group technology protects against rolling disasters.


Figure 8 SRDF Consistency Group using SRDF-ECA (X = DBMS data; Y = application data; Z = logs)

1. A consistency group is defined containing volumes X, Y, and Z on the source Symmetrix system. This consistency group definition must contain all of the devices that need to maintain dependent-write consistency and reside on all participating hosts involved in issuing I/O to these devices. A mix of CKD (mainframe) and FBA (UNIX/Windows) devices can be logically grouped together. In many cases, the entire processing environment may be defined in a single consistency group to ensure dependent-write consistency.

2. The rolling disaster described previously begins.

3. The predecessor log write occurs to volume Z, but cannot be replicated to the remote site.

4. Since the predecessor log write to volume Z cannot be propagated to the remote Symmetrix system, a consistency group trip occurs.

a. The source Symmetrix Enginuity captures the write I/O that initiated the trip event and defers all write I/Os to all logical devices within the consistency group on this Symmetrix system. The control host software is constantly polling all involved Symmetrix systems for such a condition.


b. Once a trip event is detected by the host software, an instruction is sent to all involved Symmetrix systems in the consistency group definition to defer all write I/Os for all logical devices in the group. This trip is not an atomic event. The process guarantees dependent-write consistency, however, because of the integrity of the dependent-write I/O principle. Because the host never receives completion for the deferred write (the predecessor log write), the DBMS does not issue the dependent I/O.

5. Once all of the involved Symmetrix storage systems have deferred the writes for all involved logical volumes of the consistency group, the host software issues a suspend action on the primary/secondary relationships for the logically grouped volumes, which immediately disables all replication of those grouped volumes to the remote site. Other volumes outside of the group are allowed to continue replicating, provided the communication links are available.

6. After the relationships are suspended, the completion of the predecessor write is acknowledged back to the issuing host. Furthermore, all I/Os that were held during the consistency group trip operation are released.

7. The dependent data write is issued by the DBMS and arrives at X but is not replicated to its secondary volume.

When a complete failure occurs from this rolling disaster, the dependent-write consistency at the secondary site is preserved. If a complete disaster does not occur, and the failed links are reactivated, consistency group replication can be resumed. EMC recommends creating a copy of the dependent-write consistent image while the resume takes place. After the SRDF process reaches synchronization, the dependent-write consistent copy is achieved at the remote site.
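The ordering in the steps above (defer writes on every grouped volume first, then suspend replication, and only then acknowledge the held predecessor write) is what preserves consistency even though the trip itself is not atomic. The following minimal Python sketch (a conceptual illustration only, not EMC code; all names are hypothetical) mimics that behavior for a two-volume group and shows that the remote image can never hold a dependent data block without its predecessor log record.

# Conceptual sketch of a consistency group trip (illustration only, not EMC
# code; all names are hypothetical).  When any grouped volume fails to
# propagate a write to its remote mirror, replication is suspended for the
# whole group before the held write is acknowledged, so no later (dependent)
# write can reach the remote side without its predecessor.

class GroupedVolume:
    def __init__(self, name, link_up=True):
        self.name = name
        self.link_up = link_up
        self.local = []                 # writes applied to the R1 (primary)
        self.remote = []                # writes propagated to the R2 (secondary)
        self.replicating = True

class ConsistencyGroup:
    def __init__(self, volumes):
        self.volumes = {v.name: v for v in volumes}

    def write(self, vol_name, data):
        vol = self.volumes[vol_name]
        vol.local.append(data)                      # the local write always lands
        if vol.replicating and vol.link_up:
            vol.remote.append(data)                 # normal synchronous mirroring
            return "complete"
        # Trip: first suspend replication for every volume in the group,
        # then acknowledge the held write back to the host.
        for v in self.volumes.values():
            v.replicating = False
        return "complete (group tripped, remote image frozen)"

# Rolling disaster: the log volume's link drops before the data volume's link.
log_vol = GroupedVolume("Z (logs)", link_up=False)
data_vol = GroupedVolume("X (data)", link_up=True)
group = ConsistencyGroup([log_vol, data_vol])

group.write("Z (logs)", "log record for txn 1")     # trips the group
group.write("X (data)", "data block for txn 1")     # dependent write, local only

# The remote image never holds the data block without its predecessor log record.
assert "data block for txn 1" not in data_vol.remote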

Restart in the event of a disaster or non-disasters

Two major circumstances require restartability or continuity of business processes as facilitated by SRDF: a true, unexpected disaster, and an abnormal termination of processes on which dataflow depends. Both circumstances require that a customer immediately deploy the proper resources and procedures to correct the situation. It is generally the case that an actual disaster is more demanding of all necessary resources in order to successfully recover and restart.


Disaster

In the event of a disaster, where the primary Symmetrix storage system is lost, it is necessary to run database and application services from the DR site. This requires a host at the DR site. The first action is to write-enable the secondary devices (R2s). At this point, the host can issue the necessary commands to access the disks.

After the data is available to the remote host, the IMS system may be restarted. IMS performs an implicit recovery when restarted by resolving in-flight transactions. Transactions that were committed, but not completed, are rolled forward and completed using the information in the active logs. Transactions that have updates applied to the database, but were not committed, are rolled back. The result is a transactionally consistent database.

Abnormal termination (not a disaster)

An SRDF session can be interrupted by any situation that prevents the flow of data from the primary site to the secondary site (for example, a software failure, network failure, or hardware failure).

EMC AutoSwap

EMC AutoSwap provides the ability to move (swap) workloads transparently from volumes in one set of Symmetrix storage systems to volumes in other Symmetrix storage systems without operational interruption. Swaps may be initiated either manually as planned events or automatically as unplanned events (upon failure detection):

◆ Planned swaps facilitate operations such as nondisruptive building maintenance, power reconfiguration, DASD relocation, and channel path connectivity reorganization.

◆ Unplanned swaps protect systems against outages in a number of scenarios. Examples include power supply failures, building infrastructure faults, air conditioning problems, loss of channel connectivity, entire DASD system failures, operator error, or the consequences of intended or unintended fire suppression system discharge.

AutoSwap, with SRDF and SRDF Consistency Group, dramatically increases data availability.


In Figure 9, swaps are performed concurrently, in conjunction with EMC Consistency Group, while application workloads continue. This option protects data against unforeseen events and ensures that swaps are unique, atomic operations that maintain dependent-write consistency.

Figure 9 AutoSwap before and after states

AutoSwap highlights

AutoSwap includes the following features and benefits:

◆ Testing on devices in swap groups to ensure validity of address-switching conditions. This supports grouping devices into swap groups and treats each swap group as a single-swap entity.

◆ Consistent swapping—Writes to the group are held during swap processing, ensuring dependent-write consistency to protect data and ensure restartability.

◆ Swap coordination across multiple z/OS images in a shared DASD or parallel sysplex environment. During the time when devices in swap groups are frozen and I/O is queued, AutoSwap reconfigures SRDF pairs to allow application I/O streams to be serviced by secondary SRDF devices. As the contents of UCBs are


swapped, I/O redirection takes place transparently to the applications. This redirection persists until the next Initial Program Load (IPL) event.

Use cases

AutoSwap supports the following use cases:

◆ Performs dynamic workload reconfiguration without application downtime.

◆ Swaps large numbers of devices concurrently.

◆ Handles device group operations.

◆ Relocates logical volumes.

◆ Performs consistent swaps.

◆ Implements planned outages of individual devices or entire subsystems.

◆ Reacts appropriately to unforeseen disasters if an unplanned event occurs.

◆ Protects against the loss of all DASD channel paths or an entire storage subsystem. This augments the data integrity protection provided by consistency groups by providing continuous availability in the event of a failure affecting the connectivity to a primary device.

TimeFinder family products for z/OS

For years, the TimeFinder family of products has provided important capabilities for the mainframe. TimeFinder was recently repackaged as shown in Figure 10, but the product's features and functions remain the same.


Figure 10 TimeFinder family of products for z/OS

TimeFinder/Clone for z/OS

TimeFinder/Clone for z/OS is documented as a component of the TimeFinder/Clone Mainframe SNAP Facility. It is the code and documentation associated with making full-volume snaps and dataset-level snaps. As such, they are space-equivalent copies. TimeFinder/Clone does not consume a mirror position, nor does it require the BCV flag for targets on the Symmetrix storage system. Certain TimeFinder/Mirror commands, such as Protected BCV Establish, are unavailable in the TimeFinder/Clone Mainframe Snap Facility. This is because TimeFinder/Clone uses pointer-based copy technology rather than mirror technology. Other protection mechanisms, such as RAID 5, are available for the target storage volumes as well.

Additional mainframe-specific capabilities of TimeFinder/Clone for z/OS include:

◆ Dataset-level snap operations.

◆ Differential snap operations. These require only the changed data to be copied on subsequent full-volume snaps or dataset snaps.

◆ Support for consistent snap operations. These make the target volumes dependent-write consistent and require the TimeFinder/Consistency Group product.


◆ Up to 16 simultaneous point-in-time copies of a single primary volume.

◆ Compatibility with STK Snapshot Copy and IBM snap products, including reuse of the SIBBATCH syntax.

TimeFinder Utility for z/OS

This utility conditions the catalog by relabeling and recataloging entries to avoid issues associated with duplicate volume names in the mainframe environment. This utility is also delivered with the TimeFinder/Mirror, TimeFinder/Clone, and TimeFinder/Snap products. TimeFinder Utility for z/OS offers:

◆ Compatibility with mainframe security mechanisms such as RACF.

◆ Integration with many mainframe-specific ISVs and their respective products.

TimeFinder/Snap for z/OS

TimeFinder/Snap for z/OS uses the code and documentation from TimeFinder/Clone, but with an important difference: snaps made with this product are virtual snaps, meaning they take only a portion of the space a full-volume snap would. Invocation of this feature is through the keyword VDEV (Virtual Device). If the VDEV argument is used, only the pre-update image of the changed data plus a pointer is kept on the target. This technique considerably reduces disk space usage for the target. This feature also provides for one or more named SNAPPOOLs that can be managed independently.
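The space saving comes from a copy-on-first-write approach: the virtual target references the source until a source track changes, at which point only the pre-update image of that track is preserved. The following minimal Python sketch (a conceptual illustration under that assumption, not EMC code; the names and track contents are invented) shows why only changed tracks consume snap space.

# Conceptual copy-on-first-write sketch of a virtual (VDEV) snap
# (illustration only, not EMC code; names and track contents are invented).

class VirtualSnap:
    def __init__(self, source):
        self.source = source            # the live source volume: track -> data
        self.preserved = {}             # pre-update images saved on first change

    def read(self, track):
        # A preserved pre-update image wins; otherwise the pointer simply
        # resolves back to the (possibly updated) source volume.
        return self.preserved.get(track, self.source[track])

    def source_write(self, track, new_data):
        # First update to a track since the snap: keep its pre-update image.
        if track not in self.preserved:
            self.preserved[track] = self.source[track]
        self.source[track] = new_data


volume = {0: "A", 1: "B", 2: "C"}       # source tracks at the time of the snap
snap = VirtualSnap(volume)
snap.source_write(1, "B-updated")       # production keeps updating the source

assert snap.read(1) == "B"              # the snap still shows the point-in-time image
assert snap.read(0) == "A"              # unchanged tracks are read through the pointer
assert len(snap.preserved) == 1         # only changed tracks consume snap space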

Mainframe-specific features of the TimeFinder/Snap for z/OS product include:

◆ The same code and syntax as the TimeFinder/Clone Mainframe SNAP Facility (plus the addition of the VDEV keyword).

◆ The same features and functions as the TimeFinder/Clone Mainframe SNAP Facility, and therefore the same benefits.

◆ Full-volume support only (no dataset support).

◆ Up to 128 simultaneous point-in-time SNAPs of a single primary volume.


◆ Duplicate Snap—Duplicate Snap provides the ability to create a point-in-time copy of a previously activated virtual device. This allows the creation of multiple copies of a point-in-time copy. This feature requires Mainframe Enablers V7.2 or later, and is manageable using Symmetrix Management Console V7.2 or later.

◆ ICF Catalog conditioning with TimeFinder Utility for z/OS—This allows relabeling and recataloging entries and avoids issues associated with duplicate volume names in the mainframe environment.

◆ Compatibility with mainframe security mechanisms such as RACF.

◆ Integration with many mainframe-specific ISVs and their respective products.

TimeFinder/Mirror for z/OS

TimeFinder/Mirror for z/OS provides BCVs and the means by which a mainframe application can manipulate them. BCVs are specially tagged logical volumes manipulated by using these TimeFinder/Mirror commands: ESTABLISH, SPLIT, RE-ESTABLISH, and RESTORE.

Mainframe-specific features of the TimeFinder/Mirror product include:

◆ The TimeFinder Utility for z/OS, which conditions the VTOC, VTOCIX, the VVDS, and the ICF catalog by re-labeling and re-cataloging entries, thereby avoiding issues associated with duplicate volume names and dataset names.

◆ The ability to create dependent-write consistent BCVs locally or remotely (with the plug-in module called TimeFinder/Consistency Group) without the need to quiesce production jobs.

◆ BCV operations important to IT departments include:

• Using the BCV as the source for backup operations.

• Using the BCV for test LPARs with real data. The speed with which a BCV can be rebuilt means that multiple test cycles can occur rapidly and sequentially. Applications can be staged using BCVs before committing them to the next application refresh cycle.


• Using the BCV as the source for data warehousing applications rather than the production volumes. Because the BCVs are a point-in-time mirror image of the production data, they can be used as golden copies of data to be written and rewritten repeatedly.

◆ The use of SRDF/Automated Replication.

◆ The support for mainframe TimeFinder queries, including the use of wildcard matching.

◆ The compatibility with mainframe security mechanisms such as RACF.

◆ The integration of DBMS utilities available from ISVs and their products.

◆ The integration of many mainframe-specific ISVs and their products.

On the Symmetrix VMAX platform, all TimeFinder/Mirror operations are emulated in Enginuity and converted to TimeFinder/Clone operations transparently.

TimeFinder/CG

TimeFinder/CG (Consistency Group) is a plug-in module for the TimeFinder/Mirror, TimeFinder/Clone, and TimeFinder/Snap products. TimeFinder/CG provides consistency support for various TimeFinder family commands. TimeFinder/CG is licensed separately and uses a Licensed Feature Code implementation model.

TimeFinder/CG allows the use of the CONSISTENT(YES) parameter on TimeFinder/Clone and TimeFinder/Snap, as well as the CONS parameter on TimeFinder/Mirror SPLIT statements. This allows TimeFinder to create an instantaneous point-in-time copy of all the volumes being copied. The copy thus created is a dependent-write consistent copy that is in a state very similar to that which is created during a power outage. If an IMS system is copied this way, a restartable image is created.

TimeFinder/CG invokes ECA (Enginuity Consistency Assist) to hold I/Os while the copy is taken. There is little or no effect on the host application or database during this time.


Consistent dataset snap

The consistent dataset snap capability for z/OS, offered with Enginuity 5875, is unique to EMC. Consistent dataset snap allows the user to obtain a dependent-write consistent image of multiple IMS datasets using the TimeFinder/Clone MF Snap Facility SNAP DATASET function.

Without this capability, inter-dataset consistency is only achievable by requesting an exclusive enqueue on the source datasets, which is very impractical in production IMS environments. The larger the production environment is, the more impractical it becomes. In addition, the enqueue approach does not provide for cross-sysplex consistency, another consideration in very large environments.

Other factors prohibiting this function in the past were that the SNAP dataset approach operated on a single dataset at a time, and the Enginuity extent snap took far too long to be executed under an Enginuity Consistency Assist (ECA) window. A further complication was that the extent snap did not allow for the separation of the establish and activate phases of the snap of a dataset (although this was allowed for a full volume). This separation was critically needed and has now been provided.

Thus, the new consistent extent snap feature of Enginuity 5875 provides the ability to do separate establish and activate extent-level snap operations, which in turn allows dataset snap processing on an entire group of datasets such that dependent-write consistency is ensured across the resulting group of target datasets.

A separate ACTIVATE statement can now follow SNAP DATASET statements, and CONSISTENT(YES) is now allowed on ACTIVATE, as documented in detail in the EMC TimeFinder/Clone Mainframe SNAP Facility Product Guide.


IMS considerations for ConGroup

Volumes in the ConGroup device list for the IMS group must include all volumes where IMS system data and all other application databases reside. The IMS system datasets whose devices must be included are:

◆ RECON1, RECON2, RECON3—RECON copies + spare

◆ DFSOLPxx—Primary IMS log datasets

◆ DFSOLSxx—Secondary IMS log datasets

◆ DFSWADSx—WADS datasets + spares

◆ RDS—Restart dataset

◆ MODSTAT—Active library list

◆ MSDBCPx—Fast Path MSDB checkpoint datasets

◆ MSDBINIT—Fast Path MSDB input

◆ MSDBDUMP—Fast Path MSDB output

◆ IMSACB—Database and program descriptors (active, inactive, and staging)

◆ MODBLKS—Database, programs, and transactions (active, inactive, and staging)

◆ MATRIX—Security tables (active, inactive, and staging)

◆ FORMAT—MFS maps (active, inactive, and staging)

◆ IMSTFMT—Test MFS maps (active, inactive, and staging)

◆ DBDLIB—DBD control blocks

◆ PSBLIB—PSB control blocks

◆ LGMSG, LGMSGx—Long message queues

◆ SHMSG, SHMSGx—Short message queues

◆ QBLKS—Message queue blocks

◆ RESLIB, MDALIB—IMS STEPLIB libraries

◆ PROCLIB—IMS procedure library

◆ All user and vendor databases

◆ SLDS, RLDS if long-running applications are a concern

◆ ICF user catalogs for system or user data


◆ System runtime libraries

◆ User program libraries

EMC z/OS Storage Manager

The EMC z/OS Storage Manager (EzSM) is a mainframe software product providing storage management in a Symmetrix environment. EzSM provides mainframe storage managers and operations staff a flexible, z/OS-centric view of storage that presents both Symmetrix-specific information and z/OS storage management data in a single easy-to-use 3270 interface.

With EzSM, users can discover and monitor the volumes in a Symmetrix controller, set alerts for volumes, summarize Symmetrix configuration information, and much more. EzSM is installed with SMP/E (System Modification Program/Extended), an element of z/OS that is used to install most software products. EzSM logs user activity and records changes in SMP. Standard security packages and z/OS messaging are used, which allows customer automation packages to filter EzSM messages. Figure 11 is a logical view of EzSM functionality.

Figure 11 EMC z/OS Storage Manager functionality


Symmetrix Management Console

Symmetrix Management Console (SMC) employs a simple and intuitive Web-based user interface to administer the most common daily storage management functions for the Symmetrix array. The intention is that SMC can be used quickly and efficiently by operators of all experience levels.

Members of the mainframe community typically participate in a structured change control process to manage their storage environment with certainty and stability. When using the SMC, mainframe storage administrators can avoid consultation with EMC personnel on array change control activities and perform the actions themselves, removing one level of complexity in the change control process. It is anticipated that changes can be enacted in a more timely fashion, and communication errors avoided, when SMC is used by authorized customer administrators to directly perform array modifications.

SMC puts control of the following array activities into the hands of the mainframe storage administrator:

◆ Device creation and removal

◆ Device base and alias addressing

◆ Local and remote replication

◆ Quality of service

◆ Replication and quality-of-service monitoring

◆ Management of FAST policies and configurations

SMC is designed to deliver Symmetrix array management that is responsive to user controls and modest in its server resource requirements. As a consequence of this design mandate, one SMC instance is recommended for controlling a maximum of 64 K devices. Some mainframe sites may require several SMC instances to provide management coverage for the entire storage pool. Each SMC instance, however, shares a server with other applications, and each instance remains quick, light, and independent.

SMC is intended to make array management faster and easier. Using dialog boxes structured into task wizards, SMC accelerates setup, configuration, and routine tasks. By providing simplified replication management and monitoring, SMC delivers ease of use that translates into efficient operation. Finally, managing for the future,


SMC makes new functionality available in the same simple, intuitive manner, greatly lessening the learning curve necessary to implement any new technology and functionality.

With SMC, the mainframe user community now has an additional choice in Symmetrix array management. This choice is a tool that is easy to deploy, simplifies complex tasks through structured templates and wizards, delivers responsive user interaction, and readily integrates the tasks and changes of tomorrow.

Note: In May 2012, SMC merged into a new GUI product named Unisphere®.

Symmetrix Performance Analyzer

EMC Symmetrix Performance Analyzer (SPA) is an intuitive, browser-based tool used to perform historical trending and analysis of Symmetrix array performance data. SPA was developed to work with the Symmetrix Management Console (SMC). The SPA interface can open in its own web window from the SMC menu or on its own. SPA adds an optional layer of data collection, analysis, and presentation tools to the SMC implementation. You can use SPA to:

◆ Set performance thresholds and alerts

◆ View high frequency metrics as they become available

◆ Perform root cause analysis

◆ View graphs detailing system performance

◆ Drill down through data to investigate issues

◆ Monitor performance and capacity over time

SPA also provides a fast lane to display possible performance roadblocks with one click, and includes export and print capability for all data graphs.

Note: In May 2012, SMC and SPA merged into a new GUI product named Unisphere.


Unisphere for VMAX

Available since May 2012, EMC Unisphere for VMAX replaces Symmetrix Management Console (SMC) and Symmetrix Performance Analyzer (SPA). With Unisphere for VMAX, customers can provision, manage, monitor, and analyze VMAX arrays from one console, significantly reducing storage administration time.

Unisphere for VMAX offers big-button navigation and streamlined operations to simplify and reduce the time required to manage data center storage. Unisphere for VMAX simplifies storage management under a common framework. You can use Unisphere to:

◆ Perform configuration operations

◆ Manage volumes

◆ Perform and monitor local and remote replication functions

◆ Monitor VMAX alerts

◆ Manage Fully Automated Storage Tiering (FAST and FAST VP)

◆ Manage user accounts and their roles

Unisphere for VMAX provides a single GUI for centralized management of your entire VMAX storage environment. This includes:

Configuration—Volume creation, setting VMAX and volume attributes, setting port flags, and creating SAVE volume pools. Change volume configuration, set volume status, and create or dissolve metavolumes.

Performance—Performance data previously available through Symmetrix Performance Analyzer (SPA) is now included in Unisphere for VMAX. Monitor, analyze, and manage performance settings such as thresholds, alerts, metrics, and reports.

Replication monitoring—View and manage TimeFinder sessions, including session controls, details, and modes. View and manage SRDF groups and pools.

Usability—Provides a simple, intuitive graphical user interface for array discovery, monitoring, configuration, and control of Symmetrix subsystems. A single pane of glass provides a view of Symmetrix arrays. From a dashboard view (that provides an overall view of the VMAX environment), you can drill down into the array dashboard. The array dashboard provides a summary of the physical and virtual


capacity for that array. Additionally, the alerts are summarized here, and a user can further drill down into the alerts to get more information.

Security—Unisphere for VMAX supports the following types of authentication: Windows, LDAP, and local Unisphere users. User authentication and authorization use defined user roles, such as StorageAdmin, Auditor, SecurityAdmin, and Monitor.

Virtual Provisioning

The Enginuity 5876 release for Symmetrix VMAX arrays delivers Virtual Provisioning™ for CKD devices. After several years of successful deployment in open systems (FBA) environments, mainframe VMAX users now have the opportunity to deploy thin devices for IMS and other mainframe applications. Virtual Provisioning for IMS subsystems is described in detail in Chapter 3.

Fully Automated Storage Tiering

The Enginuity 5874 service release for Symmetrix VMAX arrays provides the mainframe user Fully Automated Storage Tiering (FAST) capability. Using FAST, whole IMS devices can be moved from one storage type to another storage type based on user-defined policies. Device movement plans are automatically recommended (and optionally executed) seamlessly by the FAST controller in the storage array. FAST operates at the granularity of a Symmetrix logical device, so it is always a full device that is migrated between storage types (drive types and RAID protections).

Enginuity 5876 provides FAST functionality at the sub-volume and sub-dataset level. When an IMS subsystem has been built on thin devices using Virtual Provisioning, it can take advantage of this feature. The 5876 Enginuity feature is called Fully Automated Storage Tiering for Virtual Pools (FAST VP). This feature is described in more detail in Chapter 3.

The Unisphere graphical user interface is used to manage FAST and FAST VP operations. At this time, there is no specific batch job control to automate the functions of data movement.


Data at Rest Encryption

Enginuity 5875 provides support for Symmetrix Data at Rest Encryption (Data Encryption) on all drive types supported by the Symmetrix VMAX system. The Data Encryption feature is available for new VMAX systems shipping with Enginuity 5875. It is not available as an upgrade option for existing VMAX systems (installed prior to the availability of Enginuity 5875), nor can it be made available through the RPQ process. This feature makes use of some special new hardware at the device adapter (DA) level. Further, this capability at the DA level to provide encryption for all drives controlled by that director represents a competitive advantage, since other enterprise-class subsystems require that special drives be used to achieve encryption.

There are no user controls to enable or disable Data Encryption in a VMAX subsystem.

Data at Rest Encryption (DARE) adds Data Encryption capability to the back end of the VMAX subsystem, and adds encryption key management services to the service processor. Key management is provided by the RSA® Key Manager (RKM) "lite" version that can be installed on the VMAX service processor by the Enginuity installer program. This new functionality for the service processor consists of the RKM client and embedded server software from RSA.


Advanced Storage Provisioning

This chapter describes the advanced storage provisioning techniques that were introduced with Enginuity 5876:

◆ Virtual Provisioning

◆ Fully Automated Storage Tiering

Virtual Provisioning

Enginuity 5876 includes significant enhancements for mainframe users of the Symmetrix VMAX array that rival in importance the original introduction of the first Symmetrix Integrated Cached Disk Array in the early 1990s. After several years of successful deployment in open systems (FBA) environments, mainframe VMAX users now have the opportunity to deploy Virtual Provisioning and Fully Automated Storage Tiering for Virtual Pools (FAST VP) for count key data (CKD) volumes.

This chapter describes the considerations for deploying an IMS for z/OS database using Virtual Provisioning. An understanding of the principles that are described here will allow the reader to deploy IMS for z/OS databases on thin devices in the most effective manner.

Terminology

The Virtual Provisioning for mainframe feature brings with it some new terms that may be unfamiliar to mainframe practitioners. Table 1 describes these new terms, which are used extensively throughout this chapter.

Table 1 Virtual Provisioning terms

Device—A logical unit of storage defined within a Symmetrix array.

Device capacity—The actual storage capacity of a device.

Device extent—The size of the smallest contiguous region of a device for which an extent mapping can occur.

Host-accessible device—A device that is presented on a FICON channel for host use.

Internal device—A device used for internal function of the array.

Storage pool—A collection of internal devices for some specific purpose.

Thin device—A host-accessible device that has no storage directly associated with it.

Data device—An internal device that provides storage capacity to be used by a thin device.

Extent mapping—Specifies the relationship between the thin device and data device extents. The extent sizes between a thin device and a data device do not need to be the same.

Thin pool—A collection of data devices that provide storage capacity for thin devices.

Thin pool capacity—The sum of the capacities of the member data devices.

Bind—The process by which one or more thin devices are associated to a thin pool.

Unbind—The process by which a thin device is disassociated from a given thin pool. When unbound, all previous extent allocations from the data devices are erased and returned for reuse.

Enabled data device—A data device belonging to a thin pool on which extents can be allocated for thin devices bound to that thin pool.

Disabled data device—A data device belonging to a thin pool from which capacity cannot be allocated for thin devices. This state is under user control. If a data device has existing extent allocations when a disable operation is executed against it, the extents are relocated to other enabled data devices with available free space within the thin pool.

Thin pool enabled capacity—The sum of the capacities of enabled data devices belonging to a thin pool.

Thin pool allocated capacity—A subset of thin pool enabled capacity that has been allocated for the exclusive use of all thin devices bound to that thin pool.

Thin pool pre-allocated capacity—The initial amount of capacity that is allocated when a thin device is bound to a thin pool. This property is under user control. For CKD thin volumes, the amount of pre-allocation is either 0% or 100%.

Thin device minimum pre-allocated capacity—The minimum amount of capacity that is pre-allocated to a thin device when it is bound to a thin pool. This property is not under user control.

Thin device written capacity—The capacity on a thin device that was written to by a host. In most implementations, this is a subset of the thin device allocated capacity.

Thin device subscribed capacity—The total capacity that a thin device is entitled to withdraw from a thin pool, which may be equal to or less than the thin device capacity.

Thin device allocation limit—The capacity limit that a thin device is entitled to withdraw from a thin pool, which may be equal to or less than the thin device subscribed capacity.


Overview

Virtual Provisioning brings a new type of device into the mainframe environment called a thin device. Symmetrix thin devices are logical devices that can be used in many of the same ways that standard Symmetrix devices have traditionally been used. Unlike traditional Symmetrix devices, thin devices do not need to have physical storage completely allocated at the time the device is created and presented to a host. The physical storage that is used to supply disk space to thin devices comes from a shared storage pool called a thin pool. The thin pool is composed of devices called data devices that provide the actual physical storage to support the thin device allocations.

Virtual Provisioning brings many benefits:

◆ Balanced performance

◆ Better capacity utilization

◆ Ease of provisioning

◆ Simplified storage layouts

◆ Simplified database allocation

◆ Foundation for storage tiering

It is possible to over-provision thin storage volumes using Virtual Provisioning. Over-provisioning means that more space is presented to the mainframe host than is actually available in the underlying storage pools. Customers may decide to over-provision storage because it simplifies the provisioning process, or because they want to improve capacity utilization, or both. In either case, when an over-provisioning configuration is being used, careful monitoring of the storage pools is required to ensure that they do not reach 100 percent utilization. In addition, processes to reclaim storage that has been used and then deleted need to be executed to reclaim capacity and return it to the pool.

If over-provisioning is not needed, customers simply provide thindevice capacity that matches the pool capacity. With this kind ofimplementation neither pool monitoring nor space reclamation arenecessary.
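As a simple illustration of the arithmetic behind over-provisioning (the device counts and pool size below are hypothetical), the subscription rate is the total thin device capacity presented to z/OS divided by the enabled capacity of the thin pool:

  Thin devices presented to z/OS:  100 x 3390 mod-9 (about 8.5 GB each) = about 850 GB
  Thin pool enabled capacity:                                             425 GB
  Subscription rate:               850 GB / 425 GB                      = 200 percent

In this example the pool is 200 percent subscribed, so monitoring and space reclamation are required; a pool of 850 GB or more for the same devices would not be oversubscribed.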

Virtual Provisioning operations

When a write is performed to a part of the thin device for which physical storage has not yet been allocated, the Symmetrix subsystem allocates physical storage from the thin pool only for that portion of the thin device. The Symmetrix operating environment, Enginuity, satisfies the requirement by providing a block of storage from the thin pool, called a thin device extent. This approach reduces the amount of storage that is actually consumed.

The thin device extent is thus the minimum amount of physical storage that can be reserved at a time for the dedicated use of a thin device.

The entire thin device extent is physically allocated to the thin device at the time the thin storage allocation is made. The thin device extent is allocated from any one of the data devices in the associated thin pool.

When a read is performed on a thin device, the data being read is retrieved from the appropriate data device in the thin pool to which the thin device is associated. If, for some reason, a read is performed against an unallocated portion of the thin device, standard record 0 is returned.

When more physical storage is required to service existing or future thin devices (for example, when a thin pool is approaching maximum usage), data devices can be added dynamically to existing thin pools without the need for a system outage. A rebalance process can be executed to ensure that the pool capacities are balanced, providing an even distribution across all the devices in the pool, both new and old. New thin devices can also be created and associated with existing thin pools.


When data devices are added to a thin pool, they can be in an enabled or disabled state. In order for the data device to be used for thin extent allocation, it needs to be in the enabled state. For it to be removed from the thin pool, it needs to be in the disabled state. Disabling a data device with active thin device allocations is the first step to removing the device from the pool. The second step is to drain it, which causes all the active extents on the device to be transferred to other enabled devices in the pool. When fully drained, the data device can be removed from the pool and utilized for other purposes.

Figure 12 depicts the relationships between thin devices and their associated thin pools. The host-visible thin devices are presented on a channel and have UCB addresses. The storage for the thin devices is provisioned from the thin device pool. The data devices in the thin device pool are created from RAID ranks in the Symmetrix VMAX array. The data devices do not have an address on the channel.

Figure 12 Thin devices, pools, and physical disks

The way thin extents are allocated across the data devices results in a form of striping in the thin pool. The more data devices that are in the thin pool, the wider the striping, and the greater the number of devices that can participate in application input/output (I/O). The thin extent size for CKD devices is twelve 3390 tracks.

Requirements

Virtual Provisioning for CKD devices requires Enginuity code level 5876 or later. To create and manage thin pools, Mainframe Enablers V7.4 or later is required. Unisphere™ can also be used to manage Virtual Provisioning components. Thin device pools can only be created by the user; they cannot be created during the bin file (VMAX configuration) creation process. Thin pools cannot use RAID 10 protection when using CKD format. RAID 1, RAID 5, or RAID 6 protection is available for devices in the thin pool.

Host allocation considerations

Host dataset and volume management activities behave in subtly different ways when using Virtual Provisioning. These behaviors are usually invisible to end users. However, the introduction of thin devices can put these activities in a new perspective. For example, the amount of consumed space on a volume, as seen through ISPF 3.4, may be different from the actual consumed space in the thin pool. From a Virtual Provisioning point of view, a thin device could be using a substantial number of thin extents, even though the space is showing as available to the operating system. Alternatively, the volume may look completely full from a VTOC point of view and yet may be consuming minimal space from the thin pool.

These situations and perspectives are new with Virtual Provisioning. A clear understanding of them is necessary for a successful deployment of a thinly provisioned IMS subsystem. An awareness of how IMS and z/OS behave with this technology can lead to an educated choice of host deployment options and can yield maximum value from the Virtual Provisioning infrastructure.

Balanced configurations

With Virtual Provisioning on CKD volumes, it is now possible to achieve the ideal storage layout: balanced capacity and balanced performance. Before this technology, achieving that goal was nearly impossible. SMS does its best to balance capacity, but without regard for performance. The result is the heavily skewed configurations that EMC encounters regularly, where 20 percent of the IMS volumes perform 80 percent of the workload.

Deploying a skewed IMS subsystem on thin devices still leaves the access to the thin devices skewed, but the storage layer is completely balanced due to the wide striping of the thin volumes across the pool. So now it is possible to have the best of both worlds: better utilization and better performance, since bottlenecks at the disk layer have been removed.

Pool management

Fully provisioning volumes

It is possible to fully provision a z/OS volume when binding it to a thin pool. This means that all the tracks of the thin volume are allocated immediately from the thin pool. When this approach is used for all thin volumes assigned to a thin pool, and the pool contains more space than the aggregate total of thin device capacity, oversubscription is avoided. This method of provisioning requires that the capacity of the thin pool is at least as large as the sum of the capacities of all the thin devices bound to the pool.

Fully provisioning thin volumes has the significant advantage of wide striping the thin volumes across all the devices in the thin pool (a benefit that should be utilized with large volumes, such as MOD27s, MOD54s, and EAVs).

To prevent space-reclamation routines from returning free space that results from deleting files, the thin devices must be bound to the pool using the persistent option.

Oversubscription

Oversubscription is a key value of EMC Virtual Provisioning. Oversubscription allows storage administrators to provision more storage to the end user than is actually present. It is a common practice to request more storage than is actually needed simply to avoid the administrative overhead of the requisition/provisioning process. In many cases, this additional storage is never used. When this is the case, oversubscription can reduce your overall Total Cost of Ownership (TCO).

When implementing IMS systems with oversubscription, it is very important to manage the capacity of the thin pool. As the thin pool starts to fill up, more storage has to be added to the pool. If a pool fills completely and there is no more space to allocate to a write, IMS receives I/O errors on the dataset, which is clearly an undesirable situation.


Messages are written to SYSLOG indicating the thin pool utilization level after certain predefined thresholds have been exceeded. Automation can be used to track these messages and issue the appropriate alerts.

Adding storage to a pool and rebalancing

When the pool thresholds are exceeded (or at any other time), data devices may be added to the thin pool to increase capacity. When devices are added, an imbalance is created in the device capacity utilization: some volumes may be nearly full, while the new volumes will be empty.

Figure 13 shows the four volumes that comprise a thin pool, each at 75 percent of capacity:

Figure 13 Thin device pool with devices filling up

Adding two additional volumes to the pool and rebalancing the capacity, as shown in Figure 14, results in a capacity redistribution:

Figure 14 Thin device pool rebalanced with new devices

Note that the rebalancing activity is transparent to the host and executes without using any host I/O or CPU. As a best practice, do not add just two volumes to the pool; add enough devices to avoid frequent rebalancing efforts.


Space reclamation

Space reclamation is the process of returning disk space to the pool after it is no longer in use. For example, when an IMS online REORG takes place, a shadow copy of the IMS database is created and the original location of the database is eventually deleted. Even though it has been deleted from a z/OS perspective, and the space shows as available when viewing the VTOC, the space is still consumed in the thin pool.

The management of the space reclaim function is not usually in the domain of the DBA; however, it is still instructive to understand how it works.

Space reclamation is performed by the Thin Reclaim Utility (TRU), which is a part of Mainframe Enablers V7.4 and later. For thin devices that are not bound with the PERSIST and PREALLOCATE attributes, TRU enables the reclamation of the thin device track groups for reuse within the virtual pool by other thin devices. It does this by first identifying the free space in the VTOC, initially by way of a scan function, then on an ongoing basis by way of the z/OS scratch exit. It then periodically performs a reclaim operation, which marks tracks as empty in the array (no user records, only standard R0). The Symmetrix zero reclaim background task then returns these empty track groups to the free list in the virtual pool.

Note that this function only applies to volumes that have been thin provisioned in the Symmetrix subsystem. Volumes that are fully pre-allocated and marked persistent are not eligible for reclamation.

Thin device monitoring

When over-provisioning has been used, that is, when the aggregate size of the thin devices exceeds the size of the thin pool, it is mandatory that some kind of pool monitor is active to alert on pool threshold conditions. The Symmetrix Control Facility actively monitors thin pools and sends alerts to SYSLOG based on certain percentage-full conditions of the thin pool. Based on these messages, the storage administrator can add more devices to the thin pool as needed.


Considerations for IMS components on thin devices

There are certain aspects of deploying an IMS subsystem using Virtual Provisioning that the IMS DBA should be aware of. How the various components work in this kind of configuration depends on the component itself, its usage, and, sometimes, the underlying host data structure supporting the component. This section describes how the various IMS components interact with Virtual Provisioning devices.

OLDS and WADS

IMS log files are formatted by the DBA as a part of the subsystem creation process. Every page of the log files is written to at this time, meaning that the log files become fully provisioned when they are initialized and will not cause any thin extent allocations after this.

One thing to remember is that the active log files become striped across all the devices in the thin pool. Therefore, no single physical disk will incur the overhead of all the writes that come to the logs. If the standards for a particular site require that logs be separated from the data, this can be achieved by creating a second thin pool that is dedicated to the active log files.

For better IMS performance, it is recommended to VSAM stripe the IMS log files. This recommendation holds true even if the IMS logs are deployed on thin devices.

Replication considerations

Thin devices behave in exactly the same way as standard Symmetrix devices with regard to both local and remote replication. Thus, a thin device can be either a source or a target volume for a TimeFinder replication process, and it can be either an R1 or an R2 in an SRDF configuration. There are no specific considerations related to thin devices using either of these two replication functions.

Migration from thick to thin

When a customer decides to use Virtual Provisioning for the first time in a mainframe environment, and needs to migrate existing IMS subsystems to thin devices, a question arises: What is the best way to migrate from thick to thin?


Table 2 and Table 3 show different utilities that can be used and the particular considerations with each method (a simple host-based example follows the tables).

Table 2 Thick-to-thin host migrations (disruptive = application access must be interrupted)

DFDSS, FDRDSF, other utilities (disruptive): Offers dataset extent consolidation.

DFSMS re-allocation (disruptive): Redefinition of batch datasets results in migration to a new SMS group of thin devices, with volume selection changes in ACS routines.

EMC z/OS Migrator (non-disruptive): Volume and dataset level in a single product; smaller-to-larger volume REFVTOC performed.

TDMF (ZDMF) (non-disruptive): TDMF is a volume-level product and ZDMF is a dataset-level product; smaller-to-larger volume REFVTOC performed.

FDRPAS (FDRMOVE) (non-disruptive): FDRPAS is a volume-level product and FDRMOVE is a dataset-level product; smaller-to-larger volume REFVTOC performed.

Table 3 Thick-to-thin array-based migrations (disruptive = application access must be interrupted)

EMC SRDF/Data Mobility (Adaptive Copy) (disruptive): Check the EMC Support Matrix for supported Enginuity code levels.

EMC TimeFinder/Clone, SNAP VOLUME (disruptive)

EMC TimeFinder/Clone, SNAP DATASET (disruptive)
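As an illustration only of the host-based approach in Table 2, the following DFDSS (ADRDSSU) sketch copies a group of IMS database datasets into an SMS storage class whose storage group is built on thin devices. The dataset filter, storage class name, and the decision to delete the source are hypothetical, and the affected databases must be stopped (/DBR) for the duration of the copy:

//THIN2TH  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY DATASET(INCLUDE(IMSP.CUSTDB.**)) -
       STORCLAS(SCTHIN) -
       DELETE CATALOG
/*

Here STORCLAS(SCTHIN) lets the SMS ACS routines direct the copies to the thin storage group, DELETE removes the source datasets after a successful copy, and CATALOG recatalogs the targets. The same movement could equally be done non-disruptively with z/OS Migrator, TDMF, or FDRPAS, as noted in the table.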


Performance considerations

Virtual Provisioning provides balanced performance due to the wide striping and balanced capacity in the thin pool. With the small chunk size of 12 tracks, it is highly unlikely that any workload skew will be visible on a particular disk. This drives the performance capability of the system up to much higher levels than can be achieved on normally provisioned systems.

There are some other considerations that need to be understood when deploying IMS using Virtual Provisioning.

Chunk size and sequential processing

The Virtual Provisioning chunk size of 12 tracks is the effective stripe size across the thin pool. This allows for more balanced performance, as mentioned above. One effect of this striping is to make IMS pages that are adjacent on the thin device non-adjacent on the actual physical disks. This has the potential to de-optimize IMS sequential activity on the array because the disk is unable to go into streaming mode. A single user performing read-ahead functions, such as OSAM Sequential Buffering, against a traditionally provisioned device will complete a scan faster than the equivalent scan on a thin device.

However, it is very important to understand that it is extremely rare to have a single user or process owning a disk. On busy systems, the disk is shared among many users and processes, and the disk/head mechanism services many different access extents on these very large physical disks. This by itself defeats the ability of the disk to stream reads to the host, so the random layout of the chunks on the DASD is not a significant problem in busy environments.

Virtual Provisioning overhead

When a block is read from or written to a thin device, Enginuity examines the metadata (pointers) for the thin device to determine the location of the appropriate block in the thin pool. There is a slight cost in I/O response time for this activity. In addition, when a new chunk is allocated from the thin pool, there is also additional code that needs to be executed. Both of these overheads are extremely small when compared to the overall performance of a widely striped IMS subsystem that runs with no disk-storage bottlenecks.


Fully Automated Storage Tiering

This section provides information for IMS for z/OS and FAST VP deployments and also includes some best practices regarding implementation of IMS for z/OS with FAST VP configurations.

Introduction

FAST VP is a dynamic storage tiering solution for the VMAX Family of storage controllers that manages the movement of data between tiers of storage to maximize performance and reduce cost. Volumes that are managed by FAST VP must be thin devices.

Delivered with Enginuity 5876, FAST VP non-disruptively moves sets of 10 track groups (6.8 MB, that is, ten of the 12-track thin device extents) between storage tiers automatically at the sub-volume level in response to changing workloads. It is based on, and requires, virtually provisioned volumes in the VMAX array.

EMC determined the ideal chunk size (6.8 MB) from analysis of 50 billion I/Os provided to EMC by customers. A smaller size increases the management overhead to an unacceptable level. A larger size increases the waste of valuable and expensive Enterprise Flash drive (EFD) space by moving data to EFD that is not active. Tiering solutions using larger chunk sizes require a larger capacity of solid-state drives, which increases the overall cost.

FAST VP fills a long-standing need in z/OS storage management: active performance management of data at the array level. It does this very effectively by moving data in small units, making it both responsive to the workload and efficient in its use of control-unit resources.

Such sub-volume, and more importantly, sub-dataset, performance management has never been available before and represents a revolutionary step forward by providing truly autonomic storage management.

As a result of this innovative approach, compared to an all-Fibre Channel (FC) disk drive configuration, FAST VP can offer better performance at the same cost, or the same performance at a lower cost.


FAST VP also helps users reduce DASD costs by enabling exploitation of very high capacity SATA technology for low-access data, without requiring intensive performance management by storage administrators.

Most impressively, FAST VP delivers all these benefits without using any host resources whatsoever.

FAST VP uses three constructs to achieve this:

FAST storage group

A collection of thin volumes that represent an application or workload. These can be based on SMS storage group definitions in a z/OS environment.

FAST policy

The FAST VP policy contains rules that govern how much capacity of a storage group (in percentage terms) is allowed to be moved into each tier. The percentages in a policy must total at least 100 percent, but may exceed 100 percent. This may seem counter-intuitive but is easily explained. Suppose you have an application for which you want FAST VP to determine exactly where the data needs to be, without constraints; you would create a policy that permits 100 percent of the storage group to be on EFD, 100 percent on FC, and 100 percent on SATA. This policy totals 300 percent and is the least restrictive policy that you can create. Most likely you will constrain how much EFD and FC a particular application is able to use but leave SATA at 100 percent for inactive data.

Each FAST storage group is associated with a single FAST policy definition.

FAST tier

A collection of up to four virtual pools with common drive technology and RAID protection. At the time of writing, the VMAX array supports four FAST tiers.

Figure 15 depicts the relationship between VP and FAST VP in the VMAX Family arrays. Thin devices are grouped together into storage groups. Each storage group is usually mapped to one or more applications or IMS subsystems that have common performance characteristics.


Figure 15 FAST VP storage groups, policies, and tiers

A policy is assigned to the storage group that denotes how much of each storage tier the application is permitted to use. The figure shows two IMS subsystems, IMSA and IMSB, each with a different policy.

IMSA

This has a policy labeled Optimization, which allows IMSA to have its storage occupy up to 100 percent of the three assigned tiers. In other words, there is no restriction on where the storage for IMSA can reside.

IMSB

This has a policy labeled Custom, which forces an exact amount of storage for each tier. This is the most restrictive kind of policy that can be used and is effected by making the total of the allocations equal 100 percent.

More details on FAST VP can be found in the white paper Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays.

Best practices for IMS and FAST VP

In this section, some best practices are presented for IMS for z/OS in a FAST VP context. IMS can automatically take advantage of the advanced dynamic and automatic tiering provided by FAST VP without any changes. However, there are some decisions that need to be made at setup time with respect to the performance and capacity requirements on each tier. There is also the setup of the storage group, as well as the time windows and some other additional parameters. All of these settings can be configured using Unisphere for VMAX.

Unisphere for VMAX

Unisphere for VMAX is used to manage all the necessary components to enable FAST VP for IMS subsystems. While details on the use of Unisphere are beyond the scope of this TechBook, the following parameters need to be understood to make an informed decision about the FAST VP setup.

Storage groups

When creating a FAST VP storage group (not to be confused with an SMS storage group), you should select thin volumes that are going to be treated in the same way, with the same performance and capacity characteristics. A single IMS subsystem and all of its volumes might be an appropriate grouping. It might also be convenient to map a FAST VP storage group to a single SMS storage group, or you could place multiple SMS storage groups into one FAST VP storage group. Whatever the choice, remember that a FAST VP storage group can only contain thin devices.

If you have implemented Virtual Provisioning and are later adding FAST VP, when creating the FAST VP storage group with Unisphere, you must use the option Manual Selection and select the thin volumes that are to be in the FAST VP storage group.

FAST VP policies

For each storage group that you define for IMS, you need to assign a policy for the tiers that the storage is permitted to reside on. If your tiers are EFD, FC, and SATA, as an example, you can have a policy that permits up to 5 percent of the storage group to reside on EFD, up to 60 percent to reside on FC, and up to 100 percent to reside on SATA. If you don't know what proportions are appropriate, you can use an empirical approach and start incrementally. The initial settings for this would be 100 percent on FC and nothing on the other two tiers. With these settings, all the data remains on FC (presuming it lives there already). At a later time, you can dynamically change the policy to add the other tiers and gradually increase the amount of capacity allowed on EFD and SATA. This can be performed using the Unisphere GUI. Evaluation of performance lets you know how successful the adjustments were, and the percentage thresholds can be modified accordingly.

A policy totaling exactly 100 percent for all tiers is the most restrictive policy and determines what exact capacity is allowed on each tier. The least restrictive policy allows up to 100 percent of the storage group to be allocated on each tier.

IMS test systems are good targets for placing large quantities of data on SATA. Test data can remain untouched for long periods between development cycles, and test systems do not normally have a high performance requirement, so they most likely will not need to reside on the EFD tier. An example of this kind of policy would be 50 percent on FC and 100 percent on SATA.

Even with high I/O rate IMS subsystems, there is always data that is rarely accessed that could reside on SATA drives without incurring a performance penalty. For this reason, you should consider putting SATA drives in your production policy. FAST VP does not demote any data to SATA that is accessed frequently. An example of a policy for this kind of subsystem would be 5 percent on EFD, 100 percent on FC, and 100 percent on SATA.

Time windows for data collection

Make sure that you collect data only during the times that are critical for the IMS applications. For instance, if you reorganize IMS databases on a Sunday afternoon, you may want to exclude that time from the FAST VP statistics collection. Note that the performance time windows apply to the entire VMAX controller, so you need to coordinate the collection time windows with your storage administrator.

Time windows for data movement

Make sure you create the time windows that define when data can be moved from tier to tier. Data movements can be performance-based or policy-based. In either case, movement places additional load on the VMAX array and should be performed at times when the application is less demanding. Note that the movement time windows apply to the entire VMAX controller, so you need to coordinate them with the requirements of other applications that are under FAST VP control.


IMS OLDS and WADS

IMS log files are formatted by the DBA when initializing IMS. Every page of the log files is written to at this time, meaning that the log files become fully provisioned when they are initialized and will not cause any thin extent allocations after this. The IMS logs are thus spread across the pool and take advantage of being widely striped.

FAST VP does not use cache hits as a part of the analysis algorithms to determine what data needs to be moved. Since all writes are cache hits, and IMS log activity is primarily writes, it is highly unlikely that FAST VP will move parts of the active log to another tier. Think of it this way: response times are already at memory speed due to DASD fast write, so they cannot be made appreciably faster by moving the logs to a faster tier.

For better IMS performance, it is recommended to VSAM stripe the IMS log files, especially when SRDF is being used. This recommendation holds true even if the IMS active logs are deployed on thin devices.

IMS REORGs

Any kind of reorganization of an IMS database can undo a lot of the good work that FAST VP has accomplished. Consider a database that has been optimized by FAST VP and has its hot pages on EFD, its warm pages on FC, and its cold pages on SATA. At some point, the DBA decides to do an online REORG. A complete copy of the database is made in new, unoccupied, and potentially unallocated space in the thin storage pool. If it fits, the database is completely allocated in the thin pool associated with the new thin device containing the database. This new copy is most likely all on Fibre Channel drives again; in other words, de-optimized. After some operational time, FAST VP begins to promote and demote the database track groups once it has obtained enough information about the processing characteristics of these new chunks. So it is a reality that an IMS REORG could actually reduce the performance of the database or partition for a period of time.

There is no good answer to this. On the bright side, it is entirely possible that the performance gain from using FAST VP could reduce the frequency of REORGs, if the reason for doing the REORG is performance based. So when utilizing FAST VP, you should consider revisiting the REORG operational process for IMS.


z/OS utilities

Any utility that moves a dataset or volume (for instance, ADRDSSU) changes the performance characteristics of that dataset or volume until FAST VP has gained enough performance statistics to determine which track groups of the new dataset should be moved back to the different tiers they used to reside upon. This could take some time, depending on the settings for the time windows and performance collection windows.

IMS and SMS storage groups

There is a natural congruence between SMS and FAST VP where storage groups are concerned. Customers group applications and databases together into a single SMS storage group when they have similar operational characteristics. If this storage group were built on thin devices (a requirement for FAST VP), a FAST VP storage group could be created to match the devices in the SMS storage group. While this is not a requirement with FAST VP, it is a simple and logical way to approach the creation of FAST VP storage groups. Built in this fashion, FAST VP can manage the performance characteristics of the underlying applications in much the same way that SMS manages the other aspects of storage management.
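As an illustration of this alignment (the storage group and dataset names are hypothetical, and real ACS routines are usually far more involved), a storage group ACS routine could route all datasets belonging to one IMS subsystem to a single SMS storage group made up of thin volumes; a FAST VP storage group containing those same thin volumes can then be defined to match it:

 PROC STORGRP
   FILTLIST IMSPROD INCLUDE(IMSP.**)       /* all datasets for subsystem IMSP */
   SELECT
     WHEN (&DSN = &IMSPROD)                /* IMS production data             */
       SET &STORGRP = 'IMSPTHIN'           /* SMS group built on thin devices */
     OTHERWISE
       SET &STORGRP = 'SGDEFLT'            /* everything else                 */
   END
 END

The matching FAST VP storage group would then simply contain the thin volumes belonging to the IMSPTHIN SMS storage group.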

IMS and HSM

It is unusual to have HSM archive processes apply to production IMS databases, but it is fairly common to have them apply to test, development, and QA environments. HMIGRATE operations are fairly frequent in those configurations, releasing valuable storage for other purposes. With FAST VP, you can have the primary volumes augmented with economical SATA capacity and use less aggressive HSM migration policies.

The disadvantages of HSM are:

◆ When a single record is accessed from a migrated database/partition, the entire database needs to be HRECALLed.

◆ When HSM migrates and recalls databases, it uses costly host CPU and I/O resources.

The advantages of using FAST VP to move data to primary volumes on SATA are:


◆ If the database resides on SATA, it can be accessed directly from there without recalling the entire database.

◆ FAST VP uses the VMAX storage controller to move data between tiers.

An example of a FAST VP policy to use with IMS test subsystems is 0 percent on EFD, 50 percent on FC, and 100 percent on SATA. Over time, if the subsystems are not used, and there is demand for the FC tier, FAST VP will move the idle data to SATA.


Chapter 4 IMS Database Cloning

This chapter presents these topics:

◆ IMS database cloning
◆ Replicating the data


IMS database cloning

Many IMS customers require IMS databases to be cloned within an IMS system or across IMS systems. The reasons for cloning IMS databases include:

◆ Building test environments

◆ Providing system integration and test environments

◆ Building read-only decision support environments

◆ Satisfying a number of other business-related requirements for IMS data

The amount of data cloned and the frequency at which it is cloned tend to be driven by the business requirements of the organization and ultimately determine the technology used to replicate the information.

Cloning IMS data with TimeFinder minimizes mainframe resource consumption by moving data within the Symmetrix system, without using host I/O and CPU. This mechanism also minimizes database unavailability and reduces database I/O contention that can occur with host-based cloning.

Cloned IMS databases can be used for ad hoc reporting while reducing contention on production databases. Using TimeFinder to clone the data provides a very fast method for creating the cloned databases and reduces the downtime required for performing this task.

Historically, IMS databases have been cloned using application programs to unload data from the source databases and then reload it into the target databases. Other data cloning implementations use Hierarchical Storage Management (HSM) software and backup utilities, like FDR or DFDSS, to copy database datasets to tape, and then restore the datasets on a target IMS database system. Both of these data cloning techniques require significant mainframe I/O and CPU resources to copy the data from its source to some intermediate storage device and then to the target system.

Cloning data within and across IMS database management systems with EMC TimeFinder involves two separate processes. Figure 16 shows the processes involved when cloning IMS data from a source environment to a target environment.


Figure 16 IMS metadata and physical cloning processes

The following steps describe the circled numbers in Figure 16:

1. Establish all associated metadata on the target environment. This is done once to allow access to the cloned database on the target IMS system and requires the following procedures:

• Defining the databases in the target environment (system generation)

• Generating the DBD and the ACB (DBDGEN, ACBGEN)

• Assembling the target dynamic allocation macro to link the database definition to its physical database dataset

• Registering the database in the target RECON (a sample registration job is sketched after step 2)

2. Clone the data. The Symmetrix system performs this task, which is directed by utilities executing on the host.
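As an illustration only of the RECON registration in step 1, the following DBRC (DSPURX00) sketch registers a cloned database and its dataset in the target RECON. The library, RECON, database, DD, and dataset names are hypothetical, and the registration parameters (SHARELVL, GENMAX, and so on) depend on the target environment:

//DBRCREG  EXEC PGM=DSPURX00
//STEPLIB  DD DISP=SHR,DSN=IMST.SDFSRESL
//RECON1   DD DISP=SHR,DSN=IMST.RECON1
//RECON2   DD DISP=SHR,DSN=IMST.RECON2
//RECON3   DD DISP=SHR,DSN=IMST.RECON3
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 INIT.DB DBD(CUSTDB) SHARELVL(0)
 INIT.DBDS DBD(CUSTDB) DDN(DD1) DSN(IMST.CUSTDB.DD1) GENMAX(2) NOREUSE
/*

If the cloned database is already registered on the target and the copy simply replaces its datasets, the NOTIFY commands described in the backup chapter would be used instead of a fresh registration.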

Replicating the data

Copying the data using TimeFinder/Mirror and TimeFinder/Clone can be done in three ways:

◆ Volume mirroring (TimeFinder/Mirror)

◆ Volume snap (TimeFinder/Clone and TimeFinder/Snap)

◆ Dataset snap (TimeFinder/Clone)

The best technique depends on the density of cloned datasets on the volumes and the number of concurrent copies required. The following are criteria to consider prior to selecting the technique for a given environment:


◆ The volume-level techniques are best suited for cloning big applications that are populated on a fixed set of volumes. Volume-level techniques use incremental resynchronization within the array to make the copying more efficient.

◆ Some cloning implementations require creating multiple copies of the database at the same time. These implementations may be suited to dataset snap or volume snap processing because multiple copies of the same data can be created at the same time.

◆ Dataset snap is best suited for cloning IMS databases that are sparsely populated across a number of volumes. Customers that are in an SMS-managed environment and/or do not have a set of dedicated volumes to contain IMS data may be more likely to choose dataset snap for cloning data over the volume-level processes.

Cloning IMS data using TimeFinder/Mirror

Cloning IMS data using TimeFinder/Mirror requires the database to be quiesced for update processing through the duration of the SPLIT. Once the SPLIT has completed, the source IMS databases can be restarted for normal update processing. The time required for the SPLIT is measured in seconds. Figure 17 depicts a high-level setup used to replicate IMS databases using TimeFinder/Mirror.

Figure 17 IMS databases cloning using TimeFinder/Mirror

To clone IMS databases using TimeFinder/Mirror, use the following procedure:


1. De-allocate the datasets from the target IMS to externalize target IMS buffers to disk. Use the /DBR DB command, and issue a subsequent /DIS DB command to ensure the databases have been quiesced.

Execute a TimeFinder/Mirror ESTABLISH command to synchronize the BCV volumes to the source IMS volumes that contain the databases to be cloned. The ESTABLISH can be performed while processing on the source IMS database continues and is transparent to the applications running there. Use the following sample JCL to perform the ESTABLISH:

//ESTABLIS EXEC PGM=EMCTF
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
GLOBAL WAIT,MAXRC=0,FASTEST(Y)
QUERY 1,5420
ESTABLISH 2,5420,5410
QUERY 3,5420
/*

2. De-allocate datasets from the IMS source databases and externalize source IMS buffers to disk. When the copy of the volumes is required, the source databases must be quiesced using the /DBR DB command. Issue a subsequent /DIS DB command to ensure the databases have been quiesced.

3. Issue the TimeFinder SPLIT command to split the BCVs. Use the following sample JCL to perform the TimeFinder/Mirror SPLIT:

//TFSPLIT  EXEC PGM=EMCTF
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
GLOBAL NOWAIT,MAXRC=0
QUERY 1,5420
SPLIT 2,5420
QUERY 3,5420
/*

4. Start the source IMS databases for normal processing. Normal processing can now be resumed on the source databases. Use the /STA DB command.

5. The BCVs are now ready to be processed to allow access to the database by the target IMS. The TimeFinder utility is used to relabel the volumes so that they can be varied online for dataset processing. The RELABEL statement specifies the new VOLSER. The RENAME statement changes the high-level qualifier of the dataset to match that of the target IMS. The PROCESS statement identifies the types of files to be processed for each BCV. The CATALOG statement identifies the ICF catalog that catalogs the renamed datasets. Use the following sample JCL to condition the BCVs:

//TFURNM   EXEC PGM=EMCTFU
//SYSOUT   DD SYSOUT=*
//TFINPUT  DD *
RELABEL CUU=5420,OLD-VOLSER=EMCS01,NEW-VOLSER=EMCT01
PROCESS CUU=5420,VOLSER=EMCT01,VSAM
CATALOG IMSTPRD1.USER.CATALOG,DEFAULT
RENAME source_hlq,target_hlq
/*

6. Use the /STA DB command to start the target IMS databases for normal processing.

Cloning IMS databases using TimeFinder/Clone

The replication procedure when using TimeFinder/Clone full-volume snap is similar to that of TimeFinder/Mirror. Target databases should be defined on the target IMS, and the source databases must be de-allocated before doing the volume snap.

Figure 18 IMS databases cloning using TimeFinder/Clone

To copy IMS databases using TimeFinder/Clone, use the following procedure:

1. De-allocate datasets from the target IMS and externalize target IMS buffers to disk. Use the /DBR DB command. Issue a subsequent /DIS DB command to ensure the databases have been quiesced.


2. De-allocate datasets from the source IMS database and externalize IMS buffers to disk. The databases must be quiesced using the /DBR DB command. Issue a subsequent /DIS DB command to ensure the databases have been quiesced.

3. Snap the volumes containing the datasets that are associated with the databases being cloned. The following sample JCL can be used to perform a volume snap:

//SNAPVOL  EXEC PGM=EMCSNAP
//SYSOUT   DD SYSOUT=*
//EMCQCAPI DD SYSOUT=*
//EMCQCFMT DD SYSOUT=*
//QCOUTPUT DD SYSOUT=*
//QCINPUT  DD *
GLOBAL CONSISTENT(YES) -
       PARALLEL(YES)
SNAP VOLUME ( SOURCE (UNIT(1000)) -
       TARGET (UNIT(1001)) -
       WAITFORCOMPLETION(NO) -
       COPYV(N) -
       REPL(Y) -
       WAITFORSESSION(NO))
SNAP VOLUME ( SOURCE (UNIT(1002)) -
       TARGET (UNIT(1003)) -
       WAITFORCOMPLETION(NO) -
       COPYV(N) -
       REPL(Y) -
       WAITFORSESSION(NO))
ACTIVATE
/*

4. Normal processing can now be resumed on the source databases. Use the /STA DB command.

5. Using the TimeFinder utility, relabel the target volumes and rename the copied datasets. The following sample JCL relabels the volumes and renames the source HLQ to the target HLQ:

//TFURNM   EXEC PGM=EMCTFU
//SYSOUT   DD SYSOUT=*
//TFINPUT  DD *
RELABEL CUU=1001,OLD-VOLSER=EMCS01,NEW-VOLSER=EMCT01
RELABEL CUU=1003,OLD-VOLSER=EMCS02,NEW-VOLSER=EMCT02
PROCESS CUU=1001,VOLSER=EMCT01,VSAM
PROCESS CUU=1003,VOLSER=EMCT02,VSAM
CATALOG IMSTPRD1.USER.CATALOG,DEFAULT
RENAME source_hlq,target_hlq
/*


6. Start the target IMS databases for normal processing. Use the /STA DB command.

Cloning IMS databases using dataset snap

Dataset snap comes as a part of the TimeFinder/Clone Mainframe Snap Facility. Cloning IMS databases using dataset snap processing is best for cloning IMS databases that are sparsely populated across a number of volumes. The snap process can create multiple clone copies.

Figure 19 IMS database cloning using dataset snap

To copy an IMS database using dataset snap, use the following procedure:

1. De-allocate the datasets from the source and target IMS systems and externalize the IMS buffers to disk. Use the /DBR DB command. Issue a subsequent /DIS DB command to ensure that the databases have been quiesced.

2. Snap the datasets associated with the databases to be cloned. Use the following sample JCL to perform the dataset snap:

//RUNSNAP  EXEC PGM=EMCSNAP
//SYSUDUMP DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//QCINPUT  DD *
SNAP DATASET ( -
     SOURCE('source_hlq_qualifiers') -
     TARGET('target_hlq_qualifiers') -
     VOLUME(EMCT01) -
     REPLACE(Y) -
     REUSE(Y) -
     FORCE(N) -
     HOSTCOPYMODE(SHR) -
     DATAMOVERNAME(DFDSS) -
     DEBUG(OFF) -
     TRACE(OFF))
/*

3. Start the source IMS databases for normal processing. Normal processing can now resume on the source databases. Use the /STA DB command.

4. Start the target IMS databases for normal processing. Processing can now resume on the cloned target databases. Use the /STA DB command.


Chapter 5 Backing Up IMS Environments

This chapter provides a detailed description of how to use TimeFinder to enhance an IMS backup strategy.

◆ Overview
◆ IMS commands for TimeFinder integration
◆ Creating a TimeFinder/Mirror backup for restart
◆ TimeFinder/Mirror backup for recovery
◆ Backing up IMS databases using dataset snap
◆ Creating multiple split mirrors
◆ Backing up a TimeFinder copy
◆ Keeping track of dataset placement on backup volumes


Overview

The size and complexity of information systems continually increase year over year, and the requirement for high availability is more important than ever. The 24/7 requirement makes it extremely difficult to perform database maintenance operations and to prepare for fast recovery in case of a disaster. Acceptable solutions are ones that do not compromise availability or performance.

A TimeFinder backup solution is recommended for backing up IMS environments. This solution offers a quick way of creating backup environments and has little effect on production availability and performance. TimeFinder backups can be done in two basic modes:

◆ Where production processing is ongoing, without quiescing the IMS databases: This is a quick backup that has no effect on availability or performance of the production system. This backup can be used for a fast IMS restart in case of a disaster or fallback from unsuccessful planned operations, but cannot be used for database recovery.

◆ Where production processing is quiesced: This is a quick way of backing up databases (or the entire IMS environment) and can be used for fast database point-in-time recovery or as a source for an IMS image copy.

TimeFinder integration with image copies (IC) at the storage processor level provides enhanced availability and performance improvements for IMS environments. Availability is enhanced by quiescing the databases only for the time it takes for TimeFinder functions to complete, as opposed to the time required for traditional image copies to complete. Performance is improved by having the image copy read activity directed to a copy of the database, rather than executing in the production environment.

IMS may be backed up for use in IMS recovery or restart operations. Backups created for recovery purposes are used in IMS recovery operations by restoring an IMS backup and then applying logs to bring the database to a known point of consistency. Backups can also be created for restart purposes. These backups can be restored, and then the IMS system is restarted using the restored copy.


The following three sections describe these methodologies and the procedures involved in backing up IMS environments and are categorized as either a recovery or a restart solution. The following table shows the category in which each section falls.

Table 4 Backup methods and categories (each backup method is listed with the section that describes it, in quotes, and its notes in parentheses)

TimeFinder/Mirror backup (no quiesce): system restart only
  Consistent split: "Creating a TimeFinder/Mirror backup"
    DFDSS: "Backup using DFDSS" (tape backup)
    FDR: "Backup using FDR" (tape backup)
  Remote consistent split: "TimeFinder/Mirror using remote consistent split"
    DFDSS: "Backup using DFDSS" (tape backup)
    FDR: "Backup using FDR" (tape backup)
  Consistency Group: "TimeFinder/Mirror using ConGroups"
    DFDSS: "Backup using DFDSS" (tape backup)
    FDR: "Backup using FDR" (tape backup)

TimeFinder/Mirror backup (quiesce): system restart or database recovery
  Split: "TimeFinder/Mirror backup for recovery"
    DFDSS: "Backup using DFDSS" (tape backup)
    FDR: "Backup using FDR" (tape backup)
  Remote split: "TimeFinder/Mirror backup for remote recovery"
    DFDSS: "Backup using DFDSS" (tape backup)
    FDR: "Backup using FDR" (tape backup)
  IMS image copy: "Creating IMS image copies using volume split"

Dataset snap: database recovery
  Disk backup: "Backing up IMS databases using dataset snap"
  IMS image copy: "Creating IMS image copies using dataset snap"


IMS commands for TimeFinder integration

When working with TimeFinder replication for backup and recovery, there are certain DBRC commands that should be used to enhance the integration. These DBRC commands make it possible to integrate IMS with the various EMC functions (a sample job follows the list):

◆ NOTIFY.IC: Adds image copy information to the RECON. This command cannot be used for ILDS or Index DBDs of HALDB partitions.

◆ NOTIFY.UIC: Adds information to the RECON pertaining to a nonstandard image copy dataset. A nonstandard image copy dataset is one that was not created by the IMS image copy utility, but created by utilities such as FDR or DFDSS. This command cannot be used for a DBDS defined with the REUSE attribute or for ILDS or Index DBDs of HALDB partitions.

◆ NOTIFY.RECOV: Adds information about recovery of a DBDS or DEDB area to the RECON. This command is used whenever it is required to perform the recovery of a DBDS or DEDB area outside of the Database Recovery utility. An example of when this would be used is if a DBDS was recovered using a backup created by DFDSS or FDR.

◆ CHANGE.DBDS: Changes information about a DBDS in the RECON datasets. When used with the RECOV parameter, this command marks the DBDS or area as "needs recovery."

◆ GENJCL.RECOV: Generates the JCL and utility control statements required to run the Database Recovery utility. The JCL and utility control statements can be requested for a full recovery or a time-stamp recovery of a DBDS or DEDB area. All log data must be archived; otherwise, the GENJCL.RECOV command fails.
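As an illustration only, the following DBRC (DSPURX00) sketch records a copy created outside the IMS image copy utility (for example, on a split BCV backed up with DFDSS or FDR) as a nonstandard image copy. The library, RECON, database, DD, and dataset names are hypothetical, and the RUNTIME value is a placeholder whose format depends on the IMS release:

//DBRCNTFY EXEC PGM=DSPURX00
//STEPLIB  DD DISP=SHR,DSN=IMS.SDFSRESL
//RECON1   DD DISP=SHR,DSN=IMSP.RECON1
//RECON2   DD DISP=SHR,DSN=IMSP.RECON2
//RECON3   DD DISP=SHR,DSN=IMSP.RECON3
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 NOTIFY.UIC DBD(CUSTDB) DDN(DD1) ICDSN(IMSP.BKUP1) RUNTIME(timestamp)
/*

A corresponding CHANGE.DBDS DBD(CUSTDB) DDN(DD1) RECOV would typically be issued later, just before the dataset is restored from that copy, and NOTIFY.RECOV after the restore completes.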

Creating a TimeFinder/Mirror backup for restart

There are three methods to create the TimeFinder/Mirror backup for IMS restart without quiescing the databases. All of the methods are based on the same concept of holding application writes to the production volumes and then splitting the BCVs. This split process creates a consistent copy of the IMS system on the BCVs. The three ways to create a TimeFinder/Mirror backup differ in the mechanisms by which I/Os are stopped, in the operational procedures, and in the hardware requirements:

◆ Consistent split is a TimeFinder/Mirror feature that can be used to create a restartable copy of an IMS environment on BCVs. The IMS system is not quiesced while the copy is created.

◆ Remote consistent split can be used to create a restartable copy of an IMS environment at a remote site. This backup method requires that SRDF be implemented between the local and the remote site, and that BCVs are configured in the remote Symmetrix system.

◆ In remotely mirrored environments, the SRDF ConGroup feature provides a way to preserve consistency across a group of volumes that contain all data for an IMS system when an SRDF link is dropped.

Creating TimeFinder/Mirror backups for the IMS system enables restart of the entire system to a prior point in time. Restarting IMS from a TimeFinder/Mirror backup is similar to recovery from a local power failure due to hardware or physical environment disruption.
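To make the restart analogy concrete (this is a sketch only; the database name is hypothetical and the exact restart options depend on the failure scenario and the IMS release), once the restored copy is online to the recovery system, IMS is brought up and emergency restarted so that in-flight units of work captured in the OLDS/WADS on the copy are backed out:

/ERE
/DIS DB CUSTDB
/STA DB CUSTDB

Here /ERE performs the emergency restart and resolves in-flight work, /DIS DB verifies the status of a database of interest, and /STA DB starts it if it is not already available.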

Creating a TimeFinder/Mirror backup

Consistent split holds all I/Os on the devices that are being split and performs an instant split to create the TimeFinder/Mirror backup environment. From the IMS perspective, the state of the data on the BCVs is as if the z/OS system incurred a power outage. The TimeFinder/Mirror backup environment is consistent from the dependent-write I/O perspective, and when restarted, all running units of work will be resolved.


Figure 20 TimeFinder/Mirror backup using consistent split

To create a TimeFinder/Mirror backup using consistent split, use the following procedure, and refer to Figure 20:

1. Re-establish (establish if it is the first time) BCV devices to the standard volumes that make up the entire IMS environment, and wait until synchronized. Use the following sample JCL to perform the TimeFinder re-establish:

//RUNUTIL  EXEC PGM=EMCTF
//SYSUDUMP DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
GLOBAL MAXRC=4,WAIT
RE-ESTABLISH 01,4120-4123
/*

The four BCV devices are identified by the UCBs 4120-4123. It is not necessary to specify the source device UCBs, as these BCVs are already in a relationship and the Symmetrix system knows which volumes are paired.

2. Perform a consistent split across all the devices. Use the following sample JCL to perform the consistent split:

//SPLIT    EXEC PGM=EMCTF
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
GLOBAL MAXRC=4,NOWAIT
SPLIT 1,4120-4123,CONS(GLOBAL(ECA))
/*

TimeFinder/Mirror using remote consistent split

In remotely mirrored environments, a remote consistent split can be used to create a consistent image of the IMS environment on the remote BCV devices.

Figure 21 TimeFinder/Mirror backup using remote consistent split

To create a TimeFinder/Mirror backup using remote consistent split, use the following procedure, and refer to Figure 21:

1. Re-establish (establish if it is the first time) remote BCVs to the R2 devices that make up the entire IMS environment (user datasets, system datasets, and ICF catalogs) and wait until synchronized. The following JCL can be used to perform a remote ESTABLISH:

//ESTABL   EXEC PGM=EMCTF
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
GLOBAL MAXRC=4,WAIT,FASTEST(Y)
ESTABLISH 01,RMT(4100,00A0-00A3,0040-0043,04)
/*

2. Perform the remote consistent split.


The following JCL can be used to perform the remote consistent SPLIT:

//SPLIT    EXEC PGM=EMCTF
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
GLOBAL MAXRC=4,NOWAIT
SPLIT 01,RMT(4100,00A0-00A3,04),CONS(GLOBAL)
/*

TimeFinder/Mirror using ConGroups

In remotely mirrored environments that use SRDF and ConGroup, an explicit ConGroup trip can be used to create a consistent image of the IMS environment on the R2 devices. The ConGroup can then be immediately resumed using the remote split option, which triggers an instant split of the R2 BCVs before resuming the ConGroup.

Figure 22 TimeFinder/Mirror backup using ConGroup

To create a TimeFinder/Mirror backup using ConGroup, use the following procedure and refer to Figure 22:

1. Re-establish (establish if it is the first time) R2 BCVs to the R2devices that make up the entire IMS environment and wait untilsynchronized. Use the following sample JCL to perform theTimeFinder re-establish:


//RUNTF    EXEC PGM=EMCTF
//SYSUDUMP DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
GLOBAL MAXRC=4,WAIT
RE-ESTABLISH 02,RMT(4100,00A0-00A3,04)
/*

2. Perform an explicit trip of the ConGroup using the following SRDF Host Component command with a CUU that is one of the devices in the ConGroup:

SC VOL,cuu,SUSP-CGRP

Issue the following display command to make sure the ConGroup is suspended:

F con_group_task,DIS CON con_group_name NOLIST

3. Immediately resume the ConGroup with the remote split option using the following operator command:

F con_group_task,RESUME con_group_name,SPLIT

This command performs an instant split on all ConGroup R2 BCVs and immediately resumes the suspended devices to continue propagating updates to the remote site.

Interpret the results of the following command to make sure that the TimeFinder/Mirror backup environment is ready:

F con_group_task,DIS CON con_group_name NOLIST

TimeFinder/Mirror backup methods comparisons

The consistent split solution has minimal performance impact on the running system, but requires TimeFinder release 5.1.0 or later to support multi-image data sharing environments. The ConGroup solution works in data sharing environments across z/OS images, and there is no performance impact. ConGroup can also be used to back up heterogeneous environments. Remote consistent split has no performance impact on the running system and creates a restartable image of the IMS system on remote BCVs. Table 5 compares the three methods.

TimeFinder/Mirror backup for recovery

There are two ways to create the TimeFinder/Mirror backup for IMS recovery. Both are based on the same concept of quiescing IMS databases and splitting the BCVs. The split process creates a consistent copy of the IMS system on the BCVs. The two ways to create a TimeFinder/Mirror backup differ in the mechanisms in the operational procedures, and in the hardware requirements:

◆ TimeFinder consistent split can be used to create a backup on BCVs that can be used for IMS recovery.

◆ Remote split provides the ability to create a point-in-time copy of an IMS system at the remote site that can be used for recovery.

Creating TimeFinder/Mirror backups for the IMS system while the databases are quiesced enables recovery of databases to a prior point in time by restoring them from the backup. The restored database can then be rolled forward using the IMS logs.

TimeFinder/Mirror backup for recovery

A TimeFinder/Mirror backup can be used for recovering IMS databases. To create this type of backup, the IMS databases must be quiesced using the /DBR DB command prior to the split.

To create a TimeFinder/Mirror backup for recovery, use the following procedure, and refer to Figure 23.

Table 5 TimeFinder/Mirror backup methods

Backup method           | SRDF required | BCVs required | Cross-platform
Consistent split        | No            | Local         | Yes
Remote consistent split | Yes           | Remote        | Yes (a)
ConGroup                | Yes           | Remote        | Yes

a. Initiated from the mainframe platform.


Figure 23 TimeFinder/Mirror backup for recovery

1. Re-establish (establish if it is the first time) BCV devices to the standard volumes that make up the entire IMS environment, and wait until synchronized. Use the following sample JCL to perform the TimeFinder re-establish:

//RUNUTIL EXEC PGM=EMCTF
//SYSOUT  DD SYSOUT=*
//SYSIN   DD *
GLOBAL MAXRC=4,WAIT
RE-ESTABLISH 01,4120-4123
/*

2. De-allocate datasets from the source IMS, and externalize IMS buffers to disk. Use the /DBR DB command, and issue a subsequent /DIS DB command to ensure that the databases have been quiesced.

3. Perform a SPLIT across all the devices. Use the following sample JCL to perform the TimeFinder SPLIT:

//SPLIT   EXEC PGM=EMCTF
//SYSOUT  DD SYSOUT=*
//SYSIN   DD *
GLOBAL MAXRC=4,NOWAIT
SPLIT 01,4120-4123
/*


4. Resume production processing. Issue /STA DB commands to resume processing on the databases.

TimeFinder/Mirror backup for remote recovery

In remotely mirrored environments, a remote SPLIT can be used to create an image of the IMS environment on BCV devices in the remote Symmetrix array.

Figure 24 TimeFinder/Mirror backup for recovery using remote split

To create a TimeFinder/Mirror backup remotely, use the following procedure and refer to Figure 24:

1. Re-establish (establish if it is the first time) BCVs to the R2 devices that make up the entire IMS environment and wait until synchronized. Use the following sample JCL to perform the TimeFinder re-establish:

//REESTBL EXEC PGM=EMCTF
//SYSOUT  DD SYSOUT=*
//SYSIN   DD *
GLOBAL MAXRC=4,WAIT
RE-ESTABLISH 01,RMT(4100,00A0-00A3,04)
/*

2. De-allocate datasets from IMS, and externalize IMS buffers to disk. Use the /DBR DB command, and issue a subsequent /DIS DB command to ensure that the databases have been quiesced.


3. Perform a TimeFinder remote SPLIT using the following sample JCL:

//RSPLIT  EXEC PGM=EMCTF
//SYSOUT  DD SYSOUT=*
//SYSIN   DD *
GLOBAL MAXRC=4,NOWAIT
SPLIT 01,RMT(4100,00A0-00A3,04)
/*

4. Resume production processing. Issue /STA DB commands to resume processing on the databases.

Creating IMS image copies

It is possible to create IMS image copies from TimeFinder copies of IMS databases. The IMS databases must be quiesced by issuing a /DBR DB command prior to creating the copy. Additionally, a timestamp must be recorded at the time of the TimeFinder backup. Refer to Chapter 6, “IMS Database Recovery Procedures,” for additional information on the timestamp requirement.

TimeFinder database copies are usable for recovery only after they have been properly identified in the RECON dataset. NOTIFY.IC is used to establish proper identification in the RECON dataset. Once the image copies have been properly identified, then GENJCL.RECOV will construct the correct recovery JCL. For more information on these commands, refer to the Section “IMS commands for TimeFinder integration.”

There are two methods for creating the copy of the databases to be backed up. These methods are described in the following sections.

Creating IMS image copies using volume split

Image copies can be taken from quiesced IMS databases that reside on full-volume targets after the SPLIT/ACTIVATE process. TimeFinder dataset copies can be used as input to an IMS image copy process. The image copies are used as a basis for traditional IMS recoveries.


Figure 25 IMS image copy from a BCV

To create IMS image copies from a BCV, use the following procedure, and refer to Figure 25:

1. Establish the BCV devices to the standard volumes that contain the databases that have to be copied, and wait until synchronized. Use the following sample JCL to perform the TimeFinder ESTABLISH:

//ESTABL  EXEC PGM=EMCTF
//SYSOUT  DD SYSOUT=*
//SYSIN   DD *
GLOBAL MAXRC=4,WAIT
ESTABLISH 01,4120,4100
/*

2. De-allocate datasets from the source IMS database and externalize IMS buffers to disk. Use the /DBR DB command, and issue a subsequent /DIS DB command to ensure that the databases have been quiesced. Make a note of the timestamp from the /DBR DB command for later use.

3. Perform a SPLIT across all the devices. Use the following sample JCL to perform the TimeFinder split:

//SPLIT   EXEC PGM=EMCTF
//SYSOUT  DD SYSOUT=*
//SYSIN   DD *
GLOBAL MAXRC=4,NOWAIT
SPLIT 01,4120
/*

4. Start databases for normal processing. Use the /STA DB command.

5. Relabel and process the target volumes.

The target volumes are now ready to be reconditioned. The TimeFinder utility is used to relabel the volumes so that they can be varied online for the image copy processing. The RELABEL statement specifies the new VOLSER. The RENAME statement changes the high-level qualifier of the dataset. This is so it can be brought online and an image copy taken from the BCV. The PROCESS statement identifies the types of files to be processed for each BCV. The CATALOG statement identifies the ICF catalog that contains the renamed datasets. Use the following sample JCL to condition the BCVs:

//RELABEL EXEC PGM=EMCTFU
//SYSOUT  DD SYSOUT=*
//SYSIN   DD *
RELABEL 4120,OLD-VOLSER=EMCS01,NEW-VOLSER=EMCT01
PROCESS CUU=4120,VOLSER=EMCT01,BOTH
CATALOG IMSTPRD1.USER.CATALOG,DEFAULT
RENAME source_hlq,target_hlq
/*

6. Run GENJCL.IC on the IMS dataset to generate image copy JCL. Modify the generated JCL to point to the BCV datasets and to use DBRC=NO so that DBRC does not get updated. The image copies will be taken from the BCV. This process could be performed using image copy JCL generated on the source IMS and modified to point to the BCV datasets. (A hedged DBRC sketch for this step and step 8 follows this procedure.)

7. Execute the image copy JCL generated and modified in the previous step, using the BCV copy of the database.

8. Run NOTIFY.IC to identify the image copies taken from the BCV using the timestamp of the /DBR DB issued in step 2.
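The DBRC statements in steps 6 and 8 are run through the DBRC batch utility (DSPURX00). The following job skeleton is a hedged sketch only: the library, RECON, skeletal-JCL (JCLPDS), and output (JCLOUT) dataset names, the DBD and DDN values (DB01, DB01DD), and the image copy dataset name are placeholders that must be adapted to the installation and verified against the DBRC reference for the IMS release in use.

//* Hedged sketch of a DBRC batch job; all names below are placeholders
//DBRC     EXEC PGM=DSPURX00
//STEPLIB  DD DISP=SHR,DSN=IMS.SDFSRESL
//RECON1   DD DISP=SHR,DSN=IMS.RECON1
//RECON2   DD DISP=SHR,DSN=IMS.RECON2
//JCLPDS   DD DISP=SHR,DSN=IMS.JCLPDS
//JCLOUT   DD SYSOUT=(A)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 GENJCL.IC DBD(DB01) DDN(DB01DD)
/*

After the image copy job has been run against the BCV datasets, a similar DBRC job registers the copy with a statement such as NOTIFY.IC DBD(DB01) DDN(DB01DD) ICDSN(image_copy_dsn) RUNTIME(dbr_timestamp), where the RUNTIME value is the /DBR DB timestamp noted in step 2 and the timestamp format depends on the IMS release.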

Creating IMS image copies using dataset snap

TimeFinder dataset snap can be used to copy IMS databases. Image copies can then be taken from those backup datasets.


Figure 26 IMS image copy from a dataset snap

To create IMS image copies from snapped datasets, use the following procedure, and refer to Figure 26:

1. De-allocate datasets from IMS and externalize IMS buffers to disk. Use the /DBR DB command, and issue a subsequent /DIS DB command to ensure that the databases have been quiesced. Make a note of the timestamp from the /DBR DB command for later use.

2. Snap the datasets associated with the copied databases. Use the following sample JCL to perform the dataset snap:

//RUNSNAP  EXEC PGM=EMCSNAP
//SYSOUT   DD SYSOUT=*
//EMCQCAPI DD SYSOUT=*
//EMCQCFMT DD SYSOUT=*
//QCOUTPUT DD SYSOUT=*
//QCINPUT  DD *
SNAP DATASET ( -
  SOURCE('source_hlq.other_qualifiers') -
  TARGET('target_hlq.other_qualifiers') -
  VOLUME(EMCT01) -
  REPLACE(YES) -
  REUSE(YES) -
  FORCE(NO) -
  HOSTCOPYMODE(SHR) -
  DATAMOVERNAME(DFDSS) -
  DEBUG(OFF) -
  TRACE(OFF))
/*

3. Start source databases for normal processing. Use the /STA DB command.

4. Run GENJCL.IC on the IMS dataset to generate an image copy JCL. Modify the generated JCL to point to the snapped datasets and to use DBRC=NO so that DBRC does not get updated. The image copies will be taken from the snapped datasets. This process could be performed using an image copy JCL generated on the source IMS and modified to point to the snapped datasets.

5. Execute the image copy JCL generated and modified in the previous step, using the snap copy of the database.

6. Run NOTIFY.IC to identify the image copies taken from the snapped datasets using the /DBR DB timestamp from step 1.

Backing up IMS databases using dataset snap

TimeFinder/Clone dataset snap technology can be used to back up and recover databases within an IMS subsystem. The dataset snap operation is very quick, and the size of the dataset does not affect the duration of the replication as seen by the host, since it is performed in the background in the Symmetrix array. Figure 27 shows how databases are backed up using dataset snap.

Figure 27 Backing up IMS databases using dataset snap

The following steps refer to Figure 27.


1. De-allocate datasets from IMS and externalize IMS buffers to disk. Use the /DBR DB command, and issue a subsequent /DIS DB command to ensure that the databases have been quiesced.

2. Snap the datasets associated with the copied databases. Use the following sample JCL to perform the dataset snap:

//RUNSNAP  EXEC PGM=EMCSNAP
//SYSOUT   DD SYSOUT=*
//EMCQCAPI DD SYSOUT=*
//EMCQCFMT DD SYSOUT=*
//QCOUTPUT DD SYSOUT=*
//QCINPUT  DD *
SNAP DATASET ( -
  SOURCE('source_hlq.other_qualifiers') -
  TARGET('target_hlq.other_qualifiers') -
  VOLUME(EMCT01) -
  REPLACE(YES) -
  REUSE(YES) -
  FORCE(NO) -
  HOSTCOPYMODE(SHR) -
  DATAMOVERNAME(DFDSS) -
  DEBUG(OFF) -
  TRACE(OFF))
/*

3. Start databases for normal processing. Use the /STA DB command.

Creating multiple split mirrors

There are several reasons to keep multiple sets of BCVs to hold additional split-mirror copies of an IMS database:

◆ While re-establishing the BCV devices to copy the tracks that have changed while the BCV was split, the environment is not synchronized or protected. If there is an event that requires recovery, a full-volume restore from tape will be required, assuming a tape backup is available.

◆ Most recoveries are caused by user or application errors. It may take a while to recognize that the data is corrupted and that the logical error might have already been propagated to the TimeFinder/Mirror backup environment. A second set of BCVs from a prior point in time will allow for a fast disk-based restore rather than a slow tape-based restore.


◆ Using multiple sets of BCVs adds the flexibility of creating copies of the system for testing, while other copies can serve as a hot standby system that will operate simultaneously.

It is recommended to have at least two sets of BCVs and to toggle between them when creating TimeFinder/Mirror backup environments. Toggling between multiple BCV sets eliminates the resynchronization exposure and increases the chances of having a good copy on BCVs to start with in case of a disaster. Restoring the environment from BCVs is significantly faster than restoring from tape.

Figure 28 shows IMS volumes with three sets of BCVs. These BCVs are used on Monday, Wednesday, and Friday for creating TimeFinder/Mirror backups.

Figure 28 IMS configured with three sets of BCVs

Backing up a TimeFinder copy

Once the array-based copy is created, a backup to tape can be made. A full-volume dump to tape of the TimeFinder replica is recommended. A full-volume dump is performed at the volume level. The resulting output tape would typically have a dataset name that reflects the VOLSER that is being backed up. This process requires no renaming of the datasets that are included in the backup.


Note: In this section, a full-volume dump means a full physical volume dump to sequential media.

Alternatively, a logical dump may be taken at the dataset level. If a logical dump is taken from the same host, or a different host with shared ICF catalogs, then all datasets must be renamed using the TimeFinder utility. This renaming process is time-consuming and recovery from it will take longer than from a full-volume dump. If the logical dump is performed from an LPAR that does not share ICF user catalogs with the source LPAR, renaming the dumped datasets is not a requirement.

To prepare for a full-volume dump, run the TimeFinder utility with RELABEL statements to relabel the BCVs and vary them online to the host. Use the following sample JCL to perform the relabel:

//EMCTFU   EXEC PGM=EMCTFU
//SYSOUT   DD SYSOUT=*
//EMCRENAM DD SYSOUT=*
//TFINPUT  DD *
RELABEL CUU=4120,OLD-VOLSER=IMS000,NEW-VOLSER=IMST00
RELABEL CUU=4121,OLD-VOLSER=IMS001,NEW-VOLSER=IMST01
/*

Backup using DFDSS

Take a full-volume DFDSS dump. Use the following sample JCL to perform the backup:

//DUMPA    EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//DISK1    DD UNIT=3390,DISP=SHR,VOL=SER=IMST00
//TAPE1    DD DSN=IMS.FULLBKUP.IMST00,
//            UNIT=TAPE,DISP=(NEW,KEEP),LABEL=(1,SL),
//            VOL=SER=TAPE00
//DISK2    DD UNIT=3390,DISP=SHR,VOL=SER=IMST01
//TAPE2    DD DSN=IMS.FULLBKUP.IMST01,
//            UNIT=TAPE,DISP=(NEW,KEEP),LABEL=(1,SL),
//            VOL=SER=TAPE01
//SYSIN    DD *
 DUMP FULL INDD(DISK1) OUTDD(TAPE1) OPT(4)
 DUMP FULL INDD(DISK2) OUTDD(TAPE2) OPT(4)
/*


Backup using FDR

Take a full-volume FDR dump. Use the following sample JCL to perform the backup:

//DUMP     EXEC PGM=FDR
//SYSPRINT DD SYSOUT=*
//DISK1    DD UNIT=3390,VOL=SER=IMST00,DISP=OLD
//TAPE1    DD DSN=IMS.FULLBKUP.IMST00,
//            UNIT=TAPE,DISP=(NEW,KEEP),LABEL=(1,SL),
//            VOL=SER=TAPE00
//DISK2    DD UNIT=3390,VOL=SER=IMST01,DISP=OLD
//TAPE2    DD DSN=IMS.FULLBKUP.IMST01,
//            UNIT=TAPE,DISP=(NEW,KEEP),LABEL=(1,SL),
//            VOL=SER=TAPE01
//SYSIN    DD *
 DUMP TYPE=FDR
/*

Keeping track of dataset placement on backup volumes

The TimeFinder utility can be run immediately after the TimeFinder full-volume operation is completed to create a recovery report. The RENAME command to a dummy dataset is used to create the recovery report. The RENAME command processes all volumes listed in the SYSIN, and creates a report that contains the mapping of all datasets to their volumes. Information from the recovery report and the tape management system can be used to locate a single dataset for a logical restore even after a backup to tape has been done. This report should be accessible in the event that a recovery is required. Use the following sample JCL to produce this report:

//EMCTFU  EXEC PGM=EMCTFU
//SYSOUT  DD SYSOUT=*
//TFINPUT DD *
PROCESS CUU=4120,VOLSER=IMS000,BOTH
PROCESS CUU=4121,VOLSER=IMS003,BOTH
*CATALOG IMSDSN21.USER.CATALOG,DEFAULT
SOURCECATALOG DEFAULT=NO,DIRECT=YES
RENAME DUMMY.DATA.SET1.*,DUMMY.DATA.SET2.*,CATALOG=IMSDSN21.USER.CATALOG
/*

A sample of the recovery report follows:

DSN.MULTIVOL.SAMPLE
   VOLUME: IMS003 IMS000
IMS.RECON01.DATA
   VOLUME: IMS003
IMS.RECON01.INDEX
   VOLUME: IMS003
IMS.RECON02.DATA
   VOLUME: IMS000
IMS.RECON02.INDEX
   VOLUME: IMS000
IMS.USER.CATALOG
   VOLUME: IMS003
IMS.USER.CATALOG.CATINDEX
   VOLUME: IMS003

6 IMS Database Recovery Procedures

This chapter presents topics related to the recovery of IMS databases using EMC technology.

◆ Overview
◆ Database recovery from an IMS image copy
◆ Database recovery using DFDSS or FDR
◆ Restoring a non-IMS copy
◆ Recovering IMS from a non-IMS copy


Overview

TimeFinder can be used with IMS to enhance and speed up recovery processing. Full-volume copies of IMS databases can be used for recovery purposes in one of two ways:

◆ Recovery of databases (to a copy, point in time, or current) using a traditional IMS image copy taken from a quiesced TimeFinder backup or snapped datasets.

◆ Recovery of databases (to a copy, point in time, or current) using a quiesced non-IMS copy (TimeFinder full-volume backup, dataset snap, or a DFDSS/FDR dump of these).


The following sections describe these methodologies and the procedures involved in restarting or recovering IMS environments and databases. Table 6 shows the category in which each section falls.


Table 6 Backup and restart/restore methods

Backup method | Backup section | Restart/Recovery section | Notes
TimeFinder/Mirror backup (no quiesce) | | | System restart only
  Consistent Split | Creating a TimeFinder/Mirror backup | Restoring using TimeFinder/Mirror backup |
    DFDSS | Backup using DFDSS | Restoring IMS using DFDSS | Tape backup
    FDR | Backup using FDR | Restoring IMS from an FDR dump | Tape backup
  Remote Consistent Split | TimeFinder/Mirror using remote consistent split | Restoring using TimeFinder/Mirror backup |
    DFDSS | Backup using DFDSS | Restoring IMS using DFDSS | Tape backup
    FDR | Backup using FDR | Restoring IMS from an FDR dump | Tape backup
  Consistency Group | TimeFinder/Mirror using ConGroups | Restoring using TimeFinder/Mirror backup |
    DFDSS | Backup using DFDSS | Restoring IMS using DFDSS | Tape backup
    FDR | Backup using FDR | Restoring IMS from an FDR dump | Tape backup
TimeFinder/Mirror backup (quiesce) | | | System restart or database recovery
  Split | TimeFinder/Mirror backup for recovery | Recovering using TimeFinder/Mirror |
    DFDSS | Backup using DFDSS | Restoring IMS using DFDSS | Tape backup
    FDR | Backup using FDR | Restoring IMS from an FDR dump | Tape backup
  Remote Split | TimeFinder/Mirror backup for remote recovery | Recovering using TimeFinder/Mirror |
    DFDSS | Backup using DFDSS | Restoring IMS using DFDSS | Tape backup
    FDR | Backup using FDR | Restoring IMS from an FDR dump | Tape backup
  IMS image copy | Creating IMS image copies using volume split | Database recovery from an IMS image copy |
Dataset snap | | | Database recovery
  Disk backup | Backing up IMS databases using dataset snap | Restoring using dataset snap backup |
  IMS image copy | Creating IMS image copies using dataset snap | Database recovery from an IMS image copy |


Database recovery from an IMS image copy

An IMS image copy taken from a full-volume copy (or snapped datasets) is a traditional IMS copy and can be used for recovery as described in the IMS manuals.

Database recovery using DFDSS or FDR

Non-IMS user copies can be used for IMS recovery. The user must restore the copy outside of IMS, overlaying the database datasets, and must supply information about this to the IMS RECON. IMS can then be used to generate the log apply job as needed.

Figure 29 IMS recovery using DFDSS or FDR

To recover an IMS database from a non-IMS user copy created using EMC technology, use the following procedure and refer to Figure 29:

1. Notify IMS of the user image copy.

Use the NOTIFY.UIC statement. Set the RUNTIME parameter to the time that the copy was created. This must be a timestamp while the database was quiesced for the backup. (A hedged sketch of the DBRC statements used in this procedure follows step 8.)


2. Quiesce the databases to be restored. Use the /DBR DB command. Issue a subsequent /DIS DB command to ensure that the databases have been quiesced.

3. Restore the user copy. Refer to “Restoring a non-IMS copy,” for detailed procedures.

4. Notify IMS that the database has been restored.

Use the NOTIFY.RECOV statement. The RCVTIME parameter is set to the time that the copy was created. This could be the timestamp used in step 1.

Note: The next three steps should be performed only if it is desired to apply log records to roll the database forward to a point in time that is more current than the copy that has just been restored.

5. Notify IMS that the database should be recovered.

Use the CHANGE.DBDS statement with the RECOV parameter.

6. Run GENJCL.RECOV with the USEDBDS parameter to generate recovery JCL.

The USEDBDS parameter tells IMS to perform recovery using only the changes that have occurred to the DBDS in its current state. An image copy is not restored prior to applying the changes.

7. Execute the recovery JCL generated in step 6 to perform the recovery.

8. Resume normal processing on the database. Use the /STA DB command to start the database.
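The DBRC statements referenced in steps 1, 4, 5, and 6 are run through the DBRC batch utility (DSPURX00). The following SYSIN fragment is a hedged sketch only: in practice the NOTIFY.UIC statement of step 1 runs before the restore and the remaining statements run afterward, and the DBD and DDN values, dataset names, and timestamp formats shown here are placeholders that must be adapted to the installation and verified against the DBRC reference for the IMS release in use.

 NOTIFY.UIC DBD(DB01) DDN(DB01DD) RUNTIME(quiesce_timestamp)
 NOTIFY.RECOV DBD(DB01) DDN(DB01DD) RCVTIME(quiesce_timestamp)
 CHANGE.DBDS DBD(DB01) DDN(DB01DD) RECOV
 GENJCL.RECOV DBD(DB01) DDN(DB01DD) USEDBDS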

Restoring a non-IMS copy

When recovering an IMS database from a non-IMS copy, it is the user’s responsibility to restore the database datasets before performing the IMS recovery. This section describes the procedures for restoring the following non-IMS copies:

◆ TimeFinder/Mirror backup

◆ TimeFinder/Clone backup

◆ Dataset snap

◆ DFDSS


◆ FDR

Restoring using TimeFinder/Mirror backup

To restore an IMS database from a TimeFinder/Mirror backup, follow these steps:

1. De-allocate source datasets from IMS and externalize IMS buffers to disk.

Use the /DBR DB command, and issue a subsequent /DIS DB command to ensure that the databases have been quiesced.

2. Use the TimeFinder utility to relabel the volumes and vary them online. The following sample JCL can be used to relabel the BCV volume:

//RELABEL EXEC PGM=EMCTFU
//SYSOUT  DD SYSOUT=*
//TFINPUT DD *
RELABEL CUU=020D,OLD-VOLSER=IMS001,NEW-VOLSER=BCV001
/*

3. Use the TimeFinder utility to rename a dataset. The following JCL is an example of how to do this:

//RENAME  EXEC PGM=EMCTFU
//SYSOUT  DD SYSOUT=*
//TFINPUT DD *
PROCESS VOLSER=BCV001,BOTH
RENAME IMS.DB01,BCVIMS.DB01,CATALOG=IMSBCV.USER.CATALOG
/*

4. Use TimeFinder dataset snap to overlay the IMS dataset. The following JCL is an example of how to do this:

//RUNSNAP  EXEC PGM=EMCSNAP
//SYSOUT   DD SYSOUT=*
//EMCQCAPI DD SYSOUT=*
//EMCQCFMT DD SYSOUT=*
//QCOUTPUT DD SYSOUT=*
//QCINPUT  DD *
SNAP DATASET ( -
  SOURCE('BCVIMS.DB01*') -
  TARGET('IMS.DB01*') -
  REPLACE(Y) -
  REUSE(Y) -
  FORCE(N) -
  WAITFORCOMPLETION(N) -
  HOSTCOPYMODE(SHR) -
  DATAMOVERNAME(NONE))
/*

Restoring using TimeFinder/Clone backup

To restore an IMS database from a TimeFinder/Clone backup, follow these steps:

1. De-allocate datasets from IMS and externalize IMS buffers to disk.

Use the /DBR DB command, and issue a subsequent /DIS DB command to ensure databases have been quiesced.

2. Use the TimeFinder utility to rename the dataset. The following JCL is an example of how to do this:

//RENAME   EXEC PGM=EMCTFU
//SYSUDUMP DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//TFINPUT  DD *
PROCESS VOLSER=STD001,BOTH
RENAME IMS.DB01,STDIMS.DB01,CATALOG=IMSSTD.USER.CATALOG
/*

3. Use TimeFinder dataset snap to overlay the IMS dataset. The following JCL is an example of how to do this:

//RUNSNAP  EXEC PGM=EMCSNAP
//SYSUDUMP DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//EMCQCAPI DD SYSOUT=*
//EMCQCFMT DD SYSOUT=*
//QCOUTPUT DD SYSOUT=*
//QCINPUT  DD *
SNAP DATASET ( -
  SOURCE('STDIMS.DB01*') -
  TARGET('IMS.DB01*') -
  REPLACE(Y) -
  REUSE(Y) -
  FORCE(N) -
  WAITFORCOMPLETION(N) -
  HOSTCOPYMODE(SHR) -
  DATAMOVERNAME(NONE))
/*


Restoring using dataset snap backup

To restore an IMS database from a snapped dataset, follow these steps:

1. De-allocate source datasets from IMS and externalize IMS buffers to disk. Use the /DBR DB command, and issue a subsequent /DIS DB command to ensure that the databases have been quiesced.

2. Use TimeFinder dataset snap to overlay the IMS dataset. Use the following sample JCL to perform the snap:

//RUNSNAP  EXEC PGM=EMCSNAP
//SYSOUT   DD SYSOUT=*
//EMCQCAPI DD SYSOUT=*
//EMCQCFMT DD SYSOUT=*
//QCOUTPUT DD SYSOUT=*
//QCINPUT  DD *
SNAP DATASET ( -
  SOURCE('STDIMS.DB01*') -
  TARGET('IMS.DB01*') -
  REPLACE(Y) -
  REUSE(Y) -
  FORCE(N) -
  WAITFORCOMPLETION(N) -
  HOSTCOPYMODE(SHR) -
  DATAMOVERNAME(NONE))
/*

Restoring IMS using DFDSS

An IMS database recovery, or system recovery, may be required to a point of consistency that no longer exists on disk. In this case, DFDSS may be used to restore the standard volumes from tape to a prior point in time.

Full-volume DFDSS restore

To perform a full restore of a volume from tape to a standard volume, use the following JCL:

//RESTA    EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//DISK1    DD UNIT=3390,DISP=SHR,VOL=SER=IMS001
//TAPE1    DD DSN=IMS.FULLBKUP.IMS001,
//            UNIT=TAPE,DISP=(OLD,KEEP),LABEL=(1,SL),
//            VOL=SER=TAPE00
//DISK2    DD UNIT=3390,DISP=SHR,VOL=SER=IMS002
//TAPE2    DD DSN=IMS.FULLBKUP.IMS002,
//            UNIT=TAPE,DISP=(OLD,KEEP),LABEL=(1,SL),
//            VOL=SER=TAPE01
//SYSIN    DD *
 RESTORE INDD(TAPE1) OUTDD(DISK1)
 RESTORE INDD(TAPE2) OUTDD(DISK2)
/*

If the entire environment is restored (including the ICF user catalogs), no further steps are required. However, if the ICF catalog is not restored, then an IDCAMS DEFINE RECATALOG for the datasets on the restored volumes is required to synchronize the ICF user catalogs and the actual volume contents.
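The DEFINE RECATALOG step might look like the following hedged IDCAMS sketch. The cluster, component, and volume names are placeholders, and the parameters actually required depend on the attributes of the original datasets, so the statement should be built from the installation's own naming standards:

//RECAT    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(IMS.DB01) VOLUMES(IMS001) RECATALOG) -
         DATA (NAME(IMS.DB01.DATA)) -
         INDEX (NAME(IMS.DB01.INDEX))
/*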

Logical dataset restore using DFDSS

To perform a logical restore of an IMS database from a DFDSS full-volume dump, follow these steps:

1. De-allocate datasets from IMS and externalize IMS buffers to disk. Use the /DBR DB command, and issue a subsequent /DIS DB command to ensure that the databases have been quiesced.

2. Use IDCAMS to delete the IMS dataset.

3. Perform a dataset restore using the following JCL:

//RESTORE  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//DISK1    DD UNIT=3390,DISP=SHR,VOL=SER=IMS001
//TAPE1    DD DSN=IMS.FULLBKUP.IMS001,
//            UNIT=TAPE,DISP=(OLD,KEEP),
//            LABEL=(1,SL),VOL=SER=TAPE01
//SYSIN    DD *
 RESTORE DS(IMSDB01.**) INDD(TAPE1) OUTDD(DISK1)
/*

4. If the restored dataset is a VSAM dataset, perform an IDCAMS DEFINE RECATALOG for the restored cluster.

Restoring IMS from an FDR dump

A recovery may be required to a point of consistency that no longer exists on disk. In this case, FDR may be used to restore the standard volumes from tape to a prior point in time.


Full FDR database restore

To perform a full restore of a volume from tape to the standard volume, use the following JCL:

//FDRREST  EXEC PGM=FDRDSF
//SYSPRINT DD SYSOUT=*
//DISK1    DD UNIT=3390,VOL=SER=IMS000,DISP=OLD
//TAPE1    DD DSN=IMS.FULLBKUP.DB1,
//            UNIT=TAPE,DISP=(OLD,KEEP),LABEL=(1,SL),
//            VOL=SER=TAPE00
//DISK2    DD UNIT=3390,VOL=SER=IMS001,DISP=OLD
//TAPE2    DD DSN=IMS.FULLBKUP.DB2,
//            UNIT=TAPE,DISP=(OLD,KEEP),LABEL=(1,SL),
//            VOL=SER=TAPE01
//SYSIN    DD *
 RESTORE TYPE=FDR
/*

Logical dataset restore using FDR

To perform a logical restore of a single dataset from a full volume dump, use the following JCL after stopping the database:

//RESTB    EXEC PGM=FDRDSF
//SYSPRINT DD SYSOUT=*
//TAPE1    DD DSN=IMS.FULLBKUP.DB1,
//            UNIT=TAPE,DISP=(OLD,KEEP),LABEL=(1,SL),
//            VOL=SER=TAPE00
//DISK1    DD UNIT=3390,DISP=SHR,VOL=SER=IMS001
//SYSIN    DD *
 RESTORE TYPE=DSF
 SELECT DSN=IMS.DB001
/*

An IDCAMS DEFINE RECATALOG is not required when restoring with FDR.

Recovering IMS from a non-IMS copy

Recovering using TimeFinder/Mirror

IMS application databases can be recovered after restoring the source devices from a previous re-establish/split operation. This operation may require a roll-forward recovery to be performed on those databases involved in the restore.


Note: Be aware all datasets are restored from the BCV. Use caution in performing this type of recovery. Make sure all datasets being restored and therefore overlaid require recovery.

When the following steps are executed, the database datasets can participate in an IMS recovery. TimeFinder/Mirror copies can be used for point-in-time IMS copies and can be used as a basis for IMS database recovery. The BCV restore process requires no additional host resources.

Figure 30 Restoring an IMS database to a point in time using BCVs

To restore an IMS database to a point in time using BCVs, use the following procedure and refer to Figure 30:

1. Notify IMS of the user image copy for each database that is restored. Use the NOTIFY.UIC statement. Set the RUNTIME parameter to the time that the TimeFinder/Mirror backup was created. This must be a timestamp while the database was quiesced for the backup.

2. Quiesce the databases that are to be restored. Use the /DBR DB command. Issue a subsequent /DIS DB command to ensure that the databases have been quiesced.

3. Restore the application data BCVs. This will restore the databases to the TimeFinder/Mirror backup point in time. This permits rolling the database forward, if desired. (A hedged TimeFinder RESTORE sketch follows this procedure.)


4. Notify IMS that the databases have been restored. Use the NOTIFY.RECOV statement. The RCVTIME parameter is set to the time that the copy was created. This could be the timestamp used in step 1.

Note: The next three steps should be performed only if it is desired to apply log records to roll the databases forward to a point in time that is more current than the copy that has just been restored.

5. Notify IMS that the databases should be recovered. Use the CHANGE.DBDS statement with the RECOV parameter for all the databases that have been restored.

6. Run GENJCL.RECOV with the USEDBDS parameter to generate recovery JCL. The USEDBDS parameter tells IMS to perform recovery using only the changes that have occurred to the DBDS in its current state. An image copy is not restored prior to applying the changes.

7. Execute the recovery JCL generated in step 6 to perform the recovery.

8. Resume normal processing on the database. Use the /STA DB command to start the databases.
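The BCV restore in step 3 is performed with the TimeFinder utility. The following is a hedged sketch only, assuming the RESTORE statement takes the same configuration number and BCV UCB range used in the establish and split examples earlier in this TechBook; the device addresses are placeholders and the exact statement should be confirmed against the TimeFinder/Mirror documentation for the release in use:

//RUNREST EXEC PGM=EMCTF
//SYSOUT  DD SYSOUT=*
//SYSIN   DD *
GLOBAL MAXRC=4,WAIT
RESTORE 01,4120-4123
/*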

When TimeFinder/Mirror RESTORE is used to restore application database volumes, it is critical that the recovery structures such as the IMS RECON, WADS, OLDS, and RDS datasets and their corresponding ICF user catalogs be positioned on volumes other than those of the application databases. A TimeFinder BCV volume restore could overlay datasets that are critical for the IMS recovery if placed incorrectly on these volumes. For a list of all datasets to be considered in dataset placement, refer to the Section “IMS considerations for ConGroup.”

Recovering using TimeFinder/Clone

IMS application databases can be recovered after restoring the IMS source devices from a previous TimeFinder/Clone volume-level backup. This operation may require a roll-forward recovery to be performed on those databases involved in the restore.


Note: Be aware all datasets are restored from the clone target. Use caution in performing this type of recovery. Make sure all datasets being restored and overlaid require recovery.

When the following steps are executed, the database datasets can participate in an IMS recovery. TimeFinder/Clone copies can be used for point-in-time IMS copies and can be used as a basis for IMS database recovery. The TimeFinder/Clone restore process is in fact a full-volume snap back from the target to the source. This is exactly like a snap in reverse.
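The snap back might be driven with the same EMCSNAP utility used for dataset snap elsewhere in this chapter. The following SNAP VOLUME statement is a hedged sketch only; the command form, the COPYVOLID and REPLACE parameters, and the volume serial numbers are assumptions to be verified against the TimeFinder/Clone Mainframe Snap Facility documentation. The clone target is named as the source of the snap and the production volume as the target:

//VOLSNAP  EXEC PGM=EMCSNAP
//SYSOUT   DD SYSOUT=*
//QCOUTPUT DD SYSOUT=*
//QCINPUT  DD *
SNAP VOLUME ( -
  SOURCE(VOLUME(CLN001)) -
  TARGET(VOLUME(IMS001)) -
  COPYVOLID(YES) -
  REPLACE(YES))
/*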

Figure 31 Restoring an IMS database to a point in time using STDs

To restore an IMS database to a point in time using TimeFinder/Clone targets, use the following procedure, and refer to Figure 31:

1. Notify IMS of the user image copy for each database that is restored. Use the NOTIFY.UIC statement. Set the RUNTIME parameter to the time that the TimeFinder/Clone backup was created. This must be a timestamp while the database was quiesced for the backup.

2. Quiesce the databases that are about to be restored. Use the /DBR DB command. Issue a subsequent /DIS DB command to ensure that the databases have been quiesced.


3. Restore the source IMS application volumes from the TimeFinder/Clone backup volumes. This allows rolling the database forward, if desired.

4. Notify IMS that the databases have been restored. Use the NOTIFY.RECOV statement. The RCVTIME parameter is set to the time that the copy was created. This could be the timestamp used in step 1.

Note: The next three steps should be performed only if it is desired to apply log records to roll the databases forward to a point in time that is more current than the copy that has just been restored.

5. Notify IMS that the databases should be recovered. Use the CHANGE.DBDS statement with the RECOV parameter for all the databases that have been restored.

6. Run GENJCL.RECOV with the USEDBDS parameter to generate recovery JCL. The USEDBDS parameter tells IMS to perform recovery using only the changes that have occurred to the DBDS in its current state. An image copy is not restored prior to applying the changes.

7. Execute the recovery JCL generated in step 6 to perform the recovery.

8. Resume normal processing on the database. Use the /STA DB command to start the databases.

When TimeFinder full volume snap is used to restore application databases, it is critical that the recovery structures such as the IMS RECON, WADS, OLDS, and RDS datasets and their corresponding ICF user catalogs be positioned on volumes other than those of the application databases. A TimeFinder volume snap could overlay datasets that are critical for the IMS recovery if placed incorrectly on these volumes. For a list of all datasets to be considered in dataset placement, refer to the Section “IMS considerations for ConGroup.”

Restoring IMS using a dataset snap backup

To restore an IMS database from a snapped dataset, follow these steps:


1. Notify IMS of the user image copy for each database that is restored. Use the NOTIFY.UIC statement. Set the RUNTIME parameter to the time that the TimeFinder/Clone backup was created. This must be a timestamp while the database was quiesced for the backup.

2. Quiesce the databases that are going to be restored. Use the /DBR DB command. Issue a subsequent /DIS DB command to ensure that the databases have been quiesced.

3. Use TimeFinder dataset snap to overlay the IMS dataset. Use the following sample JCL to perform the snap:

//RUNSNAP  EXEC PGM=EMCSNAP
//SYSOUT   DD SYSOUT=*
//EMCQCAPI DD SYSOUT=*
//EMCQCFMT DD SYSOUT=*
//QCOUTPUT DD SYSOUT=*
//QCINPUT  DD *
SNAP DATASET ( -
  SOURCE('STDIMS.DB01*') -
  TARGET('IMS.DB01*') -
  REPLACE(Y) -
  REUSE(Y) -
  FORCE(N) -
  WAITFORCOMPLETION(N) -
  HOSTCOPYMODE(SHR) -
  DATAMOVERNAME(NONE))
/*

4. Notify IMS that the databases have been restored. Use the NOTIFY.RECOV statement. The RCVTIME parameter is set to the time that the copy was created. This could be the timestamp used in step 1.

Note: The next three steps should be performed only if it is desired to apply log records to roll the databases forward to a point in time that is more current than the copy that has just been restored.

5. Notify IMS that the databases should be recovered. Use the CHANGE.DBDS statement with the RECOV parameter for all the databases that have been restored.

6. Run GENJCL.RECOV with the USEDBDS parameter to generate recovery JCL. The USEDBDS parameter tells IMS to perform recovery using only the changes that have occurred to the DBDS in its current state. An image copy is not restored prior to applying the changes.


7. Execute the recovery JCL generated in step 6 to perform the recovery.

8. Resume normal processing on the database. Use the /STA DB command to start the databases.


7 Disaster Recovery and Disaster Restart

This chapter describes how Symmetrix arrays can be used in a disaster restart and a disaster recovery strategy for IMS.

◆ Overview
◆ Disaster restart versus disaster recovery
◆ Definitions
◆ Considerations for disaster recovery/restart
◆ Tape-based solutions
◆ Remote replication challenges
◆ Array-based remote replication
◆ Planning for array-based replication
◆ SRDF/S single Symmetrix array to single Symmetrix array
◆ SRDF/S and consistency groups
◆ SRDF/A
◆ SRDF/AR single-hop
◆ SRDF/Star
◆ SRDF/Extended Distance Protection
◆ High-availability solutions
◆ IMS Fast Path Virtual Storage Option

Overview

A critical part of managing a database is planning for an unexpected loss of data. The loss can occur from a disaster, like fire or flood, or it can come from hardware or software failures. It can even come through human error or malicious intent. In each system, the database must be restored to some usable point, before application services can be resumed.

The effectiveness of any plan for restart or recovery involves answering the following questions:

◆ How much downtime is acceptable to the business?

◆ How much data loss is acceptable to the business?

◆ How complex is the solution?

◆ Are multiple recovery sites required?

◆ Does the solution accommodate the data architecture?

◆ How much does the solution cost?

◆ What disasters does the solution protect against?

◆ Is there protection against logical corruption?

◆ Is there protection against physical corruption?

◆ Is the database restartable or recoverable?

◆ Can the solution be tested?

◆ If failover happens, will failback work?

All restart and recovery plans include a replication component. In its simplest form, the replication process may be as easy as making a tape copy of the database and application. In a more sophisticated form, it could be real-time replication of all changed data to some remote location. Remote replication of data has its own challenges centered around:

◆ Distance

◆ Propagation delay (latency)

◆ Network infrastructure

◆ Data loss


This chapter provides an introduction to the spectrum of disaster recovery and disaster restart solutions for IMS databases on EMC Symmetrix arrays.

Disaster restart versus disaster recovery

Disaster recovery implies the use of backup technology in which data is copied to tape, and then shipped off site. When a disaster is declared, the remote site copies are restored and logs are applied to bring the data to a point of consistency. Once all recoveries are completed, the data is validated to ensure that it is correct. Coordinating to a common business point of recovery across all applications and other platforms can be difficult, if not impossible, using traditional recovery methods.

Disaster restart solutions allow the restart of all participating DBMSs to a common point of consistency, utilizing the automated application of DBMS recovery logs during DBMS initialization. The restart time is comparable to the length of time required for the application to restart after a power failure. A disaster restart solution can include other data, such as VSAM and IMS datasets, flat files, and so on, and is made possible with EMC technology. Restartable images that are dependent-write consistent can be created locally, and then transported to an off-site storage location by way of tape or SRDF to be used for disaster restart. These restartable images can also be created remotely. Dependent-write consistency is ensured by EMC technology, and transactional consistency is ensured by the DBMS at restart, similar to recovery from a local power failure.

Transactional consistency is made possible by the dependent I/O philosophies inherent in logging DBMS systems. Dependent-write I/O is the methodology all logging DBMS systems use to maintain integrity. Data writes are dependent on a successful log write in these systems, and therefore, restartability is guaranteed. Restarting a database environment instead of recovering it enhances availability and reduces recovery time objective (RTO). The time required to do a traditional IMS recovery is significantly greater than restarting an IMS system at the recovery site.

The entire IMS subsystem can be restored at the volume level, and then restarted instead of recovered. EMC consistency technology ensures that the IMS system is consistent, from the dependent-write perspective. IMS, upon restart, manages the in-flight transactions to bring the database to a point of transactional consistency. This method of restart is significantly faster than the traditional recovery methods.

EMC supports synchronous and asynchronous restart solutions. Asynchronous remote replication solutions are distinguished from synchronous remote replication solutions at the application or database level. With synchronous replication, the application waits for an acknowledgment from the storage unit that the remote data write has completed. Application logic does not allow any dependent-write activity until this acknowledgment has been received. If the time to receive this notice is elongated, then response times for users or applications can increase.

The alternative is to use an asynchronous remote replication solution that provides notice to the application that the write has been completed, regardless of whether the remote physical write activity has completed. The application can then allow dependent-write activity to proceed immediately.

It is possible to lose consistency and/or data in an asynchronous remote replication solution, so various techniques have been developed in the industry to prevent data loss and/or ensure data consistency. EMC uses SRDF Consistency Groups, TimeFinder Consistency Groups, SRDF/AR, and SRDF/A individually or in combination, to address this issue. Refer to Chapter 2, “EMC Foundation Products,” for further information on EMC products.

Definitions

In the following sections, the terms dependent-write consistency, database restart, database recovery, and roll-forward recovery are used. A clear definition of these terms is required to understand the context of this chapter.

Dependent-write consistency

A dependent-write I/O cannot be issued until a related predecessor I/O has completed. Dependent-write consistency is a data state where data integrity is guaranteed by dependent-write I/Os embedded in application logic. Database management systems are good examples of the practice of dependent-write consistency.


Database management systems must devise protection against abnormal termination in order to successfully recover from one. The most common technique used is to guarantee that a dependent write cannot be issued until a predecessor write has completed. Typically the dependent write is a data or index write while the predecessor write is a write to the log. Because the write to the log must be completed prior to issuing the dependent data write, the application thread is synchronous to the log write, that is, it waits for that write to complete prior to continuing. The result of this kind of strategy is a dependent-write consistent database.

Database restart

Database restart is the implicit application of database logs during its normal initialization process to ensure a transactionally consistent data state.

If a database is shut down normally, the process of getting to a point of consistency during restart requires minimal work. If the database abnormally terminates, then the restart process takes longer depending on the number and size of in-flight transactions at the time of termination. An image of the database created by using EMC consistency technology while it is running, without conditioning the database, is in a dependent-write consistent data state, which is similar to that created by a local power failure. This is also known as a DBMS restartable image. The restart of this image transforms it to a transactionally consistent data state by completing committed transactions and rolling back uncommitted transactions during the normal database initialization process.

Database recovery

Database recovery is the process of rebuilding a database from a backup image, and then explicitly applying subsequent logs to roll the data state forward to a designated point of consistency. Database recovery is only possible with databases configured with archive logging.

Creation of a restartable or recoverable IMS database copy on z/OS can be performed in one of three ways:

1. Using traditional backup utilities (restart and recovery)


2. With the database quiesced and copying the database components using external tools (restart and recovery)

3. Using storage mechanisms

a. With the database in quiesced mode and copying the database using external tools (restart and recovery)

b. Quiesced using external tools (restart and recovery)

c. Running using consistency technology to create a restartable image (restart only)

Roll-forward recovery

With IMS, it is possible to take a DBMS restartable image of the database and apply subsequent archive logs to roll forward the database to a point in time after the image was created. This means that the image created can be used in a backup strategy in combination with archive logs.

Considerations for disaster recovery/restart

Loss of data or loss of application availability has a varying impact from one business type to another. For instance, the loss of transactions for a bank could cost millions, whereas system downtime may not have a major fiscal impact. On the other hand, businesses that are primarily Web-based may require 100 percent application availability in order to survive. The two factors, loss of data and loss of uptime, are the business drivers that are baseline requirements for a DR solution. When quantified, these two factors are more formally known as recovery point objective (RPO) and recovery time objective (RTO), respectively.

When evaluating a solution, the RPO and RTO requirements of the business need to be met. In addition, the solution needs to take into consideration operational complexity, cost, and the ability to return the whole business to a point of consistency. Each of these aspects is discussed in the following sections.


Recovery point objective

The RPO is a point of consistency to which a user wants to recover or restart. It is measured in the amount of time from when the point of consistency was created or captured to the time the disaster occurred. This time equates to the acceptable amount of data loss. Zero data loss (no loss of committed transactions from the time of the disaster) is the ideal goal but the high cost of implementing such a solution must be weighed against the business impact and cost of a controlled data loss.

Some organizations, like banks, have zero data loss requirements. The database transactions entered at one location must be replicated immediately to another location. This can have an impact on application performance when the two locations are far apart. On the other hand, keeping the two locations close to one another might not protect against a regional disaster like a power outage or a hurricane.

Defining the required RPO is usually a compromise between the needs of the business, the cost of the solution, and the risk of a particular event happening.

Recovery time objective

The RTO is the maximum amount of time allowed for recovery or restart to a specified point of consistency. This time involves many factors. The time taken to:

◆ Provision power, utilities, and so on

◆ Provision servers with the application and database software

◆ Configure the network

◆ Restore the data at the new site

◆ Roll forward the data to a known point of consistency

◆ Validate the data

Some delays can be reduced or eliminated by choosing certain DR options, like having a hot site where servers are preconfigured and on standby. Also, if storage-based replication is used, the time taken to restore the data to a usable state is almost completely eliminated.
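One way to reason about an RTO target is simply to add up estimates for each of the phases listed above and see which ones dominate. The sketch below does this with purely hypothetical numbers; in practice each value would come from measured or rehearsed recovery times.

```python
# Hypothetical RTO breakdown (hours): summing the recovery phases shows
# which steps dominate and where a hot site or storage replication helps.
phases = {
    "provision power and utilities": 0.0,   # assumes a hot site is already provisioned
    "provision servers and software": 1.0,
    "configure the network": 0.5,
    "restore the data": 6.0,                # typically dominant when restoring from tape
    "roll forward to consistency": 2.0,
    "validate the data": 1.0,
}

total = sum(phases.values())
print(f"Estimated RTO: {total:.1f} hours")
for name, hours in sorted(phases.items(), key=lambda kv: -kv[1]):
    print(f"  {name:<32} {hours:4.1f} h")
```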


As with RPO, each solution for RTO will have a different cost profile. Defining the RTO is usually a compromise between the cost of the solution and the cost to the business when the database and applications are unavailable.

Operational complexity

The operational complexity of a DR solution may be the most critical factor in determining the success or failure of a DR activity. The complexity of a DR solution can be considered as three separate phases:

1. Initial setup of the implementation

2. Maintenance and management of the running solution, including testing

3. Execution of the DR plan in the event of a disaster

While initial configuration and operational complexities can be a demand on people resources, the third phase, execution of the plan, is where automation and simplicity must be the focus. When a disaster is declared, key personnel may be unavailable, in addition to the loss of servers, storage, networks, buildings, and other resources. If the complexity of the DR solution is such that skilled personnel with an intimate knowledge of all systems involved are required to restore, recover, and validate application and database services, the solution has a high probability of failure.

Multiple database environments grow organically over time into complex federated database architectures. In these federated database environments, reducing the complexity of DR is absolutely critical. Validation of transactional consistency within the complex database architecture is time-consuming and costly, and requires intimate knowledge of the application. One reason for this complexity is the mix of heterogeneous databases and operating systems in these federated environments. Across multiple heterogeneous platforms it is hard to establish a common clock, and therefore, hard to determine a business point of consistency across all platforms. This business point of consistency has to be created from intimate knowledge of the transactions and data flows.


Source LPAR activity

DR solutions may or may not require additional processing activity on the source LPARs. The extent of that activity can impact both response time and throughput of the production application. This effect should be understood and quantified for any given solution to ensure that the impact to the business is minimized. The effect for some solutions is continuous while the production application is running. For other solutions, the impact is sporadic, where bursts of write activity are followed by periods of inactivity.

Production impact

Some DR solutions delay the host activity while taking actions to propagate the changed data to another location. This action only affects write activity, and although the introduced delay may only be of the order of a few milliseconds, it can impact response time in a high-write environment. Synchronous solutions introduce delay into write transactions at the source site; asynchronous solutions do not.

Target host activity

Some DR solutions require a target host at the remote location to perform DR operations. The remote host has both software and hardware costs and needs personnel with physical access to it for basic operational functions like power on and power off. Ideally, this host could have some usage, like running development or test databases and applications. Some DR solutions require more target host activity than others, and some require none.

Number of copies of data

DR solutions require replication of data in one form or another. Replication of a database and associated files can be as simple as making a tape backup and shipping the tapes to a DR site, or as sophisticated as asynchronous array-based replication. Some solutions require multiple copies of the data to support DR functions. More copies of the data may be required to perform testing of the DR solution, in addition to those that support the DR process.


Distance for solution

Disasters, when they occur, have differing ranges of impact. For instance, a fire may take out a building, an earthquake may destroy a city, or a hurricane may devastate a region. The level of protection for a DR solution should address the probable disasters for a given location. For example, when protecting against an earthquake, the DR site should not be in the same locale as the production site. For regional protection, the two sites need to be in two different regions. The distance associated with the DR solution affects the kind of DR solution that can be implemented.

Bandwidth requirements

One of the largest costs for DR is in provisioning bandwidth for the solution. Bandwidth costs are an operational expense. This makes solutions that have reduced bandwidth requirements very attractive to customers. It is important to recognize in advance the bandwidth consumption of a given solution to be able to anticipate the running costs. Incorrect provisioning of bandwidth for DR solutions can have an adverse effect on production performance and can invalidate the overall solution.

Federated consistency

Databases are rarely isolated islands of information with no interaction or integration with other applications or databases. Most commonly, databases are loosely and/or tightly coupled to other databases using triggers, database links, and stored procedures. Some databases provide information downstream for other databases using information distribution middleware. Other databases receive feeds and inbound data from message queues and EDI transactions. The result can be a complex interwoven architecture with multiple interrelationships. This is referred to as a federated database architecture.

With a federated database architecture, making a DR copy of a single database, without regard to other components, invites consistency issues and creates logical data integrity problems. All components in a federated architecture need to be recovered or restarted to the same dependent-write consistent point in time to avoid these problems.


With this in mind, it is possible that point database solutions for DR, like log-shipping, do not provide the required business point of consistency in a federated database architecture. Federated consistency solutions guarantee that all components, databases, applications, middleware, flat files, and so on are recovered or restarted to the same dependent-write consistent point in time.

Testing the solution

Tested, proven, and documented procedures are also required for a DR solution. Many times the DR test procedures are operationally different from a true disaster set of procedures. Operational procedures need to be clearly documented. In the best-case scenario, companies should periodically execute the actual set of procedures for DR. This could be costly to the business because of the application downtime required to perform such a test, but it is necessary to ensure the validity of the DR solution.

Ideally, DR test procedures should be executed by personnel not involved with creating the procedures. This helps ensure the thoroughness and efficacy of the DR implementation.

Cost

The cost of doing DR can be justified by comparing it to the cost of not doing it. What does it cost the business when the database and application systems are unavailable to users? For some companies this is easily measurable, and revenue loss can be calculated per hour of downtime or per hour of data loss.

Whatever the business, the DR cost is going to be an extra expense item, in many cases with little visible return. The costs include, but are not limited to:

◆ Hardware (storage, servers, and maintenance)

◆ Software licenses and maintenance

◆ Facility leasing/purchase

◆ Utilities

◆ Network infrastructure

◆ Personnel


Tape-based solutions

When it is not feasible to implement an SRDF solution, tape-based solutions are alternatives that have been used for many years. The following sections describe tape-based disaster recovery and restart solutions.

Tape-based disaster recovery

Traditionally, the most common form of disaster recovery was to make a copy of the database onto tape, and using PTAM (Pickup Truck Access Method), take the tapes off site to a hardened facility. In most cases, the database and application needed to be available to users during the backup process. Taking a backup of a running database created a fuzzy image of the database on tape, one that required database recovery after the image had been restored. Recovery usually involved application of logs that were active during the time the backup was in process. These logs had to be archived and kept with the backup image to ensure successful recovery.

The rapid growth of data over the last two decades has meant that this method has become unmanageable. Making a hot copy of the database is now the standard, but this method has its own challenges. How can a consistent copy of the database and supporting files be made when they are changing throughout the duration of the backup? What exactly is the content of the tape backup at completion? The reality is that the tape data is a fuzzy image of the disk data, and considerable expertise is required to restore the database back to a database point of consistency.

In addition, the challenge of returning the data to a business point of consistency, where a particular database must be recovered to the same point as other databases or applications, is making this solution less viable.

Tape-based disaster restart

Tape-based disaster restart is a recent development in disaster recovery strategies and is used to avoid the fuzziness of a backup taken while the database and application are running. A restart copy of the system data is created by using storage arrays to create a local, dependent-write consistent point-in-time image of the disks. This image is a DBMS restartable image as described earlier. Thus, if this image were restored and the database brought up, the database would perform an implicit recovery to attain transactional consistency. Forward recovery using archived logs and this database image is possible with IMS.

The restartable image on the disks can be backed up to tape and moved off site to a secondary facility. The frequency with which the backups are created and shipped off site determines the amount of data loss with this solution. Other circumstances, such as bad tapes, could also dictate how much data loss is incurred at the remote site.

The time taken to restore the database is a factor to consider since reading from tape is typically slow. Consequently, this solution can be effective for customers with relaxed RTOs.

Remote replication challenges

Replicating database information over long distances for the purpose of disaster recovery is challenging. Synchronous replication over distances greater than 200 km may not be feasible due to the negative impact on the performance of writes because of propagation delay. Some form of asynchronous replication must be adopted. Considerations in this section apply to all forms of remote replication technology, whether they are array-based, host-based, or managed by the database.

Remote replication solutions usually start with initially copying a full database image to the remote location. This is called instantiation of the database. There are a variety of ways to perform this. After instantiation, only the changes from the source site are replicated to the target site in an effort to keep the target up to date. Some methodologies may not send all of the changes (certain log-shipping techniques, for example), by omission rather than design. These methodologies may require periodic re-instantiation of the database at the remote site.

The following considerations apply to remote replication of databases:

◆ Propagation delay (latency due to distance)

◆ Bandwidth requirements

◆ Network infrastructure

◆ Method and frequency of instantiation


◆ Change rate at the source site

◆ Locality of reference

◆ Expected data loss

◆ Failback considerations

Propagation delay

Electronic operations execute at the speed of light. The speed of light in a vacuum is 186,000 miles per second. The speed of light through glass (in the case of fiber-optic media) is less, approximately 115,000 miles per second. In other words, in an optical network like SONET, for example, it takes 1 millisecond to send a data packet 125 miles, or 8 milliseconds for 1,000 miles. All remote replication solutions need to be designed with a clear understanding of the propagation delay impact.
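The propagation delay implied by these figures can be estimated directly from the distance between the sites. The following sketch is a minimal illustration using the approximation above of roughly 125 miles per millisecond through fiber; the site separations shown are hypothetical.

```python
# Minimal sketch: estimate one-way and round-trip propagation delay over fiber,
# using the text's approximation of about 125 miles per millisecond in glass.
FIBER_MILES_PER_MS = 125.0

def propagation_delay_ms(distance_miles: float, round_trip: bool = True) -> float:
    """Return the propagation delay in milliseconds for the given distance."""
    one_way = distance_miles / FIBER_MILES_PER_MS
    return 2 * one_way if round_trip else one_way

if __name__ == "__main__":
    for miles in (25, 125, 1000):         # hypothetical site separations
        print(f"{miles:>5} miles: one-way {propagation_delay_ms(miles, False):.2f} ms, "
              f"round trip {propagation_delay_ms(miles):.2f} ms")
```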

Bandwidth requirements

All remote-replication solutions have some bandwidth requirements because the changes from the source site must be propagated to the target site. The more changes there are, the greater the bandwidth that is needed. It is the change rate and replication methodology that determine the bandwidth requirement, not necessarily the size of the database.

Data compression can help reduce the quantity of data transmitted, and therefore, the size of the pipe required. Certain network devices, like switches and routers, provide native compression, some by software and some by hardware. GigE directors provide native compression in an SRDF pairing between Symmetrix systems. The amount of compression achieved is dependent on the type of data that is being compressed. Typical character and numeric database data compresses at about a 2 to 1 ratio. A good way to estimate how the data will compress is to assess how much tape space is required to store the database during a full backup process. Tape drives perform hardware compression on the data prior to writing it. For example, if a 300 GB database takes 200 GB of space on tape, the compression ratio is 1.5 to 1.
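As a rough illustration of the sizing approach just described, the sketch below estimates a compression ratio from full-backup tape usage and then derives an approximate average link bandwidth from a daily change rate. All numbers are hypothetical, and a real design would also account for peak (burst) write periods rather than a daily average.

```python
# Rough sizing sketch (hypothetical numbers): estimate a compression ratio from
# tape usage, then the average link bandwidth needed for a given change rate.

def compression_ratio(db_size_gb: float, tape_used_gb: float) -> float:
    # e.g., a 300 GB database occupying 200 GB on tape => 1.5 : 1
    return db_size_gb / tape_used_gb

def average_mbit_per_sec(changed_gb_per_day: float, ratio: float) -> float:
    compressed_gb = changed_gb_per_day / ratio
    bits = compressed_gb * 8 * 1024**3        # GB -> bits
    return bits / (24 * 3600) / 1e6           # averaged over the day, in Mbit/s

if __name__ == "__main__":
    ratio = compression_ratio(300, 200)       # the 1.5 : 1 example from the text
    print(f"Estimated compression ratio: {ratio:.1f} : 1")
    print(f"Average bandwidth for 500 GB/day of changes: "
          f"{average_mbit_per_sec(500, ratio):.0f} Mbit/s")
```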

For most customers, a major consideration in the disaster recovery design is cost. It is important to recognize that some components of the end solution represent a capital expenditure and some an operational expenditure. Bandwidth costs are operational expenses, and thus, any reduction in this area, even at the cost of some capital expense, is highly desirable.

Network infrastructure

The choice of channel-extension equipment, network protocols, switches, routers, and so on ultimately determines the operational characteristics of the solution. EMC has a proprietary BC Design Tool to assist customers in analysis of the source systems and to determine the required network infrastructure to support a remote-replication solution.

Method and frequency of instantiation

In all remote-replication solutions, a common requirement is for an initial, consistent copy of the complete database to be replicated to the remote site. The initial copy from source to target is called instantiation of the database at the remote site. Following instantiation, only the changes made at the source site are replicated. For large databases, sending only the changes after the initial copy is the only practical and cost-effective solution for remote database replication.

In some solutions, instantiation of the database at the remote site uses a process that is similar to the one that replicates the changes. Some solutions do not even provide for instantiation at the remote site (log shipping, for example). In all cases, it is critical to understand the pros and cons of the complete solution.

Method of re-instantiation

Some techniques to perform remote replication of a database require periodic refreshing of the remote system with a full copy of the database. This is called re-instantiation. Technologies such as log-shipping frequently require this since not all activity on the production database may be represented in the log. In these cases, the disaster recovery plan must account for re-instantiation and also for the fact that there may be a disaster during the refresh. The business objectives of RPO and RTO must likewise be met under those circumstances.


Change rate at the source site

After instantiation of the database at the remote site, only changes to the database are replicated remotely. There are many methods of replication to the remote site, and each has differing operational characteristics. The changes can be replicated using logging technology, hardware and software mirroring, for example. Before designing a solution with remote replication, it is important to quantify the average change rate. It is also important to quantify the change rate during periods of burst write activity. These periods might correspond to end of month/quarter/year processing, billing, or payroll cycles. The solution needs to be designed to allow for peak write workloads.

Locality of reference

Locality of reference is a factor that needs to be measured to understand if there will be a reduction of bandwidth consumption when any form of asynchronous transmission is used. Locality of reference is a measurement of how much write activity on the source is skewed. For example, a high locality of reference application may make many updates to a few records in the database, whereas a low locality of reference application rarely updates the same records in the database during a given period of time.

It is important to understand that while the activity on the databases may have a low locality of reference, the write activity into an index might be clustered when inserted records have the same or similar index values, rendering a high locality of reference on the index components.

In some asynchronous replication solutions, updates are batched up into periods of time and sent to the remote site to be applied. In a given batch, only the last image of a given block is replicated to the remote site. So, for highly skewed application writes, this results in bandwidth savings. Generally, the greater the time period of batched updates, the greater the savings on bandwidth.
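The effect of batching on bandwidth can be illustrated with a small sketch. Given a hypothetical trace of block writes, only the last image of each block in a cycle is transmitted, so the savings grow with the skew of the writes and with the length of the cycle.

```python
# Illustration of write folding in cycle-based asynchronous replication:
# within one cycle, only the final image of each written block is transmitted.

def images_transmitted(write_trace, cycle_size):
    """Split the trace into cycles of `cycle_size` writes and count the
    distinct blocks (one image per block) actually sent per cycle."""
    sent = 0
    for i in range(0, len(write_trace), cycle_size):
        cycle = write_trace[i:i + cycle_size]
        sent += len(set(cycle))
    return sent

if __name__ == "__main__":
    # Hypothetical trace with high skew: a hot block (7) is rewritten repeatedly.
    trace = [7, 12, 7, 7, 40, 7, 12, 7, 55, 7, 7, 12] * 50
    total = len(trace)
    for cycle in (1, 30, 300):
        sent = images_transmitted(trace, cycle)
        saving = 100 * (1 - sent / total)
        print(f"cycle of {cycle:>3} writes: {sent:>4}/{total} images sent "
              f"({saving:.0f}% bandwidth saving)")
```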

Log-shipping technologies do not take into account locality of reference. For example, a record updated 100 times is transmitted 100 times to the remote site, whether the solution is synchronous or asynchronous.


Expected data loss

Synchronous DR solutions are zero data loss solutions, that is to say, there is no loss of committed transactions from the time of the disaster. Synchronous solutions may also be impacted by a rolling disaster, in which case work completed at the source site after the rolling disaster started may be lost. Rolling disasters are discussed in detail in a later section.

Asynchronous DR solutions have the potential for data loss. How much data is lost depends on many factors, most of which have been defined previously. The quantity of data loss that is expected for a given solution is called the recovery point objective (RPO). For asynchronous replication, where updates are batched and sent to a remote site, the maximum amount of data lost will be two cycles or two batches worth. The two cycles that may be lost include the cycle currently being captured on the source site and the one currently being transmitted to the remote site. With inadequate network bandwidth, data loss could increase due to the increase in transmission time.

Failback considerations

If there is the slightest chance that failover to the DR site may be required, then there is a 100 percent chance that failback to the primary site will also be required, unless the primary site is lost permanently. The DR architecture should be designed in such a way as to make failback simple, efficient, and low risk. If failback is not planned for, there may be no reasonable or acceptable way to move the processing from the DR site, where the applications may be running on tier 2 servers and tier 2 networks and so on, back to the production site.

In a perfect world, the DR process should be tested once a quarter, with database and application services fully failed over to the DR site. The integrity of the application and database needs to be verified at the remote site to ensure that all required data was copied successfully. Ideally, production services are brought up at the DR site as the ultimate test. This means that production data would be maintained on the DR site, requiring a failback when the DR test completed. While this is not always possible, it is the ultimate test of a DR solution. It not only validates the DR process but also trains the staff on managing the DR process should a catastrophic failure ever occur. The downside for this approach is that duplicate sets of servers and storage need to be present in order to make an effective and meaningful test. This tends to be an expensive proposition.

Array-based remote replication

Customers can use the capabilities of a Symmetrix storage array to replicate the database from the production location to a secondary location. No host CPU cycles are used for this, leaving the host dedicated to running the production application and database. In addition, no host I/O is required to facilitate this: The array takes care of all replication, and no hosts are required at the target location to manage the target array.

EMC provides multiple solutions for remote replication of databases:

◆ SRDF/S: Synchronous SRDF.

◆ SRDF/A: Asynchronous SRDF.

◆ SRDF/AR: SRDF Automated Replication.

◆ SRDF/Star: Software that enables concurrent SRDF/S and SRDF/A operations from the same source volumes.

Each of these solutions is discussed in detail in the following sections. In order to use any of the array-based solutions, it is necessary to coordinate the disk layout of the databases with this kind of replication in mind.

Planning for array-based replication

All Symmetrix solutions replicating data from one location to another are disk based. This allows the Symmetrix system to be agnostic to applications, database systems, operating systems, and so on.

In addition, if an IMS system is to be replicated independently of other IMS systems, it should have its own set of dedicated devices. That is, the devices used by an IMS system should not be shared with other applications or IMS systems.


SRDF/S single Symmetrix array to single Symmetrix array

Synchronous SRDF, or SRDF/S, is a method of replicating production data changes between locations that are no greater than 200 km apart. Synchronous replication takes writes that are inbound to the source Symmetrix array and copies them to the target Symmetrix array. The write operation is not acknowledged as complete to the host until both Symmetrix arrays have the data in cache. It is important to realize that while the following examples involve Symmetrix arrays, the fundamentals of synchronous replication described here are true for all synchronous replication solutions. Figure 32 shows this process.


Figure 32 Single Symmetrix array to single Symmetrix array

The following steps describe Figure 32:

1. A write is received into the source Symmetrix cache. At this time, the host does not receive acknowledgment that the write is complete.

2. The source Symmetrix array uses SRDF/S to transmit the write to the target Symmetrix array.

3. The target Symmetrix array sends an acknowledgment back to the source that the write was received.

4. Channel-end/device-end are presented to the host.

These four steps cause a delay in the processing of writes as perceived by the database on the source host. The amount of delay depends on the exact configuration of the network, the storage, the write block size, and the distance between the two locations. Note that reads from the source Symmetrix array are not affected by the replication.
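A back-of-the-envelope model of that delay is sketched below. It simply adds one network round trip to the local write service time and ignores protocol, equipment, and queuing overheads, so the service time and distances used are hypothetical.

```python
# Simple model of synchronous remote replication write latency: the host sees
# the local cache write time plus one network round trip to the target array.
FIBER_MILES_PER_MS = 125.0   # approximate speed of light through fiber

def sync_write_latency_ms(local_write_ms: float, distance_miles: float) -> float:
    round_trip_ms = 2 * distance_miles / FIBER_MILES_PER_MS
    return local_write_ms + round_trip_ms

if __name__ == "__main__":
    for miles in (0, 30, 60, 124):                    # 124 miles is roughly 200 km
        latency = sync_write_latency_ms(0.5, miles)   # 0.5 ms hypothetical cache write
        print(f"{miles:>4} miles: {latency:.2f} ms per write, "
              f"~{1000 / latency:.0f} serialized writes per second")
```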


Dependent-write consistency is inherent in a synchronous relationship as the target R2 volumes are at all times equal to the source, provided that a single RA group is used. If multiple RA groups are used, or if more than one Symmetrix array is used on the source site, SRDF Consistency Groups (SRDF/CG) must be used to guarantee consistency. SRDF/CG is described in the following section.

Once the R2s in the group are made visible to the host, the host can issue the necessary commands to access the devices, and they can then be varied online to the host. When the data is available to the host, the IMS systems can be restarted. IMS performs an implicit recovery when restarted. Transactions that were committed but not completed are rolled forward and completed using the information in the OLDS and WADS. Transactions that have updates applied to the database but were not committed are rolled back. The result is a transactionally consistent database.

SRDF/S and consistency groups

Zero data loss disaster recovery techniques tend to use straightforward database and application restart procedures. These procedures work well if all processing and data mirroring stop at the same instant in time at the production site, when a disaster happens. This is the case when there is a site power failure.

However, in most cases, it is unlikely that all data processing ceases at an instant in time. Computing operations can be measured in nanoseconds, and even if a disaster takes only a millisecond to complete, many such computing operations could be completed between the start of a disaster and the moment all data processing ceases. This gives us the notion of a rolling disaster. A rolling disaster is a series of events occurring over a period of time that together comprise a true disaster. The specific period of time that makes up a rolling disaster could be milliseconds (in the case of an explosion) or minutes in the case of a fire. In both cases, the DR site must be protected against data inconsistency.

Rolling disaster

Protection against a rolling disaster is required when the data for a database resides on more than one Symmetrix array or multiple RA groups. Figure 33 on page 157 depicts a dependent-write I/O sequence where a predecessor log write is happening prior to a page flush from a database buffer pool. The log device and data device are on different Symmetrix arrays with different replication paths. Figure 33 demonstrates how rolling disasters can affect this dependent-write sequence.

Figure 33 Rolling disaster with multiple production Symmetrix arrays

1. This example of a rolling disaster starts with a loss of the synchronous links between the bottom source Symmetrix array and the target Symmetrix array. This prevents the remote replication of data on the bottom source Symmetrix array.

2. The Symmetrix array, which is now no longer replicating, receives a predecessor log write of a dependent-write I/O sequence. The local I/O is completed; however, it is not replicated to the remote Symmetrix array, and the tracks are marked as being owed to the target Symmetrix array. Nothing prevents the predecessor log write from completing to the host, completing the acknowledgment process.

3. Now that the predecessor log write has completed, the dependent data write is issued. This write is received on both the source and target Symmetrix arrays because the rolling disaster has not yet affected those communication links.


4. If the rolling disaster ended in a complete disaster, the condition of the data at the remote site is such that it creates a data ahead of log condition, which is an inconsistent state for a database. The severity of the situation is that when the database is restarted and performs an implicit recovery, it may not detect the inconsistencies. A person extremely familiar with the transactions running at the time of the rolling disaster might be able to detect the inconsistencies. Database utilities could also be run to detect some of the inconsistencies.

A rolling disaster can happen in such a manner that data links providing remote mirroring support are disabled in a staggered fashion, while application and database processing continues at the production site. The sustained replication during the time when some Symmetrix units are communicating with their remote partners through their respective links while other Symmetrix units are not (due to link failures) can cause data integrity exposure at the recovery site. Some data integrity problems caused by the rolling disaster cannot be resolved through normal database restart processing, and may require a full database recovery using appropriate backups and logs. A full database recovery elongates overall application restart time at the recovery site.

Protecting against rolling disasters

SRDF Consistency Group (SRDF/CG) technology provides protection against rolling disasters. A consistency group is a set of Symmetrix volumes spanning multiple RA groups and/or multiple Symmetrix frames that replicate as a logical group to other Symmetrix arrays using Synchronous SRDF. It is not a requirement to span multiple RA groups and/or Symmetrix frames when using SRDF Consistency Group. Consistency group technology guarantees that if a single source volume is unable to replicate to its partner for any reason, then all volumes in the group stop replicating. This ensures that the image of the data on the target Symmetrix array is consistent from a dependent-write perspective.

Figure 34 on page 159 depicts a dependent-write I/O sequence where a predecessor log write is happening prior to a changed block written from a database. The log device and data device are on different Symmetrix arrays with different replication paths. Figure 34 on page 159 demonstrates how rolling disasters can be prevented using SRDF Consistency Group technology.


Figure 34 Rolling disaster with SRDF Consistency Group protection (X = DBMS data, Y = application data, Z = logs)

1. Consistency group protection is defined containing volumes X, Y, and Z on the source Symmetrix array. This consistency group definition must contain all of the devices that need to maintain dependent-write consistency and reside on all participating hosts involved in issuing I/O to these devices. A mix of CKD (mainframe) and FBA (UNIX/Windows) devices can be logically grouped together. In some cases, the entire processing environment may be defined in a consistency group to ensure dependent-write consistency.

2. The rolling disaster described begins preventing the replication of changes from volume Z to the remote site.

3. The predecessor log write occurs to volume Z, causing a consistency group (ConGroup) trip.

4. A ConGroup trip will hold the I/O that could not be replicated along with all of the I/O to the logically grouped devices. The I/O is held by PowerPath on the UNIX or Windows hosts, and IOS on the mainframe host. It is held long enough to issue two I/Os per Symmetrix array. The first I/O will put the devices in a suspend-pending state.


5. The second I/O performs the suspend of the R1/R2 relationship for the logically grouped devices, which immediately disables all replication to the remote site. This allows other devices outside of the group to continue replicating, provided the communication links are available.

6. After the R1/R2 relationship is suspended, all deferred write I/Os are released, allowing the predecessor log write to complete to the host. The dependent data write is issued by the DBMS and arrives at X, but is not replicated to the R2(X).

7. If a complete failure occurred from this rolling disaster, dependent-write consistency at the remote site is preserved. If a complete disaster did not occur, and the failed links were activated again, the consistency group replication could be resumed. It is recommended to create a copy of the dependent-write consistent image at the target site while the resume takes place. Once the SRDF process reaches synchronization, the dependent-write consistent copy is again achieved at the remote site.
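To make the value of the trip behavior concrete, the following is a small, purely illustrative simulation (not EMC software) that contrasts independently mirrored log and data volumes with group-wide suspension on the first replication failure. It shows how the data ahead of log condition arises only in the unprotected case.

```python
# Illustrative simulation (not EMC software): dependent writes to a log volume
# and a data volume, where the log volume's replication link has failed.

def replicate(writes, failed_links, consistency_group):
    """Return the list of writes visible at the target site.

    writes: ordered list of (volume, record) dependent writes
    failed_links: set of volumes whose replication link is down
    consistency_group: if True, the first failed replication suspends
                       replication for every volume in the group
    """
    target = []
    suspended = False
    for volume, record in writes:
        if suspended or volume in failed_links:
            if consistency_group:
                suspended = True   # group trip: stop replicating everything
            continue               # this write is not replicated
        target.append((volume, record))
    return target

if __name__ == "__main__":
    dependent_writes = [("LOG", "commit T1"), ("DATA", "page flush T1")]
    for cg in (False, True):
        image = replicate(dependent_writes, {"LOG"}, consistency_group=cg)
        inconsistent = (("DATA", "page flush T1") in image
                        and ("LOG", "commit T1") not in image)
        state = "data ahead of log (inconsistent)" if inconsistent else "dependent-write consistent"
        print(f"consistency group={cg}: target image={image} -> {state}")
```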

SRDF/S with multiple source Symmetrix arrays

The implications of spreading a database across multiple Symmetrix frames, or across multiple RA groups, and replicating in synchronous mode were discussed in previous sections. The challenge in this type of scenario is to protect against a rolling disaster. SRDF Consistency Groups can be used to avoid data corruption in a rolling disaster situation.

Consider the architecture depicted in Figure 35 on page 161.


Figure 35 SRDF/S with multiple Symmetrix arrays and ConGroup protection

To protect against a rolling disaster, a consistency group can be created that encompasses all the volumes on all Symmetrix arrays participating in replication, as shown by the blue-dashed oval.

ConGroup considerations

Refer to the section “IMS considerations for ConGroup” on page 57 for a list of the datasets required in a ConGroup for a synchronous restart solution. Figure 36 on page 162 shows an example of an IMS ConGroup configuration.


Figure 36 IMS ConGroup configuration

Consistency groups and IMS dual WADS

The combination of EMC Consistency Groups and IMS dual WADS capability provides a consistent, restartable image of an IMS system, without using SRDF to remotely mirror the WADS volumes.

For IMS environments where the I/O rate of the IMS Write Ahead dataset (WADS) is very high, SRDF protection may not meet WADS response time requirements, due to the overhead inherent in SRDF. However, IMS does provide support for multiple WADS datasets, including failover to spare WADS datasets. IMS accommodates dual and spare WADS datasets, and automatically tracks which WADS datasets are active.

This solution requires that the primary WADS dataset be placed on the primary Symmetrix system, and the secondary WADS dataset on the restart Symmetrix system. Refer to Figure 37 on page 164 for the required configuration. Additional spare WADS datasets are placed in the consistency group. EMC SRDF Consistency Group for z/OS allows configurations with standard (non-SRDF) volumes to be part of a consistency group. This allows the standard volume to be protected by a BCV, just as a source (R1) volume is protected by a target (R2) volume.

In this configuration, a standard volume containing the secondary WADS dataset is located on the restart Symmetrix system. Spare WADS datasets are placed on volumes in the consistency group. At the time of a consistency group trip, all I/O is held on the primary hosts by IOS Manager. The BCV established with the standard volume containing the secondary WADS is consistently split, the SRDF R1-R2 relationship is suspended for the ConGroup-defined volumes, and I/O is allowed to continue on the primary site hosts. This is normal consistency group behavior. Since the IMS database R2 volumes have been consistently suspended from their R1 pairs, and the secondary WADS BCV consistently split, restart procedures are used at the recovery site. Procedures can be developed for restart at the recovery site using the consistent point-in-time copies at that site.

The viability of this solution in any customer IMS environment is site-specific. IMS WADS switching is at the heart of this solution. This solution only makes sense if the channel connection of the secondary WADS adds less response time overhead than SRDF would. The solution relies on a combination of ConGroup and IMS functionality and works in a multi-control unit, multi-MVS image environment. The following scenarios describe how a restartable IMS image is created regardless of the nature of the communications failure between the local and remote site.

Figure 37 on page 164 shows the required configuration. The primary WADS00 volume (STD) exists outside the consistency group definition. The secondary WADS01 and its established BCV (BCV WADS01) are channel-extended to the restart site and included in the consistency group. A spare WADS02 also exists within the consistency group, along with the remainder of the IMS storage environment, and is remotely mirrored by way of SRDF protection.


Figure 37 IMS ConGroup configuration

Two communication failure scenarios are examined. Case 1 is the failure of the SRDF links and is depicted in Figure 38 on page 165. Case 2 is the failure of the channel extension to the WADS control unit and is depicted in Figure 39 on page 167.

Communications failure case 1

If a condition occurs that causes an SRDF remote mirror failure, a consistency group trip sequence begins. Writes are temporarily held at the host while all source (R1) volumes are suspended from their target R2 pair. ConGroup also performs a consistent split on the standard volume at the restart site, which contains the secondary WADS (WADS01). Writes are then allowed to flow and work continues at the primary site.

The recovery site contains a consistent image of the IMS storage environment on the R2s, their BCV devices, and the BCV WADS01 that was split at the same point in time. If required, IMS database and transaction recovery can proceed at the recovery site.


Figure 38 Case 1—Failure of the SRDF links

The following sequence of events refers to Figure 38:

1. The SRDF link fails (or any failure that trips the ConGroup).

2. ConGroup suspends the SRDF R1-R2 relationships and splits the BCV of WADS01.

3. The STD WADS01 at the recovery site is varied offline.

4. IMS is restarted at the remote site.

Note: Since some processing may have occurred at the primary site before total shutdown of the primary site, the channel-extended STD WADS01 may have information later than the point in time of the consistent trip and split. For this reason, the remote IMS cannot be allowed to start off of the WADS on the STD volume.


The R2 volumes (or their BCVs) plus the BCV WADS01 constitute a point-in-time image suitable for restart processing. Splitting all the R2 BCV devices preserves the consistent point-in-time copies on either the R2 or BCV volumes. The BCV WADS01 also needs to be protected.

Communication failure case 2

Figure 39 on page 167 is an example of channel-extension failure. If the extended channel fails first, IMS performs the necessary actions to bring the spare WADS02 into the operational configuration. The integrity of the IMS environment is handled by IMS. WADS02 is a member of the consistency group and is protected by SRDF. At this time, the primary WADS00 and dual WADS02 are both in the Symmetrix systems at the primary site. The WADS02 is protected by SRDF at the remote Symmetrix system at the restart site.

If no other failure impacting remote mirroring occurs, the IMS work continues normally. The performance of the IMS system may be affected by the SRDF synchronous protection of WADS02. The primary site continues running with performance degraded but fully protected by the consistency group. The site then makes decisions about how to return to a high-performance configuration.


Figure 39 Case 2—Channel extension fails first

The sequence of events is as follows:

1. Failure of channel extension causes IMS to begin writing to spare WADS02 defined on the local site as an R1. IMS may run degraded since this WADS02 is SRDF protected.

2. If there is a subsequent remote mirroring failure, a ConGroup trip occurs for all IMS resources.

3. IMS is restarted at the remote site.

The R2 volumes (or their BCVs) can be the starting point for restart recovery. The STD WADS01 and its BCV are not usable for restart recovery actions due to the extended channel failure. Splitting all the involved BCV devices preserves the consistent point-in-time copies on either the R2 or BCV volumes. The restart Symmetrix system contains a consistent point-in-time image of the WADS02 and other protected volumes.


Note: In all failover scenarios, the STD device containing the WADS at the remote site must be varied offline. The BCV device containing the WADS at the remote site must be varied online. The R2 device containing the WADS must be varied online. IMS contains information in the log records indicating which WADS to use for restart.

SRDF/A

SRDF/A, or asynchronous SRDF, is a method of replicating production data changes from one Symmetrix array to another using delta set technology. Delta sets are the collection of changed blocks grouped together by a time interval that can be configured at the source site. The default time interval is 30 seconds. The delta sets are then transmitted from the source site to the target site in the order they were created. SRDF/A preserves the dependent-write consistency of the database at all times at the remote site.

The distance between the source and target Symmetrix arrays is unlimited and there is no host impact. Writes are acknowledged immediately when they reach the cache of the source Symmetrix array. SRDF/A is only available on the Symmetrix DMX family or the VMAX Family. Figure 40 depicts the process:

Figure 40 SRDF/Asynchronous replication configuration

1. Writes are received into the source Symmetrix cache. The host receives immediate acknowledgment that the write is complete. Writes are gathered into the capture delta set for 30 seconds.


2. A delta set switch occurs and the current capture delta set becomes the transmit delta set by changing a pointer in cache. A new empty capture delta set is created.

3. SRDF/A sends the changed blocks that are in the transmit delta set to the remote Symmetrix array. The changes collect in the receive delta set at the target site. When the replication of the transmit delta set is complete, another delta set switch occurs, and a new empty capture delta set is created with the current capture delta set becoming the new transmit delta set. The receive delta set becomes the apply delta set.

4. The changes in the apply delta set are marked against the appropriate volumes as invalid tracks, and the blocks begin destaging to disk.

5. The cycle repeats continuously.

With sufficient bandwidth for the source database write activity, SRDF/A transmits all changed data within the default 30 seconds. This means that the maximum time the target data is behind the source is 60 seconds (two replication cycles). At times of high write activity, it may not be possible to transmit all the changes that occur during a 30-second interval. This means that the target Symmetrix array will fall behind the source Symmetrix array by more than 60 seconds. Careful design of the SRDF/A infrastructure and a thorough understanding of write activity at the source site are necessary to design a solution that meets the RPO requirements of the business at all times.
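The relationship between the per-cycle change rate, the usable link throughput, and how far the target falls behind can be approximated with the sketch below. The cycle interval, change volumes, and link speed are hypothetical, and the model is deliberately simplified; a real assessment would use measured burst write statistics and EMC's design tools.

```python
# Rough SRDF/A lag sketch (hypothetical numbers): if a cycle's worth of changed
# data cannot be transmitted within the cycle interval, the target falls
# further behind than the nominal two cycles.
CYCLE_SECONDS = 30.0                      # default capture interval

def target_lag_seconds(changed_mb_per_cycle: float, link_mbyte_per_s: float) -> float:
    """Approximate how far (in seconds) the R2 image trails the source."""
    transmit = changed_mb_per_cycle / link_mbyte_per_s
    # Nominal lag is two cycles (capture + transmit); if transmission takes
    # longer than one interval, the lag stretches accordingly.
    return CYCLE_SECONDS + max(CYCLE_SECONDS, transmit)

if __name__ == "__main__":
    link = 100.0                          # MB/s of usable (compressed) throughput
    for changed_mb in (1500, 3000, 6000): # changed data captured per 30 s cycle
        print(f"{changed_mb:>5} MB per cycle -> target lag ~"
              f"{target_lag_seconds(changed_mb, link):.0f} seconds")
```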

Consistency is maintained throughout the replication process on a delta set boundary. The Symmetrix array will not apply a partial delta set, which would invalidate consistency. Dependent-write consistency is preserved by placing a dependent write in either the same delta set as the write it depends on or a subsequent delta set.

Different command sets are used to enable SRDF/A depending on whether the SRDF/A group of devices is contained within a single Symmetrix array or is spread across multiple Symmetrix arrays.

Preparation for SRDF/A

Before the asynchronous mode of SRDF can be established, initial instantiation of the database volumes has to have taken place. In other words, a baseline full copy of all the volumes that are going to participate in the asynchronous replication must be executed first. This is usually accomplished using the Adaptive Copy mode of SRDF.

After the bulk of the data has been transferred using Adaptive Copy, SRDF/A can be turned on. SRDF/A attempts to send the remaining data to the target Symmetrix array while at the same time managing ongoing changes on the source array. When the data at the target array is two delta sets behind the source, SRDF/A status is marked as consistent.

SRDF/A using multiple source Symmetrix

When a database is spread across multiple Symmetrix arrays, and SRDF/A is used for long-distance replication, separate software must be used to manage the coordination of the delta set boundaries between the participating Symmetrix arrays and to stop replication if any of the volumes in the group cannot replicate for any reason. The software must ensure that all delta set boundaries on every participating Symmetrix array in the configuration are coordinated to give a dependent-write consistent point-in-time image of the database.

SRDF/A Multi-Session Consistency (MSC) provides consistency across multiple RA groups and/or multiple Symmetrix arrays. MSC is available on 5671 microcode and later with Solutions Enabler V6.0 and later. SRDF/A with MSC is supported by an SRDF host process that performs cycle-switching and cache recovery operations across all SRDF/A sessions in the group. This ensures that a dependent-write consistent R2 copy of the database exists at the remote site at all times.

The MSC environment is set up and controlled through SCF (ResourcePak Base), within which MSC commands are managed. SRDF must be running on all hosts that can write to the set of SRDF/A volumes being protected. At the time of an interruption (SRDF link failure, for instance), MSC analyzes the status of all SRDF/A sessions and either commits the last cycle of data to the R2 target or discards it.


How to restart in the event of a disaster

In the event of a disaster when the primary source Symmetrix array is lost, database and application services must be run from the DR site. A host at the DR site is required for restart. Before restart can be attempted, the R2 devices must be write-enabled.

Once the data is available to the host, the database can be restarted. Transactions that were committed but not completed are rolled forward and completed using the information in the OLDS and WADS. Transactions that have updates applied to the database but not committed are rolled back. The result is a transactionally consistent database.


SRDF/AR single-hop

SRDF Automated Replication, or SRDF/AR, is a continuous movement of dependent-write consistent data to a remote site using SRDF adaptive copy mode and TimeFinder consistent split technology. TimeFinder BCVs are used to create a dependent-write consistent point-in-time image of the data to be replicated. The BCVs also have an R1 personality, which means that SRDF in adaptive copy mode can be used to replicate the data from the BCVs to the target site. Since the BCVs are not changing, replication completes in a finite length of time. The length of time for replication depends on the size of the network pipe between the two locations, the distance between the two locations, the quantity of changed data tracks, and the locality of reference of the changed tracks. On the remote Symmetrix array, another BCV copy of the data is made using data on the R2s. This is necessary because the next SRDF/AR iteration replaces the R2 image in a non-ordered fashion, and if a disaster were to occur while the R2s were synchronizing, there would not be a valid copy of the data at the DR site. The BCV copy of the data in the remote Symmetrix array is commonly called the gold copy of the data. The whole process then repeats.

With SRDF/AR, there is no host impact. Writes are acknowledged immediately when they hit the cache of the source Symmetrix array. Figure 41 shows this process.

Figure 41 SRDF/AR single-hop replication configuration


The following steps describe Figure 41 on page 172:

1. Writes are received into the source Symmetrix cache and are acknowledged immediately. The BCVs are already synchronized with the STDs at this point. A consistent split is executed against the STD-BCV pairing to create a point-in-time image of the data on the BCVs.

2. SRDF transmits the data on the BCV/R1s to the R2s in the remote Symmetrix array.

3. When the BCV/R1 volumes are synchronized with the R2 volumes, they are re-established with the standards in the source Symmetrix array. This causes the SRDF links to be suspended. At the same time, an incremental establish is performed on the target Symmetrix array to create a gold copy on the BCVs in that frame.

4. When the BCVs in the remote Symmetrix are fully synchronized with the R2s, they are split, and the configuration is ready to begin another cycle.

5. The cycle repeats based on configuration parameters. The parameters can specify the cycles to begin at specific times or specific intervals, or to run back to back.

It should be noted that cycle times for SRDF/AR are usually in the minutes to hours range. The RPO is double the cycle time in a worst-case scenario. This may be a good fit for customers with relaxed RPOs.
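The worst-case RPO for SRDF/AR can be estimated by adding up the phases of one cycle and doubling the result, as in the sketch below; the durations are hypothetical placeholders for measured values.

```python
# SRDF/AR cycle-time sketch (hypothetical durations, in minutes):
# worst-case RPO is roughly twice the end-to-end cycle time.

def cycle_minutes(split: float, transfer: float, gold_copy: float) -> float:
    return split + transfer + gold_copy

def worst_case_rpo_minutes(split: float, transfer: float, gold_copy: float) -> float:
    return 2 * cycle_minutes(split, transfer, gold_copy)

if __name__ == "__main__":
    # e.g., 1 minute for the consistent split, 45 minutes to push changed
    # tracks over the link, 15 minutes to refresh the remote gold-copy BCVs
    print(f"Cycle time ~ {cycle_minutes(1, 45, 15):.0f} minutes, "
          f"worst-case RPO ~ {worst_case_rpo_minutes(1, 45, 15):.0f} minutes")
```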

The added benefit of having a longer cycle time is that the locality of reference will likely increase. This is because there is a much greater chance of a track being updated more than once in a one-hour interval than in, say, a 30-second interval. The increase in locality of reference shows up as reduced bandwidth requirements for the final solution.

Before SRDF/AR can be started, instantiation of the database has to have taken place. In other words, a baseline full copy of all the volumes that are going to participate in the SRDF/AR replication must be executed first. This means a full establish to the BCVs in the source array, a full SRDF establish of the BCV/R1s to the R2s, and a full establish of the R2s to the BCVs in the target array is required. There is an option to automate the initial setup of the relationship.


As with other SRDF solutions, SRDF/AR does not require a host at the DR site. The commands to update the R2s and manage the synchronization of the BCVs in the remote site are all managed in-band from the production site.

How to restart in the event of a disaster

In the event of a disaster, it is necessary to determine whether the most current copy of the data is located on the BCVs or the R2s at the remote site. Depending on when in the replication cycle the disaster occurs, the most current version could be on either set of disks.

SRDF/Star

The SRDF/Star disaster recovery solution provides advanced multi-site business continuity protection for enterprise environments. It combines the power of Symmetrix Remote Data Facility (SRDF) synchronous and asynchronous replication, enabling an advanced three-site business continuity solution. (The SRDF/Star for z/OS User Guide has more information.)

SRDF/Star enables concurrent SRDF/S and SRDF/A operations from the same source volumes with the ability to incrementally establish an SRDF/A session between the two remote sites in the event of a primary site outage.

This software provides the ability to quickly re-establish protection between the two remote sites in the event of a primary site failure, and then just as quickly restore the primary site when conditions permit.

With SRDF/Star, enterprises can quickly resynchronize the SRDF/S and SRDF/A copies by replicating only the differences between the sessions, allowing for a fast resumption of protected services after a source site failure.

Figure 42 on page 175 is an example of an SRDF/Star configuration.

Figure 42 SRDF/Star configuration

SRDF/Extended Distance Protection

Enginuity 5874 supports a new feature called SRDF/Extended Distance Protection (SRDF/EDP). This feature allows a much more optimized and efficient structure for a three-site SRDF topology in that it allows the intermediate site (site B in an A-to-B-to-C topology) to have a new device type known as a diskless R21. This arrangement requires that the secondary (site B) system be at Enginuity 5874 or later (and utilizing a Symmetrix VMAX array), while sites A and C could be running either DMX-4 with Enginuity 5773 or a Symmetrix VMAX array with Enginuity 5874 or later. Thus, SRDF/EDP allows replication between the primary site A and tertiary site C without the need for SRDF BCVs or any physical storage for the replication at the secondary site B. Figure 43 on page 176 depicts an SRDF/EDP configuration.

(Figure 42 shows R1 devices at the source/primary site (A) replicating with SRDF/Synchronous to R2 devices at the local site (B) and with SRDF/Asynchronous, under SRDF/A MSC multi-session control, to R2 devices at the remote site (C); SDDF sessions track the differences between the two legs, and the link between the two remote sites remains inactive until needed.)

Figure 43 SRDF/EDP block diagram

The diskless R21 differs from the real disk R21, introduced in an earlier release of Enginuity, which had three mirrors (one at each of the three sites). The diskless R21 device is a new type of device that does not have any local mirrors. Furthermore, it has no local disk space allocated on which to store the user data, and therefore, reduces the cost of disk storage in the site B system. This results in only two full copies of data, one on the source site A and one on the target site C.

The purpose of a diskless R21 device is to cascade data directly to the remote R2 disk device. When using a diskless R21 device, the changed tracks received from the R1 mirror are saved in cache until these tracks are sent to the R2 disk device. Once the data is sent to the R2 device and the receipt is acknowledged, the cache slot is freed and the data no longer exists on the R21 Symmetrix array.

This advantageous approach to three-site SRDF means that a customer only needs a Symmetrix system with vault and SFS drives, plus enough cache to hold the common area, user data/updates (customer data), and device tables, thereby reducing the overall solution cost. It highlights a serious attempt to address a greener alternative to the device sprawl brought about by multi-site business continuity requirements and is sure to be welcomed by many customers deploying three-site DR solutions. R21 diskless devices still use the device table like a disk device and also consume a Symmetrix device number. They are not addressable by a host or assigned to a DA, and therefore, cannot be accessed for any I/Os.

Other considerations with diskless R21 devices include:

◆ They can only be supported on GigE and Fibre Channel directors.

◆ They cannot participate in dynamic sparing, since the DA microcode blocks any type of sparing or iVTOC against them.

◆ They cannot be SRDF-paired with other diskless devices.

◆ When used for SRDF/A operations, all devices in the SRDF/A session must be diskless. Non-diskless device types are not allowed.

◆ Symmetrix replication technologies other than SRDF (TimeFinder/Mirror, SNAP, and CLONE) do not function with diskless devices configured as either the source or the target of the intended operation.

◆ SDDF sessions are allowed on diskless devices.

High-availability solutions

Customers that cannot tolerate any data loss or outages, whether planned or unplanned, require high-availability solutions to maintain business continuity. EMC solutions that provide high availability are discussed in this section.

AutoSwap

EMC AutoSwap is a software package that allows I/O to be redirected from a primary (R1) device in an SRDF Synchronous relationship to its partner (R2) device with minimal impact to a running application. AutoSwap allows users to manage planned and unplanned events.

ConGroup contains a license-key enabled AutoSwap extension that handles automatic workload swaps between Symmetrix subsystems when the AutoSwap software detects an unplanned outage or problem.

AutoSwap can be used in both shared and non-shared DASD environments and uses standard z/OS operating system services to ensure serialization and to effect swaps. AutoSwap uses the Cross System Communication (CSC) facility of SCF to coordinate swaps across multiple z/OS images in a shared DASD or parallel sysplex environment.

Geographically Dispersed Disaster Restart

Geographically Dispersed Disaster Restart (GDDR) is a mainframe software product that standardizes and automates business recovery for both planned and unplanned outages. GDDR is designed to restart business operations following disasters ranging from the loss of computer capacity and/or disk array access, to total loss of a single data center or a regional disaster, including the loss of dual data centers. GDDR achieves this goal by providing automation and quality controls around the functionality of the many EMC and third-party hardware and software products required for business restart.

The EMC GDDR complex

Because GDDR is used to restart following disasters, it does not reside on the same servers that it is seeking to protect. For this reason, GDDR resides on separate logical partitions (LPARs) from the host servers that run the customer’s application workloads. Thus, in a three-site SRDF/Star configuration, GDDR is installed on a control LPAR at each site. Each GDDR node is aware of the other two GDDR nodes by way of network connections between each site. This awareness is required to detect disasters, identify survivors, nominate the leader, and then take the necessary actions to recover the customer’s business at one of the customer-chosen surviving sites.

To achieve the task of business restart, GDDR automation extends well beyond the disk layer where EMC has traditionally focused and into the host operating system layer. It is at this layer that sufficient controls and access to third-party software and hardware products exist to enable EMC to provide automated recovery services.

Figure 44 on page 179 shows a GDDR-managed configuration with SRDF/Star and AutoSwap.

Figure 44 GDDR-managed configuration with SRDF/Star and AutoSwap

In Figure 44, DC1 is the primary site where the production workload is located. DC1 is also the primary DASD site, where the R1 DASD is located. DC2 is the secondary site and contains the secondary R2 DASD.

Sites DC1 and DC2 are the primary and secondary data centers for critical production applications and data. They are considered fully equivalent for strategic production applications, connected with highly redundant direct network links. At all times, all production data is replicated synchronously between the two sites.

Site DC3 is the tertiary data center for critical production applications and data. It is connected with redundant networks to both DC1 and DC2. Data is replicated asynchronously from the current primary DASD site with an intended recovery point objective (RPO) of a few minutes.

GDDR and business continuity

Implementing SRDF/S, SRDF/A, TimeFinder, and other EMC and non-EMC technologies is a necessary prerequisite to being able to recover or restart following some level of interruption or disaster to normal IT operations. However, these are foundation technologies. Because of the complexity of operating these various products, including known or unknown dependencies and sequencing of operations, it is necessary to deploy automation to coordinate and control recovery and restart operations.

Figure 45 shows EMC’s foundation technologies used in a GDDR environment.

Figure 45 EMC foundation products

For more information about GDDR, see EMC Geographically Dispersed Disaster Restart Concepts and Facilities and EMC Geographically Dispersed Disaster Restart Product Guide.

IMS Fast Path Virtual Storage Option

The IMS Fast Path Virtual Storage Option can be used to map data into virtual storage or coupling facility structures. This is accomplished by defining the DEDB areas as VSO areas. This feature allows for reduced read I/O, decreased locking contention, and fewer writes to the area dataset.

The facilities that provide for mirroring of DASD, like SRDF, do not support CF structures. This means that when IMS is restarted at a disaster recovery site there are no failed persistent structures for the VSO areas to connect to. The VSO area on DASD is mirrored, and the data on DASD represents the contents of the structure as of the last system checkpoint.

At a disaster recovery site, during /ERE, IMS attempts to connect to the VSO structures. Since the structures from the local site do not exist at the disaster recovery site, the XES connect process creates new structures. As a result, all of the updates made in the failed site after the last system checkpoint are lost. The /ERE process will not rebuild those updates because, as far as IMS restart processing is concerned, the transactions have completed. Those transactions that have a 5612 log record are discarded. Those committed transactions that have not completed will be recovered to the new CF structure. If applications are allowed to start processing, the CF structures will be re-populated with old data, and there would be many data integrity issues.

In order to re-establish those updates, a database recovery procedure needs to be done. The updates are on the IMS log. All VSO areas that are updatable need to be recovered at the disaster recovery site.

8 Performance Topics

This chapter presents these topics.

◆ Overview........................................................................................... 184
◆ The performance stack .................................................................... 184
◆ Traditional IMS layout recommendations.................................... 186
◆ RAID considerations........................................................................ 188
◆ Flash drives ....................................................................................... 193
◆ Data placement considerations ...................................................... 197
◆ Other layout considerations ........................................................... 202
◆ Extended address volumes............................................................. 204

Overview

Monitoring and managing database performance should be a continuous process in all IMS environments. Establishing baselines and then collecting database performance statistics to compare against them are important to monitor performance trends and maintain a smoothly running system. This chapter discusses the performance stack and how database performance should be managed in general. Subsequent sections discuss specific Symmetrix layout and configuration issues to help ensure that the database meets the required performance levels.

The performance stack

Performance tuning involves the identification and elimination of bottlenecks in the various resources that make up the system. Resources include the application, code that drives the application, database, host, and storage. Tuning performance involves analyzing each of these individual components that make up an application, identifying bottlenecks or potential optimizations that can be made to improve performance, implementing changes that eliminate the bottlenecks or improve performance, and verifying that the change has improved overall performance. This is an iterative process and is performed until the potential benefits from continued tuning are outweighed by the effort required to tune the system.

Figure 46 on page 185 shows the various layers that need to be examined as a part of any performance analysis. The potential benefits achieved by analyzing and tuning a particular layer of the performance stack are not equal, however. In general, tuning the upper layers of the performance stack, that is, the application and IMS calls, provides a much better ROI than tuning the lower layers, such as the host or storage layers. For example, implementing a new index on a heavily used database that changes logical access from a full database scan to index access can vastly improve database performance if the call is run many times (thousands or millions) per day.

When tuning an IMS application, developers, DBAs, systems administrators, and storage administrators need to work together to monitor and manage the process. Efforts should begin at the top of the stack, and address application and IMS calls before moving down into the database and host-based tuning parameters. After all of these have been addressed, storage-related tuning efforts should then be performed.

Figure 46 The performance stack

(Figure 46 depicts the performance stack layers: application, SQL statements, database engine, operating system, and storage system. Typical issues at each layer range from inefficient application code, SQL logic errors, and missing indexes, through database resource contention and file system or kernel tuning, down to storage allocation errors and volume contention.)

Importance of I/O avoidance

The primary goal at all levels of the performance stack is disk I/O avoidance. In theory, an ideal database environment is one in which most I/Os are satisfied from memory rather than going to disk to retrieve the required data. In practice, however, this is not realistic. Careful consideration of the disk I/O subsystem is necessary. Optimizing performance of an IMS system on an EMC Symmetrix array involves a detailed evaluation of the I/O requirements of the proposed application or environment. A thorough understanding of the performance characteristics and best practices of the Symmetrix array, including the underlying storage components (disks, directors, and others), is also needed. Additionally, knowledge of complementary software products such as EMC SRDF, TimeFinder, Symmetrix Optimizer, and backup software, along with how utilizing these products affects the database, is important for maximizing performance. Ensuring optimal configuration for an IMS system requires a holistic approach to application, host, and storage configuration planning. Configuration considerations for host- and application-specific parameters are beyond the scope of this TechBook.

Storage system layer considerations

What is the best way to configure IMS on EMC Symmetrix storage? This is a frequently asked question from customers. However, before recommendations can be made, a detailed understanding of the configuration and requirements for the database, hosts, and storage environment is required. The principal goal for optimizing any layout on the Symmetrix array is to maximize the spread of I/O across the components of the array, reducing or eliminating any potential bottlenecks in the system. The following sections examine the trade-offs between optimizing storage performance and manageability for IMS. They also contain recommendations for laying out IMS on EMC Symmetrix arrays, as well as settings for storage-related IMS configuration settings.

For the most part, back-end considerations are eliminated when using Virtual Provisioning. The wide striping provided by the thin pools eliminates hot spots on the disks and thus allows utilization levels to be driven much higher. For more details on Virtual Provisioning see “Advanced Storage Provisioning” on page 65. The rest of this chapter is concerned with considerations that apply to thick implementations.

Traditional IMS layout recommendations

Traditional best practices for database layouts focus on avoiding contention between storage-related resources. Eliminating contention involves understanding how the database manages the data flow process and ensuring that concurrent or near-concurrent storage resource requests are separated onto different physical spindles. Many of these recommendations still have value in a Symmetrix environment. Before examining other storage-based optimizations, a brief digression to discuss these recommendations is made.

General principles for layout

There are some clear truths that have withstood the test of time, that are almost always true for any kind of database, and are thus worth mentioning here:

◆ Place OLDS and WADS datasets on separate spindles from the other datasets. Separate the OLDS and WADS for availability and performance. This isolates the sequential write/read activity for these members from other volumes with differing access characteristics.

◆ Place the OLDS datasets and the SLDS/RLDS datasets on separate spindles. This minimizes disk contention when writing to the SLDS and RLDS datasets while reading from the previous OLDS dataset.

◆ Separate INDEX components from their DATA components. Index reads that result in database reads are better serviced from different physical devices to minimize disk contention and head movement.

The key point of any recommendation for optimizing the layout is that it is critical to understand the type (sequential or random), size (long or short), and quantity (low, medium, or high) of I/O against the various databases and other elements. Without clearly understanding data elements and the access patterns expected against them, serious contention issues on the back-end directors or physical spindles may arise that can negatively impact IMS performance. Knowledge of the application, both database segments and access patterns, is critical to ensuring high performance in the database environment.

IMS layouts and replication considerations

It is prudent to organize IMS databases in such a way as to facilitate recovery. Since array replication techniques copy volumes at the physical disk level (as seen by the host), all databases should be created on a set of disks dedicated to the IMS system and should not be shared with other applications or other IMS systems.

In addition to isolating the database to be copied onto its own dedicated volumes, the IMS datasets should also be divided into two parts: the data structures and the recovery structures. The recovery structures comprise the RECON, WADS, SLDS, and RLDS datasets.

The data volumes hold the IMS databases. This division allows the two parts to be manipulated independently if a recovery becomes necessary.

RAID considerations

The following list defines RAID configurations that are available on the Symmetrix array:

◆ Unprotected—This configuration is not typically used in a Symmetrix environment for production volumes. BCVs and occasionally R2 devices (used as target devices for SRDF) can be configured as unprotected volumes.

◆ RAID 1—These are mirrored devices and are the most common RAID type in a Symmetrix array. Mirrored devices require writes to both physical spindles. However, intelligent algorithms in the Enginuity operating environment can use both copies of the data to satisfy read requests that are not already in the cache of the Symmetrix array. RAID 1 offers optimal availability and performance and is more costly than other RAID protection options since 50 percent of the disk space is used for protection purposes.

◆ RAID 10—These are striped and mirrored devices. In this configuration, Enginuity stripes data of a logical device across several physical drives. Four Symmetrix devices (each a fourth the size of the original mainframe device) appear as one mainframe device to the host, accessible through one channel address (more addresses can be provided with PAVs). Any four devices can be chosen to define a group, provided they are equally sized, of the same type (3380, 3390, and so on), and have the same mirror configuration. Striping occurs across this group of four devices with a striping unit of one cylinder.

◆ RAID 6—Protection schemes such as RAID 1 and RAID 5 can protect a system from a single physical drive failure within a mirrored pair or RAID group. RAID 6 supports the ability to rebuild data in the event that two drives fail within a RAID group. RAID 6 on the Symmetrix array comes in two configurations, (6+2) and (14+2).

EMC's implementation of RAID 6 calculates two types of parity, which is key in order for data to be reconstructed following a double drive failure. Horizontal parity is identical to RAID 5 parity, which is calculated from the data across all the disks. Diagonal parity is calculated on a diagonal subset of data members.

RAID 6 provides high data availability but could be subject to significant write performance impact due to horizontal and diagonal parity generation, as with any RAID 6 implementation. Therefore, RAID 6 is generally not recommended for write-intensive workloads.

Permanent sparing can also be used to further protect RAID 6 volumes. RAID 6 volumes also benefit from a new data structure called the Physical Consistency Table that allows them to occupy only a single mirror position.

◆ RAID 5—This protection scheme stripes parity information across all volumes in the RAID group. RAID 5 offers good performance and availability at a decreased cost. Data is striped using a stripe width of four tracks. RAID 5 is configured as either RAID 5 3+1 (75 percent usable) or RAID 5 7+1 (87.5 percent usable). Figure 47 shows the configuration for RAID 5 3+1. Figure 48 on page 190 shows how a random write in a RAID 5 environment is performed. (A short usable-capacity comparison of these protection schemes follows this list.)
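As a rough illustration of the usable-capacity trade-off among the protection schemes described above, the following sketch computes the data fraction for each configuration; the helper function is illustrative only and not part of any EMC tool.

# Usable capacity fraction for the RAID schemes described above (illustrative).
def usable_fraction(data_members, protection_members):
    # Fraction of raw capacity left for data after protection overhead.
    return data_members / (data_members + protection_members)

print(usable_fraction(1, 1))     # RAID 1 mirroring -> 0.50
print(usable_fraction(3, 1))     # RAID 5 3+1       -> 0.75
print(usable_fraction(7, 1))     # RAID 5 7+1       -> 0.875
print(usable_fraction(6, 2))     # RAID 6 6+2       -> 0.75
print(usable_fraction(14, 2))    # RAID 6 14+2      -> 0.875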

Figure 47 RAID 5 (3+1) layout detail

(Figure 47 shows a four-disk RAID 5 3+1 group with a stripe size of four tracks; the parity for each stripe rotates across Disks 1 through 4, so each disk holds a mix of data and parity extents.)

Figure 48 Anatomy of a RAID 5 random write

The following describes the process of a random write to a RAID 5 volume (a short sketch of the parity arithmetic follows the steps):

1. A random write is received from the host and is placed into a data slot in cache to be destaged to disk.

2. The write is destaged from cache to the physical spindle. When received, parity information is calculated in cache on the drive by reading the old data and using an exclusive-or calculation with the new data.

3. The new parity information is written back to Symmetrix cache.

4. The new parity information is written to the appropriate parity location on another physical spindle.
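The exclusive-or arithmetic behind steps 2 through 4 can be sketched in a few lines. This is a generic illustration of RAID 5 parity maintenance, not EMC microcode; the byte values are arbitrary.

# Read-modify-write parity update for one RAID 5 data block (generic illustration).
old_data   = bytes([0b10110010, 0b01100001])
new_data   = bytes([0b11110000, 0b00001111])
old_parity = bytes([0b00011011, 0b10101010])

# New parity = old parity XOR old data XOR new data, byte by byte.
new_parity = bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# Consistency check: the block just written can be rebuilt from the new parity
# and the XOR of the remaining members (old_parity XOR old_data here).
rebuilt = bytes(np ^ op ^ od for np, op, od in zip(new_parity, old_parity, old_data))
assert rebuilt == new_data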

The availability and performance requirements of the applications that utilize the Symmetrix array determine the appropriate level of RAID to configure in an environment. Combinations of RAID types are configurable in the Symmetrix array with some exceptions. For example, storage may be configured as a combination of RAID 1 and 3+1 RAID 5 devices. Combinations of 3+1 and 7+1 RAID 5 are allowed in the same Symmetrix array when using 5772 Enginuity or later.

Until recently, RAID 1 was the predominant choice for RAID protection in Symmetrix storage environments. RAID 1 provides maximum availability and enhanced performance over other available RAID protections. In addition, performance optimizations such as Symmetrix Optimizer, which reduces contention on the physical spindles by nondisruptively migrating hypervolumes, and Dynamic Mirror Service Policy, which improves read performance by optimizing reads from both mirrors, were only available with mirrored volumes. While mirrored storage is still the recommended choice for RAID configurations in the Symmetrix array, the addition of RAID 5 storage protection provides customers with a reliable, economical alternative for their production storage needs.

RAID 5 storage protection became available with the 5670+ release of the Enginuity operating environment. RAID 5 implements the standard data striping and rotating parity across all members of the RAID group (either 3+1 or 7+1). Additionally, Symmetrix Optimizer functionality is available with RAID 5 to reduce spindle contention. RAID 5 provides customers with a flexible data protection option for dealing with varying workloads and service-level requirements.

RAID recommendations

While EMC recommends RAID 1 or RAID 10 as the primary choice in RAID configuration for reasons of performance and availability, IMS systems can be deployed on RAID 5 protected disks for all but the highest I/O performance-intensive applications. Databases used for test, development, QA, or reporting are likely candidates for using RAID 5 protected volumes.

Other potential candidates for deployment on RAID 5 storage are DSS applications. In many DSS environments, read performance greatly outweighs the need for rapid writes. This is because DSS applications typically perform loads off-hours or infrequently (once per week or month). Read performance in the form of database user queries is significantly more important. Since there is no RAID penalty for RAID 5 read performance (only write performance), these types of applications are generally good candidates for RAID 5 storage deployments. Conversely, production OLTP applications typically require small random writes to the database, and therefore, are generally more suited to RAID 1 or RAID 10 storage.

An important consideration when deploying RAID 5 is disk failures. When a disk containing RAID 5 members fails, two primary issues arise: performance and data availability. Performance is affected when the RAID group operates in degraded mode because the missing data must be reconstructed using parity and data information from other members in the RAID group. Performance is also affected when the disk rebuild process is initiated after the failed drive is replaced or a hot spare disk is activated. Potential data loss is the other important consideration when using RAID 5. Multiple drive failures that cause the loss of multiple members of a single RAID group result in loss of data. While the probability of such an event is very small, the potential in a 7+1 RAID 5 environment is much higher than that for RAID 1. Consequently, the probability of data loss because of the loss of multiple members of a RAID 5 group should be carefully weighed against the benefits of using RAID 5.

The bottom line in choosing a RAID type is ensuring that the configuration meets the needs of the customer’s environment. Considerations include read/write performance, balancing the I/O across the spindles and back end of the Symmetrix array, tolerance for reduced application performance when a drive fails, and the consequences of losing data in the event of multiple disk failures. In general, EMC recommends RAID 10 for all types of IMS systems. RAID 10 stripes data across multiple devices and should be used for all larger MOD sizes.

RAID 5 configurations may be beneficial for many low I/O rate applications.

Flash drives

EMC has enhanced Symmetrix Enginuity versions 5773 and 5874 to integrate enterprise-class Flash drives directly into the DMX-4 and VMAX storage arrays. With this capability, EMC created a Tier 0 ultra-performance storage tier that transcends the limitations previously imposed by magnetic disk drives. By combining enterprise-class Flash drives optimized with EMC technology and advanced Symmetrix functionality, including Virtual LUN migration and FAST VP, organizations now have new tiering options previously unavailable from any vendor.

Flash drives provide maximum performance for latency-sensitive applications. Flash drives, also referred to as solid-state drives (SSD), contain no moving parts and appear as normal Fibre Channel drives to existing Symmetrix management tools, allowing administrators to manage Tier 0 storage without special processes or custom tools. Tier 0 Flash storage is ideally suited for applications with high transaction rates and those requiring the fastest possible retrieval and storage of data, such as currency exchange and electronic trading systems, and real-time data feed processing. A Symmetrix subsystem with Flash drives can deliver single-millisecond application response times and up to 30 times more IOPS than traditional 15,000 rpm Fibre Channel disk drives. Additionally, because there are no mechanical components, Flash drives require up to 98 percent less energy per IOPS than traditional disk drives.

Magnetic disk history

For years, the most demanding enterprise applications have been limited by the performance of magnetic disk media. Tier 1 performance in storage arrays has been unable to surpass the physical limitations of hard disk drives. With the addition of Flash drives to DMX-4 and VMAX arrays, EMC provides organizations with the ability to take advantage of ultra-high performance optimized for the highest-level enterprise requirements. Flash drives for Tier 0 requirements deliver unprecedented performance and response times for DMX-4 and VMAX arrays. Figure 49 on page 194 shows, using a logarithmic scale, how magnetic disk capacity and performance have grown over the last decade and a half as compared to the demand of applications.

Figure 49 Magnetic disk capacity and performance history

IMS workloads best suited for Flash drives

It is important to understand that any I/O request from the host is serviced by the Symmetrix array from its global cache. Under normal circumstances, a write request is always written to cache and incurs no delay due to physical disk access. In the case of a read request, if the requested data is in the global cache either because of a recent read or write, or due to sequential prefetch, the request is immediately serviced without disk I/O. A read serviced from cache without causing disk access is called a read hit. If the requested data is not in the global cache, the Symmetrix array must retrieve it from disk. This is referred to as a read miss. A read miss incurs increased I/O response time due to the innate mechanical delays of hard disk drives.

Since workloads with high Symmetrix cache read-hit rates are already serviced at memory access speed, deploying them on Flash drive technology may show little benefit. Workloads with low Symmetrix cache read-hit rates that exhibit random I/O patterns, with small I/O requests of up to 16 KB, and require high transaction throughput benefit most from the low latency of Flash drives.
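The sensitivity of average read response time to the read-hit rate can be shown with a simple weighted-average model. The service times below are assumed, round-number values chosen for illustration, not measurements of any particular Symmetrix configuration.

# Average read response time as a function of cache read-hit rate (illustrative).
CACHE_HIT_MS  = 0.5     # assumed read-hit service time from cache
HDD_MISS_MS   = 8.0     # assumed read-miss service time from a 15k rpm disk
FLASH_MISS_MS = 1.0     # assumed read-miss service time from a Flash drive

def avg_read_ms(hit_rate, miss_ms):
    # Weighted average of hits serviced from cache and misses serviced from media.
    return hit_rate * CACHE_HIT_MS + (1.0 - hit_rate) * miss_ms

for hit_rate in (0.95, 0.50):
    print(hit_rate, avg_read_ms(hit_rate, HDD_MISS_MS), avg_read_ms(hit_rate, FLASH_MISS_MS))
# At a 95 percent hit rate the drive type barely matters; at 50 percent the
# gap between HDD and Flash read-miss times dominates the average.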

Log and database file placement recommendations

As mentioned earlier, all application I/Os to the Symmetrix array pass through Symmetrix cache. Under normal operational circumstances, writes are always serviced immediately after being received in cache. Read response times, however, can differ considerably depending on whether the data is found in cache (read hit) or needs to be read from disk (read miss). In the case of a read miss, the physical characteristics of the disk being read from and how busy it is affect the response time significantly.

When a Symmetrix array identifies sequential read streams, it attempts to use prefetch to bring the requested data into cache ahead of time, increasing the cache read-hit rate for this data. Based on this, it can be concluded that the most obvious advantage for the Flash drives, by virtue of their extremely low latency, is to service small I/O (16 KB and less) random read miss workloads.

Database log file activity is mostly sequential writes and, because writes are serviced from cache, logs are not necessarily good candidates for placement on Flash drives and can be placed on HDD to leave more capacity for other files on the Flash drives.

Database datasets supporting an OLTP workload are good candidates for placement on Flash drives. Typically, an OLTP workload generates small I/O, random read/write activity to the database datasets. When a random OLTP application creates a lot of read-miss activity, Flash drives can improve the performance of the workload many times over HDD.

Secondary indices

IMS secondary indices tend to also create random read activity. Indices are commonly B-tree structures, and when they do not fit in the Symmetrix cache, they benefit from being positioned on Flash drives.

Other database file types can be considered for Flash drives according to their I/O profile and the requirement for high throughput and low random read response time.

Flash drives and storage tiering

While it may be feasible to have a complete IMS system deployed on high-cost Flash drives, it may not be economically justifiable if the system is very large. Moreover, parts of the database may be hot (high-intensity access) and other parts cold (rarely used). An ideal layout would be to have the hot data on Flash drives and the cold data on spinning media. If the access patterns are consistent, that is to say, certain parts of the database are always hot and certain parts are always cold, this tiered approach can be implemented very easily.

However, it is probable that data access patterns are more volatile, and they tend to change over time. In many cases, the age of the data is a key factor that determines what is hot and what is not. In this case, HALDB partitioning might be able to help.

HALDB partitioning was primarily implemented to overcome some capacity limitations in the OS and in IMS. But it also allows parts of a database to reside in different datasets based on certain predefined record values. The database partitions can be deployed on different storage tiers according to the partition access requirements.

The goal with this approach is to partition the database in such a way that the various partitions have known comparative access profiles. This is usually accomplished by partitioning by date range, since it is common that the newest data is the most frequently accessed, and the older the data the less it is retrieved.
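A date-range partitioning scheme of this kind amounts to a simple age-to-tier policy. The sketch below borrows the tier names from Figure 50 for illustration only; the age thresholds and function name are assumptions, not an EMC or IMS facility.

# Illustrative age-based assignment of HALDB partitions to storage tiers.
def tier_for_partition(age_in_days):
    # Thresholds are assumed values; adjust them to the application's retention profile.
    if age_in_days <= 90:
        return "Flash"          # current, most frequently accessed data
    elif age_in_days <= 365:
        return "FC 15K rpm"     # less current data, still performance sensitive
    elif age_in_days <= 3 * 365:
        return "FC 10K rpm"     # older data for fulfillment and batch processing
    else:
        return "SATA"           # oldest data, kept for archiving and compliance

# Four partitions of increasing age, as in the Figure 50 example:
for age in (30, 200, 800, 2000):
    print(age, tier_for_partition(age))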

With a tiered storage strategy in mind, customers should consider the advantage of using HALDB partitioning for OLTP applications. While partitioning allows the distribution of the data over multiple storage tiers, including Flash drives, it does not address the data movement between tiers. Solutions for data migration between storage tiers are available from some database applications, volume managers, or storage management software. An example of using database partitioning is shown in Figure 50.

Figure 50 Partitioning on tiered storage

(Figure 50 maps four partitions to tiers: Partition 1, current data demanding the highest performance and availability, on Flash; Partition 2, less current data demanding high performance and availability, on FC 15K rpm; Partition 3, older data for fulfillment and batch processing, less critical to the business, on FC 10K rpm; Partition 4, the oldest data, marked for archiving and compliance and not critical for running the business, on SATA.)

Data placement considerations

Placement of the data on the physical spindles can potentially have a significant impact on IMS database performance. Placement factors that affect database performance include:

◆ Hypervolume selection for specific database files on the physical spindles themselves

◆ The spread of database files across the spindles to minimize contention

◆ The placement of high I/O devices contiguously on the spindles to minimize head movement (seek time)

◆ The spread of files across the spindles and back-end directors to reduce component bottlenecks

Each of these factors is discussed in the following section.

Disk performance considerations

When rotational disks are used to deploy IMS databases, there are five main considerations for spindle performance, as shown in Figure 51 on page 199:

◆ Actuator positioning (seek time)—This is the time it takes the actuating mechanism to move the heads from their present position to a new position. This delay averages a few milliseconds and depends on the type of drive. For example, a 15k drive has an average seek time of approximately 3.5 ms for reads and 4 ms for writes. The full disk seek is 7.4 ms for reads and 7.9 ms for writes.

Note: Disk drive characteristics can be found at www.seagate.com.

◆ Rotational latency—This is because of the need for the platter to rotate underneath the head to correctly position the data that must be accessed. Rotational speeds for spindles in the Symmetrix array range from 7,200 rpm to 15,000 rpm. The average rotational latency is the time it takes for one-half of a revolution of the disk. In the case of a 15k drive, this would be about 2 ms.

◆ Interface speed—This is a measure of the transfer rate from the drive into the Symmetrix cache. It is important to ensure that the transfer rate between the drive and cache is greater than the drive’s rate to deliver data. Delay caused by this is typically a very small value, on the order of a fraction of a millisecond.

◆ Areal density—This is a measure of the number of bits of data that fit on a given surface area on the disk. The greater the density, the more data per second can be read from the disk as it passes under the disk head.

◆ Drive cache capacity and algorithms—Newer disk drives have improved read and write algorithms, as well as cache, to improve the transfer of data in and out of the drive and to make parity calculations for RAID 5.

Delay caused by the movement of the disk head across the platter surface is called seek time. The time associated with a data track rotating to the required location under the disk head is referred to as rotational latency or delay. The cache capacity on the drive, disk algorithms, interface speed, and the areal density (or zoned bit recording) combine to produce a disk transfer time. Therefore, the time it takes to complete an I/O (or disk latency) consists of the seek time, the rotational delay, and the transfer time.
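Putting the figures quoted above together gives a feel for where the time goes on a single random read from a 15k rpm drive; the transfer time is an assumed fraction-of-a-millisecond value used only to complete the illustration.

# Approximate service time for one random read miss on a 15k rpm drive (illustrative).
avg_seek_ms = 3.5                                  # average read seek quoted above
rotational_latency_ms = 0.5 * 60_000 / 15_000      # half a revolution at 15,000 rpm = 2 ms
transfer_ms = 0.3                                  # assumed transfer time for a small block

service_time_ms = avg_seek_ms + rotational_latency_ms + transfer_ms
print(service_time_ms)                             # roughly 5.8 ms, dominated by seek and rotation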

Data transfer times are typically on the order of fractions of a millisecond. Therefore, rotational delays and delays because of repositioning the actuator heads are the primary sources of latency on a physical spindle. Additionally, rotational speeds of disk drives have increased from top speeds of 7,200 rpm up to 15,000 rpm, but the resulting rotational latency still averages a few milliseconds. The seek time continues to be the largest source of latency in disk assemblies when using the entire disk.

Transfer delays are lengthened in the inner parts of the drive. More data can be read per second on the outer parts of the disk surface than on the inner regions. Therefore, performance is significantly improved on the outer parts of the disk. In many cases, performance improvements of more than 50 percent can be realized on the outer cylinders of a physical spindle. This performance differential typically leads customers to place high I/O objects on the outer portions of the drive.

Performance differences across the drives inside the Symmetrix array are significantly smaller than the stand-alone disk characteristics would attest. Enginuity operating environment algorithms, particularly the algorithms that optimize ordering of I/O as the disk heads scan across the disk, greatly reduce differences in performance across the drive. Although this smoothing of disk latency may actually increase the delay of a particular I/O, overall performance characteristics of I/Os to hypervolumes across the face of the spindle will be more uniform.

Figure 51 Disk performance factors

Hypervolume contention

Disk drives are capable of receiving only a limited number of read or write requests before performance degrades. While disk improvements and cache, both on the physical drives and in disk arrays, have improved disk read and write performance, the physical devices can still become a critical bottleneck in IMS database environments. Eliminating contention on the physical spindles is a key factor in ensuring maximum IMS performance on Symmetrix arrays.

Contention can occur on a physical spindle when I/O (read or write) to one or more hypervolumes exceeds the I/O capacity of the disk. While contention on a physical spindle is undesirable, migrating high I/O data onto other devices with lower utilization can rectify this type of contention. This can be accomplished using a number of methods, depending on the type of contention that is found. For example, when two or more hypervolumes on the same physical spindle have excessive I/O, contention may be eliminated by migrating one of the hypervolumes to another, lower utilized physical spindle. One method of reducing hypervolume contention is careful layout of the data across the physical spindles on the back end of the Symmetrix system.

Hypervolume contention can be found in a number of ways. IMS-specific data collection and analysis tools such as the IMS monitor, as well as host server tools, can identify areas of reduced I/O performance in the database. Additionally, EMC tools such as Performance Manager can help to identify performance bottlenecks in the Symmetrix array. Establishing baselines for the system and proactive monitoring are essential to maintain an efficient, high-performance database.

Commonly, tuning database performance on the Symmetrix system is performed post-implementation. This is unfortunate because, with a small amount of up-front effort and detailed planning, significant I/O contention issues could be minimized or eliminated in a new implementation. While detailed I/O patterns of a database environment are not always well known, particularly in the case of a new system implementation, careful layout consideration of a database on the back end of the Symmetrix system can save time and future effort in trying to identify and eliminate I/O contention on the disk drives.

Maximizing data spread across the back end

Data on the Symmetrix should be spread across the back-end directors and physical spindles before locating data on the same physical drives. By spreading the I/O across the back end of the Symmetrix system, I/O bottlenecks in any one array component can be minimized or eliminated.

Considering recent improvements in the Symmetrix component technologies, such as CPU performance on the directors and the Direct Matrix architecture, the most common bottleneck in new implementations is with contention on the physical spindles and the back-end directors. To reduce these contention issues, a detailed examination of the I/O requirements for each application that will utilize the Symmetrix storage should be made. From this analysis, a detailed layout that balances the anticipated I/O requirements across both back-end directors and physical spindles should be made.

Before data is laid out on the back end of the Symmetrix system, it is helpful to understand the I/O requirements for each of the file systems or volumes that are being laid out. Many methods for optimizing layout on the back-end directors and spindles are available. One time-consuming method involves creating a map of the hypervolumes on physical storage, including hypervolume presentation by the director and physical spindle, based on information available in EMC Ionix® ControlCenter®. This involves documenting the environment using a tool such as Excel with each hypervolume marked on its physical spindle and disk director. Using this map of the back end and volume information for the database elements, preferably categorized by I/O requirement (high/medium/low, or by anticipated reads and writes), the physical data elements and I/Os can be evenly spread across the directors and physical spindles.

This type of layout can be extremely complex and time consuming. Additional complexity is added when RAID 5 hypers are added to the configuration. Since each hypervolume is really placed on either four or eight physical spindles in RAID 5 environments, trying to uniquely map out each data file or database element is beyond what most customers feel is valuable. In these cases, one alternative is to rank each of the database elements or volumes in terms of anticipated I/O. Once ranked, each element may be assigned a hypervolume in order on the back end. Since BIN file creation tools almost always spread contiguous hypervolume numbers across different elements of the back end, this method of assigning the ranked database elements usually provides a reasonable spread of I/O across the spindles and back-end directors in the Symmetrix system. Combined with Symmetrix Optimizer, this method of spreading the I/O is normally effective in maximizing the spread of I/O across Symmetrix components.
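The ranking-and-assignment approach can be expressed as a few lines of pseudologic. The element names, I/O figures, and device numbers below are hypothetical, and the assumption that consecutive hypervolume numbers land on different back-end components is taken from the paragraph above rather than from any BIN file tool.

# Illustrative assignment of ranked database elements to hypervolumes.
anticipated_io = {        # relative anticipated I/O per element (assumed figures)
    "OLDS": 900,
    "WADS": 850,
    "CustomerDB": 700,
    "OrdersIndex": 400,
    "HistoryDB": 100,
}
hypervolumes = ["0100", "0101", "0102", "0103", "0104"]   # hypothetical device numbers

# Rank by anticipated I/O and hand out hypervolumes in order; if consecutive
# device numbers fall on different spindles and directors, the busiest
# elements land on different back-end components.
ranked = sorted(anticipated_io, key=anticipated_io.get, reverse=True)
for element, device in zip(ranked, hypervolumes):
    print(element, "->", device)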

Minimizing disk head movement

Perhaps the key performance consideration controllable by a customer when laying out a database on the Symmetrix array is minimizing head movement on the physical spindles. Head movement is minimized by positioning high I/O hypervolumes contiguously on the physical spindles. Disk latency that is caused by interface or rotational speeds cannot be controlled by layout considerations. The only disk drive performance considerations that can be controlled are the placement of data onto specific, higher performing areas of the drive (discussed in a previous section), and the reduction of actuator movement by trying to place high I/O objects in adjacent hypervolumes on the physical spindles.

One method, described in the previous section, is to rank volumes by anticipated I/O requirements. Utilizing a documented map of the back-end spindles, high I/O objects can be placed on the physical spindles, grouping the highest I/O objects together. Recommendations differ as to whether placing the highest I/O objects together on the outer parts of the spindle (that is, the highest performing parts of a physical spindle) or in the center of a spindle is optimal. Since there is no consensus on this, the historical recommendation of putting high I/O objects together on the outer part of the spindle is still a reasonable suggestion. Placing these high I/O objects together on the outer parts of the spindle should help to reduce disk actuator movement when doing reads and writes to each hypervolume on the spindle, thereby improving a controllable parameter in any data layout exercise.

Other layout considerations

Besides the layout considerations described in previous sections, a few additional factors may be important to DBAs or storage administrators who want to optimize database performance. Some additional configuration factors to consider include:

◆ Implementing SRDF/S for the database

◆ Creating database backups using TimeFinder/Mirror or TimeFinder/Clone

◆ Creating database backups using TimeFinder/Snap

These additional layout considerations are discussed in the next sections.

Database layout considerations with SRDF/S

There are two primary concerns that must be considered when SRDF/S is implemented. The first is the inherent latency added for each write to the database. Latency occurs because each write must first be written to both the local and remote Symmetrix caches before the write can be acknowledged to the host. This latency must always be considered as a part of any SRDF/S implementation. Because the speed of light cannot be circumvented, there is little that can be done to mitigate this latency.

The second consideration is more amenable to DBA mitigation. Each hypervolume configured in the Symmetrix system is only allowed to send a single I/O across the SRDF link. Performance degradation results when multiple I/Os are written to a single hypervolume, since subsequent writes must wait for predecessors to complete. Striping at the host level can be particularly helpful in these situations. Utilizing a smaller stripe size (32 KB to 128 KB) ensures that larger writes will be spread across multiple hypervolumes; with a 64 KB stripe, for example, a 512 KB write can be spread across up to eight hypervolumes. This reduces the chances for SRDF to serialize writes across the link.

TimeFinder targets and sharing spindles

Database cloning is useful when DBAs wish to create a backup or images of a database for other purposes. A common question when laying out a database is whether BCVs should share the same physical spindles as the production volumes or whether the BCVs should be isolated on separate physical disks. There are pros and cons to each of the solutions. The optimal solution generally depends on the anticipated workload.

The primary benefit of spreading BCVs across all physical spindles is performance. By spreading I/Os across more spindles, there is a reduced chance of developing bottlenecks on the physical disks. Workloads that utilize BCVs, such as backups and reporting databases, may generate high I/O rates. Spreading this workload across more physical spindles may significantly improve performance in these environments.

Drawbacks to spreading BCVs across all spindles in the Symmetrix system are that the resynchronization process may cause spindle contention and BCV workloads may negatively impact production database performance. When resynchronizing the BCVs, data is read from the production hypervolumes and copied into cache. From there, it is destaged to the BCVs. When the physical disks contain both production volumes and BCVs, the synchronization rates can be greatly reduced because of increased seek times, due to the head reading one part of the disk and writing to another. Sharing the spindles increases the chance that contention may arise, decreasing database performance.

Determining the appropriate location for BCVs, sharing the same physical spindles or isolated on their own disks, depends on customer preference and workload. In general, it is recommended that the BCVs share the same physical spindles. However, in cases where the BCV synchronization and utilization negatively impact applications (such as databases that run 24/7 with high I/O requirements), it may be beneficial for the BCVs to be isolated on their own physical disks.

Database clones using TimeFinder/Snap

TimeFinder/Snap provides many of the benefits of full-volume replication techniques, such as TimeFinder/Mirror or TimeFinder/Clone, but at greatly reduced costs. However, two performance considerations must be taken into account when using TimeFinder/Snap to make database clones for backups or other business continuity functions. The first of these penalties, Copy On First Write (COFW), derives from the need for data to be copied from the production hypervolumes to the SAVE area as data is changed. This penalty affects only writes to the source volumes. With Enginuity 5772 and later, the COFW penalty is mostly eliminated since the copying of the track is asynchronous to the write activity. The second potential penalty happens when snaps are accessed. Tracks that have not been copied to the save area have to be read from the source volumes. This additional load on the source volumes can have an impact on heavily loaded systems.

Extended address volumes

Enginuity 5874 introduced the ability to create and utilize hypervolumes that can be up to 262,668 cylinders in size. These large volumes are supported as IBM 3390 format, with a capacity of up to 223 GB, and are configured as 3390 Model As. This large volume size matches the capacity announced in z/OS 1.10 for 3390 Extended Address Volumes (EAV). IBM defines an EAV to be any volume that is greater than 65,520 cylinders, hence any subset volume size greater than 65,520 cylinders and less than or equal to 262,668 cylinders is an EAV. An EAV is currently limited to 262,668 cylinders but has a defined architectural limit of 268,434,453 cylinders.
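The 223 GB figure follows directly from standard 3390 geometry, assuming 15 tracks per cylinder and 56,664 bytes per track; actual usable capacity depends on the access method and blocking.

# Approximate capacity of a 262,668-cylinder 3390 Model A volume (illustrative).
cylinders = 262_668
tracks_per_cylinder = 15      # standard 3390 geometry
bytes_per_track = 56_664      # standard 3390 track capacity

capacity_bytes = cylinders * tracks_per_cylinder * bytes_per_track
print(capacity_bytes / 10**9)  # roughly 223 GB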

With Enginuity 5874, large volumes may be configured and used in a similar manner to the old, regular devices that users are already familiar with. While large volumes can co-exist alongside the older volumes, there are limitations to their use imposed by certain access methods (operating system restrictions) and other independent vendors’ software.

The collective reference to the space above 65,520 cylinders is known as the extended addressing space (EAS), while the space below 65,520 cylinders is referred to as the base addressing space. Today, the major exploiters of the EAS portions of EAVs are applications using any form of VSAM (KSDS, RRDS, ESDS, and linear). This covers DB2, IMS, CICS, zFS, and NFS. A few restrictions are notable with z/OS 1.10 with respect to VSAM datasets not supported in the EAS; they are: catalogs, paging spaces, VVDS datasets, and those with KEYRANGE or IMBED attributes.

Large volumes offer the ability to consolidate many smaller volumes onto a single device address. This, in turn, goes a long way toward solving the sprawling device configurations that have placed users at the Logical Partition (LPAR) device limit of 65,280 devices. For example, a single 262,668-cylinder EAV provides roughly the capacity of 78 3390-3 volumes (3,339 cylinders each) behind one device address. The two main culprits of device sprawl in many environments are the excessive and prolonged use of very small volume sizes (3390-3, 3390-9, and so on), and the significant and growing business continuity requirements as business applications proliferate.

Not only are large volumes an important component of a device consolidation strategy, they are quickly being recognized as an even more vital component in providing the very high z/OS system capacities required by the newer class of applications running on ever more powerful processors. In short, even with only a few addresses configured at these large capacities, the result is a very large system capacity.

One of the stated goals of large volumes, and of the accompanying reduction in the number of device addresses required to support them, is the overall simplification of the storage system and, hence, the reduction of storage management costs. This has a direct impact on storage personnel productivity and is a strong factor in their adoption.

Last, but certainly not least, is the reduction of processor resources associated with multi-volume usage and processing, since datasets and clusters can now be wholly contained within single extents that are large enough, relegating multi-volume extents and their drawbacks to a matter of user preference and choice rather than necessity. The reduction in the frequency of OPEN/CLOSE/End-of-Volume processing helps expedite many currently long-running batch jobs.

At this time, it is not recommended to put high-demand production IMS systems on very large EAV volumes, since there may be insufficient spindles to service the I/O workload. EAV volumes may be applicable for IMS systems in test or QA environments, where performance criteria may not be as important as in production applications.

Appendix A: References

This appendix lists references and additional related material.

The information in this document was compiled using the following sources:

From EMC Corporation:

◆ EMC TimeFinder/Clone Mainframe SNAP Facility Product Guide

◆ EMC TimeFinder/Mirror for z/OS Product Guide

◆ TimeFinder Utility for z/OS Product Guide

◆ EMC Symmetrix Remote Data Facility (SRDF) Product Guide

◆ EMC Consistency Group for z/OS Product Guide

From IBM Corporation:

◆ IMS Version 12 Database Administration (SC19-3013)

◆ IMS Version 12 System Administration (SC19-3020)

◆ IMS Version 12 System Utilities (SC19-3023)

◆ IMS Version 12 Operations and Automation (SC19-3018)

Appendix B: Sample Skeleton JCL

This appendix provides a modified image copy skeleton JCL sample.

Modified image copy skeleton sample JCL

User skeleton JCL for split:

%DELETE (%BCVCOPY EQ 'YES')
//IC%STPNO EXEC PGM=DFSUDMP0,PARM='DBRC=Y'
%ENDDEL
%DELETE (%BCVCOPY EQ 'NO')
//IC%STPNO EXEC PGM=DFSUDMP0,PARM='DBRC=N'
%ENDDEL
//STEPLIB  DD DSN=IMS9DBTM.RESLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//IMS      DD DSN=IMS9DBTM.DBDLIB,DISP=SHR
%SELECT DBDS((%DBNAME,%DBDDN))
%DELETE (%DBTYPE NE 'FP' || %BCVCOPY EQ 'NO')
//%DBADDN DD DSN=EMC.%DBDSN,DISP=SHR      /* FP BCV */
%ENDDEL
%DELETE (%DBTYPE EQ 'FP' || %BCVCOPY EQ 'NO')
//%DBDDN DD DSN=EMC.%DBDSN,DISP=SHR       /* FF BCV */
%ENDDEL
%DELETE (%DBTYPE NE 'FP' || %BCVCOPY EQ 'YES')
//%DBADDN DD DSN=%DBDSN,DISP=SHR          /* FP NOBCV */
%ENDDEL
%DELETE (%DBTYPE EQ 'FP' || %BCVCOPY EQ 'YES')
//%DBDDN DD DSN=%DBDSN,DISP=SHR           /* FF NOBCV */
%ENDDEL
%ENDSEL
//DATAOUT1 DD DISP=OLD,DSN=%ICDSN1
//SYSIN    DD *
%ICSYSIN
/*
%DELETE (%BCVCOPY EQ 'NO')
//ICVL%STPNO EXEC PGM=TFIMSICV,COND=(4,LT)
//STEPLIB  DD DSN=EMC.LOADLIB,DISP=SHR
//INFILE   DD DSN=%ICDSN1,DISP=OLD,
//            VOL=REF=*.IC%STPNO.DATAOUT1
//VOLUMES  DD DSN=&&VOLS,DISP=(MOD,PASS),UNIT=SYSDA,
//            DCB=(LRECL=6,BLKSIZE=6,RECFM=FB),
//            SPACE=(TRK,(5,5),RLSE,,ROUND)
/*
//EDIT%STPNO EXEC PGM=IKJEFT01,DYNAMNBR=256,PARM=TFIMSNIC
//SYSTSIN  DD *
//SYSTSPRT DD SYSOUT=*
//SYSPROC  DD DISP=SHR,DSN=SYS2.CLIST
//TSTAMP   DD DSN=EMC.BCVCOPY.TSTAMP,DISP=SHR
//VOLUMES  DD DSN=&&VOLS,DISP=(OLD,DELETE)
//NOTCMD   DD *
%ENDDEL
%SELECT DBDS((%DBNAME,%DBDDN))
%DELETE (%DBTYPE EQ 'FP' | %BCVCOPY EQ 'NO')
NOTIFY.IC DBD(%DBNAME) DDN(%DBDDN) -
%ENDDEL
%DELETE (%DBTYPE NE 'FP' | %BCVCOPY EQ 'NO')
NOTIFY.IC DBD(%DBNAME) AREA(%DBDDN) -
%ENDDEL
%DELETE (%BCVCOPY EQ 'NO')
ICDSN(%ICDSN1) -
RUNTIME(DUMMYRUNTIME) -
VOLLIST(DUMMYVOLLIST)
%ENDDEL
%ENDSEL
%DELETE (%BCVCOPY EQ 'NO')
//NOTIFY   DD DSN=&&NTFY,DISP=(MOD,PASS),UNIT=SYSDA,
//            DCB=(LRECL=80,BLKSIZE=80,RECFM=FB),
//            SPACE=(TRK,(1,1),RLSE,,ROUND)
//NTFY%STPNO EXEC PGM=DSPURX00,COND=(4,LT)
//STEPLIB  DD DISP=SHR,DSN=IMS9DBTM.RESLIB
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//IMS      DD DISP=SHR,DSN=IMS9DBTM.DBDLIB
//JCLOUT   DD SYSOUT=*
//SYSIN    DD DISP=(OLD,DELETE),DSN=&&NTFY
%ENDDEL

User skeleton JCL for SNAP:

%SELECT DBDS((%DBNAME,%DDNAME))
%DELETE (%DBTYPE NE 'FP')
//%DBADDN  EXEC PGM=EMCSNAP
%ENDDEL
%DELETE (%DBTYPE EQ 'FP')
//%DBDDN   EXEC PGM=EMCSNAP
%ENDDEL
//SYSUDUMP DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//EMCQCAPI DD SYSOUT=*
//EMCQCFMT DD SYSOUT=*
//QCOUTPUT DD SYSOUT=*
//QCINPUT  DD *
SNAP DATASET(SOURCE('%DBDSN') -
  TARGET('EMC.%DBDSN') -
  VOLUME(%VOLLIST) -
  REPLACE(Y) -
  REUSE(Y) -
  FORCE(N) -
  HOSTCOPYMODE(SHR) -
  DEBUG(OFF) -
  TRACE(OFF))
%ENDSEL

Glossary

This glossary contains terms related to disk storage subsystems. Many of these terms are used in this manual.

A

actuator: A set of access arms and their attached read/write heads, which move as an independent component within a head and disk assembly (HDA).

adapter: Card that provides the physical interface between the director and disk devices (SCSI adapter), director and parallel channels (Bus & Tag adapter), director and serial channels (Serial adapter).

adaptive copy: A mode of SRDF operation that transmits changed tracks asynchronously from the source device to the target device without regard to order or consistency of the data.

alternate track: A track designated to contain data in place of a defective primary track. See also "primary track."

C

cache: Random access electronic storage used to retain frequently used data for faster access by the channel.

cache slot: Unit of cache equivalent to one track.

channel director: The component in the Symmetrix subsystem that interfaces between the host channels and data storage. It transfers data between the channel and cache.

controller ID: Controller identification number of the director the disks are channeled to, for EREP usage. There is only one controller ID for a Symmetrix system.

CKD: Count Key Data, a data recording format employing self-defining record formats in which each record is represented by a count area that identifies the record and specifies its format, an optional key area that may be used to identify the data area contents, and a data area that contains the user data for the record. CKD can also refer to a set of channel commands that are accepted by a device that employs the CKD recording format.

D

DASD: Direct access storage device, a device that provides nonvolatile storage of computer data and random access to that data.

data availability: Access to any and all user data by the application.

delayed fast write: There is no room in cache for the data presented by the write operation.

destage: The asynchronous write of new or updated data from cache to the disk device.

device: A uniquely addressable part of the Symmetrix subsystem that consists of a set of access arms, the associated disk surfaces, and the electronic circuitry required to locate, read, and write data. See also "volume."

device address: The hexadecimal value that uniquely defines a physical I/O device on a channel path in an MVS environment. See also "unit address."

device number: The value that logically identifies a disk device in a string.

diagnostics: System-level tests or firmware designed to inspect, detect, and correct failing components. These tests are comprehensive and self-invoking.

director: The component in the Symmetrix subsystem that allows the Symmetrix system to transfer data between the host channels and disk devices. See also "channel director."

disk director: The component in the Symmetrix subsystem that interfaces between cache and the disk devices.

dual-initiator: A Symmetrix feature that automatically creates a backup data path to the disk devices serviced directly by a disk director, if that disk director or the disk management hardware for those devices fails.

dynamic sparing: A Symmetrix feature that automatically transfers data from a failing disk device to an available spare disk device without affecting data availability. This feature supports all non-mirrored devices in the Symmetrix subsystem.

E

ESCON: Enterprise Systems Connection, a set of IBM and vendor products that connect mainframe computers with each other and with attached storage, locally attached workstations, and other devices using optical fiber technology and dynamically modifiable switches called ESCON directors. See also "ESCON director."

ESCON director: Device that provides a dynamic switching function and extended link path lengths (with XDF capability) when attaching an ESCON channel to a Symmetrix serial channel interface.

F

fast write: In a Symmetrix system, a write operation at cache speed that does not require immediate transfer of data to disk. The data is written directly to cache and is available for later destaging. This is also called a DASD fast write.

FBA: Fixed Block Architecture, disk device data storage format using fixed-size data blocks.

frame: Data packet format in an ESCON environment. See also "ESCON."

FRU: Field Replaceable Unit, a component that is replaced or added by service personnel as a single entity.

G

gatekeeper: A small volume on a Symmetrix storage subsystem used to pass commands from a host to the Symmetrix storage subsystem. Gatekeeper devices are configured on standard Symmetrix disks.

GB: Gigabyte, 10^9 bytes.

H

head and disk assembly: A field-replaceable unit in the Symmetrix subsystem containing the disk and actuator.

home address: The first field on a CKD track that identifies the track and defines its operational status. The home address is written after the index point on each track.

hypervolume extension: The ability to define more than one logical volume on a single physical disk device, making use of its full formatted capacity. These logical volumes are user-selectable in size. The minimum volume size is one cylinder, and the maximum size depends on the disk device capacity and the Enginuity level that is in use.

I

ID: Identifier, a sequence of bits or characters that identifies a program, device, controller, or system.

IML: Initial microcode program load.

index marker: Indicates the physical beginning and end of a track.

index point: The reference point on a disk surface that determines the start of a track.

INLINES: An EMC-internal method of interrogating and configuring a Symmetrix controller using the Symmetrix service processor.

I/O device: An addressable input/output unit, such as a disk device.

K

K: Kilobyte, 1024 bytes.

L

least recently used algorithm (LRU): The algorithm used to identify and make available cache space by removing the least recently used data.

logical volume: A user-defined storage device. In the Model 5200, the user can define a physical disk device as one or two logical volumes.

long miss: Requested data is not in cache and is not in the process of being fetched.

longitudinal redundancy code (LRC): Exclusive OR (XOR) of the accumulated bytes in the data record.

M

MB: Megabyte, 10^6 bytes.

mirrored pair: A logical volume with all data recorded twice, once on each of two different physical devices.

mirroring: The Symmetrix array maintains two identical copies of a designated volume on separate disks. Each volume automatically updates during a write operation. If one disk device fails, the Symmetrix system automatically uses the other disk device.

P

physical ID: Physical identification number of the Symmetrix director for EREP usage. This value automatically increments by one for each director installed in the Symmetrix system. This number must be unique in the mainframe system. It should be an even number. This number is referred to as the SCU_ID.

primary track: The original track on which data is stored. See also "alternate track."

promotion: The process of moving data from a track on the disk device to a cache slot.

R

read hit: Data requested by the read operation is in cache.

read miss: Data requested by the read operation is not in cache.

record zero: The first record after the home address.

S

scrubbing: The process of reading, checking the error correction bits, and writing corrected data back to the source.

SCSI adapter: Card in the Symmetrix subsystem that provides the physical interface between the disk director and the disk devices.

short miss: Requested data is not in cache, but is in the process of being fetched.

SSID: For 3990 storage control emulations, this value identifies the physical components of a logical DASD subsystem. The SSID must be a unique number in the host system. It should be an even number and start on a zero boundary.

stage: The process of writing data from a disk device to cache.

storage control unit: The component in the Symmetrix subsystem that connects the Symmetrix subsystem to the host channels. It performs channel commands and communicates with the disk directors and cache. See also "channel director."

string: A series of connected disk devices sharing the same disk director.

U

UCB: The Unit Control Block is a memory structure in z/OS for the unit address that uniquely defines a physical I/O device.

unit address: The hexadecimal value that uniquely defines a physical I/O device on a channel path in an MVS environment. See also "device address."

V

volume: A general term referring to a storage device. In the Symmetrix subsystem, a volume corresponds to a single disk device.

W

write hit: There is room in cache for the data presented by the write operation.

write miss: There is no room in cache for the data presented by the write operation.

Index

Numerics
3390  70

A
ACBGEN  89
Adaptive Copy  170
ADRDSSU  84
AutoSwap  37, 43, 49, 51

B
BCV  27, 54, 91, 188
BMP  21

C
Cache  190, 194
Cascaded SRDF  38
CF  181
CF catalog  116
CHANGE.DBDS  124, 131, 133, 134
CKD  39, 45
COFW  204
Concurrent Copy  33
ConGroup  37, 45, 46, 101, 104, 105, 159, 161, 163, 164, 165, 167
Consistency Group  40
Control region  19
Coupling facility  180
Cross-system communication  35

D
DASD  32, 43
Data compression  150
DBCTL  20, 21
DBD  89
DBDGEN  89
DBRC  20, 22, 111, 113
DCCTL  20
Delta set  168, 169, 170
DFDSS  124
DL/I  20
DSS  191
Dynamic Cache Partitioning  37
Dynamic Mirror Service Policy  190

E
EAV  33, 72
ECA  45, 46, 55
EMC Compatible Flash  37
EMC Consistency Group  162
EMCSCF  35, 36, 37, 42
Enginuity  31, 32, 35, 40, 41, 45, 46, 62, 69, 175, 176, 188, 190, 191, 193, 204, 205
Enginuity Consistency Assist  55
ESCON  31, 32, 38, 40
Extended address volumes
    EAV  204, 205, 206
EzSM  58

F
FAST  59, 62
FAST VP  66, 81, 82, 83, 84
FBA  39, 45
Fibre Channel  32, 40
FICON  32, 40
Flash drives  193, 194, 195
FlashCopy  33, 35, 37

G
GDDR  42, 178
GENJCL.IC  111, 113
GENJCL.RECOV  109, 131, 133, 134
GigE  32, 38, 40
GNS  35, 36
Group Name Service  35

H
HMIGRATE  84
HRECALL  84
HSM  84

I
IBM 2105  31
IBM 2107  31
IBM 3990  31
ICF Catalog  54
ICF catalog  54, 92, 103, 111, 128
IFP  21
IMS
    Architecture  19
    Data sets  21
    Data sharing  23
    Overview  18
IOSLEVEL  46
IPL  51
IRLM  21

J
Java  18, 21
JBP  21
JMP  21

M
Mainframe Enabler  70
Migration  75
MPR  21
MSC  37, 42, 170
Multiple Allegiance  31
Multi-Session Consistency  42, 170

N
NOTIFY.IC  109, 111, 113
NOTIFY.RECOV  124, 131, 133, 134
NOTIFY.UIC  130, 132, 134

O
OLDS  21, 131, 133, 171
OLTP  32, 195
Over-provisioning  68

P
Partitioning  196
PAV  31, 188
PERSIST  74
PPRC  33
PREALLOCATE  74

R
RACF  54, 55
RAID 1  32, 71, 188, 190
RAID 10  32, 71, 188
RAID 5  33, 71, 189, 190, 191
RAID 6  33, 71, 188
RCVTIME  124, 131, 134
RECON  20, 22, 89, 109, 123, 131, 133
RECOV  124, 131, 133, 134
Recovery point objective  142
    Definition  143
Recovery time objective  139, 142
    Definition  143
REORG  82
ResourcePak Base  35, 36, 40
RLDS  22
Rolling disaster  158
RPO  142, 143, 144, 151, 153, 169
RTO  139, 142, 143, 144, 151
RUNTIME  134

S
SAVDEV  27
SLDS  22
SMC  59, 60
SMS  71, 81, 84
SPA  60
SRDF  32, 33, 38, 39, 46, 48, 49, 75, 101, 104, 160, 162, 170, 181, 188
SRDF Automated Replication  172
SRDF Consistency Group  43, 44, 46, 49, 156, 158, 160, 162
SRDF Host Component  35, 38, 39, 40, 46
SRDF/A  35, 36, 42, 43, 168, 169, 170
SRDF/A Monitor  36
SRDF/A Multi-Session Consistency  37
SRDF/AR  35, 42
SRDF/Asynchronous  38, 40
SRDF/Automated Replication  38, 55
SRDF/EDP  38, 175
SRDF/Star  38, 40, 43, 178
SRDF/Synchronous  40
SRDF-ECA  45
Symmetrix Control Facility  74
Symmetrix Optimizer  190
Symmetrix Priority Control  37

T
Table  189
Thin device  68, 69, 70, 71, 74, 75
Thin Reclaim Utility  74
Time windows  82
TimeFinder  33, 35, 88, 98, 109, 120
TimeFinder family of products for z/OS
    TimeFinder/CG  55
    TimeFinder/Clone for z/OS  27, 52, 55
    TimeFinder/Mirror for z/OS  54
    TimeFinder Utility  54
TimeFinder utility  91, 111, 116, 117, 125, 126
TimeFinder/Clone  89, 92, 124, 131, 134
TimeFinder/Clone  38
TimeFinder/Consistency Group  38
TimeFinder/Mirror  27, 52, 54, 55, 89, 90, 101, 102, 103, 106, 114, 115, 117, 124, 125, 130
TimeFinder/Snap  27, 38, 55

U
Unisphere  70, 81, 82
UNIX  43
USEDBDS  124

V
VDEV  27, 38, 53
Virtual Provisioning  66, 68, 69, 70, 71, 75
VSAM  205
VSO  180, 181

W
WADS  22, 131, 133, 162, 163, 164, 165, 171

X
XML  18
XRC  33

Z
zHPF  32, 33
