Dynamic Root Disk: Quick Start & Best Practices

August 2010

Technical white paper

Table of Contents

Introduction
Quick Start
    Installing DRD
Creating a Clone
    Overview
    Important Considerations
    Choosing a Target Disk
    Checking the Size of the Target Disk
    Performing the Clone Operation
    Viewing and Accessing the Clone
    Modifying the Clone
    Activating & Booting the Clone
    After the Clone is Booted
Best Practices
    Best Practice 1 (BP1): Basic Maintenance - Patching
        BP1: Overview of steps
        BP1: Additional Considerations
    Best Practice 2 (BP2): Basic Maintenance – Patching & Security Bulletin Management with DRD and Software Assistant (SWA)
        BP2: Overview of steps
        BP2: Additional Considerations
    Best Practice 3 (BP3): Basic Maintenance – Updating Within Versions of HP-UX 11i v3
        BP3: Overview of Steps
        BP3: Additional Considerations
    Best Practice 4 (BP4): Basic Maintenance – Using DRD to Assist an Update from HP-UX 11i v2 to HP-UX 11i v3
        BP4: Overview of Steps
        BP4: Additional Considerations
    Best Practice 5 (BP5): Basic Recovery
        BP5: Overview of Steps
        BP5: Additional Considerations
    Best Practice 6 (BP6): Basic Provisioning
        BP6: Overview of Steps
        BP6: Additional Considerations
Special Considerations for All Best Practices
    Specific Details Regarding Clone Creation
Mirroring
Using DRD to Expand LVM Logical Volumes and File Systems
    Extending File Systems Other Than /stand or /
    Extending the /stand or / File System
Viewing Log Files
    Maintaining the Integrity of System Logs
DRD sync
DRD Activate and Deactivate Commands
Delayed Activation/Boot of the Clone
For More Information
Call to Action


Introduction

Dynamic Root Disk (DRD) provides customers the ability to clone an HP-UX system image to an inactive disk, and then:

• Perform system maintenance on the clone while their HP-UX 11i system is online.
• Quickly reboot during off-hours—after the desired changes have been made—significantly reducing system downtime.
• Utilize the clone for system recovery, if needed.
• Re-host the clone on another system for testing or provisioning purposes—only on VMs or blades running HP-UX 11i v3 with LVM root volume groups, or on VMs running HP-UX 11i v2 with LVM root volume groups. See the Exploring DRD Rehosting whitepaper for more details: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01920363/c01920363.pdf.

• Perform an OE Update on the clone from an older version of HP-UX 11i v3 to HP-UX 11i v3 update 4 or later.

• Automatically synchronize the active image and the clone, eliminating the need to manually update files on the clone.

This white paper provides an overview of Dynamic Root Disk (DRD) and is divided into 3 major parts:

Quick Start – this section provides an overview of how to install DRD and how to create a clone.

Best Practices – this section provides advice on how to utilize DRD to perform basic tasks such as maintenance, updates, recovery and provisioning.

Special Considerations for All Best Practices – this section provides detailed information about processes you might want to use in many of the best practice scenarios.

Quick Start

Installing DRD

The Dynamic Root Disk (DRD) product is contained in the DynRootDisk bundle. DRD is supported on systems—including hard partitions (nPars), virtual partitions (vPars), and Integrity Virtual Machines—running the following operating systems:

• HP-UX 11i v2 (11.23) September 2004 or later
• HP-UX 11i v3 (11.31)

DRD, together with any dependencies, can be installed from the Operating Environment or Application Software media for HP-UX 11i v2 or 11i v3. Alternatively, DRD and any dependencies can be downloaded from the DRD website: http://www.hp.com/go/DRD. Note that the website will always have the most up-to-date version of DRD.

To determine definitively if your installation of DRD will require a reboot, preview the swinstall installation and check if any kernel patches are included in the selection at the end of the swinstall’s analysis phase.
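For example, a preview-only installation from a depot might look like the following (a sketch; the depot location is a placeholder—substitute the media or network depot you are actually using):

# /usr/sbin/swinstall -p -s depot_svr:/var/depots/apps DynRootDisk

The analysis phase of the preview lists every fileset selected; if no kernel patches appear in that selection, the installation will not require a reboot.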


Creating a Clone

Overview

All uses of DRD begin with creating a clone of the root volume group. After discussing clone creation, we will present best practices a system administrator can follow to make software maintenance tasks easier.

The drd clone command creates a bootable disk containing a copy of the LVM volume group or VxVM disk group containing the root file system “/". In this white paper, “root group” refers to the LVM volume group or VxVM disk group that contains the root (“/”) file system. The term “logical volume” refers to an LVM logical volume or a VxVM volume.

Important Considerations

• For VxVM-based systems, all the volumes in the root disk group must reside on a single disk.
• For LVM-based systems, the root disk group may be spread across multiple disks.
• For VxVM-based systems, the root disk group may be mirrored to additional disks. If the root disk group is mirrored, all volumes in the root disk group must contain the same number of mirrors. For example, a configuration where all volumes except swapvol are mirrored is not supported.

• The source of the drd clone command—the volume group that is copied—is the LVM volume group or VxVM disk group containing the root (“/”) file system.

• The drd clone operation clones the root group. It is not appropriate for systems where the HP-UX operating system resides in multiple volume groups. (For example, if / resides in vg00, but /var resides in vg01, then the system is not appropriate for DRD.)

• The target of the drd clone operation must be a single disk. However, you can use the -x mirror_disk option to mirror the clone to another disk.
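For example, a clone mirrored to a second disk might be created with something like the following (a sketch; both device files are placeholders for disks on your own system):

# /opt/drd/bin/drd clone -v -t /dev/disk/disk5 -x mirror_disk=/dev/disk/disk6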

Choosing a Target Disk

The target disk must be specified as a block device file.

• An appropriate target disk should be writeable by the system, not currently in use by other applications, and large enough to hold a copy of each logical volume in the root group being cloned.

• The disk needs to be as big as the allocated space, not the used space, for each logical volume. For example, if the logical volume containing /var has been allocated 5 GB, but is only 70% full, you will still need 5 GB for the /var logical volume in the cloned group.

Important: It is the system administrator's responsibility to determine which disks are not currently in use and may therefore be used for a clone.

Please see the Dynamic Root Disk Administrator’s Guide, Chapter 2, which can be found at: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01918754/c01918754.pdf, for more details on tools and utilities that can be used to determine an appropriate target disk.


Checking the Size of the Target Disk

A simple mechanism for determining if you have chosen a sufficiently large disk is to run a preview of the drd clone command:

# /opt/drd/bin/drd clone -p -v -t /dev/dsk/cxtxdx    (legacy device file)
# /opt/drd/bin/drd clone -p -v -t /dev/disk/diskx    (agile device file)

(For further information about options available with the drd clone command, see the manpage drd-clone(1M).)

The preview operation includes the disk space analysis needed to determine if the target disk is sufficiently large. If you prefer to investigate disk sizes before previewing the clone, the diskinfo, vgdisplay, and vxprint commands might be useful.
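For example, the following commands report the size of a candidate disk and the space currently allocated in the root group (a sketch; the device file and group name are placeholders for your own configuration):

# /usr/sbin/diskinfo /dev/rdisk/disk5
# /usr/sbin/vgdisplay -v vg00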

Performing the Clone Operation

The clone operation is initiated with the drd clone command:

# /opt/drd/bin/drd clone -v -t /dev/dsk/cxtxdx

Where /dev/dsk/cxtxdx is a block device special file for the target disk you have chosen. For HP-UX 11i v3, either a legacy or persistent block device special file may be specified. In either case, the device file must refer to the entire disk, not the HP-UX partition on an Integrity system.

HP recommends designating a period of time for creating the clone when the root group is not undergoing many changes. Running applications that modify data on other volumes is not an issue for clone validity. However, you might want to choose a location for the clone so that writing the clone data does not compete with writes of application data.

See drd-clone(1M) for options available with this command.

Viewing and Accessing the Clone

To determine whether a clone has been created, which disk was used for the clone, when the clone was created, and many other details about the clone, use the drd status command:

# /opt/drd/bin/drd status

======= 04/13/09 11:49:43 MDT BEGIN Displaying DRD Clone Image Information (user=root) (jobid=mesa)

 * Clone Disk:              /dev/dsk/c1t4d0
 * Clone EFI Partition:     AUTO file present, Boot loader present
 * Clone Rehost Status:     SYSINFO.TXT not present
 * Clone Creation Date:     04/08/09 13:00:57 MDT
 * Clone Mirror Disk:       None
 * Mirror EFI Partition:    None
 * Original Disk:           /dev/dsk/c1t3d0
 * Original EFI Partition:  AUTO file present, Boot loader present
 * Original Rehost Status:  SYSINFO.TXT not present
 * Booted Disk:             Original Disk (/dev/disk/disk10)
 * Activated Disk:          Original Disk (/dev/disk/disk10)

======= 04/13/09 11:49:52 MDT END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=mesa)

On an LVM-based system:

• The clone that has been created is not visible (when executing commands such as bdf or vgdisplay) at the completion of the clone operation. This is because the file systems on the clone are unmounted and the clone volume group is exported at completion of the drd clone command. See the information below regarding drd mount to view the clone.

• If the root group name is “vgnn”, the clone group is “drdnn”. If the root group does not have the form “vgnn”, the clone group name is formed by prefixing the root group with “drd”.

• When the clone is booted, the root group is the same as the original root group that was cloned.

On a VxVM-based system:

• The cloned disk group will be displayed in the output of commands such as vxdisk, vxprint, and vxstat. A VxVM clone is not deported by the drd clone command because a deported group cannot be booted.

• If the root group is “rootdg”, the clone group is “drd_rootdg”. If the root group is “drd_rootdg”, the clone group is “rootdg”. More generally, the clone group name is formed by prefixing the root group name with “drd_”, or by removing the prefix.

• When the clone is booted, the root group keeps the clone group name that was visible on the original system. (Thus, the name of the root group changes when the clone is booted.)

If a system administrator wants to check the contents of particular files, the clone can be mounted by executing the command:

# /opt/drd/bin/drd mount

and subsequently unmounted by executing the command:

# /opt/drd/bin/drd umount

See drd-runcmd(1M), drd-mount(1M), and drd-umount(1M) for options available on these commands.

If a system administrator needs to verify the software contents of the clone, the following commands can be executed:

# /opt/drd/bin/drd runcmd swlist
# /opt/drd/bin/drd runcmd swverify

When software is installed in a drd runcmd session, its configuration scripts are postponed until the image is booted. As a result, the state attribute of a fileset is installed rather than configured. When the clone is booted, the swlist command shows the states as configured.
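For example, the state of a particular fileset on the clone could be checked with something like the following (a sketch, subject to the option restrictions in drd-runcmd(1M); PHCO_77777 is a placeholder patch ID):

# /opt/drd/bin/drd runcmd swlist -l fileset -a state PHCO_77777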

If a system administrator wants to run both of these commands, but eliminate the overhead of mounting and unmounting the clone for each command, the following sequence is preferable:

# /opt/drd/bin/drd mount
# /opt/drd/bin/drd runcmd swlist
# /opt/drd/bin/drd runcmd swverify
# /opt/drd/bin/drd umount


When drd runcmd finds the file systems in the clone already mounted, they will not be unmounted—nor on an LVM-based system will the volume group be vgexported—at the completion of the drd runcmd operation.

Modifying the Clone

The drd runcmd operation is used to run commands that modify the inactive system image. There are two fundamental requirements for a command run by the drd runcmd operation:

• The command must not affect the currently booted system. In particular, it must not start or stop daemons, make dynamic kernel changes, or in any way affect the process space of the booted system.

• The changes the command makes to the inactive system image must be fully functional when the image is booted. For example, if a patch installs a new daemon, it is usually necessary that the daemon be started automatically when the image is booted.

A command, such as swinstall, that satisfies the two fundamental requirements above is designated as DRD-safe. Similarly, a package whose control scripts behave correctly when executed under drd runcmd is designated as DRD-safe.

For release 3.3 and later of DRD, the commands certified to be DRD-safe are swinstall(1M), swlist(1M), swjob(1M), swmodify(1M), swremove(1M), swverify(1M), kctune(1M), update-ux(1M), and view(1). In addition, there are restrictions on the options that can be used on the sw* and update-ux commands. These restrictions are documented in the manpage drd-runcmd(1M).
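For example, a kernel tunable could be adjusted on the inactive image with something like the following (a sketch; the tunable and value are placeholders, and the option restrictions documented in drd-runcmd(1M) still apply):

# /opt/drd/bin/drd runcmd kctune semmni=2048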

For more information on DRD safety, please see the BP1: Additional Considerations section later in this white paper.

Activating & Booting the Clone

When the clone is ready for deployment, the drd activate command can be used to set the inactive system image as the primary boot path for the next system boot:

# /opt/drd/bin/drd activate

If desired, the alternate boot path and the High Availability (HA) alternate boot path may also be changed by using the options -x alternate_bootdisk and -x HA_alternate_bootdisk on the drd activate command. The value of the autoboot flag, set by the command setboot -b, is not affected by the drd activate command.
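For example, the primary boot path could be set to the inactive image and the alternate boot path to another disk in one step (a sketch; the device file is a placeholder):

# /opt/drd/bin/drd activate -x alternate_bootdisk=/dev/disk/disk7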

The drd activate command always changes the primary boot path to the inactive system image. (This is the clone until the clone is booted—then the original system becomes inactive.) The drd activate command does not toggle the boot path between the booted system and the inactive image. This makes the result of the drd activate command predictable, even if it is issued multiple times by multiple system administrators.

The option -x reboot can be set to true on the drd activate command if an immediate shutdown and reboot is desired:

# /opt/drd/bin/drd activate -x reboot=true


As a best practice, consider creating a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone when it is activated and booted (see the “DRD sync” section below for more details.)
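A minimal sketch of such a shutdown script, assuming the standard HP-UX rc framework (the script name, messages, and placement are illustrative choices, not part of the DRD product):

#!/sbin/sh
# /sbin/init.d/drd_sync (sketch): propagate late file changes to the clone at shutdown.
case "$1" in
    stop_msg)  echo "Running drd sync before shutdown" ;;
    stop)      /opt/drd/bin/drd sync ;;
    *)         : ;;   # nothing to do for start/start_msg
esac
exit 0

Place the script in /sbin/init.d and link it as a kill (Knnn) script in the appropriate /sbin/rcN.d directory for your shutdown sequence; see rc(1M) for the sequencing rules on your system.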

If you need to restore the booted system image to be the primary boot path—that is, “undo” the drd activate command—the drd deactivate command can be used.

After the Clone is Booted

After the clone is booted, a swlist of all software on the system should show all patches as configured. Any configure errors are documented in the usual SD location: /var/adm/sw/swconfig.log. When the clone is booted, the original system is now inactive, and may be mounted by executing the drd mount command. The mount point of the root file system of the original system is /var/opt/drd/mnts/sysimage_000. If application testing shows the newly booted clone to be unacceptable, the system administrator can use drd activate -x reboot=true to return to the original system.

Best Practices

The Best Practices below describe the high-level steps required to complete a task, followed by Additional Considerations that add more detailed information when needed. This allows the reader to get a complete picture of the steps required to perform a task, with supporting information provided to add clarity.

Each Best Practice begins with the creation of a clone, which was discussed in detail in the previous section. Note that after a disk created by the drd clone command is booted, the original system becomes an offline system image, and it can be manipulated using the drd runcmd, drd mount, drd umount and drd activate commands. In the Best Practices that follow, references to “inactive system image” or “inactive image” refer to the copy of the root volume group that is not currently booted. It might be a clone, or, when the clone is booted, it might be the original system image.

Best Practice 1 (BP1): Basic Maintenance - Patching

The most common use of a clone created by the drd clone command is to minimize the downtime needed to perform proactive maintenance, and this best practice focuses on applying patches. For this scenario, our setup is as follows:

Original image: /dev/disk/disk1

Clone disk: /dev/disk/disk5

Patches to apply: Quality Pack Patch bundle plus PHCO_77777

Depot with Quality Pack and PHCO_77777: depot_svr:/var/depots/1131

Objective: Perform maintenance on my server while minimizing downtime.

BP1: Overview of steps

1. Create the clone: drd clone -t /dev/disk/disk5


2. Use drd status to view the clone: drd status

3. Install the QPK and PHCO_77777: drd runcmd swinstall -s depot_svr:/var/depots/1131 -x patch_match_target=true

4. Ensure the patch and QPK are installed: drd runcmd swlist QPKBASE QPKAPPS PHCO_77777

5. Create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the “DRD sync” section below for more details)

6. Activate and boot the clone: drd activate -x reboot=true

BP1: Additional Considerations

DRD-Safe: Overview

Any patch or product that is installed on a DRD clone must be DRD-safe. That is, the patch/product must not impact the running system.

• For HP-UX 11i v2, only patches are checked prior to release to ensure they are DRD-safe. Almost all HP-UX 11i v2 patches are DRD-safe, and those that are not are listed in the drd_unsafe_patch_list file; see the next section for details about this file. Most HP-UX 11i v2 products are not DRD-safe and thus cannot be managed with DRD.

• For HP-UX 11i v3, patches and all products in the Operating Environments are checked prior to release for DRD safety. Any patches that are not DRD-safe either have the is_drd_safe flag set to false or are listed in the drd_unsafe_patch_list file; see the next section for details about this file. Any products or patches that are not DRD-safe will not be installed during a DRD session.

• Note that firmware patches are not DRD-safe and are automatically excluded from any attempt to install or remove them on an inactive image. One of the values of DRD is that once a clone is created and booted, the unchanged original image acts as a backup and can be reactivated at any time if the clone does not operate as expected. If a firmware patch were loaded on the clone after it was booted, the new firmware would still be present if the original image were booted again.

• For products, the is_drd_safe attribute is used to indicate whether a product is DRD-safe. If this attribute is not set or is missing, the product is considered to be DRD-unsafe.

• The DRD toolset will not process products or patches that are DRD-unsafe. For products and HP-UX 11i v3 patches, the DRD toolset uses the is_drd_safe attribute to determine safety. For HP-UX 11i v2 patches, any patch that is not safe is placed on the DRD Unsafe Patch List. Please see the section below, “DRD-safe: Updating the drd_unsafe_patch_list File” for information on how to make sure this list is current on your system.

Important: Any patch that has been written for a specific site, or that has a tag including the UNOF string, has not been through DRD-safe certification. These types of patches should not be applied—by executing drd runcmd—without thorough discussion with the patch provider.

DRD-Safe: Updating the drd_unsafe_patch_list file

The /etc/opt/drd/drd_unsafe_patch_list file is delivered as a volatile file containing a list of DRD-unsafe patches delivered without the attribute is_drd_safe set to false. Most new patches are DRD-safe; that is, they can be installed by the drd runcmd command to an inactive image without affecting the booted system. The few patches that are not DRD-safe set the fileset attribute is_drd_safe to false; for example, firmware patches fall into this category. In the rare event that a patch is released with the is_drd_safe attribute incorrectly set to true, that patch will be added to the drd_unsafe_patch_list file.

Follow this procedure to determine if your drd_unsafe_patch_list file needs to be updated:

1. View the file at ftp://ftp.itrc.hp.com/export/DRD/drd_unsafe_patch_list. (You can open this URL in a Web browser.) Determine the "last updated" date listed in the file and make a note of it.
2. On your active system image, view the "last updated" date listed in the /etc/opt/drd/drd_unsafe_patch_list file and make a note of it.
3. Compare the dates from Step 1 and Step 2. If the "last updated" date listed in the Step 1 file is later than the date listed in the Step 2 file, then your installed drd_unsafe_patch_list file is out of date and you need to continue to Step 4 (in this procedure) to update the file. (If the "last updated" date listed in the Step 1 file is the same as the Step 2 file, then you do not need to update the drd_unsafe_patch_list file and you are finished with this procedure.)
4. If your installed drd_unsafe_patch_list file is out of date and needs to be updated (to handle potential new DRD-unsafe patches), go to the HP IT Resource Center FTP site (see Step A below) to download the most recent drd_unsafe_patch_list file. This file is updated when any new patch is determined to have the is_drd_safe flag incorrectly set to true.

Update the drd_unsafe_patch_list file as follows:

A. Download the ftp://ftp.itrc.hp.com/export/DRD/drd_unsafe_patch_list file to /etc/opt/drd/drd_unsafe_patch_list on your active system image. If you have not created a clone, then you are done with updating the drd_unsafe_patch_list file and have finished this procedure. If you have created a clone, continue with Step B.

B. Because DRD uses the copy of the drd_unsafe_patch_list file on the inactive system image, the copy on that image must also be updated. If there is a clone, mount the inactive system image:

# /opt/drd/bin/drd mount

C. Copy the drd_unsafe_patch_list file to the inactive system image:

# /usr/bin/cp /etc/opt/drd/drd_unsafe_patch_list \
  /var/opt/drd/mnts/sysimage_00*/etc/opt/drd/drd_unsafe_patch_list

Note: If you download the file to a Windows system and then copy it to an HP-UX system, DOS-type carriage-return line feeds might be inserted into the file. To eliminate these characters, use the dos2ux(1) command.

DRD-Safe: Identifying Patches that will not be Installed in a drd runcmd Session

A preview operation can be used to help a system administrator determine if any patches from a desired patch selection will not install in a drd runcmd session. For example, to see if any DRD-unsafe patches will be rejected when installing from a patch depot, execute the following preview command:


# /opt/drd/bin/drd runcmd swinstall -p -x patch_match_target=true \
  -s depot_svr:/var/depots/mydepot

Each unsafe selection will be rejected with an appropriate message, and the remaining selections will be listed.

Planning the Installation of DRD-Unsafe Patches

Examine the list of DRD-unsafe patches determined in the section above. For each patch, apply the following logic:

1. Does the patch apply to a product that is not used and for which no use is planned? If so, no action is necessary.

2. Has the patch been superseded by a DRD-safe patch? If so, check the superseding patch’s rating and delivery date in the patch database at the HP IT Resource Center: http://www.itrc.com. If you are satisfied with the quality measures for the superseding patch, swcopy it into your patch depot. When the actual DRD installation is run, use the option “-x patch_match_target=true”.

3. Does the patch NOT require a kernel build or reboot? If so, one option is to apply it to the booted system before creating the clone.

4. If the patch is needed on your system, has not been superseded by a DRD-safe patch, and requires a kernel build or reboot, the patch must be applied to the inactive system image after the image is booted. Add the patch to a list of patches (or a selection file of patches) that you plan to install as soon as the inactive system image is booted.

Installing from Serial Depots

HP does not support executing drd runcmd to install from a serial depot source. Attempts to do so result in the following error:

ERROR: The source you have chosen is a tape device located on the booted system. Installing from this source is not supported under drd runcmd. To install software from this tape device, first copy the software to a non-tape device using swcopy.

To copy software from a serial depot to a directory depot, use the following command:

# /usr/sbin/swcopy -s /path/to/serial.depot SoftwareSelections @ \
  /path/to/non-serial/depot

If the software you are trying to copy does not have its dependencies in the depot, you should add the -x enforce_dependencies=false option to the swcopy command.

Verifying Operations

Verify scripts have always been required to refrain from making changes either to the file system or to the process space of the system where a swverify operation is run. For this reason, swverify operations are always DRD-safe, even for patches and products that are not DRD-safe for sw* and update-ux operations. The same does not apply to swverify operations with the -F option, which causes "fix" scripts to be run. Use of the -F option on a swverify command is not supported under drd runcmd.

If you attempt to run the swverify operation with the -F option under drd runcmd, DRD exits with a failure return code without running any fix scripts.


Patch Planning and Alternative Patch Deployment Schemes

The offline patching scenario presents a straightforward progression from clone to boot in a fairly short time period. However, a system administrator might benefit from some time and feedback to craft the exact selection of patches to be applied to an inactive system image that is eventually booted.

The drd clone and drd runcmd operations can be useful in helping a system administrator to identify an appropriate patch selection, and, if desired, to test it before final deployment.

For example, a system administrator might create a clone, identify a patch selection, use drd runcmd to apply it to the clone, and then investigate any unexpected messages. These messages could include a need to acquire or extend license keys, identify conflicts with previously installed site-specific patches, or just be notes about automatically selected patch filesets. The swlist and swverify commands can be run against the clone. In addition, the system administrator could compare the patch selection with that of a reference system.

If short test periods are available before the actual patch deployment, the system administrator can boot the clone and perform application testing. With this mechanism, problems are identified before the entire community is exposed to the changes.

When the system administrator is satisfied with the patch selection, one of the following two options could be utilized to make sure the clone is up to date before activating and booting it. To determine the best method, first run the drd sync command in preview mode:

# /opt/drd/bin/drd sync -p

• Using drd sync: Examine the /var/opt/drd/files_to_be_copied_by_drd_sync file. If the file is not large, create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the "DRD sync" section below for more details, and the example after this list).

• Re-creating the clone: If the /var/opt/drd/files_to_be_copied_by_drd_sync file is large, consider running drd clone to re-create the clone. This will ensure current copies of all configuration files (for example: users, passwords, printer configurations) are included in the clone. The final patch selection is then applied with the drd runcmd.
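For example, a rough gauge of how much data drd sync would copy (a sketch; the file path is the one produced by the preview above):

# wc -l /var/opt/drd/files_to_be_copied_by_drd_sync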

Regardless of the option used to bring the clone up to date, you can then activate and boot the clone.

If downtime is scarce, the system administrator could create a clone daily and apply the “known good” patch selection to it. When an application outage was needed for other reasons, or an unexpected outage occurred, the clone could be deployed as part of bringing the system up.

Best Practice 2 (BP2): Basic Maintenance – Patching & Security Bulletin Management with DRD and Software Assistant (SWA)

As in Best Practice 1, this scenario will once again demonstrate how to use DRD to reduce downtime during system maintenance. However, in this scenario we will be utilizing Software Assistant (SWA) to identify any required patches. SWA will identify missing patches/patch bundles, patches with warnings, and fixes for published security issues. For this scenario, our setup is as follows:

Assumption: clone is already created and you are booted on the active image

Patches to apply: SWA will figure this out for you

Depot with patches: /var/depots/1131swa


Version of SWA: C.02.26 or later

Objective: Perform maintenance on my server, including the identification and repair of security issues, while minimizing downtime.

BP2: Overview of steps

1. Use drd status to view the clone: drd status
2. Determine what patches are needed:
   a. Mount the clone: drd mount
   b. Create an SWA report: swa report -s /var/opt/drd/mnts/sysimage_001
   c. Download the patches identified by SWA into a depot: swa get -t /var/depots/1131swa
3. Patch installation might require special attention. Review any special installation instructions documented in /var/depots/1131swa/readBeforeInstall.txt.
4. Install everything in the 1131swa depot: drd runcmd swinstall -s /var/depots/1131swa -x patch_match_target=true
5. Ensure the patches are installed: drd runcmd view /var/adm/sw/swagent.log
6. Unmount the clone: drd umount
7. Create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the "DRD sync" section below for more details)
8. Activate and boot the clone: drd activate -x reboot=true

BP2: Additional Considerations

Using Software Assistant (SWA)

In this best practice, we combine the features of SWA and DRD to create a solution that helps reduce the time required to identify needed patches, while still reducing downtime. The default functionality of SWA is used in this scenario. For information on how to customize networking and the SWA analyzers utilized, see the SWA Web page: http://www.hp.com/go/swa.

Note that the SWA report created in Step 2b will identify two basic categories of requirements:

• Product, patches and manual actions that need to be applied in order to address known security issues

• Patches and patch bundles that are missing, warned, etc.

All required patches and patch bundles identified in either category listed above will be downloaded to the depot 1131swa in Step 2c. In order to address the other issues identified in the SWA report, you might need to take one or more of the following steps:

• If products need to be updated to address security issues, you will need to download and install those products. Note that this step can be taken after Step 2c above, with the required product updates added to the 1131swa depot so that all products and patches may be installed at the same time and with a single reboot (see the example after this list).
• If manual actions need to be taken in order to address security issues, perform those actions.
• Other patch issues may be identified and addressed, such as warned patches in recommended bundles, missing patch dependencies, or site-specific patches.
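For example, product updates downloaded separately could be folded into the same depot with something like the following (a sketch; the source path is a placeholder):

# /usr/sbin/swcopy -s /tmp/product_updates.depot \* @ /var/depots/1131swa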


For more information on SWA, go to http://www.hp.com/go/swa.

For more information on patching, see the Patch Management User Guide for HP-UX 11.x systems at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01919407/c01919407.pdf.

Best Practice 3 (BP3): Basic Maintenance – Updating Within Versions of HP-UX 11i v3

This best practice focuses on how to reduce downtime while performing an update from an older version of HP-UX 11i v3 to a newer release of HP-UX 11i v3 (update 4 or later). For this scenario, our setup is as follows:

Original image: /dev/disk/disk1 (with HP-UX 11i v3, update 2 installed)

Clone disk: /dev/disk/disk5

What to apply: HP-UX 11i v3 Update 4, Virtual Server OE

Depot with OE: depot_svr:/var/depots/1131_VSE-OE

Version of DRD: B.11.31.A.3.3 or later

Objective: Perform update on my server while minimizing downtime.

BP3: Overview of Steps

1. Create the clone: drd clone -t /dev/disk/disk5
2. Use drd status to view the clone: drd status
3. Install HP-UX 11i v3 Update 4, Virtual Server OE: drd runcmd update-ux -s depot_svr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE

4. Ensure that the OE is installed: drd runcmd swlist

5. Create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the “DRD sync” section below for more details)

6. Activate and boot the clone: drd activate -x reboot=true

7. After updating, you can check the following logs: /var/adm/sw/swagent.log and /var/opt/swm/swm.log.

BP3: Additional Considerations

• DRD update functionality is only supported from DRD version 3.3.x and later, which is shipped with HP-UX 11i v3 OE Update 4 and later.
• When performing an update, LVM file systems often need to be expanded. In order to determine if any file systems need to be expanded, you might want to run update-ux in preview mode on either your active disk or the clone (see the sketch after this list). You can then expand the file systems on the clone prior to performing the update. See the Using DRD to Expand LVM Logical Volumes and File Systems section, later in this white paper, for detailed information.
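A sketch of a preview run against the clone, using the depot and OE names from the BP3 setup above (the option restrictions documented in drd-runcmd(1M) govern which update-ux options are permitted):

# /opt/drd/bin/drd runcmd update-ux -p -s depot_svr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE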

Best Practice 4 (BP4): Basic Maintenance – Using DRD to Assist an Update from HP-UX 11i v2 to HP-UX 11i v3

Though a DRD clone cannot be updated from HP-UX 11i v2 to HP-UX 11i v3, DRD can be used to assist such an update by providing a mechanism to adjust file system sizing prior to performing an update, and providing a quick backup mechanism if you wish to restore the system to HP-UX 11i v2 for any reason. For this scenario, our setup is as follows:

Original image: /dev/dsk/c0t0d0 (with HP-UX 11i v2 installed)

Clone disk: /dev/dsk/c1t0d0

What to apply: HP-UX 11i v3 Update 4, Virtual Server OE

Depot with patches: depot_svr:/var/depots/1131_VSE-OE

Version of DRD: B.11.31.A.3.3 or later

Objective: Utilize DRD to help adjust file system sizes and provide a quick backup mechanism when performing an HP-UX 11i v2 to v3 update.

BP4: Overview of Steps

1. Create the clone: drd clone -t /dev/dsk/c1t0d0
2. Use drd status to view the clone: drd status
3. Run update-ux in preview mode on the active disk: update-ux -p -s depot_svr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE
4. Adjust file system sizes on the clone as needed (see BP4: Additional Considerations below for more information)
5. Create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the "DRD sync" section below for more details)
6. Activate and boot the clone, setting the alternate boot disk to the HP-UX 11i v2 disk: drd activate -x alternate_bootdisk=/dev/dsk/c0t0d0 -x reboot=true
7. Update the active image to HP-UX 11i v3, Virtual Server OE: update-ux -s depot_svr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE (note that a reboot is executed at this time)
8. Ensure that the software is installed properly: swverify \*
9. Verify all software has been updated to the HP-UX 11i v3 version: swlist
10. Ensure the integrity of your updated system by checking the following log files: /var/adm/sw/update-ux.log and /var/opt/swm/swm.log

BP4: Additional Considerations

Recommended Actions Prior to Update

Prior to updating a system to HP-UX 11i v3, you need to ensure the system being updated supports HP-UX 11i v3, including the required firmware, storage devices, 3rd-party applications, etc. HP has created a single website, http://www.hp.com/go/tov3, that contains all the information necessary to make sure a system is ready to update to HP-UX 11i v3. It is highly recommended that you follow the steps on this Web page prior to updating a system to HP-UX 11i v3.

Once a system has been checked to ensure it is ready to update to HP-UX 11i v3, perform the following two actions to complete your preparation work:

1. Run “swverify \*” and take action if there are any problems with existing packages or patches.

2. Optional: Check for obsolete software on your system and delete software that is no longer used. This will reduce the time needed for the update and prevent any problems with obsolete software. At the same time, verify that System Fault Manager (SFM) and Event Monitoring Service (EMS) are installed and configured properly.

Adjusting File System Sizes on the Clone

In Step 3 above, the output of the update-ux -p command will identify file systems that need to be expanded, and these changes can be made on the clone prior to activating it and performing the actual update. You can find detailed information on how to expand file systems, including /stand, later in this whitepaper's Using DRD to Expand LVM Logical Volumes and File Systems section.

Special Considerations with Different OS Versions on the Active and Inactive Images

After Step 6 is executed, the clone (/dev/dsk/c1t0d0) becomes the active disk, and the original disk (/dev/dsk/c0t0d0) becomes inactive. Both disks are still running HP-UX 11i v2. After Step 7 is executed, the active disk is running HP-UX 11i v3, and the inactive disk is running HP-UX 11i v2. Whenever you are in a situation where the active and inactive disks are not running the same major release of HP-UX, you need to be aware of some limitations:

• If an HP-UX 11i v2 disk is booted and HP-UX 11i v3 is on the inactive clone, you should not use any sw* commands with drd runcmd.
• If an HP-UX 11i v3 disk is booted and HP-UX 11i v2 is on the inactive clone, you can run drd runcmd swlist or drd runcmd swverify; however, you cannot run any other sw* commands.

Best Practice 5 (BP5): Basic Recovery

A key benefit of the DRD toolset is that you can use it for basic system recovery. Disk mirroring provides robust, up-to-date protection against hardware failures, but it also automatically propagates all software and file system updates to the mirror image. The clone, by contrast, provides a fallback for reverting recent software changes: if a software installation leaves the system in an undesirable state, DRD provides a better mechanism for quickly returning to the system's previous state. For this scenario, our setup is as follows:

Original image: /dev/disk/disk1

Clone disk: /dev/disk/disk5

Change to be made: Modify semaphore tunables prior to updating an application

Objective: Utilize a DRD clone as a quick recovery mechanism for the root volume group if needed.

BP5: Overview of Steps

1. Create the clone: drd clone -t /dev/disk/disk5
2. Use drd status to view the clone: drd status
3. Modify semaphore tunables in preparation for updating an application
4. Create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the "DRD sync" section below for more details)
5. While making the tunable changes, a critical networking configuration file is accidentally deleted, so we need to activate and boot the clone, which contains the original image: drd activate -x reboot=true


Booting the clone is considerably faster than recovering the system from a network recovery image. Note that any file system changes made between the time the clone was created and the time it is booted (for example, application data updates) need to be recovered separately.

BP5: Additional Considerations

Ongoing Recovery Failsafe

The approach described in this recovery best practice can be used on an ongoing basis by creating a clone regularly—daily or weekly—depending on the volatility of the system.

One option is to match the timing of DRD cloning with the regularity of system changes. For example, if critical non-reboot patches are identified bi-weekly, the clone could be made just prior to application of the critical patches. Alternatively, if there is a time period when availability is particularly critical, the clone could be created frequently to ensure speedy recovery from unknown problems.
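For example, a weekly clone could be scheduled from root's crontab with an entry such as the following (a sketch; the schedule and target disk are placeholders, and re-cloning to a disk that already holds a clone may require additional options—see drd-clone(1M)):

0 2 * * 0 /opt/drd/bin/drd clone -t /dev/disk/disk5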

Best Practice 6 (BP6): Basic Provisioning

Each of the previous best practices described creating a clone, then using that clone on the same system used to create it. In this scenario, we will demonstrate how a clone can be created on one system then booted on a different system—a task referred to as rehosting the clone. With the ability to rehost a clone, you can quickly and easily provision new systems. For this scenario, our setup is as follows:

Original image: /dev/disk/disk71 (initially allocated to VM2 as /dev/disk/disk1)

Clone disk: /dev/disk/disk75 (initially allocated to VM2 as /dev/disk/disk5)

Initial setup: a VM host with 2 VM guests: VM1 and VM2

Need to add: a third VM guest, VM3, running the same version of HP-UX as VM2

Objective: Utilize DRD rehosting to quickly provision VM3

Assumptions: VM host and all VM guests are running HP-UX 11i v3 with patches PHCO_36525 and PHCO_39064 loaded

BP6: Overview of Steps

1. From the VM host, create VM3 with just a network interface:
   a. hpvmcreate -P drdivm3 -c 1 -r 2G -a network:avio_lan::vswitch:myvswtch
   b. hpvmstatus -P drdivm3
2. From the VM2 guest, create and rehost the new boot disk (/dev/disk/disk5):
   a. Create the clone: drd clone -t /dev/disk/disk5
   b. Use drd status to view the clone: drd status
   c. Create the system info file with VM3's personality: cp /etc/opt/drd/default_sysinfo_file /var/opt/drd/tmp/drdivm3_sysinfo (see the "BP6: Additional Considerations" section below for more information)
   d. Copy the system info file to the EFI partition of the clone disk: drd rehost -f /var/opt/drd/tmp/drdivm3_sysinfo

3. From the VM host, activate VM3:
   a. Move the clone disk from VM2 to VM3:
      i. hpvmstatus -d -P drdivm2
      ii. hpvmmodify -P drdivm2 -d disk:avio_stor:0,1,1:disk:/dev/rdisk/disk75

      Note: You might see messages indicating a restart is required due to devices being busy; these can safely be ignored as they only pertain to the original system, not the target for rehosting.

      iii. hpvmmodify -P drdivm3 -a disk:avio_stor:0,1,1:disk:/dev/rdisk/disk75
   b. Boot VM3:
      i. hpvmstart -P drdivm3
      ii. hpvmconsole -P drdivm3
      iii. From the EFI shell, enter the following:
           fs0:
           cd EFI\HPUX
           hpux.efi

BP6: Additional Considerations

The steps listed above give a very basic overview of rehosting. For more details, please see the whitepaper, "Exploring DRD Rehosting on HP-UX 11i v2 and v3." Note that DRD rehosting is only supported between LVM-managed blades and virtual machines (VMs) for HP-UX 11i v3, and is only supported on LVM-managed VMs for HP-UX 11i v2.

Special Considerations for All Best Practices

Specific Details Regarding Clone Creation

Prior to creating a clone, ensure that your original disk is bootable. The drd clone operation performs the following tasks:

• Creates an Extensible Firmware Interface (EFI) partition on HP-UX Integrity systems
• Creates boot records
• Creates a new LVM volume group or VxVM disk group and a volume in the new group for each volume in the root volume group. The volume management type of the clone matches that of the root group.
• Configures swap and dump volumes
• Copies the contents of each file system in the root volume group to the corresponding file system in the new group
• Modifies particular files on the clone that identify the disk on which the volume group resides
• For LVM-based systems, modifies volume group metadata on the clone so that the volume group name is the same as the original root volume group when the clone is booted

Mirroring

System administrators frequently use MirrorDisk/UX to create a redundant copy of an HP-UX system as protection against hardware failures. DRD provides a means of protecting against software failures. Combining the use of DRD and MirrorDisk/UX can provide many benefits. Please see the Dynamic Root Disk and MirrorDisk/UX whitepaper, at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01920361/c01920361.pdf, for various strategies that can be used to combine the benefits of DRD clones and LVM mirrors.


Using DRD to Expand LVM Logical Volumes and File Systems

Extending File Systems Other Than /stand or /

One of the difficulties in expanding file systems in the root volume group of an LVM-based system is that the file systems are always busy, so a boot to LVM maintenance mode is often needed to complete the size change.

Because the entire inactive image created by a drd clone command is not in use, the system administrator has an opportunity to expand file systems on the inactive image.

The following steps work for file systems other than the boot (/stand) file system.

1. After creating the clone, execute the command:

# /opt/drd/bin/drd mount

2. Choose the file system on the clone to expand. For this example, we are using /opt. The logical volume is /dev/drd00/lvol6, mounted at /var/opt/drd/mnts/sysimage_001/opt. The logical volume is extended to 999 extents, and the VxFS file system is then extended to fill it. Execute the following commands to expand /opt:

# /usr/sbin/umount /dev/drd00/lvol6
# /usr/sbin/lvextend -l 999 /dev/drd00/lvol6
# /usr/sbin/extendfs -F vxfs /dev/drd00/rlvol6
# /usr/sbin/mount /dev/drd00/lvol6 /var/opt/drd/mnts/sysimage_001/opt

3. Run bdf to check that the /var/opt/drd/mnts/sysimage_001/opt file system now has the desired size.

Extending the /stand or / File System

Note: Extending the root ("/") file system is very similar to extending the /stand file system. For brevity, we simply refer to the /stand file system in this section.

Overview

It can be a challenging task for a system administrator to increase the size of the /stand file system, even in a Logical Volume Manager (LVM) environment. Because it is used by the boot loader, /stand must be the first file system on a physical disk and it must be contiguous and non-relocatable. Extents added to /stand must therefore come from the second logical volume on the disk, which, in a “standard” configuration, is usually the swap area (and, fortuitously, is also contiguous). Typically, it takes one reboot to free up swap space by moving to a new swap logical volume, and then another reboot to switch to the larger /stand.


This section describes how a system administrator can use an inactive system image created by a drd clone command to expand /stand with a single reboot. The most typical use of Dynamic Root Disk (DRD) is to patch an inactive system image. If desired, the administrator can use the reboot required to boot the patched image to also resize the /stand file system, thus accomplishing both tasks with a single reboot.

Notes

• The following steps assume that the system has a “standard” configuration with lvol2 used for swap. Once the change below has been made, that assumption no longer holds. Because the solution described below is a one-time change, take care to make /stand large enough to cover anticipated future growth.

• The procedure described below restricts all changes to the inactive system image. However, as a failsafe (in case, for example, you enter a command with vg00 instead of drd00), you should make sure you have a current recovery image of your system.

Procedure

1. Create a DRD clone. For further information on creating a clone, see the Creating a Clone section of this document, the manpage drd-clone(1M), or the Dynamic Root Disk Administrator’s Guide, which can be found at:
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01918754/c01918754.pdf

2. Mount the clone by executing the following command:

# /opt/drd/bin/drd mount

This command imports the cloned disk as the volume group drd00.

3. Execute the following command to see the current logical volumes used for swap, dump, and boot. You should see lvol2 being used for swap on both the booted system and the clone:

4. On 11i v2 Integrity:

# /usr/sbin/lvlnboot -v

Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
/dev/dsk/c3t15d0 (0/0/2/1.15.0) -- Boot Disk
Boot: lvol1 on: /dev/dsk/c3t15d0
Root: lvol3 on: /dev/dsk/c3t15d0
Swap: lvol2 on: /dev/dsk/c3t15d0
Dump: lvol2 on: /dev/dsk/c3t15d0, 0

Boot Definitions for Volume Group /dev/drd00:
Physical Volumes belonging in Root Volume Group:
/dev/dsk/c1t15d0 (0/0/1/1.15.0) -- Boot Disk
Boot: lvol1 on: /dev/dsk/c1t15d0
Root: lvol3 on: /dev/dsk/c1t15d0
Swap: lvol2 on: /dev/dsk/c1t15d0
Dump: lvol2 on: /dev/dsk/c1t15d0, 0

5. On 11i v3:

# /usr/sbin/lvlnboot -v

Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
/dev/disk/disk6_p2 -- Boot Disk
Boot: lvol1 on: /dev/disk/disk6_p2


Root: lvol3 on: /dev/disk/disk6_p2
Swap: lvol2 on: /dev/disk/disk6_p2
Dump: lvol2 on: /dev/disk/disk6_p2, 0

Boot Definitions for Volume Group /dev/drd00:
Physical Volumes belonging in Root Volume Group:
/dev/disk/disk7_p2 -- Boot Disk
Boot: lvol1 on: /dev/disk/disk7_p2
Root: lvol3 on: /dev/disk/disk7_p2
Swap: lvol2 on: /dev/disk/disk7_p2
Dump: lvol2 on: /dev/disk/disk7_p2, 0

6. Execute the following command and note the size of both lvol1 (/stand) and lvol2 (swap), the physical extent (PE) size, and the number of free extents on the disk.

# /usr/sbin/vgdisplay -v | more

--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      9
Open LV                     9
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               4350
VGDA                        2
PE Size (Mbytes)            4
Total PE                    4340
Alloc PE                    3452
Free PE                     888
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0

--- Logical volumes ---
LV Name                     /dev/vg00/lvol1
LV Status                   available/syncd
LV Size (Mbytes)            300
Current LE                  75
Allocated PE                75
Used PV                     1

LV Name                     /dev/vg00/lvol2
LV Status                   available/syncd
LV Size (Mbytes)            1024
Current LE                  256
Allocated PE                256
Used PV                     1

--- Physical volumes ---
PV Name                     /dev/dsk/c1t15d0
PV Status                   available
Total PE                    4340
Free PE                     888
Autoswitch                  On

7. Based on the sizes above, decide how much space you want to allocate to the new swap logical volume, and how much of the space currently in lvol2 you want to assign to /stand.
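For example, with the values shown above (a PE size of 4 MB, lvol1 at 75 extents, lvol2 at 256 extents, and 888 free extents), a new 2 GB swap volume consumes 2048 / 4 = 512 of the free extents, and doubling /stand from 75 to 150 extents uses 75 of the 256 extents that are freed when lvol2 is removed later in the procedure.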

8. Create a new logical volume for swap. The extents for swap must be contiguous and non-relocatable. For example, to assign 2 GB to a new logical volume to be used for swap, you would use the following command:


# /usr/sbin/lvcreate -L 2048 -C y -r n -n swap drd00
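In this command, -L specifies the size in megabytes, -C y requests contiguous allocation, -r n disables bad-block relocation, and -n names the new logical volume; see lvcreate(1M) for details.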

9. Remove the old dump device from drd00:

# /usr/sbin/lvrmboot -v -d lvol2 /dev/drd00

10. Create the new dump device:

# /usr/sbin/lvlnboot -d /dev/drd00/swap

11. Remove the old swap device:

# /usr/sbin/lvrmboot -s /dev/drd00

12. Add the new swap device:

# /usr/sbin/lvlnboot -s /dev/drd00/swap

13. Verify the changes:

# /usr/sbin/lvlnboot -v

14. Remove /dev/drd00/lvol2:

# /usr/sbin/lvremove -f /dev/drd00/lvol2

15. Using the values you determined in Step 7 above, extend /dev/drd00/lvol1, the volume where /stand is mounted. For example, the following command expands /dev/drd00/lvol1 to 150 extents:

# /usr/sbin/lvextend -l 150 /dev/drd00/lvol1

16. Unmount /dev/drd00/lvol1 so that the file system can be extended:

# /usr/sbin/umount /dev/drd00/lvol1

17. Use extendfs to extend the /stand file system on the inactive system image. Note that the character device file must be specified, and that the argument of -F is the file system type.

# /usr/sbin/extendfs -F hfs /dev/drd00/rlvol1

18. Re-mount /dev/drd00/lvol1:

# /usr/sbin/mount /dev/drd00/lvol1 /var/opt/drd/mnts/sysimage_001/stand
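As an optional verification step (not part of the original procedure), you can confirm the new size of the clone's /stand with bdf:

# /usr/bin/bdf /var/opt/drd/mnts/sysimage_001/stand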

19. Make the newly expanded logical volume a boot volume:

# /usr/sbin/lvlnboot -b lvol1 /dev/drd00


20. The old lvol2 must be removed from the device directory of the inactive system image. However, because the device directory on the inactive system image reflects devices when the image is booted, the file to be removed is part of the volume group vg00 on the inactive image:

# /usr/bin/rm -f /var/opt/drd/mnts/sysimage_001/dev/vg00/lvol2

21. Create a new character device and a new block device on the inactive system image so that the new swap volume is recognized when it is mounted. The high-order byte of the minor number should match that of the group file /var/opt/drd/mnts/sysimage_001/dev/vg00/group. The low-order bytes should match those of /dev/drd00/swap. In this example, we find:

# /usr/bin/ll /var/opt/drd/mnts/sysimage_001/dev/vg00/group
crw-r----- 1 root sys 64 0x020000 Aug 7 19:26 /var/opt/drd/mnts/sysimage_001/dev/vg00/group

# /usr/bin/ll /dev/drd00/swap
brw-r----- 1 root sys 64 0x02000a Aug 7 20:23 /dev/drd00/swap

So we use the commands:

# /usr/sbin/mknod /var/opt/drd/mnts/sysimage_001/dev/vg00/rswap c 64 0x02000a
# /usr/sbin/mknod /var/opt/drd/mnts/sysimage_001/dev/vg00/swap b 64 0x02000a
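In this example, the group file's minor number 0x020000 supplies the high-order byte (0x02), and /dev/drd00/swap supplies the low-order bytes (000a), so both new device files are created with major number 64 and minor number 0x02000a.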

22. Re-create the file /var/opt/drd/mapfiles/drd00mapfile:

# /usr/sbin/vgexport -p -m /var/opt/drd/mapfiles/drd00mapfile /dev/drd00

(This is the only change to the booted system. It is made so that if the clone is un-mounted and re-mounted before booting to it, the drd00 volume group will be properly imported. Even though the vgexport fails because the volume group is active, the mapfile is still updated.)

23. Check to see that the boot and swap areas on the clone are as expected:

24. On 11i v2 Integrity:

# /usr/sbin/lvlnboot -v

Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
/dev/dsk/c3t15d0 (0/0/2/1.15.0) -- Boot Disk
Boot: lvol1 on: /dev/dsk/c3t15d0
Root: lvol3 on: /dev/dsk/c3t15d0
Swap: lvol2 on: /dev/dsk/c3t15d0
Dump: lvol2 on: /dev/dsk/c3t15d0, 0

Boot Definitions for Volume Group /dev/drd00:
Physical Volumes belonging in Root Volume Group:
/dev/dsk/c1t15d0 (0/0/1/1.15.0) -- Boot Disk
Boot: lvol1 on: /dev/dsk/c1t15d0
Root: lvol3 on: /dev/dsk/c1t15d0
Swap: swap on: /dev/dsk/c1t15d0
Dump: swap on: /dev/dsk/c1t15d0, 0

25. On 11i v3:

# /usr/sbin/lvlnboot -v


Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
/dev/disk/disk6_p2 -- Boot Disk
Boot: lvol1 on: /dev/disk/disk6_p2
Root: lvol3 on: /dev/disk/disk6_p2
Swap: lvol2 on: /dev/disk/disk6_p2
Dump: lvol2 on: /dev/disk/disk6_p2, 0

Boot Definitions for Volume Group /dev/drd00:
Physical Volumes belonging in Root Volume Group:
/dev/disk/disk7_p2 -- Boot Disk
Boot: lvol1 on: /dev/disk/disk7_p2
Root: lvol3 on: /dev/disk/disk7_p2
Swap: swap on: /dev/disk/disk7_p2
Dump: swap on: /dev/disk/disk7_p2, 0

26. Set the clone to be the primary boot disk and boot to it:

# /opt/drd/bin/drd activate -x autoreboot=true

HP recommends that you create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone. See the “DRD sync” section below for more details.

Viewing Log Files

When you use drd runcmd to run commands that modify the inactive system image, logging occurs in several places that correspond to the locations at which the processes were executed. Because DRD runs on the booted system, a DRD log is created on the active image. Any sw* command that you run on an inactive image appends to the sw* logs on the inactive image.

For example, the command:

# /opt/drd/bin/drd runcmd swinstall -s depotserver:/patch_depot PHKL_9999

results in new messages in each of the following log files:

• In /var/opt/drd/drd.log (the original log file, located on the booted system)

• In the copy of /var/adm/sw/swinstall.log on the clone

• In the copy of /var/adm/sw/swagent.log on the clone

Because drd.log is located in the /var file system, it is copied during the clone operation to the /var file system on the clone. However, because the clone’s file systems must be unmounted before the final ending banner message of the operation is written to the log, the record of the clone operation in the clone’s log is truncated at the message indicating that file systems are being copied. The next message in the clone’s log is issued by the next DRD command run on the clone itself—after it is booted. The log on the booted system will be complete, ending with the final banner message.

sw* logs for a given image produce a complete picture of all software operations on that image. If the image was created by a clone, then the initial copies of the logs were copied by the clone operation. New records might have been appended to the logs by subsequent drd runcmd sw* operations or by sw* commands run after the image was booted.

Note the following important factors about log files created from a drd runcmd operation:


• The drd log (/var/opt/drd/drd.log) contains entries that pertain only to the command that was run, including the return code and/or error messages. This log does not contain any log messages from the sw* command itself.

• The sw* log file that results from a drd runcmd operation is always on the inactive system image (that is, the clone) and is not appended to the original logs on the booted system (that is, the active system image).

Logs can be viewed on the inactive system image by executing drd runcmd view. For example, to view the swagent log on the clone, execute the following command:

# /opt/drd/bin/drd runcmd view /var/adm/sw/swagent.log

The following example compares operations and logs on the booted system and the inactive system image.

To swverify the booted system and view the swverify log, execute the following commands:

# /usr/sbin/swverify \*

# /usr/bin/view /var/adm/sw/swverify.log

To swverify the inactive system image and view the swverify log, execute the following commands:

# /opt/drd/bin/drd runcmd swverify \*

# /opt/drd/bin/drd runcmd view /var/adm/sw/swverify.log

Note that drd runcmd view /var/opt/drd/drd.log displays the clone's copy of the DRD log, which does not include drd runcmd commands recently run on the booted system, so it is not particularly useful.

You can view logs directly by mounting the inactive system image with the drd mount command. The log paths are relative to the mount point. The mount point of the root file system of an inactive image created by the drd clone command is: /var/opt/drd/mnts/sysimage_001. After DRD is used to create, activate, and boot a clone, the mount point of the original image (which is now inactive) is: /var/opt/drd/mnts/sysimage_000. The swagent log resides at /var/opt/drd/mnts/sysimage_001/var/adm/sw/swagent.log or /var/opt/drd/mnts/sysimage_000/var/adm/sw/swagent.log.
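For example, to page through the clone's copy of the swagent log after mounting the inactive image (paths as described above):

# /opt/drd/bin/drd mount
# /usr/bin/more /var/opt/drd/mnts/sysimage_001/var/adm/sw/swagent.log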

Note: The drd runcmd view command provides a mechanism for browsing logs, but not for annotating them. If you want to modify log files, you need to mount the inactive system image and edit the logs using their full pathnames on the booted system. For example, to annotate the swagent log on the inactive system image, you would use the following commands:

# /opt/drd/bin/drd mount
# /usr/bin/echo "Swagent log after quality pack application using drd runcmd. June 6, 2007" >> \
/var/opt/drd/mnts/sysimage_001/var/adm/sw/swagent.log


Maintaining the Integrity of System Logs

If system logs are being collected and audited for regulatory, intrusion detection, or forensics purposes, it is desirable to avoid gaps in the logs between the creation and the boot of the clone. To address this need, the drd sync command was introduced in March 2010. The drd sync command automatically synchronizes the active image and the clone, eliminating the need to manually update files on the clone. It is recommended that the drd sync command be incorporated into a shutdown script, run prior to activating and booting the clone, so that any files changed on the original image after the clone was created are propagated to the clone. See the “DRD sync” section below for more information.

DRD sync

With the March 2010 release of DRD, version A.3.5.186, the drd sync command is supported to automatically synchronize the active image and the clone. For detailed information regarding drd sync, please review Chapter 5 of the Dynamic Root Disk Administrator’s Guide at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01918754/c01918754.pdf. The guide explains how to use drd sync, describes when it is preferable to re-create the clone with drd clone rather than use drd sync, and includes a sample drd sync system shutdown script.
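A minimal sketch of such a shutdown script is shown below. It is illustrative only; the script name, run-level link, and sequence number are assumptions, and HP's supported sample script appears in the Administrator's Guide referenced above.

#!/sbin/sh
# Illustrative sketch of a drd sync shutdown script (not HP's supported sample).
# Install, for example, as /sbin/init.d/drd_sync with a kill link such as
# /sbin/rc1.d/K005drd_sync so that the sync runs during shutdown.
case "$1" in
stop)
    # Propagate files changed on the active image since the clone was created.
    /opt/drd/bin/drd sync
    ;;
*)
    echo "usage: $0 stop"
    ;;
esac
exit 0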

DRD Activate and Deactivate Commands

The drd activate and drd deactivate commands enable an administrator to choose the image to be booted the next time the system is restarted. An image is said to be activated if it will be booted: a drd activate command activates the inactive image, and a drd deactivate command activates the currently booted image.

An administrator can use drd activate and drd deactivate to implement various maintenance schemes, such as setting a DRD clone as an alternate boot disk or activating a mirrored DRD clone. For further information on the commands, please see the Using Dynamic Root Disk Activate and Deactivate whitepaper at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01920455/c01920455.pdf.

Delayed Activation/Boot of the Clone

HP recommends that system administrators clone, patch, and boot in a fairly short time cycle. If a long period of time has passed since the clone was created, it is recommended that the clone be re-created. If it has been a few days since the clone was created, you can use the drd sync command to determine how many files have changed on the original image and would need to be propagated to the clone. If this number is large, it may be advisable to re-create the clone rather than using drd sync to copy files from the original image to the clone. For more details, please see Chapter 5 of the Dynamic Root Disk Administrator’s Guide at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01918754/c01918754.pdf.
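Assuming drd sync supports the same -p (preview) option as other DRD commands, a preview run reports what would be copied without modifying the clone:

# /opt/drd/bin/drd sync -p

The preview output and /var/opt/drd/drd.log indicate how many files would be propagated, which can help you decide whether to re-create the clone instead.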


For More Information

To read more about Dynamic Root Disk, go to www.hp.com/go/drd.

Call to Action

HP welcomes your input. Please give us comments about this white paper, or suggestions for LVM or related documentation, through our technical documentation feedback website:

http://docs.hp.com/en/feedback.html

© 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

5900-0594, August 2010
