
An Oracle Technical White Paper October 2011

Installing Oracle Real Application Clusters 11g Release 2 on Oracle Solaris 10 Containers


Overview
Prerequisites
    Servers
    Network
    Storage
Preparing
    Server
    Network
    Storage LUNs on the Server
Creating Oracle Solaris Containers
    Creating a Zone
    Installing a Zone
    Configuring a Zone
Configuring Oracle Solaris Containers to Install Oracle RAC
    Configuring the root User Environment
    Configuring the Network and Name Service
    Adding Groups, Users, and Projects and Updating User Profiles
    Configuring Shared Storage Devices Inside a Zone
    Configuring Passwordless SSH for the oracle and grid Users
Installing Oracle RAC 11.2.0.1
    Performing a Preinstallation Verification for Oracle RAC
    Installing Oracle Clusterware
    Installing Oracle Database
    Applying CRS and Database Patches for the Oracle Solaris Containers Environment
Creating the Oracle RAC Database
    Creating the Oracle Automatic Storage Management Disk Group
    Creating the Oracle RAC Database Using the Oracle Automatic Storage Management Disk Group
    Performing a Postinstallation Verification of the Cluster and Database
References


Overview

This paper provides step-by-step procedures for creating and configuring Oracle Solaris Containers (also called zones), a built-in virtualization technology, to host Oracle Real Application Clusters (Oracle RAC) 11g Release 2 databases. It also describes how to apply the Oracle RAC patches required for an Oracle Solaris Containers environment. The paper is written for developers and system administrators who are new to Oracle Solaris.

The best practices presented in this paper can be used in a production environment. After deployment, apply the security policies required by your production environment.

This document uses a sample configuration that consists of four x86-64 architecture servers, but the information is equally applicable to SPARC platforms.

Note: The sequences of commands shown on one node must be executed on all the other physical nodes to create and configure the Oracle Solaris Containers.

It is recommended that you refer to the following Oracle white papers:

• Best Practices for Deploying Oracle RAC Inside Oracle Solaris Containers

• Highly Available and Scalable Oracle RAC Networking with Oracle Solaris 10 IPMP, which provides more details on public and private networking with Oracle RAC

The zones of a single Oracle RAC deployment inside Oracle Solaris Containers are hosted on different physical servers. However, multiple Oracle RAC deployments of different versions can be consolidated on the same set of physical servers.

Prerequisites

This section describes how to configure the physical connectivity of the servers, network, and storage. At a minimum, you need to identify IP addresses, storage LUNs, and VLAN-tagged NICs (for public and private networks).

Servers

The physical server names are pnode01, pnode02, pnode03, and pnode04, and their configured IP addresses are as follows:

199.199.121.101 pnode01

199.199.121.102 pnode02

199.199.121.103 pnode03

199.199.121.104 pnode04

These four servers host four zones, one zone per node, for one Oracle RAC database deployment. For another Oracle RAC database deployment, another four zones could be created on the same set of physical servers. These physical servers are connected to four Cisco switches (two for a public network and another two for a private network) and two Sun Storage 6180 arrays from Oracle.


Network

In this sample configuration, the names of the Oracle Solaris Containers are ve11gr2x01, ve11gr2x02, ve11gr2x03, and ve11gr2x04. The Oracle Solaris Containers' IP addresses can be on a different TCP/IP network than that of the host servers.

Oracle Solaris Containers Public IP Addresses

Configure the following set of IP addresses for the Oracle Solaris Containers' public interfaces. Configure the Single Client Access Name (SCAN) IP addresses in DNS, along with the other container IP addresses and VIP addresses. Here, the DNS domain name is doit.com.

## Zone IP addresses to identify individual hosts:

199.199.111.101 ve11gr2x01

199.199.111.102 ve11gr2x02

199.199.111.103 ve11gr2x03

199.199.111.104 ve11gr2x04

## Oracle RAC's virtual IP (VIP) addresses:

199.199.111.111 ve11gr2x01-vip

199.199.111.112 ve11gr2x02-vip

199.199.111.113 ve11gr2x03-vip

199.199.111.114 ve11gr2x04-vip

## Oracle RAC's SCAN IP addresses; add these IP addresses to DNS, not to /etc/hosts:

199.199.111.115 rac11gr2x-scan.doit.com

199.199.111.116 rac11gr2x-scan.doit.com

199.199.111.117 rac11gr2x-scan.doit.com

Oracle Solaris Containers Private IP Addresses

Configure the following IP addresses for the Oracle RAC private network. More than one IP address can be configured on the private network, so the list below reserves two addresses per zone, even though only one per zone is used here.

## Oracle RAC private IP addresses:

199.199.120.101 ve11gr2x01-priv

199.199.120.102 ve11gr2x02-priv

199.199.120.103 ve11gr2x03-priv

199.199.120.104 ve11gr2x04-priv

199.199.120.105 ve11gr2x05-priv

199.199.120.106 ve11gr2x06-priv

199.199.120.107 ve11gr2x07-priv

199.199.120.108 ve11gr2x08-priv


Storage

Two Sun Storage 6180 arrays are used with four FCAL ports on each RAID controller. For more information on how to configure LUNs, refer to the respective storage guide.

Configure the following LUNs on the Sun Storage 6180 arrays. LUN names are assigned on the storage array, and Controller Target Disk (CTD) names are created by the operating system (OS) when the LUNs or disks are visible inside the server.

Volume names are optional; they are assigned while configuring the disks inside a server and are listed here for reference. Volume names help you locate disks on other servers when the device paths differ. Using the same volume name on every node, with soft links created under a common directory in all zones, provisions each disk under an identical name on all nodes. This is also a convenient way to document and track each disk/LUN from the storage array through Oracle Automatic Storage Management to the Oracle Database files.

TABLE 1. FIRST SUN STORAGE 6180 ARRAY

S#  LUN NAME     SIZE (GB)  WWN NAME                                         CTD NAME ON THE HOSTS (PNODE01)        VOLUME NAME
1   dw11gr2A     250        60:08:0E:50:00:17:F3:A8:00:00:1E:63:4D:DF:44:33  c7t60080E500017F3A800001E634DDF4433d0  dw22A
2   oltp11gr2A   250        60:08:0E:50:00:17:F3:CC:00:00:15:91:4D:DF:47:4D  c7t60080E500017F3CC000015914DDF474Dd0  oltp22A
3   ov11gr2A_01  2          60:08:0E:50:00:17:F3:A8:00:00:1E:64:4D:DF:48:65  c7t60080E500017F3A800001E644DDF4865d0  ov22A01
4   ov11gr2A_02  2          60:08:0E:50:00:17:F3:CC:00:00:15:93:4D:DF:47:F4  c7t60080E500017F3CC000015934DDF47F4d0  ov22A02
5   ov11gr2A_03  2          60:08:0E:50:00:17:F3:A8:00:00:1E:66:4D:DF:48:B9  c7t60080E500017F3A800001E664DDF48B9d0  ov22A03

TABLE 2. SECOND SUN STORAGE 6180 ARRAY

S#  LUN NAME     SIZE (GB)  WWN NAME                                         CTD NAME ON THE HOSTS (PNODE01)        VOLUME NAME
1   dw11gr2B     250        60:08:0E:50:00:17:F4:46:00:00:1F:FD:4D:DF:48:1A  c7t60080E500017F44600001FFD4DDF481Ad0  dw22B
2   oltp11gr2B   250        60:08:0E:50:00:17:F3:78:00:00:16:71:4D:DF:4A:E2  c7t60080E500017F378000016714DDF4AE2d0  oltp22B
3   ov11gr2B_01  2          60:08:0E:50:00:17:F4:46:00:00:1F:FE:4D:DF:49:A7  c7t60080E500017F44600001FFE4DDF49A7d0  ov22B01
4   ov11gr2B_02  2          60:08:0E:50:00:17:F3:78:00:00:16:73:4D:DF:4B:E0  c7t60080E500017F378000016734DDF4BE0d0  ov22B02
5   ov11gr2B_03  2          60:08:0E:50:00:17:F4:46:00:00:20:00:4D:DF:49:E5  c7t60080E500017F446000020004DDF49E5d0  ov22B03


Preparing

Ensure that all the servers, network connections, and storage are configured before creating zones: verify that the required OS and kernel patches are in place, configure the VLANs at the network switches, and configure the LUNs/disks inside the servers.

Server

Install the Oracle Solaris 10 10/09 (s10x_u8wos_08a X86) operating system, with kernel patch 142901-15 or a later update, on all the physical nodes.

root@pnode01:/# cat /etc/release

Solaris 10 10/09 s10x_u8wos_08a X86

Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.

Use is subject to license terms.

Assembled 16 September 2009

root@pnode01:/#

OS Kernel Patch

Install the kernel patch on the physical nodes (Oracle Solaris Containers will use the same kernel version):

root@pnode01:/# uname -a

SunOS pnode01 5.10 Generic_142901-15 i86pc i386 i86pc
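
As an additional check on each node, showrev -p lists all applied patches, so a quick grep confirms that the kernel patch is in place:

root@pnode01:/# showrev -p | grep 142901   # should report 142901-15 (or a later revision)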

ZFS File System

Use a ZFS file system to host the root file system of the Oracle Solaris Containers. The disks used in creating the ZFS file system are the local disks of the node. Oracle Database and Oracle Clusterware binaries are installed on this file system.

Create the pool and file system:

root@pnode01:/# zpool create zonespool c0t1d0 c0t2d0

root@pnode01:/# zfs create zonespool/ve11gr2x01
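
A quick sanity check of the new pool and dataset (using the names created above) before pointing a zone at them:

root@pnode01:/# zpool status zonespool   # the pool state should be ONLINE
root@pnode01:/# zfs list -r zonespool    # confirms the zonespool/ve11gr2x01 dataset and its mountpoint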

Network

Let’s look at the VLAN configuration on the switches.

Public Network Switches

Of the four Cisco 2960 network switches, two (switch1 and switch2) are configured for public networking, providing an uplink to the outside world. These public network switches are configured with VLAN ID 111.


Here are the public network ports configured on switch2 for VLAN 111. Perform a similar configuration on switch1.

root@port05:/# telnet nts 2002

Trying 199.199.121.200...

Connected to nts.

Escape character is '^]'.

switch2>enable

switch2#show vlan id 111

VLAN Name Status Ports

---- -------------------------------- --------- -------------------------------

111 vlan111 active Gi0/1, Gi0/2, Gi0/3, Gi0/4

Gi0/11, Gi0/12, Gi0/13, Gi0/14

Gi0/24

VLAN Type SAID MTU Parent RingNo BridgeNo Stp BrdgMode Trans1 Trans2

---- ----- ---------- ----- ------ ------ -------- ---- -------- ------ ------

111 enet 100111 1500 - - - - - 0 0

Remote SPAN VLAN

----------------

Disabled

Primary Secondary Type Ports

------- --------- ----------------- ------------------------------------------

switch2#

switch2#show running-config

.... [Shows only connected ports, rest are truncated.]

....

interface GigabitEthernet0/10

!

interface GigabitEthernet0/11

switchport mode trunk

!

interface GigabitEthernet0/12

switchport mode trunk

!

interface GigabitEthernet0/13

switchport mode trunk

!


interface GigabitEthernet0/14

switchport mode trunk

!

interface GigabitEthernet0/15

!

....

....

switch2#

Private Network Switches

Two additional switches, switch3 and switch4, are private network switches configured with VLAN ID 120.

Here are the private network ports configured on switch4 for VLAN 120. Perform a similar configuration on switch3.

root@port05:/# telnet nts 2004

Trying 199.199.121.200...

Connected to nts.

Escape character is '^]'.

switch4>enable

switch4#

switch4#show vlan id 120

VLAN Name Status Ports

---- -------------------------------- --------- -------------------------------

120 vlan120 active Gi0/1, Gi0/2, Gi0/3, Gi0/4

Gi0/11, Gi0/12, Gi0/13, Gi0/14

Gi0/24

VLAN Type SAID MTU Parent RingNo BridgeNo Stp BrdgMode Trans1 Trans2

---- ----- ---------- ----- ------ ------ -------- ---- -------- ------ ------

120 enet 100120 1500 - - - - - 0 0

Remote SPAN VLAN

----------------

Disabled

Primary Secondary Type Ports

------- --------- ----------------- ------------------------------------------

switch4#


switch4#show running-config

.... [Shows only connected ports, rest are truncated.]

....

interface GigabitEthernet0/11

switchport mode trunk

!

interface GigabitEthernet0/12

switchport mode trunk

!

interface GigabitEthernet0/13

switchport mode trunk

!

interface GigabitEthernet0/14

switchport mode trunk

!

interface GigabitEthernet0/15

!

....

....

switch4#

DNS Server Updates

Ensure that all the IP addresses mentioned in the “Servers” and “Network” subsections of the “Prerequisites” section are registered in the DNS server; this includes all the zone IP addresses, VIP addresses, and SCAN IP addresses. In this example, the DNS domain name is doit.com. Also add the following entries to the DNS server.

DNS server: 199.199.111.250 dnsserver-111.doit.com

Gateway/router: 199.199.111.254 gateway111.doit.com

When configuring zones, assign the same IP addresses for the DNS server and router network configurations.
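
After the DNS updates, a quick check from any node confirms that the zone, VIP, and SCAN names resolve (nslookup's second argument queries the chosen DNS server explicitly):

root@pnode01:/# nslookup ve11gr2x01.doit.com 199.199.111.250
root@pnode01:/# nslookup ve11gr2x01-vip.doit.com 199.199.111.250
root@pnode01:/# nslookup rac11gr2x-scan.doit.com 199.199.111.250   # should return all three SCAN addresses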

Storage LUNs on the Server

Provision the disks/LUNs so that all the storage LUNs are configured and visible on all the physical nodes. Use the format utility to create a single slice per disk.


MPxIO

MPxIO is used for high availability of the disks/LUNs and redundant physical connectivity from servers to storage. Verify the following default configuration on the servers:

root@pnode01:/# egrep -e "^mpxio" /kernel/drv/fp.conf

mpxio-disable="no";

root@pnode01:/#
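
With MPxIO enabled, each LUN appears as a single /scsi_vhci device even though it is reachable through both RAID controllers. The mpathadm utility, included in Oracle Solaris 10, can confirm this; for each LUN, expect a total path count and operational path count of 2. A check for the first LUN:

root@pnode01:/# mpathadm list lu
root@pnode01:/# mpathadm show lu /dev/rdsk/c7t60080E500017F3A800001E634DDF4433d0s2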

LUNs/Disks Configuration on the Server

The Sun Storage 6180 array LUNs are mapped to all nodes, so the disks/LUNs can be configured from any one of the hosts, as follows. (Configure all the disks of this Oracle RAC cluster the same way.)

Note: The starting cylinder cannot be zero, because cylinder 0 contains the disk label, which must not be overwritten; hence, starting cylinder 5 is chosen.

root@pnode01:/# format

....

62. c7t60080E500017F378000016734DDF4BE0d0 <DEFAULT cyl 1020 alt 2 hd 128 sec 32>

/scsi_vhci/disk@g60080e500017f378000016734ddf4be0

63. c7t60080E500017F446000020004DDF49E5d0 <DEFAULT cyl 1020 alt 2 hd 128 sec 32> ov22B03

/scsi_vhci/disk@g60080e500017f446000020004ddf49e5

Specify disk (enter its number)[61]: 62

selecting c7t60080E500017F378000016734DDF4BE0d0

[disk formatted]

format>

format> fdisk

No fdisk table exists. The default partition for the disk is:

a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the

partition table.

y

format> vol

Enter 8-character volume name (remember quotes)[""]:"ov22B02"

Ready to label disk, continue? y

format> partition

PARTITION MENU:

0 - change `0' partition

1 - change `1' partition

2 - change `2' partition


3 - change `3' partition

4 - change `4' partition

5 - change `5' partition

6 - change `6' partition

7 - change `7' partition

select - select a predefined table

modify - modify a predefined partition table

name - name the current table

print - display the current table

label - write partition map and label to the disk

!<cmd> - execute <cmd>, then return

quit

partition> 0

Part Tag Flag Cylinders Size Blocks

0 unassigned wm 0 0 (0/0/0) 0

Enter partition id tag[unassigned]:

Enter partition permission flags[wm]:

Enter new starting cyl[0]: 5

Enter partition size[0b, 0c, 5e, 0.00mb, 0.00gb]: $

partition> label

Ready to label disk, continue? y

partition> print

Volume: ov22B02

Current partition table (unnamed):

Total disk cylinders available: 1020 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks

0 unassigned wm 5 - 1019 1.98GB (1015/0/0) 4157440

1 unassigned wm 0 0 (0/0/0) 0

2 backup wu 0 - 1019 1.99GB (1020/0/0) 4177920

3 unassigned wm 0 0 (0/0/0) 0

4 unassigned wm 0 0 (0/0/0) 0

5 unassigned wm 0 0 (0/0/0) 0

6 unassigned wm 0 0 (0/0/0) 0

7 unassigned wm 0 0 (0/0/0) 0

8 boot wu 0 - 0 2.00MB (1/0/0) 4096

9 unassigned wm 0 0 (0/0/0) 0

partition>


Creating Oracle Solaris Containers

To create an Oracle Solaris Container, first create the zone and then install it. On the first boot of the zone, a sysidcfg file completes the configuration of the Oracle Solaris Container.

Creating a Zone

In this paper, Oracle Solaris Containers are created using the /var/tmp/ve11gr2x01.cfg configuration file shown below. Edit the configuration file to match your environment, paying particular attention to disk names, NIC names, and the zone root path.

For example, create a zone on one of the physical nodes (ve11gr2x01 on pnode01) using the command zonecfg -z <zonename> -f <cfg_file>, as shown below. Create the corresponding zones on all the other nodes: ve11gr2x02 on pnode02, ve11gr2x03 on pnode03, and ve11gr2x04 on pnode04.

Note: Choose the number of CPUs (ncpus), memory (physical), and swap variables as required.

root@pnode01:/# cat /var/tmp/ve11gr2x01.cfg

create -b

set zonepath=/zonespool/ve11gr2x01

set autoboot=true

set limitpriv=default,proc_priocntl,proc_lock_memory,sys_time

set scheduling-class=TS,RT,FX

set ip-type=exclusive

add capped-memory

set physical=11G

set swap=24G

set locked=6G

end

add net

set physical=igb120002

end

add net

set physical=igb120003

end

add net

set physical=igb111000

end

add net

set physical=igb111001

end

add device

set match=/dev/dsk/c7t60080E500017F3A800001E634DDF4433d0s0


end

add device

set match=/dev/rdsk/c7t60080E500017F3A800001E634DDF4433d0s0

end

add device

set match=/dev/dsk/c7t60080E500017F3CC000015914DDF474Dd0s0

end

add device

set match=/dev/rdsk/c7t60080E500017F3CC000015914DDF474Dd0s0

end

add device

set match=/dev/dsk/c7t60080E500017F3A800001E644DDF4865d0s0

end

add device

set match=/dev/rdsk/c7t60080E500017F3A800001E644DDF4865d0s0

end

add device

set match=/dev/dsk/c7t60080E500017F3CC000015934DDF47F4d0s0

end

add device

set match=/dev/rdsk/c7t60080E500017F3CC000015934DDF47F4d0s0

end

add device

set match=/dev/dsk/c7t60080E500017F3A800001E664DDF48B9d0s0

end

add device

set match=/dev/rdsk/c7t60080E500017F3A800001E664DDF48B9d0s0

end

add device

set match=/dev/dsk/c7t60080E500017F44600001FFD4DDF481Ad0s0

end

add device

set match=/dev/rdsk/c7t60080E500017F44600001FFD4DDF481Ad0s0

end

add device

set match=/dev/dsk/c7t60080E500017F378000016714DDF4AE2d0s0

end

add device

set match=/dev/rdsk/c7t60080E500017F378000016714DDF4AE2d0s0

end

add device

set match=/dev/dsk/c7t60080E500017F44600001FFE4DDF49A7d0s0


end

add device

set match=/dev/rdsk/c7t60080E500017F44600001FFE4DDF49A7d0s0

end

add device

set match=/dev/dsk/c7t60080E500017F378000016734DDF4BE0d0s0

end

add device

set match=/dev/rdsk/c7t60080E500017F378000016734DDF4BE0d0s0

end

add device

set match=/dev/dsk/c7t60080E500017F446000020004DDF49E5d0s0

end

add device

set match=/dev/rdsk/c7t60080E500017F446000020004DDF49E5d0s0

end

add dedicated-cpu

set ncpus=4

end

root@pnode01:/# zonecfg -z ve11gr2x01 -f /var/tmp/ve11gr2x01.cfg
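
Before installing the zone, it is worth reviewing the stored configuration and keeping a copy of it; zonecfg info prints the configured resources, and zonecfg export emits a command file from which the zone can be re-created (the backup path below is arbitrary):

root@pnode01:/# zonecfg -z ve11gr2x01 info                                   # review CPUs, memory, networks, and devices
root@pnode01:/# zonecfg -z ve11gr2x01 export > /var/tmp/ve11gr2x01.cfg.bak   # keep a copy for rebuilding the zone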

Installing a Zone

Follow these steps to install a zone using zoneadm -z <zonename> install:

root@pnode01:/# zoneadm list -icv

ID NAME STATUS PATH BRAND IP

0 global running / native shared

1 cont10gr2x01 running /zonespool/cont10gr2x01 native excl

- ve11gr2x01 configured /zonespool/ve11gr2x01 native excl

root@pnode01:/# zoneadm -z ve11gr2x01 install

/zonespool/ve11gr2x01 must not be group readable.

/zonespool/ve11gr2x01 must not be group executable.

/zonespool/ve11gr2x01 must not be world readable.

/zonespool/ve11gr2x01 must not be world executable.

could not verify zonepath /zonespool/ve11gr2x01 because of the above errors.

zoneadm: zone ve11gr2x01 failed to verify

root@pnode01:/#

root@pnode01:/# ls -ld /zonespool/ve11gr2x01

drwxr-xr-x 2 root root 2 Jun 12 03:01 /zonespool/ve11gr2x01

root@pnode01:/# chmod 700 /zonespool/ve11gr2x01


root@pnode01:/# ls -ld /zonespool/ve11gr2x01

drwx------ 2 root root 2 Jun 12 03:01 /zonespool/ve11gr2x01

root@pnode01:/# zoneadm -z ve11gr2x01 install

cannot create ZFS dataset zonespool/ve11gr2x01: dataset already exists

Preparing to install zone <ve11gr2x01>.

Creating list of files to copy from the global zone.

Copying <152094> files to the zone.

Initializing zone product registry.

Determining zone package initialization order.

Preparing to initialize <1245> packages on the zone.

Initialized <1245> packages on zone.

Zone <ve11gr2x01> is initialized.

The file </zonespool/ve11gr2x01/root/var/sadm/system/logs/install_log> contains a log of the

zone installation.

root@pnode01:/#

root@pnode01:/# zoneadm list -icv

ID NAME STATUS PATH BRAND IP

0 global running / native shared

1 cont10gr2x01 running /zonespool/cont10gr2x01 native excl

- cont11gr2x01 installed /zonespool/cont11gr2x01 native excl

- ve11gr2x01 installed /zonespool/ve11gr2x01 native excl

root@pnode01:/#

Configuring a Zone

Use the sysidcfg file to automate the zone configuration. Create the sysidcfg file, as follows, and copy it to all physical nodes.

Note: The root_password value is copied from the /etc/shadow file. The lower-order NIC is the "primary" NIC. For example, here the NICs are configured as igb111000, igb111001, igb120002, and igb120003, and igb111000 is the lower-order NIC. The order is determined by the number (for example, 111000) that follows the driver name (for example, igb).

root@pnode01:/# cat > /var/tmp/sysidcfg.ve11gr2x01

keyboard=US-English

system_locale=C

terminal=vt100

timeserver=localhost

timezone=US/Pacific

network_interface=primary {hostname=ve11gr2x01 ip_address=199.199.111.101 netmask=255.255.255.0

protocol_ipv6=no default_route=199.199.111.254}

root_password=hHKrnpJQJpQZU

security_policy=none


name_service=NONE

nfs4_domain=dynamic

Copy the sysidcfg file to the zone's /etc directory under the zone root, and boot the zone to complete the configuration, as follows.

Note: You can ignore the warning. The zone will boot with the required scheduling classes.

root@pnode01:/# cp /var/tmp/sysidcfg.ve11gr2x01 /zonespool/ve11gr2x01/root/etc/sysidcfg

root@pnode01:/# zoneadm list -icv

ID NAME STATUS PATH BRAND IP

0 global running / native shared

2 cont11gr2x01 running /zonespool/cont11gr2x01 native excl

- cont10gr2x01 installed /zonespool/cont10gr2x01 native excl

- ve11gr2x01 installed /zonespool/ve11gr2x01 native excl

root@pnode01:/# zoneadm -z ve11gr2x01 boot

zoneadm: zone 've11gr2x01': WARNING: unable to set the default scheduling class: Invalid argument

Configuring Oracle Solaris Containers to Install Oracle RAC

Configuring the root User Environment

It is recommended that you do not leave ssh enabled for the root user once setup is complete. During setup, you can enable passwordless ssh login between the zones to ease the process of hopping to different nodes as root. The method described for the oracle account in the "Configuring Passwordless SSH for the oracle and grid Users" section can also be used for the root account.

Configure the zones’ root shell and profile and enable the ssh root login on all zones, as follows.

Note: Inclusion of /usr/local/bin:/usr/local/sbin:. in the PATH variable is optional.

Connect to the physical server as root.

# ssh root@pnode01

Last login: Fri Jul 1 05:43:21 from port05

Sun Microsystems Inc. SunOS 5.10 Generic January 2005

root@pnode01:/#

root@pnode01:/# zlogin ve11gr2x01

[Connected to zone 've11gr2x01' pts/6]

Last login: Fri Jul 1 05:43:11 on console

Sun Microsystems Inc. SunOS 5.10 Generic January 2005

#

# passwd -e root


Old shell: /sbin/sh

New shell: /usr/bin/bash

passwd: password information changed for root

# exit

logout

[Connection to zone 've11gr2x01' pts/6 closed]

root@pnode01:/#

root@pnode01:/# zlogin ve11gr2x01

[Connected to zone 've11gr2x01' pts/6]

Last login: Sat Jul 2 19:13:52 on pts/6

Sun Microsystems Inc. SunOS 5.10 Generic January 2005

-bash-3.00#

-bash-3.00# sed -e 's/^PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config > /tmp/sshd_config

-bash-3.00# mv /tmp/sshd_config /etc/ssh/sshd_config

-bash-3.00# svcadm restart ssh

-bash-3.00# cat > ~/.profile

PS1='\u@\h:\w# '

export PATH=$PATH:/usr/X11/bin:/usr/sfw/bin:/usr/dt/bin:/usr/openwin/bin:/usr/sfw/sbin:/usr/ccs/bin:/usr/local/bin:/usr/local/sbin:.

-bash-3.00# logout

[Connection to zone 've11gr2x01' pts/6 closed]

root@pnode01:/#

root@pnode01:/# zlogin ve11gr2x01

[Connected to zone 've11gr2x01' pts/6]

Last login: Sat Jul 2 19:15:13 on pts/6

Sun Microsystems Inc. SunOS 5.10 Generic January 2005

root@ve11gr2x01:/#

Configuring the Network and Name Service

Configure IPMP and the DNS client by creating the required files, as follows.

DNS Client Configuration

While connected to the zone ve11gr2x01 console, do the following:

root@pnode01:/# zlogin ve11gr2x01

root@ve11gr2x01:/# cat > /etc/resolv.conf

nameserver 199.199.111.250


search doit.com

root@ve11gr2x01:/#

root@ve11gr2x01:/# sed -e '/^hosts/ s/files/files dns/' /etc/nsswitch.conf > /tmp/nsswitch.conf

root@ve11gr2x01:/# cp /tmp/nsswitch.conf /etc

Public and Private Network Configuration Using IPMP

Edit /etc/default/mpathd and set the FAILBACK=no value:

root@ve11gr2x01:/# sed -e '/^FAILBACK/ s/FAILBACK\=yes/FAILBACK\=no/' /etc/default/mpathd > /tmp/mpathd

root@ve11gr2x01:/# cp /tmp/mpathd /etc/default/

Update the /etc/hosts file, populate the /etc/hostname.<interface> files, and restart the network service:

root@ve11gr2x01:/# cat >> /etc/hosts

## All Oracle RAC zone IP addresses:

199.199.111.101 ve11gr2x01

199.199.111.102 ve11gr2x02

199.199.111.103 ve11gr2x03

199.199.111.104 ve11gr2x04

199.199.111.250 dnsserver-111

199.199.111.254 gateway111

## Oracle RAC VIP addresses:

199.199.111.111 ve11gr2x01-vip

199.199.111.112 ve11gr2x02-vip

199.199.111.113 ve11gr2x03-vip

199.199.111.114 ve11gr2x04-vip

## Oracle RAC private IP addresses:

199.199.120.101 ve11gr2x01-priv

199.199.120.102 ve11gr2x02-priv

199.199.120.103 ve11gr2x03-priv

199.199.120.104 ve11gr2x04-priv

199.199.120.105 ve11gr2x05-priv

199.199.120.106 ve11gr2x06-priv

199.199.120.107 ve11gr2x07-priv

199.199.120.108 ve11gr2x08-priv

root@ve11gr2x01:/# cat > /etc/hostname.igb111000

ve11gr2x01 group pub_ipmp0 up

root@ve11gr2x01:/# cat > /etc/hostname.igb111001

group pub_ipmp0 up


root@ve11gr2x01:/# cat > /etc/hostname.igb120002

ve11gr2x01-priv group priv_ipmp0 up

root@ve11gr2x01:/# cat > /etc/hostname.igb120003

group priv_ipmp0 up

root@ve11gr2x01:/#

root@ve11gr2x01:/#

root@ve11gr2x01:/# svcadm restart physical

root@ve11gr2x01:/# ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1

inet 127.0.0.1 netmask ff000000

igb111000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2

inet 199.199.111.101 netmask ffffff00 broadcast 199.199.111.255

groupname pub_ipmp0

ether 0:21:28:6a:be:9a

igb111001: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3

inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255

groupname pub_ipmp0

ether 0:21:28:6a:be:9b

igb120002: flags=201000842<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4

inet 199.199.120.101 netmask ffffff00 broadcast 199.199.120.255

groupname priv_ipmp0

ether 0:21:28:6a:be:9c

igb120003: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 5

inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255

groupname priv_ipmp0

ether 0:21:28:6a:be:9d

root@ve11gr2x01:/#
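
To confirm that IPMP failover actually works, you can detach one interface of a group with if_mpadm while pinging the zone from another host, and then reattach it. A minimal sketch (expect at most a brief pause in the ping; because FAILBACK=no is set, the data address stays on the surviving interface after the reattach):

root@ve11gr2x01:/# if_mpadm -d igb111000   # detach; the data address moves to igb111001
root@ve11gr2x01:/# ifconfig igb111001      # verify the failed-over address
root@ve11gr2x01:/# if_mpadm -r igb111000   # reattach the interface to the group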

Adding Groups, Users, and Projects and Updating User Profiles

Create the oinstall and dba groups, add the oracle and grid users, and add projects as follows.

# groupadd oinstall
# groupadd dba
# useradd -g oinstall -G dba -d /export/oracle -m -s /usr/bin/bash oracle
# useradd -g oinstall -G dba -d /export/grid -m -s /usr/bin/bash grid
# mkdir -p /u01/app/oracle/product/11.2.0/base
# mkdir -p /u01/app/oracle/product/11.2.0/db_1
# mkdir -p /u01/app/oracle/product/11.2.0/crs_1
# mkdir -p /u01/disks/rdsk; mkdir -p /u01/disks/dsk
# chown -R oracle:oinstall /u01
# chown grid:oinstall /u01/app/oracle/product/11.2.0/crs_1


# projadd oracle
# usermod -K project=oracle oracle
# projmod -s -K "project.max-shm-memory=(priv,6gb,deny)" oracle
# projmod -s -K "project.max-shm-ids=(privileged,1024,deny)" oracle
# projmod -s -K "project.max-sem-ids=(privileged,1024,deny)" oracle
# projmod -s -K "project.max-msg-ids=(privileged,1024,deny)" oracle
# projmod -s -K "process.max-sem-nsems=(privileged,4096,deny)" oracle
# projmod -s -K "process.max-sem-ops=(privileged,4096,deny)" oracle
# projmod -s -K "process.max-file-descriptor=(privileged,65536,deny)" oracle
# projadd grid
# usermod -K project=grid grid
# projmod -s -K "project.max-shm-memory=(priv,6gb,deny)" grid
# projmod -s -K "project.max-shm-ids=(privileged,1024,deny)" grid
# projmod -s -K "project.max-sem-ids=(privileged,1024,deny)" grid
# projmod -s -K "project.max-msg-ids=(privileged,1024,deny)" grid
# projmod -s -K "process.max-sem-nsems=(privileged,4096,deny)" grid
# projmod -s -K "process.max-sem-ops=(privileged,4096,deny)" grid
# projmod -s -K "process.max-file-descriptor=(privileged,65536,deny)" grid
# passwd oracle
# passwd grid

Update the oracle and grid user profiles as follows:

root@ve11gr2x01:/# su - oracle

oracle@ve11gr2x01:~$ cat > ~/.profile

PS1='\u@\h:\w$ '

alias l='ls -lrt'

export ORACLE_BASE=/u01/app/oracle/product/11.2.0/base

export GRID_HOME=/u01/app/oracle/product/11.2.0/crs_1

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

export ORACLE_SID=orcl_1

export PATH=$PATH:/usr/sbin:/usr/X11/bin:/usr/sfw/bin:/usr/dt/bin:/usr/openwin/bin:/usr/sfw/sbin:/usr/ccs/bin:/usr/local/bin:/usr/local/sbin:$ORACLE_HOME/bin:.:$GRID_HOME/bin

export TNS_ADMIN=$ORACLE_HOME/network/admin

export ORA_NLS10=$ORACLE_HOME/nls/data

export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib

export CLASSPATH=$ORACLE_HOME/JRE

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib


export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib

export TEMP=/tmp

export TMPDIR=/tmp

oracle@ve11gr2x01:~$ logout

root@ve11gr2x01:/# su - grid

grid@ve11gr2x01:~$ cat > ~/.profile

PS1='\u@\h:\w$ '

alias l='ls -lrt'

export ORACLE_BASE=/u01/app/oracle/product/11.2.0/base

export GRID_HOME=/u01/app/oracle/product/11.2.0/crs_1

export ORACLE_HOME=$GRID_HOME

export ORACLE_SID=+ASM1

export PATH=$PATH:/usr/sbin:/usr/X11/bin:/usr/sfw/bin:/usr/dt/bin:/usr/openwin/bin:/usr/sfw/sbin:/usr/ccs/bin:/usr/local/bin:/usr/local/sbin:$GRID_HOME/bin:.

export TNS_ADMIN=$ORACLE_HOME/network/admin

export ORA_NLS10=$ORACLE_HOME/nls/data

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib

export CLASSPATH=$ORACLE_HOME/JRE

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib

export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib

export TEMP=/tmp

export TMPDIR=/tmp

grid@ve11gr2x01:~$

Configuring Shared Storage Devices Inside a Zone

With shared devices, the CTD name of the same disk might differ from server to server. To resolve this, a soft link with a readable name is created under a common location in all zones, providing a consistent disk name across all zones. The example here shows creating the soft links in one zone; repeat this procedure in all zones.

root@pnode01:/# zlogin ve11gr2x01

root@ve11gr2x01:/# mkdir -p /sharedfiles/disks/rdsk/ /sharedfiles/disks/dsk/

root@ve11gr2x01:/# chmod 664 /dev/rdsk/c7t* /dev/dsk/c7t*

root@ve11gr2x01:/# chown oracle:oinstall /dev/rdsk/c7t* /dev/dsk/c7t*

root@ve11gr2x01:/# chown oracle:oinstall /sharedfiles/disks/rdsk/ /sharedfiles/disks/dsk/

root@ve11gr2x01:/# su - oracle

oracle@ve11gr2x01:~$ cd /sharedfiles/disks/rdsk/

oracle@ve11gr2x01:/sharedfiles/disks/rdsk$ ln -s /dev/rdsk/c7t60080E500017F3A800001E634DDF4433d0s0 dw22A

oracle@ve11gr2x01:/sharedfiles/disks/rdsk$ ln -s /dev/rdsk/c7t60080E500017F3CC000015914DDF474Dd0s0 oltp22A

oracle@ve11gr2x01:/sharedfiles/disks/rdsk$ ln -s /dev/rdsk/c7t60080E500017F3A800001E644DDF4865d0s0 ov22A01

oracle@ve11gr2x01:/sharedfiles/disks/rdsk$ ln -s /dev/rdsk/c7t60080E500017F3CC000015934DDF47F4d0s0 ov22A02

oracle@ve11gr2x01:/sharedfiles/disks/rdsk$ ln -s /dev/rdsk/c7t60080E500017F3A800001E664DDF48B9d0s0 ov22A03

oracle@ve11gr2x01:/sharedfiles/disks/rdsk$ ln -s /dev/rdsk/c7t60080E500017F44600001FFD4DDF481Ad0s0 dw22B

oracle@ve11gr2x01:/sharedfiles/disks/rdsk$ ln -s /dev/rdsk/c7t60080E500017F378000016714DDF4AE2d0s0 oltp22B

oracle@ve11gr2x01:/sharedfiles/disks/rdsk$ ln -s /dev/rdsk/c7t60080E500017F44600001FFE4DDF49A7d0s0 ov22B01

oracle@ve11gr2x01:/sharedfiles/disks/rdsk$ ln -s /dev/rdsk/c7t60080E500017F378000016734DDF4BE0d0s0 ov22B02

oracle@ve11gr2x01:/sharedfiles/disks/rdsk$ ln -s /dev/rdsk/c7t60080E500017F446000020004DDF49E5d0s0 ov22B03
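
Because the same ten links must be created in every zone, a small loop over a CTD-to-volume map is less error-prone than typing each ln -s command by hand. A sketch (the here-document simply restates the mapping from Tables 1 and 2):

cd /sharedfiles/disks/rdsk
while read ctd vol; do
    ln -s /dev/rdsk/"$ctd" "$vol"   # one readable volume name per LUN
done <<'EOF'
c7t60080E500017F3A800001E634DDF4433d0s0 dw22A
c7t60080E500017F3CC000015914DDF474Dd0s0 oltp22A
c7t60080E500017F3A800001E644DDF4865d0s0 ov22A01
c7t60080E500017F3CC000015934DDF47F4d0s0 ov22A02
c7t60080E500017F3A800001E664DDF48B9d0s0 ov22A03
c7t60080E500017F44600001FFD4DDF481Ad0s0 dw22B
c7t60080E500017F378000016714DDF4AE2d0s0 oltp22B
c7t60080E500017F44600001FFE4DDF49A7d0s0 ov22B01
c7t60080E500017F378000016734DDF4BE0d0s0 ov22B02
c7t60080E500017F446000020004DDF49E5d0s0 ov22B03
EOF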

Configuring Passwordless SSH for the oracle and grid Users

Passwordless SSH must be configured for the grid and oracle users on all nodes of an Oracle RAC cluster. A configuration script is provided on the installation medium; alternatively, passwordless SSH can be configured manually, as shown in the following example.

Create an RSA key pair on each node. Do not set a passphrase; simply press Enter at each prompt.

root@ve11gr2x01:/# su - oracle

Sun Microsystems Inc. SunOS 5.10 Generic January 2005

oracle@ve11gr2x01:~$

oracle@ve11gr2x01:~$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/export/oracle/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /export/oracle/.ssh/id_rsa.

Your public key has been saved in /export/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

78:57:17:ba:ef:7a:2e:63:2c:62:05:a6:0a:e2:82:b4 oracle@ve11gr2x01

oracle@ve11gr2x01:~$


On each node, append the public key of every node to the local authorized_keys file, as shown below. Repeat the procedure on all nodes.

This operation also stores the RSA key fingerprint of each node in the known_hosts file of the current node, so subsequent SSH connections do not prompt to authenticate the host.

Repeat the procedure under the grid account to configure a passwordless SSH connection for that second account.

The Oracle RAC software expects to be able to connect through SSH from any node to any other node of the cluster, for both the oracle and grid accounts, without a prompt, passphrase, or password.

ssh ve11gr2x01 "cat .ssh/id_rsa.pub" >> .ssh/authorized_keys

ssh ve11gr2x02 "cat .ssh/id_rsa.pub" >> .ssh/authorized_keys

ssh ve11gr2x03 "cat .ssh/id_rsa.pub" >> .ssh/authorized_keys

ssh ve11gr2x04 "cat .ssh/id_rsa.pub" >> .ssh/authorized_keys

oracle@ve11gr2x01:~$ ssh ve11gr2x01 "cat .ssh/id_rsa.pub" >> .ssh/authorized_keys

The authenticity of host 've11gr2x01 (199.199.111.101)' can't be

established.

RSA key fingerprint is

55:63:b0:8f:b5:2a:73:ec:92:ba:cb:08:16:80:0f:a0.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 've11gr2x01,199.199.111.101' (RSA) to the

list of known hosts.

Password: …

oracle@ve11gr2x01:~$ ssh ve11gr2x02 "cat .ssh/id_rsa.pub" >> .ssh/authorized_keys

The authenticity of host 've11gr2x02 (199.199.111.102)' can't be

established.

RSA key fingerprint is

55:63:b0:8f:b5:2a:73:ec:92:ba:cb:08:16:90:0f:b2.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 've11gr2x02,199.199.111.102' (RSA) to the

list of known hosts.

Password: …

oracle@ve11gr2x01:~$ ssh ve11gr2x03 "cat .ssh/id_rsa.pub" >> .ssh/authorized_keys

The authenticity of host 've11gr2x03 (199.199.111.103)' can't be

established.

RSA key fingerprint is

55:63:b0:8f:b5:2a:73:ec:92:ba:cb:08:16:90:0f:f4.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 've11gr2x03,199.199.111.103' (RSA) to the

list of known hosts.


Password: …

oracle@ve11gr2x01:~$ ssh ve11gr2x04 "cat .ssh/id_rsa.pub" >> .ssh/authorized_keys

The authenticity of host 've11gr2x04 (199.199.111.104)' can't be

established.

RSA key fingerprint is

55:63:b0:8f:b5:2a:73:ec:92:ba:cb:08:16:80:01:40.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 've11gr2x04,199.199.111.104' (RSA) to the

list of known hosts.

Password: …
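
Once the keys have been distributed on all four zones, verify that every hop is prompt-free. With BatchMode=yes, ssh fails rather than prompting, so each iteration should simply print the remote date:

oracle@ve11gr2x01:~$ for h in ve11gr2x01 ve11gr2x02 ve11gr2x03 ve11gr2x04; do ssh -o BatchMode=yes $h date; done

Run the same loop as the grid user, and from every node of the cluster.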

Installing Oracle RAC 11.2.0.1

Ensure that the Oracle Database 11.2.0.1 binaries are available to install, along with mandatory patch 11840629 for the Oracle Solaris Containers environment. For an Oracle Database 11.2.0.1.2 (PSU2) environment, the mandatory patch is 12327147. Download Oracle Database 11.2.0.1 here:

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

Performing a Preinstallation Verification for Oracle RAC

Ping all public and private network IP addresses by name.

root@ve11gr2x01:/#

root@ve11gr2x01:/# ping -s ve11gr2x02

PING ve11gr2x02: 56 data bytes

64 bytes from ve11gr2x02 (199.199.111.102): icmp_seq=0. time=0.500 ms

64 bytes from ve11gr2x02 (199.199.111.102): icmp_seq=1. time=0.396 ms

64 bytes from ve11gr2x02 (199.199.111.102): icmp_seq=2. time=0.363 ms

^C

----ve11gr2x02 PING Statistics----

3 packets transmitted, 3 packets received, 0% packet loss

round-trip (ms) min/avg/max/stddev = 0.363/0.420/0.500/0.072

root@ve11gr2x01:/#

root@ve11gr2x01:/# ping -s ve11gr2x02-priv

PING ve11gr2x02-priv: 56 data bytes

64 bytes from ve11gr2x02-priv (199.199.120.102): icmp_seq=0. time=0.500 ms

64 bytes from ve11gr2x02-priv (199.199.120.102): icmp_seq=1. time=0.396 ms

64 bytes from ve11gr2x02-priv (199.199.120.102): icmp_seq=2. time=0.363 ms

^C

----ve11gr2x02-priv PING Statistics----

3 packets transmitted, 3 packets received, 0% packet loss


round-trip (ms) min/avg/max/stddev = 0.363/0.420/0.500/0.072

root@ve11gr2x01:/#
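
A loop form of the same check covers all the host and private names in one pass. (The VIP names are not tested here because the VIPs are not plumbed until Oracle Clusterware is running; Solaris ping reports each host as alive within the given timeout.)

root@ve11gr2x01:/# for h in ve11gr2x01 ve11gr2x02 ve11gr2x03 ve11gr2x04; do ping $h 2; ping $h-priv 2; done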

Test the SCAN name. Successive queries return the three SCAN addresses in rotated order (here, <x.x.x>.117 is listed first on the first query and <x.x.x>.116 on the second), confirming DNS round-robin across the SCAN addresses.

oracle@ve11gr2x01:~$ nslookup

> rac11gr2x-scan

Server: 199.199.111.250

Address: 199.199.111.250#53

Name: rac11gr2x-scan.doit.com

Address: 199.199.111.117

Name: rac11gr2x-scan.doit.com

Address: 199.199.111.115

Name: rac11gr2x-scan.doit.com

Address: 199.199.111.116

> rac11gr2x-scan

Server: 199.199.111.250

Address: 199.199.111.250#53

Name: rac11gr2x-scan.doit.com

Address: 199.199.111.116

Name: rac11gr2x-scan.doit.com

Address: 199.199.111.117

Name: rac11gr2x-scan.doit.com

Address: 199.199.111.115

> exit

oracle@ve11gr2x01:~$

Test the shared devices using the oracle user as follows.

Warning: This is a write test that destroys any existing data.

oracle@ve11gr2x01:~$ dd if=/dev/zero of=/sharedfiles/disks/rdsk/oltp22A count=5 bs=1024k

5+0 records in

5+0 records out

oracle@ve11gr2x01:~$


Test the shared devices using the oracle user, as follows. This is a read test.

oracle@ve11gr2x01:~$ dd if=/sharedfiles/disks/rdsk/oltp22A of=/dev/null count=5 bs=1024k

5+0 records in

5+0 records out

oracle@ve11gr2x01:~$

Warning: These write and read tests can be performed on all the shared devices before installation. After the installation is complete, however, the write test wipes the device and causes data loss, so be cautious about using it. The read test is safe after installation and can be used to ensure that a device is available and accessible inside an Oracle Solaris Container.

As a final test, run runcluvfy from any of the cluster nodes:

oracle@ve11gr2x01:/my_path_to_distribution/grid_clusterware/grid$ ./runcluvfy.sh stage -pre crsinst -n ve11gr2x01,ve11gr2x02,ve11gr2x03,ve11gr2x04

.....

.....

Clock synchronization check using Network Time Protocol(NTP) passed

Pre-check for cluster services setup was successful.

oracle@ve11gr2x01:/~$

Installing Oracle Clusterware

Installing Oracle Clusterware means installing Oracle Grid Infrastructure, which is commonly done through Oracle Universal Installer.

Do the following to start the Oracle Clusterware installation:

grid@ve11gr2x01:/~$ cd /<my_path_to_distribution/grid_clusterware/grid>

grid@ve11gr2x01:/<my_path_to_distribution/grid_clusterware/grid>$ ./runInstaller

grid@ve11gr2x01:/<my_path_to_distribution/grid_clusterware/grid>$

The GUI should appear. Follow these steps in the GUI screens:

1. Under Installation Option, choose Install and Configure Grid Infrastructure for a cluster, and click Next.

2. Under Installation Type, choose Advanced Installation, and click Next.


3. Under Product Languages, leave the default option, English, unless you need another language, and then click Next.

4. Under Grid Plug and Play, deselect the Configure GNS option, specify the following values, and then click Next:
   Cluster Name: doitcluster
   SCAN Name: rac11gr2x-scan.doit.com
   SCAN Port: 1521

5. Under Cluster Node Information, update the table with the rest of the cluster host names and VIP addresses by clicking the Add button for each entry, as follows, and then click Next:
   ve11gr2x01 ve11gr2x01-vip
   ve11gr2x02 ve11gr2x02-vip
   ve11gr2x03 ve11gr2x03-vip
   ve11gr2x04 ve11gr2x04-vip

6. Under Network Interface Usage, values are shown for Interface Name, Subnet, and Interface Type. These values are correct for this environment, so click Next to continue.

7. Under Storage Options, select the Automatic Storage Management (ASM) option to host the Oracle Cluster Registry and voting files, and click Next.

8. Under Create ASM Disk Group, do the following:
   a. Change the Disk Group Name to suit your naming requirements. Here, it is changed from DATA to ASMDG.
   b. Under Redundancy, choose Normal.
   c. Edit the discovery path by clicking Change Discovery Path, changing the default value to /sharedfiles/disks/rdsk/*, and clicking OK.
   d. From the list of candidate disks, select four disks by selecting their check boxes: /sharedfiles/disks/rdsk/ov22A01, /sharedfiles/disks/rdsk/ov22A02, /sharedfiles/disks/rdsk/ov22B01, and /sharedfiles/disks/rdsk/ov22B02.
   e. Click Next.

9. Under ASM Password, choose Use same passwords for these accounts, specify the password, and click Next. If the password does not meet the Oracle recommended standards, a confirmation window appears; click Yes to continue anyway, or click No to set a more complex password.

10. Under Operating System Groups, leave the default selection and click Next. If a confirmation window appears, select Yes or choose different groups.

11. Under Installation Location, leave the default values (they were set in the environment before launching the installer) or correct the directories where the software needs to be installed, and then click Next.

12. Under Create Inventory, leave the default location and click Next.


13. Prerequisite checks run, and any issues are reported. The checks should pass because the environment was configured as expected.

14. Under Summary, click Finish.

15. The Setup screen appears, where the installation of the binaries takes place. At the end of Setup, the installer asks you to execute the configuration scripts, providing the script location and the order of nodes on which the scripts need to be executed.

16. Run the orainstRoot.sh script on the first node, as follows:

root@ve11gr2x01:/# /u01/app/oracle/product/11.2.0/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oracle/product/11.2.0/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oracle/product/11.2.0/oraInventory to oinstall.

The execution of the script is complete.

root@ve11gr2x01:/#

Then run the script on all other nodes in the order that is specified in the GUI, for example, ve11gr2x02, ve11gr2x03, and ve11gr2x04.

17. Run the root.sh script on the first node.

Caution: It is mandatory to run the script on the first node, wait until the script is finished without error, and then run the script on the next node. This script initializes information in shared disks and cannot be run in parallel on multiple nodes of the grid cluster.

root@ve11gr2x01:/# /u01/app/oracle/product/11.2.0/crs_1/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/oracle/product/11.2.0/crs_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Creating /usr/local/bin directory...

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Creating /var/opt/oracle/oratab file...

Entries will be added to the /var/opt/oracle/oratab file as needed by

Database Configuration Assistant when a database is created


Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2011-07-31 09:19:07: Parsing the host name

2011-07-31 09:19:07: Checking for super user privileges

2011-07-31 09:19:07: User has super user privileges

Using configuration parameter file:

/u01/app/oracle/product/11.2.0/crs_1/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

root wallet

root wallet cert

root cert export

peer wallet

profile reader wallet

pa wallet

peer wallet keys

pa wallet keys

peer cert request

pa cert request

peer cert

pa cert

peer root cert TP

profile reader root cert TP

pa root cert TP

peer pa cert TP

pa peer cert TP

profile reader pa cert TP

profile reader peer cert TP

peer user cert

pa user cert

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

CRS-2672: Attempting to start 'ora.gipcd' on 've11gr2x01'

CRS-2672: Attempting to start 'ora.mdnsd' on 've11gr2x01'

CRS-2676: Start of 'ora.gipcd' on 've11gr2x01' succeeded

CRS-2676: Start of 'ora.mdnsd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 've11gr2x01'

CRS-2676: Start of 'ora.gpnpd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 've11gr2x01'


CRS-2676: Start of 'ora.cssdmonitor' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 've11gr2x01'

CRS-2672: Attempting to start 'ora.diskmon' on 've11gr2x01'

CRS-2676: Start of 'ora.diskmon' on 've11gr2x01' succeeded

CRS-2676: Start of 'ora.cssd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 've11gr2x01'

CRS-2676: Start of 'ora.ctssd' on 've11gr2x01' succeeded

ASM created and started successfully.

DiskGroup ASMDG created successfully.

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-2672: Attempting to start 'ora.crsd' on 've11gr2x01'

CRS-2676: Start of 'ora.crsd' on 've11gr2x01' succeeded

CRS-4256: Updating the profile

Successful addition of voting disk 9bdf40408e674f19bf8f5f50bbe72294.

Successful addition of voting disk 55a8645545dd4f77bf0fbef3a146dbe4.

Successful addition of voting disk a77ed2a3915b4f84bf50c0566e1e448a.

Successfully replaced voting disk group with +ASMDG.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 9bdf40408e674f19bf8f5f50bbe72294 (/sharedfiles/disks/rdsk/ov22A01) [ASMDG]

2. ONLINE 55a8645545dd4f77bf0fbef3a146dbe4 (/sharedfiles/disks/rdsk/ov22A02) [ASMDG]

3. ONLINE a77ed2a3915b4f84bf50c0566e1e448a (/sharedfiles/disks/rdsk/ov22B01) [ASMDG]

Located 3 voting disk(s).

CRS-2673: Attempting to stop 'ora.crsd' on 've11gr2x01'

CRS-2677: Stop of 'ora.crsd' on 've11gr2x01' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 've11gr2x01'

CRS-2677: Stop of 'ora.asm' on 've11gr2x01' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 've11gr2x01'

CRS-2677: Stop of 'ora.ctssd' on 've11gr2x01' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 've11gr2x01'

CRS-2677: Stop of 'ora.cssdmonitor' on 've11gr2x01' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 've11gr2x01'

CRS-2677: Stop of 'ora.cssd' on 've11gr2x01' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 've11gr2x01'


CRS-2677: Stop of 'ora.gpnpd' on 've11gr2x01' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 've11gr2x01'

CRS-2677: Stop of 'ora.gipcd' on 've11gr2x01' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 've11gr2x01'

CRS-2677: Stop of 'ora.mdnsd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.mdnsd' on 've11gr2x01'

CRS-2676: Start of 'ora.mdnsd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 've11gr2x01'

CRS-2676: Start of 'ora.gipcd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 've11gr2x01'

CRS-2676: Start of 'ora.gpnpd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 've11gr2x01'

CRS-2676: Start of 'ora.cssdmonitor' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 've11gr2x01'

CRS-2672: Attempting to start 'ora.diskmon' on 've11gr2x01'

CRS-2676: Start of 'ora.diskmon' on 've11gr2x01' succeeded

CRS-2676: Start of 'ora.cssd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 've11gr2x01'

CRS-2676: Start of 'ora.ctssd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.asm' on 've11gr2x01'

CRS-2676: Start of 'ora.asm' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 've11gr2x01'

CRS-2676: Start of 'ora.crsd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 've11gr2x01'

CRS-2676: Start of 'ora.evmd' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.asm' on 've11gr2x01'

CRS-2676: Start of 'ora.asm' on 've11gr2x01' succeeded

CRS-2672: Attempting to start 'ora.ASMDG.dg' on 've11gr2x01'

CRS-2676: Start of 'ora.ASMDG.dg' on 've11gr2x01' succeeded

ve11gr2x01 2011/07/31 09:24:18

/u01/app/oracle/product/11.2.0/crs_1/cdata/ve11gr2x01/backup_20110731_092418.olr

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 24576 MB Passed

The inventory pointer is located at /var/opt/oracle/oraInst.loc

The inventory is located at /u01/app/oracle/product/11.2.0/oraInventory

'UpdateNodeList' was successful.

root@ve11gr2x01:/#


Then run the script in order on all the other nodes.

Here is an example for the last node, ve11gr2x04:

root@ve11gr2x04:/# /u01/app/oracle/product/11.2.0/crs_1/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/oracle/product/11.2.0/crs_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Creating /usr/local/bin directory...

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Creating /var/opt/oracle/oratab file...

Entries will be added to the /var/opt/oracle/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2011-07-31 09:39:54: Parsing the host name

2011-07-31 09:39:54: Checking for super user privileges

2011-07-31 09:39:54: User has super user privileges

Using configuration parameter file:

/u01/app/oracle/product/11.2.0/crs_1/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node

ve11gr2x01, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

CRS-2672: Attempting to start 'ora.mdnsd' on 've11gr2x04'

CRS-2676: Start of 'ora.mdnsd' on 've11gr2x04' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 've11gr2x04'

CRS-2676: Start of 'ora.gipcd' on 've11gr2x04' succeeded


CRS-2672: Attempting to start 'ora.gpnpd' on 've11gr2x04'

CRS-2676: Start of 'ora.gpnpd' on 've11gr2x04' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 've11gr2x04'

CRS-2676: Start of 'ora.cssdmonitor' on 've11gr2x04' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 've11gr2x04'

CRS-2672: Attempting to start 'ora.diskmon' on 've11gr2x04'

CRS-2676: Start of 'ora.diskmon' on 've11gr2x04' succeeded

CRS-2676: Start of 'ora.cssd' on 've11gr2x04' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 've11gr2x04'

CRS-2676: Start of 'ora.ctssd' on 've11gr2x04' succeeded

CRS-2672: Attempting to start 'ora.asm' on 've11gr2x04'

CRS-2676: Start of 'ora.asm' on 've11gr2x04' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 've11gr2x04'

CRS-2676: Start of 'ora.crsd' on 've11gr2x04' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 've11gr2x04'

CRS-2676: Start of 'ora.evmd' on 've11gr2x04' succeeded

ve11gr2x04 2011/07/31 09:43:39

/u01/app/oracle/product/11.2.0/crs_1/cdata/ve11gr2x04/backup_20110731_094339.olr

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 24576 MB Passed

The inventory pointer is located at /var/opt/oracle/oraInst.loc

The inventory is located at /u01/app/oracle/product/11.2.0/oraInventory

'UpdateNodeList' was successful.

root@ve11gr2x04:/#

18. Click OK in the GUI. The message The installation of Oracle Grid Infrastructure for a Cluster was successful is displayed. Click Close.

Installing Oracle Database

To install Oracle Database, log in as the oracle user, go to the location of the Oracle Database installation software, and perform the following steps:
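As a minimal sketch of this preparation, assuming the installation software was unpacked under /stage/database and an X display is reachable at adminws:0.0 (both placeholders for your environment):

oracle@ve11gr2x01:/$ export DISPLAY=adminws:0.0   # placeholder X display
oracle@ve11gr2x01:/$ cd /stage/database           # placeholder staging location
oracle@ve11gr2x01:/$ ./runInstaller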

1. Run runInstaller to start Oracle Universal Installer.

2. Under Configure Security Updates, to keep things simple in this test environment, deselect the check box and click Next. In the resulting message, click Yes.

3. Under Installation Option, choose Install database software only, and click Next.


4. Under Grid Options, by default, the installer selects Real Application Clusters database installation. Ensure all nodes are selected, and click Next.

5. Under Product Languages, select any other required language or continue with the default language (English), and click Next.

6. Under Database Edition, keep the default selection, Enterprise Edition, and click Next.

7. Under Installation Location, keep the default value, which was established when the environment variables for the oracle user were set. Click Next.

8. Under Operating System Groups, leave the default values and click Next.

9. Under Prerequisite Checks, the checks should pass in this environment; resolve any reported issues before continuing.

10. Under Summary, click Finish. The installation of the binaries starts, and they are copied to all the nodes.

A message says Execute Configuration scripts.

11. Run root.sh as root:

root@ve11gr2x01:/# /u01/app/oracle/product/11.2.0/db_1/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y

Copying coraenv to /usr/local/bin ...

Entries will be added to the /var/opt/oracle/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

root@ve11gr2x01:/#

Then repeat this step on the remaining nodes in order: first on ve11gr2x02, then on ve11gr2x03, and finally on ve11gr2x04.


12. Return to the GUI installer and click OK. A message indicates that the installation of the database was successful.

13. In the Finish screen, click Close to exit the installer.

Applying CRS and Database Patches for the Oracle Solaris Containers Environment

It is important to apply the fix for bug number 11840629, which is required for the Oracle Solaris Containers environment, or to apply the latest patch set or patch set update that contains a fix for that bug. The latest OPatch utility (patch 6880880) is required before installing a patch in the Oracle Solaris Containers environment.

For bug number 11840629, the following fixes are available for Oracle RAC 11.2.0.1, depending on the platform:

• Patch ID 13859663, which is for an Oracle Solaris Containers SPARC environment

• Patch ID 13699024, which is for an Oracle Solaris Containers x86 environment

If you are using Oracle RAC 11.2.0.2.0, install PSU3; no other patches are required.

Please check the Oracle Database Virtualization Web site for patch details.
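As an illustrative sketch only (the /stage directory and the OPatch zip file name are placeholders; pick the patch number for your platform from the list above, and always follow the README shipped with the patch), the typical flow is to refresh OPatch in the Grid Infrastructure home and then apply the patch as root with opatch auto:

grid@ve11gr2x01:/$ cd /u01/app/oracle/product/11.2.0/crs_1
grid@ve11gr2x01:/$ unzip -o /stage/p6880880_112000_SOLARIS64.zip   # refresh OPatch; zip name is a placeholder
root@ve11gr2x01:/# /u01/app/oracle/product/11.2.0/crs_1/OPatch/opatch auto /stage/13859663   # placeholder unzipped patch directory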

Creating the Oracle RAC Database

In this example, Oracle Automatic Storage Management disk groups are used to host Oracle RAC database files. To create an Oracle RAC database, ensure that sufficient disk space is available on the disk group or create a new disk group.

The following sections describe the creation of an Oracle Automatic Storage Management disk group, the creation of the Oracle RAC database, and the postinstallation verification to check that all the database instances are up and running.

Creating the Oracle Automatic Storage Management Disk Group

1. Log in to any one of the zones as the grid user, for example, ve11gr2x01.

2. Ensure ORACLE_SID=+ASM1 is set. (This is set in the .profile file of the grid user).

3. Ensure the DISPLAY shell variable is set appropriately and that it can launch the xclock application.

4. Start the asmca tool to create a disk group (a launch sketch follows this list).

5. Create one disk group, named OLTPDG, using normal redundancy.

6. Because the Oracle Automatic Storage Management disk discovery path was set to /sharedfiles/disks/rdsk/* during disk group creation while installing Oracle Clusterware, select two disks from two different storage arrays: oltp22A and oltp22B.
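A minimal sketch of steps 2 through 4 follows, assuming the grid user's profile puts $ORACLE_HOME/bin on the PATH and using adminws:0.0 as a placeholder X display:

grid@ve11gr2x01:/$ echo $ORACLE_SID             # should print +ASM1
grid@ve11gr2x01:/$ export DISPLAY=adminws:0.0   # placeholder X display
grid@ve11gr2x01:/$ xclock &                     # confirm the display works
grid@ve11gr2x01:/$ asmca                        # launch the ASM Configuration Assistant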


Once the disk group is created, it is also mounted on all the nodes.

The disk group can be used to host any number of databases or can be dedicated to a specific database.

Creating the Oracle RAC Database Using the Oracle Automatic Storage Management Disk Group

1. To create the Oracle RAC database, log in to one of the nodes, ve11gr2x01, as the oracle user.

2. Ensure the DISPLAY shell variable is set appropriately and that it can launch the xclock application.

3. Start the dbca tool to create the Oracle RAC database (a launch sketch follows this list).

4. Choose Oracle Automatic Storage Management to host the data files.
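A corresponding launch sketch for dbca, under the same assumptions (placeholder display, $ORACLE_HOME/bin on the oracle user's PATH):

oracle@ve11gr2x01:/$ export DISPLAY=adminws:0.0   # placeholder X display
oracle@ve11gr2x01:/$ xclock &                     # confirm the display works
oracle@ve11gr2x01:/$ dbca                         # launch the Database Configuration Assistant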

Performing a Postinstallation Verification of the Cluster and Database

Use crsctl check cluster -all to check the status of the cluster services on all nodes, and use crsctl stat res -t to check the status of the various resources, as follows:

root@ve11gr2x01:/$ type crsctl

crsctl is /u01/app/oracle/product/11.2.0/crs_1/bin/crsctl

root@ve11gr2x01:/$ crsctl check cluster -all

**************************************************************

ve11gr2x01:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

ve11gr2x02:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

ve11gr2x03:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

ve11gr2x04:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************


root@ve11gr2x01:/$

root@ve11gr2x01:/$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ASMDG.dg

ONLINE ONLINE ve11gr2x01

ONLINE ONLINE ve11gr2x02

ONLINE ONLINE ve11gr2x03

ONLINE ONLINE ve11gr2x04

ora.LISTENER.lsnr

ONLINE ONLINE ve11gr2x01

ONLINE ONLINE ve11gr2x02

ONLINE ONLINE ve11gr2x03

ONLINE ONLINE ve11gr2x04

ora.asm

ONLINE ONLINE ve11gr2x01 Started

ONLINE ONLINE ve11gr2x02 Started

ONLINE ONLINE ve11gr2x03 Started

ONLINE ONLINE ve11gr2x04 Started

.....

.....

ora.ve11gr2x04.vip

1 ONLINE ONLINE ve11gr2x04

root@ve11gr2x01:/$
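The database instances can also be checked with srvctl; here oltpdb is a placeholder for the database name chosen in dbca:

oracle@ve11gr2x01:/$ srvctl status database -d oltpdb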


References

• Oracle Solaris ZFS Administration Guide http://download.oracle.com/docs/cd/E23823_01/index.html

• "Best Practices for Deploying Oracle RAC Inside Oracle Solaris Containers" http://www.oracle.com/technetwork/articles/systems-hardware-architecture/deploying-rac-in-containers-168438.pdf

• "Highly Available and Scalable Oracle RAC Networking with Oracle Solaris 10 IPMP" http://www.oracle.com/technetwork/articles/systems-hardware-architecture/ha-rac-networking-ipmp-168440.pdf

• Oracle Solaris Containers-Resource Management and Solaris Zones http://download.oracle.com/docs/cd/E23823_01/index.html

• "Virtualization Options for Deploying Oracle Database Deployments on Sun SPARC Enterprise T-Series Systems" http://www.oracle.com/technetwork/articles/systems-hardware-architecture/virtualization-options-t-series-168434.pdf

Installing Oracle Real Application Clusters 11g Release 2 on Oracle Solaris 10 Containers
October 2011, Version 1.0
Author: Mohammed Yousuf
Reviewers: Alain Chéreau, Gia-Khanh Nguyen, Markus Michalewicz, Troy Anthony, Hashamkha Pathan

Oracle Corporation World Headquarters 500 Oracle Parkway Redwood Shores, CA 94065 U.S.A.

Worldwide Inquiries: Phone: +1.650.506.7000 Fax: +1.650.506.7200

oracle.com

Copyright © 2011, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark licensed through X/Open Company, Ltd. 0611