
Install Oracle RAC 10g on Solaris 10 Using VMware

Oracle RAC 10g Overview

Oracle RAC, introduced with Oracle9i, is the successor to Oracle Parallel Server (OPS). RAC allows multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out; at the same time, because all nodes access the same database, the failure of one instance will not cause the loss of access to the database.

Each node has its own redo log files and UNDO tablespace, but the other nodes must be able to access them (and the shared control files) in order to recover that node in the event of a system failure.

The biggest difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS, a request for data from one node to another required the data to be written to disk first; only then could the requesting node read that data. With Cache Fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm.

Not all clustering solutions use shared storage. Some vendors use an approach known as a "Federated Cluster", in which data is spread across several machines rather than shared by all. With Oracle RAC 10g, however, multiple nodes use the same set of disks for storing data: the data files, redo log files, control files, and archived log files reside on shared storage on raw-disk devices, a NAS, ASM, or a clustered file system. Oracle's approach to clustering leverages the collective processing power of all the nodes in the cluster and at the same time provides failover security.

Pre-configured Oracle RAC 10g solutions are available from vendors such as Dell, IBM, and HP for production environments. This article, however, focuses on putting together your own Oracle RAC 10g environment for development and testing by using Solaris servers and a low-cost shared disk solution: iSCSI.


Oracle Database Files

RAC Node Name  Instance Name  Database Name  $ORACLE_BASE     File System / Volume Manager for DB Files
solaris1       orcl1          orcl           /u01/app/oracle  ASM
solaris2       orcl2          orcl           /u01/app/oracle  ASM

Oracle Clusterware Shared Files

File Type                File Name           iSCSI Volume Name  Mount Point  File System
Oracle Cluster Registry  /dev/rdsk/c2t1d0s1  ocr                -            RAW
CRS Voting Disk          /dev/rdsk/c2t2d0s1  vot                -            RAW

Oracle RAC Node 1 - (solaris1)

Purpose:
   rge0 connects solaris1 to the public network.
   e1000g0 connects solaris1 (interconnect) to solaris2 (solaris2-priv).

/etc/inet/hosts:

#
# Internet host table
#
127.0.0.1       localhost
# Public Network - (rge0)
172.16.16.27    solaris1
172.16.16.28    solaris2
# Private Interconnect - (e1000g0)
192.168.2.111   solaris1-priv
192.168.2.112   solaris2-priv
# Public Virtual IP (VIP) addresses for - (rge0)
172.16.16.31    solaris1-vip
172.16.16.32    solaris2-vip
# Private Storage Network for Openfiler - (rge0)
192.168.2.195   openfiler1

Download the Solaris 10 appliance from http://developers.sun.com/solaris/downloads/solaris_apps/index.jsp

Configure the virtual machine with network and virtual disks.

To create and configure the first virtual machine, you will add virtual hardware devices such as disks and processors. Before proceeding with the install, create the Windows folders to house the virtual machines and the shared storage:

D:\>mkdir vm\rac\rac1
D:\>mkdir vm\rac\rac2
D:\>mkdir vm\rac\sharedstorage

Double-click on the VMware Server icon on your desktop to bring up the application:

1. Press CTRL-N to create a new virtual machine.
2. New Virtual Machine Wizard: Click on Next.
3. Select the Appropriate Configuration:
   a. Virtual machine configuration: Select Custom.
4. Select a Guest Operating System:
   a. Guest operating system: Select Linux.
   b. Version: Select Red Hat Enterprise Linux 4.
5. Name the Virtual Machine:
   a. Virtual machine name: Enter "rac1."
   b. Location: Enter "d:\vm\rac\rac1."
6. Set Access Rights:
   a. Access rights: Select Make this virtual machine private.
7. Startup / Shutdown Options:
   a. Virtual machine account: Select User that powers on the virtual machine.
8. Processor Configuration:
   a. Processors: Select One.
9. Memory for the Virtual Machine:
   a. Memory: Select 1024MB.
10. Network Type:
    a. Network connection: Select Use bridged networking.
11. Select I/O Adapter Types:
    a. I/O adapter types: Select LSI Logic.
12. Select a Disk:
    a. Disk: Select Create a new virtual disk.
13. Select a Disk Type:
    a. Virtual Disk Type: Select SCSI (Recommended).
14. Specify Disk Capacity:
    a. Disk capacity: Enter "0.2GB."
    b. Deselect Allocate all disk space now. (To save space, you do not have to allocate all the disk space now.)
15. Specify Disk File:
    a. Disk file: Enter "ocr.vmdk"
    b. Click on Finish.
16. Repeat steps 17 to 25 to create the remaining shared virtual SCSI hard disks listed below: vot.vmdk (0.2GB), asm1.vmdk (2GB), asm2.vmdk (2GB), and asm3.vmdk (2GB).
17. VMware Server Console: Click on Edit virtual machine settings.
18. Virtual Machine Settings: Click on Add.
19. Add Hardware Wizard: Click on Next.
20. Hardware Type:
    a. Hardware types: Select Hard Disk.
21. Select a Disk:
    a. Disk: Select Create a new virtual disk.
22. Select a Disk Type:
    a. Virtual Disk Type: Select SCSI (Recommended).
23. Specify Disk Capacity:
    a. Disk capacity: Enter the capacity of the disk being created (see the list below).
    b. Select Allocate all disk space now. You do not have to allocate all the disk space if you want to save space; however, for performance reasons you will pre-allocate all the disk space for each of the virtual shared disks. If the shared disks were to grow rapidly, especially during Oracle database creation or when the database is under heavy DML activity, the virtual machines may hang intermittently for a brief period or, on a few rare occasions, crash.
24. Specify Disk File:
    a. Disk file: Enter the file name for the disk being created.
    b. Click on Advanced.
25. Add Hardware Wizard:
    a. Virtual device node: Select SCSI 1:0.
    b. Mode: Select Independent, Persistent for all shared disks.
    c. Click on Finish.

• ocr.vmdk  -> 0.2 GB
• vot.vmdk  -> 0.2 GB
• asm1.vmdk -> 2 GB
• asm2.vmdk -> 2 GB
• asm3.vmdk -> 2 GB

• Start VMware.

Network

• Run sys-unconfig: answer y at the prompt and, after the system reboots, configure the network for the two Ethernet interfaces with their IPs and gateways:


vmxnet0 -> hostname: solaris1       ip: 172.16.16.27   netmask: 255.255.255.0   gateway: 172.16.16.1
vmxnet1 -> hostname: solaris1-priv  ip: 192.168.2.111  netmask: 255.255.255.0
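The interface settings can also be made persistent with the standard Solaris 10 configuration files. The following is a minimal sketch (run as root; the vmxnet interface names and the addresses are the ones assumed above, so adjust them to your setup):

# echo "solaris1" > /etc/hostname.vmxnet0
# echo "solaris1-priv" > /etc/hostname.vmxnet1
# echo "172.16.16.0 255.255.255.0" >> /etc/netmasks
# echo "192.168.2.0 255.255.255.0" >> /etc/netmasks
# echo "172.16.16.1" > /etc/defaultrouter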

Modify virtual machine configuration file (.vmx)

disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"
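For reference, once the shared disks are attached to the second SCSI controller, the .vmx file ends up with entries along the following lines. This is only a sketch, not a full listing, and the file paths are assumptions based on the d:\vm\rac\sharedstorage folder created earlier (the first two disks are shown; the ASM disks follow the same pattern on scsi1:2 through scsi1:4):

scsi1.present = "TRUE"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "D:\vm\rac\sharedstorage\ocr.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "D:\vm\rac\sharedstorage\vot.vmdk"
scsi1:1.mode = "independent-persistent"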

After adding the virtual disks, restart VMware. Then tell Solaris to look for new devices when booting:

• Select the Solaris entry in the GRUB menu that you want to boot.
• To edit it, enter e.
• Select the "kernel /platform" line.
• To edit that, enter e again.
• Add a space followed by -r to the end of the 'kernel' line:
  kernel /platform/i86pc/multiboot -r
• Press the Enter key to accept the change.
• Press b to boot.

OPTIONAL: Another method to force Solaris to rebuild its device tree while the system is running is to create an empty file and reboot Solaris:

# touch /reconfigure
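If the system is already running, you can also ask Solaris to build the device entries without a reconfiguration boot by using devfsadm. This is a standard Solaris 10 command, shown here as an alternative; it is not one of the original steps:

# devfsadm          (build /dev entries for newly attached devices)
# devfsadm -Cv      (optionally remove stale /dev links, verbosely)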

Formatting Disks

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 7293 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
       1. c2t1d0 <DEFAULT cyl 157 alt 2 hd 64 sec 32>
          /iscsi/[email protected]%3Arac1.ocr0001,0
       2. c2t2d0 <DEFAULT cyl 61 alt 2 hd 64 sec 32>
          /iscsi/[email protected]%3Arac1.vot0001,0
       3. c2t3d0 <DEFAULT cyl 157 alt 2 hd 64 sec 32>
          /iscsi/[email protected]%3Arac1.asmspfile0001,0
       4. c2t4d0 <DEFAULT cyl 11473 alt 2 hd 255 sec 63>
          /iscsi/[email protected]%3Arac1.asm10001,0
       5. c2t5d0 <DEFAULT cyl 11473 alt 2 hd 255 sec 63>
          /iscsi/[email protected]%3Arac1.asm20001,0
Specify disk (enter its number): 1
selecting c2t1d0
[disk formatted]

FORMAT MENU:
        disk      - select a disk
        type      - select (define) a disk type
        partition - select (define) a partition table
        current   - describe the current disk
        format    - format and analyze the disk
        fdisk     - run the fdisk program
        repair    - repair a defective sector
        label     - write label to the disk
        analyze   - surface analysis
        defect    - defect list management
        backup    - search for backup labels
        verify    - read and display labels
        save      - save new disk/partition definitions
        inquiry   - show vendor, product and revision
        volname   - set 8-character volume name
        !<cmd>    - execute <cmd>, then return
format> partition
Please run fdisk first
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table. y
format> partition

PARTITION MENU:
        0      - change '0' partition
        1      - change '1' partition
        2      - change '2' partition
        3      - change '3' partition
        4      - change '4' partition
        5      - change '5' partition
        6      - change '6' partition
        7      - change '7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write the partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> print
Current partition table (original):
Total disk cylinders available: 156 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders    Size       Blocks
  0 unassigned    wm       0           0          (0/0/0)        0
  1 unassigned    wm       0           0          (0/0/0)        0
  2     backup    wu       0 - 156     157.00MB   (157/0/0) 321536
  3 unassigned    wm       0           0          (0/0/0)        0
  4 unassigned    wm       0           0          (0/0/0)        0
  5 unassigned    wm       0           0          (0/0/0)        0
  6 unassigned    wm       0           0          (0/0/0)        0
  7 unassigned    wm       0           0          (0/0/0)        0
  8       boot    wu       0 -   0     1.00MB     (1/0/0)     2048
  9 unassigned    wm       0           0          (0/0/0)        0

partition> 1
Part      Tag    Flag     Cylinders    Size       Blocks
  1 unassigned    wm       0           0          (0/0/0)        0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 153c
partition> label
Ready to label disk, continue? y
partition> quit
format> disk 2
selecting c2t2d0
[disk formatted]
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table. y
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 60 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders    Size       Blocks
  0 unassigned    wm       0           0          (0/0/0)        0
  1 unassigned    wm       0           0          (0/0/0)        0
  2     backup    wu       0 -  60     61.00MB    (61/0/0)  124928
  3 unassigned    wm       0           0          (0/0/0)        0
  4 unassigned    wm       0           0          (0/0/0)        0
  5 unassigned    wm       0           0          (0/0/0)        0
  6 unassigned    wm       0           0          (0/0/0)        0
  7 unassigned    wm       0           0          (0/0/0)        0
  8       boot    wu       0 -   0     1.00MB     (1/0/0)     2048
  9 unassigned    wm       0           0          (0/0/0)        0

partition> 1
Part      Tag    Flag     Cylinders    Size       Blocks
  1 unassigned    wm       0           0          (0/0/0)        0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 57c
partition> label
Ready to label disk, continue? y
partition> quit
format> disk 3
selecting c2t3d0
[disk formatted]
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table. y
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 156 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders    Size       Blocks
  0 unassigned    wm       0           0          (0/0/0)        0
  1 unassigned    wm       0           0          (0/0/0)        0
  2     backup    wu       0 - 156     157.00MB   (157/0/0) 321536
  3 unassigned    wm       0           0          (0/0/0)        0
  4 unassigned    wm       0           0          (0/0/0)        0
  5 unassigned    wm       0           0          (0/0/0)        0
  6 unassigned    wm       0           0          (0/0/0)        0
  7 unassigned    wm       0           0          (0/0/0)        0
  8       boot    wu       0 -   0     1.00MB     (1/0/0)     2048
  9 unassigned    wm       0           0          (0/0/0)        0

partition> 1
Part      Tag    Flag     Cylinders    Size       Blocks
  1 unassigned    wm       0           0          (0/0/0)        0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 153c
partition> label
Ready to label disk, continue? y
partition> quit
format> disk 4
selecting c2t4d0
[disk formatted]
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table. y
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 11472 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders     Size       Blocks
  0 unassigned    wm       0            0          (0/0/0)            0
  1 unassigned    wm       0            0          (0/0/0)            0
  2     backup    wu       0 - 11472    87.89GB    (11473/0/0) 184313745
  3 unassigned    wm       0            0          (0/0/0)            0
  4 unassigned    wm       0            0          (0/0/0)            0
  5 unassigned    wm       0            0          (0/0/0)            0
  6 unassigned    wm       0            0          (0/0/0)            0
  7 unassigned    wm       0            0          (0/0/0)            0
  8       boot    wu       0 -     0    7.84MB     (1/0/0)        16065
  9 unassigned    wm       0            0          (0/0/0)            0

partition> 1
Part      Tag    Flag     Cylinders     Size       Blocks
  1 unassigned    wm       0            0          (0/0/0)            0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 11468c
partition> label
Ready to label disk, continue? y
partition> quit
format> disk 5
selecting c2t5d0
[disk formatted]
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table. y
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 11472 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders     Size       Blocks
  0 unassigned    wm       0            0          (0/0/0)            0
  1 unassigned    wm       0            0          (0/0/0)            0
  2     backup    wu       0 - 11472    87.89GB    (11473/0/0) 184313745
  3 unassigned    wm       0            0          (0/0/0)            0
  4 unassigned    wm       0            0          (0/0/0)            0
  5 unassigned    wm       0            0          (0/0/0)            0
  6 unassigned    wm       0            0          (0/0/0)            0
  7 unassigned    wm       0            0          (0/0/0)            0
  8       boot    wu       0 -     0    7.84MB     (1/0/0)        16065
  9 unassigned    wm       0            0          (0/0/0)            0

partition> 1
Part      Tag    Flag     Cylinders     Size       Blocks
  1 unassigned    wm       0            0          (0/0/0)            0

Enter partition id tag[unassigned]: 3
`3' not expected.
Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 11468c
partition> label
Ready to label disk, continue? y
partition> quit
format> disk 6
selecting c2t6d0
[disk formatted]
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table. y
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 12745 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders     Size       Blocks
  0 unassigned    wm       0            0          (0/0/0)            0
  1 unassigned    wm       0            0          (0/0/0)            0
  2     backup    wu       0 - 12745    97.64GB    (12746/0/0) 204764490
  3 unassigned    wm       0            0          (0/0/0)            0
  4 unassigned    wm       0            0          (0/0/0)            0
  5 unassigned    wm       0            0          (0/0/0)            0
  6 unassigned    wm       0            0          (0/0/0)            0
  7 unassigned    wm       0            0          (0/0/0)            0
  8       boot    wu       0 -     0    7.84MB     (1/0/0)        16065
  9 unassigned    wm       0            0          (0/0/0)            0

partition> 1
Part      Tag    Flag     Cylinders     Size       Blocks
  1 unassigned    wm       0            0          (0/0/0)            0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 12740c
partition> label
Ready to label disk, continue? y
partition> quit
format> quit
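Before moving on, you can sanity-check the label on each disk with prtvtoc (not part of the original transcript). Slice 2 conventionally maps the whole disk, so the output should show the new slice 1 starting at cylinder 3. For example, for the OCR disk:

# prtvtoc /dev/rdsk/c2t1d0s2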

Create oracle User and Directories

Perform the following tasks on all Oracle nodes in the cluster.

We will create the dba group and the oracle user account along with all appropriate directories:

# mkdir -p /u01/app
# groupadd -g 115 dba
# useradd -u 175 -g 115 -d /u01/app/oracle -m -s /usr/bin/bash -c "Oracle Software Owner" oracle
# chown -R oracle:dba /u01


# passwd oracle
# su - oracle

When setting the oracle environment variables for each Oracle RAC node, be sure to assign each RAC node a unique Oracle SID. For this example, we used:

• solaris1: ORACLE_SID=orcl1
• solaris2: ORACLE_SID=orcl2

After creating the oracle user account on both nodes, ensure that the environment is set up correctly by using the following .bash_profile (please note that the .bash_profile will not exist on Solaris; you will have to create it).

# .bash_profile
umask 022
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/crs
# Each RAC node must have a unique ORACLE_SID. (i.e. orcl1, orcl2,...)
export ORACLE_SID=orcl1
export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin
export PATH=${PATH}:/usr/bin:/bin:/usr/local/bin:/usr/sfw/bin
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS10=$ORACLE_HOME/nls/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
export CLASSPATH=$ORACLE_HOME/JRE
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export TEMP=/tmp
export TMPDIR=/tmp
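As a quick sanity check (a sketch, not part of the original article), source the new profile and echo a couple of the variables:

$ . ~/.bash_profile
$ echo $ORACLE_HOME
/u01/app/oracle/product/10.2.0/db_1
$ echo $ORACLE_SID
orcl1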

Configure the Solaris Servers for Oracle

Perform the following configuration procedure on both Oracle RAC nodes in the cluster.

This section focuses on configuring both Oracle RAC Solaris servers - getting each one prepared for the Oracle RAC 10g installation. A lot of the information in the kernel parameters section is referenced from Oracle ACE Howard J. Rogers' excellent Web site, Dizwell Informatics.

Setting Device Permissions

The devices we will be using for the various components of this article (e.g. the OCR and the voting disk) must have the appropriate ownership and permissions set on them before we can proceed to the installation stage. We will set the permissions and ownership using the chown and chmod commands as follows (this must be done as the root user):

# chown root:dba /dev/rdsk/c2t1d0s1
# chmod 660 /dev/rdsk/c2t1d0s1


# chown oracle:dba /dev/rdsk/c2t2d0s1
# chmod 660 /dev/rdsk/c2t2d0s1
# chown oracle:dba /dev/rdsk/c2t3d0s1
# chown oracle:dba /dev/rdsk/c2t4d0s1
# chown oracle:dba /dev/rdsk/c2t5d0s1
# chown oracle:dba /dev/rdsk/c2t6d0s1

These permissions will be persistent across reboots. No further configuration needs to be performed with the permissions.
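If you prefer a loop over the repeated chown commands, the same thing can be scripted. This is a sketch run as root; note that it also applies the 660 mode used above to the ASM slices, which the original commands did not do explicitly:

# for d in c2t3d0s1 c2t4d0s1 c2t5d0s1 c2t6d0s1
> do
>   chown oracle:dba /dev/rdsk/$d
>   chmod 660 /dev/rdsk/$d
> done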

Creating Symbolic Links for the SSH Binaries

The Oracle Universal Installer and configuration assistants (such as NETCA) will look for the SSH binaries in the wrong location on Solaris. Even if the SSH binaries are included in your path when you start these programs, they will still look for the binaries in the wrong location. On Solaris, the SSH binaries are located in the /usr/bin directory by default, so the Oracle Universal Installer will throw an error stating that it cannot find the ssh or scp binaries. For this article, my workaround was to simply create a symbolic link in the /usr/local/bin directory for each of these binaries. This workaround was quick and easy to implement and worked perfectly.

# ln -s /usr/bin/ssh /usr/local/bin/ssh
# ln -s /usr/bin/scp /usr/local/bin/scp

Remove any STTY Commands

During an Oracle Clusterware installation, the Oracle Universal Installer uses SSH to perform remote operations on other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause installation errors if they contain stty commands.

To avoid this problem, you must modify these files to suppress all output on STDERR, as in the following examples:

• Bourne, bash, or Korn shell:

  if [ -t 0 ]; then
    stty intr ^C
  fi

• C shell:

  test -t 0
  if ($status == 0) then
    stty intr ^C
  endif

Setting Kernel Parameters

In Solaris 10, there is a new way of setting kernel parameters. The old Solaris 8 and 9 way of setting kernel parameters by editing the /etc/system file is deprecated. Solaris 10 instead sets kernel parameters using the resource control facility, and this method does not require the system to be rebooted for the change to take effect. Let's start by creating a new resource project:

# projadd oracle

Kernel parameters are merely attributes of a resource project, so new kernel parameter values can be established by modifying the attributes of a project. First we need to make sure that the oracle user we created earlier knows to use the new oracle project for its resource limits. This is accomplished by editing the /etc/user_attr file to look like this:

#
# Copyright (c) 2003 by Sun Microsystems, Inc.  All rights reserved.
#
# /etc/user_attr
#
# user attributes. see user_attr(4)
#
#pragma ident "@(#)user_attr 1.1 03/07/09 SMI"
#
adm::::profiles=Log Management
lp::::profiles=Printer Management
root::::auths=solaris.*,solaris.grant;profiles=Web Console Management,All;lock_after_retries=no
oracle::::project=oracle

This assigns the oracle user to the new resource project called oracle whenever he/she logs on. Log on as the oracle user to test it out:

# su - oracle
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ id -p
uid=175(oracle) gid=115(dba) projid=100(oracle)
$

The last output here indicates that the oracle user has been assigned to project number 100, the oracle project. To check the resource parameter values that have been assigned to the project, issue the following command:

$ prctl -n project.max-shm-memory -i project oracle
project: 100: oracle
NAME    PRIVILEGE       VALUE    FLAG   ACTION          RECIPIENT
project.max-shm-memory
        privileged      510MB      -    deny                    -
        system          16.0EB    max   deny                    -
$

This indicates that the maximum size of shared memory segments that the oracle user can create is 510MB. Unless that number is in the gigabyte range, it will need to be changed, as the value recommended by Oracle is 4GB.

At this point, leave the oracle user logged in to the original terminal session and start a brand new session as the root user. A resource project's settings can only be modified dynamically if there is at least one user logged in that is actually assigned to that project. As the root user in the new session, issue the following command:

# prctl -n project.max-shm-memory -v 4gb -r -i project oracle

After issuing this command, switch back to the oracle user's session and re-issue the earlier command:

$ prctl -n project.max-shm-memory -i project oracle
project: 100: oracle
NAME    PRIVILEGE       VALUE    FLAG   ACTION          RECIPIENT
project.max-shm-memory
        privileged      4.00GB     -    deny                    -
        system          16.0EB    max   deny                    -
$

Now we can see that the value for the maximum size of shared memory segments is 4GB. This sets a maximum size for the shared memory segment; it does not mean that the Oracle instance will actually be 4GB.

This procedure sets the correct value for the max-shm-memory kernel parameter dynamically, but if the server is rebooted, the new value would be lost. To make the value permanent across reboots, issue the following command in the root user's session:

# projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" oracle

There are other kernel parameters which the Oracle documentation instructs you to check and modify if necessary. However, if you are following along with this article and just installed Solaris 10, these parameters will be set to acceptable values by default. Therefore, there is no need to alter these parameters. For more information on resource projects in Solaris 10, see the System Administration Guide: Solaris Containers-Resource Management and Solaris Zones, available from http://docs.sun.com.
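To confirm the setting was persisted (a sketch; depending on the release, the value may be stored and displayed in bytes rather than as "4gb"), inspect the project database:

# projects -l oracle
oracle
        projid : 100
        comment: ""
        users  : (none)
        groups : (none)
        attribs: project.max-shm-memory=(priv,4294967296,deny)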

**** Shut down solaris1 and copy it to solaris2 ****

After booting the second machine, change its /etc/hosts file so that solaris2 is the loghost.

• Change /etc/hostname.vmxnet0 to solaris2

• Change /etc/hostname.vmxnet1 to solaris2-priv

• Change /etc/nodename to solaris2

• svcadm restart network/physical

Reboot.
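Once solaris2 is back up, a quick check (a sketch, not from the original article) confirms the new identity and the interface addresses:

# uname -n
solaris2
# ifconfig -a | grep inet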


Configure RAC Nodes for Remote Access

Perform the following configuration procedures on both Oracle RAC nodes in the cluster.

Before you can install and use Oracle RAC, you must configure either secure shell (SSH) or remote shell (RSH) for the oracle user account on both of the Oracle RAC nodes in the cluster. The goal here is to set up user equivalence for the oracle user account. User equivalence enables the oracle user account to access all other nodes in the cluster without the need for a password. This can be configured using either SSH or RSH, where SSH is the preferred method.

In this article, we will just discuss the secure shell method for establishing user equivalency. For steps on how to use the remote shell method, please see this section of Jeffrey Hunter's original article.

Creating RSA Keys on Both Oracle RAC Nodes

The first step in configuring SSH is to create RSA key pairs on both Oracle RAC nodes in the cluster. The command to do this will create a public and private key for the RSA algorithm. The content of the RSA public key will then need to be copied into an authorized key file which is then distributed to both of the Oracle RAC nodes in the cluster.

Use the following steps to create the RSA key pair. Please note that these steps will need to be completed on both Oracle RAC nodes in the cluster:

1. Log on as the oracle user account:

   # su - oracle

2. If necessary, create the .ssh directory in the oracle user's home directory and set the correct permissions on it:


$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh

3. Enter the following command to generate an RSA key pair (public and private key) for version 3 of the SSH protocol:

   $ /usr/bin/ssh-keygen -t rsa

   At the prompts:

o Accept the default location for the key files.
o Enter and confirm a pass phrase. This should be different from the oracle user account password; however, it is not a requirement, i.e. you do not have to enter any password.

This command will write the public key to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file. Note that you should never distribute the private key to anyone.

4. Repeat the above steps for each Oracle RAC node in the cluster.

Now that both Oracle RAC nodes contain a public and private key pair for RSA, you will need to create an authorized key file on one of the nodes. An authorized key file is nothing more than a single file that contains a copy of everyone's (every node's) RSA public key. Once the authorized key file contains all of the public keys, it is then distributed to all other nodes in the RAC cluster.

Complete the following steps on one of the nodes in the cluster to create and then distribute the authorized key file. For the purpose of this article, I am using solaris1.

1. First, determine if an authorized key file already exists on the node (~/.ssh/authorized_keys). In most cases this will not exist since this article assumes you are working with a new install. If the file doesn't exist, create it now:

   $ touch ~/.ssh/authorized_keys
   $ cd ~/.ssh

2. In this step, use SSH to copy the content of the ~/.ssh/id_rsa.pub public key from each Oracle RAC node in the cluster to the authorized key file just created (~/.ssh/authorized_keys). Again, this will be done from solaris1. You will be prompted for the oracle user account password for both Oracle RAC nodes accessed. Notice that when using SSH to access the node you are on (solaris1), the first time it prompts for the oracle user account password. The second attempt at accessing this node will prompt for the pass phrase used to unlock the private key. For any of the remaining nodes, it will always ask for the oracle user account password. The following example is being run from solaris1 and assumes a 2-node cluster, with nodes solaris1 and solaris2:

   $ ssh solaris1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
   The authenticity of host 'solaris1 (172.16.16.27)' can't be established.
   RSA key fingerprint is a5:de:ee:2a:d8:10:98:d7:ce:ec:d2:f9:2c:64:2e:e5
   Are you sure you want to continue connecting (yes/no)? yes
   Warning: Permanently added 'solaris1,172.16.16.27' (RSA) to the list of known hosts.
   oracle@solaris1's password:

   $ ssh solaris2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
   The authenticity of host 'solaris2 (172.16.16.28)' can't be established.
   RSA key fingerprint is d2:99:ed:a2:7b:10:6f:3e:e1:da:4a:45:d5:34:33:5b
   Are you sure you want to continue connecting (yes/no)? yes
   Warning: Permanently added 'solaris2,172.16.16.28' (RSA) to the list of known hosts.
   oracle@solaris2's password:

   Note: The first time you use SSH to connect to a node from a particular system, you may see a message similar to the following:

   The authenticity of host 'solaris1 (172.16.16.27)' can't be established.
   RSA key fingerprint is a5:de:ee:2a:d8:10:98:d7:ce:ec:d2:f9:2c:64:2e:e5
   Are you sure you want to continue connecting (yes/no)? yes

   Enter yes at the prompt to continue. You should not see this message again when you connect from this system to the same node.

3. At this point, we have the content of the RSA public key from every node in the cluster in the authorized key file (~/.ssh/authorized_keys) on solaris1. We now need to copy it to the remaining nodes in the cluster. In our two-node cluster example, the only remaining node is solaris2. Use the scp command to copy the authorized key file to all remaining nodes in the cluster:

   $ scp ~/.ssh/authorized_keys solaris2:.ssh/authorized_keys
   oracle@solaris2's password:
   authorized_keys      100% 1534     1.2KB/s   00:00

4. Change the permission of the authorized key file for both Oracle RAC nodes in the cluster by logging into the node and running the following:

   $ chmod 600 ~/.ssh/authorized_keys

5. At this point, if you use ssh to log in or run a command on another node, you are prompted for the pass phrase that you specified when you created the RSA key. For example, test the following from solaris1:

   $ ssh solaris1 hostname
   Enter passphrase for key '/u01/app/oracle/.ssh/id_rsa':
   solaris1
   $ ssh solaris2 hostname
   Enter passphrase for key '/u01/app/oracle/.ssh/id_rsa':
   solaris2

   Note: If you see any other messages or text apart from the host name, the Oracle installation can fail. Make any changes required to ensure that only the host name is displayed when you enter these commands. You should ensure that any part of a login script that generates output, or asks any questions, is modified so that it acts only when the shell is an interactive shell.

Enabling SSH User Equivalency for the Current Shell Session

When you run the OUI, it will need to run the secure shell tool commands (ssh and scp) without being prompted for a password. Even though SSH is configured on both Oracle RAC nodes in the cluster, using the secure shell tool commands will still prompt for a password. Before running the OUI, you need to enable user equivalence for the terminal session you plan to run the OUI from. For the purpose of this article, all Oracle installations will be performed from solaris1.

User equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. If you log out and log back in to the node you will be performing the Oracle installation from, you must enable user equivalence for the terminal session as this is not done by default.

To enable user equivalence for the current terminal shell session, perform the following steps:

1. Log on to the node where you want to run the OUI from (solaris1) as the oracle user:

   # su - oracle

2. Enter the following commands:

   $ exec ssh-agent $SHELL
   $ ssh-add
   Enter passphrase for /u01/app/oracle/.ssh/id_rsa:
   Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)

   At the prompts, enter the pass phrase for each key that you generated.

3. If SSH is configured correctly, you will be able to use the ssh and scp commands without being prompted for a password or pass phrase from this terminal session:

   $ ssh solaris1 "date;hostname"
   Wed Jan 24 17:12 CST 2007
   solaris1
   $ ssh solaris2 "date;hostname"
   Wed Jan 24 17:12:55 CST 2007
   solaris2

   Note: The commands above should display the date set on both Oracle RAC nodes along with the hostname. If either node prompts for a password or pass phrase, verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys. Also, if you see any other messages or text apart from the date and hostname, the Oracle installation can fail. Make any changes required to ensure that only the date and hostname are displayed when you enter these commands. You should ensure that any part of a login script that generates output, or asks any questions, is modified so that it acts only when the shell is an interactive shell.

4. The Oracle Universal Installer is a GUI interface and requires the use of an X server. From the terminal session enabled for user equivalence (the node you will be performing the Oracle installations from), set the environment variable DISPLAY to a valid X Windows display:

   Bourne, Korn, and Bash shells:
   $ DISPLAY=<Any X-Windows Host>:0
   $ export DISPLAY

   C shell:
   $ setenv DISPLAY <Any X-Windows Host>:0

5. You must run the Oracle Universal Installer from this terminal session or remember to repeat the steps to enable user equivalence (steps 2, 3, and 4 from this section) before you start the OUI from a different terminal session.

Download Oracle RAC 10g Software

The following download procedures only need to be performed on one node in the cluster.

The next logical step is to install Oracle Clusterware Release 2 (10.2.0.2.0) and Oracle Database 10g Release 2 (10.2.0.2.0). For this article, I chose not to install the Companion CD as Jeffrey Hunter did in his article. This is a matter of choice - if you want to install the companion CD, by all means do.

In this section, we will be downloading and extracting the required software from Oracle to only one of the Solaris nodes in the RAC cluster - namely solaris1. This is the machine where I will be performing all of the Oracle installs from. The Oracle installer will copy the required software packages to all other nodes in the RAC configuration using the user equivalency we setup in the section "Configure RAC Nodes for Remote Access". Login to the node that you will be performing all of the Oracle installations from (solaris1) as the oracle user account. In this example, I will be downloading the required Oracle software to solaris1 and saving them to /u01/app/oracle/orainstall.

Oracle Clusterware Release 2 (10.2.0.2.0) for Solaris 10 x86

First, download the Oracle Clusterware Release 2 software for Solaris 10 x86.

Oracle Clusterware Release 2 (10.2.0.2.0)

• 10202_clusterware_solx86.zip

Oracle Database 10g Release 2 (10.2.0.2.0) for Solaris 10 x86

First, download the Oracle Database Release 2 software for Solaris 10 x86.

Oracle Database 10g Release 2 (10.2.0.2.0)

• 10202_database_solx86.zip

As the oracle user account, extract the two packages you downloaded to a temporary directory. In this example, I will use /u01/app/oracle/orainstall. Extract the Clusterware package as follows:

# su - oracle
$ cd ~oracle/orainstall
$ unzip 10202_clusterware_solx86.zip


Then extract the Oracle10g Database software:

$ cd ~oracle/orainstall
$ unzip 10202_database_solx86.zip

Pre-Installation Tasks for Oracle RAC 10g

Perform the following checks on all Oracle RAC nodes in the cluster.

The following packages must be installed on each server before you can continue:

SUNWlibms  SUNWtoo   SUNWi1cs  SUNWi15cs
SUNWxwfnt  SUNWxwplt SUNWmfrun SUNWxwplr
SUNWxwdv   SUNWgcc   SUNWbtool SUNWi1of
SUNWhea    SUNWlibm  SUNWsprot SUNWuiu8

To check whether any of these required packages are installed on your system, use the pkginfo -i package_name command as follows:

# pkginfo -i SUNWlibms
system      SUNWlibms Math & Microtasking Libraries (Usr)
#

If you need to install any of the above packages, use the pkgadd -d package_name command.
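To run the check for the entire list in one pass, a small loop helps (a sketch using the same package list; pkginfo -q suppresses output and returns only an exit status):

# for p in SUNWlibms SUNWtoo SUNWi1cs SUNWi15cs SUNWxwfnt SUNWxwplt \
>          SUNWmfrun SUNWxwplr SUNWxwdv SUNWgcc SUNWbtool SUNWi1of \
>          SUNWhea SUNWlibm SUNWsprot SUNWuiu8
> do
>   pkginfo -q $p || echo "$p is missing"
> done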

Install Oracle 10g Clusterware Software

Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (solaris1). The Oracle Clusterware software will be installed to both of the Oracle RAC nodes in the cluster by the OUI.

You are now ready to install the "cluster" part of the environment - the Oracle Clusterware. In a previous section, you downloaded and extracted the install files for Oracle Clusterware to solaris1 in the directory /u01/app/oracle/orainstall/clusterware. This is the only node from which you need to perform the install.

During the installation of Oracle Clusterware, you will be asked for the nodes to configure in the RAC cluster. Once the actual installation starts, it will copy the required


software to all nodes using the remote access we configured in the section "Configure RAC Nodes for Remote Access".

After installing Oracle Clusterware, the Oracle Universal Installer (OUI) used to install the Oracle10g database software (next section) will automatically recognize these nodes. Like the Oracle Clusterware install you will be performing in this section, the Oracle Database 10g software only needs to be run from one node. The OUI will copy the software packages to all nodes configured in the RAC cluster.

Oracle Clusterware - Some Background

The material contained in this section is taken from material created by Kevin Closson (http://kevinclosson.wordpress.com); much more information is available in the PolyServe white paper entitled Third-Party Cluster Platforms and Oracle Real Application Clusters on Linux.

In its simplest form, any software that performs any function on more than one interconnected computer system can be called clusterware. In the context of Oracle RAC, clusterware takes the form of two shared libraries that provide node membership services and internode communication functionality when dynamically linked with Oracle executables.

Oracle RAC links with two clusterware libraries:

• libskgxn.so: This library contains the routines used by the Oracle server to maintain node membership services. In short, these routines let Oracle know what nodes are in the cluster. Likewise, if Oracle wants to evict (fence) a node, it will call a routine in this library.

• libskgxp.so: This library contains the Oracle server routines used for communication between instances (e.g. Cache Fusion CR sends, lock converts, etc.).

Verifying Terminal Shell Environment

Before starting the OUI, you should first verify you are logged onto the server you will be running the installer from (i.e. solaris1), then run the xhost command as root from the console to allow X server connections. Next, log in as the oracle user account. If you are using a remote client to connect to the node performing the installation (SSH/Telnet to solaris1 from a workstation configured with an X server), you will need to set the DISPLAY variable to point to your local workstation. Finally, verify remote access/user equivalence to all nodes in the cluster.

Verify Server and Enable X Server Access

# hostname
solaris1

# xhost +
access control disabled, clients can connect from any host

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY

Verify Remote Access/User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Solaris server you will be running the OUI from against all other Solaris servers in the cluster without being prompted for a password. When using the secure shell method (which is what we are using in this article), user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps, remembering to enter the pass phrase for the RSA key you generated when prompted:

$ exec ssh-agent $SHELL
$ ssh-add
Enter passphrase for /u01/app/oracle/.ssh/id_rsa:
Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)

$ ssh solaris1 "date;hostname"
Wed Jan 24 16:45:43 CST 2007
solaris1

$ ssh solaris2 "date;hostname"
Wed Jan 24 16:46:24 CST 2007
solaris2

Installing Clusterware

Perform the following tasks to install the Oracle Clusterware:

$ cd ~oracle
$ /u01/app/oracle/orainstall/clusterware/runInstaller

Screen Name Response

Welcome Screen Click Next

Specify Inventory directory and credentials

Accept the default values:
   Inventory directory: /u01/app/oracle/oraInventory
   Operating System group name: dba

Specify Home Details

Set the Name and Path for the ORACLE_HOME (actually the $ORA_CRS_HOME that I will be using in this article) as follows:
   Name: OraCrs10g_home
   Path: /u01/app/oracle/product/crs

Product-Specific Prerequisite Checks

The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle Clusterware software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. For my installation, all checks passed with no problems.

Click Next to continue.


Specify Cluster Configuration

Cluster Name: crs

Public Node Name  Private Node Name  Virtual Node Name
solaris1          solaris1-priv      solaris1-vip
solaris2          solaris2-priv      solaris2-vip

Specify Network Interface Usage

Interface Name  Subnet        Interface Type
rge0            172.16.16.0   Public
e1000g0         192.168.2.0   Private

Specify OCR Location

Starting with Oracle Database 10g Release 2 (10.2) with RAC, Oracle Clusterware provides for the creation of a mirrored OCR file, enhancing cluster reliability. For the purpose of this example, I chose not to mirror the OCR file, selecting the "External Redundancy" option:

Specify OCR Location: /dev/rdsk/

Specify Voting Disk Location

For the purpose of this example, I chose not to mirror the voting disk, selecting the "External Redundancy" option:

Voting Disk Location: /dev/rdsk/

Summary Click Install to start the installation!

Execute Configuration Scripts

After the installation has completed, you will be prompted to run the orainstRoot.sh and root.sh scripts. Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from) as the "root" user account.

Navigate to the /u01/app/oracle/oraInventory directory and run orainstRoot.sh ON ALL NODES in the RAC cluster.

Within the same new console window on both Oracle RAC nodes in the cluster, (starting with the node you are performing the install from), stay logged in as the "root" user account.

Navigate to the /u01/app/oracle/product/crs directory and locate the root.sh file for each node in the cluster - (starting with the node you are performing the install from). Run the root.sh file ON ALL NODES in the RAC cluster ONE AT A TIME.

You will receive several warnings while running the root.sh script on all nodes. These warnings can be safely ignored.

The root.sh script may take a while to run.

Go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window after running the root.sh script on both nodes.

End of installation At the end of the installation, exit from the OUI.


Verify Oracle Clusterware Installation

After the installation of Oracle Clusterware, we can run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC cluster.

Check Cluster Nodes

$ /u01/app/oracle/product/crs/bin/olsnodes -n
solaris1
solaris2

Check Oracle Clusterware Auto-Start Scripts

$ ls -l /etc/init.d/init.*[d,s]
-r-xr-xr-x   1 root     root        2236 Jan 26 18:49 init.crs
-r-xr-xr-x   1 root     root        4850 Jan 26 18:49 init.crsd
-r-xr-xr-x   1 root     root       41163 Jan 26 18:49 init.cssd
-r-xr-xr-x   1 root     root        3190 Jan 26 18:49 init.evmd
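Two further checks that are standard with 10g Clusterware (not shown in the original article) are crsctl and crs_stat. On a healthy cluster, crsctl should report something like the following, and crs_stat -t lists the registered resources and their state:

$ /u01/app/oracle/product/crs/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

$ /u01/app/oracle/product/crs/bin/crs_stat -t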

Install Oracle Database 10g Software

Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (solaris1). The Oracle Database software will be installed to both of the Oracle RAC nodes in the cluster by the OUI.

After successfully installing the Oracle Clusterware software, the next step is to install Oracle Database 10g Release 2 (10.2.0.2.0) with RAC. For the purpose of this article, we opt not to create a database when installing the software. We will, instead, create the database using the Database Configuration Assistant (DBCA) after the install. Like the Oracle Clusterware install in the previous section, the Oracle 10g database software only needs to be run from one node. The OUI will copy the software packages to all nodes configured in the RAC cluster.

Verifying Terminal Shell Environment

As discussed in the previous section, the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Oracle Universal Installer. Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regards to setting up remote access and the DISPLAY variable.

Verify Server and Enable X Server Access

# hostname
solaris1

# xhost +
access control disabled, clients can connect from any host


Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY

Verify Remote Access/User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Solaris server you will be running the OUI from against all other Solaris servers in the cluster without being prompted for a password. When using the secure shell method (which is what we are using in this article), user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps, remembering to enter the pass phrase for the RSA key you generated when prompted:

$ exec ssh-agent $SHELL
$ ssh-add
Enter passphrase for /u01/app/oracle/.ssh/id_rsa:
Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)

$ ssh solaris1 "date;hostname"
Wed Jan 24 16:45:43 CST 2007
solaris1

$ ssh solaris2 "date;hostname"
Wed Jan 24 16:46:24 CST 2007
solaris2

Install Oracle Database 10g Release 2 Software

Install the Oracle Database 10g Release 2 software with the following:

$ cd ~oracle
$ /u01/app/oracle/orainstall/database/runInstaller -ignoreSysPrereqs

Screen Name Response

Welcome Screen Click Next

Select Installation Type I selected the Enterprise Edition option.

Specify Home Details

Set the Name and Path for the ORACLE_HOME as follows:
   Name: OraDb10g_home1
   Path: /u01/app/oracle/product/10.2.0/db_1

Specify Hardware Cluster Installation Mode

Select the Cluster Installation option then select all nodes available. Click Select All to select all servers: solaris1 and solaris2.

If the installation stops here and the status of any of the RAC nodes is "Node not reachable", perform the following checks:

• Ensure Oracle Clusterware is running on the node in question.

• Ensure you are able to reach the node in question from the node you are performing the installation from.

Product-Specific Prerequisite Checks

The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle database software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox.

If you did not run the OUI with the -ignoreSysPrereqs option, the Kernel parameters prerequisite check will fail. This is because the OUI looks at the /etc/system file to check the kernel parameters. As discussed earlier, this file is not used by default in Solaris 10. This is documented in Metalink Note 363436.1.

Click Next to continue.

Select Database Configuration

Select the option to "Install database software only."

Remember that we will create the clustered database as a separate step using DBCA.

Summary Click on Install to start the installation!

Root Script Window - Run root.sh

After the installation has completed, you will be prompted to run the root.sh script. It is important to keep in mind that the root.sh script will need to be run on all nodes in the RAC cluster one at a time starting with the node you are running the database installation from.

First, open a new console window on the node you are installing the Oracle 10g database software from as the root user account. For me, this was solaris1.

Navigate to the /u01/app/oracle/product/10.2.0/db_1 directory and run root.sh.

After running the root.sh script on all nodes in the cluster, go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window.

End of installation At the end of the installation, exit from the OUI.


Create TNS Listener Process

Perform the following configuration procedures from only one of the Oracle RAC nodes in the cluster (solaris1). The Network Configuration Assistant (NETCA) will set up the TNS listener in a clustered configuration on both of the Oracle RAC nodes in the cluster.

DBCA requires the Oracle TNS listener process to be configured and running on all nodes in the RAC cluster before it can create the clustered database. The process of creating the TNS listener only needs to be performed on one node in the cluster. All changes will be made and replicated to all nodes in the cluster. On one of the nodes (I will be using solaris1) bring up the NETCA and run through the process of creating a new TNS listener process and also configure the node for local access.

Verifying Terminal Shell Environment

As discussed in the previous section, the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Network Configuration Assistant (NETCA). Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regards to setting up remote access and the DISPLAY variable.

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY

Verify Remote Access/User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Solaris server you will be running NETCA from against all other Solaris servers in the cluster without being prompted for a password. When using the secure shell method (which is what we are using in this article), user equivalence will need to be enabled on any new terminal shell session before attempting to run NETCA. To enable user equivalence for the current terminal shell session, perform the following steps, remembering to enter the pass phrase for the RSA key you generated when prompted:

$ exec ssh-agent $SHELL
$ ssh-add
Enter passphrase for /u01/app/oracle/.ssh/id_rsa:
Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)

$ ssh solaris1 "date;hostname"
Wed Jan 24 16:45:43 CST 2007
solaris1

$ ssh solaris2 "date;hostname"
Wed Jan 24 16:46:24 CST 2007
solaris2


Run the Network Configuration Assistant

To start NETCA, run the following:

$ netca

The following table walks you through the process of creating a new Oracle listener for our RAC environment.

Screen Name Response

Select the Type of Oracle Net Services Configuration

Select Cluster Configuration

Select the nodes to configure Select all of the nodes: solaris1 and solaris2.

Type of Configuration Select Listener configuration.

Listener Configuration - Next 6 Screens

The following screens are now like any other normal listener configuration. You can simply accept the default parameters for the next six screens:

   What do you want to do: Add
   Listener name: LISTENER
   Selected protocols: TCP
   Port number: 1521
   Configure another listener: No
   Listener configuration complete! [ Next ]

You will be returned to the Welcome (Type of Configuration) screen.

Type of Configuration Select Naming Methods configuration.

Naming Methods Configuration

The following screens are:

   Selected Naming Methods: Local Naming
   Naming Methods configuration complete! [ Next ]

You will be returned to the Welcome (Type of Configuration) screen.

Type of Configuration Click Finish to exit the NETCA.

The Oracle TNS listener process should now be running on all nodes in the RAC cluster.
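To verify, you can check for the listener process and its status on each node. This is a sketch, not part of the original article; in a clustered configuration NETCA typically names each listener LISTENER_<NODENAME>, so the name LISTENER_SOLARIS1 below is an assumption:

$ ps -ef | grep tnslsnr | grep -v grep
$ $ORACLE_HOME/bin/lsnrctl status LISTENER_SOLARIS1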

Create the Oracle Cluster Database

The database creation process should only be performed from one of the Oracle RAC nodes in the cluster (solaris1).

We will use DBCA to create the clustered database.

Verifying Terminal Shell Environment


As discussed in the previous section, the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Database Configuration Assistant (DBCA). Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regards to setting up remote access and the DISPLAY variable.

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle
$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY

Verify Remote Access/User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) on the Solaris server you will be running DBCA from against all other Solaris servers in the cluster without being prompted for a password. When using the secure shell method (which is what we are using in this article), user equivalence will need to be enabled on any new terminal shell session before attempting to run DBCA. To enable user equivalence for the current terminal shell session, perform the following steps, remembering to enter the pass phrase for the RSA key you generated when prompted:

$ exec ssh-agent $SHELL
$ ssh-add
Enter passphrase for /u01/app/oracle/.ssh/id_rsa:
Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)

$ ssh solaris1 "date;hostname"
Wed Jan 24 16:45:43 CST 2007
solaris1

$ ssh solaris2 "date;hostname"
Wed Jan 24 16:46:24 CST 2007
solaris2

Create the Clustered Database

To start the database creation process, run the following:

$ dbca

Screen Name Response

Welcome Screen Select "Oracle Real Application Clusters database."

Operations Select Create a Database.

Node Selection Click on the Select All button to select all servers: solaris1 and solaris2.

Database Templates Select Custom Database.

Database Identification

Select:
Global Database Name: orcl.itconvergence.com
SID Prefix: orcl

I used itconvergence.com for the database domain. You may use any domain. Keep in mind that this domain does not have to be a valid DNS domain.

Management Option

Leave the default option here, which is to "Configure the Database with Enterprise Manager / Use Database Control for Database Management."

Database Credentials

I selected to Use the Same Password for All Accounts. Enter the password (twice) and make sure the password does not start with a digit.

Storage Options For this guide, we will select Automatic Storage Management (ASM).

Create ASM Instance

Supply the SYS password to use for the new ASM instance.

Also, starting with Oracle 10g Release 2, the ASM instance server parameter file (SPFILE) needs to be on a shared disk. You will need to modify the default entry for "Create server parameter file (SPFILE)" so that it resides on the raw partition /dev/rdsk/c2t3d0s1. All other options can stay at their defaults.
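Before accepting this screen, it may be worth confirming from solaris1 that the raw device is visible and owned by the oracle user (a quick sanity check against the partition set aside for shared storage earlier in this guide):

$ ls -lL /dev/rdsk/c2t3d0s1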

You will then be prompted with a dialog box asking if you want to create and start the ASM instance. Select the OK button to acknowledge this dialog.

DBCA will now create and start the ASM instance on all nodes in the RAC cluster.

ASM Disk Groups

To start, click the Create New button. This will bring up the "Create Disk Group" window with the three partitions we created earlier.

For the first "Disk Group Name", I used the string "ORCL_DATA". Select the first two RAW partitions (in my case c2t4d0s1 and c2t5d0s1) in the "Select Member Disks" window. Keep the "Redundancy" setting at "Normal".

After verifying all values in this window are correct, click the [OK] button. This will present the "ASM Disk Group Creation" dialog. When the ASM Disk Group Creation process is finished, you will be returned to the "ASM Disk Groups" window.

Click the Create New button again. For the second "Disk Group Name", I used the string "FLASH_RECOVERY_AREA". Select the last RAW partition (c2t6d0s1) in the "Select Member Disks" window. Set the "Redundancy" setting to "External".

After verifying all values in this window are correct, click the [OK] button. This will present the "ASM Disk Group Creation" dialog.

When the ASM Disk Group Creation process is finished, you will be returned to the "ASM Disk Groups" window with two disk groups created and selected. Select only one of the disk groups by using the checkbox next to the newly created Disk Group Name "ORCL_DATA" (ensure that the disk group for "FLASH_RECOVERY_AREA" is not selected) and click [Next] to continue.
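At this point both disk groups have been created and mounted by the ASM instances. If you would like to verify this outside of DBCA (optional; the instance name +ASM1 assumes the default naming DBCA uses on the first node), you can query the local ASM instance from solaris1:

$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba

SQL> SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;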

Database File Locations

I selected the default, which is to use Oracle Managed Files:

Database Area: +ORCL_DATA

Recovery Configuration

Check the option for "Specify Flash Recovery Area".

For the Flash Recovery Area, click the [Browse] button and select the disk group name "+FLASH_RECOVERY_AREA".

My disk group has a size of about 100GB. I used a Flash Recovery Area Size of 90GB (92160 MB).
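After the database has been created, you can confirm how these choices were recorded in the instance parameters (the destination will be the disk group selected above, and the size will reflect whatever value you entered):

SQL> show parameter db_recovery_file_dest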

Database Content

I left all of the Database Components (and destination tablespaces) set to their default value.

Database Services

For this test configuration, click Add, and enter orcltest as the "Service Name." Leave both instances set to Preferred and for the "TAF Policy" select "Basic".
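For reference, once the database is created, a client-side tnsnames.ora entry for this TAF-enabled service would look something like the sketch below. The virtual host names solaris1-vip and solaris2-vip are assumptions based on the usual <node>-vip convention; substitute the VIP names you defined for your cluster:

ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = solaris1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = solaris2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest.itconvergence.com)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )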

Initialization Parameters

Change any parameters for your environment. I left them all at their default settings.

Database Storage

Change any parameters for your environment. I left them all at their default settings.

Creation Options

Keep the default option Create Database selected and click Finish to start the database creation process.

Click OK on the "Summary" screen.

End of Database Creation

At the end of the database creation, exit from the DBCA.

When exiting the DBCA, another dialog will come up indicating that it is starting all Oracle instances and HA service "orcltest". This may take several minutes to complete. When finished, all windows and dialog boxes will disappear.

When DBCA has completed, you will have a fully functional Oracle RAC cluster running.
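Before moving on, one convenient way to see everything Oracle Clusterware is now managing (the node applications, ASM instances, database, database instances, and the orcltest service) is the crs_stat utility, assuming $ORA_CRS_HOME points at your Clusterware home:

$ $ORA_CRS_HOME/bin/crs_stat -t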

Starting / Stopping the Cluster

All commands in this section will be run from solaris1.

Stopping the Oracle RAC 10g Environment

The first step is to stop the Oracle instance. When the instance (and related services) is down, then bring down the ASM instance. Finally, shut down the node applications (Virtual IP, GSD, TNS Listener, and ONS).

$ export ORACLE_SID=orcl1
$ emctl stop dbconsole
$ srvctl stop instance -d orcl -i orcl1
$ srvctl stop asm -n solaris1
$ srvctl stop nodeapps -n solaris1

Starting the Oracle RAC 10g Environment

The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and ONS). When the node applications are successfully started, then bring up the ASM instance. Finally, bring up the Oracle instance (and related services) and the Enterprise Manager Database console.

$ export ORACLE_SID=orcl1
$ srvctl start nodeapps -n solaris1
$ srvctl start asm -n solaris1
$ srvctl start instance -d orcl -i orcl1
$ emctl start dbconsole

Start/Stop All Instances with SRVCTL

Start/stop all the instances and their enabled services:

$ srvctl start database -d orcl
$ srvctl stop database -d orcl
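Note that srvctl start/stop database operates only on the database instances and their services; ASM and the node applications are managed per node. A complete shutdown of the stack on both nodes, built from the same commands used above, would look like this sketch:

$ srvctl stop database -d orcl
$ srvctl stop asm -n solaris1
$ srvctl stop asm -n solaris2
$ srvctl stop nodeapps -n solaris1
$ srvctl stop nodeapps -n solaris2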

Verify the RAC Cluster and Database Configuration

The following RAC verification checks should be performed on both Oracle RAC nodes in the cluster. For this article, however, I will only be performing checks from solaris1.

This section provides several srvctl commands and SQL queries to validate your Oracle RAC 10g configuration.

There are five node-level tasks defined for srvctl:

• Adding and deleting node-level applications
• Setting and unsetting the environment for node-level applications
• Administering node applications
• Administering ASM instances
• Starting and stopping a group of programs that include virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes)

Status of all instances and services

$ srvctl status database -d orcl
Instance orcl1 is running on node solaris1
Instance orcl2 is running on node solaris2

Status of a single instance


$ srvctl status instance -d orcl -i orcl1
Instance orcl1 is running on node solaris1

Status of a named service globally across the database

$ srvctl status service -d orcl -s orcltest
Service orcltest is running on instance(s) orcl2, orcl1
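If you later want to confirm that sessions connecting through the orcltest service are actually TAF-enabled, a query along these lines against gv$session will show it (SCOTT is just a placeholder user for illustration):

SELECT inst_id, username, failover_type, failover_method, failed_over
FROM gv$session
WHERE username = 'SCOTT';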

Status of node applications on a particular node

$ srvctl status nodeapps -n solaris1
VIP is running on node: solaris1
GSD is running on node: solaris1
Listener is running on node: solaris1
ONS daemon is running on node: solaris1

Status of an ASM instance

$ srvctl status asm -n solaris1
ASM instance +ASM1 is running on node solaris1

List all configured databases

$ srvctl config database
orcl

Display configuration for our RAC database

$ srvctl config database -d orcl
solaris1 orcl1 /u01/app/oracle/product/10.2.0/db_1
solaris2 orcl2 /u01/app/oracle/product/10.2.0/db_1

Display the configuration for the ASM instance(s)

$ srvctl config asm -n solaris1
+ASM1 /u01/app/oracle/product/10.2.0/db_1

All running instances in the cluster

SELECT
    inst_id
  , instance_number inst_no
  , instance_name inst_name
  , parallel
  , status
  , database_status db_status
  , active_state state
  , host_name host
FROM gv$instance
ORDER BY inst_id;

 INST_ID  INST_NO INST_NAME  PAR STATUS  DB_STATUS    STATE     HOST
-------- -------- ---------- --- ------- ------------ --------- --------
       1        1 orcl1      YES OPEN    ACTIVE       NORMAL    solaris1
       2        2 orcl2      YES OPEN    ACTIVE       NORMAL    solaris2
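As one last check, you can also confirm from SQL that the orcltest service is registered and active on both instances (gv$active_services is available in Oracle 10g):

SELECT inst_id, name
FROM gv$active_services
ORDER BY inst_id;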