Generic Cluster Configuration Guide.v2
Generic Cluster Configuration
User Guide
Document Identifier: 4096 <update in the file properties dialog box – Custom tab>
Document Version: <A.B> <update in the file properties dialog box – Custom tab>
Document Status: <DRAFT | ISSUED | INFORMATION | FINAL | OBSOLETE | PROPOSED | APPROVED>
Control: Internal/External
Document Release Date: <month year> <update in the file properties dialog box – Custom tab>
Approved by: <Approved By> <update in the file properties dialog box – Custom tab>
RedHat Cluster
Document Distribution
LMP Implementation, LMP Global Services,
PE-TP ISD
Table of Contents
Preface.............................................................4
Purpose.............................................................4
Audience............................................................4
Organisation........................................................4
Typographic Conventions.............................................4
1 Introduction......................................................6
1.1 Basic requirements..............................................6
1.2 General assumptions.............................................6
1.3 Cluster rpm installation........................................6
1.4 Modifying lvm.conf file.........................................7
2 Cluster configuration.............................................8
2.1 Adding Node.....................................................8
2.2 Adding fence device.............................................8
2.3 Adding managed resources.......................................10
2.3.1 Fail over domain.............................................10
2.3.2 Floating IP..................................................11
2.3.3 Resource script..............................................11
2.3.4 File system..................................................11
2.4 Creating service...............................................12
2.5 Starting the cluster...........................................13
2.6 Creating GFS...................................................14
3 Managing Cluster services........................................15
3.1 Clustat........................................................15
3.2 Clusvcadm......................................................15
3.2.1 Start a service..............................................15
3.2.2 Stop a service...............................................15
3.2.3 Disable a service............................................15
3.2.4 Restart a service............................................16
Document identifier: <Document Identifier> Version: <A.B> 3 of 25 Copyright © Acision BV 2006-2007
Copyright © Acision BV 2006-2007
All rights reserved. This document is protected by international copyright law and may not be reprinted, reproduced, copied or utilised in whole or in part by any means including electronic, mechanical, or other means without the prior written consent of Acision BV.
Whilst reasonable care has been taken by Acision BV to ensure the information contained herein is reasonably accurate, Acision BV shall not, under any circumstances be liable for any loss or damage (direct or consequential) suffered by any party as a result of the contents of this publication or the reliance of any party thereon or any inaccuracy or omission therein. The information in this document is therefore provided on an “as is” basis without warranty and is subject to change without further notice and cannot be construed as a commitment by Acision BV.
The products mentioned in this document are identified by the names, trademarks, service marks and logos of their respective companies or organisations and may not be used in any advertising or publicity or in any other way whatsoever without the prior written consent of those companies or organisations and Acision BV.
A.1 Sample cluster.conf file......................................................................................................17
A.2 Sample Informix script (/etc/cluster/db/informix.sh)...........20
Glossary and Abbreviations...................................................................................22
References...............................................................................................................23
Version History..........................................................................................................24
Preface
Purpose

The purpose of this document is to give an overview of RedHat cluster installation and configuration. This is a generic document and does not discuss the product-specific aspects of the configuration; a separate document is available for the product-specific cluster configuration details.
Audience

The target audience of this document is the implementation and testing teams who will work with the cluster configuration.
Organisation

This document is organised in three chapters. The first chapter deals with the assumptions and the installation procedure of the cluster packages. The second chapter explains the configuration details of the RedHat cluster; a sample cluster configuration is given in the annexure. The third chapter describes the commands used to manage the cluster services.
Typographic Conventions

In this document, the typographic conventions listed in Table P-1-1 are used.
Table P-1-1: Typographic Conventions
Typeface or Symbol Meaning/Used for Example
Courier Refers to a keyboard key, system command, label, button, filename, window, or other computer component or output.
The directory data contains…
Click the Close button to…
<courier> Serves as a placeholder for variable text that the user will replace as appropriate to its context.
Use the file name <entity>.cfg for...
[] Refers the user to external documentation listed in the References section.
[ETSI 03.38]
Italic Emphasises a new word or term of significance.
Jumpstart, the install procedure on a SUN T1,
% Denotes a Unix regular-user prompt for C shell.
% ls
# Denotes a Unix super-user prompt for any shell.
# ls
$ Denotes an OpenVMS Digital Command Language prompt.
$ dir
Typeface or Symbol Meaning/Used for Example
\ (Unix) or - (OpenVMS)
Denotes line continuation; the character should be ignored as the user types the example, and Enter should only be pressed after the last line.
% grep searchforthis \
data/*.dat
$ search [.data]*.dat -searchforthis
- Bridges two keystrokes that should be pressed simultaneously.
If Ctrl-C does not work, use Ctrl-Alt-Del.
Denotes a “note”, a piece of text alongside the normal text requiring extra attention.
Note that the system is usually...
1 Introduction
1.1 Basic requirements

Some basic requirements to enable this document to be used correctly are:

· The installation engineer is an experienced UNIX user.
· The installation engineer has had some recent exposure to system administration tasks on RHEL.
· The installation engineer has had some recent exposure to RedHat cluster configuration.
· Some of the steps in this document use X windows, so the terminal on which the configuration is done should support the X Window System.
· It is essential that this document be read completely before installation is attempted.
· All commands must be run as the root user unless explicitly stated otherwise.
1.2 General assumptions

Several assumptions are made in this document, which makes writing it considerably easier. They are described below.

· All the required cluster suite rpms have been downloaded, or copied from the CD, to /var/tmp/rpm.
· The document is written with a four-node cluster in mind.
· An iLO user fence with a password password has been created with limited privileges.
· The servers should be built using the 4092 document before starting to use this document.
· The cluster name must be the same as the node names, but without the trailing letter. For example, if the node names are bm4a and bm4b, then the cluster name must be bm4.
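The naming rule above can be sketched in shell: the cluster name is a node name with the trailing letter stripped (bm4a/bm4b are the example node names from the text).

```shell
# Derive the cluster name from a node name by stripping the trailing letter,
# per the naming convention described above.
node_name=bm4a
cluster_name=$(echo "$node_name" | sed 's/[a-z]$//')
echo "cluster name: $cluster_name"
```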
1.3 Cluster rpm installation
A few rpms need to be installed on all the nodes before the cluster can be configured. The list of rpms is given below. Care should be taken to install them in the same order as listed to avoid dependency errors.

rpm -ivh /var/tmp/rpm/<rpm>

where <rpm> is one of the files listed below:

perl-Crypt-SSLeay-0.51-5.x86_64.rpm
magma-1.0.5-0.x86_64.rpm
ccs-1.0.10-0.x86_64.rpm
perl-Net-Telnet-3.03-3.noarch.rpm
seamonkey-nss-1.0.8-0.2.el4.x86_64.rpm
fence-1.32.45-1.0.2.x86_64.rpm
gulm-1.0.10-0.x86_64.rpm
iddev-2.0.0-4.x86_64.rpm
magma-plugins-1.0.8-0.x86_64.rpm
system-config-cluster-1.0.27-1.0.noarch.rpm
rgmanager-1.9.68-1.x86_64.rpm
cman-kernel-smp-2.6.9-50.2.x86_64.rpm
cman-1.0.17-0.x86_64.rpm
dlm-kernel-smp-2.6.9-46.16.x86_64.rpm
dlm-1.0.1-1.x86_64.rpm
GFS-kernel-smp-2.6.9-72.2.x86_64.rpm
GFS-6.1.14-0.x86_64.rpm
lvm2-cluster-2.02.21-7.el4.x86_64.rpm
cmirror-kernel-smp-2.6.9-32.0.x86_64.rpm
cmirror-1.0.1-1.x86_64.rpm
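The ordered install can be scripted; the sketch below is a dry run (each rpm command is echoed rather than executed — remove the echo to actually install):

```shell
# Install the cluster suite rpms in the listed order to avoid dependency
# errors. 'echo' makes this a dry run; drop it to perform the install.
RPM_DIR=/var/tmp/rpm
RPMS="perl-Crypt-SSLeay-0.51-5.x86_64.rpm
magma-1.0.5-0.x86_64.rpm
ccs-1.0.10-0.x86_64.rpm
perl-Net-Telnet-3.03-3.noarch.rpm
seamonkey-nss-1.0.8-0.2.el4.x86_64.rpm
fence-1.32.45-1.0.2.x86_64.rpm
gulm-1.0.10-0.x86_64.rpm
iddev-2.0.0-4.x86_64.rpm
magma-plugins-1.0.8-0.x86_64.rpm
system-config-cluster-1.0.27-1.0.noarch.rpm
rgmanager-1.9.68-1.x86_64.rpm
cman-kernel-smp-2.6.9-50.2.x86_64.rpm
cman-1.0.17-0.x86_64.rpm
dlm-kernel-smp-2.6.9-46.16.x86_64.rpm
dlm-1.0.1-1.x86_64.rpm
GFS-kernel-smp-2.6.9-72.2.x86_64.rpm
GFS-6.1.14-0.x86_64.rpm
lvm2-cluster-2.02.21-7.el4.x86_64.rpm
cmirror-kernel-smp-2.6.9-32.0.x86_64.rpm
cmirror-1.0.1-1.x86_64.rpm"
count=0
for r in $RPMS; do
    echo rpm -ivh "$RPM_DIR/$r"
    count=$((count + 1))
done
echo "rpms to install: $count"
```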
1.4 Modifying the lvm.conf file

Once the cluster rpms are installed, the /etc/lvm/lvm.conf file needs to be modified to make the volume groups cluster-aware (clvm). This is done by changing locking_type to 3 in /etc/lvm/lvm.conf.
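The locking_type change can be made non-interactively with sed. A minimal sketch, edited against a throwaway copy here; on a real node the target file is /etc/lvm/lvm.conf:

```shell
# Switch locking_type from 1 to 3 (clustered locking via clvmd).
# A temporary sample file stands in for /etc/lvm/lvm.conf in this sketch.
conf=$(mktemp)
printf 'global {\n    locking_type = 1\n}\n' > "$conf"
sed -i 's/locking_type = 1/locking_type = 3/' "$conf"
grep 'locking_type' "$conf"
```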
2 Cluster configuration
This section describes the various steps involved in configuring the cluster. This document assumes that there are four nodes to be configured in the cluster. RedHat provides the system-config-cluster utility to configure the cluster.
2.1 Adding Node

The cluster nodes are added using the GUI. We will use the DLM lock manager for the cluster configuration. Please refer to the 4092 document for the cluster and node naming conventions.
Lock Method = DLM

Cluster --> Edit Cluster Properties

Name = <Cluster Name>

Cluster --> Cluster Nodes --> Add a Cluster Node

Cluster Node Name = <First Node Name>

Cluster --> Cluster Nodes --> Add a Cluster Node

Cluster Node Name = <Second Node Name>

Cluster --> Cluster Nodes --> Add a Cluster Node

Cluster Node Name = <Third Node Name>

Cluster --> Cluster Nodes --> Add a Cluster Node

Cluster Node Name = <Fourth Node Name>
2.2 Adding fence device

All nodes in the cluster should have a fence device defined for the cluster to function properly. There are a few devices that can be used as a fence device; in our configuration we will use HP iLO. An entry should be added for each HP iLO in the /etc/hosts file before proceeding with the steps below.
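The /etc/hosts additions might look like the following; the hostnames and addresses here are purely illustrative, not taken from any real configuration:

```
# /etc/hosts entries for the HP iLO interfaces (illustrative values)
10.14.236.10    bm4a-ilo
10.14.236.11    bm4b-ilo
10.14.236.12    bm4c-ilo
10.14.236.13    bm4d-ilo
```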
Cluster --> Fence Devices --> Add a Fence Device

Type = HP ILO Device

Name = <Name of the ilo>

Hostname = <Hostname of the first node ilo>

Login = <User id>

Password = <Password>
Cluster --> Cluster Nodes --> <First Node Name> --> Manage Fencing for this Node

Add a New Fence Level

Fence Level 1 --> Add a New Fence to this level

Select the ilo device for the <First Node>
Cluster --> Fence Devices --> Add a Fence Device

Type = HP ILO Device

Name = <Name of the ilo>

Hostname = <Hostname of the second node ilo>

Login = <User id>

Password = <Password>

Cluster --> Cluster Nodes --> <Second Node Name> --> Manage Fencing for this Node

Add a New Fence Level

Fence Level 1 --> Add a New Fence to this level

Select the ilo device for the <Second Node>
Cluster --> Fence Devices --> Add a Fence Device

Type = HP ILO Device

Name = <Name of the ilo>

Hostname = <Hostname of the third node ilo>

Login = <User id>

Password = <Password>

Cluster --> Cluster Nodes --> <Third Node Name> --> Manage Fencing for this Node

Add a New Fence Level

Fence Level 1 --> Add a New Fence to this level

Select the ilo device for the <Third Node>
Cluster --> Fence Devices --> Add a Fence Device

Type = HP ILO Device

Name = <Name of the ilo>

Hostname = <Hostname of the fourth node ilo>
Login = <User id>

Password = <Password>

Cluster --> Cluster Nodes --> <Fourth Node Name> --> Manage Fencing for this Node

Add a New Fence Level

Fence Level 1 --> Add a New Fence to this level

Select the ilo device for the <Fourth Node>
2.3 Adding managed resources

The managed resources are the resources managed by the cluster. This group includes the failover domain, IP address, resource script and service. Each of these is covered in the subsections below.
2.3.1 Fail over domain
The failover domain is a list of nodes to which a service may be bound. It specifies where the cluster manager should relocate a failed node's service. We will configure a restricted failover domain, which tells the cluster manager to run a service only on the nodes in the domain. If no nodes in the domain are available, the service is stopped.
Cluster --> Managed Resources --> Failover Domains --> Create a Failover Domain

Name = db0

Available Cluster Nodes <DB Server>

Available Cluster Nodes <Standby DB Server>

Restrict Failover to This Domain's Members

Close

Cluster --> Managed Resources --> Failover Domains --> Create a Failover Domain

Name = app0

Available Cluster Nodes <First application server>

Available Cluster Nodes <Second application server>

Restrict Failover to This Domain's Members

Close
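Under the hood the GUI records these choices as failoverdomain entries in cluster.conf. A fragment of roughly the following shape results (node names here are illustrative; compare the full sample in Appendix A.1):

```
<failoverdomains>
    <failoverdomain name="db0" ordered="1" restricted="1">
        <failoverdomainnode name="bm4c" priority="1"/>
        <failoverdomainnode name="bm4d" priority="2"/>
    </failoverdomain>
    <failoverdomain name="app0" ordered="1" restricted="1">
        <failoverdomainnode name="bm4a" priority="1"/>
        <failoverdomainnode name="bm4b" priority="1"/>
    </failoverdomain>
</failoverdomains>
```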
2.3.2 Floating IP
A floating IP address is created as a managed resource and is attached to the services managed by the cluster manager. All clients use this IP address to access the services running under the control of the cluster. The number of IP addresses configured depends on the number of services configured in the cluster. In our example we configure four services, hence we create four IP resources.
Cluster --> Managed Resources --> Resources --> Create a Resource --> IP Address

<Floating IP for db>

Cluster --> Managed Resources --> Resources --> Create a Resource --> IP Address

<Floating IP for appcore>

Cluster --> Managed Resources --> Resources --> Create a Resource --> IP Address

<Floating IP for filter>

Cluster --> Managed Resources --> Resources --> Create a Resource --> IP Address

<Floating IP for tint>
2.3.3 Resource script
The resource script is responsible for managing the services.
Cluster --> Managed Resources --> Resources --> Create a Resource --> Script

Name = db_init

File = /etc/cluster/db/informix.sh

Cluster --> Managed Resources --> Resources --> Create a Resource --> Script

Name = appcore_init

File = /etc/cluster/appcore/appcore.sh

Cluster --> Managed Resources --> Resources --> Create a Resource --> Script

Name = filter_init

File = /etc/cluster/filter/filter.sh

Cluster --> Managed Resources --> Resources --> Create a Resource --> Script

Name = tint_init

File = /etc/cluster/tint/tint.sh
2.3.4 File system
The ext3 file systems required for the DB service to run should be created as file system resources.
Cluster --> Managed Resources --> Resources --> Create a Resource --> File System
Name = /usr/aethos/backups
File System Type = ext3
Mount point = /usr/aethos/backups
Device = /dev/vgdblpd/backups
Cluster --> Managed Resources --> Resources --> Create a Resource --> File System
Name = /usr/aethos/suppimpdb
File System Type = ext3
Mount point = /usr/aethos/suppimpdb
Device = /dev/vgdblpd/suppimpdb
2.4 Creating service

A service is created in the cluster containing all the resources required for it to run. It uses the resources we created in section 2.3.
Cluster --> Managed Resources --> Services --> Create a Service

Name = db

Failover Domain = db0

Add a Shared Resource to this service

<Floating IP of DB> IP address

Add a Shared Resource to this service

db_init Script

Add a Shared Resource to this service

/usr/aethos/backups File System

/usr/aethos/suppimpdb File System
Cluster --> Managed Resources --> Services --> Create a Service

Name = appcore

Failover Domain = app0

Add a Shared Resource to this service

<Floating IP of appcore> IP address

Add a Shared Resource to this service

appcore_init Script

Cluster --> Managed Resources --> Services --> Create a Service

Name = tint

Failover Domain = app0
Add a Shared Resource to this service

<Floating IP of tint> IP address

Add a Shared Resource to this service

tint_init Script

Cluster --> Managed Resources --> Services --> Create a Service

Name = filter

Failover Domain = app0

Add a Shared Resource to this service

<Floating IP of filter> IP address

Add a Shared Resource to this service

filter_init Script

File --> Save (leave the filename as default)

File --> Quit
2.5 Starting the cluster

The steps above generate a configuration file called cluster.conf in the /etc/cluster directory. We need to ensure that the cluster name in the cluster.conf file is the same as the cluster name we have given; it can be found in the second line of the configuration file. This is absolutely necessary for GFS to work. Before starting the cluster services, the cluster.conf file needs to be copied to all the nodes in the cluster.
scp /etc/cluster/cluster.conf <Second Node Name>:/etc/cluster
scp /etc/cluster/cluster.conf <Third Node Name>:/etc/cluster
scp /etc/cluster/cluster.conf <Fourth Node Name>:/etc/cluster
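The cluster-name check described above can be scripted by pulling the name attribute out of the cluster element. A sketch, run here against a minimal sample file standing in for /etc/cluster/cluster.conf:

```shell
# Extract the cluster name recorded in cluster.conf so it can be compared
# with the name given in the GUI. A temporary sample file is used here;
# on a real node, point at /etc/cluster/cluster.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<?xml version="1.0"?>
<cluster config_version="16" name="psa1">
</cluster>
EOF
name=$(sed -n 's/.*<cluster[^>]*name="\([^"]*\)".*/\1/p' "$conf")
echo "cluster name in conf: $name"
```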
Start the cluster services using the commands below
service ccsd start
service cman start
service fenced start
service clvmd start
service rgmanager start
Make sure that these services start each time the server is rebooted. This can be done using the chkconfig command as shown below.
for SER in ccsd cman fenced clvmd rgmanager
do
chkconfig $SER on
done
2.6 Creating GFS

File systems that need to be shared among the nodes are created with GFS. A GFS file system can be mounted on multiple nodes in the cluster at the same time, providing simultaneous access. The CDR, suppimpapp1 and datafiles file systems are created using GFS and mounted on the nodes running the application services. The following commands create the GFS file systems to be mounted on the nodes.
lvcreate -L 1536 -n cdr /dev/vgapp1
gfs_mkfs -p lock_dlm -t <cluster name>:cdr -j 4 /dev/vgapp1/cdr
gfs_mkfs -p lock_dlm -t <cluster name>:suppimpapp1 -j 4 \
    /dev/vgapp1/suppimpapp1
gfs_mkfs -p lock_dlm -t <cluster name>:datafiles -j 4 \
    /dev/vgapp1/datafiles
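The -j argument is the journal count and should be at least the number of nodes that will mount the file system (four in this guide). A dry-run sketch that builds the three gfs_mkfs commands (the cluster name bm4 is the illustrative example used earlier, not a fixed value):

```shell
# Build the gfs_mkfs commands for each shared file system; one journal per
# node that will mount it. Commands are echoed, not executed.
NODES=4
CLUSTER=bm4   # illustrative cluster name from the earlier example
cmds=$(for fs in cdr suppimpapp1 datafiles; do
    echo gfs_mkfs -p lock_dlm -t "$CLUSTER:$fs" -j "$NODES" "/dev/vgapp1/$fs"
done)
echo "$cmds"
```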
Mount the file system on all the nodes
mount -t gfs /dev/vgapp1/cdr /usr/aethos/cdr
mount -t gfs /dev/vgapp1/suppimpapp1 /usr/aethos/suppimpapp1
mount -t gfs /dev/vgapp1/datafiles /usr/aethos/datafiles
Edit the /etc/fstab and add the following entries on the nodes
/dev/vgapp1/cdr /usr/aethos/cdr gfs defaults 0 0
/dev/vgapp1/suppimpapp1 /usr/aethos/suppimpapp1 gfs defaults 0 0
/dev/vgapp1/datafiles /usr/aethos/datafiles gfs defaults 0 0
3 Managing cluster services
This section describes the commands used to manage the cluster and its services.
3.1 Clustat

The clustat command displays the current status of the cluster and its services. A sample clustat output is shown below.
Member Status: Quorate
Member Name Status
------ ---- ------
BL04DL385 Online, Local, rgmanager
BL05DL385 Online, rgmanager
BL06DL385 Online, rgmanager
BL07DL385 Online, rgmanager
Service Name Owner (Last) State
------- ---- ----- ------ -----
db BL06DL385 started
appcore (BL04DL385) stopped
tint1 (BL04DL385) stopped
filter1 (BL04DL385) stopped
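When scripting health checks, the clustat output can be parsed directly. A sketch that extracts the stopped services from sample text mirroring the listing above (on a live node, pipe the output of clustat itself instead):

```shell
# List the services reported as stopped. The sample text stands in for
# real clustat output; the last field of each service line is its state.
sample='db BL06DL385 started
appcore (BL04DL385) stopped
tint1 (BL04DL385) stopped
filter1 (BL04DL385) stopped'
stopped=$(echo "$sample" | awk '$NF == "stopped" { print $1 }')
echo "$stopped"
```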
3.2 Clusvcadm

The clusvcadm command is used to manage the services in the cluster.
3.2.1 Start a service
clusvcadm -e db
3.2.2 Stop a service
clusvcadm -s db
3.2.3 Disable a service
clusvcadm -d db
3.2.4 Restart a service
clusvcadm -r db
This restarts the service on the same node. If you want to restart the service on another node:
clusvcadm -r db -m <hostname>
Once service failover is blocked, stop and disable each active service using the -d option of clusvcadm.
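Disabling every service can be done in one loop; a dry-run sketch over the four services defined in this guide (commands are echoed — remove the echo to execute):

```shell
# Dry run of disabling each service defined in this guide with clusvcadm -d.
out=$(for svc in db appcore tint filter; do
    echo clusvcadm -d "$svc"
done)
echo "$out"
```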
Appendix A.
A.1 Sample cluster.conf file

<?xml version="1.0"?>
<cluster config_version="16" name="psa1">
<fence_daemon post_fail_delay="0" post_join_delay="3"/>
<clusternodes>
<clusternode name="BL04DL385" votes="1">
<fence>
<method name="1">
<device name="BL04DL385_ilo"/>
</method>
</fence>
</clusternode>
<clusternode name="BL05DL385" votes="1">
<fence>
<method name="1">
<device name="BL05DL385_ilo"/>
</method>
</fence>
</clusternode>
<clusternode name="BL06DL385" votes="1">
<fence>
<method name="1">
<device name="BL06DL385_ilo"/>
</method>
</fence>
</clusternode>
<clusternode name="BL07DL385" votes="1">
<fence>
<method name="1">
<device name="BL07DL385_ilo"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman/>
<fencedevices>
<fencedevice agent="fence_ilo" hostname="BL04DL385_ilo" login="fence" name="BL04DL385_ilo" passwd="password"/>
<fencedevice agent="fence_ilo" hostname="BL04DL385_ilo" login="fence" name="BL05DL385_ilo" passwd="password"/>
<fencedevice agent="fence_ilo" hostname="BL06DL385_ilo" login="fence" name="BL06DL385_ilo" passwd="password"/>
<fencedevice agent="fence_ilo" hostname="BL07DL385_ilo" login="fence" name="BL07DL385_ilo" passwd="password"/>
</fencedevices>
<rm>
<failoverdomains>
<failoverdomain name="db0" ordered="1" restricted="1">
<failoverdomainnode name="BL06DL385" priority="2"/>
<failoverdomainnode name="BL07DL385" priority="1"/>
</failoverdomain>
<failoverdomain name="app0" ordered="1" restricted="1">
<failoverdomainnode name="BL04DL385" priority="1"/>
<failoverdomainnode name="BL05DL385" priority="1"/>
</failoverdomain>
</failoverdomains>
<resources>
<ip address="10.14.236.230" monitor_link="1"/>
<ip address="10.14.236.231" monitor_link="1"/>
<ip address="10.14.236.232" monitor_link="1"/>
<ip address="10.14.236.233" monitor_link="1"/>
<ip address="10.14.236.234" monitor_link="1"/>
<ip address="10.14.236.235" monitor_link="1"/>
<fs device="/dev/vgdblpd/backups" force_fsck="1" force_unmount="1" fsid="14080" fstype="ext3" mountpoint="/usr/aethos/backups" name="/usr/aethos/backups" options="" self_fence="1"/>
<fs device="/dev/vgdblpd/suppimpdb" force_fsck="1" force_unmount="1" fsid="12687" fstype="ext3" mountpoint="/usr/aethos/suppimpdb" name="/usr/aethos/suppimpdb" options="" self_fence="1"/>
<script file="/etc/cluster/db/informix.sh" name="db_init"/>
<script file="/etc/cluster/appcore/appcore.sh" name="appcore_init"/>
<script file="/etc/cluster/tint/tint1.sh" name="tint1.sh_init"/>
<script file="/etc/cluster/filter/filter1.sh" name="filter1_init"/>
</resources>
<service autostart="1" domain="db0" name="db">
<ip ref="10.14.236.230"/>
<script ref="db_init"/>
</service>
<service autostart="1" domain="app0" name="appcore">
<ip ref="10.14.236.231"/>
<script ref="appcore_init"/>
</service>
<service autostart="1" domain="app0" name="tint1">
<script ref="tint1.sh_init"/>
<ip ref="10.14.236.232"/>
</service>
<service autostart="1" domain="app0" name="filter1">
<script ref="filter1_init"/>
<ip ref="10.14.236.234"/>
</service>
</rm>
</cluster>
A.2 Sample Informix script (/etc/cluster/db/informix.sh)
#!/bin/bash -x
# INFORMIXSERVER is expected to be set in the environment (for example via
# the informix user's profile); it is used here only in log messages.
export LOG=/var/log/clusterdb.log
prog=informix
start () {
echo -n $"Starting $prog: "
echo "Starting $INFORMIXSERVER Server `date`" >>$LOG
/bin/su - informix -c "oninit -vy" >>$LOG 2>&1
RETVAL=$?
echo "Server $NFORMIXSERVER Started via /etc/cluster/db/informix `date`">>$LOG
sleep 5
/bin/su - informix -c "onstat -" >>$LOG
[ $RETVAL = 0 ] && touch /var/lock/subsys/$prog
return $RETVAL
}
stop () {
echo -n "Stopping $prog: "
echo "Shutting Down $INFORMIXSERVER Server `date`" >>$LOG
/bin/su - informix -c "onmode -ky"
RETVAL=$?
echo "Server $INFORMIXSERVER Stopped via /etc/cluster/db/informix `date`">>$LOG
/bin/su - informix -c "onstat -" >>$LOG
RETVAL=$?
echo
[ $RETVAL = 0 ] && rm -f /var/lock/subsys/$prog
return $RETVAL
}
restart() {
stop
start
}
status() {
/bin/su - informix -c "onstat -"
RETVAL=0
}
echo "******************************************************" >>$LOG
echo "******************************************************" >>$LOG
echo "PATH=$PATH">>$LOG
case $1 in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
status)
status
;;
*)
echo "Usage: $DAEMON {start|stop|restart|status}"
exit 1
esac
exit $RETVAL
Glossary and Abbreviations
Term Description
CDR Call Data/Detail Record
DB Database
HP Hewlett Packard
DLM Distributed Lock Manager
GFS Global File System
ILO Integrated Lights-Out
RHEL Red Hat Enterprise Linux
BM Bundle Manager
References
Referenced Document Document Number Version Source
Document Title Document Identifier or ISBN Number for external references.
Referenced version number.
Intranet URL / external website URL / name of publishing company, etc.
Version History
Version Status Date Details of Changes Author(s)
1.0 DRAFT 22-10-2007 Initial version Sijo Jose
2.0 DRAFT 30-11-2007 Managing cluster services section added. Updated the cluster rpm information.
Sijo Jose