Sun Cluster 3.0 Series: Guide to Installation
8/13/2019 Sun Cluster 3.0 Series Guide to Installation ok.doc
1/81
Part one of a two-part series, this article guides the reader through preparation and setup prior to deployment of a Sun Cluster system. Sun's preferred methodology for installing Sun Cluster software, the Enterprise Install Services (EIS) process, is presented.
The purpose of this module is to guide you through the tasks you must perform before you install the Sun Cluster (SC) 3.0 software. These tasks include setting up the administrative workstation and configuring the Sun Cluster 3.0 hardware components.
The exercises in this module explain how to install and configure a workstation to perform Sun Cluster 3.0 software administrative functions in a cluster environment. Additionally, we provide instructions for configuring the cluster, implementing best practices, and performing design verifications, as well as administering a two-node cluster.
For information about managing a cluster, refer to the documents listed in the References section, specifically:

System Administration Guide (Preparing to Administer the Cluster, Beginning to Administer the Cluster, Administering the Cluster, and Administering Sun Cluster with the Graphical User Interface)

SC3.0 U1 Cluster Concepts (Cluster Administration and Application Development)

Many of the steps in this guide refer to manual (local) procedures, which you should perform only if you have local (physical) access to the SunPlex platform. An example is resetting the terminal concentrator (TC) in order to activate specific settings.
!"#ectives
"fter completing this module# you will hae successfully erified the installation and software
configuration for each cluster component. These tasks must be performed before you can install
the Sun Cluster 3.0 software on each cluster node. "dditionally# you will hae implemented the
associated key practices during each task# including'
Configuring the hardware and cabling.
Configuring the Solaris perating 1nironment (Solaris 1) and installing patches on
each cluster node. This task is site specific.
2erifying the management serer setup on the administratie workstation.
2erifying the terminal concentrator configuration.
-nstalling and configuring the Cluster Console utili ty.
ost$-nstallation and preparation for Sun Cluster software installation.
These can be combined, along with Sun Cluster 3.0 software administrative functions appropriate to your implementation.
In this module, we describe the hardware configuration and the procedures used to install the administrative workstation, clustadm. We explain how to confirm that all requirements are met, verify that all cluster components (including patches) are installed correctly, and ensure that the shell environment is configured correctly.
Enterprise Installation Services Standard Installation Practices
Sun Cluster installations must conform to the EIS installation standards that are current at the time of the installation. For this module, we have previously gathered all configuration information and "data-layout" criteria required to successfully complete the installation. Using the EIS process, this information is typically provided on the completed EIS Install Specification or equivalent documents.
For local (manual) installations, you must obtain and implement all current EIS standards when installing each node in the SunPlex platform. EIS installation checklists, shell scripts, the Explorer and ACT software, and the EIS-CD itself all follow a separate revision cycle. The EIS-CD revision used during any new installation includes a "date check" to ensure that your EIS-CD is current. For example, if you are using an outdated EIS-CD during the installation, you will be notified that the EIS-CD you are using is ". . . more than xx days old . . . ." For local (manual) installations, you should always use the most recent EIS-CD before installing any system.
This guide, including all steps and procedures, represents a specific Sun Cluster implementation, which conforms to the EIS standards current at the time of this writing. Hence, during reconstruction of the lab cluster nodes, any warnings indicating that the EIS-CD is "out of date" must be accepted, since we have frozen the implementation in order to guarantee a successful configuration "as tested".
Installation and Planning Considerations
New installations that are well planned and well executed are critical to ensuring reliability and, ultimately, availability. Reducing system outages involves using proven methods (that is, well-documented techniques, applications, components, configurations, operating policies, and procedures) when configuring highly available platforms. This includes minimizing all single points of failure (SPOFs), and documenting any SPOFs that could occur, along with any associated best practices.
The following points can contribute to successful configurations, assist in sustaining daily operations, and help maximize platform availability and performance:
Ensure SC 3.0 administrators are highly trained and able to successfully test and conduct cluster failover operations for each highly available (HA) application and its associated systems and subsystems, including fault isolation.
Figure 1 and Table 1 through Table 5 represent the Sun Cluster hardware configuration used for this module, which specifies two or more Sun servers connected by means of a private network. Each server can access the same application data using multi-ported (shared) disk storage and shared network resources, thereby enabling either cluster node to inherit an application after its primary server becomes unable to provide those services.
Refer to Figure 1, which describes the SC 3.0 lab hardware implementation, and Table 1 through Table 5, which define each connection.
Final verification of the cluster hardware configuration will be confirmed only after the required software has been installed and configured, and failover operations have been tested successfully.
Cable Configuration

Figure 1 Cable Configuration
NOTE
In the previous illustration: c1 = PCI3; c2 = PCI A. The D1000s include t0, t1, t2, t3, t4, and t10. Utilize spare Ethernet ports by configuring additional private interconnects (that is, use crossover cables between qfe3 and qfe7, as indicated).
Cable Connections

Table 1 through Table 5 list the required cable connections.
Table 1 Server-to-Storage Connections

From Device   From Location   To Device   To Location   Cable Label
E220R #1      SCSI A (PCI3)   D1000 #1    SCSI A        C3/1 - C3/3A
E220R #2      SCSI A (PCI3)   D1000 #1    SCSI B        C3/1 - C3/3B
E220R #1      SCSI A (PCI)    D1000 #2    SCSI A        C3/2 - C3/3A
E220R #2      SCSI A (PCI)    D1000 #2    SCSI B        C3/2 - C3/3B
Table 2 Private Network Connections

From Device   From Location   To Device   To Location   Cable Label
E220R #1      qfe0            E220R #2    qfe0          C3/1 - C3/2A
E220R #1      qfe             E220R #2    qfe           C3/1 - C3/2B
Table 3 Public Network Connections

From Device   From Location   To Device   To Location   Cable Label
E220R #1      hme0            Hub #00     Port #2       C3/1 - C3/5A
E220R #2      qfe1            Hub #01     Port #3       C3/1 - C3/6A
E220R #1      hme0            Hub #01     Port #2       C3/1 - C3/6A
E220R #2      qfe1            Hub #00     Port #3       C3/2 - C3/6A
Table 4 Terminal Concentrator Connections

From Device             From Location   To Device               To Location   Cable Label
E220R #1                Serial Port A   Terminal Concentrator   Port #2       C3/1 - C3/A
E220R #2                Serial Port A   Terminal Concentrator   Port #3       C3/2 - C3/A
Terminal Concentrator   Ethernet Port   Hub #00                 Port #1       C3/ - C3/5A
Table 5 Administrative Workstation Connections

From Device                  From Location   To Device               To Location   Cable Label
Administrative Workstation   hme0            Hub #00                 Port #        F2/1 - C3/5A
Administrative Workstation   Serial Port A   Terminal Concentrator   Port #1 **    F2/1 - C3/5B
NOTE
The Cable Label column in Table 1 through Table 5 assumes the equipment is located in a specific grid location, for example C3. The number following the grid location identifies the stacking level for that piece of equipment, with 1 being the lowest level. The letter at the end of the label tag indicates how many cables terminate at that level; for example, the letter A indicates one cable, B indicates two cables, and so on. Also, the label tag F2 is the grid location of the administrative workstation. The cable with ** in the To Location column is connected only when configuring the terminal concentrator.
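The label convention above is mechanical enough to decode in a few lines of shell. The following sketch is our own illustration, not part of the EIS standard: the helper name decode_label and the ASCII arithmetic for the trailing letter are assumptions.

```shell
#!/bin/sh
# Sketch: decode a cable-label tag using the convention described above
# (grid location, then stacking level, then a letter giving the cable count).
decode_label() {
    grid=$(echo "$1" | sed 's;/.*;;')                 # text before the "/"
    level=$(echo "$1" | sed 's;.*/\([0-9]*\).*;\1;')  # digits after the "/"
    letter=$(echo "$1" | sed 's;.*[0-9];;')           # trailing letter
    count=$(( $(printf '%d' "'$letter") - 64 ))       # A=1 cable, B=2, ...
    echo "$grid level $level: $count cable(s)"
}
decode_label "C3/3A"
decode_label "C3/2B"
```

Reading a full label such as C3/1 - C3/3A therefore means: one cable running from grid C3, level 1, to grid C3, level 3.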
Architectural Limitations
The Sun Cluster 3.0 architecture is able to provide the highest levels of availability for hardware, the operating system, and applications without compromising data integrity. The Sun Cluster environment (that is, hardware, operating environment, Sun Cluster framework, and API applications) can be customized to create highly available applications.
No Single Points of Failure
Multiple faults occurring within the same cluster platform (environment) can result in unplanned downtime. A SPOF can exist within, say, the software application architecture. For the E220R, a SPOF for a single cluster node might be the embedded boot controller, or even a memory module.
The basic Sun Cluster configuration based on the Sun Enterprise Server Model 220R can be configured as an entry-level platform providing no SPOFs for the cluster pair.
Configuring Clusters for HA: Planning Considerations
The primary configuration considerations include:
Number of logical hosts per node (including their agents, agent interoperability, and service-level requirements)
Type of volume manager
Disk striping and layout
File systems versus raw-device database storage
Performance (local storage versus GFS considerations)
Network infrastructure requirements and redundancy
Client failover strategy
Logical host failover method (manual vs. automatic)
Naming conventions, such as host ID, disk label, disk groups, metasets, and mount points
Normal (sustaining) operations policies and procedures
Backup and recovery procedures for the SunPlex platform
Section . Solaris Configuration (clustadm)

This section describes the steps necessary to install Solaris (plus patches) on the Sun Cluster administrative workstation (clustadm). The same version of the Solaris OE must run on the clustadm workstation and on each of the cluster nodes. The workstation is used for the installation and for basic cluster operations.
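Because the same Solaris OE version must run everywhere, it is worth comparing the first line of /etc/release across systems before going further. The sketch below mocks the collected files locally for illustration; in practice you would copy /etc/release from each host (for example, over rsh or ssh), and the release string shown is an assumption, not the lab's actual value.

```shell
#!/bin/sh
# Sketch: verify that clustadm and both cluster nodes report the same
# Solaris release. The files below are mocked for illustration; normally
# each would hold the first line of that host's /etc/release.
DIR=/tmp/release-check
mkdir -p "$DIR"
echo "Solaris 8 (mock release string) SPARC" > "$DIR/clustadm"
echo "Solaris 8 (mock release string) SPARC" > "$DIR/clustnode1"
echo "Solaris 8 (mock release string) SPARC" > "$DIR/clustnode2"

ref=$(cat "$DIR/clustadm")
status=OK
for host in clustnode1 clustnode2; do
    [ "$(cat "$DIR/$host")" = "$ref" ] || status="MISMATCH on $host"
done
echo "release check: $status"
```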
NOTE
At this time, it is assumed that all systems and subsystems have been powered on, and the SunPlex platform is fully configured (as per Figure 1 and Table 1 through Table 5). All hardware components are operational.
Key Practice: Ensure all firmware is installed with the most recent (supported) versions for all systems and subsystems, including all servers, disk arrays, controllers, and terminal concentrators. For example, on each node in the SunPlex platform, ensure the system EEPROM contains the most recent (supported) version of OpenBoot PROM (OBP), such as OpenBoot 3.15.
Step ..
For local (manual) installations, it is a good idea to ensure that the clustadm workstation is configured with the most recent version of the OBP. Information about downloading can be obtained from SunSolve, at http:
Set Subnet        Yes
Subnet Mask       255.255.255.0
Default Gateway   None
NOTE
The values quoted in the previous table are sample values. For local (manual) installations, substitute the appropriate site-specific values, as provided in the EIS installation documentation.
Step ..4
Verify that the following partitioning guidelines have been implemented for the clustadm workstation: reserve space for use by a volume manager, allocate 2 Gbytes (slice 1) for swap space, and then assign the unallocated space to / (root, on slice 0).
NOTE
It is often easier to use the Solaris format command to calculate the exact number of even cylinders to be configured, as when determining the size of the root file system.
Configure boot disk slices using the following guidelines:

Slice 0   /       remaining space
Slice 1   swap    2 Gbytes

Slices 3 and 4 must be unassigned. Some customer applications require use of a slice. The alternate boot environment (Live Upgrade) requires one slice.
NOTE
Installing the entire Solaris media kit can require nearly 13 Gbytes. Assuming 36-Gbyte disk drives, this should be more than sufficient for the boot disk layout proposed here.
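The "even cylinders" note above is simple integer arithmetic: a slice must be a whole number of cylinders, so a requested size is rounded up to the next cylinder boundary. A sketch follows, assuming an illustrative 27-head, 107-sector geometry (your disk's real geometry comes from format):

```shell
#!/bin/sh
# Sketch: round a requested slice size up to whole cylinders, as format(1M)
# does internally. The 27-head/107-sector geometry is an assumed example.
NHEAD=27; NSECT=107; BYTES_PER_SECT=512
CYL_BYTES=$(( NHEAD * NSECT * BYTES_PER_SECT ))   # bytes per cylinder

WANT_MB=2048                                      # the 2-Gbyte swap slice
WANT_BYTES=$(( WANT_MB * 1024 * 1024 ))

# Integer ceiling: cylinders needed to hold WANT_BYTES.
CYLS=$(( (WANT_BYTES + CYL_BYTES - 1) / CYL_BYTES ))
echo "$WANT_MB MB -> $CYLS cylinders ($CYL_BYTES bytes each)"
```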
Key Practice: Verify that the Solaris OE installation was successful and that any errors reported are resolved before proceeding. Review the /var/sadm/README file to determine the location of the most recent installation logs (for example, /var/sadm/system/logs).
Step ..9
Change to the eis-cd/sun/install directory, where the setup-standards.sh script is located. The setup-standards.sh script sets up the EIS environment and adds the Explorer and ACT software. Additionally, this script configures the root user and shell environment, and dynamically configures your path based upon your actual configuration.
Enter the following commands to set up the EIS environment on the clustadm workstation:

# cd /cdrom/eis-cd/sun/install
# sh setup-standards.sh

Many prompts provide a default value. Accept all default values, except as noted for the following:
Enter n when asked if you want to enable the email panic facility during ACT installation (into the /opt/CTEact directory).
When prompted for the SUNWexplo package, enter all site-specific information, as appropriate.
Enter a single - (dash) when asked: Would you like explorer output to be sent to alternate email addresses at the completion of explorer?
Enter n when asked if you wish to run explorer once a week. When prompted, do NOT run /opt/SUNWexplo/bin/explorer -q -e at this time. We will use this tool to gather important cluster configuration data after the installation has been completed.
For /opt/SUNWexplo, type y when asked if you want this directory created now. Enter y to proceed with the installation of explorer.

Upon completion, you should see a message indicating that the /.profile was created/modified, along with notification that the installation of SUNWexplo was successful.
Step ..:
Activate the EIS environment by invoking a new shell, or by simply executing "su -" at this time. If you are connected to the /dev/console port, log off and then log on again.
Verify that all settings and environment variables are configured, as required.
Example: Environment Variables (settings) - clustadm
Variable   Setting
TERM       ansi or vt100
stty       istrip
prompt     root@<hostname>#

Ensure the following ...data/Solaris_2.8_Recommended_lo

[output omitted]

clustadm login:
Step .3.0
Create the /etc/clusters file with an entry for this cluster, in the following format:

<name of cluster> <node1 name> <node2 name>

where:

name of cluster = nhl
node1 name = clustnode1
node2 name = clustnode2

On the clustadm workstation, create the /etc/clusters file, configuring clustnode1 and clustnode2. Verify that the /etc/clusters file is correct:

root@clustadm# cat /etc/clusters
nhl clustnode1 clustnode2
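The file format can be staged and checked non-destructively before touching /etc. In this sketch the scratch directory and the node-count check are our own additions; the cluster and node names come from the example above.

```shell
#!/bin/sh
# Sketch: stage the /etc/clusters entry in a scratch directory and verify
# that cluster "nhl" lists exactly two nodes. Writing under /tmp instead
# of /etc is purely for safe illustration.
ETC=/tmp/clustadm-etc
mkdir -p "$ETC"

# Format: <name of cluster> <node1 name> <node2 name>
echo "nhl clustnode1 clustnode2" > "$ETC/clusters"

NODES=$(awk '$1 == "nhl" { print NF - 1 }' "$ETC/clusters")
echo "cluster nhl has $NODES nodes"
```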
Step .3.2
Next, configure the /etc/serialports file on the clustadm workstation, enabling a connection to the terminal concentrator through the ttya (serial) port. Create the /etc/serialports file as shown in the following code box.
At the clustadm workstation, verify that the /etc/serialports file is correct, as follows:

root@clustadm# cat /etc/serialports
clustnode1 tc 5002
clustnode2 tc 5003
NOTE
In this example, the terminal concentrator (tc) has eight ports, numbered 1 through 8. The 5002 entry refers to terminal concentrator physical port 2, and 5003 refers to physical port 3. The tc entry must correspond to the host name entry in /etc/inet/hosts.
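Since the telnet port is simply 5000 plus the physical port number (5002 for port 2, as the note explains), the /etc/serialports entries can be generated rather than typed. A sketch follows; the scratch path and the loop are illustrative, while the node/port pairs come from Table 4.

```shell
#!/bin/sh
# Sketch: generate /etc/serialports entries from terminal-concentrator
# physical ports, using the 5000+N telnet-port mapping described above.
# The /tmp staging path is illustrative.
ETC=/tmp/clustadm-etc
mkdir -p "$ETC"
: > "$ETC/serialports"

# "<node> <TC physical port>" pairs from Table 4
for entry in "clustnode1 2" "clustnode2 3"; do
    set -- $entry
    echo "$1 tc $(( 5000 + $2 ))" >> "$ETC/serialports"
done
cat "$ETC/serialports"
```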
Section .4 Configure the Terminal Concentrator

Some steps for configuring the terminal concentrator require physical access to the terminal concentrator and equipment, for example, forcing the terminal concentrator into "Monitor" mode. The following steps are included only when performing these steps locally, as during a manual installation.
For local (manual) installations, when configuring the Annex terminal server, refer to the EIS-CD, under sun/docs/ISC
Step .4.
For local (manual) installations, prior to connecting the serial cable:

Ensure that the terminal concentrator power is off.
Connect the cable, noting that serial port ttyb is the default serial port you will be using on the clustadm workstation.

As described in Figure 1 and Table 1 through Table 5, a serial cable must be connected from a serial (ttya or b) port on the clustadm workstation to port 1 on the terminal concentrator. Port 1 (the configuration port) of the terminal concentrator is required for performing all local (manual) steps.
NOTE
The next step is not implemented for these labs. Also, Figure 1 and Table 1 through Table 5 indicate that serial port A of the clustadm workstation is connected to the terminal concentrator (port 1) instead of the default ttyb.
Step .4.0
For local (manual) installations, use the UNIX tip command to communicate with the terminal concentrator during configuration.

NOTE
Before the tip command will work, ensure that the /etc/remote file includes the following lines (appended to the end of the file):
annexterm:\
        :dv=/dev/term/n:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:

[In the above line, substitute the serial port letter you are using for "n" - for example, if using ttyb, replace "n" with "b", as /dev/term/b]

An easy way to create this entry is to simply copy the lines from the hardwire entry, then change the entry name from hardwire to annexterm, ensuring that the port letter is correct.
The tip(1) command can be used to connect the clustadm workstation console to the terminal concentrator.
NOTE
For the next few steps to configure the terminal concentrator, the settings should be configurable as listed. However, terminal concentrator firmware settings vary from unit revision to unit revision, and your actual options may differ from those specified in this lab. When configuring the terminal concentrator, refer to the manufacturer's documentation to ensure that settings are established correctly. Specifically, ensure that the settings for the terminal concentrator Internet address, subnet mask, and broadcast address are as indicated in the following steps.
Step .4.4
For local (manual) installations, when the diagnostic tests are completed, the tip window of the clustadm workstation should display:

System Reset - Entering Monitor Mode
monitor::
Step .4.7
For local (manual) installations, we will use the addr command to set the Internet address, subnet mask, and broadcast address for the terminal concentrator.
Enter the following commands:

monitor:: addr
Enter Internet address [<current value>]:: 192.9.200.2
Enter Subnet mask [255.255.255.0]:: <CR>
Enter Preferred load host Internet address [<any host>]:: <CR>
Enter Broadcast address [<current value>]:: 192.9.200.255
Enter Preferred dump address [0.0.0.0]:: <CR>
Select type of IP packet encapsulation (ieee802/ethernet) [<ethernet>]:: <CR>
Type of IP packet encapsulation: <ethernet>
Load Broadcast Y/N [N]:: <CR>
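The values entered at the addr prompts must agree with one another: the broadcast address is the Internet address with all host bits set. This sketch recomputes the broadcast from the sample address and mask above; the awk arithmetic assumes a contiguous netmask, and the addresses are the lab sample values.

```shell
#!/bin/sh
# Sketch: recompute the broadcast address from the sample Internet address
# and subnet mask entered above, to confirm the values are consistent.
IP="192.9.200.2"; MASK="255.255.255.0"

BCAST=$(echo "$IP $MASK" | awk '{
    split($1, a, "."); split($2, m, ".")
    for (i = 1; i <= 4; i++) {
        step = 256 - m[i]                      # host-block size in this octet
        b[i] = a[i] - (a[i] % step) + step - 1 # set all host bits in octet
    }
    printf "%d.%d.%d.%d", b[1], b[2], b[3], b[4]
}')
echo "broadcast for $IP/$MASK is $BCAST"
```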
monitor:: sequence

At this point you need to enter a list of one or more interfaces to attempt to use for downloading code or upline dumping.
Power on the terminal concentrator and wait approximately 40 seconds for it to configure.
Key Practice: Because port 1 of the terminal concentrator is the configuration port, minimize security vulnerability by disconnecting port 1 of the terminal concentrator after configuration. This will prevent unauthorized access to the terminal concentrator's configuration port.
Step .4.9
For local (manual) installations, if the terminal concentrator requires access from an adjacent network, the defaultrouter configuration must be performed on each cluster node. This is performed later, after the Solaris OE installation has completed on each cluster node. At that time, configure the default router information on the cluster nodes by performing the following steps.
Create the file /etc/defaultrouter and insert the IP address of your gateway. Example:

192.9.200.25    (sample)
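A quick sanity check helps here, since a stray second line or a hostname in /etc/defaultrouter is an easy mistake to make. In this sketch the scratch path is illustrative and the gateway is the sample value from the text:

```shell
#!/bin/sh
# Sketch: stage /etc/defaultrouter with the sample gateway and confirm it
# holds exactly one dotted-quad address. /tmp staging is illustrative.
ETC=/tmp/node-etc
mkdir -p "$ETC"
echo "192.9.200.25" > "$ETC/defaultrouter"

LINES=$(wc -l < "$ETC/defaultrouter")
if grep -q '^[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*$' "$ETC/defaultrouter" \
   && [ "$LINES" -eq 1 ]; then
    echo "defaultrouter OK"
else
    echo "defaultrouter malformed"
fi
```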
For local (manual) installations, complete the configuration of the terminal concentrator by entering the following commands. Enter the data as shown where prompted:

root@clustadm# telnet tc
Trying 192.9.200.2 ...
Connected to tc.
Escape character is '^]'.

Annex Command Line Interpreter ... Copyright Xylogics, Inc.
annex:
You may need to reset the appropriate port or Annex subsystem, or reboot the Annex, for changes to take effect.
During this section you will start the Cluster Control Panel by entering the ccp command for the cluster named nhl. Please read this entire step before entering any commands. After starting the Cluster Control Panel, you will double-click the cconsole icon.
At this time, verify that each cluster node is accessible to the clustadm workstation by starting the Cluster Console Panel and accessing the cluster consoles for each cluster node.
Step .7.
If you are accessing the clustadm workstation from a remote system, execute the xhost + command, enabling remote display from the clustadm workstation to your local system.
When accessing the clustadm workstation remotely, you must also set the DISPLAY environment variable on the clustadm workstation to point to your local system. For example, for csh users: setenv DISPLAY yoursystem:0.0

NOTE
This step can be performed when accessing the SunPlex platform from a remote workstation. It is often useful to access the Cluster Control Panel (ccp) remotely, as appropriate, or when configuring (administering) the Sun Cluster.

At this time, you must set the DISPLAY variable before invoking the CCP. First, on your local workstation (example only):

yoursystem# /usr/openwin/bin/xhost + clustadm

Next, on clustadm (note: replace yoursystem with your local system name):

root@clustadm# setenv DISPLAY yoursystem:0.0

Step .7.0
Enter the following commands on the clustadm workstation:

root@clustadm# which ccp
/opt/SUNWcluster/bin/ccp
root@clustadm# ccp nhl &

When the ccp command is executed, the Cluster Control Panel window will appear. Verify that a menu bar and icon panel display all of the available tools, as listed:

Cluster Console, console mode
Cluster Console, rlogin mode
Cluster Console, telnet mode.
Example: Cluster Control Panel Window

Figure 2 Cluster Control Panel Window
Step .7.2
Refer to the preceding figure. Double-click the Cluster Console (console mode) icon (circled) to display the cluster console. An example cluster console is shown in the following figure.
In this example, three windows are displayed: one small Cluster Console window, and two larger cconsole: host <name> windows. Note that each of the larger windows is associated with a specific host, or cluster node.

Example: Cluster Console (console mode) and cconsole Windows

Figure 3 Cluster Console and cconsole Windows
CAUTION
The Cluster Console utility provides a method of entering commands into multiple cluster nodes simultaneously (or individually, as required). Always be aware of which window is active prior to entering commands. If a cconsole window does NOT appear for a cluster node, verify the following: From the Cluster Console window (console mode), select Hosts, followed by Select Hosts. Next, verify (insert) an entry for each cluster node (for example, clustnode1, clustnode2).
"t this time# arrange each window for your own personal iewing preferences. ,ngroup the Cluster
Consolewindow from the cconsole$ host %name&windows. Select (ptionsfrom the menu (Cluster
Console window)# and uncheck Group )erm *indo#s.
%or e!ample# arrange the cc$*s$lewindows to be able to see each window clearly and at the
same time by moing the Cluster Console window away from the other cluster node windows. This
is done to ensure that commands are entered correctly into one# or both nodes# as reuired during
these e!ercises (and to preent entering commands into the wrong window).
-t is ;T necessary to do so at this time# but when you wish to close the Cluster Console window#
select !xitfrom the Cluster Console window 9osts menu.
NOTE
It is NOT necessary to do so at this time, but if you need to issue a Stop-A command to each cluster node, simultaneously placing them in the OBP mode, use the following procedure for the Annex terminal server. First, activate the Cluster Console window, then press the Ctrl+] keys. This will display a telnet prompt for each cluster node.
At the telnet prompt, enter the send brk command, which will issue a Stop-A to each cluster node (placing them at the ok prompt).
Step .7.3
Verify operations using the Cluster Console Panel (CCP) utility by logging in to each cluster node. Begin configuring each system that will operate as a cluster node.
Log in as root from the cconsole: host <hostname> window on each cluster node:

clustnode1 console login:
For local (manual) installations, it is a good idea to ensure that no previous EEPROM settings exist by setting the EEPROM to a known state (that is, factory defaults). It is recommended that this be performed only once, at this point in the procedure, prior to customizing the system EEPROM to meet cluster requirements (and BEFORE installing any software). Reset the system EEPROM to its factory defaults.
For local (manual) installations, enter the following OBP command on each cluster node:

ok set-defaults

Using the set-defaults command at this step establishes a consistent, known (default) state of all OBP variables prior to customizing the OBP environment.
CAUTION
Resetting the system EEPROM should only be performed at this time, during the initial preparation for the Solaris OE installation. This command resets all EEPROM (OBP) variables to their factory default values. All subsequent steps assume the EEPROM has been reset (at this point in the exercise). During the next few steps, the EEPROM will be modified (customized).
Key Practice: Ensure a consistent state on each cluster node before proceeding to configure site-specific (customized) OBP settings. Prior to implementing any configuration changes, and as part of initial Solaris OE installation preparations, reset the EEPROM to the factory defaults. This is done only once, at this point in the procedure, and will easily and quickly ensure that a consistent state is achieved before further customization occurs.
NOTE
For local (manual) installations, prior to installing Solaris, we will reconfigure the OBP settings for each cluster node. This is achieved by executing commands at the OBP ok prompt (the ok prompt should be viewable through the Cluster Control Panel windows).
Step .8.2
On each cluster node, execute the OBP banner command to verify system information, such as the system model number, OBP version, Ethernet address, hostid, and serial number.

ok banner

Each node will respond with configuration information. Document the system information for each cluster node.
Key Practice: Until the EEPROM configuration has been completed, you should disable the auto-boot EEPROM feature on each cluster node. Disabling the auto-boot feature will alleviate any problems that could arise if both systems attempted to boot their Solaris OEs while both are set with the same, and therefore conflicting, SCSI-initiator ID settings.
We temporarily disable auto-boot? on each cluster node during this phase of the installation. We do this because, as yet, the system has not been configured. If there is an accidental reboot of a node, and the system auto-boot? variable has been set to FALSE, the system will stop at the OBP prompt instead of attempting to boot from disk. At this phase, any attempt to boot from disk may require an administrator to manually return the system to the OBP for further configuration changes.
NOTE
You will be instructed to re-enable auto-boot? at the end of this procedure.

Step .8.3
Disable auto-boot? by entering the following command on each cluster node:

ok setenv auto-boot? false
auto-boot? = false
Step .8.4
On each cluster node, set the following OBP variables, as indicated:

ok setenv local-mac-address? false
local-mac-address? = false
ok setenv dia
NOTE
To resolve this conflict, in the NEXT step we will explicitly set the SCSI-initiator-id of clustnode2's internal SCSI controller to a value of "7" by entering a simple script into clustnode2's non-volatile RAM, or nvramrc.
At this time, enter the following command in the cconsole: host clustnode2 window:

ok setenv scsi-initiator-id 6
scsi-initiator-id = 6
NOTE
SCSI-initiator-id modification: Refer to Figure 1 and Table 1 through Table 5, specifically noting the disk subsystem cabling and configuration. Because two cluster nodes (both Sun Enterprise Model 220R servers) are connected to the same pair of Sun StorEdge D1000s, the OBP settings require modification. We will set the global SCSI-initiator-id on one of the cluster nodes (clustnode2 in this exercise) to a value of 6, and insert a script into clustnode2's nvramrc (non-volatile memory) to maintain a SCSI-initiator-id of 7 on clustnode2's internal SCSI controller. Setting the clustnode2 global SCSI-initiator-id to 6 prevents a conflict on the shared SCSI bus that connects both Sun Enterprise 220Rs to the Sun StorEdge D1000s.
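The nvramrc script itself is entered with nvedit (described next). A typical sketch of such a script follows; the /pci@1f,4000/scsi@3 device path is an assumption for an E220R internal controller - substitute the actual path of clustnode2's internal SCSI controller (from show-devs or probe-scsi-all). Note that the space after the opening quote in " scsi-initiator-id" is required Forth syntax.

```
ok nvedit
  0: probe-all
  1: cd /pci@1f,4000/scsi@3
  2: 7 encode-int " scsi-initiator-id" property
  3: device-end
  4: install-console
  5: banner
  6: <Ctrl-C>
ok nvstore
ok setenv use-nvramrc? true
```

With the global scsi-initiator-id at 6 and this override restoring 7 on the internal controller only, the shared bus sees initiators 6 and 7, while each node's internal bus keeps its default.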
Use the OBP nvedit command in the following procedure. The nvramrc editor is always in insert mode. Use the following keystrokes when editing (refer to the following).

Using nvedit: Keystrokes

Keystroke   Action
Ctrl-B      Move backward one character.
Ctrl-C      Exit the nvramrc editor, returning to the OpenBoot PROM command
            interpreter. The temporary buffer is preserved, but is not written
            back to the nvramrc. (Use nvstore afterwards to write it back.)
Delete      Delete the previous character.
Ctrl-F      Move forward one character.
Ctrl-K      From the current position in a line, delete all text after the
            cursor and join the next line to the current line (that is,
            delete the new line).
Ctrl-L      List all lines.
Ctrl-N      Move to the next line of the nvramrc editing