BECU, 5th Largest Credit Union, Achieves Success with HP-UX 11i v2 and HP Integrity Servers
Scott Wolfe, IT Enterprise Architect, BECU
Dan Chow, Enterprise Systems Engineer, BECU
2
Agenda & Attendee Benefits

Agenda
• Who is BECU?
• Our Business Changes & IT Challenges
• Our Approach & Implementation
• The Results

Attendee Benefits
1. Gain insights on how BECU overcame both IT and business challenges, especially for financial and SAP environments.
2. Obtain in-depth design and implementation details on how BECU designed, tested, and implemented HP-UX 11i v2/HP Integrity alongside existing HP Alpha environments to work seamlessly together.
3. Understand how HP-UX 11i v2 and HP Integrity can be tuned and integrated into an existing environment without disruption or failures.
3
Who is BECU?
• not Boeing Employees Credit Union
• largest credit union in Washington State
• 5th largest credit union in United States
• serving members since 1935
• 400,000 members; 2 million accounts
• $5 billion in assets
• full-service IT shop
4
Business Changes & IT Challenges
• BECU poised for rapid growth
− deploying two Express Service Centers per month
− projections demonstrated database performance limitations within 12-18 months
• Service availability was not meeting our business requirements
− redundant, stand-alone systems with manual recovery
− excessive hardware failures and downtime
• Desire to maintain current and well-supported technology
− end-of-life announced for RISC and Alpha chipsets
− Itanium architecture meets our computing needs
5
Deciding on the Right Investment
• BECU is a 'buy, don't make' IT shop
− standards-based hardware, software and applications
− stay current with technology trends
• HP has proven to be a successful strategic partner
6
Previous environment: Alpha / Oracle Database Server Infrastructure
[Diagram: Tukwila and Spokane sites, linked by SharePlex replication over redundant SAN switch fabrics (switch blades 4 and 6)]
• Production / STBY: GS160, (16) active CPUs, 32GB memory, Tru64 v5.1A, Oracle 9.1.0.4
• Disaster Recovery: GS80, (4) active CPUs, 32GB memory, Tru64 v5.1A, Oracle 9.1.0.4
• MEND / Dev / PreProd1: ES40, (4) active CPUs, 8GB memory, Tru64 v5.1A, Oracle 9.1.0.4
• Intgrt / Function / PreProd2 / test: GS80, (4) active CPUs, 32GB memory, Tru64 v5.1A
• Storage: EMA 34TB raw ((65) 76GB + (19) 36GB disks), EMA 28TB raw ((112) 72GB), EMA 45TB raw ((64) 72GB + (30) 36GB), EMA 112TB raw ((84) 144GB), EVA 18TB raw ((112) 72GB)
• Application tier: (2) 4600 load-balanced application servers, (4) 900MHz P3 CPUs, 4GB memory, Windows 2000, OSI application and OSI batch; Dell PowerEdge 2650 servers
7
[Diagram: HP Integrity rx5670 internal layout: four CPUs (CPU0-CPU3) around the HP ZX1 chipset, memory carrier plus optional carrier, (4) U320 disk slots, DVD/DDS drive, (3) power supplies (PS0-PS2), and (12) PCI slots (PCI 66/64, 133/64, 33/64) with LAN, console, 3x RS-232, U2-LVD SCSI, and 2x SCSI ports]
IT Infrastructure Core Database Roadmap, 2004-2007
[Diagram: quarterly roadmap (Q1 2004 through Q4 2007) across five tracks, with options A (Alpha), B (Itanium), and C (PA-RISC)]
• Hardware: Wildfire GS160, then Marvel GS1280, then Integrity (Itanium), then Integrity (dual-core Itanium)
• Operating system: Tru64, then HP-UX 11i v2, then HP-UX 11i v3 (with Tru64 integration); Linux as an alternative
• Database: Oracle 9i RAC, then Oracle 10g Cluster
• Application: OSI 3.8.10, then OSI 3.9 (OSI 10g certified), then OSI 3.10
• Clustering: TruCluster on Tru64, then TruCluster on HP-UX (TruServiceGuard)
The diagram compares candidate configurations for each role (Production, RPTS, MEND, Dev, test, Disaster Recovery), all on EVA 5000 storage:
• Alpha path: (2) GS1280, (8) EV7 CPUs, Tru64 5.1B-1, TruCluster?, Oracle 9i RAC?, OSI 3.9; GS80 servers with (4) or (8) EV68 CPUs; (2) ES40, (4) EV68 CPUs
• Itanium path: (2) rx8620, (8) Itanium CPUs, HP-UX 11i v2, ServiceGuard?, Oracle 9i RAC?, OSI 3.9, plus rx7620, rx76xx, rx4640, and rx2600 variants on HP-UX 11i v2/v3 or Linux with TruServiceGuard and Oracle 10g Cluster; (2) rx2600 Dev pairs on Red Hat Linux with Oracle 10g
• PA-RISC path: (2) rp8400, (8) PA-8800 CPUs, HP-UX 11i v2, ServiceGuard?, Oracle 9i RAC?, OSI 3.9, plus rp7410 and rp5470 variants
Legend: n = new server, e = existing server, u = server upgrade

HP Enterprise Server Roadmaps
8
Overcoming a Tight Project Timeline
• January - training
• February - configuration
• March - hardware testing
• April - application testing
• May - migration
• May 23 - go live!
9
HP-UX 11i v2 Technology Implemented
• HP-UX 11i v2 Mission Critical Operating Environment
• ServiceGuard with quorum servers
• Oracle package failover
• Standby LAN interface
• Auto Port Aggregation
• MirrorDisk/UX
• Ignite-UX server
• Quest SharePlex transaction replication
• Secure Path
• SAN ISL trunking
• Fabric Manager, Advanced Performance & Health
• MeasureWare
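The ServiceGuard and quorum-server items above hinge on one idea: when a two-node cluster partitions, the quorum server supplies the tie-breaking vote so exactly one side survives and restarts the Oracle package. A minimal sketch of that arbitration logic (illustrative Python, not ServiceGuard code; the node names and the exact voting rule are assumptions):

```python
# Illustrative sketch (not ServiceGuard itself): how a quorum server breaks
# the tie when a two-node cluster partitions. Each surviving partition asks
# the quorum server for the tie-breaker vote; only the winner may re-form
# the cluster and restart the Oracle package.

def reform_cluster(partition, quorum_grant, total_nodes=2):
    """Return True if this partition may re-form the cluster.

    partition: set of node names still able to communicate.
    quorum_grant: node name the quorum server granted its vote to (or None).
    """
    votes = len(partition)
    if quorum_grant in partition:
        votes += 1  # quorum server acts as the tie-breaking member
    # Strictly more than half of (nodes + quorum vote) is required.
    return votes * 2 > total_nodes + 1

# Heartbeat lost: nodes split into {node1} and {node2}; the quorum server
# grants its vote to node1, so only node1's partition survives.
assert reform_cluster({"node1"}, "node1") is True
assert reform_cluster({"node2"}, "node1") is False
# Both nodes healthy: no tie to break, the cluster stays up.
assert reform_cluster({"node1", "node2"}, None) is True
```

With only two nodes, neither partition alone holds a majority, which is why external quorum servers (here, the Linux DL380s) are part of the design.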
10
New Architecture for BECU: HP-UX 11i v2 and HP Integrity Servers
[Diagram: Tukwila and Spokane sites, linked by SharePlex replication over redundant SAN switch fabrics (switch blades 4 and 6)]
• Production / STBY: (2) rx7620, (8) active CPUs, 32GB memory, HP-UX 11i v2, MC/ServiceGuard, Oracle 9.1.0.4
• MEND / Dev / PreProd1: (2) rx2600, (2) active CPUs, 4GB memory, HP-UX 11i v2, MC/ServiceGuard, Oracle 9.1.0.4
• Disaster Recovery / Intgrt / Function / PreProd2 / test: (2) rx7620, (8) active CPUs, 32GB memory, HP-UX 11i v2, MC/ServiceGuard, Oracle 9.1.0.4
• Disk-to-disk backup: (1) rx7620, (8) active CPUs, 32GB memory, HP-UX 11i v2
• Storage: EMA 34TB raw ((65) 76GB + (19) 36GB disks), EMA 45TB raw ((64) 72GB + (30) 36GB), EMA 112TB raw ((84) 144GB), EVA 18TB, 28TB, and 38TB raw ((112) 72GB each)
• Quorum / SharePlex servers: (3) DL380, (2) 3GHz P4 CPUs, 4GB memory, Red Hat Linux 2.1 ES, ServiceGuard cluster; Dell PowerEdge 2650 servers
• Application tier: (2) DL380 load-balanced application servers, (2) 3GHz P4 CPUs, 4GB memory, Windows 2003, OSI application and OSI batch
11
High Availability Tests
• Manual database move
• Oracle service failure
• Database maintenance
• Single network link failure
• Dual network link failure
• Triple network link failure
• Heartbeat link failure
• SAN drive failure
• Local drive failure
• Single SAN link failure
[Diagram: high-availability test configuration: primary and secondary core I/O on PCI-X cells 0 and 1, (2) HBAs per cell, dual SAN switches with Gb blades 4 and 6, a crossover heartbeat cable between nodes, EVA 1 and EVA 2 arrays hosting Production, STBY, MEND, Stage1, Stage2, and Intgrt/SharePlex instances, (2) quorum/SharePlex PowerEdge 2650 servers, and (2) application servers]
12
Hardware Testing
• Four file transfers
− 45GB each
− TCP and NFS
− bi-directional
• EVA configuration
− 2C8D configuration
− mirrored cache
− 1 disk group / 112 disks
− VRAID1 LUNs
• CPU 50%
[Diagram: hardware test setup: two servers, each with (8) CPUs, 32GB memory, (4) 2Gb HBAs to dual SAN switches fronting EVA 1 and EVA 2, and (4) 1Gb/s NICs carrying concurrent bi-directional 45GB TCP, NFS, and import transfers]
13
Application Load Test Results
• SAN controller
− 160MB/s sustained throughput limit
− 200-300MB/s bursts
• CPU
− average 50%
• Batch processing
− average 4x faster
− realized 2x faster
• Live transactions
− little contention
− many times current volume
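The 45GB transfer sizes and the 160MB/s sustained controller limit above imply rough transfer times like these (illustrative arithmetic only, not BECU's measured figures; the ~125MB/s wire rate of a 1Gb/s NIC is an assumption):

```python
# Back-of-the-envelope transfer-time estimate (illustrative assumptions,
# not BECU's measured results).

def transfer_minutes(size_gb, link_mb_s):
    """Minutes to move size_gb at a sustained link_mb_s (1GB = 1024MB)."""
    return size_gb * 1024 / link_mb_s / 60

# One 45GB file over a single 1Gb/s NIC (~125MB/s theoretical wire rate):
print(round(transfer_minutes(45, 125), 1))      # ~6.1 minutes
# Four concurrent 45GB transfers capped by the 160MB/s controller limit:
print(round(transfer_minutes(4 * 45, 160), 1))  # ~19.2 minutes
```

With four NICs per server, the array controller, not the network, becomes the bottleneck, which matches the sustained-throughput limit reported in the load tests.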
14
Physical SAN configuration
[Diagram: HP Services Consulting and Integration drawing "BECU Configuration for Development Cluster (a.k.a. Sugar-Ray Cluster)", author Lee Chen, revised 23 February 2004, page 1 of 3, scale 1:10. It shows hp Integrity rx2600 servers (PCI-X 133MHz slots 1-4; serial, video, ECI, 10/100 management LAN, 10/100/1000 LAN, USB, and U320 SCSI ports), SAN Switch 2/16 fabrics (Switch 5 and Switch 6), hp StorageWorks Ultra3 SCSI disk shelves behind HSG80 controllers with HSx80 cache plus a StorageWorks 2200 enclosure, EVA 2 and EVA 3 arrays, Fibre Channel connections, and Gigabit Ethernet LAN]
15
Database Configuration

Filesystem              Mounted on   Storage                       RAID
/dev/vg04/u01           /u01         Oracle binary installation    0+1
/dev/oradev/u02a        /u02a        Data/index volume 1           0+1
/dev/oradev/u03a        /u03a        Data/index volume 2           0+1
/dev/oradev/u04a        /u04a        Data/index volume 3           0+1
/dev/oradev/u05a        /u05a        Data/index volume 4           0+1
/dev/oradev/u06a        /u06a        Rollback segments volume      0+1
/dev/oradev/u07a        /u07a        Temp/log/trace volume         0+1
/dev/oradev/u09a        /u09a        Redo log file volume          0+1
/dev/oradev/u10a        /u10a        Current archive log volume    0+1
/dev/oradev/u11a        /u11a        Redo log file volume          0+1
/dev/oradevback/backup  /u08a        Backup log/export volume      5
16
Production Metrics

BEFORE: Alpha GS160
• (16) 1.2GHz EV68 CPUs, 32GB memory
• average utilization
− CPU: 20% live transactions, 50% batch processing
− memory: 20GB daytime, 28GB nighttime
− disk I/O: 5MB/s daytime, 20MB/s nighttime
• batch runtime
− 12.5 hours nightly
− 16 hours month-end

AFTER: Integrity rx7620
• (8) 1.5GHz Itanium 2 CPUs, 32GB memory
• average utilization
− CPU: 15% live transactions, 25% batch processing
− memory: 14GB daytime, 19GB nighttime
− disk I/O: 10MB/s daytime, 35MB/s nighttime
• batch runtime
− 7.5 hours nightly (40% less)
− 7 hours month-end (55% less)
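The quoted reductions follow directly from the before/after runtimes; a quick check (the slide's 55% month-end figure is a slightly conservative rounding of the computed 56%):

```python
# Percent reduction in batch runtime, computed from the figures above.

def pct_less(before_hours, after_hours):
    return round((before_hours - after_hours) / before_hours * 100)

print(pct_less(12.5, 7.5))  # 40  (nightly: 12.5h -> 7.5h)
print(pct_less(16, 7))      # 56  (month-end: 16h -> 7h; ~55% on the slide)
```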
17
The WOW Factors with HP-UX 11i v2 and HP Integrity Servers
• BECU no longer tracks nightly batch runtimes during the Operations meeting
• Migration was a non-event
• No data migration required
− batch processing results matched exactly
− physical database row counts matched exactly
• For the week prior to migration, seven databases were in sync (4 Alpha, 3 Integrity)
• Quality
− memory management
− availability and reliability
18
Secrets to Success
• Get firm commitment from ISVs to support the Itanium platform
• Get the right people on the team
• Plan before you build
• The Alpha Retain Trust Program is FOR REAL and WORKS
• Oracle on HP-UX 11i v2/HP Integrity is ready for prime time
• Involve all business partners early
19
HP-UX 11i v2 and HP Integrity Benefits to BECU
• Reduced batch processing window
− 5 hours less nightly
− 9 hours less month-end
• Ability to purge historical transactions
• Low contention
• Quick test-database creation
− import reduced from 16 hours to 8 hours
• More frequent test cycles
• Reduced datacenter footprint by 8x
• Reduced Oracle licensing
• Alignment with HP roadmap