© 2012 IBM Corporation

Informix High-Availability and Scalability


Agenda

• High Availability BEFORE Informix 11.10
  • High Availability Data Replication (HDR)
  • Enterprise Replication (ER)
• New High Availability Features in Informix 11.10
  • MACH 11 and required subcomponents
  • Remote Standalone Secondary (RSS)
  • Shared Disk Secondary (SDS)
  • Continuous Log Restore (CLR)
• New Features in Informix 11.50
  • Updatable Secondaries
  • Connection Manager
• New Features in Informix 11.70
  • Flexible Grid
  • Connection Manager Grid Support
• Other Supporting Features
• Appendix


High-Availability Data Replication (HDR)

[Diagram: Primary on Blade Server A (New Orleans, Building A) sends HDR traffic to the HDR Secondary on Blade Server B (Memphis); client apps connect to the primary, and the secondary is read-only]

• Use
  • Disaster recovery
• Two identical servers on two identical machines
  • Primary server
  • Secondary server
• Primary server
  • Fully functional server
  • All database activity (inserts, updates, deletes) is performed on this instance
  • Sends logs to the secondary server
• Secondary server
  • Read-only server – allows read-only queries
  • Always in recovery mode
  • Receives logs from the primary and replays them to stay in sync with the primary
• When the primary server goes down, the secondary server takes over as a standard server
• Simple to administer
  • Little configuration required
  • Just back up the primary and restore to the secondary

High-Availability Data Replication (HDR): Easy Setup

• Requirements:
  • Same hardware (vendor and architecture)
  • Logged databases
  • Same storage paths on each machine
• Backup the primary system
  • ontape -s -L 0
• Set the type of the primary
  • onmode -d primary <secondary_server_name>
• Restore the backup on the secondary
  • ontape -p
• Change the type of the secondary
  • onmode -d secondary <primary_server_name>
• DONE!


Enterprise Replication (ER)

• Use
  • Workload partitioning
  • Capacity relief
• The entire group of servers is the replication domain
  • Any node within the domain can replicate data with any other node in the domain
  • Servers in the domain can be configured as root, non-root, and leaf
• Supports
  • Heterogeneous OSs, Informix versions, and hardware
  • Secure data communication
  • Update anywhere (bi-directional replication)
    • Conflicting updates resolved by timestamp, stored procedure, or always-apply
  • Based on log snooping rather than being transaction based

BENEFITS
• Low data transfer latency
• Already integrated in the server!
• Flexible
  • Choose what to replicate – column level!
  • Choose where to replicate – all nodes or a select few
• Scalable
  • Add or remove servers/nodes easily

• Multiple topologies supported for maximum implementation flexibility
  • Fully Connected
  • Hierarchical Routing
  • Hierarchical Tree
  • Forest of Trees


What is a High Availability Cluster?

• Extends HDR to support a primary server with many secondary servers
• Three new types of secondary instances:
  • Shared Disk Secondary (SDS)
  • Remote Standalone Secondary (RSS)
  • Continuous Log Restore (CLR), or "near-line" standby
• A High Availability Cluster is not just 1-to-N HDR
  • It treats all the new forms of secondary as a multi-tiered availability solution


Supporting Infrastructure for High Availability Cluster

• Two new sub-components introduced to support “High Availability Cluster”

• Server Multiplexer (SMX)

• Automatically enabled internally to establish network connections between the instances

• Supports multiple logical connections over a single TCP connection

• Sends packets without waiting for return “ack”

• Index Page Logging

• Allows index pages to be copied to the logical log when initially creating the index

• HDR currently transfers the index pages to the secondary when creating the index

• Required for RSS


Remote Standalone Secondary (RSS)

• Extends HDR to include a new type of secondary (RSS)
• Receives logs from the primary
• Has its own set of disks to manage
• Primary performance does not affect RSS servers, and vice versa
• Only manual failover is supported
• Requires Index Page Logging to be turned on
• Uses full-duplex communication (SMX) with RSS nodes
• Does not support SYNC mode
• Can have 0 to N asynchronous RSS nodes
• Supports conversions between HDR secondary and RSS

[Diagram: Primary node replicating to an HDR Secondary and to multiple remote secondary nodes RSS #1 and RSS #2]

• Allows simultaneous local and remote replication for HA
• Supports read-write operations
• Simple online setup and use

BENEFITS
• Capacity relief
• Web applications / reporting
• Ideal for disaster recovery

Remote Standalone Secondary (RSS): Easy Setup

• Requirements (similar to HDR):
  • Same hardware (vendor and architecture)
  • Logged databases
  • Same storage paths on each machine
• Configuration:
  • Primary: LOG_INDEX_BUILDS – enable index page logging
    • Dynamically: onmode -wf LOG_INDEX_BUILDS=1
• Identify the RSS server on the primary
  • onmode -d add RSS <server_name>
• Backup the primary system
  • ontape -s -L 0
• Restore the backup on the secondary
  • ontape -p
• Identify the primary on the RSS server
  • onmode -d RSS <primary_server_name>
• DONE!

New RSS Configuration Parameters (11.50.xC5)

• DELAY_APPLY
  • Configures RS secondary servers to wait for a specified period of time before applying logs
• LOG_STAGING_DIR
  • Specifies the location of log files received from the primary server when delayed application of log files is configured on RS secondary servers
• STOP_APPLY
  • Stops an RS secondary server from applying log files received from the primary server
  • Useful when a problem on the primary should not be replicated to the secondary server(s)
• All are dynamic – onmode -wf/-wm
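The interplay of DELAY_APPLY, LOG_STAGING_DIR, and STOP_APPLY can be pictured as a staging queue: received logs are held and only applied once they are older than the configured delay. A minimal Python sketch of that idea (illustrative only; class and method names are invented, not Informix internals):

```python
# Illustrative sketch of delayed log apply on an RS secondary:
# received logs are staged, and each is applied only once it is older
# than DELAY_APPLY seconds; STOP_APPLY halts the apply step entirely.
from collections import deque

class DelayedApplier:
    def __init__(self, delay_apply, stop_apply=False):
        self.delay_apply = delay_apply      # seconds to hold each log
        self.stop_apply = stop_apply        # emergency brake
        self.staging = deque()              # (received_at, log_record)
        self.applied = []

    def receive(self, now, log_record):
        # Logs always land in the staging area first.
        self.staging.append((now, log_record))

    def tick(self, now):
        # Apply every staged log whose delay has expired.
        if self.stop_apply:
            return
        while self.staging and now - self.staging[0][0] >= self.delay_apply:
            _, rec = self.staging.popleft()
            self.applied.append(rec)

rss = DelayedApplier(delay_apply=60)
rss.receive(now=0, log_record="log-1")
rss.receive(now=30, log_record="log-2")
rss.tick(now=59)    # nothing is old enough yet
rss.tick(now=61)    # log-1 is now 61s old and is applied; log-2 is only 31s old
```

With STOP_APPLY set, nothing leaves staging, which is exactly why it is useful when a mistake on the primary must not reach the secondary.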


Shared Disk Secondary (SDS)

• Extends HDR to include a new type of secondary (SDS)
• SDS nodes share data storage with the primary
  • Can have 0 to N SDS nodes
• Uses
  • Adjust capacity online as demand changes
  • Lower data storage costs
• How does it work?
  • The primary transmits the current Log Sequence Number (LSN) as it flushes logs
  • The SDS instance receives the LSN from the primary and reads the logs from the shared disk
  • The SDS instance applies log changes to its buffer cache
  • The SDS instance sends the processed LSN back to the primary
  • Dirty reads are allowed on SDS nodes
  • The primary can fail over to any SDS node

[Diagram: Primary and SDS nodes exchanging LSN/ACK messages over a shared disk backed by a hardware mirror]

BENEFITS
• Provides online capacity relief
• Multiple redundancy
• Simple to set up and flexible (easily scalable)
• Low cost – does not duplicate disk space
• Does not require specialized hardware
• Can coexist with ER, HDR, and RSS secondary nodes
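The LSN handshake above can be sketched as a tiny simulation: the primary only needs to ship its current log position, because the SDS node reads the log pages itself from the shared disk. A hedged Python sketch (illustrative only; the class names are not Informix internals):

```python
# Illustrative sketch of the SDS protocol: the primary writes log records
# to a disk shared with the secondary and sends only the latest LSN;
# the SDS node reads the log from that shared disk up to the received LSN
# and acknowledges the position it has processed.

class SharedDisk:
    def __init__(self):
        self.log = []                     # the logical log, in LSN order

class SDSPrimary:
    def __init__(self, disk):
        self.disk = disk

    def flush_log(self, record):
        self.disk.log.append(record)
        return len(self.disk.log)         # current LSN after the flush

class SDSSecondary:
    def __init__(self, disk):
        self.disk = disk
        self.applied_lsn = 0
        self.buffer_cache = []

    def on_lsn(self, lsn):
        # Read log records from the shared disk, not from the network.
        for rec in self.disk.log[self.applied_lsn:lsn]:
            self.buffer_cache.append(rec) # apply the change to the buffer cache
        self.applied_lsn = lsn
        return lsn                        # ACK back to the primary

disk = SharedDisk()
primary, sds = SDSPrimary(disk), SDSSecondary(disk)
ack = sds.on_lsn(primary.flush_log("insert row 1"))
ack = sds.on_lsn(primary.flush_log("update row 1"))
```

The point of the design shows up in what is *not* transferred: only the LSN and the ACK cross the network, while the bulk log data stays on the shared disk.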

Shared Disk Secondary (SDS): Easy Setup

• Requirements:
  • Same hardware (vendor and architecture)
  • Logged databases
  • Same storage paths on each machine
• Configuration:
  • Primary: SDS_TIMEOUT – wait time in seconds for an acknowledgement from an SDS server
  • SDS:
    • SDS_ENABLE – set to 1 to enable the SDS server
    • SDS_PAGING – path to two buffer paging files that may be used between checkpoints to save pages
    • SDS_TEMPDBS – temporary dbspace used by an SDS server
  • Change the following ONCONFIG parameters to be unique for this SDS instance:
    • DBSERVERALIASES, DBSERVERNAME, MSGPATH, SERVERNUM
    • Leave all other parameters the same
• On the primary, identify the primary as the shared-disk primary
  • onmode -d set SDS primary <name_of_primary_instance>
• Start the shared disk secondary
  • oninit
• DONE!

Primary: mark the primary as the SDS primary node. SDS: enable SDS in the onconfig, then start the SDS node.


Continuous Log Restore (CLR)

• Also known as "log shipping"
• The server is in roll-forward mode
• Logical log backups made from an IDS instance are continuously restored on a second machine
• Allows logical recovery to span multiple ontape/onbar commands/logs
• Provides a secondary instance with "log file granularity"
• Does not impact the primary server
• Can co-exist with "the cluster" (HDR/RSS/SDS) as well as ER

BENEFITS
• Useful when the backup site is totally isolated (i.e. no network)
• Ideal for disaster recovery
• Replay server logs when convenient

[Diagram: Primary shipping log backups CLR1, CLR2, CLR3 to a standby]


Updatable Secondary Servers

• Client applications can update data on secondary servers by using redirected writes
• Secondary servers (SDS, RSS, and HDR) now support both DDL (CREATE, ALTER, DROP, etc.) and DML (INSERT, UPDATE, and DELETE) statements
• The secondary server is not updated directly
  • The transaction is transferred to the primary server for conflict resolution, and the change is then propagated back to the secondary server

Updatable secondaries give the appearance that updates occur directly on the secondary server, when in fact the transaction is transferred to the primary server and the change is then replicated back to the secondary server.

Updatable Secondary – Conflict Resolution

• Two options for detecting update conflicts between nodes:
  1. The secondary sends "before" and "after" images to the primary (optimistic concurrency)
     • The primary compares the before image to the current row
  2. The secondary sends row version information along with the after image
     • The primary compares the row version number and checksum to the current row to detect collisions
• If the "before" image on the secondary is different from the current image on the primary, the write operation is not allowed and an EVERCONFLICT (-7350) error is returned
• If the primary node fails and the HDR/RSS/SDS secondary is promoted to the new primary, writes are automatically sent to the new primary!

[Diagram: an update operation on the HDR secondary is redirected to the primary over the HDR traffic link]
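The before-image check is plain optimistic concurrency. A minimal Python sketch of how the primary might validate a redirected write (illustrative; EverConflict here is just a local exception standing in for the server's actual error machinery):

```python
# Illustrative optimistic-concurrency check, as performed on the primary
# for a write redirected from an updatable secondary: the write carries
# the "before" image the secondary saw, and it is applied only if that
# image still matches the current row on the primary.

class EverConflict(Exception):
    """Stand-in for the EVERCONFLICT (-7350) error."""

def apply_redirected_write(current_rows, key, before_image, after_image):
    if current_rows.get(key) != before_image:
        # Someone changed the row since the secondary read it.
        raise EverConflict(-7350)
    current_rows[key] = after_image

rows = {1: ("Smith", 100)}
# The secondary read ("Smith", 100) and wants to set the balance to 150:
apply_redirected_write(rows, 1, ("Smith", 100), ("Smith", 150))
# A second client still holding the stale before image now conflicts:
try:
    apply_redirected_write(rows, 1, ("Smith", 100), ("Smith", 999))
except EverConflict:
    pass  # write rejected; the row keeps the first update
```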

Updatable Secondary Servers: Row Versioning

• Two shadow columns are required for each row in an updatable table:
  • ifx_insert_checksum
    • Insert checksum value
    • Remains constant for the life of the row
  • ifx_row_version
    • Update version
    • Incremented with each update of the row
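The two shadow columns can be mimicked in a few lines: a checksum fixed at insert time plus a counter bumped on every update, so a stale write can be detected by comparing (checksum, version) instead of whole before images. A Python sketch under those assumptions (the column handling is illustrative, not Informix's implementation; CRC-32 is an arbitrary stand-in for the real checksum):

```python
# Illustrative model of the two shadow columns on an updatable table:
# ifx_insert_checksum is computed once at insert time and never changes;
# ifx_row_version starts at 1 and is incremented on every update.
import zlib

def insert_row(data):
    return {
        "data": data,
        "ifx_insert_checksum": zlib.crc32(repr(data).encode()),
        "ifx_row_version": 1,
    }

def update_row(row, new_data):
    row["data"] = new_data
    row["ifx_row_version"] += 1   # the checksum is left untouched

row = insert_row(("Jones", 500))
original_checksum = row["ifx_insert_checksum"]
update_row(row, ("Jones", 750))
update_row(row, ("Jones", 900))
```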

Updatable Secondary Servers: Configure Row Versioning

• To add row versioning to an existing table:

  alter table tablename add vercols;

• To delete row versioning:

  alter table tablename drop vercols;

• To create a new table with row versioning:

  create table tablename (
      column_name datatype,
      column_name datatype,
      column_name datatype
  ) with vercols;

Updatable Secondary Servers: Row Versioning Notes

• Only needs to be applied to tables used for updatable secondary operations
• Small rows may not benefit from turning on row versioning
• The shadow columns are not shown by the following SQL/commands:
  • select * from . . .
  • dbschema -d dbname . . .
• Use of row versions can reduce network traffic and improve performance
  • Without vercols, the entire "before" image on the secondary is sent to the primary and compared to its image
    • SLOW and a network hog!
  • Row versioning is optional but STRONGLY RECOMMENDED


The Connection Manager (CM)

• A daemon program that:
  1. Accepts a client connection request and re-routes that connection to one of the "best fit" nodes in the Informix cluster
  2. Monitors and manages instance failovers
• Connection Manager utility: oncmsm (Online Connection Manager and Server Monitor)

[Diagram: a geographically distributed cluster (Dallas, Las Vegas, Frisco, Austin, Tokyo, Sao Paulo, Paris); a client asking "Which catalog instance?" is routed to Las Vegas]

The Connection Manager is FREE!

• Delivered in the CSDK
  • No additional software to buy
• Completely integrated into client and server connectivity, not an add-on
• Works with CSDK, JDBC, and JCC
  • DRDA is available but waiting on DRDA API enhancements
  • .NET, Ruby, and other support for interaction with the Connection Manager is coming soon
• Resolves connection requests based on Service Level Agreements (SLAs)

SLA-Based Client Routing

• Applications can connect to a specific instance:
  • database stores@inst1_primary

OR

• Applications can connect to a "server cloud" – aka SLA – and be routed to the "best choice" available instance in the "cloud":
  • database stores@payroll
  • database stores@catalog

Sample SLA

• The following are "reserved words" for SLA definitions:
  • primary – the cluster primary
  • SDS – any SDS instance
  • HDR – the HDR secondary
  • RSS – any RSS instance
• An SLA definition can include "reserved words" as well as specific instance names
• NOTE: Each SLA has a separate SQLHOSTS entry on the CM server

  SLA oltp=primary
  SLA report=rss_1+rss_2+rss_3
  SLA accounting=SDS+HDR
  SLA catalog=rss_4+rss_5+rss_6
  SLA test=RSS

How does the CM re-route clients based on "best choice"?

• All servers in a cluster maintain a weighted history of their resources
  • Free CPU cycles, number of threads on the ready queue, number of active threads, etc.
• Every 5 seconds, each server in the cluster sends this resource information to the Connection Manager
• The Connection Manager uses this information to determine the "best choice" within an SLA class
  • Clients are directed to the node with the most free resources
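The "best choice" selection amounts to ranking the candidate nodes named in an SLA by their most recently reported free resources. A simplified Python sketch (the scoring formula is invented for illustration; the real CM uses a weighted history of several metrics):

```python
# Illustrative "best choice" routing: each server periodically reports a
# resource snapshot; the router picks, among the servers named in an SLA,
# the one with the highest free-resource score.

def score(snapshot):
    # Invented scoring: more free CPU is better, a long ready queue is worse.
    return snapshot["free_cpu"] - 5 * snapshot["ready_queue"]

def route(sla_members, reports):
    # Only consider SLA members that have actually reported in.
    candidates = [s for s in sla_members if s in reports]
    return max(candidates, key=lambda s: score(reports[s]))

# Reports as the CM might hold them after the latest 5-second refresh:
reports = {
    "rss_1": {"free_cpu": 20, "ready_queue": 4},
    "rss_2": {"free_cpu": 70, "ready_queue": 1},
    "rss_3": {"free_cpu": 60, "ready_queue": 0},
}
best = route(["rss_1", "rss_2", "rss_3"], reports)   # rss_2 scores highest
```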

Failover Arbitrator (part of Connection Manager)

• The Failover Arbitrator provides automatic failover logic for high-availability clusters
  • Monitors all nodes, checking for primary failure
  • Performs failover (i.e. promotes a secondary to primary) when it is confirmed that the primary is down
• Released as part of the Connection Manager
• Supports failover to RSS, SDS, and HDR secondary nodes

[Diagram: the Arbitrator asking "Is the primary really down?" via the HDR secondary and RSS nodes before failing over]

Fail Over Configuration (FOC) Parameter

• The order of failover is defined by an entry in the Connection Manager configuration file $INFORMIXDIR/etc/cmsm.cfg
• FOC parameter format:

  FOC failover_configuration,timeout_value

  • failover_configuration: one or more of primary, SDS, HDR, RSS, or specific instances, separated by a plus (+); sub-groups can be created within parentheses
  • timeout_value: the amount of time (in seconds) the CM agent will wait to hear from the primary before executing a failover
• Set timeout_value to a reasonable value to account for temporary network burps, etc.
• Example:

  FOC serv1+(serv2+SDS)+HDR+RSS,10

• Default: FOC SDS+HDR+RSS,0
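The FOC value is just an ordered list with optional parenthesized sub-groups and a trailing timeout. A small Python parser for that grammar (illustrative, based only on the format shown on this slide):

```python
# Illustrative parser for the FOC format:
#   FOC failover_configuration,timeout_value
# where the configuration is "+"-separated and may contain (sub+groups).

def parse_foc(value):
    config, timeout = value.rsplit(",", 1)
    order, group, depth = [], [], 0
    for token in config.split("+"):
        depth += token.count("(") - token.count(")")
        group.append(token.strip("()"))
        if depth == 0:                   # a top-level entry is complete
            order.append(group[0] if len(group) == 1 else tuple(group))
            group = []
    return order, int(timeout)

order, timeout = parse_foc("serv1+(serv2+SDS)+HDR+RSS,10")
# order -> ["serv1", ("serv2", "SDS"), "HDR", "RSS"], timeout -> 10
```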

Complete Connection Manager Configuration File

• File format:

  NAME ConnectionManagerName
  SLA name=value
  [ SLA name=value ] . . .
  FOC failover_config,timeout_value
  DEBUG [ 1 | 0 ]
  LOGFILE <path_to_log_file>

• value can be an instance name, an instance type, or a list of instance names and types, separated by a "+"
• Example of a Connection Manager configuration file:

  NAME doe_test
  SLA oltp=primary
  SLA report=HDR+SDS
  SLA test=RSS
  FOC inst1+SDS+RSS+HDR,10
  DEBUG 1
  LOGFILE /opt/IBM/informix/logs/cm.log

Starting and Stopping a Connection Manager

• If there is only one Connection Manager to start, the configuration file is in the default location, and environment variables are set:

  oncmsm

• Other examples:

  oncmsm -c /path_to_config_file

  oncmsm cm1 -s oltp=primary -s payroll=HDR+primary -s report=SDS+HDR -l cm.log -f HDR+SDS,30

• To stop a Connection Manager agent:

  oncmsm -k agent_name

  • agent_name must be provided even if only one agent is running on the server

Connection Manager Statistics

onstat -g cmsm

• Displays the various Connection Manager daemons attached to a server instance and their details
• Display contents:
  • All Connection Managers inside the cluster
  • Associated hosts
  • SLAs and their definitions
  • Arbitrator configuration (discussed next)
  • Flags and statistics
• Sample output:

  CM name  host  sla     define     foc            flag  connections
  cm1      bia   oltp    primary    SDS+HDR+RSS,0  3     5
  cm1      bia   report  (SDS+RSS)  SDS+HDR+RSS,0  3     16

Connection Manager Failover Arbitrator: FAILOVER_CALLBACK

• FAILOVER_CALLBACK
  • Valid for secondary instances
  • Pathname to a program/script to execute if the server is promoted from secondary to primary
  • Can be used to issue alerts, take specific actions, etc.

Connection Manager Failover Arbitrator: DRAUTO

• New setting for the DRAUTO configuration parameter:

  DRAUTO 3

• The Arbitrator first verifies that no other primary server is active in the cluster before promoting a secondary to primary
  • If another primary server is active, the Arbitrator rejects the promotion request
• Should be set on all secondary instances in a High Availability Cluster, and also on the primary

Multiple Connection Managers

• Multiple CM agents can be active in a cluster at any time
  • Each can have the same configuration **OR** they can have different configurations
  • The file for an agent is passed in using the -c path_to_file syntax
• Each must be invoked while pointing to the cluster primary
  • When invoked, it connects to the primary to download the cluster instance list
• All CM agents are "active" and can control connections, so verify client connection strings!
• Only the first agent invoked is the active failover-control CM (if configured)
  • Failover control will cascade in the event of an Arbitrator failure, though

Connection Manager - onpassword Utility

• Used to encrypt/decrypt a centralized password file the Connection Manager uses to access all instances in a cluster
• Information in the encrypted file includes the instance IDs of all servers in the cluster and their associated usernames and passwords
• The user ID and password must already exist on the target physical server for the instance
• Output from the utility is encrypted and stored in $INFORMIXDIR/etc/passwd_file
• The output is required by the oncmsm Connection Manager to connect to instances
• NOT used by client applications

onpassword: File Structure

• The password file is an ASCII text file with the following structure:

  instance_name alternate_instance username password

  • instance_name: DBSERVERNAME/DBSERVERALIAS; must be TCP/IP based
  • alternate_instance: alternate alias; must be TCP/IP based
  • username: user ID for the connection
  • password: password for the user ID
• Example:

  lx-rama lx-rama ravi foobar
  toru toru_2 usr2 fivebar
  seth_tcp seth_alias fred 9ocheetah
  cheetah panther anup cmpl1cate

• One instance in the cluster per line
• If the second/alternate instance name is different, the Connection Manager will try that instance ID if it cannot connect to the server using the first instance name
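Since the plaintext file is four whitespace-separated fields per line, reading it (before encryption) is straightforward. A hedged Python sketch of a reader that also captures the alternate-instance fallback rule (illustrative; this is not the onpassword utility itself):

```python
# Illustrative reader for the plaintext password file described above:
# one instance per line, four whitespace-separated fields.

def parse_password_file(text):
    entries = []
    for line in text.splitlines():
        if not line.strip():
            continue
        name, alternate, user, password = line.split()
        entries.append({"instance": name, "alternate": alternate,
                        "user": user, "password": password})
    return entries

def connect_names(entry):
    # The alternate is tried only when it differs from the first name.
    names = [entry["instance"]]
    if entry["alternate"] != entry["instance"]:
        names.append(entry["alternate"])
    return names

sample = """\
lx-rama lx-rama ravi foobar
toru toru_2 usr2 fivebar
"""
entries = parse_password_file(sample)
```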

onpassword Utility: Examples

  onpassword -k 6azy78op -e $HOME/my_passwd_file
  onpassword -k 34RogerSippl1 -e /user_data/my_stuff/my_passwd_file

• The encrypted output file in the above examples is called passwd_file and placed in $INFORMIXDIR/etc/

  onpassword -k 6azy78op -d $HOME/out_file
  onpassword -k 34RogerSippl1 -d /tmp/another_file

• The decrypted output file in the above examples is placed where directed


What is a Flexible Grid?

• A named set of interconnected replication servers for propagating commands from an authorized server to the rest of the servers in the set
• Useful if you have multiple replication servers and often need to perform the same tasks on every replication server
• Nodes in the grid do not have to be identical
  • Different tables, different hardware, different OSs, different IDS versions
• Requirements
  • Enterprise Replication must be running
  • Servers must be on Panther (11.70.xC1)
  • Pre-Panther servers within the ER domain cannot be part of the grid

What are the features of the new Informix Flexible Grid?

• Simplified creation, management, and maintenance of a global grid
  • Create a grid, attach to a grid, detach from a grid, add/drop a node to/from the grid
  • DDL/DML operations on any node are propagated to all nodes in the grid
  • The grid can be managed from any node in the grid
  • Run or create stored procedures or user-defined routines on one or all nodes
• Simplified management and maintenance of replication
  • Tables no longer require primary keys
  • Easily set up which servers are members of the grid and which users are authorized
  • Flexibility in choosing the servers on which grid routines can be run
• Integration with OpenAdmin Tool (OAT)

Define/Enable/Disable the Grid

• To set up or disable a grid, use the cdr utility
• Define
  • Defines the nodes within the grid:

  cdr define grid <grid_name> --all
  cdr define grid <grid_name> <node1 node2 ...>

• Enable
  • Defines the nodes and users within the grid that can perform grid-level operations:

  cdr enable grid --grid=<grid_name> --user=<user> --node=<node>

• Disable
  • Removes a node or user from being able to perform grid operations:

  cdr disable grid --grid=<grid_name> --node=<node_name> --user=<user_name>

• OAT support enabled

Propagating Database Object Changes

• Once the grid is "enabled," a replset is created for all replicates created within the grid
• Creating, altering, and dropping database objects can be propagated to servers in the grid while connected
• The grid must exist, and the grid routines must be executed as an authorized user from an authorized server
• Grid operations do NOT replicate DML operations by default
  • To replicate DML, ER must be enabled by executing the procedure ifx_set_erstate()
• To propagate database object changes:
  1. Connect to the grid by running the ifx_grid_connect() procedure
  2. Run one or more SQL DDL statements
  3. Disconnect from the grid by running the ifx_grid_disconnect() procedure

Example of DDL Propagation

Everything between the connect and disconnect calls will be executed on all nodes within the 'grid1' grid:

  execute procedure ifx_grid_connect('grid1', 'tag1');

  create database tstdb with log;

  create table tab1 (
      col1 int primary key,
      col2 int,
      col3 char(20)) lock mode row;

  create index idx1 on tab1 (col2);

  create procedure loadtab1(maxnum int)
      define tnum int;
      for tnum = 1 to maxnum
          insert into tab1 values
              (tnum, tnum * tnum, 'mydata');
      end for;
  end procedure;

  execute procedure ifx_grid_disconnect();

Grid Operation Functions

• Operations can be run from any database on any node in the grid:
  • ifx_grid_connect() – opens a connection; any command run is applied to the grid
  • ifx_grid_disconnect() – closes a connection with the grid
  • ifx_grid_execute() – executes a single command across the grid
  • ifx_grid_function() – executes a routine across the grid
  • ifx_grid_procedure() – executes a procedure across the grid
  • ifx_set_erstate() – controls replication of DML across the grid for all tables that participate in a replicate
  • ifx_get_erstate() – reports whether replication is enabled on a transaction that is propagated across the grid
  • ifx_grid_purge() – purges metadata about operations that have been executed on the grid
  • ifx_grid_redo() – re-executes a failed and tagged grid operation

Dynamically Enabling/Disabling ER

• Enable:

  execute procedure ifx_set_erstate('on');

• Disable:

  execute procedure ifx_set_erstate('off');

• Get the current state:

  execute function ifx_get_erstate();

  • A return of 1 means that ER is going to snoop the logs for this transaction
• Example of enabling ER for the execution of a procedure:

  execute procedure ifx_grid_connect('grid1');
  create procedure myproc()
      execute procedure ifx_set_erstate('on');
      execute procedure create_summary_report();
  end procedure;
  execute procedure ifx_grid_disconnect();

  execute procedure ifx_grid_procedure('grid1', 'myproc()');

Monitoring a Grid

• cdr list grid
  • View information about servers in the grid
  • View the commands that were run on servers in the grid
  • Without any options or a grid name, the output shows the list of grids
  • Servers in the grid on which users are authorized to run grid commands are marked with an asterisk (*)
  • When you add a server to the grid, any commands previously run through the grid have a status of PENDING for that server
  • Options include: --source=<source_node>, --summary, --verbose, --nacks, --acks, --pending
  • Example: cdr list grid grid1
• NEW: Monitor a cluster with onstat -g cluster


Connection Manager and Flexible Grids

• The oncmsm agent functionality (Connection Manager) has been extended to include ER and grid clusters!
• For ER/HA clusters
  • The agent supports all Informix 11 instances!
• For grid clusters
  • The agent only supports Informix 11.70 instances
• A Connection Manager instance serves either a cluster or ER – no mixing
• The SLA is at the replicate-set level


Replicate Tables without Primary Keys

• A primary key is no longer required for tables replicated by Enterprise Replication (ER)
• Use the WITH ERKEY keywords when defining tables
  • Creates shadow columns (ifx_erkey_1, ifx_erkey_2, and ifx_erkey_3)
  • Creates a new unique index and a unique constraint that ER uses as a primary key
• For most database operations, the ERKEY columns are hidden
  • Not visible to statements like SELECT * FROM tablename;
• Example:

  CREATE TABLE customer (id INT) WITH ERKEY;
  ALTER TABLE customer ADD ERKEY;

Informix Flexible Grid - Quickly CLONE a Server

• Previously, to clone the primary:
  1. Create a level-0 backup
  2. Transfer the backup to the new system
  3. Restore the image
  4. Initialize the instance
• ifxclone utility
  • Clones an instance with a single command
  • Starts the backup and restore processes simultaneously (SMX transfer)
    • No need to read or write data to disk or tape
  • Creates a standalone server, an ER node, or a remote standalone secondary (RSS) server
    • If creating a new ER node, ER registration is cloned as well
    • No sync/check is necessary

  ifxclone -T -S machine2 -I 111.222.333.555 -P 456 -t machine1 -i 111.222.333.444 -p 123

Easily Convert Cluster Servers to ER Nodes

• RSS to ER
  • Use the rss2er() stored procedure, located in the syscdr database
  • Converts the RSS secondary server into an ER server
  • The secondary inherits the replication rules the primary had
  • Does not require a 'cdr check' or 'cdr sync'
• HDR/RSS pair to ER pair (cdr start sec2er)
  • Converts an HDR/RSS pair into an ER pair
  • Automatically creates ER replication between the primary and secondary server
  • Splits the HDR/RSS pair into independent standard servers that use ER

Upgrading a Cluster while it is Online

• Use 'cdr start sec2er' and 'ifxclone' to perform a rolling upgrade of an HDR/RSS pair so that planned downtime is not required during a server migration
• Basic steps:
  1. Execute 'cdr start sec2er'
  2. Restrict applications to only one of the nodes
  3. Migrate the server on which the apps are not running
  4. Move the apps to the migrated server
  5. Use ifxclone to switch back to RSS/HDR
