Oracle DBA


ORACLE DATABASE ADMINISTRATION

1. Configure RMAN Backupset Compression

RMAN compresses the backup set contents before writing them to disk. No extra uncompression step is required during recovery when we use RMAN compression. RMAN has two types of compression:

1.) Null Compression and 2.) Unused Block Compression

1.) Null Compression: When backing up datafiles into backup sets, RMAN does not back up the contents of data blocks that have never been allocated; in other words, RMAN never backs up blocks that have never been used. For example: we have a tablespace with one datafile of size 100MB, and out of that 100MB only 50MB has ever been used. RMAN will then back up only 50MB.

2.) Unused Block Compression: RMAN skips blocks that do not currently contain data; this is called Unused Block Compression. RMAN creates more compact backups of datafiles by skipping datafile blocks that are not currently used to store data. No extra action is required on the part of the DBA to use this feature.

Example: we have a tablespace with one datafile of size 100MB, of which 50MB is used by user tables. A user then drops a 25MB table belonging to that tablespace. With Unused Block Compression, only 25MB of the file is backed up. In the same example, Null Compression would back up 50MB, because Null Compression considers all blocks that have ever been formatted/used.

Binary Compression: Binary compression is enabled by specifying the "AS COMPRESSED" clause in the backup command. RMAN applies a binary compression algorithm as it writes data to backup sets. This compression is similar to the compression provided by many tape vendors when backing up data to tape, but we cannot state an exact compression percentage. The binary compression algorithm can greatly reduce the space required for disk backup storage; the ratio is typically 2x to 4x, and higher for text-intensive databases. The command to take a compressed backup is:

RMAN>backup as compressed backupset database ;

There is no special command to restore the database from compressed backupsets; the restore command is the same as with uncompressed backups. A restore from compressed backupsets will take more time than from uncompressed backupsets. To use the RMAN compression option, we can run the following RMAN commands to configure compression:

RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
followed by one of:
RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
RMAN> CONFIGURE COMPRESSION ALGORITHM 'LOW';
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';

Oracle 11g added several compression algorithms to compress data. They can be used for compressing tables, LOBs, compressed Data Pump exports, or even RMAN backups. Unfortunately, for some compression algorithms we need to purchase the Advanced Compression Option; the available RMAN compression options differ in the compression algorithm most likely being used and in whether an additional license is required.

The compression levels are BASIC, LOW, MEDIUM and HIGH, and each offers a trade-off between backup throughput and the degree of compression achieved. If we have licensed the Oracle Database 11g Release 2 Advanced Compression Option, then we can choose from the following compression levels: HIGH - best suited for backups over slower networks where the limiting factor is network speed; MEDIUM - recommended for most environments, a good combination of compression ratio and speed; LOW - least impact on backup throughput, suited for environments where CPU resources are the limiting factor.

Note: The compression ratio generally increases from LOW to HIGH, with the trade-off of potentially consuming more CPU resources. We can check the available compression algorithms with the command below.

SQL> select * from V$RMAN_COMPRESSION_ALGORITHM;

I found a good scenario on the net with statistics about these compression levels. Here is the scenario: the environment used was a freshly created 11g Release 2 database with some smaller tables in it. The total size of all segments was 4.88 GB. All database datafiles excluding the temporary ones totalled 7.3 GB; excluding the temporary and undo datafiles, the total size was 5.9 GB.

Here are the test results for each compression level:

As we can see from the results, HIGH compression places an extremely high load on the machine and takes very long, but produces the smallest backup set. Surprisingly, BASIC compression (which is available without the Advanced Compression license) also does a good job and produces the second smallest backup set, although it takes nearly as long as an uncompressed backup. In other environments with faster CPUs this will change.

In the test environment used, either LOW or MEDIUM compression seems to be the best choice. Because MEDIUM produces an approximately 15% smaller backup set while taking only a few seconds more to complete, I would rank MEDIUM first and LOW second.

Finally, we come to the conclusion that the stronger the compression, the smaller the backup size, but the more CPU-intensive the backup is. If we do not have the Advanced Compression license, BASIC compression produces reasonable compression ratios at moderate load. If we have the license, we have many more options to suit our needs.

If we want to test and optimize our RMAN backup, we basically have three major switches to play with: the compression algorithm, RMAN parallelism, and the data transfer mechanism (SAN or Ethernet [this includes iSCSI, NFS, CIFS, and backup to tape over Ethernet]), as sketched below.
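
As a hedged starting point for such testing (the channel count of 4 below is an arbitrary illustration, not a recommendation, and the MEDIUM algorithm assumes the Advanced Compression Option is licensed), compression and parallelism can be combined in the persistent RMAN configuration and then verified:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
RMAN> SHOW ALL;

SHOW ALL displays the resulting persistent settings, so the compression and parallelism values can be confirmed before the next backup run.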

2. Oracle Advanced Compression

Oracle Advanced Compression with Oracle Database 11g Release 2 helps manage more data in a cost-effective manner. With data volumes, on average, tripling every two years, Oracle Advanced Compression delivers compression rates of 2-4x across all types of data and applications. Storage savings from compression cascade throughout the data center, reducing network traffic and data backups as well. And by reading fewer blocks off disk, Oracle Advanced Compression also improves query performance.

Oracle Advanced Compression is an option of the Oracle 11g database (separately licensed) that allows data in the database to be compressed. Oracle Advanced Compression offers the following advantages:

1.) OLTP Compression: It allows structured and unstructured data to be compressed on insert, update and delete operations. Its features are: the new compression algorithm uses a deferred, or batched, approach; data is inserted as-is, without compression, until the PCTFREE value is reached, and compression of the block starts once that PCTFREE threshold is reached; it can be enabled at table, partition or tablespace level; there is no need to decompress the data during reads; and it is recommended for tables with low update activity. A minimal example is sketched below.
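
A minimal sketch of enabling OLTP compression, using 11gR2 syntax (the table names sales_hist and existing_sales are hypothetical, and COMPRESS FOR OLTP requires the Advanced Compression Option):

SQL> create table sales_hist (id number, sale_date date, amount number) compress for oltp;
SQL> alter table existing_sales move compress for oltp;

The ALTER TABLE ... MOVE form rebuilds an existing, already populated table so that its current rows are stored compressed as well.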

2.) Data Pump Compression: In Data Pump, compression of metadata was introduced in 10g and compression of table data in 11g. Its features are: both are inline operations; they save on storage allocation; there is no need to uncompress before import; and it is implemented with the COMPRESSION parameter, whose supported values are ALL, DATA_ONLY and METADATA_ONLY (see the example below).
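
A hedged example of a Data Pump export using the COMPRESSION parameter (the directory object dp_dir, the dump file name and the scott schema are illustrative assumptions; COMPRESSION=ALL needs the Advanced Compression Option):

expdp system DIRECTORY=dp_dir DUMPFILE=scott_comp.dmp SCHEMAS=scott COMPRESSION=ALL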

3.) Data Guard Compression: It includes the following features: redo is compressed as it is transmitted over the network; it helps utilize network bandwidth efficiently when Data Guard spans data centers; it gives faster re-synchronization of Data Guard during gap resolution; it is recommended for low network bandwidth; and it is implemented with the COMPRESSION attribute of the initialization parameter log_archive_dest_n, as in the sketch below.
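
A sketch of enabling redo transport compression on a primary database, assuming a standby whose service and DB_UNIQUE_NAME are both called stby (these names, the destination number and the other attributes are assumptions for illustration only; the compression itself is licensed under Advanced Compression):

SQL> alter system set log_archive_dest_2='SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby COMPRESSION=ENABLE' scope=both;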

4.) RMAN Backup Compression: It compresses RMAN backups. Its features are: support for compressing backups using the ZLIB algorithm; faster compression and lower CPU utilization compared to the default BZIP2 (10g); and a lower compression ratio compared to BZIP2. It is implemented with the CONFIGURE COMPRESSION ALGORITHM command, where the value can be HIGH, MEDIUM (ZLIB) or LOW (LZO).

The Oracle Database 11g Advanced Compression option introduces a comprehensive set of compression capabilities to help customers maximize resource utilization and reduce costs. It allows IT administrators to significantly reduce their overall database storage footprint by enabling compression for all types of data, be it relational (table), unstructured (file), or backup data. Although storage cost savings are often seen as the most tangible benefit of compression, the technologies included in the Advanced Compression Option are designed to reduce resource requirements and technology costs for all components of our IT infrastructure, including memory and network bandwidth.

The benefits of compression are manifold: 1.) reduction of the disk space used for storage; 2.) reduction in I/O bandwidth requirements; 3.) faster full table scans; 4.) lower server memory usage.

3. Change Database Character Set using CSSCAN

CSSCAN (Database Character Set Scanner) is a scanning tool that allows us to see the impact of a database character set change, or assists us in correcting an incorrect database NLS_CHARACTERSET setup. Data scanning identifies the amount of effort required to migrate data into the new character encoding scheme before changing the database character set. This information helps to determine the best approach for converting the database character set.

Before altering the character set of a database, check the convertibility of the data. Character set conversions can cause data loss or data corruption. The Character Set Scanner utility provides the two features below:

1.) A convertibility check of existing data and potential issues. The Scanner checks all character data in the database, including the data dictionary, and tests for the effects and problems of changing the character set encoding. At the end of the scan, it generates a summary and an exception report of the database scan.

2.) Csscan also allows us to check whether there is any data in the database that is incorrectly stored.

The CSALTER script is part of the Database Character Set Scanner utility. It is the most straightforward way to migrate a character set, but it can be used only if all of the schema data is a strict subset of the new character set: each and every character in the current character set must be available in the new character set and must have the same code point value in the new character set.

With the strict superset criteria in mind, only the metadata is converted to the new character set by the CSALTER script, with the following exception: the CSALTER script performs data conversion only on CLOB columns in the data dictionary and sample schemas that were created by Oracle. CLOB columns that users have created may need to be handled separately.

Note: it is possible to run Csscan from a client, but the client needs to be the same base version as the database home (i.e., an Oracle 11g server needs an Oracle 11g client). To change the database character set, perform the following steps:

STEP 1 : Remove the invalid objects and purge the recyclebin, then take a full backup of the database.

STEP 2 : Install the CSS utility if it is not already installed (we will get error CSS-00107 if it is missing). Install it by running the csminst.sql script, which is found in $ORACLE_HOME\rdbms\admin.

STEP 3 : Run the Database Character Set Scanner utility: set the ORACLE_SID and run CSSCAN /AS SYSDBA FULL=Y, for example as sketched below.
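
For illustration, a typical full-database scan towards a new character set might look like the sketch below. The ORACLE_SID orcl, the target character set AL32UTF8 and the log name charcheck are assumptions, and the quoting of the connect string varies by platform; check the CSSCAN documentation for the parameters relevant to your migration:

C:\> set ORACLE_SID=orcl
C:\> csscan "sys AS SYSDBA" FULL=Y TOCHAR=AL32UTF8 LOG=charcheck CAPTURE=Y ARRAY=1000000 PROCESS=2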

STEP 4 : Run the CSALTER script. This script is in the $ORACLE_HOME\rdbms\admin folder.
i.> shutdown immediate
ii.> startup restrict
iii.> @csalter.plb
iv.> shutdown immediate
v.> startup

Note: i.> The CSALTER script does not perform any user data conversion; it only changes the character set metadata in the data dictionary. Thus, after the CSALTER operation, Oracle behaves as if the database had been created using the new character set.

ii.> Changing the database character set is not an easy task. It is quite tricky, and we may face errors that need Oracle Support, so it is better to raise an SR for this task and involve Oracle Support.

4. How to Reduce DB File Sequential Read Wait

The db file sequential read wait event occurs when we are trying to access data via an index and Oracle is waiting for the read of an index block from disk into the buffer cache to complete. A sequential read is a single-block read. Single-block I/Os are usually the result of using indexes. Rarely, full table scan calls can be truncated to a single-block call due to extent boundaries, or because buffers are already present in the buffer cache. Db file sequential read wait events may also appear (rarely) when undo blocks are read from disk in order to provide a consistent get.

The actual object being waited on can be determined from the p1, p2 and p3 values in v$session_wait, where p1, p2 and p3 give the absolute file number, the block being read, and the number of blocks (P3 should normally be 1), respectively. A sequential read is usually a single-block read, although it is possible to see sequential reads for more than one block (see P3). This wait may also be seen for reads from datafile headers (P2 = 1 indicates a file header read). A sketch of this lookup follows.
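
For example, a hedged way to map the wait parameters back to the segment being read (the file# and block# values returned by the first query are substituted manually into the second via SQL*Plus substitution variables):

SQL> select sid, event, p1 file#, p2 block#, p3 blocks from v$session_wait where event = 'db file sequential read';
SQL> select owner, segment_name, segment_type from dba_extents where file_id = &file# and &block# between block_id and block_id + blocks - 1;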

Block reads are fairly inevitable, so the aim should be to minimise unnecessary I/O. This is best achieved by good application design and efficient execution plans; changes to execution plans can yield order-of-magnitude changes in performance. To reduce this wait event, follow the points below.

1.) Tune the SQL - tuning SQL statements to reduce unnecessary I/O requests is the only guaranteed way to reduce "db file sequential read" wait time. 2.) Tune the physical devices - distribute (stripe) the data on different disks to spread the I/O; logical distribution is useless, since "physical" I/O performance is governed only by the independence of the devices. 3.) Faster disks - buy faster disks so the remaining I/O requests complete sooner. 4.) Increase db_block_buffers - a larger buffer cache can (not will, "might") help.

5. Cannot Load OCI.DLL While Connecting

Sometimes the error "cannot load OCI.DLL" occurs when we try to connect to the Oracle database using third-party tools (e.g., Toad, SQLTools and others) or the command prompt. This error may occur for the following reasons.

1.) The oci.dll error may occur because the correct ORACLE_HOME and PATH are not set in the environment variables. 2.) The oci.dll file may be corrupt, or may not exist on the expected path. 3.) The oci.dll may not be the correct version (e.g., 32-bit software will load a 32-bit DLL - we cannot, for example, use a 64-bit DLL with a 32-bit executable).

To solve this issue, consider the points below.

1.) Check the ORACLE_HOME and PATH settings in the environment variables. 2.) Check the location of the oci.dll file; its path is $ORACLE_HOME\bin\oci.dll. 3.) Check that oci.dll is the correct version. In my case, I faced this issue because ORACLE_HOME was not correctly set in the environment variables; after setting the correct path in the environment variables, we no longer got any error while connecting to the database.

Transparent Data Encryption in Oracle 11g

Oracle Transparent Data Encryption (TDE) enables organizations to encrypt sensitive application data on storage media, completely transparently to the application. TDE addresses encryption requirements associated with public and private privacy and security regulations such as PCI DSS. TDE column encryption was introduced in Oracle Database 10g Release 2, enabling encryption of table columns containing sensitive information. TDE tablespace encryption and support for hardware security modules (HSM) were introduced in Oracle Database 11gR1.

TDE protects data at rest. It encrypts the data in the datafiles so that, if they are obtained by other parties, it will not be possible to access the cleartext data. TDE cannot be used to hide the data from users who have privileges to access the tables: in a database where TDE is configured, any user who has access to an encrypted table will see the data in clear text, because Oracle transparently decrypts the data for any user having the necessary privileges.

TDE uses a two-tier encryption key architecture consisting of: a master encryption key - the key used to encrypt the secondary keys used for column encryption and tablespace encryption; and one or more table and/or tablespace keys - the keys used to encrypt one or more specific columns, or to encrypt tablespaces. There is only one table key regardless of the number of encrypted columns in a table, and it is stored in the data dictionary. The tablespace key is stored in the header of each datafile of the encrypted tablespace.

The table and tablespace keys are encrypted using the master key. The master key is stored in an external security module (ESM), which can be one of the following: an Oracle Wallet - a secure container outside of the database, encrypted with a password; or a Hardware Security Module (HSM) - a device used to secure keys and perform cryptographic operations.

To start using TDE the following operations have to be performed:

1.) Make sure that the wallet location exists. If a non-default wallet location must be used, then specify it in the sqlnet.ora file:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = C:\app\neerajs\admin\orcl\wallet)))

Note: The default encryption wallet location is $ORACLE_BASE/admin/<db_unique_name>/wallet. If we want to let Oracle manage a wallet in the default location, then there is no need to set the ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora.

It is important to check that the location specified in sqlnet.ora or the default location exists and can be read/written by the Oracle processes.

2.) Generate a master key :

SQL> alter system set encryption key identified by "wallet_password";
System altered.

This command will do the following:

A.) If there is no wallet currently in the wallet location, then a new wallet with the password "wallet_password" will be generated. The password is enclosed in double quotes to preserve the case of the characters; if the double quotes are not used, then the characters of the password will all be in upper case. This command also causes the new wallet to be opened and made ready for use.

B.) A new master key will be generated and written to the wallet. This newly generated master key becomes the active master key. The old master keys (if there were any) are still kept in the wallet but are no longer active; they are kept there to be used when decrypting data that was previously encrypted with them.

To see the status of a wallet, run the following query:

SQL> select * from v$encryption_wallet;
WRL_TYPE   WRL_PARAMETER                      STATUS
---------- ---------------------------------- --------
file       C:\app\neerajs\admin\orcl\wallet   OPEN
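
Note that the wallet does not stay open across instance restarts (unless an auto-login wallet is used). A minimal sketch of manually opening and closing it in 11g, assuming the same "wallet_password" as above:

SQL> alter system set encryption wallet open identified by "wallet_password";
SQL> alter system set encryption wallet close identified by "wallet_password";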

3.) Enable encryption for a column or for an entire tablespace:

3.1) Create a table by specifying the encrypt option:

SQL>create table test(col1 number, col2 varchar2(100) encrypt using 'AES256' NO SALT) ;

3.2) Encrypt the column(s) of an existing table :

SQL> alter table test modify (col2 encrypt salt);
Note: If the table has many rows, then this operation might take some time, since all the values stored in col2 must be replaced by encrypted strings. If access to the table is needed during this operation, use Online Table Redefinition.

3.3) Create an encrypted tablespace: The syntax is the same as for creating a normal tablespace, except for two clauses: we specify the encryption algorithm (in this case AES256; if we do not specify one, it defaults to AES128), and at tablespace creation time we specify the encryption and default storage clauses.

Define the encryption algorithm as "using 'algorithm'" along with the encryption clause. We can use the following algorithms while creating an encrypted tablespace: AES128, AES192, AES256 and 3DES168. If we don't specify an algorithm with the encryption clause, AES128 is used by default.

The DEFAULT STORAGE (ENCRYPT) clause:
SQL> create tablespace encryptedtbs datafile 'C:\app\neerajs\oradata\orcl\encryptedtbs01.dbf' size 100M encryption using 'AES256' default storage(encrypt);
Note: An existing non-encrypted tablespace cannot be encrypted. If we must encrypt the data of an entire tablespace, then create a new encrypted tablespace and move the data from the old tablespace to the new one.

TDE Master Key and Wallet Management

The wallet is a critical component and should be backed up in a secure location (different from the location where the database backups are stored!). If the wallet containing the master keys is lost, or if its password is forgotten, then the encrypted data will not be accessible anymore. Make sure that the wallet is backed up in the following scenarios:

1. Immediately after creating it
2. When regenerating the master key
3. When backing up the database (make sure that the wallet backup is not stored in the same location as the database backup)
4. Before changing the wallet password

Make sure that the wallet password is complex but at the same time easy to remember. Where possible, split knowledge of the wallet password. If needed, the wallet password can be changed with Oracle Wallet Manager or with the following orapki command (starting from 11.1.0.7):

c:\>orapki wallet change_pwd -wallet <wallet_location>

Oracle recommends that the wallet files are placed outside of the $ORACLE_BASE directory, to avoid having them backed up to the same location as other Oracle files. Furthermore, it is recommended to restrict access to the directory and to the wallet files to avoid accidental removal.

We can identify encrypted tablespaces in the database by using the query below:
SQL> select ts.name, es.encryptedts, es.encryptionalg
     from v$tablespace ts
     inner join v$encrypted_tablespaces es on es.ts# = ts.ts#;

The following are supported with encrypted tablespaces: moving tables back and forth between an encrypted tablespace and a non-encrypted tablespace; Data Pump export/import of encrypted content/tablespaces; and transportable tablespaces using Data Pump. The following are not supported with encrypted tablespaces: tablespace encryption for the SYSTEM, SYSAUX, UNDO and TEMP tablespaces; encrypting an existing tablespace; and traditional export/import utilities for encrypted content.

Enable Block Change Tracking in Oracle 11g

The block change tracking (BCT) feature for incremental backups improves incremental backup performance by recording the changed blocks of each datafile in a block change tracking file. This is a small binary file, called the block change tracking (BCT) file, stored in the database area; RMAN tracks changed blocks in it as redo is generated. If we enable block change tracking, then RMAN uses the BCT file to identify the changed blocks for an incremental backup, thus avoiding the need to scan every block in the datafile. RMAN only uses block change tracking when the incremental level is greater than 0, because a level 0 incremental backup includes all blocks.

Enable block change tracking (BCT)

SQL> alter database enable block change tracking using file 'C:\app\neerajs\admin\noida\bct.dbf';

When data blocks change, shadow processes track the changed blocks in a private area of memory at the same time as they generate redo. When a commit is issued, the BCT information is copied to a shared area in the large pool called the 'CTWR dba buffer'. At the checkpoint, a new background process, the Change Tracking Writer (CTWR), writes the information from the buffer to the change tracking file. If contention for space in the CTWR dba buffer occurs, a wait event called 'Block Change Tracking Buffer Space' is recorded. Possible causes for this wait event are poor I/O performance on the disk where the change tracking file resides, or a CTWR dba buffer that is too small to record the number of concurrent block changes. By default, the CTWR process is disabled, because it can introduce some minimal performance overhead on the database.

The v$block_change_tracking view contains the name and size of the block change tracking file, plus the status of change tracking. We can check it with the command below:

SQL>select filename, status, bytes from v$block_change_tracking;

To check whether the block change tracking file is being used or not, use the below command .

SQL> select file#, avg(datafile_blocks), avg(blocks_read), avg(blocks_read/datafile_blocks) * 100 as "% read for backup" from v$backup_datafile where incremental_level > 0 and used_change_tracking = 'YES' group by file# order by file#;

To disable Block Change Tracking (BCT), issue the command below:
SQL> alter database disable block change tracking;

RMAN Tablespace Point-in-Time Recovery (TSPITR) in Oracle 11gR2

Recovery Manager (RMAN) automatic TSPITR enables quick recovery of one or more tablespaces in a database to an earlier time, without affecting the rest of the tablespaces and objects in the database.

RMAN TSPITR is most useful in the following situations: we want to recover a logical database to a point different from the rest of the physical database, when multiple logical databases exist in separate tablespaces of one physical database (for example, we maintain logical databases in the orders and personnel tablespaces, and an incorrect batch job or DML statement corrupts the data in only one of them); we want to recover data lost after DDL operations that change the structure of tables (we cannot use Flashback Table to rewind a table to before a structural change such as a truncate table operation); we want to recover a table after it has been dropped with the PURGE option; we want to recover from the logical corruption of a table; or we want to recover dropped tablespaces - in fact, RMAN can perform TSPITR on dropped tablespaces even when a recovery catalog is not used. We can also use Flashback Database to rewind data, but we must rewind the entire database rather than just a subset. Also, unlike TSPITR, the Flashback Database feature requires the overhead of maintaining flashback logs, and the point in time to which we can flash back the database is more limited than the TSPITR window, which extends back to our earliest recoverable backup.

TSPITR existed in earlier releases but had limitations; for example, we could not recover a dropped tablespace. Oracle 11gR2 performs a fully automated, managed TSPITR: it automatically creates and starts the auxiliary instance and restores the datafiles it requires, together with the files pertaining to the dropped tablespace. It first performs a recovery of the tablespace on the auxiliary instance, and then uses Data Pump and transportable tablespace technology to extract and import the tablespace metadata into the original source database.

Here we will illustrate the concept of TSPITR with an example.

We will create a user, say "TSPIR", assign it the default tablespace "tspir", and create tables in this tablespace. We take a full backup of the database and then drop the tablespace "tspir". Before dropping it, we note the SCN and later use this SCN to perform TSPITR. Below are the steps for the TSPITR.

Step 1 : Clean up any previous failed TSPITR attempt

SQL> exec dbms_backup_restore.manageauxinstance('TSPITR',1);
PL/SQL procedure successfully completed.

Step 2 : Create the tablespace and user, and create tables

SQL> create tablespace tspir datafile 'C:\app\neerajs\oradata\orcl\tspir.dbf' size 150m autoextend on;
Tablespace created.

SQL> create user tspir identified by tspir default tablespace tspir quota unlimited on tspir;
User created.

SQL> grant resource, connect to tspir;
Grant succeeded.

SQL> connect tspir/tspir@orcl
Connected.

SQL> create table test(id number);
Table created.

SQL> insert into test values(12);
1 row created.

SQL> insert into test values(121);
1 row created.

SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
        12
       121

SQL> create table emp as select * from user_objects;
Table created.

SQL> select count(*) from emp;
  COUNT(*)
----------
         2

SQL> conn / as sysdba
Connected.

Step 3 : Take the fresh backup of database

SQL> host rman target sys@orcl
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Nov 30 14:35:44 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
target database Password:
connected to target database: ORCL (DBID=1296005542)

RMAN>backup database plus archivelog;Starting backup at 30-NOV-11current log archivedusing target database control file instead of recovery catalogallocated channel: ORA_DISK_1channel ORA_DISK_1: SID=141 device type=DISKchannel ORA_DISK_1: starting archived log backup setchannel ORA_DISK_1: specifying archived log(s) in backup setinput archived log thread=1 sequence=3 RECID=1 STAMP=768238310input archived log thread=1 sequence=4 RECID=2 STAMP=768238310input archived log thread=1 sequence=5 RECID=3 STAMP=768238311input archived log thread=1 sequence=6 RECID=4 STAMP=768238314input archived log thread=1 sequence=7 RECID=5 STAMP=768239453input archived log thread=1 sequence=8 RECID=6 STAMP=768239455input archived log thread=1 sequence=9 RECID=7 STAMP=768305386input archived log thread=1 sequence=10 RECID=8 STAMP=768334227input archived log thread=1 sequence=11 RECID=9 STAMP=768393025input archived log thread=1 sequence=12 RECID=10 STAMP=768454251input archived log thread=1 sequence=13 RECID=11 STAMP=768521484input archived log thread=1 sequence=14 RECID=12 STAMP=768580566channel ORA_DISK_1: starting piece 1 at 30-NOV-11channel ORA_DISK_1: finished piece 1 at 30-NOV-11piece handle=F:\RMAN_BKP\01MSV6UP_1_1 tag=TAG20111130T143608 comment=NONEchannel ORA_DISK_1: backup set complete, elapsed time: 00:00:55Finished backup at 30-NOV-11

Starting backup at 30-NOV-11using channel ORA_DISK_1channel ORA_DISK_1: starting full datafile backup setchannel ORA_DISK_1: specifying datafile(s) in backup setinput datafile file number=00001 name=C:\APP\NEERAJS\ORADATA\ORCL\SYSTEM01.DBFinput datafile file number=00002 name=C:\APP\NEERAJS\ORADATA\ORCL\SYSAUX01.DBFinput datafile file number=00006 name=C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBFinput datafile file number=00005 name=C:\APP\NEERAJS\ORADATA\ORCL\EXAMPLE01.DBFinput datafile file number=00003 name=C:\APP\NEERAJS\ORADATA\ORCL\UNDOTBS01.DBFinput datafile file number=00004 name=C:\APP\NEERAJS\ORADATA\ORCL\USERS01.DBFchannel ORA_DISK_1: starting piece 1 at 30-NOV-11channel ORA_DISK_1: finished piece 1 at 30-NOV-11piece handle=F:\RMAN_BKP\02MSV70H_1_1 tag=TAG20111130T143705 comment=NONEchannel ORA_DISK_1: backup set complete, elapsed time: 00:01:55Finished backup at 30-NOV-11

Starting backup at 30-NOV-11current log archivedusing channel ORA_DISK_1channel ORA_DISK_1: starting archived log backup setchannel ORA_DISK_1: specifying archived log(s) in backup setinput archived log thread=1 sequence=15 RECID=13 STAMP=768580741channel ORA_DISK_1: starting piece 1 at 30-NOV-11channel ORA_DISK_1: finished piece 1 at 30-NOV-11piece handle=F:\RMAN_BKP\04MSV746_1_1 tag=TAG20111130T143901 comment=NONEchannel ORA_DISK_1: backup set complete, elapsed time: 00:00:01Finished backup at 30-NOV-11

Starting Control File and SPFILE Autobackup at 30-NOV-11
piece handle=F:\RMAN_BKP\CF\C-1296005542-20111130-01 comment=NONE
Finished Control File and SPFILE Autobackup at 30-NOV-11

Step 4 : Note the SCN and drop the tablespace

SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    5659022

SQL> drop tablespace tspir including contents and datafiles;
Tablespace dropped.

Step 5 : Connect with RMAN and perform TSPITR. Here we use the auxiliary destination clause with the recover tablespace command: the auxiliary destination is an optional disk location that RMAN uses to temporarily store the auxiliary set files. An auxiliary destination is used only with an RMAN-managed auxiliary instance; specifying one with a user-managed auxiliary instance results in an error.

C:\>rman target sys/xxxx@orcl
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Nov 30 14:58:11 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1296005542)

RMAN>recover tablespace tspir until scn 5659022 auxiliary destination 'F:\';Starting recover at 30-NOV-11using channel ORA_DISK_1RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-timeList of tablespaces expected to have UNDO segmentsTablespace SYSTEMTablespace UNDOTBS1Creating automatic instance, with SID='nume'initialization parameters used for automatic instance:db_name=ORCLdb_unique_name=nume_tspitr_ORCLcompatible=11.2.0.0.0db_block_size=8192db_files=200sga_target=280Mprocesses=50db_create_file_dest=F:\log_archive_dest_1='location=F:\'#No auxiliary parameter file usedstarting up automatic instance ORCLOracle instance startedTotal System Global Area 292933632 bytesFixed Size 1374164 bytesVariable Size 100665388 bytesDatabase Buffers 184549376 bytesRedo Buffers 6344704 bytesAutomatic instance createdList of tablespaces that have been dropped from the target database:Tablespace tspircontents of Memory Script:{# set requested point in timeset until scn 5659022;# restore the controlfilerestore clone controlfile;# mount the controlfilesql clone 'alter database mount clone database';# archive current online logsql 'alter system archive log current';# avoid unnecessary autobackups for structural changes during TSPITRsql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';}executing Memory Scriptexecuting command: SET until clauseStarting restore at 30-NOV-11allocated channel: ORA_AUX_DISK_1channel ORA_AUX_DISK_1: SID=59 device type=DISKchannel ORA_AUX_DISK_1: starting datafile backup set restorechannel ORA_AUX_DISK_1: restoring control filechannel ORA_AUX_DISK_1: reading from backup piece F:\RMAN_BKP\CF\C-1296005542-20111130-01channel ORA_AUX_DISK_1: piece handle=F:\RMAN_BKP\CF\C-1296005542-20111130-01 tag=TAG20111130T143903channel ORA_AUX_DISK_1: restored backup piece 1channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:04output file name=F:\ORCL\CONTROLFILE\O1_MF_7FD0QK8S_.CTLFinished restore at 30-NOV-11sql statement: alter database mount clone databasesql statement: alter system archive log currentsql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;contents of Memory Script:{# set requested point in timeset until scn 5659022;# set destinations for recovery set and auxiliary set datafilesset newname for clone datafile 1 to new;set newname for clone datafile 3 to new;set newname for clone datafile 2 to new;set newname for clone tempfile 1 to new;set newname for datafile 6 to"C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBF";# switch all tempfilesswitch clone tempfile all;# restore the tablespaces in the recovery set and the auxiliary setrestore clone datafile 1, 3, 2, 6;switch clone datafile all;}executing Memory Scriptexecuting command: SET until clauseexecuting command: SET NEWNAMEexecuting command: SET NEWNAMEexecuting command: SET NEWNAMEexecuting command: SET NEWNAMEexecuting command: SET NEWNAMErenamed tempfile 1 to F:\ORCL\DATAFILE\O1_MF_TEMP_%U_.TMP in control fileStarting restore at 30-NOV-11using channel ORA_AUX_DISK_1channel ORA_AUX_DISK_1: starting datafile backup set restorechannel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup setchannel ORA_AUX_DISK_1: restoring datafile 00001 to F:\ORCL\DATAFILE\O1_MF_SYSTEM_%U_.DBFchannel ORA_AUX_DISK_1: restoring datafile 00003 to F:\ORCL\DATAFILE\O1_MF_UNDOTBS1_%U_.DBFchannel ORA_AUX_DISK_1: restoring datafile 00002 to F:\ORCL\DATAFILE\O1_MF_SYSAUX_%U_.DBFchannel ORA_AUX_DISK_1: restoring datafile 00006 to C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBFchannel ORA_AUX_DISK_1: 
reading from backup piece F:\RMAN_BKP\02MSV70H_1_1channel ORA_AUX_DISK_1: piece handle=F:\RMAN_BKP\02MSV70H_1_1 tag=TAG20111130T143705channel ORA_AUX_DISK_1: restored backup piece 1channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:15Finished restore at 30-NOV-11datafile 1 switched to datafile copyinput datafile copy RECID=5 STAMP=768585055 file name=F:\ORCL\DATAFILE\O1_MF_SYSTEM_7FD0QYNZ_.DBFdatafile 3 switched to datafile copyinput datafile copy RECID=6 STAMP=768585056 file name=F:\ORCL\DATAFILE\O1_MF_UNDOTBS1_7FD0QYRF_.DBFdatafile 2 switched to datafile copyinput datafile copy RECID=7 STAMP=768585056 file name=F:\ORCL\DATAFILE\O1_MF_SYSAUX_7FD0QYPG_.DBFcontents of Memory Script:{# set requested point in timeset until scn 5659022;# online the datafiles restored or switchedsql clone "alter database datafile 1 online";sql clone "alter database datafile 3 online";sql clone "alter database datafile 2 online";sql clone "alter database datafile 6 online";# recover and open resetlogsrecover clone database tablespace "TSPIR", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;alter clone database open resetlogs;}executing Memory Scriptexecuting command: SET until clausesql statement: alter database datafile 1 onlinesql statement: alter database datafile 3 onlinesql statement: alter database datafile 2 onlinesql statement: alter database datafile 6 onlineStarting recover at 30-NOV-11using channel ORA_AUX_DISK_1starting media recoveryarchived log for thread 1 with sequence 15 is already on disk as file D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000015_0768224813.0001archived log for thread 1 with sequence 16 is already on disk as file D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000016_0768224813.0001archived log file name=D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000015_0768224813.0001 thread=1 sequence=15archived log file name=D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000016_0768224813.0001 thread=1 sequence=16media recovery complete, elapsed time: 00:00:04Finished recover at 30-NOV-11database openedcontents of Memory Script:{# make read only the tablespace that will be exportedsql clone 'alter tablespace TSPIR read only';# create directory for datapump importsql "create or replace directory TSPITR_DIROBJ_DPDIR as ''F:\''";# create directory for datapump exportsql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''F:\''";}executing Memory Scriptsql statement: alter tablespace TSPIR read onlysql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''F:\''sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''F:\''Performing export of metadata... EXPDP> Starting "SYS"."TSPITR_EXP_nume": EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK EXPDP> Master table "SYS"."TSPITR_EXP_nume" successfully loaded/unloaded EXPDP> ****************************************************************************** EXPDP> Dump file set for SYS.TSPITR_EXP_nume is: EXPDP> F:\TSPITR_NUME_43731.DMP EXPDP> ****************************************************************************** EXPDP> Datafiles required for transportable tablespace TSPIR: EXPDP> C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBF EXPDP> Job "SYS"."TSPITR_EXP_nume" successfully completed at 16:20:28Export completedcontents of Memory Script:{# shutdown clone before importshutdown clone immediate}executing Memory Scriptdatabase closeddatabase dismountedOracle instance shut downPerforming import of metadata... 
IMPDP> Master table "SYS"."TSPITR_IMP_nume" successfully loaded/unloaded IMPDP> Starting "SYS"."TSPITR_IMP_nume": IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK IMPDP> Job "SYS"."TSPITR_IMP_nume" successfully completed at 16:21:48Import completedcontents of Memory Script:{# make read write and offline the imported tablespacessql 'alter tablespace TSPIR read write';sql 'alter tablespace TSPIR offline';# enable autobackups after TSPITR is finishedsql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';}executing Memory Scriptsql statement: alter tablespace TSPIR read writesql statement: alter tablespace TSPIR offlinesql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;Removing automatic instanceAutomatic instance removedauxiliary instance file F:\ORCL\DATAFILE\O1_MF_TEMP_7FD0Y3PY_.TMP deletedauxiliary instance file F:\ORCL\ONLINELOG\O1_MF_4_7FD0XROZ_.LOG deletedauxiliary instance file F:\ORCL\ONLINELOG\O1_MF_3_7FD0XK9R_.LOG deletedauxiliary instance file F:\ORCL\ONLINELOG\O1_MF_2_7FD0X9RF_.LOG deletedauxiliary instance file F:\ORCL\ONLINELOG\O1_MF_1_7FD0X2LK_.LOG deletedauxiliary instance file F:\ORCL\DATAFILE\O1_MF_SYSAUX_7FD0QYPG_.DBF deletedauxiliary instance file F:\ORCL\DATAFILE\O1_MF_UNDOTBS1_7FD0QYRF_.DBF deletedauxiliary instance file F:\ORCL\DATAFILE\O1_MF_SYSTEM_7FD0QYNZ_.DBF deletedauxiliary instance file F:\ORCL\CONTROLFILE\O1_MF_7FD0QK8S_.CTL deletedFinished recover at 30-NOV-11RMAN>

Step 6 : Check the tablespace status and existence

SQL> select tablespace_name, status from dba_tablespaces;
TABLESPACE_NAME   STATUS
----------------- ---------
SYSTEM            ONLINE
SYSAUX            ONLINE
UNDOTBS1          ONLINE
TEMP              ONLINE
USERS             ONLINE
EXAMPLE           ONLINE
TSPIR             OFFLINE

Since we find that the tablespace "TSPIR" is offline, we bring the tablespace and its datafile online.

SQL> alter tablespace tspir online;
Tablespace altered.

SQL> alter database datafile 'C:\app\neerajs\oradata\orcl\tspir.dbf' online;
Database altered.

SQL> select table_name from dba_tables where tablespace_name='TSPIR';
TABLE_NAME
-----------------
TEST
EMP

SQL> select * from tspir.test;
        ID
----------
        12
       121

SQL> select count(*) from tspir.emp;
  COUNT(*)
----------
         2

Hence, we find that both tables have been recovered.

ORA-7445 Internal Error

An ORA-7445 is a generic error, and can occur from anywhere in the Oracle code. The precise location of the error is identified by the core file and/or trace file it produces. Whenever an ORA-7445 error is raised, a core file is generated; a trace file may be generated with the error as well.

Prior to 11g, the core files are located in the CORE_DUMP_DEST directory. Starting with 11g, there is a new advanced fault diagnosability infrastructure to manage trace data: diagnostic files are written into a root directory for all diagnostic data called the ADR home, and core files at 11g go to the ADR_HOME/cdump directory. For more diagnosability, check the points below.

1. Check the alert log: the alert log may indicate additional errors or other internal errors at the time of the problem. In some cases, the ORA-7445 error will occur along with ORA-600, ORA-3113, or ORA-4030 errors. The ORA-7445 error can be a side effect of the other problems, and we should review the first error and its associated core file or trace file and work down the list of errors.

If the ORA-7445 errors are not associated with other error conditions, ensure the trace data is not truncated. If we see a message at the end of the file such as "MAX DUMP FILE SIZE EXCEEDED", the MAX_DUMP_FILE_SIZE parameter is not set high enough or to unlimited. Vital diagnostic information could be missing from the file, making it very difficult to discover the root issue. Set MAX_DUMP_FILE_SIZE appropriately (see the sketch below) and regenerate the error to obtain complete trace information.
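
A minimal sketch of raising the limit dynamically (UNLIMITED is convenient while reproducing the error, but we may want to lower it again afterwards):

SQL> alter system set max_dump_file_size = 'UNLIMITED' scope=both;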

2. Search the 600/7445 Lookup Tool: visit My Oracle Support to access the ORA-600/ORA-7445 Lookup tool (Note 7445.1). The tool may lead you to applicable content in My Oracle Support on the problem, and can be used to investigate the problem with argument data from the error message, or we can pull out key stack pointers from the associated trace file to match against known bugs.

3. Fine tune searches in Knowledge Base : As the ORA-7445 error indicates an unhandled exception in the Oracle source code, our search in the Oracle Knowledge Base will need to focus on the stack data from the core file or the trace file.

Keep in mind that searches on generic argument data will bring back a large result set . The more we can learn about the environment and code leading to the errors, the easier it will be to narrow the hit list to match our problem.

4. If assistance is required from Oracle: should it become necessary to get assistance from Oracle Support on an ORA-7445 problem, please provide, at a minimum, the following:

Alert log
Associated trace file(s) or incident package at 11g
Patch level information
Core file(s)
Information about changes in configuration and/or application prior to the issues
If the error is reproducible, a self-contained reproducible testcase (Note 232963.1: How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors)
RDA report or Oracle Configuration Manager information

How to resize redo log files in Oracle

Once I received an e-mail about resizing redo log files. The sender wanted the easiest way to resize them, something like alter database logfile group 1 '?\redo01.log' resize 100m, or some other trick.

We cannot resize redo log files. We must drop the redo log files and recreate them; this is the only method to resize them. A database requires at least two groups of redo log files, regardless of the number of members. We cannot drop a redo log file while its status is CURRENT or ACTIVE; we have to wait for the status to change to INACTIVE before we can drop it.

When a redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping a redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log file. In my case I have four redo log groups, each 50MB in size, and I will resize them to 100MB. Below are the steps to resize the redo log files.

Step 1 : Check the status of the redo logfiles
SQL> select group#, sequence#, bytes, archived, status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         1          5   52428800 YES INACTIVE
         2          6   52428800 YES ACTIVE
         3          7   52428800 NO  CURRENT
         4          4   52428800 YES INACTIVE

Here,we cannot drop the current and active redo log file .

Step 2 : Force a checkpoint. The SQL statement alter system checkpoint explicitly forces Oracle to perform a checkpoint for either the current instance or all instances. Forcing a checkpoint ensures that all changes to the database buffers are written to the datafiles on disk. A global checkpoint is not finished until all instances that require recovery have been recovered.

SQL> alter system checkpoint global;
System altered.

SQL> select group#, sequence#, bytes, archived, status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         1          5   52428800 YES INACTIVE
         2          6   52428800 YES INACTIVE
         3          7   52428800 NO  CURRENT
         4          4   52428800 YES INACTIVE

Since the status of groups 1, 2 and 4 is now INACTIVE, we will drop the group 1 and group 2 redo log files.

Step 3 : Drop the redo log files
SQL> alter database drop logfile group 1;
Database altered.

SQL> alter database drop logfile group 2;
Database altered.

SQL> select group#, sequence#, bytes, archived, status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         3          7   52428800 NO  CURRENT
         4          4   52428800 YES INACTIVE

Step 4 : Create the new redo log files. If we do not delete the old redo log file with an OS command before creating a new log file with the same name, we get the error below; to solve it, delete the old file using an OS command.

SQL> alter database add logfile group 1 'C:\app\neerajs\oradata\orcl\redo01.log' size 100m;
alter database add logfile group 1 'C:\app\neerajs\oradata\orcl\redo01.log' size 100m
*
ERROR at line 1:
ORA-00301: error in adding log file 'C:\app\neerajs\oradata\orcl\redo01.log' - file cannot be created
ORA-27038: created file already exists
OSD-04010: option specified, file already exists

SQL> alter database add logfile group 1 'C:\app\neerajs\oradata\orcl\redo01.log' size 100m;
Database altered.

SQL> alter database add logfile group 2 'C:\app\neerajs\oradata\orcl\redo02.log' size 100m;
Database altered.

SQL> select group#, sequence#, bytes, archived, status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         1          0  104857600 YES UNUSED
         2          0  104857600 YES UNUSED
         3          7   52428800 NO  CURRENT
         4          4   52428800 YES INACTIVE

Step 5 : Now drop the remaining two old redo log files
SQL> alter system switch logfile;
System altered.

SQL> alter system switch logfile;
System altered.

SQL> select group#, sequence#, bytes, archived, status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         1          8  104857600 YES ACTIVE
         2          9  104857600 NO  CURRENT
         3          7   52428800 YES ACTIVE
         4          4   52428800 YES INACTIVE

SQL> alter system checkpoint global;
System altered.

SQL> select group#, sequence#, bytes, archived, status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         1          8  104857600 YES INACTIVE
         2          9  104857600 NO  CURRENT
         3          7   52428800 YES INACTIVE
         4          4   52428800 YES INACTIVE

SQL> alter database drop logfile group 3;
Database altered.

SQL> alter database drop logfile group 4;
Database altered.

SQL> select group#, sequence#, bytes, archived, status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         1          8  104857600 YES INACTIVE
         2          9  104857600 NO  CURRENT

Step 6 : Create the new redo log files
SQL> alter database add logfile group 3 'C:\app\neerajs\oradata\orcl\redo03.log' size 100m;
Database altered.

SQL> alter database add logfile group 4 'C:\app\neerajs\oradata\orcl\redo04.log' size 100m;
Database altered.

SQL> select group#, sequence#, bytes, archived, status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         1          8  104857600 YES INACTIVE
         2          9  104857600 NO  CURRENT
         3          0  104857600 YES UNUSED
         4          0  104857600 YES UNUSED

How often should the redo log files switch?

Redo log file switching has an impact on the performance of the database. Frequent log switches may slow the database down, while if the log file switches only after a long time, there is a greater chance of losing data should the current redo log file become corrupt. Oracle documentation suggests sizing the redo log files so that log switches happen roughly every 15-30 minutes (depending on the architecture and recovery requirements).

But what happens when there is a bulk load? We cannot resize the redo log files for every load, because that seems silly. Generally we do not load data in bulk on a regular basis; it is more often twice or three times a week. So what should the correct size be?

Here is a very good explanation of this question by "howardjr". One of my databases has very large logs which are not intended to fill up under normal operation; they are actually big enough to cope with a peak load we get every week. Previously, we had two or three log switches recorded under a single alert log timestamp! Now they switch every 10 minutes or so, even under the heaviest load. So big logs are good for slowing things down under load. But I don't want to sit there with 5 hours of redo in my current log during non-peak normal running. Therefore, I set archive_lag_target to 1800 (seconds = 30 minutes), and I know that in the worst possible case I will only lose 30 minutes of redo.
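
For reference, a sketch of setting that lag target dynamically (1800 seconds, as in the quote; a value of 0 disables time-based log switching):

SQL> alter system set archive_lag_target = 1800 scope=both;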

I see LOADS of advantages in using archive_lag_target even for standalone instances - actually, especially for standalone instances. I want logs big enough not to cause rapid log switching, but I have bulk loads; therefore, I have to have enormous logs to prevent rapid log switching during those times. In fact, on one database I am connected to right now, I have 2GB redo logs which nevertheless manage to switch every 8 minutes on a Friday night. We can imagine the frequency of log switches we had when those logs were originally created at 5MB each, and the number of redo allocation retries.

I'd like 8GB logs to get it down to a log switch every 30 minutes or so on a Friday night, but with multiple members and groups that is just getting silly. And now I have an enormous log that will take forever and a day to fill up and switch when I'm NOT doing bulk loads: ordinarily, without a forced log switch, my 2GB log takes 3 days to fill up.

How does FAST_START_MTTR_TARGET affect the redo log files in case of recovery?

If I were to have a catastrophic hardware failure, I could lose my current redo log. Fast_start_mttr_target can't do anything to ameliorate that loss: flushing the dirty buffers to disk regularly doesn't actually protect my data. In fact, there is no way to recover transactions that are sitting in the current redo log if that log is lost. Therefore, having an enormous log full of hours and hours (in my case, about 72 hours' worth) of redo is a massive data loss risk, and not one I'm prepared to take. Forcing log switches is a good thing for everyone to be able to do, when appropriate, even if they're not using Data Guard and standby databases.

Huge log files may be necessary, but a forced log switch is essential thereafter for data security. We can certainly try to minimise the risk - that is what redo log multiplexing is all about - but if we lose all copies of our current log, then we have lost the only copy of that redo, and that means we have lost data.

Frequent checkpoints can help minimise the amount of redo that is vulnerable to loss, but they do nothing to minimise the risk of that loss occurring. Redundant disks (mirroring), redundant controllers and multiplexing are the only things that can help protect the current redo log and thus actually reduce the risk of failure occurring in the first place. Frequent checkpointing simply reduces the damage that the loss of all current logs would inevitably cause, but it doesn't (and cannot) reduce it to zero. It is therefore not a protection mechanism at all.

Checkpoints set a limit on the potential data loss from redo log damage, absolutely they do. But no matter how frequently we checkpoint, we cannot reduce the potential data loss to zero, and reducing the potential cost of a disaster should it strike doesn't count as reducing the risk of the disaster happening. Buying car insurance doesn't reduce our risk of having a car accident: it simply means we can pay the bills when the accident eventually happens. Therefore, checkpoints cannot reasonably be called a "current redo logfile protection mechanism". Mirroring, multiplexing and redundant hardware are the only ways to actually protect the current redo log. Safety and performance always have to be traded off against each other, and we cannot realistically go for just one of them without appreciating the impact on the other.

Configuration of Snapshot Standby Database in Oracle 11g

Snapshot Standby is a new feature introduced in Oracle 11g. A snapshot standby database is a type of updatable standby database that provides full data protection for a primary database. A snapshot standby database receives and archives, but does not apply, redo data from its primary database. Redo data received from the primary database is applied when the snapshot standby database is converted back into a physical standby database, after discarding all local updates made to the snapshot standby database.

The main benefit of a snapshot standby database is that we can convert the physical standby database into a read-write, real-time clone of the production database, and then temporarily use this environment for carrying out any kind of development testing or QA-type work, or to test the impact of a proposed production change on an application.

The best part of this snapshot feature is that the snapshot standby database in turn uses the Flashback Database technology to create a guaranteed restore point to which the database can later be flashed back. All the features of flashback are inherent in the snapshot standby.

Here we will configure the snapshot standby database

Step 1 : Create the physical standby database.

Step 2: Enable Flashack Parameter

SQL> alter system set db_recovery_file_dest_size=4G scope=both ;System altered.

SQL>alter system set db_recovery_file_dest='D:\standby\fra\' scope=both ;System altered.

SQL>show parameter db_recoveryNAME TYPE VALUE----------------------------- ----------- -------------------------db_recovery_file_dest string D:\standby\fra\db_recovery_file_dest_size big integer 4G

Step 3 : Stop the media recovery processSQL>alter database recover managed standby database cancel;Database altered.

Step 4 : Ensure that the database is mounted, but not open.SQL>shut immediateORA-01109: database not openDatabase dismounted.ORACLE instance shut down.SQL>startup mount

Step 5 : Create a guaranteed restore point
SQL> create restore point snapshot_rspt guarantee flashback database;
Restore point created.

Step 6 : Perform the conversion to a snapshot standby database
SQL> alter database convert to snapshot standby ;
Database altered.

SQL> select name,open_mode,database_role from v$database;
NAME       OPEN_MODE       DATABASE_ROLE
---------  --------------  --------------------------
NOIDA      MOUNTED         SNAPSHOT STANDBY

SQL> alter database open;
Database altered.

SQL> select name,db_unique_name,open_mode,database_role from v$database;
NAME      DB_UNIQUE_NAME    OPEN_MODE     DATABASE_ROLE
--------  ----------------  ------------  ------------------------
NOIDA     gurgoan           READ WRITE    SNAPSHOT STANDBY

Since the database is now in read-write mode, we can make some changes (for example, alter a tuning parameter), check their effect, and later flash back the whole set of changes after converting back to a physical standby.
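As a minimal sketch of the kind of throw-away test we might run while the clone is open read-write (the schema and table name below are made up for illustration and are not part of the original walkthrough):

SQL> create table scott.snap_test (id number, note varchar2(50));
SQL> insert into scott.snap_test values (1, 'change made only on the snapshot standby');
SQL> commit;

Any such change is discarded automatically when the snapshot standby is flashed back and converted to a physical standby again.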

SQL> select name,guarantee_flashback_database from v$restore_point;
NAME                                              GUA
------------------------------------------------  ---
SNAPSHOT_STANDBY_REQUIRED_11/18/2011 20:41:01     YES
SNAPSHOT_RSPT                                     YES

While the original physical standby database has now been converted to a snapshot standby, changes continue to happen on the primary database and are shipped to the standby site, but not yet applied. They accumulate on the standby site and will be applied after the snapshot standby database is converted back into a physical standby database.
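A simple way to confirm that redo is being received but not applied while the database is a snapshot standby is to compare received and applied sequences in v$archived_log; this is only a sketch, and the output naturally depends on the environment:

SQL> select max(sequence#) "Last Received" from v$archived_log;
SQL> select max(sequence#) "Last Applied"  from v$archived_log where applied='YES';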

Step 7 : Convert the snapshot standby back to a physical standby
SQL> shut immediate
SQL> startup mount

SQL> alter database convert to physical standby ;
Database altered.

SQL> shut immediate
SQL> startup mount

SQL> alter database recover managed standby database disconnect from session;
Database altered.

SQL> alter database recover managed standby database cancel;
Database altered.

SQL> alter database open;
Database altered.

SQL> select name,open_mode,db_unique_name,database_role from v$database;
NAME      OPEN_MODE     DB_UNIQUE_NAME    DATABASE_ROLE
--------  ------------  ----------------  ----------------------
NOIDA     READ ONLY     gurgoan           PHYSICAL STANDBY

SQL> alter database recover managed standby database using current logfile disconnect;
Database altered.

Hence, we are finally back to a physical standby database.
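To double-check that managed recovery is running again on the physical standby, a query like the following can be used (a sketch, not part of the original walkthrough):

SQL> select process, status, sequence# from v$managed_standby;

The MRP0 process should normally show a status such as APPLYING_LOG once real-time apply has restarted.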

How to Drop a Data Guard Configuration in Oracle 11g

Once, while configuring the Data Guard broker, I faced ORA-16625 and ORA-16501. This error occurs because the broker rejects an operation requested by the client when the database required to execute that operation is not reachable from the database where the request was made. If the request modifies the configuration, the request must be processed by the copy of the broker running on an instance of the primary database.

A few days ago I configured the standby database "RED" and its broker configuration, and later dropped it. The next time I tried to configure the broker, the following error occurred:

DGMGRL> create configuration 'dgnoida'
> as primary database is 'noida'
> connect identifier is 'noida';
Error: ORA-16501: the Data Guard broker operation failed
Error: ORA-16625: cannot reach database "red"
Failed.

To solve this issue, I removed the Data Guard broker configuration and then re-created it. The steps to drop the configuration are as follows :

Step 1 : Stop the Data Guard broker process (on standby)

SQL> show parameter dg_broker
NAME                        TYPE         VALUE
--------------------------  -----------  ----------------------------------------------
dg_broker_config_file1      string       C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR1NOIDA.DAT
dg_broker_config_file2      string       C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR2NOIDA.DAT
dg_broker_start             boolean      TRUE

SQL> alter system set dg_broker_start=false;
System altered.

Step 2 : Disable the archive destination state (on primary)
SQL> select dest_id,destination,status from v$archive_dest where target='STANDBY';
DEST_ID   DESTINATION   STATUS
--------  ------------  ----------
       2  delhi         VALID

SQL> alter system set log_archive_dest_state_2=defer ;
System altered.

SQL>select dest_id,destination,status from v$archive_dest where target='STANDBY';

DEST_ID   DESTINATION   STATUS
--------  ------------  ----------
       2  delhi         DEFERRED

Step 3 : On both systems, rename or drop the broker metadata files

SQL>show parameter dg_broker

NAME                        TYPE         VALUE
--------------------------  -----------  ----------------------------------------------
dg_broker_config_file1      string       C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR1NOIDA.DAT
dg_broker_config_file2      string       C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR2NOIDA.DAT
dg_broker_start             boolean      FALSE

Delete or rename the files DR1NOIDA.DAT and DR2NOIDA.DAT.
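Note that when the broker can still reach every database in the configuration, the cleaner way to drop it is from DGMGRL itself rather than by deleting the metadata files; a sketch, connecting as in the earlier examples:

DGMGRL> connect sys/xxxx@noida
DGMGRL> remove configuration;

In the ORA-16625 case above that command is not an option, because the broker cannot reach the dropped standby "red", which is why the manual steps are needed.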

Step-By-Step Configuration of Data Guard Broker in Oracle 11g

As we have already discussed the Data Guard broker and its benefits in an earlier post, here we will configure the Data Guard broker. Here are the steps :

Primary Database = Noida
Standby Database = Delhi

Step 1 : Check the Data Guard broker process
SQL> show parameter dg_broker
NAME                   TYPE        VALUE
---------------------  ----------  ----------
dg_broker_start        boolean     FALSE

Step 2 : Start the Data Guard broker process on the primary database
SQL> alter system set dg_broker_start=true scope=both;
System altered.

Step 3 : Check DG_BROKER on the standby database and start it
SQL> show parameter dg_broker
NAME                   TYPE        VALUE
---------------------  ----------  ----------
dg_broker_start        boolean     FALSE

SQL> alter system set dg_broker_start=true scope=both ;
System altered.

Step 4 : Edit the listener.ora file
Edit the listener.ora file to include a GLOBAL_DBNAME of db_unique_name_DGMGRL.db_domain on both the primary and the standby database. To set the value, let's check the db_domain value.

SQL> show parameter db_domain
NAME           TYPE        VALUE
-------------  ----------  --------------
db_domain      string

Since the value of db_domain is null, the value of GLOBAL_DBNAME = NOIDA_DGMGRL for the primary database and GLOBAL_DBNAME = DELHI_DGMGRL for the standby. The primary listener.ora file is as follows:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = noida_DGMGRL)
      (ORACLE_HOME = C:\app\neerajs\product\11.2.0\dbhome_1)
      (SID_NAME = noida)
    )
  )

Similarly, edit the listener.ora file on the standby database.
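After editing listener.ora on each host, the listener has to pick up the new static entry; a sketch of how that might be done from the Windows command prompt used in these examples:

C:\> lsnrctl reload
C:\> lsnrctl status

The status output should then list the noida_DGMGRL (or delhi_DGMGRL) service among the statically registered services.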

Step 5 : Create the Data Guard configuration
C:\>dgmgrl
DGMGRL for 32-bit Windows: Version 11.2.0.1.0 - Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.

DGMGRL> connect sys/xxxx@noida
Connected.

DGMGRL> create configuration 'dgnoida'
> as primary database is 'noida'
> connect identifier is noida ;
Configuration "dgnoida" created with primary database "noida".

Once the configuration is created, check its status.

DGMGRL> show configuration
Configuration - dgnoida
  Protection Mode : MaxPerformance
  Databases :
    noida - Primary database
Fast-Start Failover : DISABLED
Configuration Status : DISABLED

Step 6 : Add the standby database to the broker configuration
DGMGRL> add database 'delhi' as
> connect identifier is delhi
> maintained as physical ;
Database "delhi" added

DGMGRL> show configuration
Configuration - dgnoida
  Protection Mode : MaxPerformance
  Databases :
    noida - Primary database
    delhi - Physical standby database
Fast-Start Failover : DISABLED
Configuration Status : DISABLED

Step 7 : Enable the configuration
DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration
Configuration - dgnoida
  Protection Mode : MaxPerformance
  Databases :
    noida - Primary database
    delhi - Physical standby database
Fast-Start Failover : DISABLED
Configuration Status : SUCCESS

Step 8 : View the Primary and Standby database properties

DGMGRL> show database verbose noida

Database - noida
  Role            : PRIMARY
  Intended State  : TRANSPORT-ON
  Instance(s)     : noida

  Properties:
    DGConnectIdentifier       = 'noida'
    ObserverConnectIdentifier = ''
    LogXptMode                = 'ASYNC'
    DelayMins                 = '0'
    Binding                   = 'optional'
    MaxFailure                = '0'
    MaxConnections            = '1'
    ReopenSecs                = '300'
    NetTimeout                = '30'
    RedoCompression           = 'DISABLE'
    LogShipping               = 'ON'
    PreferredApplyInstance    = ''
    ApplyInstanceTimeout      = '0'
    ApplyParallel             = 'AUTO'
    StandbyFileManagement     = 'AUTO'
    ArchiveLagTarget          = '0'
    LogArchiveMaxProcesses    = '4'
    LogArchiveMinSucceedDest  = '1'
    DbFileNameConvert         = ''
    LogFileNameConvert        = ''
    FastStartFailoverTarget   = ''
    StatusReport              = '(monitor)'
    InconsistentProperties    = '(monitor)'
    InconsistentLogXptProps   = '(monitor)'
    SendQEntries              = '(monitor)'
    LogXptStatus              = '(monitor)'
    RecvQEntries              = '(monitor)'
    HostName                  = 'TECH-199'
    SidName                   = 'noida'
    StaticConnectIdentifier   = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=TECH-199)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=noida_DGMGRL)(INSTANCE_NAME=noida)(SERVER=DEDICATED)))'
    StandbyArchiveLocation    = 'D:\archive\'
    AlternateLocation         = ''
    LogArchiveTrace           = '0'
    LogArchiveFormat          = 'ARC%S_%R.%T'
    TopWaitEvents             = '(monitor)'

Database Status : SUCCESS

DGMGRL>show database verbose delhi

Database - delhi
  Role            : PHYSICAL STANDBY
  Intended State  : APPLY-ON
  Transport Lag   : 0 seconds
  Apply Lag       : 0 seconds
  Real Time Query : ON
  Instance(s)     : delhi

  Properties:
    DGConnectIdentifier       = 'delhi'
    ObserverConnectIdentifier = ''
    LogXptMode                = 'SYNC'
    DelayMins                 = '0'
    Binding                   = 'OPTIONAL'
    MaxFailure                = '0'
    MaxConnections            = '1'
    ReopenSecs                = '300'
    NetTimeout                = '30'
    RedoCompression           = 'DISABLE'
    LogShipping               = 'ON'
    PreferredApplyInstance    = ''
    ApplyInstanceTimeout      = '0'
    ApplyParallel             = 'AUTO'
    StandbyFileManagement     = 'AUTO'
    ArchiveLagTarget          = '0'
    LogArchiveMaxProcesses    = '4'
    LogArchiveMinSucceedDest  = '1'
    DbFileNameConvert         = 'C:\app\neerajs\oradata\noida\, D:\app\stand\oradata\, E:\oracle\, D:\app\stand\oradata\'
    LogFileNameConvert        = 'C:\app\neerajs\oradata\noida\, D:\app\stand\oradata\'
    FastStartFailoverTarget   = ''
    StatusReport              = '(monitor)'
    InconsistentProperties    = '(monitor)'
    InconsistentLogXptProps   = '(monitor)'
    SendQEntries              = '(monitor)'
    LogXptStatus              = '(monitor)'
    RecvQEntries              = '(monitor)'
    HostName                  = 'TECH-284'
    SidName                   = 'delhi'
    StaticConnectIdentifier   = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=TECH-284)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=delhi_DGMGRL)(INSTANCE_NAME=delhi)(SERVER=DEDICATED)))'
    StandbyArchiveLocation    = 'D:\app\stand\archive\'
    AlternateLocation         = ''
    LogArchiveTrace           = '0'
    LogArchiveFormat          = 'ARC%S_%R.%T'
    TopWaitEvents             = '(monitor)'

Database Status : SUCCESS

DGMGRL>
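Most of the properties listed above can be changed through the broker itself instead of by editing initialization parameters directly; for example, a sketch of switching the standby's redo transport mode (the value shown is just an illustration):

DGMGRL> edit database 'delhi' set property LogXptMode='SYNC';
DGMGRL> show database 'delhi' 'LogXptMode';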

Switchover and Failover in Standby Oracle 11g

Data Guard uses two terms when cutting over to the standby server: switchover, which is a planned event, and failover, which is an unplanned event.

1.) Switchover : Switchover is a planned event. It is ideal when we want to upgrade the primary database or change the storage/hardware configuration (add memory, CPU, networking); we may even want to upgrade the configuration to Oracle RAC.

What happens during a switchover is the following :

1.) Notifies the primary database that a switchover is about to occur
2.) Disconnects all users from the primary database
3.) Generates a special redo record that signals the End of Redo (EOR)
4.) Converts the primary database into a standby database
5.) Once the standby database applies the final EOR record, guaranteeing that no data has been lost, converts the standby database into the primary database.

The new standby database (old primary) starts to receive the redo records and continues to process them until we switch back again. It is important to remember that both databases receive the EOR record, so both databases know the next redo that will be received. Although we can have users still connected to the primary database while the switchover occurs (which generally takes about 60 seconds), I personally take a small outage just to be on the safe side, in case things don't go as smoothly as hoped.

We can even switch over from a Linux database to a Windows database, or from a 64-bit to a 32-bit database, which is great if we want to migrate to a different OS or 32/64-bit architecture; our rollback option is also very easy: simply switch back if it did not work.
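With the broker configuration from the previous section in place, a planned switchover boils down to a single DGMGRL command; a sketch using the database names from the earlier examples:

DGMGRL> connect sys/xxxx@noida
DGMGRL> switchover to 'delhi';

The broker restarts the databases in their new roles as needed; without the broker, the same result is achieved with the ALTER DATABASE COMMIT TO SWITCHOVER statements issued on the primary and the standby in turn.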

2.) Failover : Failover is an unplanned event. This is where the EOR was never written by the primary database; the standby database processes whatever redo it has and then waits. Data loss now depends on the protection mode in effect:

Maximum Performance - possible chance of data loss
Maximum Availability - possible chance of data loss
Maximum Protection - no data loss

We have the option to fail over manually or to make the whole process automatic. Manual failover gives the DBA maximum control over the whole process; obviously the length of the outage then depends on getting the DBA out of bed and failing over. Otherwise, the Oracle Data Guard Fast-Start Failover feature can automatically detect a problem and fail over for us. The failover process should take between 15 and 25 seconds.
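For a manual failover with the broker, the command is issued while connected to the standby that is to become the new primary; a sketch based on the configuration built earlier:

DGMGRL> connect sys/xxxx@delhi
DGMGRL> failover to 'delhi';

An IMMEDIATE clause also exists; it skips applying any remaining redo and therefore risks losing more data.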

Which Role Transition Operation Should I Use ?

When faced with the decision of which role transition is best for a given situation, we should always choose the one that best reduces downtime and has the least potential for data loss. Also consider how the change will affect any other standby database in the configuration. We should consider the following when deciding which operation to use:

What is the current state of the primary database just before the transition? Is it available?
What is the state of the selected standby database to be used in the role transition at the time of transition?
Is the standby database configured as a physical or logical standby database?

The following decision tree can be used to assist when making this critical decision as to which operation to perform.

One key point to consider is that if it would be faster to repair the primary database (from failure or a simple planned hardware/software upgrade), the most efficient method would be to perform the tasks and then to bring up the primary database as quickly as possible and not perform any type of role transition. This method can impose less risk to the system and does not require any client software to be re-configured.

Another consideration involves a Data Guard configuration that includes a logical standby database. A switchover operation can be performed using either a physical or a logical standby database. Take note, however, of the following issues you may run into regarding physical and logical standby configurations. If the configuration includes a primary, a physical standby, and a logical standby, and a switchover is performed to the logical standby, the physical standby will no longer be part of the configuration and must be rebuilt. In the same scenario, if the switchover is performed to the physical standby, the logical standby remains in the Data Guard configuration and does not need to be rebuilt. Obviously, a physical standby is a better switchover candidate than a logical standby when multiple standby types exist in a given configuration.

Hence, we finally come to the conclusion that the order to set up Data Guard is the following :
The primary database is up and running
Create a standby database
Set up the redo transport rules
Create the SRL files
Execute one of the following

SQL> alter database set standby database to maximize performance ;   //(default)
SQL> alter database set standby database to maximize availability ;
SQL> alter database set standby database to maximize protection ;

Data Protection Modes in Data Guard
Data Guard protection modes are simply a set of rules that the primary database must adhere to when running in a Data Guard configuration. A protection mode is only set on the primary database and defines the way Oracle Data Guard will maximize a Data Guard configuration for performance, availability, or protection, in order to bound the maximum amount of data loss allowed when the primary database or site fails.

A Data Guard configuration always runs in one of three protection modes. Each of the three modes provides a high degree of data protection; however, they differ with regard to data availability and performance of the primary database. When selecting a protection mode, always consider the one that best meets the needs of your business. Carefully weigh the need to protect the data against any loss versus the availability and performance expectations of the primary database.

Data Guard can support multiple standby databases in a single configuration; they may or may not have the same protection mode settings, depending on our requirements. The protection modes are :

1.) Maximum Performance
2.) Maximum Availability
3.) Maximum Protection

1.) Maximum Performance : This is the default mode; we get the highest performance but the lowest protection. This mode requires ASYNC redo transport so that the LGWR process never waits for acknowledgment from the standby database. How much data we lose depends on the redo rate and how well our network can handle the amount of redo, also known as the transport lag. Even with a zero lag time we will still lose some data at failover time.

We can have up to 9 physical standby databases in Oracle 10g and 30 in Oracle 11g, and we will use the asynchronous transport (ASYNC) with no affirmation of the standby I/O (NOAFFIRM). We can use this anywhere in the world, but bear in mind the network latency and make sure the network can support our redo rate. While it is not mandatory to have standby redo logs (SRLs) in this mode, it is advisable to do so. The SRL files need to be the same size as the online redo log files (ORLs).

The following table describes the attributes that should be defined for the LOG_ARCHIVE_DEST_n initialization parameter for the standby database destination to participate in Maximum Performance mode.

For example:
log_archive_dest_2='service=red ARCH NOAFFIRM'
or
log_archive_dest_2='service=red LGWR ASYNC NOAFFIRM'

2.) Maximum Availability : Its first priority is to be available and its second priority is zero-loss protection, so it requires SYNC redo transport. This is the middle of the range: it offers strong protection, but not at the expense of causing problems on the primary database. However, we must remember that it is still possible to lose data: if our network was out for a period of time, the standby has not had a chance to re-synchronize, and the primary then goes down, there will be data loss.

Again, we can have up to 9 physical standby databases in Oracle 10g and 30 in Oracle 11g, and we will use synchronous transport (SYNC) with affirmation of the standby I/O (AFFIRM) and SRL files. In the event that the standby server is unavailable, the primary will wait the time specified in the NET_TIMEOUT parameter before giving up on the standby server and allowing the primary to continue to process. Once the connection has been re-established, the primary will automatically resynchronize the standby database.

When NET_TIMEOUT expires, the LGWR process disconnects from the LNS process, acknowledges the commit and proceeds without the standby. Processing continues until the current ORL is complete and the LGWR cycles into a new ORL; a new LNS process is started and an attempt is made to connect to the standby server. If it succeeds, the new ORL is sent as normal; if not, LGWR disconnects again until the next log switch. The whole process repeats at every log switch until, hopefully, the standby database becomes available again. In the background, if any archive logs have been created during this time, the ARCH process will continually ping the standby database until it comes back online.

We might have noticed there is still a potential loss of data if the primary goes down while the standby database has also been down for a period of time and there has been no resynchronization; this is similar to Maximum Performance, but here we do give the standby server a chance to respond, using the timeout. The minimum requirements are described in the following table :

For example : log_archive_dest_2='service=red LGWR SYNC AFFIRM'
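Putting the SYNC, AFFIRM and NET_TIMEOUT pieces together, a Maximum Availability destination might be set roughly as follows; the service name, timeout value and DB_UNIQUE_NAME are assumptions based on the examples in this document, not settings from the original text:

SQL> alter system set log_archive_dest_2='SERVICE=delhi LGWR SYNC AFFIRM NET_TIMEOUT=30 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=delhi' scope=both;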

3.) Maximum Protection : This offers the maximum protection, even at the expense of the primary database: there is no data loss. This mode uses SYNC redo transport, and the primary will not issue a commit acknowledgment to the application unless it receives an acknowledgment from at least one standby database; basically, the primary will stall and eventually abort, preventing any unprotected commits from occurring. This guarantees complete data protection. In this setup it is advised to have two separate standby databases at different locations with no single point of failure (SPOF); they should not use the same network infrastructure, as that would be a SPOF.

The minimum requirements are described in the following table.

For example : log_archive_dest_2='service=red LGWR SYNC AFFIRM'

Finally, the protection mode can be changed from its default of Maximum Performance up to Maximum Protection. The protection modes run in order from the highest (most data protection) to the lowest (least data protection).
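As a sketch of raising the mode on the primary: Maximum Availability can be raised while the database is open in 11g, while Maximum Protection requires the primary to be in the mount state, and in both cases the standby destination must already be configured LGWR SYNC AFFIRM:

SQL> alter database set standby database to maximize availability ;
-- or, for Maximum Protection:
SQL> shut immediate
SQL> startup mount
SQL> alter database set standby database to maximize protection ;
SQL> alter database open;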

Each of the Data Guard data protection modes requires that at least one standby database in the configuration meet the minimum set of requirements listed in the table below.

Data Guard Architecture Oracle 11g Part-III
The redo data transmitted from the primary database is written to the standby redo log on the standby database. Apply services automatically apply the redo data on the standby database to maintain consistency with the primary database. They also allow read-only access to the data. The main difference between physical and logical standby databases is the manner in which apply services apply the archived redo data.

There are two methods of applying redo: 1.) Redo Apply (physical standby) and 2.) SQL Apply (logical standby).

They both have the same common features:
Both synchronize the standby with the primary database
Both can prevent modifications to the data
Both provide a high degree of isolation between the primary and the standby database
Both can quickly transition the standby database into the primary database
Both offer a productive use of the standby database with no impact on the primary database

1.) Redo Apply (Physical Standby) : Redo Apply maintains a block-by-block physical replica of the primary database. Redo Apply uses media recovery to read records from the SRL into memory and apply change vectors directly to the standby database. Media recovery performs parallel recovery for very high performance; it comprises a media recovery coordinator (MRP0) and multiple parallel apply processes (PR0n). The coordinator manages the recovery session, merges the redo by SCN from multiple instances (if in a RAC environment) and parses the redo into change mappings partitioned by apply process. The apply processes read data blocks, assemble redo changes from the mappings and then apply the redo changes to the data blocks.

This method allows us to use the standby database in a read-only fashion. Active Data Guard solves the read-consistency problem of previous releases by the use of a "query" SCN. The media recovery process on the standby database advances the query SCN after all dependent changes in a transaction have been fully applied. The query SCN is exposed to the user via the current_scn column of the v$database view. Read-only users can only see data up to the query SCN, and thus the standby database can be open in read-only mode while media recovery is active, which makes this an ideal reporting database.
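A sketch of how a physical standby is typically opened for this kind of real-time query use (note that Active Data Guard is a separately licensed option):

SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> alter database recover managed standby database using current logfile disconnect;
SQL> select current_scn from v$database;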

We can use SYNC or ASYNC, and the standby is isolated from physical I/O corruptions; corruption-detection checks occur at the following key interfaces:

On the primary during redo transport - LGWR, LNS and ARCH use the DB_ULTRA_SAFE parameter.
On the standby during redo apply - RFS, ARCH, MRP and DBWR use the DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT parameters (a sketch of setting these follows the list below).

If Data Guard detects any corruption, it will automatically fetch new copies of the data from the primary using the gap resolution process, in the hope that the original data is free of corruption. The key features of this solution are :
Complete application and data transparency - no data type or other restrictions
Very high performance, least management complexity and fewest moving parts
End-to-end validation before applying, including corruptions due to lost writes
Able to be utilized for up-to-date read-only queries and reporting while providing DR
Able to execute rolling database upgrades beginning with Oracle Database 11g
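A sketch of how those corruption-detection parameters might be set; the values shown are illustrative, not recommendations from the original text:

-- on the primary (DB_ULTRA_SAFE is not dynamic, so it needs a restart)
SQL> alter system set db_ultra_safe=data_only scope=spfile;
-- on the standby
SQL> alter system set db_block_checksum=full scope=both;
SQL> alter system set db_lost_write_protect=typical scope=both;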

2.) SQL Apply (Logical Standby) : SQL Apply uses the logical standby process (LSP) to coordinate the apply of changes to the standby database. SQL Apply requires more processing than Redo Apply: the processes that make up SQL Apply read the SRL and "mine" the redo by converting it to logical change records, then build SQL transactions and apply the SQL to the standby database. Because there are more moving parts, it requires more CPU, memory and I/O than Redo Apply.

SQL Apply does not support all data types, for example XML in object-relational format and Oracle-supplied types such as Oracle Spatial, Oracle interMedia and Oracle Text.
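Before choosing SQL Apply, it is worth checking which objects on the primary would not be maintained; a sketch of the usual check:

SQL> select distinct owner, table_name from dba_logstdby_unsupported order by owner, table_name;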

The benefit of SQL Apply is that the database is open read-write while apply is active. While we cannot make any changes to the replicated data, we can insert, modify and delete data in local tables and schemas that have been added to the database; we can even create materialized views and local indexes. This makes it ideal for reporting tools and similar uses.

The key features of this solution are :
A standby database that is opened for read-write while SQL Apply is active
A guard setting that prevents the modification of data that is being maintained by SQL Apply (see the sketch below)
Able to execute rolling database upgrades beginning with Oracle Database 11g, using the KEEP IDENTITY clause
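The guard setting mentioned above is controlled with ALTER DATABASE GUARD; a minimal sketch of the three levels:

SQL> alter database guard all;       -- no user other than SYS can modify any data
SQL> alter database guard standby;   -- only objects maintained by SQL Apply are protected
SQL> alter database guard none;      -- normal security only
SQL> select guard_status from v$database;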

Data Guard Architecture Oracle 11g Part-II
LNS (log-write network-server) and ARCH (archiver) processes running on the primary database select archived redo logs and send them to the standby database, where the RFS (remote file server) background process within the Oracle instance performs the task of receiving archived redo logs originating from the primary database.

The LNS process supports two modes: 1.) synchronous and 2.) asynchronous.

1.) Synchronous Mode : Synchronous transport (SYNC) is also referred to as the "zero data loss" method, because the LGWR is not allowed to acknowledge that a commit has succeeded until the LNS can confirm that the redo needed to recover the transaction has been written at the standby site. The phases of a transaction are:

1.) The user commits a transaction, creating a redo record in the SGA. The LGWR reads the redo record from the log buffer, writes it to the online redo log file and waits for confirmation from the LNS.
2.) The LNS reads the same redo record from the buffer and transmits it to the standby database using Oracle Net Services; the RFS receives the redo at the standby database and writes it to the SRL.
3.) When the RFS receives a write-complete from the disk, it transmits an acknowledgment back to the LNS process on the primary database, which in turn notifies the LGWR that the transmission is complete; the LGWR then sends a commit acknowledgment to the user.

This setup really does depend on network performance and can have a dramatic impact on the primary database: network latency directly affects response times. The impact can be seen in the wait event "LNS wait on SENDREQ" found in the v$system_event dynamic performance view.
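A sketch of how that wait event can be checked from SQL*Plus (the LIKE pattern is just a convenient filter):

SQL> select event, total_waits, time_waited from v$system_event where event like 'LNS%';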

2.) Asynchronous Mode : Asynchronous transport (ASYNC) differs from SYNC in that it eliminates the requirement that the LGWR wait for an acknowledgment from the LNS, creating a near-zero performance impact on the primary database regardless of the distance between the primary and standby locations. The LGWR will continue to acknowledge commit success even if limited bandwidth prevents the redo of previous transactions from being sent to the standby database immediately. If the LNS is unable to keep pace and the log buffer is recycled before the redo is sent to the standby, the LNS automatically transitions to reading and sending from the log file instead of the log buffer in the SGA. Once the LNS has caught up, it switches back to reading directly from the buffer in the SGA.

The log buffer hit ratio is tracked via the view X$LOGBUF_READHIST; a low hit ratio indicates that the LNS is reading from the log file instead of the log buffer. If this happens, try increasing the log buffer size.

The drawback with ASYNC is the increased potential for data loss: if a failure destroys the primary database before the transport lag is reduced to zero, any committed transactions that are part of the transport lag are lost. So again, make sure that the network bandwidth is adequate and that we get the lowest latency possible.

A log file gap occurs whenever a primary database continues to commit transactions while the LNS process has ceased transmitting redo to the standby database (for example because of network issues). The primary database continues writing to the current log file, fills it, and then switches to a new log file; archiving then kicks in and archives the file.