42177096 Oracle Interview



1. What is an Oracle Instance?

Ans) An instance is the combination of memory structures and background processes. The memory structure is the System Global Area (SGA), and the process structure is the set of background processes (SMON, PMON, DBWn, LGWR, CKPT, and so on).
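As a quick check from SQL*Plus (assuming an account with access to the standard dynamic performance views), the running instance and its SGA can be inspected like this:

    SELECT instance_name, status, host_name FROM v$instance;   -- instance identity and state
    SELECT name, value FROM v$sga;                              -- high-level SGA components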

    2. What information is stored in Control File?

    Ans) The database name

    The timestamp of database creation

    The names and locations of associated datafiles and redo log files

    Tablespace information

    Datafile offline ranges

    The log history

    Archived log information

    Backup set and backup piece information

Backup datafile and redo log information

    Datafile copy information

    The current log sequence number
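As a small illustration (any account with SELECT access on the V$ views), the control file locations and the record sections they track can be listed directly:

    SELECT name FROM v$controlfile;                      -- control file locations
    SELECT type, records_used, records_total
      FROM v$controlfile_record_section;                 -- e.g. DATAFILE, REDO LOG, LOG HISTORY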

    3. When you start an Oracle DB which file is accessed first?

Ans) At startup Oracle first looks for a server parameter file (spfile<SID>.ora, then spfile.ora) in the default location ($ORACLE_HOME/dbs on Unix); only if no SPFILE is found does it fall back to the text initialization file init<SID>.ora.
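To see which kind of parameter file the instance actually started with, a simple check from SQL*Plus is:

    SHOW PARAMETER spfile
    -- a non-empty VALUE means the instance was started from an SPFILE;
    -- an empty VALUE means a text PFILE (init<SID>.ora) was used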

    4. What is the Job of SMON, PMON processes?

SMON :- The system monitor process performs recovery after an instance failure and cleans up temporary segments and free extents.


PMON :- The process monitor cleans up after failed user processes, releasing the locks and other resources held after an abnormal termination.
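The background processes currently running in the instance, including SMON and PMON, can be listed with a commonly used query (the paddr <> '00' filter keeps only started processes):

    SELECT name, description
      FROM v$bgprocess
     WHERE paddr <> '00'
     ORDER BY name;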

    5. What is Instance Recovery?

Ans) When an Oracle instance fails, Oracle performs instance recovery when the associated database is restarted.

Instance recovery occurs in two steps:

Cache recovery: Changes made to the database are recorded in the database buffer cache and, at the same time, in the online redo log files. Dirty buffers are written to the data files periodically. If the instance fails before the changes in the buffer cache have been written to the data files, Oracle uses the records in the online redo log files to reapply the lost changes when the database is restarted. This step is called cache recovery.

Transaction recovery: When a transaction modifies data in the database, the before image of the modified data is stored in an undo segment. The undo data is used to restore the original values if the transaction is rolled back. At the time of an instance failure the database may contain uncommitted transactions, and some of their changes may already have been written to the data files. To maintain read consistency, Oracle rolls back all uncommitted transactions when the database is restarted, using the undo data stored in the undo segments. This step is called transaction recovery.
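Related to instance recovery, Oracle exposes its estimate of how much work a crash recovery would currently require through V$INSTANCE_RECOVERY; a sketch of how a DBA might watch it:

    SELECT target_mttr,            -- MTTR goal in seconds (FAST_START_MTTR_TARGET)
           estimated_mttr,         -- current estimated recovery time in seconds
           recovery_estimated_ios  -- data blocks expected to be processed during recovery
      FROM v$instance_recovery;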

    6. What is written in Redo Log Files?


Ans) Redo log files contain the redo records for every change made to the database. Log Writer (LGWR) writes the contents of the redo log buffer to the redo log files: when a transaction commits, every three seconds, when the redo log buffer is one-third full, and immediately before Database Writer (DBWn) writes its changed buffers to the datafiles.
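The online redo log groups that LGWR cycles through, and which one is CURRENT, can be seen with:

    SELECT group#, sequence#, members, status
      FROM v$log
     ORDER BY group#;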

    7. How do you control number of Datafiles one can have in an Oracle database?

Ans) When an Oracle instance starts, the parameter file indicates how much SGA space to reserve for datafile information; the maximum number of datafiles is controlled by the DB_FILES initialization parameter. This limit applies only for the life of the instance.
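A hedged sketch of checking and raising the limit (500 is only an example value; DB_FILES is not dynamic, so the change takes effect only after a restart):

    SHOW PARAMETER db_files
    ALTER SYSTEM SET db_files = 500 SCOPE = SPFILE;   -- example value; restart required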

    8. How many Maximum Datafiles can there be in an Oracle Database?

Ans) The absolute limit is 65,533 datafiles per database (operating-system limits may be lower); the effective limit for a running instance is set by the DB_FILES parameter.

    9. What is a Tablespace?

Ans) A database is divided into logical storage units called tablespaces. A tablespace is used to group related logical structures (such as tables and indexes) together.
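A minimal example of creating a tablespace (the tablespace name and datafile path are illustrative only):

    CREATE TABLESPACE app_data
      DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 100M;   -- hypothetical name and path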

    10. What is the purpose of Redo Log files?

Ans) The purpose of the redo log files is to record all changes made to the data, so that those changes can be reapplied during recovery of the database.

It is always advisable to have two or more redo log groups, multiplexed on separate disks, so that the data can be recovered after a system crash.
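For example, an additional redo log group multiplexed across two disks could be added like this (group number, paths, and size are illustrative):

    ALTER DATABASE ADD LOGFILE GROUP 4
      ('/u01/oradata/ORCL/redo04a.log',
       '/u02/oradata/ORCL/redo04b.log') SIZE 200M;   -- two members on separate disks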

    11. Which default Database roles are created when you create a Database?

Ans) CONNECT, RESOURCE, and DBA are three default roles.

12. What is a Checkpoint?

Ans) A checkpoint is the event in which Database Writer (DBWn) writes all modified (dirty) buffers in the buffer cache to the datafiles, and the checkpoint process (CKPT) records the checkpoint position in the control file and datafile headers. Checkpoints limit the amount of redo that must be applied during instance recovery.
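A checkpoint can also be forced manually, which is occasionally done before maintenance operations:

    ALTER SYSTEM CHECKPOINT;   -- forces DBWn to flush dirty buffers and CKPT to update headers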

    13. Which Process reads data from Datafiles?

Ans) The server process reads data blocks from the datafiles into the database buffer cache.

    14. Which Process writes data in Datafiles?

Ans) The Database Writer (DBWn) background process writes modified buffers from the database buffer cache to the datafiles.

15. Can you make a datafile autoextensible? If yes, how?

Ans) Yes. Specify the AUTOEXTEND ON clause either when the datafile is added or later with ALTER DATABASE, as shown below.
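A sketch of both options (tablespace name, datafile paths, and sizes are illustrative only):

    -- when adding the datafile
    ALTER TABLESPACE app_data
      ADD DATAFILE '/u01/oradata/ORCL/app_data02.dbf' SIZE 100M
      AUTOEXTEND ON NEXT 10M MAXSIZE 2G;

    -- for an existing datafile
    ALTER DATABASE DATAFILE '/u01/oradata/ORCL/app_data01.dbf'
      AUTOEXTEND ON NEXT 10M MAXSIZE 2G;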


    16. What is a Shared Pool?

Ans) The Shared Pool comprises the Library Cache and the Dictionary Cache. The Library Cache stores parsed SQL statements and PL/SQL program units in memory so they can be shared between sessions.

Dictionary Cache: Oracle continuously requests and updates information in the data dictionary of the database; to maximize the performance of these internal operations, the Dictionary Cache holds that dictionary information in memory.
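How the shared pool memory is currently being used can be inspected via V$SGASTAT, for example:

    SELECT pool, name, bytes
      FROM v$sgastat
     WHERE pool = 'shared pool'
     ORDER BY bytes DESC;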

    17. What is kept in the Database Buffer Cache?

Ans) The Database Buffer Cache is one of the most important components of the System Global Area (SGA). It is the place where data blocks are copied from the datafiles so that SQL operations can be performed on them. The buffer cache is a shared memory structure that is concurrently accessed by all server processes.
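On 10g and later, the current size of the buffer cache (alongside the other SGA components) is visible in V$SGAINFO:

    SELECT name, bytes
      FROM v$sgainfo
     ORDER BY bytes DESC;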

    18. How many maximum Redo Logfiles one can have in a Database?

Ans) It depends on what was specified for MAXLOGFILES during database creation (manually) or for "Maximum no. of redo log files" in DBCA. You can check the current maximum with:
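One common way to read the MAXLOGFILES value back out of the control file:

    SELECT records_total AS max_logfiles
      FROM v$controlfile_record_section
     WHERE type = 'REDO LOG';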

    19. What is difference between PFile and SPFile?

Ans) A server parameter file (SPFILE) is a binary file maintained by the server, and changes made with ALTER SYSTEM persist across restarts. An initialization parameter file (PFILE) is a plain-text file edited manually, and changes made at run time are not written back to it.
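The two formats can be converted into each other (run as SYSDBA; the path is illustrative):

    CREATE PFILE = '/tmp/initORCL.ora' FROM SPFILE;   -- dump the binary SPFILE as text
    CREATE SPFILE FROM PFILE = '/tmp/initORCL.ora';   -- rebuild the SPFILE from the text copy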

20. What is the PGA_AGGREGATE_TARGET parameter?

Ans) PGA_AGGREGATE_TARGET specifies the target aggregate amount of PGA memory available to all server processes attached to the instance. Setting it to a non-zero value enables automatic PGA memory management, in which Oracle sizes SQL work areas (sorts, hash joins, and so on) to keep total PGA usage near the target.
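The parameter is dynamic, so it can be checked and adjusted online (1G is only an example value):

    SHOW PARAMETER pga_aggregate_target
    ALTER SYSTEM SET pga_aggregate_target = 1G;   -- example target; tune to your workload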

    21. Large Pool is used for what?

    Ans)Large Pool is an optional memory structure used for the following purposes :-

    (1) Session information for shared server


    (2) I/O server processes

    (3) Parallel queries

(4) Backup and recovery operations through RMAN.

The Large Pool is important because, without it, this memory would be allocated from the Shared Pool; using the Large Pool therefore reduces the overhead on the Shared Pool.
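The large pool is sized with LARGE_POOL_SIZE, which can be changed online (128M is only an example; with automatic memory management Oracle may tune it for you):

    SHOW PARAMETER large_pool_size
    ALTER SYSTEM SET large_pool_size = 128M;   -- example value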

    22. What is PCT Increase setting?

Ans) PCTINCREASE takes effect once two extents have already been allocated; it applies to the third and subsequent extents. Each such extent is sized as the previous extent size plus PCTINCREASE percent of that size.
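A small sketch for a dictionary-managed tablespace (in locally managed tablespaces the STORAGE clause is largely ignored); the table name is illustrative:

    CREATE TABLE demo_pct (id NUMBER)
      STORAGE (INITIAL 100K NEXT 100K PCTINCREASE 50);
    -- extents allocated (approximately): 100K, 100K, 150K, 225K, ...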

    23. What is PCTFREE and PCTUSED Setting?

Ans) PCTFREE is the percentage of space in each data block reserved for future updates to rows already stored in the block; once a block has only PCTFREE space left, no new rows are inserted into it. PCTUSED is the threshold to which the used space in a block must fall (through deletes or updates) before Oracle considers the block available for inserts again.

e.g.: PCTFREE 20, PCTUSED 40
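The settings are given in the CREATE TABLE (or ALTER TABLE) statement; the table and column names here are illustrative, and PCTUSED is ignored in ASSM tablespaces:

    CREATE TABLE emp_history (
      emp_id  NUMBER,
      details VARCHAR2(4000)
    ) PCTFREE 20 PCTUSED 40;   -- keep 20% free for updates; block reusable below 40% used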

    24. What is Row Migration and Row Chaining?

Ans) Row chaining: the row is too large to fit into a single (even empty) data block, so Oracle stores the data for the row in a chain of two or more data blocks. Chaining occurs when the row is inserted or updated. It typically happens with very large rows, such as rows containing LOB columns, and in such cases it is unavoidable.

Row migration: an UPDATE statement increases the amount of data in a row so that the row no longer fits into its data block. Oracle tries to find another block with enough free space to hold the entire row; if such a block is available, Oracle moves the entire row to the new block and keeps the original row piece of the migrated row as a pointer to the new block containing the actual row. The ROWID of the migrated row does not change, and indexes are not updated: they still point to the original row location.
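Chained and migrated rows can be counted after gathering statistics with ANALYZE (the table name is illustrative; CHAIN_CNT counts both chained and migrated rows):

    ANALYZE TABLE emp_history COMPUTE STATISTICS;
    SELECT table_name, chain_cnt, num_rows
      FROM user_tables
     WHERE chain_cnt > 0;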

    25. What is 01555 - Snapshot Too Old error and how do you avoid it?

Ans) The ORA-01555 (snapshot too old) error typically appears when a long-running query (for example, one that accidentally runs into a Cartesian product or a loop) needs undo data that has already been overwritten, so Oracle can no longer build a read-consistent image of the data.


Another case is updating a large number of rows at a time without committing. In this case you can commit periodically (for example, every 500 records), or ask the DBA to enlarge the undo (rollback) tablespace used by the segment.

    -----------

The snapshot-too-old exception is also thrown when you perform a very large DML operation without committing; this can often be resolved by increasing the undo retention period. Contact your DBA to check the UNDO_RETENTION setting.
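A sketch of the usual first check and fix (3600 seconds is only an example value; the undo tablespace must also be large enough to honour it):

    SHOW PARAMETER undo_retention
    ALTER SYSTEM SET undo_retention = 3600;   -- example: keep undo for one hour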