Interview Questions


  • SPAM step names

    ABAP_GENERATION ADDON_CONFLICTS_? ADD_SYNCMARKS ADD_TO_BUFFER

    XPRA_EXECUTION UNLOCK_EU CHECK_REQUIREMENTS CLEAR_OLD_REPORTS

    CREATE_VERS_BEFORE AUTO_MOD_SPAU SCHEDULE_RDDIMPDP PROLOGUE

    AUTO_MOD_SPDD TEST_IMPORT RUN_SPDD_? PREPARE_XPRA

    CREATE_COMPONENTS SPDD_SPAU_CHECK RUN_SPAU_? OPTIMIZE_OBJLISTS

    OBJECTS_LOCKED_? MODIFY_BUFFER LOCK_EU INACTIVE_IMPORT

    IMPORT_PROPER IMPORT_OBJECT_LIST DDIC_ACTIVATION DDIC_IMPORT

    DISASSEMBLE EPILOGUE

    Upgrade:

    SAPEHPI:

    SUM:

    1. Check the free space; at least 120 GB is needed.

    2. Check the DDIC user password.

    3. Upgrade the DB2 client with the script db6_update_client.sh.

    4. Check the permissions of the file sapuxuserchk:

    chown root:sapsys sapuxuserchk
    chmod u+s,o-rwx sapuxuserchk

    5. Increase the secondary log file parameter and the archive log file size on the database:

    db2 get db cfg for UH5 | grep -i log
    db2 update db cfg for UH5 using LOGSECOND 190
    db2 update db cfg for UH5 using LOGFILSIZ 65520

    6. If the finance module is active, set up the RFC destination in TCode FINB_TR_DEST.

    7. Apply the SAP Notes that are requested in the extraction phase.

    8. If you are upgrading an HCM system, back up the wage type table T512W with report RPU12W0S.

    9. Update SPAM. This is not strictly required, as SUM takes care of it as well.

    10. Provide the SPDD and SPAU transport requests when asked in the Configuration phase.

    Errors encountered during the upgrade:

    1. Outstanding DB conversions need to be cleared with TCode SE14.

    2. It asks you to clear the BI setup tables using the corresponding TCode.

    3. In the postprocessing phase, it asks you to clear SPAU.

    4. Start and stop the shadow system manually.

    To stop the shadow instance manually, proceed as follows:

    cd <update directory>/abap/bin
    ./SAPup stopshd

    To start the shadow instance manually, proceed as follows:

    cd <update directory>/abap/bin
    ./SAPup startshd

    5. Parallel processes during runtime:

    cd <update directory>/abap/bin
    ./SAPup set procpar gt=scroll

    6. Changing the DDIC password:

    a. Stop the Software Update Manager by choosing Update -> Stop Update from the main
    menu of the GUI.

    b. Choose ABAP -> Start With Options from the menu.

    c. To make the new password of user DDIC known to the Software Update Manager, enter
    the commands:

    set DDICpwd (original system)
    set shdDDICpwd (shadow system)

    d. Restart the Software Update Manager.

  • In the Checks phase, it asks you to create a tablespace using the file DB6TBSXT.CLP.

    ACT_UPG: the DDIC changes (SPDD adjustment) come under this phase.

  • TDMS:

    TCode CNV_MBT_TDMS

  • TADM70:

  • R3LDCTL reads the ABAP Dictionary to extract the database-independent table and
    index structures, and writes them into *.STR files.

    R3LDCTL creates the DDL<DBS>.TPL files for every SAP-supported database.

    Since 6.40, additional DDL<DBS>_LRG.TPL files are generated to support system
    copies of large databases more easily.

    R3SZCHK generates the target database size file DBSIZE.XML for SAPINST.

    (With the older R3SETUP tool, a DBSIZE.TPL file was generated instead.)

    Case 1: Table exists in the database but not in the ABAP Dictionary - the table will not be exported.

    Case 2: Table exists in the ABAP Dictionary but not in the database - export errors are to be expected.

  • The report SMIGR_CREATE_DDL generates DDL statements for non-standard database objects and
    writes them into *.SQL files.

    Make sure that the *.SQL files are accessible and located in the directory <export directory>/DB/<DBTYPE>
    (where <DBTYPE> is ORA, DB2, etc.)

  • SSO:

    1. Check Profile Parameters

    login/create_sso2_ticket = 2 and login/accept_sso2_ticket = 1

    2. Requires a system user SAPJSF with the roles SAP_BC_JSF_COMMUNICATION and

    SAP_BC_USR_CUA_CLIENT_RFC

    3. Export / import the certificates vice versa: in ABAP from TCode STRUSTSSO2, and for Java in Visual

    Admin -> TicketKeystore, or in the latest portal releases via NWA

    4. Create a JCo RFC provider in the J2EE Engine

    5. Add the SAP system to the security providers list in Visual Admin (Security Provider service):

    com.sap.security.core.server.jaas.EvaluateTicketLoginModule

  • Central User Administration (CUA):

    1. TCode SCUA

    2. SCUM - to maintain the local and global distribution settings for user data

    3. SCUG - existing user master records are migrated to the central system with this TCode

    4. To delete a child system, run report RSDELCUA

  • System Copy:

    Pre-processing:

    TCode      Table Name  Comment

    SMLG       RZLLICLASS  Logon Group List
               RZLLITAB    Assignments of Logon/Server Groups to Instances
    WE21       EDIPORT     Summary Table for all Port Types for IDoc Processing
               EDIPOA      Table for ALE Port Definitions
    WE20       EDPP1       General partner profile
               EDP12       Partner profile outbound, additional data NAST
               EDP13       Partner profile outbound
               EDP21       Partner profile inbound
               EDBAS       Basic types
    BD54       TBDLS       Logical system
               TBDLST      Text for logical system
    SMQS       QSENDDEST   Table of Registered tRFC/qRFC Destinations
    SMQR       QIWKTAB     Table of qRFC Inbound Queues to Be Activated Automatically
    SLICENSE   SAPLIKEY    Storage of SAP License Keys
    AL11       USER_DIR    Table used to store user-defined directories shown in AL11
               RSBASIDOC   Assignment of source systems to BW systems
               E070L       CTS: Index for Assigning Numbers to Requests/Tasks

    BD64       Distribution model:
               Distribution Model -> Switch Processing Mode
               Edit -> Model View -> Transport

    alter database backup controlfile to trace;
    recover database using backup controlfile until cancel;
    alter database open resetlogs;
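
    These statements belong to the backup-controlfile recovery procedure used, for example, during a
    homogeneous system copy. A minimal SQL*Plus sketch of the full sequence; the shutdown/startup
    mount steps are assumptions based on standard Oracle practice, not spelled out in the source:

    sqlplus / as sysdba

    REM dump a CREATE CONTROLFILE script into the trace directory
    alter database backup controlfile to trace;

    REM recovery requires the database to be mounted, not open
    shutdown immediate
    startup mount

    REM apply the available redo logs, then enter CANCEL at the prompt
    recover database using backup controlfile until cancel;

    REM open the database with a fresh redo log sequence
    alter database open resetlogs;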

    RDDIMPDP

    To schedule the job RDDIMPDP properly in all clients of the system, do the following:

    Log on to client 000 with user DDIC.

    Go to transaction SE38 and run report RDDNEWPP.

    A window pops up asking whether you want to schedule the background job with normal or high
    priority. Choose 'High Priority'.

    Repeat the above steps in all clients of the system.

  • DB2 - Password Management

    DB2 UDB for UNIX and Windows additionally maintains the passwords for the connect user and user

    <sapsid>adm in the file \\<SAPGLOBALHOST>\sapmnt\<SAPSID>\SYS\global\dscdb6.conf.

    DB2 UDB for UNIX and Windows provides functions to:

    Create password file dscdb6.conf

    This file can be recreated any time manually using the following command:

    dscdb6up create

    Retrieve passwords

    Java log files:

    std_bootstrap.out

    jvm_bootstrap.out

    log_bootstrap_ID5012253.0.log

    dev_bootstrap

    std_bootstrap_ID501225300.out

    jvm_dispatcher.out

    jvm_server0.out

    dev_dispatcher

    std_dispatcher.out

    SAP Memory

    Roll area - The roll area is a memory area with a set (configurable) size that belongs to a work process. It

    is located in the heap of the virtual address space of the work process.

    Page area -

    Extended memory - SAP extended memory is the core of the SAP memory management system. Each
    SAP work process has a part reserved in its virtual address space for extended memory (see Virtual
    Address Space in a Work Process). You can set the size of extended memory using the profile
    parameter em/initial_size_MB (Extended Memory Pool Size). Under Windows, further memory is
    assigned dynamically as needed, and you can also set this amount.

    Heap memory -
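
    These memory areas are configured through instance profile parameters. A minimal sketch of the
    relevant profile section; the parameter names are the standard SAP ones, but all values are purely
    illustrative and must be sized per system:

    # SAP memory management - illustrative values only
    ztta/roll_area       = 3000000        # roll area per work process (bytes)
    ztta/roll_extension  = 2000000000     # max extended memory per user context (bytes)
    em/initial_size_MB   = 4096           # extended memory pool size (MB)
    abap/heap_area_dia   = 2000000000     # heap limit per dialog work process (bytes)
    abap/heap_area_total = 4000000000     # heap limit for all work processes (bytes)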

  • Oracle

    database

    An Oracle database is a collection of data, logically treated as a unit. The data is physically stored in one

    or several files. Oracle manages data in logical units called tablespaces. A database object, such as a

    table, is always created in a particular tablespace. A tablespace consists of one or more files.

    instance

    As the database is only a passive part of a database server, some processes and memory structures are

    needed to access the data and manage the database. The combination of Oracle (background) processes

    and memory buffers is called an Oracle instance.

    Every running Oracle database is linked to an Oracle instance. Moreover, every Oracle database needs

    its own instance.

    SGA

    Every time an Oracle instance is started, a shared memory region called the System Global Area (SGA) is

    allocated. The SGA allocated by an Oracle instance can only be accessed by the processes of this

    instance. This means that each instance has its own SGA. The SGA contains copies of data and control

    information for the corresponding Oracle instance. When the instance is stopped, the SGA is

    deallocated.
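
    As a quick check, the size of the SGA and its components on a running instance can be displayed
    from SQL*Plus; a brief illustrative session (standard Oracle commands, not taken from the source):

    sqlplus / as sysdba

    REM summary: total SGA size, fixed size, variable size, database buffers, redo buffers
    show sga

    REM current size of the individual SGA components (buffer cache, shared pool, ...)
    select component, current_size from v$sga_dynamic_components;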

  • After an Oracle instance is started, a special process, the Listener allows the database clients and the

    instance to communicate with each other.

    Note: The listener process is not part of an Oracle instance; it is part of networking processes that work

    with Oracle.

    In SAP installations, dedicated servers are used. When a work process makes a request to connect to the

    database, the listener creates a dedicated server process and establishes an appropriate connection.

    The separate server process created on behalf of each work process (generally, for each user

    process) is called a shadow process.

    To handle database requests from several SAP system users, a work process communicates with its

    corresponding shadow process.

    When a work process has lost its connection to the database system, it automatically reconnects

    after the database server is available again and a database request is to be processed.

    Oracle background processes perform various tasks required for the database to function properly.

  • Databases are stored in data files on disks. To accelerate read and write access to data, it is cached in

    the database buffer cache in the SGA.

    The Oracle database management system holds the executable SQL statements in the shared SQL area

    (also called the shared cursor cache), which is part of the shared pool allocated in SGA. Another part of

    the shared pool called the row cache caches Oracle data dictionary information.

    Caching of Data

    Databases are stored in data files on hard disks. However, data is never processed on the disks

    themselves. No matter whether a database client just needs to read some data or wants to modify it, it

    is first copied by the associated shadow process from the disk to the database buffer cache in the

    system global area (if it was not already there).

    Data is always cached if the data is accessed for the first time after an Oracle instance is started. But as

    all users concurrently connected to the instance share access to the database buffer cache, copies of

    data read from data files into the buffer cache can be reused by any user.

    The smallest logical unit that Oracle uses for copying data between data files and the buffer cache, as

    well as for managing data in the cache, is the data block.

    The size of an Oracle data block can generally be chosen during the creation of a database. In SAP

    installations, however, the block size is always 8 KB.

    For performance reasons, the physical allocation unit size on disks where you store Oracle files

    should also be 8 KB.
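
    The block size of an existing database can be verified with a standard SQL*Plus command, shown
    here for illustration:

    sqlplus / as sysdba

    REM returns 8192 in SAP installations
    show parameter db_block_size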

    Oracle always tries to keep the most recently used data blocks in the buffer cache. Depending on the

    size of the buffer cache, it may sometimes be necessary to overwrite the least recently used data blocks

    in the buffer cache.

    Any changes to Oracle data (inserts, updates, and deletions) are always performed in the buffer cache.

    An Oracle shadow process itself never copies modified data blocks ('dirty blocks') from the buffer cache

    to the disk. This is the task of a special Oracle background process called the database writer (DBW0).

    Control files

    Every Oracle database has a control file, which is a small binary file necessary for the database to start

    and operate successfully. A control file contains entries that specify the physical structure and state of

    the database, such as tablespace information, names and locations of data files and redo log files, or the

    current log sequence number. If the physical structure of the database changes (for example, when

    creating new data files or redo log files), the control file is automatically updated by Oracle.

    Control files can be changed only by Oracle. No database administrator or any other user can edit

    the control file directly.

    After opening the database, the control file must be available for writing. If for some reason the

    control file is not accessible, the database cannot function properly.

  • Oracle control files can be mirrored for security reasons. Several copies can be stored at different

    locations; Oracle updates these at the same time. In SAP installations, the control file is stored in

    three copies, which must be created on three physically separate disks.

    dbs (on UNIX) or database (on Windows)

    The Oracle profile init<DBSID>.ora or spfile<DBSID>.ora holds the Oracle instance configuration
    parameters.

    The profile init<DBSID>.sap holds configuration parameters for the administration tools (BR*Tools).

    sapdata

    Contains the data files of the tablespaces

    origlogA/B, mirrlogA/B

    Online redo log files reside in the origlog and mirrlog directories: Log file numbers 1 and 3 and their

    mirrors in origlogA and mirrlogA, log file numbers 2 and 4 and their mirrors in origlogB and mirrlogB,

    respectively

  • oraarch

    Offline redo log files are written to the oraarch directory; their names are specified with the help of
    Oracle instance configuration parameters, so a name such as arch1_<number>.dbf is just an example.

    saptrace

    Oracle dump files are written to the saptrace directory. The Oracle alert log alert_<DBSID>.log resides
    in the directory /oracle/<DBSID>/saptrace/diag/rdbms/<dbname>/<DBSID>/trace.

    saparch

    Stores the logs written by the SAP tool BRARCHIVE

    sapbackup

    Stores logs written by the SAP tools BRBACKUP, BRRESTORE, and BRRECOVER

    sapreorg

    BRSPACE creates logs for its different functions here

    sapcheck

    BRCONNECT writes the logs of its check functions here

  • listener.ora

    listener.ora configures the listener and, as such, is only used on the database host. It is read when the
    listener is started. The configuration information specified in this file determines Oracle Net settings,
    such as the network protocol to be used, host name, port, and the default tracing information.
    listener.ora must contain all Oracle system IDs and protocol addresses for which the listener should
    accept connection requests.

    tnsnames.ora

    tnsnames.ora contains a list of service names for all databases that can be accessed in the network.

    sqlnet.ora

    sqlnet.ora can contain client side information, such as a client domain to append to unqualified service

    names or net service names, or optional diagnostic parameters used for client tracing and logging.
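
    For illustration, a minimal listener.ora entry and a matching tnsnames.ora alias might look as follows;
    the host name, SID, port, and ORACLE_HOME shown are placeholders, not values from the source:

    # listener.ora (database host)
    LISTENER =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1527)))
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC = (SID_NAME = C11)(ORACLE_HOME = /oracle/C11/112_64)))

    # tnsnames.ora (client side)
    C11.WORLD =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1527))
        (CONNECT_DATA = (SID = C11)))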

  • Oracle Architecture

    A collection of information that has been systematically organized in the form of physical files for easy

    access and analysis.

    A database is a set of files:

    1. Control file

    2. Redo log file

    3. Datafile

    4. Parameter file

    Every running Oracle database is associated with an Oracle instance. When a database is started
    on a database server, Oracle allocates a memory area called the System Global Area (SGA) and
    starts one or more Oracle processes. This combination of the SGA and the Oracle processes is
    called an Oracle instance. The memory and processes of an instance manage the associated
    database's data efficiently and serve the one or multiple users of the database.

    1. PARAMETER FILE (init.ora)

    Oracle uses a parameter file when starting up the database. The pfile is a text file containing the
    parameters and their values for configuring the database and instance. The default location is the
    $ORACLE_HOME/dbs directory.

    The parameter file tells Oracle the following when starting up an instance:

    The name of the database and location of control files.

    The size of the SGA.

    The location of the dump and trace files.

    The parameters to set limits and that affect capacity.

    Some of the important parameters are:

    db_block_size

    db_files

    undo_management

    log_buffer

    max_dump_file_size

    db_block_buffers

    shared_pool_size

    log_checkpoint_interval
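
    Put together, a minimal init<DBSID>.ora sketch using the parameters above might look like this; all
    values are illustrative, not a tuned configuration:

    # init<DBSID>.ora - illustrative sketch
    db_name                 = C11
    control_files           = (/oracle/C11/origlogA/cntrl/cntrlC11.dbf,
                               /oracle/C11/origlogB/cntrl/cntrlC11.dbf,
                               /oracle/C11/sapdata1/cntrl/cntrlC11.dbf)
    db_block_size           = 8192         # always 8 KB in SAP installations
    db_files                = 254          # limit on the number of data files
    undo_management         = AUTO
    log_buffer              = 1048576      # redo log buffer size in bytes
    max_dump_file_size      = 20000        # limit for trace/dump files
    shared_pool_size        = 400M
    log_checkpoint_interval = 0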

    init<DBSID>.sap

    The SAP utilities BRBACKUP, BRARCHIVE, and BRRESTORE must be configured before they can be
    used. To do this, you must set the appropriate parameters in the initialization profile init<DBSID>.sap.
    Before using one of the SAP utilities, find out exactly which parameters you have to configure.
    Changes to parameter values do not take effect until you call the corresponding utility.

    init<DBSID>.sap is located in /usr/sap/<SID>/SYS/exe/run/init<DBSID>.sap

    Important parameters in init<DBSID>.sap

    archive_copy_dir: This parameter identifies the directory used by BRARCHIVE to back up the

    offline redo log files to a local disk

    backup_mode: This parameter is used by BRBACKUP and BRRESTORE to determine the scope

    of the backup/restore activity.

    backup_type: Identifies the default type of the database backup. This parameter is only used

    by BRBACKUP

    tape_size: Storage size in gigabytes (G), megabytes (M) or kilobytes (K) for the tapes that will be

    used for backups and for archiving redo log files.

    remote_host: This parameter specifies the name of the remote host if you want to make a

    backup to a remote disk

    volume_archive: This parameter is used by BRARCHIVE to identify the volume/directory to be

    used for the archive of the offline redo log files

    volume_backup: This parameter is used by BRBACKUP to identify the volume/directory to be

    used for the backup of the database or non-database files

    backup_dev_type: Determines the backup medium that you want to use. It may be a disk, tape,

    etc.
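
    A matching sketch of an init<DBSID>.sap profile using the parameters just described; again, all
    values are illustrative:

    # init<DBSID>.sap - illustrative sketch
    backup_mode      = all                      # scope: whole database
    backup_type      = online                   # default backup type for BRBACKUP
    backup_dev_type  = tape                     # medium: tape (alternatives: disk, util_file, ...)
    tape_size        = 100G                     # capacity of the backup tapes
    archive_copy_dir = /oracle/C11/sapbackup    # BRARCHIVE target directory on local disk
    volume_archive   = (C11A01, C11A02)         # volumes for archiving offline redo logs
    volume_backup    = (C11B01, C11B02)         # volumes for database backups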

    Database Buffer Cache:

    A fairly large memory object that stores the actual data blocks that are retrieved from datafiles by
    system queries and other data manipulation language commands.

    The buffers in the cache are organized in two lists:

    The write list and,

    The least recently used (LRU) list.

    The write list holds dirty buffers - buffers that hold data that has been modified but whose blocks
    have not yet been written back to disk.

    The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the
    write list. Free buffers do not contain any useful data and are available for use. Pinned buffers are
    currently being accessed.

    Redo Log buffer Cache:

    The Redo Log Buffer memory object stores images of all changes made to database blocks. As you
    know, database blocks typically store several table rows of organizational data. This means that if a
    single column value from one row in a block is changed, the change image is stored. Changes include
    insert, update, delete, create, alter, and drop operations.

    Data dictionary Cache:

    The Data Dictionary Cache is a memory structure that caches data dictionary information that has been
    recently used. This includes user account information, datafile names, table descriptions, user
    privileges, and other information. The database server manages the size of the Data Dictionary Cache
    internally; the size depends on the size of the Shared Pool in which the Data Dictionary Cache
    resides. If the size is too small, the data dictionary tables that reside on disk must be queried often
    for information, and this slows down performance.
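
    Whether the dictionary cache is sized adequately can be estimated from its hit ratio; an illustrative
    query against the standard V$ROWCACHE view:

    sqlplus / as sysdba

    REM dictionary (row) cache hit ratio; values close to 100 percent are desirable
    select round(100 * (1 - sum(getmisses) / sum(gets)), 2) as hit_ratio_pct
      from v$rowcache;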

    Different Processes of Oracle:

    DBWn:

    The Database Writer (DBWn) writes modified ('dirty') blocks from the database buffer cache to the
    datafiles when certain events occur.

  • LGWR:

    The Log Writer (LGWR) writes contents from the Redo Log Buffer to the Redo Log File that is in
    use. These are sequential writes, since the Redo Log Files record database modifications based on the
    actual time at which the modification takes place. LGWR actually writes before DBWn does, and only
    confirms that a COMMIT operation has succeeded when the Redo Log Buffer contents have been
    successfully written to disk. LGWR can also call DBWn to write contents of the Database Buffer Cache
    to disk.

  • SMON:

    The System Monitor (SMON) is responsible for instance recovery, applying entries in the online redo
    log files to the datafiles.

    If an Oracle Instance fails, all information in memory not written to disk is lost. SMON is

    responsible for recovering the instance when the database is started up again. It does the

    following:

    Rolls forward to recover data that was recorded in a Redo Log File, but that had not yet been

    recorded to a datafile by DBWn. SMON reads the Redo Log Files and applies the changes to the

    data blocks. This recovers all transactions that were committed because these were written to

    the Redo Log Files prior to system failure

    Opens the database to allow system users to logon.

    Rolls back uncommitted transactions.

    SMON also does limited space management. It combines (coalesces) adjacent areas of free

    space in the database's datafiles for tablespaces that are dictionary managed.

    It also deallocates temporary segments to create free space in the datafiles.

  • PMON:

    The Process Monitor (PMON) is a cleanup process: it cleans up after failed processes, such as the
    dropping of a user connection due to a network failure or the abnormal end of a user application
    program.

    CKPT:

    The Checkpoint (CKPT) process writes information to the database control files that identifies the point
    in time, with regard to the Redo Log Files, at which instance recovery is to begin should it be
    necessary. This is done at a minimum once every three seconds. Think of the checkpoint record as a
    starting point for recovery: DBWn will have completed writing all buffers from the Database Buffer
    Cache to disk prior to the checkpoint, so those records will not require recovery. The checkpoint does
    the following:

    Ensures modified data blocks in memory are regularly written to disk. CKPT can call the DBWn
    process to ensure this, and does so when writing a checkpoint record.

    Reduces Instance Recovery time by minimizing the amount of work needed for recovery since

    only Redo Log File entries processed since the last checkpoint require recovery.

    Causes all committed data to be written to datafiles during database shutdown

    If a Redo Log File fills up and a switch is made to a new Redo Log File (this is covered in more

    detail in a later module), the CKPT process also writes checkpoint information into the headers

    of the datafiles.

    Checkpoint information written to control files includes the system change number (the SCN is a

    number stored in the control file and in the headers of the database files that are used to ensure

    that all files in the system are synchronized), location of which Redo Log File is to be used for

    recovery, and other information.

    CKPT does not write data blocks or redo blocks to disk; it calls DBWn and LGWR as necessary.

    Logical Structure:

    It is helpful to understand how an Oracle database is organized in terms of a logical structure that is

    used to organize physical objects.

  • Tablespace:

    A tablespace is a logical storage facility (a logical container) for storing objects such as tables, indexes,
    sequences, clusters, and other database objects. Each tablespace has at least one physical datafile that
    actually stores the tablespace at the operating system level. A large tablespace may have more than
    one datafile allocated for storing its objects, and a tablespace belongs to exactly one database.
    Tablespaces can be brought online and taken offline for purposes of backup and management, except
    for the SYSTEM tablespace, which must always be online. Tablespaces can be in either read-only or
    read-write status.

    Datafile:

    Tablespaces are stored in datafiles, which are physical disk objects. A datafile can only store objects for
    a single tablespace, but a tablespace may have more than one datafile - this happens, for example,
    when a disk drive fills up and the tablespace needs to be expanded onto a new disk drive. The DBA
    can change the size of a datafile to make it smaller or larger. The file can also grow in size dynamically
    as the tablespace grows.

    Segment:

    When logical storage objects are created within a tablespace, a segment is allocated to the object.
    Obviously, a tablespace typically has many segments. A segment cannot span tablespaces, but it can
    span datafiles that belong to a single tablespace.

    Extent:

  • Each object has one segment, which is a physical collection of extents. Extents are simply collections of
    contiguous disk storage blocks. A logical storage object such as a table or index always consists of at
    least one extent; ideally, the initial extent allocated to an object will be large enough to store all data
    that is initially loaded. As a table or index grows, additional extents are added to the segment. A DBA
    can add extents to segments in order to tune the performance of the system. An extent cannot span a
    datafile.
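
    The segments and extents of a concrete object can be inspected through the standard dictionary
    views; an illustrative query (the owner and table name are placeholders):

    sqlplus / as sysdba

    REM owner and table name below are purely illustrative
    select segment_name, extent_id, blocks, bytes
      from dba_extents
     where owner = 'SAPSR3' and segment_name = 'T512W'
     order by extent_id;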

    Block:

    The Oracle Server manages data at its smallest unit, which is termed a block or data block. Data are
    actually stored in blocks.