liveCache Administration and Monitoring


Transcript of liveCache Administration and Monitoring

Page 1: liveCache Administration and Monitoring

EUROPEAN SAP TECHNICAL EDUCATION CONFERENCE 2002

Sept. 30 – Oct. 2, 02 Bremen, Germany

WORKSHOP

Werner Thesing

SAP liveCache Administration & Monitoring

Page 2: liveCache Administration and Monitoring

Learning Objectives

As a result of this workshop, you will be able to:

- Integrate your SAP liveCache into the APO system
- Start, stop and initialize your SAP liveCache
- Configure your SAP liveCache
- Take backups and restore it
- React to critical situations
- Monitor the system regarding
  - Consistent views and garbage collection
  - Memory areas
  - Task structure
  - Performance

Page 3: liveCache Administration and Monitoring

About the workshop

- The workshop contains 12 units
- Each unit consists of a lecture (15 min)
- Most units also include exercises (10 min) and solutions (5 min)
- Feel free to ask questions during the exercises
- Breaks every 2 hours for 15 min

Page 4: liveCache Administration and Monitoring

Agenda

(1) liveCache concepts and architecture
(2) liveCache integration into R/3 via transaction LC10
(3) Basic administration (starting / stopping / initializing)
(4) Complete data backup
(5) Data storage
(6) Advanced administration (log backup / incremental data backup / add volume)
(7) Consistent views and garbage collection
(8) Memory areas
(9) Task structure
(10) Recovery
(11) Configuration
(12) Performance analysis
(13) Summary

In this workshop you will learn the main tasks of a liveCache administrator. In addition, the architecture and the concepts of the liveCache are introduced, which gives an understanding of liveCache behavior and ideas on how to analyze and overcome performance bottlenecks.

This workshop refers to liveCache release 7.4.

Page 5: liveCache Administration and Monitoring

liveCache Concepts and Architecture

In this unit the concepts and the architecture of the liveCache are introduced.

Page 6: liveCache Administration and Monitoring

Disk-based approaches and data storage based on relational schemas are not suitable for high performance processing and thus for Advanced Planning and Optimization (APO)

Why the liveCache has been developed (1)

For the development of the Advanced Planning and Optimization (APO) component, a database system was needed which allows fast access to data organized in a complex network.

Using conventional relational database management systems as data sources for APO showed poor performance, since disk I/O and the inappropriate data representation in the relational schema limited the performance.

Page 7: liveCache Administration and Monitoring

Traditional buffering, data access/transfer times

Good performance only if all data fit into application buffer

Bring data to the application

Comprehensive computation triggers huge data traffic and disk I/O

Buffered data still relational

[Figure: presentation client, application server (application + application buffer) and database server (database buffer + database); typical access times: application buffer ~0.1 ms, database buffer ~1 ms, disk ~10 ms; data is transferred in 8 KB pages]

Why the liveCache has been developed (2)

Reading data from an application buffer which is in the same address space as the application takes about 0.1 ms. Reading data from a database takes about 1 ms if the corresponding record is already in the database buffer, and even 10 ms if the record first has to be read from a hard disk.

Working with an application whose buffer is too small to accommodate all required data causes heavy data traffic between application and database server.

An additional problem of traditional buffering is that, after reading data into the application buffer, the data is still organized in a relational schema, which is not appropriate for describing complex networks.

To achieve good performance for applications which require access to a large amount of data (e.g. APO), it is necessary to bring the application logic and the application data together in one address space. One possible solution could be to shift the application logic from the application server to the database server via stored procedures. However, this impairs the scalability of R/3. On the other hand, one could shift all required data to the application server. But this requires that each server is equipped with very large main memory. Furthermore, the synchronization of the data changed on each server with the data stored in the database server is rather complicated.

Page 8: liveCache Administration and Monitoring

Why the liveCache has been developed (3)

Minor performance impact on transactional processing

Concurrency and transactional behavior supported

Bring application logic and data together

Avoid huge data traffic and disk I/O on comprehensive computation

Buffered data structures optimized for advanced business applications

[Figure: the classic three-tier landscape (presentation client, application server with application buffer, database server with database buffer) is extended by a dedicated planning server, a dedicated hardware/software system hosting the liveCache for the Advanced Planner & Optimizer; liveCache and database are kept in step by message-based semantic synchronization]

To overcome these performance problems the liveCache was introduced, a dedicated server tier for the main-memory-based temporary storage of volatile shared data.

Page 9: liveCache Administration and Monitoring

What is the liveCache (1)

liveCache is an instance type of the relational DBMS SAP DB which has been extended with properties of an ODBMS

liveCache is an object management system for concurrent C++ programs which run in a single address space

liveCache provides an API to create, read, store and delete OMS objects

liveCache provides transaction management for objects (commit, rollback)

liveCache ensures persistence of OMS objects, including recovery

liveCache is a program for the high-performance management of objects used by APO application programs (COM routines). These objects, called OMS objects, contain application data whose meaning is unknown to the liveCache. Ideally all objects are located in main memory, in the global data cache of the liveCache, but they may be swapped out to disk in case of memory shortage.

COM routines run as stored procedures in the address space of the liveCache and are called from APO ABAP programs which run on the APO application servers. Because COM routines run in the address space of the liveCache, they have direct access to OMS objects, and navigation over networks of OMS objects is very fast. The typical access time is less than 10 microseconds per object.

liveCache provides classes and class methods to the COM routines to administer their objects. Technically, COM routines inherit class methods from the liveCache base classes to create, read, store and delete OMS objects.

liveCache relieves the application programs of implementing their own transaction and lock management. The application program can either commit or roll back all changes made on several objects in a business transaction.

liveCache ensures the existence of OMS objects beyond the lifetime of COM routines. That is why liveCache uses the term persistent OMS objects. When the liveCache is stopped or when a checkpoint is requested, all objects are stored on hard disks.

Page 10: liveCache Administration and Monitoring

What is the liveCache (2)

liveCache provides suitable representations of complex data structures, like networks and trees, based on object references

liveCache is used for fast navigation in large and complex networks

liveCache offers consistent views to isolate navigation on data structures from simultaneous changes to these data structures

liveCache provides the complete functionality of an OLTP database, which can be used in COM routines via an SQL interface

The APO application uses a complex object-oriented application model. This model is easier to implement with object-oriented programming than with the relational structures of a relational database. Therefore, liveCache supports object-oriented programming by providing adequate C++ methods and functions.

liveCache provides the application with the concept of consistent views to isolate the data of an application from simultaneous updates by other users (reader isolation).

COM routines are implemented in the liveCache as stored procedures. Therefore a COM routine can easily be called from ABAP using EXEC SQL (Native SQL).

Page 11: liveCache Administration and Monitoring

liveCache objective

The main target of the liveCache is to optimize performance:
- liveCache resides in main memory and therefore avoids disk I/O
- Object orientation enables efficient programming techniques
- C++ applications run in the address space of the liveCache
- Objects are referenced via logical pointers (= OID)

[Figure: the application server (ABAP) reaches the RDBMS on the database server in > 1 ms, while C++ routines on the liveCache server access the liveCache in < 10 µs]

In a standard SAP system, typical database request times are above 1 ms. For data-intensive applications, a new technology is required in order to achieve better response times. liveCache has been developed to reduce typical request times to below 10 µs. Key factors in achieving these response times are:
- Accesses to liveCache data usually do not involve any disk I/O.
- The processes accessing the data are optimized C++ routines that run in the process context of the liveCache on the liveCache server.
- Object orientation enables the use of efficient programming techniques. In addition, compared to a relational database, where many related tables may have to be accessed to retrieve all requested information, one object contains all the relevant information, so the need to access numerous objects or tables is eliminated. In other words, the typical liveCache data structure is NOT a relational data table.
- Objects are referenced via logical pointers (OIDs); in contrast to referencing records via keys (as in standard SQL), no search in an index tree is required.

APO is the first product to use liveCache technology.

Page 12: liveCache Administration and Monitoring

liveCache architecture (1)

[Figure: the application server sends SQL packets through the liveCache interface of the R/3 kernel (DBDS / native SQL) to the liveCache, where COM objects (DLL) process them and access the devices]

ABAP programs and the APO optimizers use native SQL to communicate with the liveCache through the standard SAP DB interface. liveCache has an SQL interface that is used to communicate with the SAP instances. With native SQL, ABAP programs call stored procedures in the liveCache that point to Component Object Model (COM) routines written in C++. An SQL class provides SQL methods to access the SQL data from within the COM routines.

The COM routines are part of a dynamic link library that runs in the process context of the liveCache instance. In the Windows NT implementation of the liveCache, the COM routines and their interface are registered in the Windows NT registry. For the UNIX implementation, a registry file is provided by the liveCache. A persistent C++ class provides the COM routines with access to the corresponding Object Management System (OMS) data that is stored in the liveCache.

The COM routines for APO are delivered as DLLs (SAPXXX.DLL and SAPXXX.LST) on NT or as shared libraries (SAPXXX.ISO and SAPXXX.LST) on UNIX. The application-specific knowledge is built into these COM routines based on the concept of object orientation.

Page 13: liveCache Administration and Monitoring

liveCache architecture (2)

[Figure: within the liveCache, a framework for application embedding hosts the COM objects (DLL) with their OMS and SQL classes; the command analyzer, the OMS basis (page chains) and the SQL basis (B* trees) rest on a common DBMS basis, which accesses shared SQL/object data devices and log devices; the application server connects via the liveCache interface of the R/3 kernel (DBDS / native SQL)]

liveCache is a hybrid of a relational and an object-oriented database.

The relational part of the liveCache is available as the open-source database SAP DB (see www.sapdb.org).

The SQL part as well as the OMS part of the liveCache are based on the same DBMS basis functionality, which supplies services such as transaction management, logging, device handling and caching mechanisms.

Object and SQL data are stored on common devices.

All liveCache data is stored in the caches as well as on disk in 8 KB blocks called pages.

liveCache stores the OMS objects in page chains, the pages in a chain being linked by pointers. SQL table data is stored in B* trees. SQL and OMS data reside together in the data cache and the data devices of the liveCache.

Page 14: liveCache Administration and Monitoring

liveCache administration tools

The liveCache can be administered by

- Transaction LC10 in the SAPGUI
- Database Manager CLI (DBMCLI), a command line interface
- Database Manager GUI (DBMGUI), a graphical user interface for Windows NT/2000 only
- Web Database Manager (WEB DBM)

Similar to the standard SAP RDBMS, the liveCache can be administered from within the SAP system. The SAP transaction LC10 makes it possible to monitor, configure and administer the liveCache.

LC10 uses the Database Manager CLI (DBMCLI) to administer the liveCache. All of these administration functions are therefore also available without LC10 and can be performed with the "native" database administration tool DBMCLI.

In addition to the DBMCLI, the administration tool DBMGUI is available, which is a graphical user interface to the liveCache management tool DBMCLI.

While the DBMGUI works only on Windows NT/2000, running the WEB DBM requires only an internet browser and the DBM web server, which can be installed anywhere in the network.

DBMCLI, DBMGUI and WEB DBM should not be used for starting or stopping the liveCache, even though LC10 itself calls DBMCLI for starting or stopping the liveCache. They should only be used for changing liveCache parameters, defining backup media and for liveCache monitoring. The reason is that LC10, in addition to starting, stopping and initializing, also runs application-specific reports. Moreover, it registers the COM routines each time the liveCache is started.
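
As an illustration of the kind of call LC10 issues under the hood, the liveCache state can be queried directly with DBMCLI. This is a sketch only: the liveCache name LCA and the default DBM operator control/control are the examples used in this workshop, <lc-server> is a placeholder, and option details may vary slightly between versions:

  dbmcli -d LCA -n <lc-server> -u control,control db_state

The command returns the current operation mode (OFFLINE, ADMIN or ONLINE, see the unit on operation modes), which is a pure monitoring use of DBMCLI in line with the recommendation above.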

Page 15: liveCache Administration and Monitoring

liveCache Integration into R/3 via LC10

In this unit you will learn how to integrate an existing liveCache into the CCMS.

Page 16: liveCache Administration and Monitoring

Transaction LC10

Transaction LC10 introduces liveCache-specific administration functions within R/3 (≥ 4.6D). It allows the administration of multiple liveCaches.

Transaction LC10 identifies liveCaches via a connection name (in the example above it is LCA_LAPTOP), which need not be the physical name of the liveCache as it was installed on the liveCache server.

Integration button
- creates and modifies liveCache connections

Monitoring button
- leads to the main screen of LC10
- liveCache administration (stop, start and initialization of the liveCache)
- changing the liveCache configuration
- watching and analyzing the liveCache performance
- backup and recovery of the liveCache

Console button
- shows the status of the liveCache tasks

Alert Monitor button
- reports error situations (liveCache-specific part of transaction RZ20)

Page 17: liveCache Administration and Monitoring

liveCache integration into LC10 (1)

Choose 'Integration' on the initial screen of LC10 to reach the integration screen.

The integration data is required for the multi-DB connection from an R/3 system to the liveCache via Native SQL. It is stored in the tables DBCON and DBCONUSR on the RDBMS.

The 'Name of the database connection' is used for the Native SQL connection from the R/3 system to the liveCache.

The 'liveCache name' is the name of the liveCache database. It can be different from the name of the database connection.

The server name in 'liveCache server name' is case sensitive. It must be the same as the output of the command 'hostname' on a DOS prompt or UNIX shell.

The default user/password combinations are control/control for the DBM operator and sapr3/sap for the standard liveCache user.

The APO application server has to be stopped and started again after changes to the liveCache connection information. This guarantees that the R/3 system connects to the correct liveCache instance.

Page 18: liveCache Administration and Monitoring

liveCache integration into LC10 (2)

There are two possibilities to authorize a connection to the liveCache:

Decentralized authorization:
- You have to authorize access to the liveCache on each APO application server.
- On each application server you have to run the dbmcli command (via SM49):
  dbmcli -d <liveCache name> -n <lc-server> -us <DBM user>,<DBM password>

Centralized authorization:
- The central authorization data is stored in the APO database in the table DBCONUSR.
- This authorization is recommended and it is the default.
- New with release 4.6D / APO 3.1.

Page 19: liveCache Administration and Monitoring

liveCache integration into LC10 (3)

Execution of application-specific functions:
- To run an ABAP report automatically prior to or after liveCache start, stop and initialization, users can specify the report names in this section.
- The same report cannot be used more than once, unless a new name is used.
- The report names are stored in the table LCINIT within the APO database.
- The report /SAPAPO/DELETE_LC_ANCHORS has to be executed each time after the liveCache has been initialized. This report is responsible for the integrity of the APO and the liveCache data.

Page 20: liveCache Administration and Monitoring

liveCache integration into LC10 (4)

When the alert monitor is activated, a number of performance-critical values (e.g. heap and device usage, cache hit rates) are collected periodically and displayed in the alert monitor, which can be reached by pressing 'Alert monitor' on the initial screen of LC10.

The alert monitor is activated by default if the liveCache was installed with the standard installation tool (LCSETUP).

Page 21: liveCache Administration and Monitoring

Basic Administration

At the end of this unit you will be able to start, stop and initialize a liveCache, and you will know where to find the liveCache diagnosis files.

Page 22: liveCache Administration and Monitoring

liveCache status

Basic status information

This is the main screen of LC10, which can be reached by pressing 'liveCache Monitoring' on the initial screen of LC10. It offers all services and information needed to administer the liveCache.

Before this window appears, the R/3 system sends a request to the liveCache about its status. The liveCache name and liveCache server information are stored in the table DBCON as described on the previous slides. The remaining information displays the output of the status request. If the connection to a liveCache is not available, an error message is displayed.

The left frame of the screen shows a tree which contains all information and services needed to administer the liveCache. The tree branches with the most important information and services are opened by default.

The right frame displays the details which belong to the activated branch of the service tree.

Initially the screen belonging to the 'Properties' icon is displayed.

The 'DBM server version' displays the version of the database manager server, which is responsible for the dbmcli communication with the liveCache.

The 'liveCache version' shows the liveCache kernel build version.

The traffic light at 'liveCache status' illustrates the operation mode of the liveCache.

Page 23: liveCache Administration and Monitoring

liveCache operation modes

Three possible operation modes of the liveCache:

OFFLINE: liveCache kernel processes and caches do not exist

ADMIN: liveCache kernel active (processes started, caches initialized, but not synchronized)

ONLINE: liveCache kernel active and ready to work

There are three liveCache operating modes:
- OFFLINE: No liveCache kernel processes are running, memory areas (caches) are not allocated. No user can use the liveCache.
- ADMIN: The liveCache kernel is active, but the caches are not yet synchronized with the volumes. Users cannot connect to the liveCache. Only the liveCache administration user can connect and perform administrative tasks like restoring the database.
- ONLINE: The liveCache kernel is active, and data and log information is synchronized between caches and volumes. Users can connect to the liveCache.

Page 24: liveCache Administration and Monitoring

Starting, initializing and stopping the liveCache

Starting, stopping and initializing the liveCache

To start, stop or initialize the liveCache choose 'Administration->Operating' in the 'liveCache: Monitoring' screen. There you can find three buttons with the following meanings:
- Start liveCache starts the liveCache into online mode. After the restart, all data committed before the last shutdown (or crash) is available again.
- Initialize liveCache deletes the complete contents of the liveCache. (The next pages describe the initialization process in more detail.)
- Stop liveCache shuts the liveCache down into the offline mode.

Although starting, stopping and initializing the liveCache is also possible with DBMGUI or DBMCLI, it is strongly recommended to use transaction LC10. First, LC10 calls an APO-specific report after starting the liveCache instance; if this report does not run, accesses of work processes to the liveCache may cause errors. Second, when stopping the liveCache instance, LC10 informs all work processes about this, which causes them to automatically reconnect the next time they access the liveCache. If the liveCache instance was stopped using DBMGUI or DBMCLI, a short dump occurs as soon as work processes try to access the liveCache again after a restart.

Page 25: liveCache Administration and Monitoring

Initialize liveCache (1)

Initializing the liveCache always formats the log volumes. If the data volumes do not already exist, they are created and formatted too.

All liveCache data is lost after the initialization and has to be loaded again via the APO system or via a recovery.

The program LCINIT.BAT with the option init is used to initialize the liveCache.

The initialization process is logged in the log file LCINIT.LOG, which is automatically displayed at the end of the initialization process.

Page 26: liveCache Administration and Monitoring

Initialize liveCache (2)

INIT LIVECACHE LCA (init) #

- Start liveCache from OFFLINE into ADMIN mode
- Format the log volumes with initial configuration
- Activate liveCache into ONLINE mode
- Load liveCache system tables
- Create liveCache user SAPR3
- Activate liveCache monitoring
- Registration of COM routines
- Load liveCache procedures

liveCache LCA successfully initialized #

Run report /SAPAPO/DELETE_LC_ANCHORS

This slide shows the steps of a liveCache initialization.

Formatting the log volumes can take some time, depending on their size.

Loading the system tables is needed for liveCache error messages and liveCache monitoring.

The user sapr3 is the owner of the liveCache content. This user is re-created each time the liveCache is initialized.

The registration of COM routines registers all application-specific routines, e.g. sapapo.dll for APO.

In an APO system, the report /SAPAPO/DELETE_LC_ANCHORS must be executed immediately after the liveCache has been initialized.

Page 27: liveCache Administration and Monitoring

liveCache Message Files: lcinit.log

Log files for starting, stopping and initializing the liveCache

Each time the liveCache is started, stopped or initialized, a log file (LCINIT.LOG) is written, which can be viewed in the branch 'Logs->Initialization->Currently' of the service tree.

The log files of previous starts, stops or initializations are displayed in 'Logs->Initialization->History'.

The tab 'Controlfile' of the selection 'Problem Analysis->Logs->Initialization' displays the script LCINIT.BAT which is used to start, stop and initialize the liveCache.

Whenever the liveCache is started, stopped or initialized successfully, you can find the message 'liveCache <connection name> successfully started/stopped/initialized' at the end of the log file.

Page 28: liveCache Administration and Monitoring

liveCache message files: knldiag

liveCache system message file

The knldiag file logs messages about current liveCache activities. The actions logged include liveCache start, user logons, the writing of savepoints, errors and liveCache shutdown. Therefore, this file is one of the most important diagnostic files for analyzing database problems or performance bottlenecks.

The knldiag file is recreated at every liveCache start. The previous one is saved as 'knldiag.old' ('Problem Analysis->Messages->Kernel->Previous'), which means that the content of every knldiag file is definitely lost after two consecutive restarts. To avoid losing the information about fatal errors that happened during two consecutive startup failures, errors are also appended to the file 'knldiag.err'.

To prevent the knldiag file from growing without limit while the database is in the online operation mode, the knldiag file has a fixed length, which can be set as a configuration parameter of the database. The system messages are written cyclically (wrap-around). Therefore, it can happen that the knldiag file does not contain all system messages after a long operation time. This is another reason why all error messages are also written to the file 'knldiag.err'.

Page 29: liveCache Administration and Monitoring

liveCache message files: knldiag.err

liveCache system error message file

In contrast to the knldiag file, knldiag.err is not overwritten cyclically or reinitialized during a restart. It consecutively logs the starting times of the database and any serious errors.

This file is required for analyzing errors if the knldiag files which originally contained the error messages have already been overwritten.

Page 30: liveCache Administration and Monitoring

liveCache directories

Directory structure

sapdb
  programs: bin, pgm
  data: config/<SID>, wrk/<SID>
  <SID>/db: bin, pgm, env, etc, lib, misc, incl, sap

Installing a liveCache creates a number of directories, which in a standard installation are subdirectories of a common root directory called sapdb.

You should not change the names of these directories.

The default system ID (SID) of a liveCache is LCA. The standard connection names are LCA for APO and LDA for ATP (Available To Promise).

The directories <IndepPrograms> and <InstallationPath> contain all files which are required for the database management system, while the <IndepData> directory accommodates all configuration and message files which belong to specific liveCache instances.

For each instance a new subdirectory is created in <IndepData>. The <Rundirectory> defines the name of the subdirectory containing the message files which belong to the instance currently monitored. Usually you should have only one instance on your liveCache server.

In the <IndepPrograms> subdirectory those programs and scripts are stored which do not depend on a particular liveCache release, such as the downward compatible network server program x_server that transfers data between any liveCache instance and a remote client.

In contrast to the <IndepPrograms> directory, all files contained in the directory <InstallationPath> are release dependent.

Page 31: liveCache Administration and Monitoring

liveCache directories: example (1)

- Database configuration files
- Installation log files
- Database working directories: knldiag, knldiag.err, knldump, knltrace, rtedump
- Administration log files
- Saved diagnosis files after abnormal shutdown

sapdb/data/config: database configuration file for each installed database instance.

sapdb/data/config/install: log files for each installation of the SAP DB database management system.

sapdb/data/wrk/LCA: working directory of a liveCache. The working directory contains the message files knldiag, knldiag.old and knldiag.err, the liveCache trace file knltrace and the dump file knldump. The dump file is created whenever the database crashes due to an error. The file contains an image of all structures stored in memory. Together with the knldiag file, this file is essential for error analysis. The size of this file is about 10 percent larger than the size of the data cache. Make sure that there is always sufficient space on the device accommodating the working directory to host the knldump file in case of a crash.

sapdb/data/wrk/LCA/dbahist: detailed log files for each backup and restore of the database.

sapdb/data/wrk/LCA/DIAGHISTORY: All message, dump and trace files except knldiag.err are overwritten after a restart. To avoid the loss of the message files needed for error analysis, at each restart all files from the working directory are saved in a subdirectory of DIAGHISTORY when the liveCache detects that the previous shutdown was due to an error. The subdirectories are labeled with a time stamp.

Page 32: liveCache Administration and Monitoring

liveCache files: example (2)

- Release-independent programs, e.g. DBMCLI
- Programs specific to the installed liveCache release
- System programs, e.g. kernel.exe
- System programs, tools
- List of installed files
- Libraries for the precompiler
- SAP-specific liveCache utilities
- Map files of all system programs
- Scripts for the creation of the system tables
- Documentation files
- Root directory for the SAP DB Web Server

sapdb/LCA/db/pgm: executable programs, in particular the program kernel.exe, which represents the database management system.

sapdb/LCA/db/sap: dynamic link libraries (or shared object files, respectively) which contain the application code that runs via COM in the database. Here you can also find the script LCINIT.BAT used to start, stop and initialize the liveCache.

Page 33: liveCache Administration and Monitoring

Complete Data Backup

At the end of this unit you will be able to perform a complete backup of the liveCache using the database administration tool DBMGUI.

Page 34: liveCache Administration and Monitoring

Complete data backup

[Figure: all occupied pages of the data volumes (Data 1 ... Data n) plus the parameter file from /sapdb/data/config/<SID> are written to a backup with the label DAT_00001; the log volumes (Log 1, Log 2) are shown separately]

A complete backup saves all occupied pages of the data volumes. In addition, the liveCache parameter file is written to the backup.

The complete backup as well as the incremental backups (see later) are always consistent on the level of transactions, since the before images of running transactions are stored in the data area, i.e. they are included in the backup.

Each backup gets a label reflecting the sequence of the backups. This label is used by the administrative tools to distinguish the backups. A map from the logical backup media name to the backup label can be found in the file dbm.mdf in the <Rundirectory> of the liveCache.

For each backup, a log entry is written to the file dbm.knl in the <Rundirectory>.

Backups are performed by the database process. Online backups of the volumes with operating system tools (e.g. dd, copy) are useless.

Page 35: liveCache Administration and Monitoring

Backup (1) : Start the DBMGUI

Calling the DBMGUI

To perform an initial backup of the liveCache we will use the DBMGUI, which can be called by choosing 'Tools->Database Manager (GUI)'. After the selection you will be asked for the user name of the database manager operator and its password, which are usually CONTROL/CONTROL.

Since the backup and restore procedures of a liveCache are identical to those of an OLTP instance of SAP DB, these functions are not directly included in the liveCache-specific transaction LC10 but can be accessed via the general administration tool DBMGUI.

To use the DBMGUI it has to be installed on the local PC.

Page 36: liveCache Administration and Monitoring

Backup (2) : Create a backup media

Define parallel backup media

Define single backup media

Appearance of the DBMGUI:
- On the left side you can see all possible actions and information grouped into six topics.
- On the upper right side the most important database information is displayed: the filling levels of the data and log volumes and the cache hit rates.
- In the central window new information is shown when you click on one of the icons in the left window.

To perform a backup, you first have to configure a backup media, which can be done via the selection 'Configuration -> Backup Media'.

You will then see an overview of all defined backup media, divided into single and parallel media.

At the lower border of the central window there are two icons which can be used to define either a parallel or a single backup media.

Page 37: liveCache Administration and Monitoring

Backup (3a) : Create a backup media

You can choose nearly any name for the backup media. There are only a few names reserved for external backup tools: ADSM, NSR, BACK. If your media name begins with one of these strings, an external backup tool is expected.

Besides the media name you have to specify a location. You have to enter the complete path of the media. If you specify only a file name, this file will be created in the <Rundirectory> of the database.

There are four backup types:
- Complete: full backup of the data.
- Incremental: incremental backup of the data; saves all pages changed since the last complete data backup.
- Log: interactive backup of the full log area (in units of log segments).
- AutoLog: automatic log backup; when a log segment is completed, it is written to the defined media.

For a complete or incremental data backup you can choose one of three device types: file, tape or pipe. For a log backup you can choose file or pipe. It is not possible to save log segments directly to tape.

After you have entered the necessary information, you have to press the 'OK' button (green tick).

The media definition is stored in the file dbm.mmm in the <Rundirectory> of the database.
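
For reference, the same kind of media definition can also be created with DBMCLI (defining backup media is one of the uses of DBMCLI mentioned earlier). This is a sketch only: the media name BackDataComplete and the file path are invented examples, and the exact argument list of medium_put can differ between SAP DB/MaxDB versions:

  dbmcli -d LCA -u control,control
  medium_put BackDataComplete /backup/LCA_complete_data FILE DATA

The first line opens a DBMCLI session for the liveCache LCA; the second defines a file medium for complete data backups. Because both tools write to the same dbm.mmm file, the new medium afterwards also appears in the DBMGUI media overview.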

Page 38: liveCache Administration and Monitoring

Backup (3b) : Create media for external backup tools

The liveCache supports three kinds of external backup tools:

(1) Tivoli Storage Manager (ADSM)

(2) Networker (NSR)

(3) Tools which support the interface BackInt for Oracle (BACK)

To use one of these tools you have to choose the device type pipe for your backup media. Moreover, the name of the media has to start with the letters ADSM, NSR or BACK. The DBMGUI needs these letters to decide which kind of external tool it should use.

On Windows NT the media location must have the form '\\.\<PipeName>', where <PipeName> stands for any name. On a UNIX platform the location can be any file name of a non-existing file.

Page 39: liveCache Administration and Monitoring

Backup (4) : Start complete data backup

‘Next Step’ button to continue

To create a complete data backup you have to select 'Backup->Complete'. In the central window you are offered all media which are available for this operation.

After you have chosen a media you have to confirm your choice by pressing the 'Next Step' button. The following window repeats your choice and asks you to confirm it. When this is done, the backup process starts and you can follow the progress in a progress bar.
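
The same complete data backup can be triggered from DBMCLI. A sketch only, reusing the example medium defined above; the exact command sequence (in particular whether a util_connect is required before backup_start) differs between SAP DB/MaxDB versions:

  dbmcli -d LCA -u control,control
  util_connect
  backup_start BackDataComplete DATA

As described on the previous slides, the backup gets a label such as DAT_00001, and the run is recorded in the file dbm.knl in the <Rundirectory>.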

Page 40: liveCache Administration and Monitoring

Backup (5) : Final backup report

When the backup is finished, a status message will be displayed.

A complete backup is consistent, i.e. it is possible to restart the recovered database without further log information.

To continue working with the DBMGUI press the 'Step back' button.

Page 41: liveCache Administration and Monitoring

Running a backup in the background

Report to perform a backup

Use the report RSLVCBACKUP to perform a liveCache backup in the background.

The report requires the following input parameters:
- the liveCache connection name (usually LCA)
- the backup type
  - BUP_DATA: complete data backup
  - BUP_PAGE: incremental data backup
  - BUP_LOG: log backup
- a backup media name

Before the report can be executed, the backup media must be defined, which can be done with the DBMGUI as shown on the previous slides.

Page 42: liveCache Administration and Monitoring

Data Storage

At the conclusion of this unit you will be able to monitor the data page usage of the liveCache.

Page 43: liveCache Administration and Monitoring

liveCache objects

class MyObj : public OmsKeyedObject<MyObj, unsigned char>
{
public:
  unsigned char UpdCnt;
  MyObj() { UpdCnt = 0; }
};

STDMETHODIMP TestComponent::OID_UPD_OBJ (int KeyNo)
{
  try {
    const MyObj* pMyObjKey = MyObj::omsKeyAccess(*this, KeyNo,
                                                 OMS_DEFAULT_SCHEMA_HANDLE, CONTAINER_NO);
    if (pMyObjKey) {
      MyObj* pUpdMyObj = pMyObjKey->omsForUpdPtr(*this, DO_LOCK);
      pUpdMyObj->UpdCnt++;            // 1st update
      pUpdMyObj->omsStore(*this);
      pUpdMyObj->UpdCnt++;            // 2nd update
    }
    else
      throw DbpError(100, "Object key not found");
  }
  catch (DbpError e) { omsExceptionHandler(e); }
  return S_OK;
}

The liveCache was designed to store instances of C++ classes which are defined within COM routines. At runtime a COM routine generates instances of classes. These instances are called "persistent objects" since they survive their creators (the COM routines). They are stored in the liveCache and on physical disks.

The example above shows the definition of a class (MyObj) used to generate persistent objects in the liveCache and its usage by a COM object (TestComponent).

By inheriting from the template OmsKeyedObject, all instances of the class MyObj gain the ability to be stored persistently in the liveCache. The template OmsKeyedObject belongs to the API supplied by the liveCache; it offers transaction control (commit, rollback), lock mechanisms, access methods and persistent storage to all derived classes.

Page 44: liveCache Administration and Monitoring

liveCache data storage

[Figure: SQL data (B* tree) and object data (page chains) are both stored in data volume pages]

SQL data is stored on SQL pages and is sorted using the B* tree algorithm. Access occurs via a key and requires a search for the record position in the index. In contrast, object data is stored in OMS pages, which are linked to build page chains. Objects are accessed via an OID. The OID already contains the object position, therefore no further search is required.

In the liveCache, all data is stored in data volume pages regardless of the data type (SQL data or object data).

The size of a data page is 8 KB.

Page 45: liveCache Administration and Monitoring

Object access: RDBMS approach

[Figure: in the RDBMS approach, Table 1 and Table 2 are logically linked by relational data (logical reference via the primary key); navigation via the logical key using SQL takes > 1 ms (data cache access); records are located via the primary index and data pages, and data is retrieved from buffers or disk]

Application data in APO is organized as a network of linked data records. Data records contain application data and usually one or more links to other records used for navigation over the data network.

In a traditional relational database management system, data is stored in relational tables. Tables containing related data are logically linked through one or more fields (which may, but do not have to, carry the same names). Mostly the primary key of the tables is used as link criterion.

To retrieve data from a table, an index is used, either the primary index containing the primary key or a secondary index. Normally more than one access to index data is necessary to navigate to the table data in the data pages.

Navigation over a network of data, stored in one or several tables, is performed using several round trips between application program and database:
- The database reads the first record and returns it to the application program.
- The application program gets the primary key of the next record from data stored in the first record.
- The database reads the next record and returns it to the application program.
- Steps 1-3 are repeated until all data is read.

If most of the accessed pages are buffered in the database's RAM, no disk access is required; if this is not the case, the database software has to read information stored on hard disks to fulfill the data request. Physical disk access slows down the performance of the database.

Page 46: liveCache Administration and Monitoring

Object access: liveCache approach

[Figure: objects of class 1 and class 2 are stored in their class containers (page chains, e.g. pages 11, 4, 34 and pages 5, 20); objects reference each other by a physical reference via OID (= page number + offset); navigation via OID takes < 10 µs]

In the liveCache the data (objects) is stored in class containers which consist of doubly linked page chains. Navigation between objects is very fast because objects are referenced using a physical reference, the Object ID (OID), which contains the page number and the page offset.

Direct access to the body of an object, e.g. searching for data within the body, is not possible. The only alternative are keyed objects, where the application may define a key on the object. Features like LIKE, GT, LT etc. are not supported; only a key range iterator is supplied.

The liveCache can also store data in relational tables and access it correspondingly, but this is only used for a minority of the data.
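
As a conceptual illustration of the physical reference described above, the following C++ sketch models an OID as "page number + offset" and shows why dereferencing it needs no index search. The field names and sizes are illustrative assumptions, not the actual liveCache kernel structures (real OIDs may carry additional information, e.g. a version counter):

  #include <cstdint>

  // Conceptual OID: identifies an object frame directly.
  struct Oid {
      std::uint32_t pageNo;   // number of the 8 KB data page in the class container
      std::uint16_t offset;   // byte offset of the object frame within that page
  };

  // Conceptual dereference: once the page identified by oid.pageNo sits in the data
  // cache, the object frame is reached by a simple offset jump, no B* tree lookup.
  inline void* dereference(char* pageInCache, const Oid& oid) {
      return pageInCache + oid.offset;
  }

By contrast, a key access on a relational table (previous slide) first has to descend an index tree before the record position is known.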

Page 47: liveCache Administration and Monitoring

Class container for objects of fixed length

[Figure: a class container consists of several page chains (Chain 1 ... Chain 4) with 'first free' / 'next free' pointers, plus an optional index that maps key -> OID]

The liveCache supplies two kinds of class containers to store objects: one for objects of fixed length and one for objects of variable length.

Class containers for objects of fixed length contain only objects which are instances of one class and thus all have the same length.

Containers consist of chains of doubly linked pages. All pages which contain free space to accommodate further objects are additionally linked in a free chain.

Since new objects are always inserted into the first page of the free chain, the class containers can be partitioned into more than one chain to avoid bottlenecks during massively parallel inserts of objects.

The root page of each chain includes administrative data, e.g. the pointer to the first page in the chain where there is still space for another object.

An index can be defined for a class container (at most one) which maps a key of fixed length onto an OID. Objects of those containers can then also be accessed via a key. The index is organized as one or several B* trees.
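
The insert path described above can be sketched as follows. This is a conceptual model only, based on the description on this slide, not the actual kernel code; all names are invented for illustration:

  // Conceptual free-chain handling for a fixed-length class container.
  struct Page {
      Page* nextFree;    // chain of pages that still have free object frames
      int   freeFrames;  // number of unoccupied object frames on this page
  };

  struct ContainerChain {
      Page* firstFree;   // root-page pointer to the first page with free space
  };

  // A new object always goes to the first page of the free chain;
  // a page that becomes full is unlinked from the free chain.
  void insertObject(ContainerChain& chain) {
      Page* p = chain.firstFree;
      --p->freeFrames;
      if (p->freeFrames == 0)
          chain.firstFree = p->nextFree;
  }

Partitioning a container into several chains simply means maintaining several independent firstFree entry points, so parallel inserts do not all compete for the same page.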

Page 48: liveCache Administration and Monitoring

Data page structure for objects of fixed length

- Page header (80 bytes): page number, checksum, pointer to the first free object frame, number of free/occupied frames, pointers to the next/previous pages
- Occupied and free object frames; each free frame holds a pointer to the next free object frame
- Object frame header (24 bytes): pointer to the next free frame, object lock state, pointer to the before image
- Page trailer (12 bytes): page number, checksum

Each page contains objects instantiated from the same class, i.e. all objects on a page are of the same length. Therefore, they are stored in an array of object frames. With this approach, there is no space fragmentation on a data page.

An object frame consists of a 24-byte header with internal data and the data body that is visible to the COM routines. The header stores, for instance, the lock state of the object, the pointer to the next free object frame and the pointer to the before image of the object.

The length of a data page is 8 KB. Each page has a header of 80 bytes and a trailer of 12 bytes. These parts of the page are not used for object frames but are filled with structural data such as the page number, the numbers of the previous and next pages in the page chain, a checksum to detect I/O errors, the number of occupied/free object frames on the page and the offset of the first free frame.

The length of a fixed-length object is limited by the page size to slightly less than 8 KB.
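
Using the sizes given above, the number of object frames per page can be worked out directly. A small sketch; the 300-byte object size is just an example:

  // An 8 KB page minus the 80-byte header and 12-byte trailer leaves 8100 bytes for frames;
  // each frame needs 24 bytes of frame header plus the object body.
  constexpr int framesPerPage(int objectSize) {
      return (8192 - 80 - 12) / (24 + objectSize);
  }
  // Example: framesPerPage(300) == 8100 / 324 == 25 objects of 300 bytes per data page.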

Page 49: liveCache Administration and Monitoring

Class container for objects with variable length

[Figure: a class container for variable-length objects consists of a primary container and several continuation containers (i-th, j-th, ...), each built from page chains (e.g. pages 22, 100, 79)]

Objects with variable length may be distributed over several pages and have a theoretical maximum length of 2 GB.

To store these objects, they are divided into pieces of less than 8 KB. The pieces are stored in class containers for objects of variable length. Each of those class containers consists of one primary container and six continuation containers. The primary container can accommodate objects smaller than 126 bytes. The i-th continuation container contains object frames which can host objects with a length of about 126 * 2^i bytes (i = 1, ..., 6).

To insert an object, the liveCache chooses a free object frame from the primary container. If the object is smaller than 126 bytes it is put into this free frame; otherwise the object is put into a frame of the continuation container which has the smallest object frames that can still accommodate the object. The OID of the frame where the object is actually stored is put into the chosen frame in the primary container.

The OID which is used by the application to identify an object is always the OID from the primary container. This guarantees that the object can always be accessed by the same OID, even if its length changed and it was moved to another continuation container.

The construction of the page chains and pages of the continuation containers is similar to that of the fixed-length class containers, except that object frames in the continuation containers are only 8 bytes long.

No index can be defined for objects of variable length.

Accesses to objects of variable length are more expensive than accesses to ordinary objects if they are longer than 126 bytes, since each access to those objects requires more than one page access.

Primary containers as well as continuation containers can be partitioned too.
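
The placement rule described above can be expressed as a small helper. This is only a sketch of the sizing rule quoted on this slide (~126 * 2^i bytes, i = 1..6); the real kernel logic is more involved:

  // Returns 0 for the primary container (objects < 126 bytes), otherwise the smallest
  // continuation container i whose frames (~126 * 2^i bytes) can hold the object.
  int targetContainer(int objectLength) {
      if (objectLength < 126) return 0;
      int frameSize = 126;
      for (int i = 1; i <= 6; ++i) {
          frameSize *= 2;                 // ~126 * 2^i
          if (objectLength <= frameSize) return i;
      }
      return -1;                          // larger objects are split into several pieces
  }
  // Example: a 500-byte object goes into continuation container 2 (frames of ~504 bytes).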

Page 50: liveCache Administration and Monitoring

Analysis of the class containers with the LC10

Detailed data about class containers

The LC10 offers detailed data about all class containers stored in the liveCache. The class container monitor can be reached via 'Problem Analysis->Performance->Class container'.

The data in the class container monitor are:
- Class ID: unique internal number of each class container. The ID is assigned in the order of the creation date of the container.
- Class name: name of the class whose instances are stored in the container.
- Object size: size of the stored objects in bytes.
- ContainerNo: external number of a class container. This number is used by the application to identify a class container.
- Container size: number of data pages which are occupied by the container.
- Free container pages: number of container pages which contain free object frames.
- Empty container pages: number of container pages which contain no occupied object frame.
- Key pages: number of pages which are occupied by the index.
- Container use: percentage of usable space on the data pages which is used by occupied object frames.
- Schema: name of the schema a class container is assigned to. Each container must be assigned to a schema. A schema can be considered a name space which can be dropped with all its class containers at once.
- Class GUID: external unique identifier of the class.

Page 51: liveCache Administration and Monitoring

COM routines

View registered COM routines

Objects stored in the class containers can be accessed and manipulated only via COM routines, which are methods of COM objects.

The selection 'Current Status->Configuration->Database Procedures' displays a list of all COM objects and their methods which are currently registered in the database. For each COM routine a detailed parameter description is available when the triangle to the left of the routine name is pressed.

The COM routines can be executed through stored procedure calls. For instance, the COM routine CREATE_SCHEMA from the example above can be executed by the SQL command "call CREATE_SCHEMA ('MyFirstSchema')".

The registration of the COM routines is done automatically when the liveCache is started via LC10.

Page 52: liveCache Administration and Monitoring

Advanced Administration

At the end of this unit you will be able to save the log, perform an incremental data backup, add a data volume and configure the liveCache to save the log automatically.

Page 53: liveCache Administration and Monitoring

Log full situation (1)

liveCache icon

Show the state of database tasks

When performing the last exercise, the liveCache ran into a log-full situation which caused a standstill of the liveCache. All users trying to write an entry into the log were suspended. However, users can still connect to the database, and as long as they only read they can continue to work with the database.

The filling level of the data and log volumes can be observed with transaction LC10 or the DBMGUI. Within LC10 the selection 'Current Status->Memory Areas->Devspaces' displays a detailed overview of the occupation of the data and log devices. However, it is more convenient to watch the bars at the upper side of the DBMGUI. With a double click on the liveCache icon you also get detailed information about the data and log devices in the central screen. If the log filling level reaches critical values, you will also find warning messages in the knldiag file.

Page 54: liveCache Administration and Monitoring

Log full situation (2)

Suspended user task due to log full situation

You can convince yourself that no database task, in particular no user task, is active in the log-full situation by choosing 'Check->Server'. By clicking on the selection 'TASKS' you get an overview of what each database task is currently doing.

In case the log device is full, you find the archive log writer task in the state 'log-full'.

User tasks which have tried to write entries into the archive log can be found in the state 'LogIOwait'.

Tasks which serve other users are not suspended and are in the state 'Command wait', i.e. these users can use the database for read accesses.

Notice that a user task can be suspended even before the log filling level reaches 100%. This is because a small amount of the log is reserved and cannot be used by user tasks. This reserved part is required to guarantee that the liveCache can be shut down even in a log-full situation.

Page 55: liveCache Administration and Monitoring

Solution to log full situation

[Figure: (1) log full — the current log write position has caught up with the last unsaved log entry on Dev 1; (2) a log volume Dev 2 is added, but the log writer is still waiting; (3) a log backup saves the unsaved log entries; (4) only then does log writing continue]

At first glance one could think that a log-full situation can be overcome by simply adding another log volume. However, the liveCache/SAP DB writes the log cyclically onto the volumes as if they were a single device. This means that even if a new log volume is added, log writing has to continue after the last written entry. Therefore, a log volume cannot be used immediately after it was added; the log has to be backed up first (SAVE LOG, interactive log backup).

Note: a prerequisite for a log backup is a data backup.

Page 56: Live Cache- Administration and Monitoring


Interactive log backup (1)

(Figure: data volumes Data 1…Data n and log volumes Log 1 and Log 2; the log backup receives the label LOG_00001.)

z Interactive log backup (SAVE LOG) backs up all occupied log segments from the log volumes which have not been saved before.

z Only version files are supported as media.

z We recommend backing up the log into version files. One version file is created for each log segment. The version files get a number as extension (for example LogBackupFile.001, LogBackupFile.002, ...).

z The labels of the log backups are independent of the labels generated by complete data backups (SAVE DATA) and incremental data backups (SAVE PAGES).

Page 57: Live Cache- Administration and Monitoring


Interactive log backup (2)

Interactive log backup

Define log backup media

z By choosing ‘Backup->Log’ in the DBMGUI you activate the central window which allows you to back up all log segments (interactive log backup – SAVE LOG). After activating ‘Backup->Log’ the central window displays a list of all log backup media defined so far which can be used to save the current log. If this window is empty or all defined media are already in use, you must first define a new log backup medium.

Page 58: Live Cache- Administration and Monitoring


Interactive log backup (3)

z For the definition of the log backup medium you have to enter a name and a location. The input can be confirmed by pressing the green tick. By following the footprint icon you can then continue the log backup. No further input is required. At the end of the backup you get a report about the save.

z You can also define log backup media as well as data backup media by choosing ‘Configuration->Backup Media’.

z The log is logically divided into a number of log segments. The size of these segments is a configuration parameter of the liveCache. After the first of these segments has been saved, all tasks which were suspended due to the log full situation are immediately resumed. That means suspended tasks already continue working during the backup of the log area if more than one log segment exists.
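z As a rough sketch, the same interactive log backup can also be triggered with the DBM command line instead of the DBMGUI. The liveCache name LC1, the medium name LogBak and the file location are invented here, and the exact dbmcli syntax may differ between releases:

  dbmcli -d LC1 -u control,control
    medium_put LogBak /backup/LC1_log FILE LOG
    util_connect control,control
    backup_start LogBak LOG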

Page 59: Live Cache- Administration and Monitoring


Autosave log mode

AutoLog mode selection

AutoLog mode on/off

AutoLog mode status

z To prevent the database from further standstills due to a full log device you can activate the autosave log mode (AutoLog mode). When the AutoLog mode is activated the log is automatically written to files whenever a log segment is full. Each segment is saved in a new backup file. The backup files are named after the corresponding medium plus a suffix of a three-digit number. The numbers are assigned in ascending order according to the order of the saves.

z You can switch on the AutoLog mode by selecting ‘Backup->AutoLog on/off’. There you can select a medium which stores the automatically written log files. Alternatively, you can define a new medium by pressing the ‘Tape’ icon. After you have confirmed your medium selection with the AutoLog icon the AutoLog mode is activated.

z By pressing the tape icon on the lower taskbar of the central window you can also create a new backup medium.

z You can easily find the current status of the AutoLog mode by checking the column AutoLog in the upper right window of the DBMGUI.
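z The AutoLog mode can also be switched on and off from the DBM command line. A minimal sketch, assuming a log backup medium named AutoLog has already been defined as described above (names invented, syntax may vary by release):

  dbmcli -d LC1 -u control,control autolog_on AutoLog
  dbmcli -d LC1 -u control,control autolog_off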

Page 60: Live Cache- Administration and Monitoring


Add data volume (1)

List and add volumes

z After your last exercise the database is nearly full. Therefore, another data volume should be added to prevent the liveCache from a standstill due to a database full situation.

z In the LC10 you can add a data volume by selecting ‘Administration->Configuration->Devspaces’. After pressing the ‘Add Devspace’ button in the upper left corner a new dialog window appears where you have to specify the size and the location of the new volume.

z The new volume is immediately available after you have saved and confirmed the input values.

z Data and log volumes can also be added using the DBMGUI (‘Configuration->Data Volumes’).
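z Depending on the SAP DB release, a data volume can also be added from the DBM command line. The following is only a sketch with invented values (path and size in 8 KB pages); check whether your dbmcli version offers this command before relying on it:

  dbmcli -d LC1 -u control,control db_addvolume DATA /sapdb/LC1/data/DISKD0003 F 262144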

Page 61: Live Cache- Administration and Monitoring


Add data volume (2)

Show and change live Cache configuration parameters

Maximum number of data volumes

z Before a data volume can be added, the parameter MAXDATADEVSPACES of the liveCache configuration has to be checked. If an Nth volume shall be added, this parameter must be greater than or equal to N. The parameter can be changed in the LC10 via the selection ‘Administration->Configuration->Parameters’. If you use the DBMGUI you have to select the menu path ‘Configuration->Parameters’. Note that new values of the database configuration parameters do not become valid until the database has been stopped and started again.

Page 62: Live Cache- Administration and Monitoring


Incremental data backup (1)

(Figure: data volumes Data 1…Data n and log volumes Log 1 and Log 2; the backup labels are numbered consecutively across complete and incremental backups, e.g. DAT_00001, PAG_00002, PAG_00003, DAT_00004, PAG_00005, PAG_00006.)

z In addition to a complete data backup data pages can also be backed up with an incremental data backup.

z In contrast to a complete data backup an incremental data backup stores only those pages which have changed since the last complete data backup.

z Notice that the incremental backup differs from those of previous liveCache releases (<7.4) where the incremental backup contained all pages which changed since the last incremental or complete data backup.

z The label version is increased with each complete and incremental data backup.

z To decide whether you should make an incremental backup rather than a complete backup, check the number of pages which have been changed since the last complete backup. You can find this number on the tab ‘Data area’ in the selection ‘Current Status->Memory Areas->Data Area’. An incremental backup is useful if the number of changed pages is small compared to the number of used pages.

Page 63: Live Cache- Administration and Monitoring


Incremental data backup (2)

Incremental data backup

z An incremental data backup can be performed via the DBMGUI by selecting ‘Backup->Incremental’. As for the complete data backup you have to choose a medium for the backup. Via the icons on the lower task bar of the central window you can also create and delete media or change the properties of existing media. The ‘Next Step’ button guides you through the further backup process. At its end a backup report is shown.
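z Analogous to the log backup sketch earlier in this unit, an incremental data backup can also be started from the DBM command line – again only a sketch with invented names, assuming a backup medium of type PAGES:

  dbmcli -d LC1 -u control,control
    medium_put PagBak /backup/LC1_pages FILE PAGES
    util_connect control,control
    backup_start PagBak PAGES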

Page 64: Live Cache- Administration and Monitoring


Consistent Views and Garbage Collection

SAP liveCache

Administration & Monitoring

z In this unit the concept and the consequences of consistent views are explained.

Page 65: Live Cache- Administration and Monitoring


Consistent views

All read accesses provide the image of an object that was committed at a certain time. This point in time is the same for all accesses within one transaction.

(Figure: example of reading within implicit consistent views – transactions set s to new values and commit at different points in time; each reading transaction sees the value of s that was committed when its own consistent view started with its first read, e.g. one reader keeps seeing s=3 while a later reader sees s=7.)

z liveCache uses consistent views to isolate read accesses to objects from concurrent changes to data by other applications.

z Consistent views see all liveCache data as it was when the consistent view was created. Changes by other simultaneously running applications are invisible to the transaction.

z Databases like Oracle support similar concepts, but mostly only for single statements (consistent read).

z Transactions are always performed as consistent views. The point in time which decides which before image is read is the first access to a persistent object; the view ends with COMMIT or ROLLBACK (implicit consistent view).

z liveCache also knows the concept of named consistent views, called versions. These views do not end with commit or rollback but can span several transactions (see later) and may be active for several hours. Such named consistent views are used by APO for transactional simulations.

z Reading within consistent views makes it possible to provide only committed images without waiting for the end of any other transaction.

Page 66: Live Cache- Administration and Monitoring


Why consistent views

transaction T1: follows the path to element B; knows the path to C

transaction T2: deletes element C; inserts element X; updates the path B → X; commit

transaction T1 continued: knows the old image of B; wants to read element C; element C is deleted, so element D is unreachable

(Figure: example of reading without consistent views – the object network A → B → C → D before T2's change and A → B' → X → D afterwards; T1 still holds the old image of B, which points to the deleted element C.)

z Consistent views are required to navigate through networks.

z The example above demonstrates one problem that can occur when reading without a consistent view.

z Example description:

1. Transaction T1 starts to read an object chain at object A. It wants to follow the path to object D in order to update D.

2. Unfortunately a scheduler interrupts transaction T1 after reading object B.

3. Transaction T2 is started and replaces element C by X.

4. T2 commits.

5. Transaction T1 continues and follows the link to the object C. However, C is deleted and D therefore unreachable.

z If transaction 1 uses a consistent view of the chain, it can still access the deleted element C to follow up to element D.

z Therefore, a consistent view starts with the first object access of a transaction.

Page 67: Live Cache- Administration and Monitoring


Data cache

(Figure: class container in the data cache and history files – an OMS update such as pObj->value = y; pObj->omsStore(*this); followed by Commit updates OID 23.409 in the class container, while the before image x(k1) is moved into the history file of the open transaction; history files of committed transactions are linked in the history file list, and open transactions are tracked in the transaction list.)

z Read consistency requires that all old images of objects which were updated by a transaction T are stored not only until T has committed, but until the last consistent view which was open before T committed has been closed.

z The storage of before images is realized with the help of history files. When an object is updated, the old value of the object (the before image) is copied to a history file which exists for each transaction. Then the new object is copied to the data page and a pointer in the page points to the former object version in the history file.

z History files of open transactions are not only used for the consistent read but they can also be used for the rollback of transactions.

z In case of rollback, the old image is copied from the history file back to its original data page and the history file is destroyed. If the transaction ends with a commit its history file survives the transaction end and is inserted into a history file list.

Page 68: Live Cache- Administration and Monitoring


Consistent reading via history files

(Figure: consistent reading via history files – transactions T2, T3, T5 and T6 successively set s to 15, 3, 7 and 8 and commit; the class container holds the current value s=8, while the history files keep the linked before images s=15, s=3 and s=7, so that a reader such as T4 can still find the value that was committed when its first read took place.)

z Several changes of an object made by different transactions are recorded in the history files. These different versions of an object are linked in the history files.

z Depending on the start times of active consistent views (transactions or named consistent views) it may be necessary to keep several versions of an object.

z The before image of an object can be deleted when no consistent view may need to access the object anymore.

Page 69: Live Cache- Administration and Monitoring


Garbage Collection (1)

Problem

Objects are marked as deleted only but not removed

Changes to objects (before images) are recorded in history files

Solution

Garbage is collected by garbage collector tasks:

- History files that cannot be accessed by consistent views anymore are deleted

- All deleted objects in the OMS pages that will not be accessed by consistent views anymore are released

- Free pages are released when all objects in the data or history page are released

- Garbage collectors are scheduled every 30 seconds and start working when data cache usage is higher than 80%

z Due to the consistent read, no transaction that removes an object can remove the object directly, since a consistent view of another transaction might still access this object or one of its before images. Therefore, objects are only marked as deleted when a transaction deletes them.

z Objects marked as deleted are actually removed by special server tasks called garbage collectors. Scanning the history pages, they remove objects when no consistent view can access the objects anymore.

Page 70: Live Cache- Administration and Monitoring


Garbage Collection (2)

(Figure: the history files of the committed transactions T3, T4 and T6 contain delete entries for the objects s, t and u; the garbage collector scans these files and finally removes the objects that are only marked as deleted in the class container.)

z The garbage collectors periodically scan the history file list for history files of transactions which cannot be accessed anymore by open consistent views. When a garbage collector finds such a file, it looks for all log entries which point to deleted objects and finally removes these objects, i.e. afterwards the corresponding object frames in the class container file are free and can be reused. After all delete entries in the history file have been found and the corresponding objects have been removed, the complete file is dropped.

z The garbage collectors also check whether the class containers contain too many empty pages. If more than 20% of the pages of a file are empty, the GC removes all empty pages. The GC finds the empty pages by following the chain of free pages that belongs to each container.

Page 71: Live Cache- Administration and Monitoring


Garbage Collection (3)

The algorithm of garbage collection changes according to the filling level of the database

Start of garbage collector every 30 seconds
- History files are removed which belong to committed transactions that are older than the oldest transaction which was open when one of the currently active consistent views started.

Database filling over 90%
- To avoid a standstill of the database due to a ‘database full’ situation, object history files are removed even if their before images could be accessed by an active consistent view.

- The garbage collector removes the oldest history files until either the filling is again below 90% or there are no more history files of committed transactions.

z As long as transactions are not committed or named consistent views are not dropped, the before images of objects stored in the history files cannot be released, because they may be accessed by the consistent views. Remember that the consistent view wants to see the liveCache as it was when the consistent view started. So before images in the history files that are younger than the consistent view may reflect the status of liveCache at start of the consistent view. As a result the history files may grow.

z When a transaction or a named consistent view is active for a long time, this may become a problem for liveCache performance and availability:

y When the data cache is too small to hold the history files, data is swapped to disk. When the data is accessed again (by the application or the garbage collectors) it must first be read back into the data cache. This leads to physical I/O, which has to be avoided for liveCache.

y When history files grow further, this may lead to a ‘database full’ situation. The result is a standstill of the application.

z liveCache tries to optimize garbage collection, because scanning large history files is CPU- and I/O-intensive.

z The total usage of the data cache as well as the occupation with history and data pages can be monitored with transaction ‘LC10 -> Current Status -> Memory Areas -> Data Cache ’.

Page 72: Live Cache- Administration and Monitoring


Loss of consistent views

(Figure: same scenario as on the previous slide – when the database filling exceeds 90%, old before images such as s=15 are removed from the history files, and T4's read of s within its consistent view fails with ‘object history not found’.)

z If the data cache filling exceeds the limit of 95%, consistent views may become incomplete since old object images which belong to the view are removed. The access to such a removed old image causes the error ‘too old OID’ or ‘object history not found’.

z When the data cache filling level is above 95%, before images which are not accessed by any consistent view are removed. However, since the before images are linked in a chain, the connection to older images which might be visible in a consistent view is lost.

z When the database filling reaches the limit of 90%, before images are removed which are visible in consistent views.

Page 73: Live Cache- Administration and Monitoring


Memory Areas

SAP liveCache

Administration & Monitoring

z In this unit you will get to know the two main memory areas of the liveCache: the data cache and the OMS heap.

Page 74: Live Cache- Administration and Monitoring


Calling a liveCache method in ABAP

...
set connection: <liveCache>
...
exec sql call OID_UPD_OBJ (:KeyNo);
...
exec sql commit;
...

(Figure: the ABAP coding running on the APO application server calls the liveCache; each connection gets a session context in the liveCache with a private cache in the OMS heap, on top of the shared data cache of the liveCache basis.)

z A COM routine is called as a stored procedure in ABAP from the APO application server.

z Within a transaction (terminated by COMMIT or ROLLBACK), several COM routines can be called. All these routines work within the same session context in the liveCache. An important feature of a session context is that global data is copied into a private memory area (the OMS heap) and that all following operations operate on these private copies. Access to private data is much faster than accessing global data in the data cache, leading to a considerable performance gain – at the cost of memory consumption. The changes on the private copies are transferred into the global memory after a COMMIT and the private memory is released (versions are an exception). The released memory is not returned to the operating system but is only free to be reused for new private caches. Therefore, the OMS heap memory can never shrink.

Page 75: Live Cache- Administration and Monitoring


Memory areas in the liveCache

Data cache (parameter: CACHE_SIZE) – OMS data pages, history pages, SQL pages

OMS heap (parameter: OMS_HEAP_LIMIT) – copied OMS objects, local COM memory

z liveCache uses two main memory areas in the physical memory of the liveCache server: data cache and OMS heap

z Data cache

y Data cache is allocated in full size when the liveCache is started. The size is configured by liveCache parameter CACHE_SIZE

y data cache contains

• data pages with the persistent objects (OMS data pages)

• history pages with before images of changed or deleted objects (history pages)

• swapped named consistent views, keys for keyed objects and SQL pages (SQL pages). All pages which are organized as B* trees are called SQL pages.

y all these pages may be swapped to data volumes if the data cache is too small to hold all data

z OMS heap

y liveCache heap grows when additional heap memory is requested. The maximum size is configured by liveCache configuration parameter OMS_HEAP_LIMIT

y heap contains

• local copies of OMS objects (private cache for consistent views)

• local memory of a COM routine allocated by omsMalloc() and new()

y no swapping mechanism for heap memory is implemented except for inactive named consistent views
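z If you just want to check the current values of the two parameters without opening the LC10 or DBMGUI, the DBM command line can be used. A small sketch (liveCache name invented, command availability depends on the release):

  dbmcli -d LC1 -u control,control param_directget CACHE_SIZE
  dbmcli -d LC1 -u control,control param_directget OMS_HEAP_LIMIT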

Page 76: Live Cache- Administration and Monitoring


Interaction of data cache and OMS heap (1)

(Figure: a GET LIST method accesses object OID 13.1 – the OID is not found in the instance map of the session's private cache in the OMS heap, so the object has to be fetched from the object pages in the data cache of the liveCache basis.)

z When an object is accessed via its OID, the object is searched in the private cache of the session first. The OIDs of the private cache are stored in a hash table.

z When the object cannot be found in the private cache, the object is read from the global data cache. The OID contains the physical page number of the page that contains the object.

z If the page is not already in global data cache, it will be read from the data volumes.

Page 77: Live Cache- Administration and Monitoring


Interaction of data cache and OMS heap (2)

(Figure: the data cache page containing OID 13.1 is located; the page offset that is part of the OID is used to find object B inside the page, and B is copied into the private cache, whose instance map is updated.)

z When the page that contains the searched object is located in the global data cache, the page offset which is part of the OID is used to locate the object inside the page.

z The object is copied to private cache and the hash table of the private cache is updated.

Page 78: Live Cache- Administration and Monitoring


Interaction of data cache and OMS heap (3)

(Figure: all further accesses to OID 13.1 are served from the copy of object B in the session's private cache; the global version of the object in the data cache remains unchanged until commit.)

z All further accesses to the object will be handled in the private cache.

z All changes on the object will be made on the local copy of the object.

z The global version of the object in data cache remains unchanged until the transaction performs a commit. If the transaction ends with a rollback the private cache is released without changing any global version of the object.

z The subtransactions are completely handled within the private cache.

z When the object is used by a version, the object will never be copied back to global cache, but will be released when the version is dropped.

Page 79: Live Cache- Administration and Monitoring


Monitoring interaction of data cache and OMS heap

Object accesses from OMS heap and data cache

z The tab ‘Object accesses’ in the selection ‘Current Status->Problem Analysis->Performance->OMS Monitor’ lists for each COM routine the number of object accesses to the OMS heap and the data cache.

z The tab displays two kinds of columns. Columns named ‘OMS …’ describe accesses to the private OMS heap, while those named ‘Basis …’ count the various object accesses to the data cache. By comparing an OMS column with the corresponding basis column you can find out how effectively the private object cache works. Simply speaking: the larger the ratio between ‘OMS object acc.’ and ‘Basis object acc.’, the better the OMS caching works.

z Object accesses via keys and iterators are supplied only by the basis layer, therefore no columns for ‘OMS key accesses’ and ‘OMS iterator accesses’ exist.

Page 80: Live Cache- Administration and Monitoring


Data cache and OMS heap configuration

Data cache (parameter CACHE_SIZE)

- Is a static memory area and is allocated when the liveCache is started

- Contains persistent OMS objects (OMS page chains)

- Contains swapped inactive transactional simulations

- Contains SQL data and keys for OMS objects (B* trees)

- Contains the history files (before images)

OMS heap (parameter OMS_HEAP_LIMIT)

- liveCache heap grows dynamically until OMS_HEAP_LIMIT is reached

- Contains copies of objects in consistent views

- transactions

- named consistent views (versions/transactional simulations)

z Memory administration in heap

y When local object copies are released at the end of a transaction or when a named consistent view is dropped, the freed heap memory is not returned to the operating system. So the physically allocated heap never shrinks. It can only grow, up to OMS_HEAP_LIMIT.

y Internally the liveCache heap is organized in 64kB blocks.

y The allocated heap memory is fully under control of the liveCache. liveCache implements its own memory administration for OMS objects in private cache.

y Memory is only released to the liveCache and may be used for other liveCache objects.

z When OMS_HEAP_LIMIT is reached, liveCache copies inactive named consistent views to data cache and releases memory in heap.

z When no additional memory can be allocated for the heap, the COM routine that tries to allocate memory gets an outOfMemory error and the transaction is rolled back by the COM routine. All private data of this consistent view is freed. To handle the destruction of objects an emergency memory area of 10MB is allocated at liveCache start.

z Heap usage can be monitored with report /SAPAPO/OM_LC_MEM_MEMORY.

Page 81: Live Cache- Administration and Monitoring


Monitoring the OMS heap usage

Display OMS heap usage

z The selection ‘Current Status->Memory Areas->Heap usage’ yields information about the usage of OMS heap.

z ‘Available heap’ is the memory that was allocated for the heap from the operating system. It reflects the maximum heap size that has been needed by the COM routines since the start of the liveCache.

z ‘Total Heap usage’ is the currently used heap. When additional memory is needed, liveCache uses the already allocated heap until ‘Available’ is reached. Further memory requests result in additional memory requests to the operating system and the value of ‘Reserved’ grows. (‘Available heap’ > ‘Total Heap usage’)

z It is important to monitor the maximum heap usage. When the ‘Available heap’ reaches OMS_HEAP_LIMIT, errors in COM routines may occur due to insufficient memory. This should be avoided.

z ‘OMS malloc usage’: memory currently in use that has been allocated via calls of the method omsMalloc. (‘Total Heap usage’ > ‘OMS malloc usage’)

z ‘Temp. heap at memory shortage’: size of the emergency chunk. If a DB procedure runs out of memory, the emergency chunk is assigned to the corresponding session and the following memory requests are fulfilled from the emergency chunk. This ensures that the DB procedure can clean up correctly even if no more memory is available. After the DB procedure call the emergency chunk is returned to public use.

z ‘Temporary emergency reserve space’: memory of the emergency chunk currently in use. (‘Temp. heap at memory shortage’ >= ‘Temporary emergency reserve space’)

z ‘Max. emergency reserve space used’: maximum usage of the emergency chunk. (‘Temp. heap at memory shortage’ >= ‘Max. emergency reserve space used’)

Page 82: Live Cache- Administration and Monitoring


Monitoring the data cache usage

Display data cache usage

z The menu path ‘Current Status->Memory Areas->Data cache’ leads to a screen which displays all information about the liveCache data cache like data cache size, used data cache and the usage and hit ratios for the different types of liveCache data.

z In an optimal configured system

y the data cache usage should be below 100%

y the data cache hit rate should be 100%

y if data cache usage is higher than 80%, the number of OMS data pages should be higher than the number of OMS history pages

z Use the refresh button to monitor the failed accesses to the data cache. Each failed access results in a physical disk I/O and should be avoided.

z More detailed information about the cache accesses can be found selecting ‘Problem Analysis->Performance->Monitor->Caches’.

z Compare the size of OMS data with OMS history. If data cache usage is higher than 80 % and OMS history has nearly the same size as OMS data, use the ‘Problem Analysis->Performance->Monitor->OMS Versions’ screen to find out if named consistent views (versions) are open for a long time. Maximum age should be four hours.

Page 83: Live Cache- Administration and Monitoring


Versions and named consistent views

A session can run within a version, enclosed in the API commands omsCreateVersion – omsDropVersion.

All transactions running in one version have the same consistent view. It was started when the version was created. Such a consistent view is called a named consistent view.

All updates, creations and deletions of objects performed within a version remain in the private cache of the session. → Complete detachment of a user from the actions of other users.

Versions can be closed temporarily and re-opened. Closed versions are called inactive.

(Figure: timeline of a version – after CreateVersion all transactions of the session share one named consistent view; reads of s and t inside the version return the values that were valid when the version was created, plus the session's own changes, even while other transactions commit new values; the version can be closed, re-opened later and finally dropped.)

z For larger planning scenarios (implemented by so-called transactional simulations) APO required the ability to keep one consistent view over more than one transaction. This is, for instance, because in such a planning scenario dynpro (screen) changes occur which automatically cause commits. For these scenarios the liveCache provides versions.

z After creating a version within a session all transactions in this session have the same consistent view.

z After a commit no changed data is written into the global data cache; all data resides in the private cache. Thus cached objects cannot be released from the private cache after a commit or rollback. The consequence is that versions consume more and more OMS memory the longer they exist. Moreover, the garbage collector cannot release history pages since the version could still access an old image of an object.

z Versions can be closed temporarily and reopened in any other session. This is necessary since after a commit an application may be connected to another work process and therefore to another liveCache session.

z In case the heap consumption passes certain limits closed versions can be swapped into the global data cache where they are stored in B*-trees on temporary pages.

z Since temporary pages as well as the states of the private session caches are not recovered after a restart versions disappear automatically after stopping and starting the liveCache.

Page 84: Live Cache- Administration and Monitoring


Monitoring versions

Listing versions and their heap consumptions

z One reason for a large consumption of OMS heap and data cache could be a long running version which cumulates heap memory and which prevents the garbage collector from releasing old object images.

z With the selection ‘Problem Analysis->Performance->Monitor->OMS versions’ you can monitor the memory usage by versions.

z The column ‘Memory usage’ displays the actual usage of OMS heap memory. The columns ‘Time’ and ‘Age (hours)’ give the starting time of the version and the time since its start. Note that there should never be any version older than 4 hours. To avoid this situation, the report /SAPAPO/OM_REORG_DAILY must be scheduled at least once a day.

z Versions can be closed and re-opened in another session. To free heap memory, versions can be rolled out into the global data cache, where they are stored on temporary pages. The column ‘Rolled out’ displays whether the version cache was rolled out into the data cache. In the column ‘Rolled out pages’ you find the number of temporary pages in the data cache which are occupied by the rolled-out version cache.

z Long running transactions can cause the same memory lack as versions. To display starting time of all open transactions use ‘Problem Analysis->Performance->Transactions’.

Page 85: Live Cache- Administration and Monitoring


Controlling the OMS heap consumption

Configurable control parameters:

OMS_VERS_THRESHOLD [KB]

OMS_HEAP_THRESHOLD [%]

After each COMMIT the liveCache checks whether the active version in the current session consumes more than OMS_VERS_THRESHOLD KB of the OMS heap or whether more than OMS_HEAP_THRESHOLD % of OMS_HEAP_LIMIT is in use.

If YES:

- Unchanged objects are removed from the cache of the current version

- The current version cache is rolled out into the data cache.

z The consumption of OMS heap by versions can be controlled by the two configuration parameters OMS_VERS_THRESHOLD and OMS_HEAP_THRESHOLD. Both parameters allow a limitation of the heap consumption at the cost of object access time.

z OMS_VERS_THRESHOLD:

At the end of the transaction, unchanged data from versions of a session is deleted from the version cache and the version cache is rolled out into the data cache if the version occupies more than OMS_VERS_THRESHOLD KB of memory. If a stored object is accessed again at a later stage within the version, the object must be copied again from the data cache into the heap. This can be avoided by setting OMS_VERS_THRESHOLD higher if enough memory is available.

z OMS_HEAP_THRESHOLD:

If this percentage of the available heap (the available heap is defined by the parameter OMS_HEAP_LIMIT) is occupied, then objects that were read but not changed within a version are removed from the heap at the end of the transaction and the version cache is rolled out to the data cache. The default value is 100. In case of memory bottlenecks it may be wise to choose a smaller value.

Page 86: Live Cache- Administration and Monitoring


Task Structure

SAP liveCache

Administration & Monitoring

z At the conclusion of this unit you will be able to monitor the tasks running inside your liveCache server.

Page 87: Live Cache- Administration and Monitoring


Process, thread and task structure

(Figure: the liveCache process consists of several UKTs and single-task threads – Coordinator, Requestor, Console, Clock, Dcom 0-n, Dev 0-n (IOWorker 0-n) and Asdev 0-n, as well as UKTs containing the user tasks, the server tasks, the ALogWriter, the TraceWriter, the DataWriter and Timer, the Event task, and Utility together with the GarbageCollectors.)

z The operating system sees the liveCache as one single OS process. The process is divided into several OS threads (Windows and UNIX). liveCache calls these threads UKTs (user kernel threads).

z Some threads contain different specialized liveCache tasks whose dispatching is under control of liveCache.

z Other threads contain just one single task.

z The tasks that perform the application requests are called user tasks. User tasks are contained in UKTs which contain exclusively user tasks.

z Each APO work process is connected to one or two user tasks.

z Starting with liveCache 7.2.5.4 (APO SP 13) the number of CPUs used by user tasks can be limited by the parameter MAXCPU. MAXCPU defines the number of UKTs which accommodate user tasks. Since the user tasks consume the majority of the CPU time, MAXCPU defines approximately how many CPUs of the liveCache server are occupied by the liveCache.

Page 88: Live Cache- Administration and Monitoring


Coordinator: initialization / UKT coordination

Requestor: connect processing

Console: diagnosis

Timer: time monitoring

Dev0 thread: master for I/O on volumes; Dev<i> slave threads perform the volume I/O

Async0 thread: master for backup I/O; AsDev<i> threads perform the backup I/O

User Kernel Thread (UKT): contains the user tasks

Page 89: Live Cache- Administration and Monitoring


Task description

User: executes commands from applications and interactive components

Server: performs I/O during backups

ALogWriter: writes the logs to the log volumes

DataWriter: writes dirty pages from the data cache to disk

TraceWriter: flushes the kernel trace to the kernel trace file

Utility: handles liveCache administration

Timer: monitors LOCK and REQUEST TIMEOUTs

GarbageCollector: removes outdated history files and object data

z Each UKT makes various tasks available, including:

y user tasks, i.e. tasks that users connect to in order to work with the liveCache

y tasks with specific internal functions

z The total number of tasks is determined at start-up time and they are then distributed dynamically over the configured UKTs according to defined rules. Task distribution is controlled by parameters like e.g. _TASKCLUSTER_02.

z UKT tasks allow a more effective synchronization of actions involving several components of the liveCache, and minimize expensive process switching.

z The user tasks execute all commands from the applications. COM routines for instance run within the user tasks.

z Server tasks are used for various purposes, e.g. for I/O during backups, for the creation of indexes and for read ahead.

z On NT, tasks are implemented as fibers. On UNIX, tasks are realized as OS threads. The threads of one UKT form a group in which only one thread can be active at a time.

Page 90: Live Cache- Administration and Monitoring


Task distribution

Show task distribution

z The task distribution of the liveCache can be viewed within the LC10 through the selection ‘Current Status->Kernel threads->Thread Overview’.

z liveCache configuration: All garbage collector tasks run always in one thread.

z In the example above all user tasks run in one thread. Accordingly the configuration parameter MAXCPU is one.

Page 91: Live Cache- Administration and Monitoring


Current task state

Show task state

z The screen ‘Current Status->Kernel threads->Task manager’ displays information about the status of liveCache tasks which are currently working for an APO work process.

z In a running system, possible status are

y Running: task is in kernel code of liveCache and uses CPU

y Command Wait: user tasks wait for another command to execute

y DcomObjCalled: task is in COM routine code and uses CPU

y IO Wait (R) or IO Wait (W): task waits for I/O completion

y Vbegexl, Vsuspend: task waits for an internal lock in liveCache

y Vwait: task waits for the release of a lock which is held by another APO application. Locks are released after commit or rollback.

y No-Work: task is suspended since there is nothing to do

z If the sum of tasks in the states ‘Running’ and ‘DcomObjCalled’ is higher than the number of CPUs on the liveCache server for a longer time, the liveCache likely faces a CPU bottleneck. Before the number of CPUs is increased, a detailed analysis of the COM routines may be necessary.

z The ‘Application Pid’ is the process ID of the connected APO work process which can be identified in transaction SM50 and SM51 respectively.

Page 92: Live Cache- Administration and Monitoring


liveCache Console

live Cache: Console

z The ‘liveCache: Console’ window displays information about the liveCache status, much of which is also shown by the selection ‘Current Status->Kernel threads’ in the ‘liveCache: Monitoring’ window (see previous slide). However, while the output of the ‘liveCache: Monitoring’ window is always based on SQL queries to the liveCache, the ‘liveCache: Console’ selections get their results directly from the runtime environment of the liveCache. That means that in situations where you can no longer connect to the liveCache you can still use the liveCache console to investigate the liveCache status.

z All data shown in the various selections of the console screen can also be obtained by calling the command ‘x_cons <liveCache name> show all’ on a command line.

Page 93: Live Cache- Administration and Monitoring


Cumulative task state

z A comprehensive description of all objects of the liveCache runtime environment (RTE) is displayed when the ‘liveCache: Console’ screen is used. RTE objects are tasks, disks, memory, semaphores (synchronization objects which are called regions here) and waiting queues.

z In addition to the information about the current task states, which can also be displayed as shown on the previous slides, the selection ‘Task activities’ displays cumulated information about the task activities. In particular the dispatcher count is given, which counts how often a task was dispatched by the task scheduler. As long as this number stays constant the task is inactive. Other important values are:

y command_cnt: counts the number of application commands executed by the task.

y exclusive_cnt: number of accesses to regions (synchronization objects)

y state_vwait: counts the cases where the task had to wait for objects locked by another task

z Among the other information which can be displayed by the liveCache console, the number of disk accesses, the accesses to critical regions (see the slides in the unit ‘Performance analysis’) and the PSE data are the most important. Everything else is intended to be used only by liveCache developers. Therefore the displayed values may sometimes seem a little cryptic.

Page 94: Live Cache- Administration and Monitoring


Recovery

SAP liveCache

Administration & Monitoring

z At the conclusion of this unit you will be able to restore your liveCache.

Page 95: Live Cache- Administration and Monitoring


Restart

(Figure: transactions T1–T6 on a time axis with three savepoints and the crash – no recovery is needed for T1 and T5; redo (read archive log) applies to T4 and T6; undo (read undo file) applies to T2 and T3; C = Commit, R = Rollback.)

z Automatic recovery at restart.

z The restart performs a redo of transactions which were committed before the crash but whose changes are not yet completely contained in the last savepoint. Transactions which were still open at the crash time are only rolled back if they were already open at the time of the last savepoint.

z The starting point for the redo/undo is the last savepoint. All data written to the data volumes after the last savepoint will not be considered.

z Our Example:

y Transactions 1 and 5 are not relevant for redo/undo. Transaction 1 was committed at the time of the last savepoint and its modifications were written to the data volumes. The modifications of transaction 5 are not in the data area of the last savepoint and the transaction was rolled back.

y Transactions 2, 3 and 4 were not completed at the time of the last savepoint. The liveCache will redo transaction 4 ➜ REDO

y Transactions 2 and 3 will be rolled back, beginning at the time of the last savepoint ➜ UNDO

y The restart will completely redo transaction 6. Its modifications are not in the data area of the last savepoint ➜ REDO

Page 96: Live Cache- Administration and Monitoring


Restore

(Figure: recovery process – the complete data backup DAT_00004 and an incremental backup (PAG) are restored into the data volumes Data 1…Data n, followed by the log backups LOG_00010 and LOG_00011; afterwards the restart automatically redoes the remaining entries from the archive log until the liveCache is ready.)

z Recovery always starts with a RESTORE DATA in the operation mode ADMIN. During the restore, pages are written back to the volumes.

z RESTORE PAGES overwrites the pages in the volumes with the modified images.

z Log recovery is based on the last savepoint executed with the SAVE DATA/PAGES. After the last RESTORE DATA/PAGES the database immediately performs a restart, if the log entries belonging to the savepoint persist in the archive log. The restart reapplies the log entries.

z RESTORE LOG must be run, if the savepoint belonging to the complete/incremental backup was overwritten in the archive Log.

z The database reads the log entries from the backup media until it can find the next entry in the log.
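z The same recovery sequence can be sketched with DBM commands. The medium name DatBak is invented and the exact syntax of the recovery commands depends on the dbmcli release; log backups would be restored with further recover calls for the backup type LOG before the final restart:

  dbmcli -d LC1 -u control,control
    db_admin
    util_connect control,control
    recover_start DatBak DATA
    db_online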

Page 97: Live Cache- Administration and Monitoring


Recovery (1) : Start recovery process

Switch into OFFLINE/ADMIN/ONLINE mode

Recover database

z To perform a recovery it is necessary to bring the database to the ADMIN mode which can be done in the DBMGUI by pressing the yellow light in the traffic light symbol in the left upper corner.

z To start the recovery you have to change to the selection ‘Recovery->Database’. In the central window you can then choose which complete backup should be the basis for the recovery of the database. You can take the last complete backup (uppermost radio button) or any other complete backup (middle radio button). With the ‘Next Step’ icon you continue the recovery process.

Page 98: Live Cache- Administration and Monitoring


Recovery (2) : Choose backup to start with

z All previously made complete data backups are shown in this list. To continue the recovery mark the backup which you want to use as the basis for the recovery and press the button ‘Next Step‘.

Page 99: Live Cache- Administration and Monitoring


Recovery (3) : Choose strategy

Recovery strategies

z Now the simplest recovery strategy is shown. In the example above it is to restore the incremental backup after the complete backup. No further log backups are required since all needed log information is still on the log device.

z Instead of restoring the incremental backup you could restore the log backups. To do so, you have to mark one of the log backups; all further needed backups are then marked automatically.

Page 100: Live Cache- Administration and Monitoring


Recovery (4) : Start physical recovery

Start recovery

z To start the recovery you have to press the ‘Start‘ button.

z Each time a backup medium has been restored, the DBMGUI asks for the next backup. If the backup medium were a tape and not a file, you would have to change the tape now. To continue the recovery press the ‘Start’ button again.

Page 101: Live Cache- Administration and Monitoring


Recovery (5) : Restart liveCache

Restart

z After the recovery from the backup media is finished the DBMGUI informs you that it is possible to restart the liveCache. Then the log entries from the log volumes will be redone.

z When the restart is finished the liveCache is in ONLINE mode and all its data and functionalities are available again.

Page 102: Live Cache- Administration and Monitoring


Configuration

SAP liveCache

Administration & Monitoring

z This unit introduces the key parameters of the liveCache configuration and demonstrates how they can be changed.

Page 103: Live Cache- Administration and Monitoring


Displaying configuration parameters

Display parameters and their history

z Each time a liveCache is started it is configured according to a parameter set stored in the liveCache parameter file.

z The parameter file is named after the <SID> of the liveCache and stored in the directory <IndepData>/config, which is usually /sapdb/data/config. Changes to the parameter file are logged in the file <SID>.pah located in the same directory.

z The parameter file is not readable and must not be changed directly, since the parameters are not independent and have to fulfil certain constraints. To change the parameters you have to use one of the administration tools like DBMCLI, DBMGUI, LC10 or WEBGUI.

z Within the LC10 the configuration parameters can be shown via the selection ‘Current Status->Configuration->Parameters->Currently’. The history of each parameter can be accessed by pressing the triangle in front of it.

z According to their meaning for the administrator the parameters are divided into three groups:

y General: these parameters can be changed by the liveCache administrator.

y Extended, Support: changes should be performed only in cooperation with the SAP support.

Page 104: Live Cache- Administration and Monitoring


Change configuration parameters

Change parameters

Store changes

z To change the configuration parameters go to the selection ‘Administration->Configuration->Parameters’. Here you find a column ‘New value’ which is highlighted for all parameters that you are allowed to change. The other parameters are either fixed after the initialization or determined by other parameters.

z By pressing the ‘Check Input’ button you can check whether your new parameter values fulfil all required constraints. To store your updated values press the disk icon. The file which contains the constraints, rules and descriptions of the parameters is called cserv.pcf and can be found in <InstallationPath>/env.

z Notice that the configuration parameters are only read when the liveCache is started, which means parameter value changes do not take effect until the liveCache has been stopped and started again.

z In principle all parameters should have proper values after the installation and no further reconfiguration should be necessary.
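z The same parameter change can be sketched with DBM commands. The value below is purely an illustration; the check corresponds to the ‘Check Input’ button, and the command set may differ between releases:

  dbmcli -d LC1 -u control,control
    param_startsession
    param_put CACHE_SIZE 250000
    param_checkall
    param_commitsession

As in the LC10, the new value only becomes effective after the liveCache has been stopped and started again.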

Page 105: Live Cache- Administration and Monitoring


Performance Analysis

SAP liveCache

Administration & Monitoring

z At the conclusion of this unit you will be able to use the LC10 and the DBMGUI to find out if the performance of your liveCache is limited by a bottleneck. Moreover, you will be given ideas of how to improve the performance.

Page 106: Live Cache- Administration and Monitoring


Monitoring APO and liveCache performance

(Figure: monitoring APO and liveCache performance covers three areas – the liveCache as a share of the APO response time, monitoring specific transactions, and monitoring the liveCache server.)

z Analyzing an APO system for liveCache workload and bottlenecks, three different areas must be covered:

y Estimate the liveCache share of the total APO response time and identify the APO transactions which cause a high liveCache workload.

y Monitor the liveCache server and identify bottlenecks.

y Detailed analysis of specific APO transactions which have been identified as performance critical.

z These three areas are covered by different sets of SAP monitoring transactions:

y Workload analysis transaction ST03N

y liveCache monitor transaction LC10

y A combination of runtime analysis transaction SE30, SQL trace transaction LC10 and liveCache monitoring transaction LC10

z This workshop is focused on monitoring the liveCache server. However, a complete performance analysis has always to include all three parts shown above.

Page 107: Live Cache- Administration and Monitoring


Reasons for poor liveCache performance

z High rate of I/O operations

z Serialization on synchronization objects

z Insufficient CPU performance

z Algorithmic errors in the COM routines

z Algorithmic errors in the live Cache code

z There are several causes of poor liveCache performance. The most important are:

y A high rate of I/O operations performed by the user tasks.

y Serialization on liveCache synchronization objects. These objects are used to synchronize parallel access to shared liveCache resources, such as the data cache.

y Too many users running COM routines.

y COM routines as well as the liveCache itself can cause poor performance due to algorithmic errors.

Page 108: Live Cache- Administration and Monitoring


How to increase performance

z Optimize setting of configuration parameters

z Extend main memory

z Increase number of CPUs

z Call APO/liveCache support

z The most important measure to improve the liveCache performance is to optimize the settings of the liveCache configuration parameters.

z If a shortage of main memory or CPU performance is detected (see next slides) you should enlarge the main memory or increase the number of CPUs.

z Whenever the performance is poor due to unclear reasons you should call the APO/liveCache support.

Page 109: Live Cache- Administration and Monitoring


Prerequisite for performance analysis

Show SQL statistics

Must be larger than 50000 for representative data

z A reliable analysis of the liveCache of a productive system is only possible if a sufficient number of COM routines has already been executed. If fewer than about 50000 COM routines have been executed, the monitored data may not reflect a representative workload of a productive APO system.

z To get an impression of how many commands (DB procedures / COM routines) have been executed, choose the tab ‘SQL statistics’ in the selection ‘Problem Analysis->Performance->Monitor’. The tab displays for each SQL action, such as reading, inserting or deleting a record, how often it was executed. To find the number of executed COM routines look for the row ‘External DBPROC calls’. For a liveCache this number corresponds to the number of COM routines executed.

Page 110: Live Cache- Administration and Monitoring


Performance parameters: I/O

Show data cache filling and accesses

! Should be 100% !

z Although the liveCache is designed to keep all data in the data cache while it is in ONLINE mode, it can accommodate more data than fits into the data cache. If this happens, the liveCache performance can suffer heavily from the I/O operations needed to swap pages between the data cache and the data devices.

z To detect bottlenecks due to I/O operations use the selection ‘Current Status->Memory->Areas->Data cache’. There you can find information about the data cache filling level as well as about the data cache accesses.

z For optimal liveCache performance (i.e. to avoid I/O operations when accessing data and history pages), the data cache usage should stay below 100%.

z Whether the performance is significantly affected by I/O operations can be seen from the number of failed cache accesses. The average data cache hit rate should be above 99.9%. A lower rate is a hint that the data cache is too small. The situation shown above indicates rather poor performance.

z With the SQL command 'monitor init' you can reset the access counters to zero. This allows you to display the current hit rates.

z Note that after the start of the liveCache the data cache is empty, and it takes some time until the hit rate shows a stable value that is relevant for an analysis.
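z A hedged illustration of the 99.9% guideline (the counters here are purely hypothetical, not taken from the screenshot above): the hit rate is the share of successful accesses in all data cache accesses.

hit rate = successful accesses / total accesses
         = 2,000,000 / (2,000,000 + 5,000) ≈ 99.75%

A rate like this is already below the recommended 99.9% and hints at a data cache that is too small.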

Page 111: Live Cache- Administration and Monitoring


Possible reasons for poor data cache hit rates

z Insufficient size of data cache

z Long-running version

z Long-running transaction

z The main reason for a poor cache hit rate is a data cache that is configured too small. However, sometimes the hit rate is poor due to long-running versions or transactions. To keep the consistent view of these versions or transactions, the liveCache is forced to store a large number of history pages, which fill the cache and lead to data and history pages being rolled out to the data devices.

z To find out whether a bad hit rate is caused by versions or transactions, check the selections 'Problem Analysis->Performance->Monitor->OMS versions' and 'Problem Analysis->Performance->Transactions'. There should be no version older than four hours.

Page 112: Live Cache- Administration and Monitoring


How to configure the data cache

CACHE_SIZE ≈ 0.4 * FREE_MEMORY

FREE_MEMORY = min [ physical memory - memory for OS and other applications,
                    MAX_VIRTUAL_MEMORY ]
              - SHOW_STORAGE
              - MAXUSERTASK * _MAXTASK_STACK
              - 100 MB

physical memory    : physical memory of the live Cache server
MAXUSERTASK        : parameter from the live Cache configuration file
_MAXTASK_STACK     : parameter from the live Cache configuration file
MAX_VIRTUAL_MEMORY : NT: see 'MAX virtual memory' in the knldiag file; UNIX: call ulimit -a
100 MB             : upper limit of memory for: task stacks of non-user tasks + memory for COM routine DLLs + memory for the live Cache program code
SHOW_STORAGE       : result of the command dbmcli -d <liveCache_name> -u control,control show storage

z The above formula gives a suggestion for the configuration parameter CACHE_SIZE, which determines the size of the data cache. However, depending on your particular workload profile, CACHE_SIZE may have to deviate from this suggestion.

z If your cache hit rate is below 100% although CACHE_SIZE is set as shown above, the physical memory of your liveCache server should be enlarged.

z MAX_VIRTUAL_MEMORY describes the maximum memory that can be accessed by the liveCache. On NT this limit is displayed in the knldiag file; on UNIX use the command 'ulimit -a'.

z On Windows NT you should use the Enterprise Edition to increase the MAX_VIRTUAL_MEMORY from 2 GB to 3 GB.
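z A minimal worked example of the formula above, assuming purely hypothetical values (a liveCache server with 16 GB physical memory, about 2 GB reserved for the OS and other applications, MAX_VIRTUAL_MEMORY well above that, SHOW_STORAGE reporting about 1 GB, MAXUSERTASK = 50 and _MAXTASK_STACK = 4 MB):

FREE_MEMORY ≈ (16 GB - 2 GB) - 1 GB - 50 * 4 MB - 100 MB ≈ 12.7 GB
CACHE_SIZE  ≈ 0.4 * 12.7 GB ≈ 5.1 GB

The remaining roughly 60% of FREE_MEMORY (about 7.6 GB) stays available for the OMS heap (see the next slide).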

Page 113: Live Cache- Administration and Monitoring


How to configure the OMS heap

OMS_HEAP_LIMIT ≈ 0.6 * FREE_MEMORY

The heap size is all right if no OutOfMemory exceptions occur.

If (#OutOfMemoryExceptions > 0)
    increase the OMS heap, if necessary at the cost of the data cache.

z The free memory available for the data cache and the OMS heap should be divided in the ratio 40/60, where the OMS heap gets the larger part of the memory.

z In contrast to the data cache, the OMS heap is not allocated at the start of the liveCache, so there is no strict need to define OMS_HEAP_LIMIT in the configuration file. By setting the OMS heap limit to 0 you allow the liveCache to allocate as much heap memory as it can get from the operating system. However, on Windows NT and AIX the liveCache could crash if the OS cannot allocate any more memory; therefore you should set OMS_HEAP_LIMIT to the value suggested above. If OMS_HEAP_LIMIT is not zero, the liveCache stops requesting heap memory from the OS once the limit is reached; instead, all COM routines requesting further memory are aborted.

z The heap memory is of sufficient size if no OutOfMemory exceptions occur. They must be avoided since they abort the affected COM routine. The occurrence of OutOfMemory exceptions can be checked by executing the SQL command

select sum(OutOfMemoryExceptions) from Monitor_OMS

or by checking the column 'OutOfMemory excpt.' in the tab 'Transaction counter' of the selection 'Problem Analysis->Performance->OMS monitor' in LC10.

z If you find the number of OutOfMemory exceptions growing, you should increase the OMS heap (if necessary by making the data cache smaller).
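z Continuing the purely hypothetical sizing example from the previous slide (FREE_MEMORY ≈ 12.7 GB), the suggested limit would be

OMS_HEAP_LIMIT ≈ 0.6 * 12.7 GB ≈ 7.6 GB

If OutOfMemory exceptions still occur with such a setting, shift memory from the data cache to the OMS heap as described above.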

Page 114: Live Cache- Administration and Monitoring


Performance parameters: regions (1)

Example: Data cache

The data cache is striped into _DATA_CACHE_RGNS regions. To access a page with the page number PNO, a task must enter the data cache region (PNO mod _DATA_CACHE_RGNS) + 1. In each region, at most one task can search for a page at a time.

[Figure: a data cache striped into four regions (Data1 to Data4); each region holds a subset of the page numbers, and the user tasks u1 to u6 access pages in parallel, each entering the region that owns the requested page.]
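As a worked example of the mapping formula: with the four regions of the figure, a task reading page 61 must enter region (61 mod 4) + 1 = 2, while a task reading page 16 enters region (16 mod 4) + 1 = 1; only tasks whose pages map to the same region have to wait for each other.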

z When monitoring the liveCache task activities in 'liveCache: Console->Active Tasks' (or by executing dbmcli -d <liveCache_name> -u control,control show act), you should ideally find the user tasks in the state 'Running' or 'DcomObjCalled'. If user tasks are instead often in the state 'Vbegexcl', your performance may suffer from serialized access to internal liveCache locks. The liveCache calls these internal locks regions (they correspond to latches in Oracle). Regions are used to synchronize the parallel access to shared resources. For instance, searching for a page in the data cache is protected by regions. In each region, at most one task can search for a page.

z If a task requests a region that is already occupied by another task, the requesting task is suspended until it can enter the region. This situation is displayed by the status 'Vbegexcl' in the task monitor 'liveCache: Console->Active Tasks'.

Page 115: Live Cache- Administration and Monitoring


Performance parameters: regions (2)

Collision rate

Show region access

z The number of collisions, i.e. situations where a task had to be suspended because it requested an occupied region, is displayed in the 'liveCache: Console' screen for each region.

z The collision rates of frequently used regions should not exceed 10%. Otherwise the liveCache performance is at risk.

z To reduce critical collision rates, the configuration parameter defining the number of regions used to stripe the corresponding resource can be increased. However, since a high collision rate can also be an indicator of algorithmic errors, this should be done only in collaboration with the liveCache support.
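z A hedged illustration of the 10% guideline (hypothetical counters, not taken from a real console screen): a region with 400,000 accesses and 12,000 collisions has a collision rate of 12,000 / 400,000 = 3%, which is uncritical; 60,000 collisions on the same number of accesses would mean 15% and should be discussed with the liveCache support.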

Page 116: Live Cache- Administration and Monitoring


Performance parameters: MAXCPU

Guideline for servers used exclusively for a live Cache

if (# CPUs of live Cache server < 8)
    MAXCPU = # CPUs of live Cache server
else
    MAXCPU = # CPUs of live Cache server - 1

z If a liveCache server possesses fewer than 8 CPUs, the configuration parameter MAXCPU should be set to the exact number of CPUs. If there are 8 or more CPUs, MAXCPU should be the number of CPUs reduced by one. This reserves one CPU for non-user tasks. In particular, the garbage collector can use this processor to remove deleted objects.

z A good choice for the number of garbage collectors (GCs) is to set _MAX_GARBAGE_COLL to twice the number of data devices. This choice has no influence on the CPU usage of the GCs, since all GCs run in one thread, but it results in good I/O performance of the GCs.

z If more user tasks are in the states 'Running' or 'DcomObjCalled' than the liveCache server has CPUs, the liveCache performance is CPU-bound.
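z A short worked example of the guideline, assuming hypothetical hardware: a server with 6 CPUs used exclusively for the liveCache gets MAXCPU = 6; a server with 12 CPUs gets MAXCPU = 11, leaving one CPU for non-user tasks. With 4 data devices, _MAX_GARBAGE_COLL = 2 * 4 = 8 would be a reasonable setting for the garbage collectors.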

Page 117: Live Cache- Administration and Monitoring


Analysis of COM routine performance

COM routine monitor

z Even if the liveCache itself works fine, the COM routines can cause poor performance due to algorithmic errors. To analyze such problems the liveCache supplies an expert tool for investigating the performance of COM routines. It lists the runtime, memory consumption, and number of object accesses for each COM routine. All these data give hints as to which COM routine could be problematic. However, since the analysis is not simple, this monitor should be used only by the APO support.

z Tab explanation:

y ‘Runtime’: total and average runtime of each COM routine.

y ‘Object Accesses’: number of object accesses from the private cache and from the data cache for each routine (see also slide).

y ‘Transaction counter’ : number of exceptions thrown within the routine, number of commits and rollbacks for subtransactions.

y ‘Cost summary’: summary of the previous four tabs.

Page 118: Live Cache- Administration and Monitoring


Tracing internal liveCache activities

live Cache tracing

Activate/deactivate tracing

Flush trace

Create readable trace file

z To analyze the internal activities of the liveCache, the liveCache can write a trace file. This file is very helpful when looking for the reasons of bad performance that may be due to algorithmic or programming errors within the liveCache. The file should be interpreted only by the liveCache support.

z The trace is not written automatically but must be activated using the DBMGUI. In the selection 'Check->Tracing' you can choose which operations should be traced. After activation, the trace is written into a main memory structure to avoid slowing down the system with trace I/O operations. To actually write the trace to a file it must be flushed. The resulting file is not yet readable but is still an image of the memory structure. A readable trace file can be created in the tab 'Protocol'.

Page 119: Live Cache- Administration and Monitoring


Summary

(1) live Cache concepts and architecture

(2) live Cache integration into R/3 via transaction lc10

(3) Basic administration (starting / stopping / initializing)

(4) Complete data backup

(5) Data storage

(6) Advanced administration (log backup / incremental data backup/ add volume)

(7) Consistent views and garbage collection

(8) Memory areas

(9) Task structure

(10) Recovery

(11) Configuration

(12) Performance analysis

Page 120: Live Cache- Administration and Monitoring


Further Information

Service Marketplace: http://service.sap.com -> MySAP SCM Technology

Public Web: www.sap.com -> Solutions -> Supply Chain Management
www.sapdb.org

Related Workshop at TechEd 2002: SAP DB Administration Made Easy,
September 30th / 4:00 pm, Hall 5 / Room L

Related Lecture at TechEd 2002: liveCache: The Engine of APO,
October 2nd / 3:00 pm – 4:00 pm, Kaisen Saal

Page 121: Live Cache- Administration and Monitoring


Q&A

Page 122: Live Cache- Administration and Monitoring


Feedback

http://www.sap.com/teched/bremen/ -> Conference Activities

Page 123: Live Cache- Administration and Monitoring


No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice.

Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors.

Microsoft®, WINDOWS®, NT®, EXCEL®, Word®, PowerPoint® and SQL Server® are registered trademarks of Microsoft Corporation.

IBM®, DB2®, DB2 Universal Database, OS/2®, Parallel Sysplex®, MVS/ESA, AIX®, S/390®, AS/400®, OS/390®, OS/400®, iSeries, pSeries, xSeries, zSeries, z/OS, AFP, Intelligent Miner, WebSphere®, Netfinity®, Tivoli®, Informix and Informix® Dynamic ServerTM are trademarks of IBM Corporation in USA and/or other countries.

ORACLE® is a registered trademark of ORACLE Corporation.

UNIX®, X/Open®, OSF/1®, and Motif® are registered trademarks of the Open Group.

Citrix®, the Citrix logo, ICA®, Program Neighborhood®, MetaFrame®, WinFrame®, VideoFrame®, MultiWin® and other Citrix product names referenced herein are trademarks of Citrix Systems, Inc.

HTML, DHTML, XML, XHTML are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology.

JAVA® is a registered trademark of Sun Microsystems, Inc.

JAVASCRIPT® is a registered trademark of Sun Microsystems, Inc., used under license for technology invented and implemented by Netscape.

MarketSet and Enterprise Buyer are jointly owned trademarks of SAP Markets and Commerce One.

SAP, SAP Logo, R/2, R/3, mySAP, mySAP.com and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world. All other product and service names mentioned are trademarks of their respective companies.

Copyright 2002 SAP AG. All Rights Reserved

Page 124: Live Cache- Administration and Monitoring


EUROPEAN SAP TECHNICAL EDUCATION CONFERENCE 2002

Sept. 30 – Oct. 2, 02 Bremen, Germany

WORKSHOP