HA200



Summary of HANA 200 SPS07

Transcript of HA200

Page 1: HA200

Unit 1
Lesson 1
- Daily challenges:
  o Complex system landscapes
  o High flexibility
  o Immediate results
  o Massive growth of data volume
  o Skilled workforce
- Problems without SAP HANA:
  o Suboptimal execution speed
  o Lack of responsiveness
  o User frustration
  o Unsupportable business processes
  o Lack of transparency
  o Need for aggregation
  o Outdated figures
  o Guessing the current situation
  o Reactive business model
  o Missed opportunities
  o Competitive disadvantage
- What is in-memory computing:
  o Hardware technology innovation
    Multi-core architecture (8 CPUs x 10 cores per blade)
    Massive parallel scaling with many blades
    64-bit address space (2 TB in current servers)
    Dramatic decline in price/performance
  o Software technology innovation
    Row and column store
    Compression
    Partitioning
    No aggregate tables
    Insert only on delta

- Past disk-centric, singular processing platforms are the bottleneck:
  o Long online transactions and batch processes
  o Lack of flexibility
  o Complex and costly database landscape
  o The explosion in data volume caused a major bottleneck in data transfer
  o Low I/O transfer rate
  o To overcome the bottleneck, complex deployment architectures were added, but they compromised flexibility and added cost
- A new technology platform is required: unified, low latency, low complexity, to support real-time business requirements
  o Store massive amounts of information compressed in main memory
  o Utilize parallel processing on multiple cores
  o Move data-intensive calculations from the application layer into the database layer
  o Since all data is available in memory and processed on the fly, there is no need for aggregated information and materialized views
  o Simplify architecture, reduce latency, reduce complexity, reduce cost
  o High scalability (multi-core, multi-threaded processors, 64-bit address space, advancements in parallel data processing)
- Software component view:
  o Analytical and special interfaces: SQL, SQLScript, MDX, others
  o Application logic extensions: text analytics, application function libraries (Business Function Library, Predictive Analysis Library)
  o Parallel data flow computing model: parallel calculation engine
  o Multiple in-memory stores: relational stores (row-based, columnar), object graph store
  o Appliance packaging: managed appliance

- SAP HANA deployment view (SAP HANA appliance):
  o SAP HANA database: name server (maintains landscape information), master index server (holds data and executes all operations), statistics server (collects performance data about SAP HANA), XS server (XS service)
  o SAP HANA Studio repository (repository for SAP HANA content lifecycle management)
  o SAP Host Agent (enables remote start/stop)
  o SAP HANA Lifecycle Manager (manages software updates for SAP HANA)
- SAP HANA:
  o Real-time applications and real-time analytics
  o In-memory database: optimizes memory access between CPU cache and main memory
  o Predictive analytics: in-database predictive algorithms, and access to open-source algorithms via R integration
  o Text search and mining
  o Agility for business analysts and users: discover trends and outliers with Lumira, adapt to business scenarios by combining, manipulating, and enriching data with Explorer, tell your story with self-service visualizations and analytics with Analysis, forecast and predict with Predictive Analysis

Lesson 2
- SAP HANA information sources: http://help.sap.com/hana
  o SAP HANA Master Guide (overview, architecture, software components, deployment scenarios)
  o SAP HANA Server Installation Guide (how to install SAP HANA)
  o Technical Operations Manual (available administration tools, key tasks of the system administrator)
  o SAP HANA Database Administration Guide (database administration using the Administration console in SAP HANA Studio)

Lesson 3
- Updates are shipped with Support Package Stacks (SPS), released twice a year, backward compatible

- SAP HANA Support Package Revisions: every 3 months
- SAP HANA Maintenance Revisions: every 2 weeks
- Naming convention: SPS 09 revision 71.1 = Support Package Stack 9, revision 71, maintenance revision 1
- SP revision = contains all fixes delivered through maintenance revisions, plus performance improvements; for any customer and any scenario
- Maintenance revision = contains only fixes for major bugs found in SAP HANA key scenarios; focused on production and business-critical use

Unit 2
Lesson 1
- Sizing of the SAP HANA appliance is mainly based on the required main memory size
- Memory sizing is determined by the amount of data that is to be stored in memory
- Main memory size depends on the scenario: BW on HANA, Suite on HANA, or general sizing
- SAP HANA sizing consists of:
  o Main memory sizing for static data
  o Main memory sizing for objects created during runtime (data load and query execution)
  o Disk sizing
  o CPU sizing

- RAM sizes:
  o XS: 2 x 10-core Westmere EX (2-socket system), 128 GB main memory, 160 GB PCIe flash/SSD for the log volume, 1 TB SAS/SSD for the data volume, 3 x 1 Gb network or 1 x 10 Gb network (trunk), redundant network
  o S: 2 x 10-core Westmere EX (2- or 4-socket system), 256 GB main memory, 320 GB PCIe flash/SSD for the log volume, 1 TB SAS/SSD for the data volume, 3 x 1 Gb network or 1 x 10 Gb network (trunk), redundant network
  o M: 4 x 10-core Westmere EX (4- to 8-socket system), 512 GB main memory, 640 GB PCIe flash/SSD for the log volume, 2 TB SAS/SSD for the data volume, 3 x 1 Gb network or 1 x 10 Gb network (trunk), redundant network
  o L: 8 x 10-core Westmere EX (8-socket system), 1 TB main memory, 1.2 TB PCIe flash/SSD for the log volume, 4 TB SAS/SSD for the data volume, 3 x 1 Gb network or 1 x 10 Gb network (trunk), redundant network

- General sizing (static and dynamic RAM requirement):
  Calculate the uncompressed data volume to be loaded into SAP HANA
  Apply the compression factor
  Multiply the result by 2 (because dynamic RAM = static RAM)
- Static RAM = the amount of main memory used for holding table data (excluding associated indexes); take the uncompressed data size and apply the compression factor to determine the RAM size
- Dynamic RAM = the amount of memory needed when new data is loaded or queries are executed; the same amount as static RAM
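The sizing rule above can be sketched as a small calculation (a minimal illustration, not an official SAP sizing tool; `hana_memory_sizing` is a hypothetical helper, and the compression factor must come from an actual sizing exercise):

```python
def hana_memory_sizing(uncompressed_gb, compression_factor):
    """Rule of thumb from the notes: static RAM = uncompressed data
    divided by the compression factor; total RAM = 2 x static,
    because dynamic RAM equals static RAM."""
    static_gb = uncompressed_gb / compression_factor
    return {"static_gb": static_gb, "total_gb": 2 * static_gb}

# Example: 700 GB uncompressed data with an assumed factor of 7
# -> 100 GB static RAM, 200 GB total RAM requirement.
print(hana_memory_sizing(700, 7))
```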


- Disk sizing:
  Disk size for the persistence layer = 1 x RAM
  Disk size for log files/operations = 1 x RAM
  Data volume size = 3 to 4 x RAM
  Log volume size = 1 x RAM
- The data volume has to hold: space for one data export, space for at least one process image, and a shared volume for executables
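The disk rules of thumb above translate into a simple helper (illustrative only; real disk sizing follows the SAP sizing guides, and the function name is an assumption):

```python
def hana_disk_sizing(ram_gb):
    """Disk sizing rules of thumb from the notes:
    data volume 3-4 x RAM (upper bound used here),
    log volume 1 x RAM, persistence layer 1 x RAM."""
    return {
        "data_volume_gb": 4 * ram_gb,
        "log_volume_gb": ram_gb,
        "persistence_gb": ram_gb,
    }

# Example: an M-size appliance with 512 GB RAM
print(hana_disk_sizing(512))
```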

- CPU sizing = 300 SAPS per active user
- SAP HANA queries are divided into 3 categories: Easy, Medium (uses 2x the resources of Easy), and Heavy (uses 10x the resources of Easy)
- SAP HANA users can be divided into 3 categories: Sporadic (1 query per hour: 80% easy queries, 20% medium queries), Normal (11 queries per hour: 50% easy queries, 50% medium queries), Expert (33 queries per hour: 100% heavy queries)
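The query and user categories above can be combined into a back-of-the-envelope load estimate (a sketch based solely on these notes; `weighted_query_load` and its resource units are illustrative, not an SAP formula):

```python
def sizing_saps(active_users, saps_per_user=300):
    """CPU sizing rule from the notes: 300 SAPS per active user."""
    return active_users * saps_per_user

def weighted_query_load(users):
    """Relative hourly load: queries/hour per user category,
    weighted by the resource factor of each query category
    (Easy = 1, Medium = 2x Easy, Heavy = 10x Easy)."""
    weight = {"easy": 1, "medium": 2, "heavy": 10}
    profiles = {
        "sporadic": (1,  {"easy": 0.8, "medium": 0.2, "heavy": 0.0}),
        "normal":   (11, {"easy": 0.5, "medium": 0.5, "heavy": 0.0}),
        "expert":   (33, {"easy": 0.0, "medium": 0.0, "heavy": 1.0}),
    }
    load = 0.0
    for category, count in users.items():
        queries_per_hour, mix = profiles[category]
        per_query = sum(mix[c] * weight[c] for c in weight)
        load += count * queries_per_hour * per_query
    return load

# Example: 100 users with the default 70/25/5 distribution
print(sizing_saps(100))
print(weighted_query_load({"sporadic": 70, "normal": 25, "expert": 5}))
```

Note how the expert users dominate the load: 5 experts generate more weighted load than 70 sporadic and 25 normal users combined, which is why the default mix still assumes 0.2 cores per user on average.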

- Default distribution of user categories: 70% sporadic, 25% normal, 5% expert
- Average resource requirement: 0.2 cores per user
- CPU sizing in complex scenarios is influenced by data volume and query complexity
- SAP HANA can be sized using the Quick Sizer (calculates memory, CPU, disk, and I/O resource categories), from http://service.sap.com/quicksizer
- Use the Quick Sizer for an initial sizing recommendation
- System types:
  o Single-host system = a system with one host (one operating system environment)
  o Multi-host (distributed) system = used to spread the load over several hosts

- An SAP HANA system is composed of:
  o Host = the operating environment in which the SAP HANA database runs; provides all resources and services (CPU, memory, network, and operating system) that the database requires; provides links to the installation directory, data directory, and log directory, or the storage itself (which does not have to be on the host)
  o System = one or more instances with the same instance number; the term is used interchangeably with SAP HANA database; the SID is the identifier of the SAP HANA system
  o Instance = the set of SAP HANA system components installed on one host; a system can be distributed over several hosts, and the instances distributed over those hosts must all have the same instance number

- Single SAP HANA host with a single SAP HANA system: perform the installation of the first SAP HANA system with the SAP HANA unified installer
- Single SAP HANA host with multiple SAP HANA systems: use SAP HANA Lifecycle Manager (HLM) to add an SAP HANA system to a host where an SAP HANA system is already installed, with a different SID and a different instance number
- Operating system for SAP HANA: SUSE Linux Enterprise Server (SLES) 11 SP2 is necessary for using hdblcm
- Hardware requirements: for the software, 20 GB RAM (15 GB for the basic software and 5 GB for programs); additional memory is required for the data and log volumes, based on requirements
- During update and installation of the SAP HANA database, a hardware check is performed (a script automatically called by the installer)
- Hardware requirement for the network connection: 10 Gbit/s between the SAP HANA landscape and the source system


- Important directories and space required:
  /                              10 GB
  /hana/shared                   mount directory to share files between all hosts, 1 x RAM
  /hana/shared/<SID>/hdbclient
  /hana/shared/<SID>/hdbstudio
  /usr/sap                       local SAP system instance directory, 50 GB
  /hana/data/<SID>               default path to the data directory, 4 x RAM
  /hana/log/<SID>                default path to the log directory, 1 x RAM
  For patching: 3 GB in the working directory
  The SAP HANA database can only use 90% of physical memory
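As a rough sketch, the space rules in the directory table above can be turned into a helper (illustrative only; `hana_directory_sizes` is a hypothetical name, and the values are the rules of thumb from these notes, in GB):

```python
def hana_directory_sizes(ram_gb):
    """Space rules of thumb per directory, in GB, from the notes:
    fixed sizes for / and /usr/sap, RAM-proportional sizes for
    the shared, data, and log directories."""
    return {
        "/": 10,
        "/hana/shared": ram_gb,          # 1 x RAM
        "/usr/sap": 50,
        "/hana/data/<SID>": 4 * ram_gb,  # 4 x RAM
        "/hana/log/<SID>": ram_gb,       # 1 x RAM
    }

# Example: a 512 GB appliance needs 2 TB in /hana/data/<SID>
print(hana_directory_sizes(512))
```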

- File system structure: <insert picture here>

Unit 3
Lesson 1
- SAP HANA SPS7:
  For update and configuration: SAP HANA Lifecycle Manager (HLM)
  For installation and update: the SAP HANA lifecycle management tool hdblcm (and hdblcmgui)

- Different types of installation:
  o SAP HANA appliance delivery: fast implementation, support fully provided by SAP
  o SAP HANA tailored data center integration (TDI): more flexibility, saves budget and existing investments
- The person who installs the system has to be certified (SAP Certified Technology Specialist E_HANAINST142); SAP Certified Technology Associate (C_HANATEC142) is the prerequisite; refer to SAP Note 1905389

- Most important tools:
  hdblcm – installation tool
  hdblcmgui – installation tool with user interface
  HLM (SAP HANA Lifecycle Manager) – different features
- Installation in interactive mode can be done using hdblcm and hdblcmgui; installation in batch mode can be done using hdblcm
  Options:
  General help: -h or --help
  Installation help: --action=install -h
  Update help: --action=update -h
  Uninstallation help (called from <installation path>): ./hdblcm --uninstall -h
- For troubleshooting, refer to:
  Log files: /var/tmp/hdblcm, /var/tmp/hdblcmgui, /var/tmp/hdbinst, or /var/tmp/hdbupd
  Clean up a partially installed SAP HANA system using hdbuninst
  Enable tracing by setting the environment variable HDB_INSTALLER_TRACE_FILE to <trace file name>
- HLM (SAP HANA Lifecycle Manager) programs:
  hdbinst        command-line tool for installation
  hdbsetup       installation tool with GUI for installation and update


  hdbuninstall   command-line tool for uninstalling and removing hosts
  hdbaddhost     command-line tool for adding a host to a system
  hdbupd         command-line tool for updating software
  hdbrename      command-line tool for renaming a system
  hdbreg         command-line tool for registering an SAP HANA system
  hdbremovehost  command-line tool for removing a host
- The easier way is to use the SAP HANA lifecycle management tools hdblcm or hdblcmgui
- Installation procedure:
  Change to the installation medium (/hana/shared/downloads/DATA_UNITS/HDB_LCM_LINUX_X86_64)
  Start the installer: ./hdblcmgui or ./hdblcm

- If the installation is run in batch mode from the installation medium, the minimum required parameters are the SID and password (specified in XML syntax and streamed in, or specified in a configuration file). If you only provide the SID and password, the other parameters take their default values. If a mandatory parameter without a default is not specified, the installation fails with an error

- For a multi-host system, check the mandatory values on each host before installation
- Default parameters:
  action
  autostart
  certificates_hostmap
  client_path          /hana/shared/<SID>/hdbclient
  components
  copy_repository      /hana/shared/<SID>/hdbstudio_update
  datapath             /hana/data/<SID>
  groupid
  home                 /usr/sap/<SID>/home
  hostname
  install_hostagent
  logpath              /hana/log/<SID>
  number
  root_user
  sapmnt               /hana/shared
  shell                /bin/sh
  studio_path          /hana/shared/<SID>/hdbstudio
  studio_repository
  userid
  timezone
  vm
- These users are created automatically during installation:
  <sid>adm   OS user required for administrative tasks such as start and stop; the group ID and user ID must be unique and identical on each host of a multi-host system
  sapadm     SAP Host Agent administrator


  SYSTEM     initially, the SYSTEM user has all system permissions; these initial permissions can never be revoked

- An SAP HANA system can be installed interactively on the command line using hdblcm, or with the graphical installation tool hdblcmgui
- Advanced installation: automated installation and configuration of a multi-host system using hdblcm
- 3 different methods of using hdblcm:
  o Command-line options:
    ./hdblcm -s <SID> -n <instance#> -G <usergroupid>
  o Configuration file:
    ./hdblcm --configfile=<configfilepath>/<configfilename>.cfg
  o Configuration file in batch mode:
    ./hdblcm --configfile=<configfilepath>/<configfilename>.cfg -b
- Certain parameters are only available in certain installation variants:
  All parameters are available when using: hdblcm only, hdblcm + configuration file, hdblcm + batch, hdblcm + configuration file + batch, hdblcmgui + configuration file, hdblcmgui + command line
  A reduced parameter choice applies when using: hdblcm in interactive mode, hdblcmgui

Lesson 2
- Review host grouping and storage options before installing a multi-host system
- On a multi-host system, additional hosts must be defined as worker machines or standby machines
- Host types:
  Worker machines process data (default)
  Standby machines do not handle any processing; they just wait to take over processes in case of a worker machine failure
- Server roles:
  Master: the actual master index server is assigned on the same host as the name server with the actual role MASTER; its actual index server role is MASTER. The master index server provides metadata for the other active index servers
  Slave: the actual index server role of the remaining hosts is SLAVE (except standby hosts). These are active index servers and are each assigned to one volume. If an active index server fails, the active master name server assigns its volume to one of the standby hosts
  All servers should have the same size
- Typical configuration for a distributed system:
  Host           NS configured role  NS actual role  IS configured role  IS actual role
  Initial host   Master 1            Master          Worker              Master
  1st host added Master 2            Slave           Worker              Slave
  2nd host added Slave               Slave           Worker              Slave
  3rd host added Slave               Slave           Standby             Standby

- Maximum number of master name servers = 3
- Host grouping does not affect the load distribution among worker hosts; load is distributed among all workers. If there are multiple standby hosts, host grouping decides the allocation of standby resources when a worker machine fails. If no host group is specified, all hosts belong to one host group called "default"
- There are 2 types of groups: sapsys groups and host groups
  o SAP system group (sapsys group): the group that contains all hosts in a system. All hosts in a multi-host system must have the same sapsys group ID
  o Host group: a group of hosts that share the same standby resources. If a multi-host system has one standby host, all hosts must be in the same host group (default), so that all hosts have access to the standby host

- In a multi-host system, the database installation path is /hana/shared, the data path /hana/data/<SID>, and the log path /hana/log/<SID> (all shared). The local directory on each host (hana1, hana2, hana3) is /usr/sap/<SID>
- Prerequisite for a multi-host system: /hana/shared, /hana/data/<SID>, and /hana/log/<SID> must be mounted on all hosts, including the primary host
- Perform the following tasks after installation: backup, change passwords (if the vendor installed it as an appliance), finalize customization
- For testing and debugging, it is possible to copy a scale-out landscape to a single node using SAP HANA Studio
- For a single-host SAP HANA system, it is possible to use plain attached storage devices (SCSI hard drives, SSDs, or SANs)
- In a multi-host system with failover capabilities, the storage must ensure that:
  The standby host has file access
  A failed worker host no longer has access to write to the files (called fencing)
- Different storage configurations: shared storage devices (NFS or IBM GPFS), or separate storage devices with failover reassignment
- Externally attached storage subsystem devices are capable of providing dynamic mount points for hosts

Unit 4
Lesson 1
- Post-installation steps:
  o Establish SAP Solution Manager (SOLMAN) connectivity
  o Configure the remote service connection (via SAProuter)

- SAP Support can access the customer database via a local SAP HANA Studio installation
- Involved components:
  o Host Agent (communicates with the SAP HANA database)
  o Diagnostics Agent (communicates with the Host Agent)
  o SOLMAN (the Diagnostics Agent has to be assigned to SOLMAN), consisting of LMDB (Landscape Management Database), DBACOCKPIT, Performance Warehouse, and the Alerting Framework
- Remote connection to SOLMAN: standard SAPGUI and HTTP connections to SOLMAN have to be established (SAP Note 962516)
- Set up RCA (Root Cause Analysis), system monitoring, and EarlyWatch Alert (SAP Note 1747682)

- SAP HANA database service connections (SAP Notes 1592925, 1635304)
- Set up SSH or Telnet remote connections (SAP Notes 1275351, 1327257)
- Set up a Windows Terminal Server connection (SAP Note 605795)
- Two kinds of license keys:
  o Temporary license keys (automatically installed, valid for 90 days)
  o Permanent license keys (if a permanent license key expires, a temporary license key is automatically installed, valid for 28 days)
- To install a license key, use SAP HANA Studio: right-click the system -> Properties -> License -> Install license key
- The customer can assign an amount of memory to a particular SAP HANA instance; this information is provided when requesting the license, and this number is put into the generated license key file. Once the license key is installed, the number is set in the SAP HANA instance and shows up in SAP HANA Studio
- Only a system with a valid license can be backed up. The license is restored with a recovery. If the backup is too old and the license key from the backup has expired, the database will be locked after recovery, and a new valid license needs to be installed to unlock the database

Lesson 2
- Before updating SAP HANA components, make sure no read or write processes are running on the SAP HANA database. Perform the update process in offline mode. After the update, you have to start SAP HANA and its components again
- HLM functions:
  o Rename the SAP HANA system: change the SID, instance number, or hostname; change the system administrator password; change the database user password
  o Register in the System Landscape Directory
  o Add a Solution Manager Diagnostics Agent (SMD)
  o Update the SAP HANA system: update SAP HANA Lifecycle Manager (time required = time for shutdown + time to restart SAP HANA + 20 minutes), apply a Support Package Stack, apply a single support package
  o Make a decision on the source of the archives for the update: an automated update of the SAP HANA system can use archives downloaded automatically from SAP Service Marketplace (needs a host name, a valid S-user and password, and proxy settings), or manually downloaded content (needs the location of the downloaded archive)
  o Add/remove an additional host (the system must already be started)
  o Add/remove an SAP HANA system (the specified host name is an FQDN)
  o Add the Application Function Library (AFL)
  o Add the liveCache application (LCApps)
  o Deploy SAP HANA application content (e.g. HANA Live, HANA RDL, SAP UI, HAVANA)
  o Change the SAP HANA license type
- Available working modes for SAP HANA Lifecycle Manager (providing easy and flexible customization):
  o Using SAP HANA Studio
  o Using the command-line interface (CLI) (applicable for heterogeneous SAP product landscapes)
  o Using a standalone HTML5-enabled web browser
- Uninstall SAP HANA components using the uninstall.sh script; note that it does not uninstall the SAP Host Agent and the SMD Agent (these need to be removed first, before running the uninstall script):
  ./uninstall.sh /tmp/hanainstdir HDB

Unit 5
Lesson 1
- Use the Database Migration Option (DMO) of SUM (Software Update Manager)
- Benefits: migration steps are simplified; system update and database migration are combined in one tool; business downtime is reduced; the original database is kept (it can be reactivated as a fallback); lower prerequisites for the SAP and database start releases; in-place migration keeps the application server and SID stable; the well-known tool SUM is used, with an improved UI; Unicode migration is included
- SUM is not new; it is used for release upgrades, EHP implementations, and applying SP stacks for SAP NetWeaver
- The classical way of migration: upgrade the source database, upgrade the application software, migrate the database, perform the Unicode migration
- Steps of the data migration:
  o Prepare the upgrade
  o Execute the upgrade
  o Switch the database connection (from the traditional database to the SAP HANA database)
  o Migrate the application data (including data conversion)
  o Finalize the upgrade
  o Start the SAP HANA-based system
- SUM:
  o Creates the usual shadow instance and shadow repository on database level (so a shadow system temporarily exists)
  o Copies the shadow repository to the SAP HANA database as the target repository
  o Application data is migrated to the SAP HANA database
  o The target instance kernel is set up with the basic software of the new SAP release
  o Direct access to log files to check status and errors

Unit 6
Lesson 1
- Although SAP HANA is an in-memory database management system, data is also persisted in data and log volumes
- Core processes on a single-node instance:
  o Several processes running in the Linux operating system
  o Daemon (starts all other processes, keeps the other processes running)
  o Indexserver (main database process: data loads, queries, calculations)
  o Nameserver (database landscape, data distribution)
  o Statisticsserver (monitoring service, proactive alerting)
  o Preprocessor (to feed unstructured data into SAP HANA)
  o XSengine (web service component, sometimes referred to as the application server)
- Shared-nothing architecture: each process (indexserver, nameserver, etc.) persists data in its corresponding data and log volumes independently
- The XSengine service can be deactivated and removed if not needed (SAP Note 1867324)
- Starting from SPS7, a new statistics service implementation design makes the statisticsserver component obsolete (SAP Note 1917938)
- Architecture of the SAP HANA indexserver:

  o SAP HANA core processes
  o External interfaces (allow clients to communicate with SAP HANA: queries, data loads, administration)
    SQL interface
    MDX interface
    Web interface
  o Processing engines (operate on data, execute queries) (page 149)
  o Relational engines (store data in memory)
    Row store
    Column store
  o Storage engine (handles data pages, handles the RAM-to-disk transfer)
    Page management (asynchronous writing)
    Logger (synchronous writing)
  o Disk storage (non-volatile data storage)
    Data volume (asynchronous writing – the complete main memory content at a specific point in time)
    Log volume (synchronous writing – changes are written to the log area before the successful commit of a transaction)


- Persistence
  o Data:
    SQL data and undo log information
    Additional SAP HANA information (modeling data)
    Kept in memory for maximum performance
    The write process is asynchronous
  o Log:
    Information about data changes (redo log) such as inserts, deletes, and updates is saved to disk immediately in the logs (synchronously)
    Directly saved to persistent storage when a transaction is committed
    Cyclical overwrite, only after a backup
  o Savepoint:
    Changed data and undo log information are written from memory to persistent storage
    Automatic
    At least every 5 minutes (can be changed)
- Disk access is not a performance bottleneck, since data is written to the data volume asynchronously and the user does not have to wait for this process. When data in main memory is read, there is no need to access persistent storage. When applying changes to data, a transaction cannot be successfully committed before the changes are persisted to the log area. To optimize performance, fast storage (SSD) is used for the log area

- Data volumes are located in file systems:

<Figure: SAP HANA database architecture – external interfaces (SQL, MDX, web); request processing (session management, SQL optimizer, calculation engine, OLAP engine, join engine, row store engine, transaction manager, authorization manager, metadata manager); relational engine (column store, row store); storage engine (page management: asynchronous writing; logger: synchronous writing); disk storage (data volumes, log volumes)>


  o One data volume per instance
  o Each data volume contains one file; data is organized into pages
  o Growing until the disk or LUN is full
  o A logical volume manager (LVM) is needed on OS level to extend file systems
  o Growing with the number of data volumes
  o Different page sizes (page size classes: 4 KB, 16 KB, 16 MB) arranged in superblocks of 64 MB
  o Typical size for data volumes: 4 x RAM
  SAP HANA SPS6:
  o The file size limit is 2 TB when located in an ext3 file system; when 2 TB is reached, additional files are created automatically
  o Allows usage of the ext3 file system with larger memory implementations per host
  o No implications for backup/recovery
  o Monitoring: select * from PUBLIC.M_VOLUME_FILES

- Redo log entries are written synchronously; changed data in the data volumes is periodically (asynchronously) copied to disk during the savepoint operation. During a savepoint, SAP HANA flushes all changed data from memory to the data volumes
- Savepoint frequency: can be configured; a savepoint is also triggered by a data backup, a database shutdown and restart, or manually (using the command ALTER SYSTEM SAVEPOINT)
- Shadow paging concept = write operations write to new physical pages, and the previous savepoint version is still kept in shadow pages
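The shadow-paging idea can be illustrated with a toy model (a hypothetical Python sketch, not HANA code): writes land on new page versions while the last savepoint's pages stay untouched, so a crash can always fall back to the savepoint state.

```python
class ShadowPagedStore:
    """Toy illustration of shadow paging: the current working pages
    and the last savepoint's pages are kept separately."""

    def __init__(self):
        self.savepoint_pages = {}  # page id -> content at last savepoint
        self.current_pages = {}    # page id -> latest (unflushed) content

    def write(self, page_id, content):
        # Write goes to a "new" page; the savepoint copy is untouched.
        self.current_pages[page_id] = content

    def savepoint(self):
        # Flush: the current state becomes the new savepoint version.
        self.savepoint_pages = dict(self.current_pages)

    def crash_recover(self):
        # Discard unflushed changes; fall back to the savepoint state
        # (the redo log, not modelled here, would then be replayed).
        self.current_pages = dict(self.savepoint_pages)

store = ShadowPagedStore()
store.write("p1", "v1")
store.savepoint()
store.write("p1", "v2")   # v1 survives in the shadow page
store.crash_recover()
print(store.current_pages["p1"])
```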

- Savepoint phases:
  o Write the changed pages in parallel
    Acquire a lock to prevent modifications
    Determine the log position
    Remember open transactions
    Copy the modified pages and trigger the write
    Increase the savepoint version
    Release the lock
  o Wait for the I/O requests to finish
  o Write the anchor page
- In the event of a database crash, the data from the last completed savepoint can be read from the data volumes, and the redo log entries written to the log volumes are replayed; thus the data can be restored to the last committed state
- After a system restart, not all tables are loaded into main memory immediately (to allow a short restart time). Only the row store is always loaded entirely; column store tables are loaded when requested. Column store tables (and their attributes) that were loaded before the system restart are reloaded. This may not be necessary in a non-production system; it can be configured in indexserver.ini, section sql, parameter reload_tables = false
- Startup process:
  Row store: loaded completely into memory during startup and has to stay there; secondary indexes are created during the load
  Column store: loaded "lazily" on demand during startup, to ensure early availability
  Important factors for startup: the remaining log to be rolled forward; the I/O performance of the data and log disks; separate log, data, and backup disk areas (logically and physically)

- It is possible to mark individual columns for preload:
  Set the preload flag (possible values: FULL, PARTIALLY, NO)
  Do not set every table to be preloaded; startup may become very slow

- The total amount of memory used is called used memory. Used memory = program code (called the text) and program stack, data tables and system tables, and memory for temporary computations
- Data memory is called the heap
- Physical memory = free + resident (resident consists of SAP HANA, the OS, and other programs)
- Resident memory is the physical memory actually in operational use by a process
- When virtual memory needs to be used, it is loaded or mapped to real, physical memory and becomes "resident"
- SAP HANA reserves a pool of memory before actual use, so the Linux memory indicators (top and meminfo) do not accurately reflect the SAP HANA used memory size (use the SAP HANA monitoring features instead)
- When memory is required, SAP HANA obtains it from the existing memory pool. When the pool cannot satisfy the request, the SAP HANA memory manager requests and reserves more from the OS, and virtual memory grows. Once the need for the memory is gone, the memory manager returns it to the pool without informing Linux. So used memory can be smaller than resident memory; this is normal
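The pool behaviour described above can be modelled with a toy class (purely illustrative; real HANA memory management is far more involved): memory released by the database goes back to the pool, not to the OS, which is exactly why used memory can stay below resident memory.

```python
class MemoryPool:
    """Toy model of a pre-reserved memory pool: resident memory is
    what was reserved from the OS, used memory is what the database
    currently needs out of it."""

    def __init__(self, reserved_mb):
        self.resident_mb = reserved_mb  # reserved from the OS up front
        self.used_mb = 0

    def allocate(self, mb):
        if self.used_mb + mb > self.resident_mb:
            # Pool exhausted: grow the reservation from the OS.
            self.resident_mb = self.used_mb + mb
        self.used_mb += mb

    def release(self, mb):
        # Returned to the pool only; the OS is not informed,
        # so resident_mb does not shrink.
        self.used_mb -= mb

pool = MemoryPool(100)
pool.allocate(80)
pool.release(30)
# used (50) is now smaller than resident (100) -- the normal case
print(pool.used_mb, pool.resident_mb)
```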

- The SAP HANA database may unload tables or columns from memory if a query requires more memory than is available. This unloading is based on a least-recently-used algorithm
- Memory management in the column store: although the column store is optimized for read operations, it also provides good performance for write operations through the use of 2 data structures: main storage and delta storage
- Main storage: compression by creating a dictionary and applying further compression; speeds up data loads into the CPU cache and equality checks; the compression is computed during the delta merge operation; read-optimized
- Delta storage: write-optimized; an update is performed by inserting a new entry into the delta storage; exists only in main memory; only delta log entries are written to the persistence layer when delta entries are inserted
- Read operations always read from both main and delta storage and merge the results; the IMCE (in-memory computing engine) uses multi-version concurrency control to ensure consistent read operations

<Figure: SAP HANA memory pool – the allocated memory pool holds free memory plus HANA used memory (code and stack, system tables, row tables, column tables, database management, table data)>


- Delta merge operation: moves the changes in delta storage into main storage; happens asynchronously; executed on table level; uses a double-buffer concept (advantage: the table only needs to be locked for a short time); minimum memory requirement = current size of main storage + future size of main storage + current size of delta storage + additional memory. Even if a table is only partially loaded, the whole table is loaded into memory to perform the delta merge
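The minimum-memory rule above can be written out as a one-line calculation (an illustrative sketch of the formula in these notes; units are whatever the storage sizes are measured in):

```python
def delta_merge_memory(main_now, main_future, delta_now, additional):
    """Minimum memory needed during a delta merge: both the current
    and the future main storage coexist (double-buffer concept),
    plus the current delta storage and some additional memory."""
    return main_now + main_future + delta_now + additional

# Example: 10 GB current main, 12 GB merged main,
# 3 GB delta, 1 GB additional -> 26 GB needed during the merge
print(delta_merge_memory(10, 12, 3, 1))
```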

- There are several ways to trigger a delta merge:
  o Auto merge (standard method): mergedog (a process) periodically checks the column store tables that are loaded locally and determines whether a merge is necessary, based on configurable criteria (size of delta storage, available memory, time since the last merge, etc.)
  o Smart merge: the application requests the system to check whether a delta merge makes sense now (it issues a smart merge hint). For example, during a large load the application will disable the delta merge temporarily and perform a merge once the load has completed
  o Hard and forced merge: a hard merge is manually triggered using an SQL statement and executed once sufficient resources are available; a forced merge runs regardless of resources and is triggered by passing an optional parameter
  o Critical merge: the database triggers a critical merge to keep the system stable (for example, when auto merge is disabled, no smart merge hint is sent, and the delta storage has grown too large, past the threshold)

- Paged attribute access: SAP HANA can read attribute structures from disk based on pages (reducing overhead, since the data does not have to be stored in memory), reducing the memory footprint; only the needed page is read (not the whole column)
  How to activate this feature:
  alter table <tablename> alter (<column> varchar(80) column loadable, <column> varchar(500) page loadable)
  Things to consider: columns are stored in 64 KB pages instead of the bigger page structure (up to 16 MB pages); because of the smaller chunks, the compression rate is lower; can be used for all non-primary-key columns; beneficial if attributes are often read/changed on a single-record basis; no benefit if the column is often used for analytical scans; more suited for Suite on HANA

- Hybrid LOB: LOBs can be stored in virtual files inside SAP HANA.
Before SPS6, HANA stored LOBs inside the row and column store. Disadvantages: they consume memory, cannot be used for analytics, and cannot be unloaded to disk.
Since SPS6, HANA stores LOBs in virtual files inside HANA. Advantages: each LOB has its own virtual file anchored to its data record; only the LOBs that are needed are loaded; the list of virtual files for LOBs is stored in M_TABLE_LOB_FILES; available for column and row store and for all of the types BLOB, CLOB and NCLOB.

- Advantages of Hybrid LOB: reduced main memory consumption; in case of memory shortage, LOBs are unloaded; if the size exceeds a threshold, the LOB is put on disk; the threshold keeps only small LOBs in memory, so performance stays stable compared to pure in-memory LOBs; bigger LOBs are immediately transferred to disk and a reference is kept in the table structure; LOBs on disk are optimized using a cache with short-term disposition.
How to activate: use an ALTER TABLE statement or the changeLobType python script
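The ALTER TABLE variant can be sketched as follows; the table/column names and the threshold value (in bytes) are illustrative:

```sql
-- Store LOBs up to 1000 bytes in memory; larger LOBs go to disk files
ALTER TABLE "MYSCHEMA"."DOCUMENTS" ALTER ("CONTENT" BLOB MEMORY THRESHOLD 1000);
```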


HANA parameters: default_lob_storage_type is applied to new columns; lob_memory_threshold controls placement (if the LOB size is less than or equal to lob_memory_threshold, it is stored in memory; with a value of 0, all LOB data is stored on disk; with a value of 1, all LOB data is stored in memory)

- Smart Data Access (SDA): enables remote data to be accessed as if it were in local tables in HANA.
Advantages: operational and cost benefits; the ability to access, synthesize and integrate data from multiple systems in real time; no special syntax to access heterogeneous data sources; processing is pushed down to the target data source using smart query processing; remote data types are mapped to HANA data types using automatic data type translation.
Supported remote data sources: Teradata, SAP Sybase IQ, SAP Sybase Adaptive Server Enterprise, Intel Distribution for Apache Hadoop

- New/improved SDA features: support for new remote sources (Oracle, MS SQL Server, Hadoop); extended DML to insert/update/delete on virtual tables; calculation view support for virtual tables; a generic adapter framework (ODBC data sources); remote caching for Hadoop sources; support for CLOB and BLOB data types; hdbsdautil to debug remote source configurations
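A hedged sketch of creating a remote source and a virtual table via the generic ODBC adapter; the source name, DSN, credentials and table names are all illustrative assumptions:

```sql
-- Create a remote source over the generic ODBC adapter
CREATE REMOTE SOURCE "MY_REMOTE" ADAPTER "odbc"
  CONFIGURATION 'DSN=MyRemoteDb'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=remote_user;password=secret';

-- Expose a remote table as a local virtual table and query it like any HANA table
CREATE VIRTUAL TABLE "MYSCHEMA"."V_ORDERS"
  AT "MY_REMOTE"."<NULL>"."REMOTE_SCHEMA"."ORDERS";

SELECT COUNT(*) FROM "MYSCHEMA"."V_ORDERS";
```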

Lesson 2:
- Concurrency control method: solves the problem where one user is reading the database while another user is writing to it.
- Multi-version concurrency control (MVCC): each user connected to the database sees a snapshot of the database at a particular point in time; changes to the database are not seen by other users until they are committed. MVCC uses insert-only data records, which enables long-running transactions and a high level of parallelization.

- When SAP HANA updates an item of data, it does not overwrite the data but marks it as obsolete and adds a newer version. Multiple versions are therefore stored, but only one is the latest. This allows the database to avoid the overhead of filling holes in memory, but requires the system to periodically sweep through and delete old, obsolete data objects.

- MVCC is used to implement different transaction isolation levels: transaction-level snapshot isolation and statement-level snapshot isolation.

- Transaction-level snapshot isolation: all statements of a transaction see the same snapshot of the database. The snapshot contains all changes committed at the time the transaction started, plus the changes made by the transaction itself (equivalent to the SQL isolation level repeatable read)

- Statement-level snapshot isolation: different statements in a transaction may see different snapshots of the database. Each statement sees the changes that were committed when the statement started (equivalent to the SQL isolation level read committed)

- The transaction isolation level can be changed using the SET TRANSACTION command
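The two isolation levels described above map onto SET TRANSACTION as follows; a minimal sketch:

```sql
-- Statement-level snapshot isolation (read committed)
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Transaction-level snapshot isolation (repeatable read)
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```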

Lesson 3:
- SAP HANA Platform edition is composed of:
o SAP HANA Database (installed on SUSE Linux)


o SAP HANA Client & HANA Client for Excel (for connecting to the HANA DB)
o SAP HANA Studio (administration application for the SAP HANA appliance software)
o SAP HANA Lifecycle Manager (tool for customizing the SAP HANA system)
o Host Agent (tool for monitoring & control of SAP instances, non-SAP instances, operating systems and databases)
o SAP HANA AFL/LCApps (application function libraries:
AFL – application function library, pre-delivered business, predictive and other types of algorithms
BFL – business function library, pre-built, parameter-driven algorithms used in finance
PAL – predictive analysis library, predictive analysis and data mining)

o SAP HANA RDL content package (River Definition Language – an SQL-like, declarative data definition language based on SAP HANA Core Data Services)

o SAP HANA INA Toolkit for HTML (built-in enablement of SAP HANA to retrieve and visualize data in an end-user friendly way)

o SAP HANA EPM Content Package (Enterprise Performance Management – to design, deliver and operate Planning and Consolidation Applications)

o SAP HANA Smart Data Access (transparent access to remote database table via HANA proxy tables)

o SAP HANA Studio SAPUI5 Plug-in (JavaScript-based HTML5 browser rendering library for business applications)

o SAP HANA HW Config Check (tool to verify SAP HANA software requirements on proposed hardware capabilities)

o SAP HANA Information Composer (web-based environment which allows business users to upload data to the SAP HANA DB and to manipulate the data by creating information views)

- Components of the SAP HANA Platform edition are divided into:
o Mandatory server components: SAP HANA Database, SAP HANA Client, SAP HANA Studio, SAP HANA Lifecycle Manager, Host Agent, SAP HANA AFL/LCApps

o Optional server components: SAP HANA RDL, SAP HANA INA Toolkit, SAP HANA EPM, SAP HANA SDA, SAP HANA Studio SAPUI5 Plug-in, SAP HANA HW config check, SAP HANA information composer

o Front-end tools
- SAP HANA Platform DUs (delivery units):
Default content: delivery units are an integral part of any SAP HANA Database installation; they are required for SAP HANA to operate as desired and are maintained automatically as part of the lifecycle management of the database component with each revision.
o SAPUI5 Client Runtime
o HANA XS Administration
o HANA XS LM
o HANA TA Config
o HANA XS Base
o HANA UI Integration Svc
o HANA UI Integration Cont.


o SAP HANA Admin
o SAP HANA IDE | IDE core
Optional content: delivered together with the SAP HANA Database but not automatically available/active.
o INA Service
o SAP HANA DXC
o HANA EPM Svc
Add-ons: part of the SAP HANA product, available for download from the SAP Marketplace, and installable through the HANA Lifecycle Manager from within SAP HANA Studio.
o HANA INA Toolkit for HTML
o HANA RDL Cont

Lesson 4:
- SAP HANA: an in-memory database management system that also comprises many additional features: spatial processing, search and text mining, and integrated libraries.
- SAP HANA scenarios:

o Side-by-side: SAP HANA is added as an additional component to the existing landscape (example: data marts)
Agile data marts:
More flexibility compared to an Enterprise Data Warehouse
Data is loaded using ETL (Data Services)
Data has been transformed
Based on analytic data models
Operational data marts:
Views calculate results for reports in real time on actual operational data
No transformation during load
Real-time replication of time-critical data (SLT)
SAP HANA Accelerators:
Turnkey solutions to accelerate standard ABAP reports and business processes in ERP
Flexible reporting using BO BI clients
Using SLT and DBSL (Database Shared Library)

o Integrated: SAP HANA is used as the primary database
BW on HANA, SAP Business Suite powered by SAP HANA
o SAP HANA as an application platform
Any application can connect to HANA using standard interfaces: JDBC, ODBC
Native SAP HANA applications can be implemented in SAP HANA without an additional application server, on the basis of SAP HANA XS (extended application services)

o Combination of multiple SAP HANA scenarios

Lesson 5:
- SAP HANA deployment options:

o On-premise:


Pre-configured appliance: pre-configured hardware, pre-installed software, solution validation done by SAP

HANA TDI (tailored datacenter integration): installation by the customer, more flexibility, saves IT budget and protects existing investments

Virtualized with VMware vSphere
o On-demand/cloud:

SAP HANA One: fully featured SAP HANA hosted in public cloud, hourly subscription basis

SAP HANA Developer edition
SAP HANA infrastructure subscription: monthly subscription basis, quickly deploy an existing SAP HANA license
SAP HANA platform as a service: platform as a service in a cloud environment, monthly subscription
SAP HANA managed service: enterprise-class SAP HANA in the cloud, monthly subscription

o Hybrid: migrate some solutions to the cloud
- Running multiple scenarios on one system or database:

o Virtualization:
1 database schema per database
Separate HANA database per SAP system
Separate virtual machine and O/S
Shared hardware and storage
Restriction: non-production systems, single node up to 1 TB

o Multiple components on one system (MCOS):
1 database schema per database
Separate HANA database per SAP system
Shared hardware, storage and O/S
Restriction: non-production systems

o Multiple components on one database (MCOD):
Multiple database schemas per database
Shared SAP HANA database
Dedicated application server per application
Shared hardware, storage and O/S
Restriction: non-production systems, single node/multi node

o Technical co-deployment:
1 SAP HANA database, 1 schema
1 ABAP application server / SID
Available for SRM and SCM as ERP add-ons
Usage: production & non-production, single node/multi node, can be combined with virtualization

Unit 7
Lesson 1:
- SAP HANA administration tools:

o HANA Studio
Administration: start/stop the HANA database, backup & recovery, user & role management, configuration changes, SAP HANA modeler, lifecycle management
Monitoring: integration of all SAP HANA databases, detailed views


Alerting: alerts are generated automatically, adjustable alert thresholds, configuration of email notifications
Tracing: change trace levels, display trace files, view merged traces

o SOLMAN
Basic administration and holistic monitoring within existing SAP landscapes through DBA Cockpit, Solution Manager Diagnostics, System Landscape Directory (SLD), Maintenance Optimizer (MOPZ), early problem analysis and transport integration

o SAP DBA Cockpit
Administration: schedule backups, configuration changes
Monitoring: integration of all SAP HANA databases (via SLD & manually), detailed views, integration with Solution Manager Performance Warehouse
Alerting: alerts are generated automatically, integration into SOLMAN E2E
Tracing: change trace levels, display trace files

Lesson 2:
- SAP HANA Studio:
Consists of several perspectives/applications: Administration Console, Information Modeler, Lifecycle Management
Is used by developers to create content (modeled views, stored procedures)
Development artifacts are stored in a repository

- There are 2 ways to add a system in HANA Studio:
o Add system (you have to provide: hostname, instance #, description, database user, password)
o Add system archive link – one user can manage the list of all systems in a centrally accessible archive (File -> Export -> SAP HANA -> Landscape) and others can link to this archive
Advantage: more efficient, avoids users having to obtain connection details and add them individually, and users always have up-to-date system access

- In the system navigator screen (left-hand side) you can see:
o Backup: backup configuration (destination, file size), backup catalog, snapshots
o Catalog: schemas with tables (column and row store), functions, procedures
o Content: packages (development and modeling artifacts) & views
o Provisioning: smart data access, remote data sources, proxy tables
o Security: users and roles, security settings

- From the context menu of the Systems view, you can: add a system, stop/start/restart a system, open system properties, back up/recover a system, take a storage snapshot, import/export catalog objects, open an SQL console, find a table, open a table definition

- Administration Console perspective: contains database administration and monitoring features.
There are 3 screen areas: Systems view (left-hand), editor area (top right-hand), other views (bottom right-hand)
There are these tabs: Overview, Landscape, Alerts, Performance, Volumes, Configuration, System Information, Diagnosis Files, Trace Configuration


- Overview tab: the most important information about the system at a glance (you can navigate to more detailed information): system status, system information, current alerts, memory usage, CPU usage, disk usage

- SAP HANA Studio normally collects information about the system using SQL, but when the system is not yet started or is down and no SQL connection is available, HANA Studio collects information using the SAP start service (sapstartsrv). This information can be viewed in diagnosis mode as the operating system user <sid>adm

Lesson 3:
- Start DBA Cockpit using tcode: DBACOCKPIT
- DBACOCKPIT layout:
Application Toolbar: basic functions to display/hide the system landscape toolbar and navigation frame
System Landscape Toolbar: central functions to manage the system landscape: manage database connections and choose the system to monitor
Navigation Frame: quick access to analysis information, i.e. performance monitoring, space management, job scheduling
The navigation frame contains: current status folder (overview and alerts), configuration folder (.ini files), performance, jobs, diagnostics, system information
Framework Message Window: complete history of messages sent during the session
Central System Data: provides the time of the last refresh, the database startup time and the database name
Action Area: displays details of the currently selected action
Action Message Window: additional information for the selected action
The DBA Planning Calendar is only available in DBACOCKPIT, not in HANA Studio

- Integrating SAP HANA as a remote database (with SOLMAN version 7.10 SP04)
Prerequisites for SOLMAN integration: installation of the HANA client software, kernel version min. 7.20 patch 100, SAP HANA DBSL min. 7.20 patch 110, SAP Host Agent min. 7.20 patch 84, SAP SOLMAN Diagnostics Agent

(Screenshot: the DBACOCKPIT layout, labeling the areas Application Toolbar, System Landscape Toolbar, Navigation Frame, Central System Data, Action Area, Action Message Window and Framework Message Window)


Refer to these OSS notes: 1664432, 1612172, 1672429, 1721598
- To connect to a remote SAP HANA database, add a secondary database connection
Define: connection name, database system (SAP HANA DB), user name (an SAP HANA DB user with monitoring privileges) & password, database host, SQL port (3##15)
(this can also be done from another system, it doesn't have to be from SOLMAN, as long as you set up the secondary database connection; it can be done from FID)

Current status: overview of the statuses of the most important database resources (disk space, memory, CPU, services, alerts, time when the database was started)
Performance: performance-relevant information
Configuration: overview of the configuration files
Jobs: DBA Planning Calendar
Diagnostics: trace possibilities (SQLDBC trace, database trace, explain)
System information: deeper investigation when analyzing performance issues
Documentation: links to documentation available on SDN
System landscape

- You can analyze the performance of your database system using the Performance Warehouse; prerequisite: SOLMAN with SMD enabled. All performance indicators are stored in a BI system and used by SMD (to configure it, use the SMD Setup Wizard)

- Diagnostics consists of: audit log (all actions that make changes to the database), missing tables and indexes (not available for remote systems), explain (execution plans for select, insert, update, delete), SQL editor (to execute SQL statements), tables/views (display/monitor a table or view), diagnosis files (used for SAP HANA databases that are offline), SQLDBC trace (activate, deactivate, analyze the SQLDBC trace), database trace (activate, deactivate, analyze the trace)

- System Information consists of: connections (detailed info about open connections), transactions (display open transactions), connection statistics (network I/O statistics), caches (caches created by the SAP HANA DB), query cache (where executed SQL statements are cached), large tables (the largest tables in SAP HANA, table sizes, delta sizes, fastest-growing tables), SQL workload (overview of the statements that were executed)


Lesson 4:
- HDBSQL features:
Execute SQL statements
Execute DB procedures
Request information about the database catalog
Execute shell commands
Execute commands
Overview of all HDBSQL call options
Overview of all HDBSQL commands

- There are two different logon options: one-step logon (with user name and password) and two-step logon (start hdbsql first, then connect to the system)

- One-step logon command: hdbsql [<options>] -n <database_host> -i <instance_id> -u <database_user> -p <database_user_password>

- Two-step logon command:
hdbsql [<options>]
\c [<options>] -n <database_host> -i <instance_id> -u <database_user> -p <database_user_password>

- Command to display general information: \s
- Command to exit: exit or quit or \q
- Command to display all commands: \? or \h
- hdbuserstore can be used to connect to SAP HANA; it is located in /hana/shared/<SID>/hdbclient
- To create an entry in hdbuserstore:
hdbuserstore SET <userkey> <hostname>:3##15 <username> <password>
- To display all user store keys:
hdbuserstore LIST

Unit 8
Lesson 1:
- Different ways to start/stop the SAP HANA system:
HANA Studio (you must know the <sid>adm credentials)
Use OS commands:
Log in as <sid>adm and execute: HDB start or HDB stop (this only starts and stops the local host)
As root, execute:
sapcontrol -nr 00 -function StopSystem ALL
sapcontrol -nr 00 -function StartSystem ALL
sapcontrol -nr 00 -function GetProcessList
Or as <sid>adm using sudo (defined first in sudoers)
These commands are used in scale-out HANA systems

- When the system is started, these activities are executed:
o The database retrieves the status of the last committed transaction
o All changes of committed transactions that were not yet written to the data area are redone
o All write transactions that were open when the database stopped are rolled back
o Row tables are loaded into memory
o A savepoint is performed


o Relevant column tables and their attributes are loaded into memory asynchronously

- Stopping the SAP HANA database:
o Hard: forces all database services on all hosts to stop immediately
o Soft: triggers a savepoint operation before stopping all database services
o Stop wait timeout: how long to wait for services to stop; if the timeout expires, the remaining services are shut down
- You can start or stop individual database services with the system privilege SERVICE ADMIN; these options are available:
Stop – the service is stopped normally and restarted
Kill – the service is stopped immediately and restarted
Reconfigure service – the service is reconfigured and changes to parameters are applied
Start missing services – any inactive services are started

Lesson 2:
- Configuring SAP HANA:
o Database user logon: connection details and authentication
o JDBC trace: enable the JDBC trace to identify issues with HANA Studio connectivity
o License: display and install new license keys
o Resource: change the description of the system
o SAP system logon: enter and store the <sid>adm credentials
o Security: maintain the SAML identity provider
o Version history: display version and installation time
o XS properties: maintain the XS host

- Organizing SAP HANA Systems in folders: only available from Administration console perspective

- Maintaining SAP HANA Studio preferences: Window -> Preferences
If all the other services are running but there is an error, this could be because sapstartsrv cannot be reached due to an incorrectly configured HTTP proxy:
Go to Window -> Preferences -> Network Connections -> change from Native to Direct

- Parameters can be changed and displayed in the Configuration tab of the Administration editor. You need the privilege INIFILE ADMIN.
Parameters that are active at system level are indicated by an icon; parameters that are active and deviate from the default are indicated with a green icon

- Configuration files are located in:
o Global parameters: /usr/sap/<sid>/SYS/global/hdb/custom/config
o Server parameters: /usr/sap/<sid>/HDB/<instance #>/<server name>

- During the installation of the SAP HANA database, these configuration files are created:
o sapprofile.ini – system ID information: SID and instance #
o daemon.ini – information on which database services to start
o nameserver.ini – global information, system-specific landscape ID, assignment of roles (MASTER, WORKER, or STANDBY)
- The default global_allocation_limit is calculated as: 90% of the first 64 GB of available physical memory on the host + 97% of each further GB
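As a worked example of this formula (the 256 GB host size is an illustrative assumption; the SELECT ... FROM DUMMY form merely phrases the arithmetic in SQL):

```sql
-- Assumed host: 256 GB physical memory
-- default limit = 0.90 * 64 GB + 0.97 * (256 - 64) GB
--               = 57.6 GB + 186.24 GB = 243.84 GB
SELECT ROUND(0.90 * 64 + 0.97 * (256 - 64), 2) AS default_limit_gb FROM DUMMY;
```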


- savepoint_interval_s: how often the internal buffer is flushed to disk and a restart record is written

- log_mode set to normal: allows point-in-time recovery
log_mode set to overwrite: you can only recover to a specific data backup

- enable_auto_log_backup: prevents a log-full situation that can cause the database to freeze
- log_buffer_size_kb: sets the size of one in-memory log buffer (a higher buffer size increases throughput, but COMMIT latency is higher)
- content_vendor: has to be maintained before creating a delivery unit
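Parameters such as these can also be changed via SQL rather than the Configuration tab; a hedged sketch using log_mode as the example:

```sql
-- Set log_mode to 'normal' in global.ini at the SYSTEM layer and apply immediately
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE;
```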

Lesson 3:
- When to use the column store:
Calculations on a small number of columns; the table is searched based on the values of a few columns; the table has a large number of columns; the table has a large number of rows and columnar operations are required; high compression rates shall be achieved

- Sample SQL command to create a column store table:
CREATE COLUMN TABLE "<schema>"."<table name>" (
"<column1>" <type>(<length>) DEFAULT '' NOT NULL,
"<column2>" <type>(<length>) DEFAULT '',
PRIMARY KEY ("<column1>"))

- When to use the row store:
Processing a single record at a time / many selects and updates; accessing complete records; columns contain mainly distinct values; no aggregation or fast search is required; small number of rows

- Tables and views can be opened in different ways:
Table definition – information about the table structure and properties
Table content – executes a select statement on the table
Data preview – analyze the content in different ways

- Table partitioning and distribution:
Split a column store table horizontally into disjunctive sub-tables/partitions
Additional DDL statements for partitioning: create partitions, move partitions to other hosts, add/delete partitions, re-partition a table, merge partitions back into one table

- Advantages of partitioning: load balancing (across multiple hosts), parallelization (several execution threads), partition pruning (improves response time), improved delta merge performance (depends on the size of the main index), overcoming the size limitation of column store tables (max 2 billion rows), explicit partition handling

- Single-level partitioning:
Hash partitioning: data is distributed equally
Range partitioning: dedicated partitions for certain value ranges
Round robin: data is distributed equally like hash, but there is no need to define a partitioning column (tables must not have primary keys)
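The three single-level schemes can be sketched in DDL as follows; schema, table and column names, partition counts and range bounds are illustrative:

```sql
-- Hash partitioning on the primary key column, 4 partitions
CREATE COLUMN TABLE "MYSCHEMA"."SALES_H" (id INT PRIMARY KEY, amount DECIMAL(15,2))
  PARTITION BY HASH (id) PARTITIONS 4;

-- Range partitioning: dedicated partitions for certain value ranges
CREATE COLUMN TABLE "MYSCHEMA"."SALES_R" (id INT PRIMARY KEY, yr INT)
  PARTITION BY RANGE (id) (PARTITION 1 <= VALUES < 1000000, PARTITION OTHERS);

-- Round robin: no partitioning column; the table must not have a primary key
CREATE COLUMN TABLE "MYSCHEMA"."LOG_RR" (msg NVARCHAR(200))
  PARTITION BY ROUNDROBIN PARTITIONS 4;
```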

- A special time-selection partitioning is called aging (data is partitioned into different temperatures such as hot or cold)

- Table Distribution editor:
Move tables and partitions to other hosts in the system
Partition non-partitioned tables


Change a partitioned table into a non-partitioned one by merging its partitions
- For partitioned tables, there are 2 checks:
o General check: consistency check
CALL check_table_consistency ('check_partitioning', '<schema>', '<table>')
o Data check: the general check plus a check that all rows are located in the correct partitions
CALL check_table_consistency ('check_partitioning_data', '<schema>', '<table>')
CALL check_table_consistency ('repair_partitioning_data', '<schema>', '<table>')

- There is an option to replicate a table to multiple hosts, which is useful when master data has to be joined with other tables located on multiple hosts and you want to reduce network traffic:
CREATE COLUMN TABLE <table1> (i INT PRIMARY KEY) REPLICA AT ALL LOCATIONS

- SAP HANA manages the loading and unloading of tables into and from memory independently, but you can also do it manually:
Load and unload table commands:
LOAD <table_name>
UNLOAD <table_name>
Load and unload individual columns:
LOAD <table_name> (<column_name>, ...)
UNLOAD <table_name> (<column_name>, ...)

- You can also perform the delta merge manually:
MERGE DELTA OF <table_name>

- You can export catalog objects (including tables) to the file system and import them back into another database (metadata only, or metadata and content)
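The SQL form of this export/import can be sketched as follows; the schema, table and server-side path are illustrative assumptions:

```sql
-- Export a table as binary (metadata and content) to a server-side directory
EXPORT "MYSCHEMA"."MYTABLE" AS BINARY INTO '/tmp/export' WITH REPLACE;

-- Import it again on the target system
IMPORT "MYSCHEMA"."MYTABLE" FROM '/tmp/export';
```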

Lesson 4:
- Administrative tasks:
o Initial tasks: full data and file system backup, install a valid license
o Regular tasks:
Check system status: overall system state, general system information, alerts, CPU/memory/file system utilization
Check status of services (from the Landscape tab): list of services, status, detailed resource consumption; you can restart/kill/stop/reconfigure services and reset memory statistics
Perform data backups
Check alerts and error logs
Check performance
Check volume configuration
Maintain configuration
Check system information

o On-demand tasks:
Check diagnosis files
Activate and analyze additional traces
Avoid log-full situations
Avoid the log backup area becoming full
Monitor disk space used for diagnosis files


- Monitoring resource utilization and memory allocation:
Shows peak used memory and used memory. There is also a feature to limit maximum memory consumption (parameter: statement_memory_limit=<integer> in GB, in the memorymanager section of global.ini)

- Memory display:
o show peak used memory
o show used memory
o reset memory statistics

- Memory Overview editor:
o details of memory usage (pie chart): physical memory, used memory, memory usage of tables, memory usage of database management
- Memory Allocation Statistics editor:

o for each selected service, components are listed based on currently used memory
o SAP HANA used memory is displayed in a pie chart
o for each component, allocators are listed based on inclusive memory
o top 10 highest-consuming allocators

- The Hosts sub-tab displays:
o all hosts in a distributed system
o failover status
o host reconfiguration options
o removal of a host from the system

- The Redistribution sub-tab displays:
o redistribution of data before removing a host
o redistribution of data after adding a host
o optimize table distribution
o optimize table partitioning

- The System Replication sub-tab displays:
o initial system replication configuration to establish the connection between 2 identical systems
o system replication status to make sure that both systems are in sync
o trigger failover to the secondary system

- The Alerts sub-tab:
o current alerts
o detailed information on individual alerts
o alerts sorted by time period (last 15, 30, 60, 120 minutes, today, yesterday, last week)
- The Performance tab contains these sub-tabs:
o Threads
o Jobs
o Expensive statements
o SQL plan cache
o Blocked transactions
o Sessions
o Load


- Threads sub-tab: all running threads, blocked threads, thread details, end an operation, view the call stack. Threads can be grouped by: connection ID, call hierarchy, duration

- Sessions sub-tab: all sessions (active/inactive), blocked sessions, statistics (avg. query runtime, # of DML and DDL statements), cancel sessions

- Blocked Transactions sub-tab: shows transactions that cannot be processed further because they need to acquire a transactional lock that is currently held by another transaction, or because they are blocked waiting for other resources (disk or network)

- SQL Plan Cache sub-tab: for performance analysis; gives an overview of the statements executed in the system, stores compiled execution plans of SQL statements for reuse, and keeps statistics on each plan for monitoring

- Expensive Statements sub-tab: individual SQL queries whose execution exceeded a threshold; they may reduce the performance of the database; the trace records information for further analysis. The expensive statements trace is deactivated by default.

- Job Progress sub-tab: monitor long-running operations such as delta merges, data compression and delta log replays; current high load; start time; when they will finish

- Load sub-tab: display of current performance (CPU usage, memory consumption, table unloads)

- In the Volumes tab, you can monitor:
o disk usage
o volume sizes
o other disk activity statistics
