Essbase Optimize With Hyper V

learnobiee.wordpress.com [email protected] for all Hyperion video tutorial/Training/Certification. Essbase Optimization Techniques. Amit Sharma, Hyperion Trainer, learnhyperion.wordpress.com


Transcript of Essbase Optimize With Hyper V

Page 1: Essbase Optimize With Hyper V


Essbase Optimization Techniques

Amit Sharma, Hyperion Trainer, learnhyperion.wordpress.com

Page 2: Essbase Optimize With Hyper V


Essbase Optimization

The first step of Essbase optimization is to monitor Essbase performance and identify performance bottlenecks.

Viewing Essbase Server/Database Information

MaxL: display application

ESSCMD: GETAPPSTATE, GETPERFSTATS

ESSCMD: GETAPPINFO, GETDBINFO, LISTLOCKS, UNLOCKOBJECT


Monitoring User Sessions and Requests

MaxL: display session, alter system

alter system logout session by user 'admin' on application sample force;
UNLOCKOBJECT 1 sample basic basic


Page 3: Essbase Optimize With Hyper V


Database Cache Settings

This section describes database cache settings and lists the location of the settings in Administration Services, MaxL, and ESSCMD.

Page 4: Essbase Optimize With Hyper V


Optimizing Database Caches

Analytic Services depends on caches for indexing, paging, and calculating. Inadequate cache settings can significantly impact database processes.


Index Cache

When you request a data block, the index is used to find its location on disk. If the block location is not found in the index cache, the index page that has the block entry is pulled into memory (into the index cache) from the disk. If the index cache is full, the least recently used index page in memory (in the index cache) is dropped to make room for the new index page.


alter database set index_cache_size
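A minimal MaxL sketch of setting the index cache; the Sample.Basic application/database and the 10 MB value are illustrative assumptions, not recommendations:

```maxl
/* Sketch: set the index cache for the assumed Sample.Basic database.
   The size is given in bytes (here ~10 MB); the new setting takes
   effect when the database is restarted. */
alter database Sample.Basic set index_cache_size 10485760;
```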

Page 5: Essbase Optimize With Hyper V


[Slide diagram: Block Numbering. The index maps each sparse member combination (100-10, New York; 100-20, New York; 100-30, New York; 100, New York; 200-20, New York; ... 100-10, Massachusetts; 100-20, Massachusetts; ...) to a data block number on disk.]

Optimizing Database Caches

Page 6: Essbase Optimize With Hyper V


Data Cache

Optimizing Database Caches

Data blocks can reside on physical disk and in RAM. The amount of memory allocated for blocks is called the data cache.


When a block is requested, the data cache is searched. If the block is found in the data cache, it is accessed immediately. If the block is not found in the data cache, the index is searched for the appropriate block number. The block's index entry is then used to retrieve the block from the proper data file on disk.


alter database set data_cache_size
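A matching MaxL sketch for the data cache; Sample.Basic and the size value are assumptions for illustration:

```maxl
/* Sketch: set the data cache for the assumed Sample.Basic database.
   The value (~36 MB here) is illustrative; restart the database for
   the new setting to take effect. */
alter database Sample.Basic set data_cache_size 37748736;
```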

Page 7: Essbase Optimize With Hyper V


Understanding Buffered I/O and Direct I/O

The Essbase Kernel uses buffered I/O (input/output) by default, but direct I/O is available on most of the operating systems and file systems that Essbase supports.


Buffered I/O uses the file system buffer cache. Direct I/O bypasses the file system buffer cache and is able to perform asynchronous, overlapped I/Os. The following benefits are provided:

● Faster response time. A user waits less time for Essbase to return data.
● Scalability and predictability. Essbase lets you customize the optimal cache sizes for its databases.

If you set a database to use direct I/O, Essbase attempts to use direct I/O the next time the database is started. If direct I/O is not available on your platform at the time the database is started, Essbase uses buffered I/O, which is the default. However, Essbase will store the I/O access mode selection in the security file, and will attempt to use that I/O access mode each time the database is started.

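Switching the I/O access mode can be sketched in MaxL (Sample.Basic is an assumed database name):

```maxl
/* Sketch: request direct I/O for the assumed Sample.Basic database.
   The change takes effect the next time the database is started; if
   direct I/O is unavailable on the platform, Essbase falls back to
   buffered I/O but keeps the selection in the security file. */
alter database Sample.Basic set io_access_mode direct;
```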

Page 8: Essbase Optimize With Hyper V


The Data file cache setting is the size of the buffer used to load the database page files into memory. This setting is only relevant if you have your database set to Direct I/O. Buffered I/O is the default, and only on the largest of data loads has tweaking this setting made any noticeable difference.

The Data file cache setting can be set as large as the combined total size of all of the database page files. On a 30 GB database, a data file cache of that size may not be practical, or even possible, especially if you only have 16 GB of system RAM!

The Data cache setting is usually fine left at the default of 3072 KB. The recommended maximum size is 0.125 times the size of the Data file cache setting. Only change this setting if you are experiencing performance issues and have many concurrent users accessing the database.

The Index page setting is a static number and cannot be changed. Oracle has determined that 8 KB is sufficient for a database index page size.

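For databases running with direct I/O, the data file cache is set the same way as the other caches; a hedged MaxL sketch (Sample.Basic and the 300 MB figure are assumptions):

```maxl
/* Sketch: set the data file cache (used only with direct I/O) for the
   assumed Sample.Basic database. The ~300 MB value is illustrative
   and should be sized against the combined .pag file size and RAM. */
alter database Sample.Basic set data_file_cache_size 314572800;
```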

Page 9: Essbase Optimize With Hyper V


Calculator Cache

Optimizing Database Caches

The calculator cache is a buffer in memory that Essbase uses to create and track data blocks during calculation operations. Essbase can create a bitmap, whose size is controlled by the size of the calculator cache, to record and track data blocks during a calculation. Determining which blocks exist using the bitmap is faster than accessing the disk to obtain the information, particularly if calculating a database for the first time or calculating a database when the data is very sparse.


The dynamic calculator cache is a buffer in memory that Essbase uses to store all of the blocks needed for a calculation of a Dynamic Calc member in a dense dimension (for example, for a query). Essbase uses a separate dynamic calculator cache for each open database. The DYNCALCCACHEMAXSIZE setting in the essbase.cfg file specifies the maximum size of each dynamic calculator cache on the server.


Dynamic Calculator Cache
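Both caches are governed by essbase.cfg settings; a sketch of the relevant entries (all values are illustrative assumptions, not recommendations):

```
; essbase.cfg sketch -- values are illustrative assumptions
; calculator cache ceilings selectable in calc scripts via SET CACHE HIGH|DEFAULT|LOW
CALCCACHEHIGH    2000000
CALCCACHEDEFAULT  300000
CALCCACHELOW      200000
; cap each open database's dynamic calculator cache at 32 MB
DYNCALCCACHEMAXSIZE 32M
```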

Page 10: Essbase Optimize With Hyper V


Preventing or Removing Fragmentation

You can prevent and remove fragmentation:

•To prevent fragmentation, optimize data loads by sorting load records based upon sparse dimension members.

•To remove fragmentation, perform an export of the database, delete all data in the database with CLEARDATA, and reload the export file.

•To remove fragmentation, force a dense restructure of the database.

Types of Database Restructuring

This section describes the two ways that a database restructure is triggered:

Implicit Restructures

Explicit Restructures
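The fragmentation-removal options above can be sketched in MaxL (Sample.Basic and the export file name are assumptions; the CLEARDATA step is shown as its MaxL equivalent):

```maxl
/* Sketch 1: remove fragmentation by forcing a dense restructure. */
alter database Sample.Basic force restructure;

/* Sketch 2: export, clear all data, and reload the export file. */
export database Sample.Basic all data to data_file 'sample_full.txt';
alter database Sample.Basic reset data;
import database Sample.Basic data from data_file 'sample_full.txt'
    on error abort;
```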

Page 11: Essbase Optimize With Hyper V


Hit Ratio – Percentage of searches that did not involve retrieving from disk.

Percentage of Maximum Blocks Existing – A percentage comparison between existing blocks and potential blocks.

Compression Ratio – Ratio of compressed block size to expanded block size.

The average clustering ratio database statistic indicates the fragmentation level of the data (.pag) files. The maximum value, 1, indicates no fragmentation.

Page 12: Essbase Optimize With Hyper V


Other Database Settings

This section describes miscellaneous database settings and lists the location of the settings in Administration Services, MaxL, and ESSCMD.

Page 13: Essbase Optimize With Hyper V


Ten Steps To Optimization

Step 1: The Starting Line: Model Analysis

Minimize the number of dimensions. Do not ask for everything in one model.

Minimize the complexity of individual dimensions. Consider UDAs and Attribute Dimensions in order to reduce the size of some of the dimensions. Examine the level of granularity in the dimensions.


Step 2: Order The Outline

Sparse: smallest to largest

Dense: largest to smallest

Hourglass Model

Dense dimensions go from largest to smallest. Small and large are measured simply by counting the number of stored members in a dimension. The effect of sparse dimension ordering is much greater than that of dense dimension ordering.

Sparse dimensions go from smallest to largest. This relates directly to how the calculator cache functions.

Page 14: Essbase Optimize With Hyper V


Ten Steps To Optimization

Step 3: Evaluate Dense/Sparse Settings

Finding the optimal configuration for the dense/sparse settings is the most important step in tuning a database.

Optimize the block size. This varies per operating system, but in choosing the best dense/sparse configuration keep in mind that blocks over 100 KB tend to yield poorer performance. In general, Analytic Services runs optimally with smaller block sizes.


Step 4: System Tuning

System tuning is dependent on the type of hardware and operating system.

Keep memory size higher. Ensure there is no conflict for resources with other applications.


Page 15: Essbase Optimize With Hyper V


Ten Steps To Optimization

Step 5: Cache Settings

The recommended cache settings depend strongly on your specific situation. To measure the effectiveness of the cache settings, keep track of the time taken to do a calculation and examine the hit ratio statistics in your database information.


Step 6: Optimize Data Loads

Know your database configuration settings (which dimensions are dense and sparse). Organize the data file so that it is sorted on sparse dimensions. The most effective data load is one which makes the fewest passes on the database. Hence, by sorting on sparse dimensions, you are loading a block fully before moving to the next one.

Load data locally on the server. If you are loading from a raw data file dump, make sure the data file is on the server. If it is on the client, you may bottleneck on the network.


Page 16: Essbase Optimize With Hyper V


Ten Steps To Optimization

Step 7: Optimize Retrievals

Increase the Retrieval Buffer size. This helps if retrievals are affected by dynamic calculations and attribute dimensions.

Increase the Retrieval Sort Buffer size if you are performing queries involving sorting or ranking.

Smaller block sizes tend to give better retrieval performance. Logically, this makes sense because it usually implies less I/O. Smaller reports retrieve faster.

Attribute dimensions may impact calculation performance, which usually has a higher importance from a performance standpoint.

If you have a lot of dynamic calculations or attribute dimensions, higher Index cache settings may help performance since blocks are found quicker.


Page 17: Essbase Optimize With Hyper V


Ten Steps To Optimization

Step 8: Optimize Calculations

Unary calculations are the fastest. Try to put everything in the outline and perform a Calc All when possible.

FIX on sparse dimensions, IF on dense dimensions. FIX statements on sparse dimensions bring into memory only the blocks with the sparse combinations the calc has focused on. IF statements on dense dimensions operate on blocks as they are brought into memory.

Use the Two Pass Calculation tag. Try to avoid multiple passes on the database. Use Intelligent Calc in the case of simple calc scripts.

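A hedged calc-script sketch of the FIX-on-sparse pattern, assuming the Sample.Basic outline (Market and Product sparse; Scenario, Measures, and Year dense) and its member names:

```
SET UPDATECALC ON;           /* Intelligent Calc for a simple script */

/* FIX on sparse members so only the matching blocks are read */
FIX ("New York", "Colas")
    /* dense logic runs inside each block as it is brought into memory */
    "Variance" = "Actual" - "Budget";
ENDFIX
```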

Page 18: Essbase Optimize With Hyper V


Ten Steps To Optimization

Step 9: Defragmentation

Fragmentation occurs over time as data blocks are updated. As the data blocks are updated, they grow (assuming you are using compression) and the updated blocks are appended to the page file. This tends to leave small free space gaps in the page file.


Time – The longer you run your database without clearing and reloading, the more likely it is that it has become fragmented.

Incremental Loads – These usually lead to lots of updates for blocks.

Many Calculations/Many Passes on the Database – Incremental calculations, or calculations that pass through the data blocks multiple times, lead to fragmentation.

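Fragmentation can be checked from the database statistics, which include the average clustering ratio mentioned earlier; a MaxL sketch (Sample.Basic assumed):

```maxl
/* Sketch: query block statistics for the assumed Sample.Basic database.
   An average clustering ratio near 1 indicates little fragmentation. */
query database Sample.Basic get dbstats data_block;
```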

Page 19: Essbase Optimize With Hyper V


Ten Steps To Optimization

Step 10: Partition

By breaking up one large database into smaller pieces, calculation performance may be optimized. Because this adds a significant layer of complexity to administration, this is the last of the optimization steps we list. However, this does not mean that it has the least impact.


Page 20: Essbase Optimize With Hyper V


The Essbase optimization main items checklist (for block storage cubes)

Block size: A large block size means bigger chunks of data to be pulled into memory with each read, but might also mean more of the data you need is in memory IF your operations are done mainly in-block. Generally I prefer smaller block sizes, but there is not a specific guide. The Essbase Admin Guide says blocks should be between 1 and 100 KB in size, but nowadays with more memory on servers this can be larger. My experience is to make blocks below 50 KB but not less than 1-2 KB, though this is all dependent on the actual data density in the cube. Do not be afraid to experiment with dense and sparse settings to get to the optimal block size; I have done numerous cubes with just one dimension as dense (typically a large account dimension), and cubes where neither the account nor time dimension is dense. You will know you have good block size selection by looking at the next point, block density.

Block density: This gives an indication of the average percentage of each block which contains data. In general data is sparse, therefore a value over 1% is actually quite good. If your block density is over 5%, then your dense/sparse setting is generally spot-on. Look at this whenever you change dense and sparse settings in conjunction with block size to see if your settings are optimal. A large block with high density is OK, but large blocks with very low density (< 1%) are not.

Cache settings: Never ever leave a cube with the default cache settings. Often a client complains about Essbase performance, and sure enough, when I look at the cache settings, they are the defaults. This is never enough (except for a very basic cube). The rule of thumb here is to see if you can get the entire index file into the cache, and make the data cache 3 times the index cache, or at least some significant size. Also check your cube statistics to see the hit ratio on the index and data cache; this gives an indication of what percentage of the time the data being searched is found in memory. For the index cache this should be as close to 1 as possible, for the data cache as high as possible.



Page 21: Essbase Optimize With Hyper V


Ten Steps To Optimization

Outline dimension order: Remember the hourglass principle. This means order the dimensions in your outline as follows – first put the largest (in number of members) dense dimension, then the next largest dense dimension, and continue until the smallest dense dimension. Now put the smallest sparse dimension, then the next smallest, and continue until the largest sparse dimension. Because of the way the Essbase calculator works, this arrangement optimizes the number of passes through a cube. A variation of this which also seems to work well is the hourglass on a stick, where you put all non-aggregating sparse dimensions (e.g. years, scenario) beneath the largest sparse dimension.

Commit Block settings: This controls how often blocks in memory are written to disk while busy loading or calculating a cube. You want to minimize disk writes, as this takes up a lot of processing time, so set this to be quite high. The default setting is 3000 blocks; if your block size is relatively small (< 10 KB), make this much higher, 20,000 to 50,000. This setting alone can cause dramatic performance improvements, specifically on Calc All operations and cube loads.
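The commit-blocks advice above can be sketched in MaxL; Sample.Basic and the 20,000-block threshold are illustrative assumptions:

```maxl
/* Sketch: raise the implicit commit threshold for the assumed
   Sample.Basic database, reducing disk writes during loads and calcs. */
alter database Sample.Basic set implicit_commit after 20000 blocks;
```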

Use of FIX..ENDFIX in calc scripts: One of the most misunderstood and common mistakes in calc scripts is the usage of FIX..ENDFIX. Always FIX first on sparse dimensions only, and then within this FIX again on dense members, or use dense members in IF statements within the FIX statements. The reason for this is that if you FIX only on sparse members first, it filters on just specific blocks, which is faster than trying to fix within blocks (i.e. on dense members).

Optimizing data loads: The best technique to make large data loads faster is to have the optimal order of dimensions in your source file, and to sort it optimally. To do this, order the fields in your source file (or SQL statement) by having as your first field your largest sparse dimension, your next field your next largest sparse dimension, and so on. So if you are using the hourglass dimension order, your data file should have dimensions listed from the bottom dimension upwards. Your dense dimensions should always be last, and if you have multiple data columns these should be dense dimension members. Then you should sort the data file in the same order, i.e. by largest sparse dimension, next largest sparse dimension, etc. This will cause blocks to be created and filled with data in sequence, making the data load faster and the cube less fragmented.


Page 22: Essbase Optimize With Hyper V


Video Tutorial

If you need the video tutorial of Hyperion Essbase, Planning, or Hyperion Financial Management, please mail to [email protected]
