Sybase Important Concepts


Description

This document contains important concepts of Sybase ASE, which will help an intermediate-level Sybase developer when attending interviews.

Transcript of Sybase Important Concepts

Scrollable Cursors

Posted on March 25, 2013 by sybaserays

Adaptive Server Enterprise allows both scrollable and nonscrollable cursors, which can be either semi-sensitive or insensitive. "Scrollable" means that you can scroll through the cursor result set by fetching any, or many, rows, rather than one row at a time; you can also scan the result set repeatedly.

A scrollable cursor allows you to set the position of the cursor anywhere in the cursor result set for as long as the cursor is open, by specifying the option first, last, next, prior, absolute, or relative in a fetch statement.

Syntax:

declare cursor_name [cursor_sensitivity] [cursor_scrollability] cursor
for cursor_specification

Example:

declare CurSr scroll cursor for
select emp_name from employees

Note: cursor_scrollability can be defined as scroll or no scroll. The default is no scroll.

Note: cursor_sensitivity can be defined as insensitive or semi_sensitive.

Note: The default for the cursor is semi-sensitive. No support for the concept of "sensitive" exists in ASE 15.

Note: For scrollable cursors in ASE 15, the only valid cursor specification is for read only.

Note: All update cursors are nonscrollable.

The UseCursor property must be set correctly in order to obtain the desired scrollable cursor. Setting the UseCursor connection property: when you set the UseCursor connection property to 1, and the ASE version is 15.0 or later, server-side scrollable cursors are used. Server-side scrollable cursors are not available on pre-15.0 ASE versions. When you set the UseCursor connection property to 0, client-side scrollable cursors (cached result sets) are used, regardless of the ASE version.

Cursor-related global variables:

@@fetch_status: reports the status of the last fetch from a scrollable cursor.

@@cursor_rows: reports the number of rows in the cursor result set.

Rules for scrollable cursors:

next => A fetch using the next option when the cursor is already positioned on the last row of the cursor set results in @@sqlstatus = 2, @@fetch_status = -1, and no data returned by the fetch. The cursor position remains on the last row of the cursor set.

prior => A fetch using the prior option when the cursor is already positioned on the first row of the cursor result set results in @@sqlstatus = 2, @@fetch_status = -1, and no data returned by the fetch. The cursor position remains on the first row of the cursor set.

Note: A subsequent fetch of the next cursor row fetches the first row of the cursor result set.

absolute => A fetch using the absolute option that asks for a row number greater than the row count of the cursor set results in @@sqlstatus = 2, @@fetch_status = -1, and no data returned by the fetch.

Sensitivity and scrollability:

Insensitive: The cursor shows only the result set as it is when the cursor is opened; data changes in the underlying tables are not visible.

Semi-sensitive: Some changes in the base tables made since opening the cursor may appear in the result set. Data changes may or may not be visible to the semi-sensitive cursor.

Insensitive scrollable cursors: When you declare and open an insensitive cursor, a worktable is created and fully populated with the cursor result set. Locks on the base table are released, and only the worktable is used for fetching.

To declare cursor CurSr_I as an insensitive cursor, enter:

declare CurSr_I insensitive scroll cursor for
select emp_id, fname, lname
from emp_tab
where emp_id > 2002000
open CurSr_I
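As a small sketch (not part of the original post), the end-of-set condition described in the rules above can be detected in code by checking @@sqlstatus after a fetch; this assumes the CurSr_I cursor declared above is open and positioned on its last row:

fetch next CurSr_I
if @@sqlstatus = 2
begin
    print "No more rows: cursor is already on the last row"
end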

Table (for explaining scrollable cursors)

The scrolling worktable is now populated with the data shown in the above table. To change the name Sam to Joe, enter:

update emp_tab set fname = 'Joe'
where fname = 'Sam'

Now the four Sam rows in the base table emp_tab disappear, replaced by four Joe rows.

fetch absolute 2 CurSr_I

The cursor reads the second row from the cursor result set, and returns Row 2: 2002020, Sam, Clarac. Because the cursor is insensitive, the updated value is invisible to the cursor, and the value of the returned row (Sam, rather than Joe) is the same as the value of Row 2 in the table displayed above.

This next command inserts one more qualified row (that is, a row that meets the query condition in declare cursor) into table emp_tab, but row membership is fixed in an insensitive cursor, so the added row is not visible to cursor CurSr_I. Enter:

insert into emp_tab values (2002101, 'Sophie', 'Chen', ...)

The following fetch command scrolls the cursor to the end of the worktable, and reads the last row in the result set, returning the row value 2002100, Sam, West. Again, because the cursor is insensitive, the new row inserted in emp_tab is not visible in cursor CurSr_I's result set.

fetch last CurSr_I

Semi-sensitive scrollable cursors: Semi-sensitive scrollable cursors are like insensitive cursors in that they use a worktable to hold the result set for scrolling purposes. But in semi_sensitive mode, the cursor's worktable materializes as the rows are fetched, rather than when you open the cursor. The membership of the result set is fixed only after all the rows have been fetched once. To declare cursor CurSr_SS as semi-sensitive and scrollable, enter:

declare CurSr_SS semi_sensitive scroll cursor for
select emp_id, fname, lname
from emp_tab
where emp_id > 2002000
open CurSr_SS

The initial rows of the result set contain the data shown in the table displayed above. Because the cursor is semi-sensitive, none of the rows are copied to the worktable when you open the cursor. To fetch the first record, enter:

fetch first CurSr_SS

The cursor reads the first row from emp_tab and returns 2002010, Mari, Cazalis. This row is copied to the worktable. Fetch the next row by entering:

fetch next CurSr_SS

The cursor reads the second row from emp_tab and returns 2002020, Sam, Clarac. This row is copied to the worktable. To replace the name Sam with the name Joe, enter:

update emp_tab set fname = 'Joe'
where fname = 'Sam'

The four Sam rows in the base table emp_tab disappear, and four Joe rows appear instead. To fetch only the second row, enter:

fetch absolute 2 CurSr_SS

The cursor reads the second row from the result set and returns employee ID 2002020, but the value of the returned row is Sam, not Joe. Because the cursor is semi-sensitive, this row was copied into the worktable before the row was updated, and the data change made by the update statement is invisible to the cursor, since the returned row comes from the result set's scrolling worktable.

To fetch the fourth row, enter:

fetch absolute 4 CurSr_SS

The cursor reads the fourth row from the result set. Since Row 4 (2002040, Sam, Burke) is fetched after Sam was updated to Joe, the returned row for employee ID 2002040 is Joe, Burke. The third and fourth rows are now copied to the worktable.

To add a new row, enter:

insert into emp_tab values (2002101, 'Sophie', 'Chen', ...)

One more qualified row is added to the result set. This row is visible in the following fetch statement, because the cursor is semi-sensitive and because we have not yet fetched the last row. Fetch the updated version by entering:

fetch last CurSr_SS

The fetch statement reads 2002101, Sophie, Chen in the result set.

After using fetch with the last option, you have copied all the qualified rows of the cursor CurSr_SS to the worktable. Locking on the base table, emp_tab, is released, and the result set of cursor CurSr_SS is fixed. Any further data changes in emp_tab do not affect the result set of the cursor.

Note: Locking scheme and transaction isolation level also affect cursor visibility. The above example is based on the default isolation level, level 1.

Query Processor
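Although the original example stops here, once you are done with a cursor you would normally close it and release its resources; a minimal sketch using the CurSr_SS cursor from above:

close CurSr_SS
deallocate cursor CurSr_SS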

Posted on June 26, 2013 by sybaserays

The query processor processes SQL queries specified by the user. The processor yields highly efficient query plans that execute using minimal resources, and ensures that results are consistent and correct. To process a query efficiently, the query processor uses: the SQL query specified by the user; statistics about the tables, indexes, and columns specified in the SQL query; and configurable variables defined for the ASE. The query processor uses several modules to execute the steps of processing a query.

Query processor modules:

Query Processor modules

The modules work as follows:

1. The parser translates the SQL query statement (the query specified by a user) into an internal form called a query tree. The parser checks syntax and verifies relations.

2. The query tree is normalized. This involves determining column and table names, transforming the query tree into conjunctive normal form (CNF), and resolving datatypes.

3. The preprocessor transforms the query tree for some types of SQL statements, such as SQL statements with subqueries and views, into a more efficient query tree. If a relation used in the query is a view, each use of that relation in the from-list must be replaced by the parse tree that describes the view.

The optimizer analyzes the possible combinations of operations (parallelism, join ordering, access and join methods) to execute the SQL statement, and selects an efficient one based on the cost estimates of the alternatives: among all equivalent evaluation plans, it chooses the one with the lowest cost. Cost is estimated using statistical information.

The preprocessor is also responsible for the semantic checks listed below:

Check relation uses: every relation mentioned in the FROM clause must be a relation or a view in the current schema.

Check and resolve attribute uses: every attribute mentioned in the SELECT or WHERE clause must be an attribute of some relation in the current scope.

Check types: all attributes must be of a type appropriate to their uses.

4. The code generator converts the query plan generated by the optimizer into a format more suitable for the query execution engine.

5. The procedural engine executes command statements such as create table, execute procedure, and declare cursor directly. For data manipulation language (DML) statements, such as select, insert, delete, and update, the engine sets up the execution environment for all query plans and calls the query execution engine.

6. The query execution engine executes the ordered steps specified in the query plan provided by the code generator.

Query Processor Improvements in ASE 15:
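To see which plan the optimizer actually picked for a statement, ASE's showplan session option can be turned on (the query below reuses the emp_tab example table from the cursor section; this is an illustrative sketch, not part of the original post):

set showplan on
go
select emp_id, fname from emp_tab where emp_id > 2002000
go
set showplan off
go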

Performance of index-based data access has been improved. Before ASE 15, the optimizer could not use an index if the join columns were of different datatypes. With ASE 15 there are no more issues surrounding mismatched datatypes and index usage.

More than one index per table can be used to execute a query. This feature increases the performance of queries containing ORs and star joins.

New optimization techniques now try to avoid creating worktables. Worktables were created in tempdb to perform various tasks, including sorting. The creation of worktables slows performance, since they are typically resource intensive. ASE 15's new hashing technique performs sorting and grouping in memory, thus avoiding the need for a worktable. It is the buffer memory, and not the procedure cache, that is used for this operation. The elimination of worktables has improved the performance of queries containing order by and group by clauses.

ASE 15 has enhanced parallelism to handle large data sets. It now handles both horizontal and vertical parallelism. Vertical parallelism provides the ability to use multiple CPUs at the same time to run one or more operations of a single query. Horizontal parallelism allows the query to access different data located on different partitions or disk devices at the same time.

Temporary Tables

Posted on July 2, 2013 by sybaserays

In this section we will discuss temporary tables in Sybase ASE. I have found temporary tables very useful many times in my work. As the name suggests, a temporary table is temporary, not permanent, in the database. Temporary tables are created in the tempdb database (one of the default system databases). There are two types of temporary tables:

Tables that can be shared among Adaptive Server sessions

Tables that can be accessed only by the current Adaptive Server session or procedure

Note: For easy reference, we will refer to tables that can be accessed only by the current Adaptive Server session or procedure as hash tables.

Note: For easy reference, we will refer to tables that can be shared among Adaptive Server sessions as global temporary tables. The term "global temporary table" is not an official Sybase ASE term; we use it in this post just for easy reference.

Tables that can be shared among Adaptive Server sessions (global temporary tables): This type of table can be shared among ASE server sessions. If you need to share data between code segments, between stored procedures (during execution in a single session), or between users, you should use a global temporary table. Global temporary tables are permanent in tempdb.

We can create and drop a global temporary table as follows.

CREATE TABLE: (if you are connected to a user database)

CREATE TABLE tempdb..accounts(id int)
go

OR

use tempdb
go
CREATE TABLE accounts(id int)
go

DROP TABLE: (if you are connected to a user database)

DROP TABLE tempdb..accounts
go

OR

use tempdb
go
DROP TABLE accounts
go

Note: You cannot DROP or ALTER a global temporary table while other connections are using the table.

Note: If you don't mention tempdb.. in the create statement, the accounts table will be created in the permanent user database.

Tables that can be accessed only by the current Adaptive Server session or procedure: These tables are preceded with a # (which is why they are sometimes called hash tables). They belong to the user connection, the spid. They are not intended to be shared, either with other users or with other code segments, stored procedures, etc. Hash tables are accessible only by the current Adaptive Server session or procedure. The table exists until the current session or procedure ends, or until its owner drops it using drop table.

Create a non-shareable temporary table by specifying a pound sign (#) before the table name in the create table statement.

CREATE TABLE:

CREATE TABLE #accounts(id int)

OR (creating a temporary table from another table)

SELECT * INTO #accounts FROM accounts

The #accounts table created above is dropped when the current session ends, or when the user drops it with the drop table command:

DROP TABLE:

DROP TABLE #accounts

Note: The drop table command should be executed from the same session in which the table was created.

Hopefully the difference between tables that can be shared among Adaptive Server sessions and tables that can be accessed only by the current Adaptive Server session or procedure is now clear.

Where and why do we use global temporary tables and hash tables? Consider a scenario where you want to see the data in a temporary table after a procedure finishes, or after a specific connection closes (assuming the temporary table is used inside the procedure). What would you use in that scenario? If you are thinking of a # (hash) table, I would say it's not a good idea, as a hash table won't be available after the connection closes or the procedure finishes executing. So shall we use a global temporary table (a table that can be shared among Adaptive Server sessions)? Now you may wonder: if multiple users access this global temporary table at the same time, it may produce incorrect data.

For example: (if you are connected to a user database)

CREATE TABLE tempdb..accounts(id int)

If we use the above table in a procedure (assume the script below is written in a procedure, the permanent tables used in this script already exist in the database, and @current_user is a parameter passed to the procedure):

-- Procedure script start
DELETE FROM tempdb..accounts
INSERT INTO tempdb..accounts (id)
SELECT id FROM user_accounts WHERE user_name = @current_user
SELECT * FROM account_details ad, tempdb..accounts tacc
WHERE ad.id = tacc.id
DELETE FROM tempdb..accounts
-- Procedure script end

If the above script (procedure) is executed by two users at the same time, it may give unexpected results to one of the users.

Assume user ABHI and user Mike run the above script at the same time. The data in the user_accounts table is as shown in the image below.

The above procedure script deletes the data from tempdb..accounts on every run and inserts data from the user_accounts table for the specified user_name (i.e., ABHI or Mike). So it may produce unexpected results when the two runs overlap.

What can we do to avoid such an issue? To avoid issues like this we can add the session id (@@spid) to global temporary tables. (If you are connected to a user database:)

CREATE TABLE tempdb..accounts(spid int, id int)

Note: @@spid is a global variable that identifies the current user process id.

The script would then look like the one below. This solves the issue, and many users/processes can use a global temporary table at the same time. Each process works (select/delete/update, etc.) only on the data rows that belong to its spid (process id).

-- Procedure script start
DELETE FROM tempdb..accounts WHERE spid = @@spid
INSERT INTO tempdb..accounts (spid, id)
SELECT @@spid, id FROM user_accounts WHERE user_name = @current_user
SELECT * FROM account_details ad, tempdb..accounts tacc
WHERE ad.id = tacc.id
AND tacc.spid = @@spid
DELETE FROM tempdb..accounts WHERE spid = @@spid
-- Procedure script end

Suggestion: Create an index on such tables. If the table is used in this partitioned fashion, place the spid as the first column in the index.
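Following the suggestion above, such an index might be created like this (the index name is illustrative, not from the original post; spid comes first so each process's rows cluster together):

create index accounts_spid_idx on tempdb..accounts (spid, id)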

Note: We can use hash tables where the temporary table is needed only within the same session.
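As a small illustrative sketch (the procedure and #work table names are hypothetical, not from the original post), a hash table used entirely within one procedure might look like this; the #work table disappears automatically when the procedure ends:

create procedure show_user_accounts @current_user varchar(30)
as
begin
    create table #work (id int)
    insert into #work
    select id from user_accounts where user_name = @current_user
    select * from #work
end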

Sometimes you may have faced an issue where you created a permanent table in tempdb (a global temporary table) and, a few days later when you try to access that table, it doesn't exist.

Do you know why? The temporary database itself is temporary. Its contents are lost when the server exits (either gracefully, under SHUTDOWN, or without clean-up, under power failure). The temporary database, being temporary, is not recovered. Instead it is created fresh when the server is booted. It is created from the model database (imagine LOAD DATABASE tempdb FROM model on each server boot). Therefore, if you want permanent tables that reside in the temporary database to be available in tempdb when the server reboots, simply create them in the model database.

Global temporary tables for performance enhancement: There is another scenario where I create permanent tables in tempdb (global temporary tables). This is purely for performance reasons. Tempdb is the most heavily used database on any server: therefore, anything you can do to enhance performance there will enhance the overall performance of the entire server. Let's say you have some stored procedures that utilize temporary tables (#tables), that are heavily used, many times every day, by many users. These tables get created and destroyed all the time, millions of times a day, and contain a very small amount of information (the scenario is even worse if the amount of data is large). If you implement these temporary tables as permanent tables in tempdb, you achieve two significant performance enhancements:

Eliminate that creation/destruction millions of times per day.

Run update statistics and allow those stats to be used when the queries are optimized (hash temporary tables have no statistics; they default to 10 pages, 100 rows, which leads to incorrect optimizer decisions). This is significant for larger tables, as it ensures the decisions made by the optimizer when producing the query plan are correct.

Hope this post was useful for you. Please do comment if you have any doubts/suggestions.

Partitioning Strategies
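A minimal sketch of the model-database approach described above (the table definition is illustrative): anything created in model is copied into tempdb each time the server boots, so the table survives reboots:

use model
go
create table accounts (spid int, id int)
go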

Posted on June 11, 2013 by sybaserays

Data partitioning breaks up large tables and indexes into smaller pieces that can reside on separate partitions.

Note: A segment is a portion of a device that is defined within ASE. It is used for the storage of specific types of data, such as system data, log data, and the data itself. Partitions can be placed on individual segments, and multiple partitions can be placed on a single segment. In turn, a segment or segments can be placed on any logical or physical device, thus isolating I/O and aiding performance and data availability.

Note: Partitions are transparent to the end user, who can select, insert, and delete data using the same DML commands whether the table is partitioned or not.

Note: To view information about partitions, use sp_helpartition.

Benefits of partitioning:

Improved scalability.

Improved performance: concurrent multiple I/O on different partitions, and multiple threads on multiple CPUs working concurrently on multiple partitions.

Faster response time.

Partition transparency to applications.

Very large database (VLDB) support: concurrent scanning of multiple partitions of very large tables.

Range partitioning to manage historical data.
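For example, to inspect how a table is partitioned (using the publishers table that appears later in this post), the sp_helpartition procedure mentioned above can be run as:

sp_helpartition publishers
go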

Data partitions: A data partition is an independent database object with a unique partition ID. It is a subset of a table, and shares the column definitions and referential and integrity constraints of the base table.

Note: Sybase recommends that you bind each partition to a different segment, and bind each segment to a different storage device, to maximize I/O parallelism.

Each semantically partitioned table has a partition key that determines how individual data rows are distributed to the different partitions.
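As an illustrative sketch of a partition key in action (the table, column, boundary values, and segment names here are assumptions, not from the original post), an ASE 15 range-partitioned table can be created like this:

create table sales (ord_num int, ord_date datetime)
partition by range (ord_num)
(p1 values <= (1000) on seg1,
 p2 values <= (2000) on seg2,
 p3 values <= (3000) on seg3)

Each row is routed to the first partition whose upper bound is greater than or equal to its ord_num value.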

The image above shows a number of activities happening on a table, such as update statistics, load data, and a large query. So many activities on a large, unpartitioned table will slow down performance. By partitioning a table or index into smaller pieces, DBAs can run utilities on a per-partition basis. This results in the utilities running faster, allows other operations to work efficiently on data in other partitions, and ensures that the bulk of the data in the table is available to applications.

Local and global indexes on partitioned tables

Posted on June 18, 2013 by sybaserays

Indexes, like tables, can be partitioned. Prior to Adaptive Server 15.0, all indexes were global. With Adaptive Server 15.0, you can create local as well as global indexes.

Adaptive Server supports local and global indexes. A local index spans data in exactly one data partition. For semantically partitioned tables, a local index has partitions that are equipartitioned with their base table; that is, the table and index share the same partitioning key and partitioning type. For all partitioned tables with local indexes, each local index partition has one and only one corresponding data partition. Each local index spans just one data partition. You can create local indexes on range-, hash-, list-, and round-robin-partitioned tables. Local indexes allow multiple threads to scan each data partition in parallel, which can greatly improve performance.

A global index spans all data partitions in a table. Sybase supports only unpartitioned global indexes. All unpartitioned indexes on unpartitioned tables are global.

A partitioned table can have partitioned and unpartitioned indexes. An unpartitioned table can have only unpartitioned, global indexes.

Local versus global indexes:

Local indexes can increase concurrency through multiple index access points, which reduces root-page contention.

You can place local nonclustered index subtrees (index partitions) on separate segments to increase I/O parallelism.

You can run reorg rebuild on a per-partition basis, reorganizing the local index subtree while minimizing the impact on other operations.

Global nonclustered indexes are better for covered scans than local indexes, especially for queries that need to fetch rows across partitions.

Creating global indexes: You can create global, clustered indexes only for round-robin-partitioned tables. Adaptive Server supports global, nonclustered, unpartitioned indexes for all types of partitioned tables. You can create clustered and nonclustered global indexes on partitioned tables using syntax supported in Adaptive Server version 12.5.x and earlier. When you create an index on a partitioned table, Adaptive Server automatically creates a global index if you:

Create a nonclustered index on any partitioned table, and do not include the local index keywords. For example, on the hash-partitioned table mysalesdetail:

create nonclustered index ord_idx on mysalesdetail (au_id)

Create a clustered index on a round-robin-partitioned table, and do not include the local index keywords. For example, on the currentpublishers table:

create clustered index pub_idx on currentpublishers

Creating local indexes: Adaptive Server supports local clustered indexes and local nonclustered indexes on all types of partitioned tables.
A local index inherits the partition types, partitioning columns, and partition bounds of the base table. For range-, hash-, and list-partitioned tables, Adaptive Server always creates local clustered indexes, whether or not you include the keywords local index in the create index statement.

This example creates a local, clustered index on the partitioned mysalesdetail table. In a clustered index, the physical order of index rows must be the same as that of the data rows; you can create only one clustered index per table.

create clustered index clust_idx on mysalesdetail(ord_num) local index

This example creates a local, nonclustered index on the partitioned mysalesdetail table. The index is partitioned by title_id. You can create as many as 249 nonclustered indexes per table.

create nonclustered index nonclust_idx on mysalesdetail(title_id)
local index p1 on seg1, p2 on seg2, p3 on seg3

Global nonclustered index on a partitioned table: You can create global indexes that are nonclustered and unpartitioned for all table partitioning strategies. The index and the data partitions can reside on the same or different segments. You can create the index on any indexable column in the table. The example in the image below is indexed on the pub_name column; the table is partitioned on the pub_id column.

Global nonclustered index on a partitioned table (pub_name)

For this example, we use alter table to repartition publishers with three range partitions on the pub_id column.

alter table publishers partition by range(pub_id)
(a values