Algorithms for Query Processing and Optimization


Page 1: 1 Algorithms for Query Processing and Optimization.

1

Algorithms for Query Processing and Optimization

Page 2: 1 Algorithms for Query Processing and Optimization.

2

0. Basic Steps in Query Processing

1. Parsing and translation: Translate the query into its internal form, which is then translated into relational algebra. The parser checks syntax and verifies the relations.

2. Query optimization: The process of choosing a suitable execution strategy for processing a query.

3. Evaluation: The query evaluation engine takes a query execution plan, executes that plan, and returns the answers to the query.

Page 3: 1 Algorithms for Query Processing and Optimization.

3

Basic Steps in Query Processing

Page 4: 1 Algorithms for Query Processing and Optimization.

4

Basic Steps in Query Processing

Page 5: 1 Algorithms for Query Processing and Optimization.

5

Basic Steps in Query Processing

Two internal representations of a query: a query tree and a query graph.

SELECT balance FROM account WHERE balance < 2500

σbalance<2500(πbalance(account))

is equivalent to

πbalance(σbalance<2500(account))

Page 6: 1 Algorithms for Query Processing and Optimization.

6

Basic Steps in Query Processing

An annotated relational algebra expression specifying a detailed evaluation strategy is called an evaluation plan. E.g., one can use an index on balance to find accounts with balance < 2500, or perform a complete relation scan and discard accounts with balance ≥ 2500.

Query Optimization: Amongst all equivalent evaluation plans, choose the one with the lowest cost. Cost is estimated using statistical information from the database catalog, e.g. the number of tuples in each relation and the size of tuples.

Page 7: 1 Algorithms for Query Processing and Optimization.

7

Measures of Query Cost

Cost is generally measured as the total elapsed time for answering a query. Many factors contribute to time cost: disk accesses, CPU time, or even network communication.

Typically disk access is the predominant cost, and it is also relatively easy to estimate.

For simplicity we just use the number of block transfers from disk as the cost measure.

Cost depends on the size of the buffer in main memory: having more memory reduces the need for disk access.

Page 8: 1 Algorithms for Query Processing and Optimization.

8

1. Translating SQL Queries into Relational Algebra (1)

Query Block: The basic unit that can be translated into the algebraic operators and optimized.

A query block contains a single SELECT-FROM-WHERE expression, as well as GROUP BY and HAVING clauses if these are part of the block.

Nested queries within a query are identified as separate query blocks.

Aggregate operators in SQL must be included in the extended algebra.

Page 9: 1 Algorithms for Query Processing and Optimization.

9

Translating SQL Queries into Relational Algebra (2)

SELECT LNAME, FNAME
FROM EMPLOYEE
WHERE SALARY > ( SELECT MAX (SALARY)
                 FROM EMPLOYEE
                 WHERE DNO = 5 );

Inner block:
SELECT MAX (SALARY)
FROM EMPLOYEE
WHERE DNO = 5

Outer block:
SELECT LNAME, FNAME
FROM EMPLOYEE
WHERE SALARY > C

πLNAME, FNAME (σSALARY>C(EMPLOYEE))    ℱMAX SALARY (σDNO=5 (EMPLOYEE))

Page 10: 1 Algorithms for Query Processing and Optimization.

10

2. Algorithms for External Sorting (1)

Sorting is needed in Order by, join, union, intersection, distinct, …

For relations that fit in memory, techniques like quicksort can be used.

External sorting: Refers to sorting algorithms that are suitable for

large files of records stored on disk that do not fit entirely in main memory, such as most database files.

For relations that don’t fit in memory, external sort-merge is a good choice

Page 11: 1 Algorithms for Query Processing and Optimization.

11

External Sort-Merge

The external sort-merge algorithm has two steps: a sort step that creates sorted subfiles called runs, and a merge step that merges the sorted runs.

Sort-Merge strategy: Starts by sorting small subfiles (runs) of the main file and then merges the sorted runs, creating larger sorted subfiles that are merged in turn.

Sorting phase: Sorts nB blocks at a time
  nB = # of main-memory buffer pages; creates nR = ⌈b/nB⌉ initial sorted runs on disk
  b = # of file blocks (pages) to be sorted
  Sorting cost = read b blocks + write b blocks = 2b

Page 12: 1 Algorithms for Query Processing and Optimization.

12

External Sort-Merge

Example: nB = 5 blocks and file size b = 1024 blocks, so nR = ⌈b/nB⌉ = ⌈1024/5⌉ = 205 initial sorted runs, each of size 5 blocks (except the last run, which will have 4 blocks).

Smaller example: nB = 2, b = 7, nR = ⌈b/nB⌉ = 4 runs.

Page 13: 1 Algorithms for Query Processing and Optimization.

13

External Sort-Merge

Sort Phase: creates nR sorted runs.

i = 0
Repeat
    Read next nB blocks into RAM
    Sort the in-memory blocks
    Write sorted data to run file Ri
    i = i + 1
Until the end of the relation
nR = i
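As a minimal sketch of this sort phase, the Python function below generates the initial sorted runs; the file is modeled as a list of records and nB is expressed in records rather than blocks, so block-level I/O is only simulated.

def create_runs(records, n_b):
    """Sort phase of external sort-merge: produce the initial sorted runs.

    records: the full input (a simple list standing in for the file on disk)
    n_b:     number of records that fit in the main-memory buffer
    Returns a list of sorted runs (each run is a sorted list of records).
    """
    runs = []
    for start in range(0, len(records), n_b):
        chunk = records[start:start + n_b]   # read the next n_b "blocks" into RAM
        chunk.sort()                         # sort the in-memory portion
        runs.append(chunk)                   # write the sorted data to run file R_i
    return runs

# Example: nB = 2, b = 7 gives nR = ceil(7/2) = 4 runs, as in the slide.
print(create_runs([7, 3, 5, 1, 6, 2, 4], 2))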

Page 14: 1 Algorithms for Query Processing and Optimization.

14

External Sort-Merge

Merging phase: The sorted runs are merged during one or more passes.

The degree of merging (dM) is the number of runs that can be merged in each pass.

In each pass:
  one buffer block is needed to hold one block from each of the runs being merged, and
  one block is needed to hold one block of the merged result.

Page 15: 1 Algorithms for Query Processing and Optimization.

15

External Sort-Merge

We assume (for now) that nR < nB.

Merge the runs (nR-way merge): use nR blocks of memory to buffer input runs, and 1 block to buffer the output.

Merge nR Runs Step:
Read the 1st block of each of the nR runs Ri into its buffer page
Repeat
    Select the 1st record (in sort order) among the nR buffer pages
    Write the record to the output buffer; if the output buffer is full, write it to disk
    Delete the record from its input buffer page
    If the buffer page becomes empty, read the next block (if any) of the run into the buffer
Until all input buffer pages are empty
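A compact sketch of the nR-way merge step using Python's heapq; buffering is again simulated with in-memory lists, so only the record-at-a-time selection logic of the slide is shown.

import heapq

def merge_runs(runs):
    """Merge step of external sort-merge: nR-way merge of sorted runs.

    Each run is a sorted list; the heap holds the current smallest record of
    every run, playing the role of the nR input buffer pages.
    """
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    output = []                                      # stands in for the output buffer/file
    while heap:
        record, run_id, pos = heapq.heappop(heap)    # 1st record in sort order
        output.append(record)
        if pos + 1 < len(runs[run_id]):              # refill from the same run
            heapq.heappush(heap, (runs[run_id][pos + 1], run_id, pos + 1))
    return output

print(merge_runs([[1, 5], [2, 4], [3, 7], [6]]))     # [1, 2, 3, 4, 5, 6, 7]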

Page 16: 1 Algorithms for Query Processing and Optimization.

16

External Sort-Merge

Merge nB − 1 Runs Step: If nR ≥ nB, several merge passes are required.

Merge a group of contiguous nB − 1 runs, using one buffer for the output.

A pass reduces the number of runs by a factor of nB - 1, and creates runs longer by the same factor. E.g. If nB = 11, and there are 90 runs, one pass

reduces the number of runs to 9, each 10 times the size of the initial runs

Repeated passes are performed till all runs have been merged into one

Page 17: 1 Algorithms for Query Processing and Optimization.

17

External Sort-Merge

Degree of merging (dM): # of runs that can be merged together in each pass = min(nB − 1, nR)
Number of passes: nP = ⌈log dM(nR)⌉

In our example, dM = 4 (four-way merging):
  min(nB − 1, nR) = min(5 − 1, 205) = 4
  Number of passes nP = ⌈log dM(nR)⌉ = ⌈log4(205)⌉ = 4

First pass: 205 initial sorted runs would be merged into 52 sorted runs
Second pass: 52 sorted runs would be merged into 13
Third pass: 13 sorted runs would be merged into 4
Fourth pass: 4 sorted runs would be merged into 1

Page 18: 1 Algorithms for Query Processing and Optimization.

18

External Sort-Merge

Blocking factor bfr = 1 record, nB = 3, b = 12, nR = 4, dM = min(3-1, 4) = 2

Page 19: 1 Algorithms for Query Processing and Optimization.

19

External Sort-Merge

Page 20: 1 Algorithms for Query Processing and Optimization.

20

External Sort-Merge

External Sort-Merge: Cost Analysis

Disk accesses for initial run creation (sort phase), as well as in each merge pass, amount to 2b: every block is read once and written out once.

The initial # of runs is nR = ⌈b/nB⌉, and the # of runs decreases by a factor of nB − 1 in each merge pass, so the total # of merge passes is nP = ⌈log dM(nR)⌉.

In general, the cost of external sort-merge is
  Cost = sort cost + merge cost
       = 2b + 2b * nP
       = 2b + 2b * ⌈log dM(nR)⌉
       = 2b (⌈log dM(nR)⌉ + 1)
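A small sketch that evaluates these formulas in Python; the block counts are parameters and the ceiling/log arithmetic mirrors the expressions above.

import math

def sort_merge_cost(b, n_b):
    """Estimate block transfers for external sort-merge of b blocks with n_b buffers."""
    n_r = math.ceil(b / n_b)               # initial number of sorted runs
    d_m = min(n_b - 1, n_r)                # degree of merging per pass
    n_p = math.ceil(math.log(n_r, d_m))    # number of merge passes
    return 2 * b * (n_p + 1)               # 2b for the sort phase + 2b per merge pass

print(sort_merge_cost(1024, 5))            # example from the slides: 2*1024*(4+1) = 10240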

Page 21: 1 Algorithms for Query Processing and Optimization.

21

External Sort-Merge

Page 22: 1 Algorithms for Query Processing and Optimization.

22

Catalog Information

File:
  r: # of records in the file
  R: record size
  b: # of blocks in the file
  bfr: blocking factor

Index:
  x: # of levels of a multilevel index
  bI1: # of first-level index blocks

Page 23: 1 Algorithms for Query Processing and Optimization.

23

Catalog Information

Attribute:
  d: # of distinct values of an attribute
  sl (selectivity): the ratio of the # of records satisfying the condition to the total # of records in the file
  s (selection cardinality) = sl * r: average # of records that will satisfy an equality condition on the attribute

For a key attribute:
  d = r, sl = 1/r, s = 1

For a nonkey attribute, assuming that the d distinct values are uniformly distributed among the records:
  the estimated sl = 1/d, s = r/d
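These catalog statistics translate directly into a small helper; the function name and the uniform-distribution assumption for the nonkey case are illustrative, not from the slides.

def selection_cardinality(r, d, is_key=False):
    """Estimate selectivity sl and selection cardinality s for an equality condition.

    r: number of records in the file, d: number of distinct attribute values.
    Assumes the d values are uniformly distributed among the records.
    """
    sl = 1.0 / r if is_key else 1.0 / d     # selectivity
    s = 1 if is_key else r / d              # expected number of matching records
    return sl, s

print(selection_cardinality(r=10000, d=50))                    # nonkey: sl = 0.02, s = 200
print(selection_cardinality(r=10000, d=10000, is_key=True))    # key: sl = 1/r, s = 1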

Page 24: 1 Algorithms for Query Processing and Optimization.

24

File Scans

Types of scans:
  File scan – search algorithms that locate and retrieve records that fulfill a selection condition.
  Index scan – search algorithms that use an index; the selection condition must be on the search key of the index.

Cost estimate C = # of disk blocks scanned

Page 25: 1 Algorithms for Query Processing and Optimization.

25

3. Algorithms for SELECT Operations

Implementing the SELECT Operation

Examples:
  (OP1): σSSN='123456789' (EMP)
  (OP2): σDNUMBER>5 (DEPT)
  (OP3): σDNO=5 (EMP)
  (OP4): σDNO=5 AND SALARY>30000 AND SEX='F' (EMP)
  (OP5): σESSN='123456789' AND PNO=10 (WORKS_ON)

Page 26: 1 Algorithms for Query Processing and Optimization.

26

Algorithms for Selection Operation

Search Methods for Simple Selection: S1 (linear search)

Retrieve every record in the file, and test whether its attribute values satisfy the selection condition.

If selection is on a nonkey attribute, C = b

If selection is equality on a key attribute, if record found, average cost C = b/2, else C = b

Page 27: 1 Algorithms for Query Processing and Optimization.

27

Algorithms for Selection Operation

S2 (binary search): Applicable if the selection is an equality comparison on the attribute on which the file is ordered.

Assume that the blocks of the relation are stored contiguously.

If the selection is on a nonkey attribute:
  C = ⌈log2 b⌉ + ⌈s/bfr⌉ − 1
  ⌈log2 b⌉: cost of locating the 1st tuple; ⌈s/bfr⌉ − 1: # of additional blocks containing records that satisfy the selection condition

If the selection is equality on a key attribute:
  C = ⌈log2 b⌉, since s = 1 in this case

Page 28: 1 Algorithms for Query Processing and Optimization.

28

Selections Using Indices

S3 (primary or hash index on a key, equality): Retrieve a single record that satisfies the corresponding equality condition.

If the selection condition involves an equality comparison on a key attribute with a primary index (or a hash key), use the primary index (or the hash key) to retrieve the record.

Primary index: retrieve 1 more block than the # of index levels, C = x + 1

Hash index: C = 1 for static or linear hashing; C = 2 for extendible hashing

Page 29: 1 Algorithms for Query Processing and Optimization.

29

Selections Using Indices

S4 (primary index on a key, range selection): Using a primary index to retrieve multiple records.

If the comparison condition is >, ≥, <, or ≤ on a key field with a primary index, use the index to find the record satisfying the corresponding equality condition, then retrieve all subsequent records in the (ordered) file.

Assuming the relation is sorted on A:
  For σA≥v(r), use the index to find the 1st tuple with A = v and retrieve all subsequent records.
  For σA≤v(r), use the index to find the 1st tuple with A = v and retrieve all preceding records,
    OR just scan the relation sequentially till the 1st tuple with A > v; do not use the index (average cost C = b/2).

Average cost C = x + b/2

Page 30: 1 Algorithms for Query Processing and Optimization.

30

Selections Using Indices

S5 (clustering index on a nonkey attribute, equality): Retrieve multiple records; the records will be on consecutive blocks.

C = x + ⌈s/bfr⌉, where ⌈s/bfr⌉ is the # of blocks containing records that satisfy the selection condition

Page 31: 1 Algorithms for Query Processing and Optimization.

31

Selections Using Indices

S6-1 (secondary index B+-tree, equality):
  Retrieve a single record if the search key is a candidate key: C = x + 1
  Retrieve multiple records if the search key is not a candidate key: C = x + s
    This can be very expensive: each record may be on a different block, requiring one block access per retrieved record.

Page 32: 1 Algorithms for Query Processing and Optimization.

32

Selections Using Indices

S6-2 (secondary index B+-tree, comparison):

For σA≥v(r), use the index to find the 1st index entry with A = v and scan the index sequentially from there to find pointers to records.

For σA<v(r), just scan the leaf pages of the index, finding pointers to records, till the first entry with A ≥ v.

If ½ of the records are assumed to satisfy the condition, then ½ of the first-level index blocks are accessed, plus ½ of the file records are retrieved via the index:
C = x + bI1/2 + r/2
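The cost formulas S1 through S6 can be collected into one rough estimator; the statistics dictionary and the method labels are illustrative assumptions, not part of the slides.

import math

def select_cost(method, stats):
    """Estimated # of block accesses for the SELECT algorithms S1-S6 (sketch).

    stats holds catalog values: b (file blocks), bfr (blocking factor),
    s (selection cardinality), x (index levels).
    """
    b, bfr, s, x = stats["b"], stats["bfr"], stats["s"], stats.get("x", 0)
    if method == "S1":    # linear search, equality on a key (record found)
        return b / 2
    if method == "S2":    # binary search on the ordering attribute, nonkey
        return math.ceil(math.log2(b)) + math.ceil(s / bfr) - 1
    if method == "S3":    # primary index or hash key, equality on a key
        return x + 1
    if method == "S4":    # primary index, range selection
        return x + b / 2
    if method == "S5":    # clustering index, equality on a nonkey
        return x + math.ceil(s / bfr)
    if method == "S6":    # secondary B+-tree index, equality on a nonkey
        return x + s
    raise ValueError(method)

stats = {"b": 2000, "bfr": 5, "s": 200, "x": 2}
print({m: select_cost(m, stats) for m in ["S1", "S2", "S3", "S5", "S6"]})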

Page 33: 1 Algorithms for Query Processing and Optimization.

33

Complex Selections: σc1 AND c2 AND … AND cn(r)

S7 (conjunctive selection using one index):
  Select a condition ci, and one of algorithms S1 through S6, that results in the least cost for σci(r).
  Test the other conditions on each tuple after fetching it into the memory buffer.
  Cost is that of the algorithm chosen.

Page 34: 1 Algorithms for Query Processing and Optimization.

34

Complex Selections: σc1 AND c2 AND … AND cn(r)

S8 (conjunctive selection using a composite index): If two or more attributes are involved in equality conditions in the conjunctive condition and a composite index (or hash structure) exists on the combined fields, we can use the index directly.

Use the appropriate composite index if available, using one of the algorithms S3 (primary index), S5, or S6 (B+-tree, equality).

Page 35: 1 Algorithms for Query Processing and Optimization.

35

Complex Selections: σc1 AND c2 AND … AND cn(r)

S9 (conjunctive selection by intersection of record pointers): Requires indexes with record pointers.
  Use the corresponding index for each condition, take the intersection of all the obtained sets of record pointers, then fetch the records from the file.
  If some conditions do not have appropriate indexes, apply those tests in memory.
  Cost is the sum of the costs of the individual index scans plus the cost of retrieving the records from disk.

Page 36: 1 Algorithms for Query Processing and Optimization.

36

Complex Selections: σc1 OR c2 OR … OR cn(r)

S10 (disjunctive selection by union of record pointers): Applicable if all conditions have available indexes; otherwise use a linear scan.
  Use the corresponding index for each condition, take the union of all the obtained sets of record pointers, then fetch the records from the file.

READ "Examples of Cost Functions for Select", pages 569-570.

Page 37: 1 Algorithms for Query Processing and Optimization.

37

Join Algorithms

Join (EQUIJOIN, NATURAL JOIN)
  two-way join: a join on two files, e.g. R ⋈A=B S
  multi-way join: a join involving more than two files, e.g. R ⋈A=B S ⋈C=D T

Examples:
  (OP6): EMP ⋈DNO=DNUMBER DEPT
  (OP7): DEPT ⋈MGRSSN=SSN EMP

Factors affecting JOIN performance:
  Available buffer space
  Join selection factor
  Choice of inner vs. outer relation

Page 38: 1 Algorithms for Query Processing and Optimization.

38

Join Algorithms

Join selectivity (js): 0 ≤ js ≤ 1
  the ratio of the # of tuples of the resulting join file to the # of tuples of the Cartesian product file
  js = |R ⋈A=B S| / |R × S| = |R ⋈A=B S| / (rR * rS)

If A is a key of R, then |R ⋈ S| ≤ rS, so js ≤ 1/rR

If B is a key of S, then |R ⋈ S| ≤ rR, so js ≤ 1/rS

Page 39: 1 Algorithms for Query Processing and Optimization.

39

Join Algorithms

Having an estimate of js, the estimated # of tuples of the resulting join file is
  |R ⋈ S| = js * rR * rS

bfrRS is the blocking factor of the resulting join file.

Cost of writing the resulting join file to disk: bRS = ⌈(js * rR * rS)/bfrRS⌉

Page 40: 1 Algorithms for Query Processing and Optimization.

40

Join Algorithms

J1: Nested-loop join (brute force): For each record t in R (outer loop), retrieve every

record s from S (inner loop) and test whether the two records satisfy the join condition t[A] = s[B].

J2: Single-loop join (Using an index): If an index (or hash key) exists for one of the two

join attributes — say, B of S — retrieve each record t in R, one at a time, and then use the index to retrieve directly all matching records s from S that satisfy s[B] = t[A].

Page 41: 1 Algorithms for Query Processing and Optimization.

41

Join Algorithms

J3 Sort-merge join: If the records of R and S are physically sorted

(ordered) by value of the join attributes A and B, respectively, we can implement the join in the most efficient way possible.

Both files are scanned in order of the join attributes, matching the records that have the same values for A and B.

In this method, the records of each file are scanned only once each for matching with the other file—unless both A and B are non-key attributes, in which case the method needs to be modified slightly.

Page 42: 1 Algorithms for Query Processing and Optimization.

42

Join Algorithms

J4 Hash-join: The records of files R and S are both hashed to

the same hash file, using the same hashing function on the join attributes A of R and B of S as hash keys.

A single pass through the file with fewer records (say, R) hashes its records to the hash file buckets.

A single pass through the other file (S) then hashes each of its records to the appropriate bucket, where the record is combined with all matching records from R.

Page 43: 1 Algorithms for Query Processing and Optimization.

43

Nested-Loop Join (J1)

To compute the theta join R ⋈θ S:

for each tuple tR in R do
    for each tuple tS in S do
        if tR and tS satisfy θ, add tR • tS to the result

R is called the outer relation and S the inner relation of the join.

Requires no indices and can be used with any kind of join condition.

Expensive since it examines every pair of tuples in the two relations.
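A minimal tuple-at-a-time sketch of J1 in Python; relations are modeled as lists of dicts and the theta condition is passed in as a function, both of which are illustrative choices rather than anything prescribed by the slides.

def nested_loop_join(R, S, theta):
    """J1: brute-force nested-loop join of two relations.

    R, S:  lists of tuples (dicts); R is the outer relation.
    theta: join condition, a function of (tR, tS) returning True/False.
    """
    result = []
    for tR in R:                  # outer loop over R
        for tS in S:              # inner loop over S
            if theta(tR, tS):     # test the join condition
                result.append({**tR, **tS})   # concatenate the two tuples
    return result

emp = [{"ename": "Smith", "dno": 5}, {"ename": "Wong", "dno": 4}]
dept = [{"dname": "Research", "dnumber": 5}]
print(nested_loop_join(emp, dept, lambda e, d: e["dno"] == d["dnumber"]))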

Page 44: 1 Algorithms for Query Processing and Optimization.

44

Nested-Loop Join (Cont.)

In the worst case, if there is only enough memory to hold one block of each relation (3 blocks in total), the estimated cost is
  C = bR + (rR * bS) + bRS

If the smaller relation fits entirely in memory, use it as the inner relation; the cost is reduced to bR + bS + bRS.

If both relations fit entirely in memory, the cost is bR + bS + bRS.

Page 45: 1 Algorithms for Query Processing and Optimization.

45

Nested-Loop Join (Cont.)

Examples use the following information for customer ⋈ depositor:
  # of records: rcustomer = 10,000, rdepositor = 5,000
  # of blocks:  bcustomer = 400,   bdepositor = 100

Assuming the worst case, the join cost estimate is:
  If depositor is used as the outer relation:
    C = 5,000 * 400 + 100 + bRS = 2,000,100 + bRS
  If customer is used as the outer relation:
    C = 10,000 * 100 + 400 + bRS = 1,000,400 + bRS
  If the smaller relation (depositor) fits entirely in memory:
    C = 100 + 400 + bRS = 500 + bRS

Page 46: 1 Algorithms for Query Processing and Optimization.

46

Block Nested-Loop Join

Every block of the inner relation is paired with every block of the outer relation:

for each block BR of R do
    for each block BS of S do
        for each tuple tR in BR do
            for each tuple tS in BS do
                if tR and tS satisfy θ, add tR • tS to the result
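A sketch of the block nested-loop variant; the relations are pre-split into lists of blocks so that the pairing of blocks (rather than individual tuples) in the outer two loops is visible. The interface is illustrative.

def block_nested_loop_join(R_blocks, S_blocks, theta):
    """Block nested-loop join: pair every block of S with every block of R.

    R_blocks, S_blocks: relations already split into blocks (lists of tuples).
    Each block of R is read once; all of S is rescanned once per R block.
    """
    result = []
    for BR in R_blocks:                    # one pass over the outer relation
        for BS in S_blocks:                # full scan of the inner relation per R block
            for tR in BR:
                for tS in BS:
                    if theta(tR, tS):
                        result.append({**tR, **tS})
    return result

emp_blocks = [[{"ename": "Smith", "dno": 5}], [{"ename": "Wong", "dno": 4}]]
dept_blocks = [[{"dname": "Research", "dnumber": 5}, {"dname": "Admin", "dnumber": 4}]]
print(block_nested_loop_join(emp_blocks, dept_blocks,
                             lambda e, d: e["dno"] == d["dnumber"]))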

Page 47: 1 Algorithms for Query Processing and Optimization.

47

Block Nested-Loop Join

In the worst case, if there is only enough memory to hold one block of each relation (3 blocks in total), the estimated cost is
  C = bR + (bR * bS) + bRS

If both relations fit entirely in memory: C = bR + bS + bRS

If the smaller relation fits entirely in memory, use it as the inner relation: C = bR + bS + bRS

Choose the smaller relation for the outer loop if neither R nor S fits in memory.

Page 48: 1 Algorithms for Query Processing and Optimization.

48

Block Nested-Loop Join

Cost can be reduced to
  ⌈bR / (nB − 2)⌉ * bS + bR + bRS

Use nB − 2 blocks for the outer relation, where nB = memory buffer size in blocks; use the remaining two blocks for the inner relation and the output.

Page 49: 1 Algorithms for Query Processing and Optimization.

49

Block Nested-Loop Join

Cost can be reduced If equi-join or natural join attribute forms a key

on the inner relation, stop inner loop on first match

Scan inner loop forward and backward alternately, to make use of the blocks remaining in buffer.

Use index on inner relation if available

Page 50: 1 Algorithms for Query Processing and Optimization.

50

Indexed Nested-Loop Join (J2)

Index lookups can replace file scans if an index is available on the inner relation’s join attribute

Can construct an index just to compute a join.

For each tuple tR in the outer relation R, use the index to retrieve tuples in S that satisfy the join

condition with tuple tR

If indices are available on join attributes of both R and S, use the relation with fewer tuples as the outer relation.
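A sketch of J2 in which the "index" on the inner relation's join attribute is simulated with a Python dict; a real B+-tree or hash index would replace the dict lookup, and the attribute-name interface is an assumption made for the example.

from collections import defaultdict

def indexed_nested_loop_join(R, S, r_attr, s_attr):
    """J2: single-loop (indexed) join.

    An in-memory dict on S[s_attr] stands in for an index on the inner
    relation's join attribute; each outer tuple does one index lookup.
    """
    index = defaultdict(list)
    for tS in S:                                  # build the "index" on S
        index[tS[s_attr]].append(tS)
    result = []
    for tR in R:                                  # single loop over the outer relation
        for tS in index.get(tR[r_attr], []):      # index lookup instead of scanning S
            result.append({**tR, **tS})
    return result

emp = [{"ename": "Smith", "dno": 5}, {"ename": "Wong", "dno": 4}]
dept = [{"dname": "Research", "dnumber": 5}]
print(indexed_nested_loop_join(emp, dept, "dno", "dnumber"))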

Page 51: 1 Algorithms for Query Processing and Optimization.

51

Indexed Nested-Loop Join

Worst case: the buffer has space for only one page of R, and for each tuple in R we perform an index lookup on S.

The cost C depends on the type of index used:
  Secondary index:  C = bR + (rR * (x + s)) + bRS
  Clustering index: C = bR + (rR * (x + ⌈s/bfrS⌉)) + bRS
  Primary index:    C = bR + (rR * (x + 1)) + bRS
  Hash index:       C = bR + (rR * h) + bRS
    h is the average # of block accesses to retrieve a record

Page 52: 1 Algorithms for Query Processing and Optimization.

52

Example of Nested-Loop Join Costs

Compute depositor ⋈ customer, with depositor as the outer relation.
  # of records: rcustomer = 10,000, rdepositor = 5,000
  # of blocks:  bcustomer = 400,   bdepositor = 100

Let customer have a primary B+-tree index on the join attribute customer-name, with 20 entries per index node.

Since customer has 10,000 tuples, the height of the tree is 4, and one more access is needed to find the actual data, giving 5 accesses per lookup.

Page 53: 1 Algorithms for Query Processing and Optimization.

53

Example of Nested-Loop Join Costs

Cost of block nested-loop join (worst case):
  100 + 100 * 400 + bRS = 40,100 + bRS
  (may be significantly less with more memory)

Cost of indexed nested-loop join:
  100 + 5,000 * 5 + bRS = 25,100 + bRS

READ "Effects of Available Buffer ..." pages 549-550, and "Example of Using the Cost ..." page 572.

Page 54: 1 Algorithms for Query Processing and Optimization.

54

Sort-Merge Join (J3)

Sort both relations on their join attribute if not already sorted on the join attributes

Merge the sorted relations to join them Join step is similar to the merge stage of the

external sort-merge algorithm .

Main difference is handling of duplicate values in join attribute every pair with same value on join attribute must be

matched

Can be used only for equi-joins and natural joins

Page 55: 1 Algorithms for Query Processing and Optimization.

55

Sort-Merge Join

sort R on A and S on B
while not eof(R) and not eof(S) do {
    scan R and S concurrently until tR.A = tS.B;
    if (tR.A = tS.B = c)
        output σA=c(R) ⋈ σB=c(S)
}
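A sketch of J3 for an equi-join on already-sorted inputs; the group-matching loop handles duplicate join values, and the attribute names are passed as parameters (an illustrative interface, not the slides').

def sort_merge_join(R, S, a, b):
    """J3: merge join of R and S, assumed sorted on join attributes a and b."""
    result, i, j = [], 0, 0
    while i < len(R) and j < len(S):
        if R[i][a] < S[j][b]:
            i += 1
        elif R[i][a] > S[j][b]:
            j += 1
        else:
            c = R[i][a]                        # current matching join value
            i_end = i
            while i_end < len(R) and R[i_end][a] == c:
                i_end += 1
            j_end = j
            while j_end < len(S) and S[j_end][b] == c:
                j_end += 1
            for tR in R[i:i_end]:              # match every pair with value c
                for tS in S[j:j_end]:
                    result.append({**tR, **tS})
            i, j = i_end, j_end
    return result

R = [{"a": 1, "x": "r1"}, {"a": 2, "x": "r2"}, {"a": 2, "x": "r3"}]
S = [{"b": 2, "y": "s1"}, {"b": 3, "y": "s2"}]
print(sort_merge_join(R, S, "a", "b"))         # two matches on join value 2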

Page 56: 1 Algorithms for Query Processing and Optimization.

56

Sort-Merge Join

Each block needs to be read only once.

Cost of sorting, assuming nB buffers:
  2bR (⌈log nB−1(bR/nB)⌉ + 1) + 2bS (⌈log nB−1(bS/nB)⌉ + 1)

Thus the number of block accesses for merge join is
  bR + bS + sorting cost (if not already sorted) + bRS

Page 57: 1 Algorithms for Query Processing and Optimization.

57

Hash-Join (J4)

Applicable for equi & natural joins R A=BS.

Applicable if a hash table for the smaller of the two files can be kept entirely in memeory.

A hash function h is used to partition tuples of both relations

h maps join attribute A values to {1, ..., M} R1, R2, …, RM denote partitions of R tuples

Each tR R is put in partition Ri, where i = h(tR [A]) S1, S2, …, SM denotes partitions of s tuples

Each tS S is put in partition Si, where i = h(tS [A])

Page 58: 1 Algorithms for Query Processing and Optimization.

58

Hash-Join

Page 59: 1 Algorithms for Query Processing and Optimization.

59

Hash-Join

R tuples in Ri need only to be compared with S tuples in Si

Need not be compared with S tuples in any other partition, since: an R tuple and an S tuple that satisfy the join

condition will have the same value for the join attributes.

If that value is hashed to some value i, the R tuple has to be in Ri and the S tuple in Si

Page 60: 1 Algorithms for Query Processing and Optimization.

60

Hash-Join Algorithm

1. Partition Phase (using hash function h):
   Partition R into M partitions: R1, R2, ..., RM; partition S similarly.
   M + 1 blocks of memory buffer are needed.
   A disk sub-file is created per partition to store the tuples for that partition.

2. Probing Phase (Joining):
   For i = 1 to M (the two partitions Ri and Si are joined):
     (a) Load the partition with the smaller # of blocks (say Ri) into memory and build an in-memory hash index on it using the join attributes. This hash index uses a different hash function than the earlier one h.
     (b) For each tuple tS in Si, locate each matching tuple tR in Ri using the in-memory hash index.
     (c) Output the concatenation of their attributes.
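A sketch of the two phases of J4; partitions live in ordinary lists instead of disk sub-files, Python's built-in hash stands in for h, and a dict plays the role of the in-memory hash index built in the probing phase.

from collections import defaultdict

def hash_join(R, S, a, b, M=4):
    """J4: partition hash join of R (build input) and S (probe input)."""
    # Partition phase: hash both relations on the join attribute into M partitions.
    R_parts = defaultdict(list)
    S_parts = defaultdict(list)
    for tR in R:
        R_parts[hash(tR[a]) % M].append(tR)
    for tS in S:
        S_parts[hash(tS[b]) % M].append(tS)

    # Probing phase: join each pair of partitions Ri, Si.
    result = []
    for i in range(M):
        index = defaultdict(list)              # in-memory hash index on Ri
        for tR in R_parts[i]:
            index[tR[a]].append(tR)
        for tS in S_parts[i]:                  # probe with each tuple of Si
            for tR in index.get(tS[b], []):
                result.append({**tR, **tS})
    return result

dept = [{"dname": "Research", "dnumber": 5}, {"dname": "Admin", "dnumber": 4}]
emp = [{"ename": "Smith", "dno": 5}, {"ename": "Wong", "dno": 4}]
print(hash_join(dept, emp, "dnumber", "dno"))  # dept is the (smaller) build input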

Page 61: 1 Algorithms for Query Processing and Optimization.

61

Hash-Join Algorithm

Relation R is called the build input and S is called the probe (search) input.

The value M and the hash function h are chosen such that each Ri and its in-memory hash index fit in memory; typically M ≥ bR/nB.

Use the smaller relation as the build relation. The probe-relation partitions Si need not fit in memory.

Page 62: 1 Algorithms for Query Processing and Optimization.

62

Cost of Hash-Join

Cost of hash join:
  Reading and writing each record of R and S during the partitioning phase: (bR + bS) + (bR + bS)
  Reading each record during the joining phase: (bR + bS)
  Writing the result of the join: bRS
  Total cost: 3(bR + bS) + bRS

If the entire build input can be kept in main memory, the algorithm does not partition the relations into temporary files; the cost estimate goes down to bR + bS + bRS.

Page 63: 1 Algorithms for Query Processing and Optimization.

63

Example of Cost of Hash-Join

Example for customer ⋈ depositor:

Assume the memory size is nB = 20 blocks, bdepositor = 100, and bcustomer = 400; depositor is to be used as the build input.

Partition it into M = 5 partitions, each of size 20 blocks. This partitioning can be done in one pass.

Similarly, partition customer into five partitions, each of size 80. This is also done in one pass.

Therefore the total cost = 3(100 + 400) + bRS = 1,500 + bRS

Page 64: 1 Algorithms for Query Processing and Optimization.

64

Partition Hash Join

Partition hash join

Partitioning phase:
  Each file (R and S) is first partitioned into M partitions using a partitioning hash function on the join attributes:
    R1, R2, R3, ..., RM and S1, S2, S3, ..., SM
  Minimum number of in-memory buffers needed for the partitioning phase: M + 1.
  A disk sub-file is created per partition to store the tuples for that partition.

Joining or probing phase:
  Involves M iterations, one per partitioned file. Iteration i involves joining partitions Ri and Si.

Page 65: 1 Algorithms for Query Processing and Optimization.

65

Partition Hash Join

Partitioned Hash Join Procedure (assume Ri is smaller than Si):

1. Copy the records of Ri into memory buffers.

2. Read all blocks of Si, one at a time; each record of Si is used to probe for matching record(s) in partition Ri.

3. On a match, write the joined record (the Ri record concatenated with the Si record) to the result file.

Page 66: 1 Algorithms for Query Processing and Optimization.

66

Partition Hash Join

Cost analysis of partition hash join:
1. Reading and writing each record of R and S during the partitioning phase: (bR + bS) + (bR + bS)
2. Reading each record during the joining phase: (bR + bS)
3. Writing the result of the join: bRS

Total cost: 3(bR + bS) + bRS

Page 67: 1 Algorithms for Query Processing and Optimization.

67

Hybrid Hash–Join

Hybrid hash join: same as partitioned hash join except that the joining phase of one of the partitions is included during the partitioning phase.

Partitioning phase:
  Allocate the buffers for the smaller relation: one block for each of the other M − 1 partitions, and the remaining blocks to partition 1, which is kept entirely in memory. Repeat for the larger relation in the pass through S.

Joining phase:
  M − 1 iterations are needed for the partitions R2, R3, ..., RM and S2, S3, ..., SM.
  R1 and S1 are joined during the partitioning of S, and the results of joining R1 and S1 are already written to disk by the end of the partitioning phase.

Page 68: 1 Algorithms for Query Processing and Optimization.

68

Hybrid Hash–Join

E.g., with a memory size of 25 blocks, depositor can be partitioned into five partitions, each of size 20 blocks.

Division of memory: the first partition of R (R1) is kept in memory (it occupies 20 blocks); 1 block is used for input, and 1 block each for buffering the other 4 partitions.

customer is similarly partitioned into five partitions, each of size 80; its 1st partition is used right away for probing, instead of being written out and read back.

Page 69: 1 Algorithms for Query Processing and Optimization.

69

Hybrid Hash–Join

Cost of hybrid hash join:
  3(100 − 20 + 400 − 80) + 20 + 80 + bRS = 1,300 + bRS
  instead of 1,500 + bRS with plain hash join

Page 70: 1 Algorithms for Query Processing and Optimization.

70

Complex Joins

Join with a conjunctive condition: R ⋈θ1 ∧ θ2 ∧ ... ∧ θn S

Either use nested-loop / block nested-loop join, or

Compute the result of one of the simpler joins R ⋈θi S; the final result comprises those tuples in the intermediate result that satisfy the remaining conditions
  θ1 ∧ ... ∧ θi−1 ∧ θi+1 ∧ ... ∧ θn

Page 71: 1 Algorithms for Query Processing and Optimization.

71

Complex Joins

Join with a disjunctive condition: R ⋈θ1 ∨ θ2 ∨ ... ∨ θn S

Compute it as the union of the records in the individual joins R ⋈θi S:
  (R ⋈θ1 S) ∪ (R ⋈θ2 S) ∪ ... ∪ (R ⋈θn S)

Page 72: 1 Algorithms for Query Processing and Optimization.

72

Duplicate Elimination

Duplicate elimination can be implemented via hashing or sorting. On sorting, duplicates will come adjacent to each

other, and all but one set of duplicates can be deleted.

Optimization: duplicates can be deleted during run generation as well as at intermediate merge steps in external sort-merge.

Cost is the same as the cost of sorting

Hashing is similar – duplicates will come into the same bucket.

Page 73: 1 Algorithms for Query Processing and Optimization.

73

PROJECT

Algorithm for PROJECT operations (Figure 15.3b): π<attribute list>(R)

1. If <attribute list> includes a key of relation R, extract all tuples from R with only the values for the attributes in <attribute list>.

2. If <attribute list> does NOT include a key of relation R, duplicate tuples must be removed from the result.

Methods to remove duplicate tuples:
1. Sorting
2. Hashing

Page 74: 1 Algorithms for Query Processing and Optimization.

74

Cartesian Product

The CARTESIAN PRODUCT of relations R and S includes all possible combinations of records from R and S. The attributes of the result include all attributes of R and S.

Cost analysis of CARTESIAN PRODUCT: if R has n records and j attributes and S has m records and k attributes, the result relation will have n*m records and j+k attributes.

The CARTESIAN PRODUCT operation is very expensive and should be avoided if possible.

Page 75: 1 Algorithms for Query Processing and Optimization.

75

Set Operations

R ∪ S (see Figure 15.3c):
  1. Sort the two relations on the same attributes.
  2. Scan and merge both sorted files concurrently; whenever the same tuple exists in both relations, only one is kept in the merged result.

R ∩ S (see Figure 15.3d):
  1. Sort the two relations on the same attributes.
  2. Scan and merge both sorted files concurrently; keep in the merged result only those tuples that appear in both relations.

R − S (see Figure 15.3e):
  Keep in the merged result only those tuples that appear in relation R but not in relation S.

Page 76: 1 Algorithms for Query Processing and Optimization.

76

Aggregate Operations

The aggregate operations MIN, MAX, COUNT, AVERAGE, and SUM can be computed by scanning all of the records (the worst case).

If an index exists on the attribute of a MAX or MIN operation, then these operations can be done in a much more efficient way:
  SELECT MAX/MIN (SALARY) FROM EMPLOYEE
  If an (ascending) index on SALARY exists for the EMPLOYEE relation, then the optimizer can decide to traverse the index for the largest/smallest value, which entails following the rightmost/leftmost pointer in each index node from the root to a leaf.

Page 77: 1 Algorithms for Query Processing and Optimization.

77

Aggregate Operations

SUM, COUNT, and AVG:
  For a dense index (each record has one index entry): apply the associated computation to the values in the index.
  For a non-dense index: the actual number of records associated with each index entry must be accounted for.

With GROUP BY: the aggregate operator must be applied separately to each group of tuples.
  Use sorting or hashing on the grouping attributes to partition the file into the appropriate groups;
  compute the aggregate function for the tuples in each group.

What if we have a clustering index on the grouping attributes?

Page 78: 1 Algorithms for Query Processing and Optimization.

78

Outer Joins

Outer join can be computed either as:

1. Modifying Join Algorithms:

Nested Loop or Sort-Merge joins can be modified to implement outer join. E.g., For left outer join, use the left relation as outer relation and

construct result from every tuple in the left relation. If there is a match, the concatenated tuple is saved in the result. However, if an outer tuple does not match, then the tuple is still

included in the result but is padded with a null value(s).

Page 79: 1 Algorithms for Query Processing and Optimization.

79

Outer Joins

2. Executing a combination of relational algebra operators.

SELECT FNAME, DNAME
FROM (EMPLOYEE LEFT OUTER JOIN DEPARTMENT ON DNO = DNUMBER);

{Compute the JOIN of the EMPLOYEE and DEPARTMENT tables}
  TEMP1 ← πFNAME,DNAME(EMPLOYEE ⋈DNO=DNUMBER DEPARTMENT)

{Find the EMPLOYEEs that do not appear in the JOIN}
  TEMP2 ← πFNAME(EMPLOYEE) − πFNAME(TEMP1)

{Pad each tuple in TEMP2 with a null DNAME field}
  TEMP2 ← TEMP2 × 'null'

{UNION the temporary tables to produce the LEFT OUTER JOIN}
  RESULT ← TEMP1 ∪ TEMP2

The cost of the outer join, as computed above, would include the cost of the associated steps (i.e., join, projections, and union).

Page 80: 1 Algorithms for Query Processing and Optimization.

80

6. Combining Operations using Pipelining

Motivation:
  A query is mapped into a sequence of operations.
  Each execution of an operation produces a temporary result (materialization).
  Generating and saving temporary files on disk is time consuming.

Alternative:
  Avoid constructing temporary results as much as possible.
  Pipeline the data through multiple operations: pass the result of a previous operator to the next without waiting for the previous operation to complete.
  Also known as stream-based processing.
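A sketch contrasting the two evaluation styles in Python: the materialized version builds the full intermediate list for the selection, while the pipelined version uses generators so each tuple flows from the scan through the selection to the projection without a stored intermediate. The relation and attribute names are illustrative.

def scan(relation):
    for t in relation:              # producer: stream tuples one at a time
        yield t

def select(tuples, pred):
    for t in tuples:                # selection operator, pipelined
        if pred(t):
            yield t

def project(tuples, attrs):
    for t in tuples:                # projection operator, pipelined
        yield {a: t[a] for a in attrs}

account = [{"acct": 1, "balance": 1200}, {"acct": 2, "balance": 9000}]

# Materialized: the selection result is stored before the projection runs.
temp = [t for t in account if t["balance"] < 2500]
materialized = [{"balance": t["balance"]} for t in temp]

# Pipelined: no intermediate relation; tuples flow through the operators.
pipelined = list(project(select(scan(account), lambda t: t["balance"] < 2500),
                         ["balance"]))
print(materialized == pipelined)    # True: same result, different evaluation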

Page 81: 1 Algorithms for Query Processing and Optimization.

81

Materialization

Materialized evaluation:
  Evaluate one operation at a time, starting at the lowest level.
  Use intermediate results materialized into temporary relations to evaluate next-level operations.
  E.g., materialize the result of σbalance<2500(account) before applying the next operator.

Page 82: 1 Algorithms for Query Processing and Optimization.

82

Pipelining

Pipelined evaluation:
  Evaluate several operations simultaneously, passing the results of one operation on to the next.
  E.g., in the previous expression tree, don't store the result of σbalance<2500(account); instead, pass its tuples directly to the join. Similarly, don't store the result of the join; pass its tuples directly to the projection.

Much cheaper than materialization: no need to store a temporary relation on disk.

Pipelining may not always be possible, e.g., for sort and hash-join.

Page 83: 1 Algorithms for Query Processing and Optimization.

83

7. Using Heuristics in Query Optimization(1)

Process for heuristic optimization:
1. The parser of a high-level query generates an initial internal representation.
2. Apply heuristic rules to optimize the internal representation.
3. A query execution plan is generated to execute groups of operations based on the access paths available on the files involved in the query.

The main heuristic is to apply first the operations that reduce the size of intermediate results, e.g., apply SELECT and PROJECT operations before applying the JOIN or other binary operations.

Page 84: 1 Algorithms for Query Processing and Optimization.

84

Using Heuristics in Query Optimization (2)

Query tree: A tree data structure that corresponds to a relational

algebra expression. It represents the input relations of the query as leaf nodes of the tree, and represents the relational algebra operations as internal nodes.

An execution of the query tree consists of executing an internal node operation whenever its operands are available and then replacing that internal node by the relation that results from executing the operation.

Query graph: A graph data structure that corresponds to a relational

calculus expression. It does not indicate an order on which operations to perform first. There is only a single graph corresponding to each query.

Page 85: 1 Algorithms for Query Processing and Optimization.

85

Using Heuristics in Query Optimization (3)

Example: For every project located in 'Stafford', retrieve the project number, the controlling department number, and the department manager's last name, address, and birthdate.

Relational algebra:
  πPNUMBER, DNUM, LNAME, ADDRESS, BDATE (((σPLOCATION='STAFFORD'(PROJECT)) ⋈DNUM=DNUMBER (DEPARTMENT)) ⋈MGRSSN=SSN (EMPLOYEE))

SQL query:
  Q2: SELECT P.PNUMBER, P.DNUM, E.LNAME, E.ADDRESS, E.BDATE
      FROM PROJECT AS P, DEPARTMENT AS D, EMPLOYEE AS E
      WHERE P.DNUM=D.DNUMBER AND D.MGRSSN=E.SSN AND P.PLOCATION='STAFFORD';

Page 86: 1 Algorithms for Query Processing and Optimization.

86

Using Heuristics in Query Optimization (4)

Page 87: 1 Algorithms for Query Processing and Optimization.

87

Using Heuristics in Query Optimization (5)

Page 88: 1 Algorithms for Query Processing and Optimization.

88

Using Heuristics in Query Optimization (6)

Heuristic Optimization of Query Trees: The same query could correspond to many different relational algebra expressions, and hence to many different query trees.

The task of heuristic optimization of query trees is to find a final query tree that is efficient to execute.

Example:
  Q: SELECT LNAME
     FROM EMPLOYEE, WORKS_ON, PROJECT
     WHERE PNAME = 'AQUARIUS' AND PNUMBER=PNO AND ESSN=SSN AND BDATE > '1957-12-31';

Page 89: 1 Algorithms for Query Processing and Optimization.

89

Using Heuristics in Query Optimization (7)

Page 90: 1 Algorithms for Query Processing and Optimization.

90

Using Heuristics in Query Optimization (8)

Page 91: 1 Algorithms for Query Processing and Optimization.

91

Transformation Rules for RA Operations

1. Cascade of σ: A conjunctive selection condition can be broken up into a cascade (sequence) of individual σ operations:
   σc1 AND c2 AND ... AND cn(R) ≡ σc1(σc2(...(σcn(R))...))

2. Commutativity of σ: The σ operation is commutative:
   σc1(σc2(R)) ≡ σc2(σc1(R))

3. Cascade of π: In a cascade (sequence) of π operations, all but the last one can be ignored:
   πList1(πList2(...(πListn(R))...)) ≡ πList1(R)

4. Commuting σ with π: If the selection condition c involves only the attributes A1, ..., An in the projection list, the two operations can be commuted:
   πA1, A2, ..., An(σc(R)) ≡ σc(πA1, A2, ..., An(R))

Page 92: 1 Algorithms for Query Processing and Optimization.

92

Transformation Rules for RA Operations

5. Commutativity of ⋈ (or ×): The ⋈ operation is commutative, as is the × operation:
   R ⋈c S ≡ S ⋈c R;  R × S ≡ S × R

6. Commuting σ with ⋈ (or ×): If all the attributes in the selection condition c involve only the attributes of one of the relations being joined, say R, the two operations can be commuted as follows:
   σc(R ⋈ S) ≡ (σc(R)) ⋈ S

   Alternatively, if the selection condition c can be written as (c1 AND c2), where condition c1 involves only the attributes of R and condition c2 involves only the attributes of S, the operations commute as follows:
   σc(R ⋈ S) ≡ (σc1(R)) ⋈ (σc2(S))

Page 93: 1 Algorithms for Query Processing and Optimization.

93

Transformation Rules for RA Operations

7. Commuting π with ⋈ (or ×): Suppose that the projection list is L = {A1, ..., An, B1, ..., Bm}, where A1, ..., An are attributes of R and B1, ..., Bm are attributes of S. If the join condition c involves only attributes in L, the two operations can be commuted as follows:
   πL(R ⋈c S) ≡ (πA1, ..., An(R)) ⋈c (πB1, ..., Bm(S))

   If the join condition c contains additional attributes not in L, these must be added to the projection lists, and a final π operation is needed.

Page 94: 1 Algorithms for Query Processing and Optimization.

94

Transformation Rules for RA Operations

8. Commutativity of set operations: The set operations ∪ and ∩ are commutative, but − is not.

9. Associativity of ⋈, ×, ∪, and ∩: These four operations are individually associative; that is, if θ stands for any one of these four operations (throughout the expression), we have
   (R θ S) θ T ≡ R θ (S θ T)

10. Commuting σ with set operations: The σ operation commutes with ∪, ∩, and −. If θ stands for any one of these three operations, we have
    σc(R θ S) ≡ (σc(R)) θ (σc(S))

Page 95: 1 Algorithms for Query Processing and Optimization.

95

Transformation Rules for RA Operations

11. The π operation commutes with ∪:
    πL(R ∪ S) ≡ (πL(R)) ∪ (πL(S))

12. Converting a (σ, ×) sequence into ⋈: If the condition c of a σ that follows a × corresponds to a join condition, convert the (σ, ×) sequence into a ⋈:
    (σc(R × S)) ≡ (R ⋈c S)

13. Other transformations (see page 564).

Page 96: 1 Algorithms for Query Processing and Optimization.

96

Heuristic Algebraic Optimization Algorithm

1. Using rule 1, break up any SELECT operations with conjunctive conditions into a cascade of SELECT operations:
   σc1 AND c2 AND … AND cn(R) ≡ σc1(σc2(…(σcn(R))…))

2. Using rules 2, 4, 6, and 10 concerning the commutativity of SELECT with other operations, move each SELECT operation as far down the query tree as is permitted by the attributes involved in the select condition.

3. Using rule 9 concerning the associativity of binary operations, rearrange the leaf nodes of the tree:
   Position the leaf-node relations with the most restrictive σ operations so they are executed first.
   Make sure that the ordering of leaf nodes does not cause CARTESIAN PRODUCT operations.

Page 97: 1 Algorithms for Query Processing and Optimization.

97

Heuristic Algebraic Optimization Algorithm

4. Using rule 12, combine a CARTESIAN PRODUCT operation with a subsequent SELECT operation in the tree into a JOIN operation, i.e., combine a × with a subsequent σ into a ⋈.

5. Using rules 3, 4, 7, and 11 concerning the cascading of PROJECT and the commuting of PROJECT with other operations, break down and move lists of projection attributes down the tree as far as possible by creating new PROJECT operations as needed.

6. Identify subtrees that represent groups of operations that can be executed by a single algorithm.

Page 98: 1 Algorithms for Query Processing and Optimization.

98

Summary of Heuristics for Algebraic Optimization

The main heuristic is to apply first the operations that reduce the size of intermediate results.

Perform select operations as early as possible to reduce the number of tuples and perform project operations as early as possible to reduce the number of attributes. (This is done by moving select and project operations as far down the tree as possible.)

The select and join operations that are most restrictive should be executed before other similar operations. (This is done by reordering the leaf nodes of the tree among themselves and adjusting the rest of the tree appropriately.)

Page 99: 1 Algorithms for Query Processing and Optimization.

99

Query Execution Plans

An execution plan for a relational algebra query consists of a combination of the relational algebra query tree and information about the access methods to be used for each relation as well as the methods to be used in computing the relational operators stored in the tree.

Materialized evaluation: the result of an operation is stored as a temporary relation.

Pipelined evaluation: as the result of an operator is produced, it is forwarded to the next operator in sequence.

Page 100: 1 Algorithms for Query Processing and Optimization.

100

8. Using Selectivity and Cost Estimates in Query Optimization (1)

Cost-based query optimization: Estimate and compare the costs of executing a query using different execution strategies, and choose the strategy with the lowest cost estimate. (Compare to heuristic query optimization.)

Issues:
  Cost function
  Number of execution strategies to be considered

Page 101: 1 Algorithms for Query Processing and Optimization.

101

Using Selectivity and Cost Estimates in Query Optimization (2)

Cost Components for Query Execution:
1. Access cost to secondary storage
2. Storage cost
3. Computation cost
4. Memory usage cost
5. Communication cost

Note: Different database systems may focus on different cost components.

Page 102: 1 Algorithms for Query Processing and Optimization.

102

Using Selectivity and Cost Estimates in Query Optimization (3)

Catalog Information Used in Cost Functions

Information about the size of a file:
  number of records (tuples) (r), record size (R), number of blocks (b), blocking factor (bfr)

Information about indexes and indexing attributes of a file:
  Number of levels (x) of each multilevel index
  Number of first-level index blocks (bI1)
  Number of distinct values (d) of an attribute
  Selectivity (sl) of an attribute
  Selection cardinality (s) of an attribute (s = sl * r)

Page 103: 1 Algorithms for Query Processing and Optimization.

103

Using Selectivity and Cost Estimates in Query Optimization (4)

Examples of Cost Functions for SELECT

S1. Linear search (brute force) approach:
  CS1a = b; for an equality condition on a key, CS1a = (b/2) if the record is found, otherwise CS1a = b.

S2. Binary search:
  CS2 = ⌈log2 b⌉ + ⌈s/bfr⌉ − 1
  For an equality condition on a unique (key) attribute, CS2 = ⌈log2 b⌉.

S3. Using a primary index (S3a) or hash key (S3b) to retrieve a single record:
  CS3a = x + 1; CS3b = 1 for static or linear hashing; CS3b = 2 for extendible hashing (as in S3 above).

Page 104: 1 Algorithms for Query Processing and Optimization.

104

Using Selectivity and Cost Estimates in Query Optimization (5)

Examples of Cost Functions for SELECT (contd.)

S4. Using an ordering index to retrieve multiple records:
  For a comparison condition on a key field with an ordering index, CS4 = x + (b/2).

S5. Using a clustering index to retrieve multiple records:
  CS5 = x + ⌈s/bfr⌉

S6. Using a secondary (B+-tree) index:
  For an equality comparison, CS6a = x + s;
  For a comparison condition such as >, <, >=, or <=, CS6b = x + (bI1/2) + (r/2).

Page 105: 1 Algorithms for Query Processing and Optimization.

105

Using Selectivity and Cost Estimates in Query Optimization (6)

Examples of Cost Functions for SELECT (contd.) S7. Conjunctive selection:

Use either S1 or one of the methods S2 to S6 to solve. For the latter case, use one condition to retrieve the

records and then check in the memory buffer whether each retrieved record satisfies the remaining conditions in the conjunction.

S8. Conjunctive selection using a composite index: Same as S3a, S5 or S6a, depending on the type of

index.

Examples of using the cost functions.

Page 106: 1 Algorithms for Query Processing and Optimization.

106

Using Selectivity and Cost Estimates in Query Optimization (7)

Examples of Cost Functions for JOIN

Join selectivity (js):
  js = |(R ⋈C S)| / |R × S| = |(R ⋈C S)| / (|R| * |S|)
  If condition C does not exist, js = 1.
  If no tuples from the relations satisfy condition C, js = 0.
  Usually, 0 <= js <= 1.

Size of the result file after the join operation:
  |(R ⋈C S)| = js * |R| * |S|

Page 107: 1 Algorithms for Query Processing and Optimization.

107

Using Selectivity and Cost Estimates in Query Optimization (8)

Examples of Cost Functions for JOIN (contd.) J1. Nested-loop join:

CJ1 = bR + (bR*bS) + ((js* |R|* |S|)/bfrRS) (Use R for outer loop)

J2. Single-loop join (using an access structure to retrieve the matching record(s)) If an index exists for the join attribute B of S with

index levels xB, we can retrieve each record s in R and then use the index to retrieve all the matching records t from S that satisfy t[B] = s[A].

The cost depends on the type of index.

Page 108: 1 Algorithms for Query Processing and Optimization.

108

Using Selectivity and Cost Estimates in Query Optimization (9)

Examples of Cost Functions for JOIN (contd.) J2. Single-loop join (contd.)

For a secondary index, CJ2a = bR + (|R| * (xB + sB)) + ((js* |R|* |S|)/bfrRS);

For a clustering index, CJ2b = bR + (|R| * (xB + (sB/bfrB))) + ((js* |R|* |S|)/bfrRS);

For a primary index, CJ2c = bR + (|R| * (xB + 1)) + ((js* |R|* |S|)/bfrRS);

If a hash key exists for one of the two join attributes — B of S CJ2d = bR + (|R| * h) + ((js* |R|* |S|)/bfrRS);

J3. Sort-merge join: CJ3a = CS + bR + bS + ((js* |R|* |S|)/bfrRS); (CS: Cost for sorting files)
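A sketch that evaluates the J1, J2 (primary index), and J3 cost functions above; the statistics dictionary and the function interface are illustrative assumptions.

import math

def join_cost(method, st):
    """Estimated block accesses for join cost functions J1, J2 (primary index), J3.

    st: catalog statistics with bR, bS (blocks), rR, rS (records), js (join
    selectivity), bfrRS (blocking factor of the result), plus x (index levels)
    for J2 and cs (sorting cost) for J3 when the files are not already sorted.
    """
    b_rs = math.ceil(st["js"] * st["rR"] * st["rS"] / st["bfrRS"])   # result size in blocks
    if method == "J1":          # nested-loop join, R as the outer relation
        return st["bR"] + st["bR"] * st["bS"] + b_rs
    if method == "J2":          # single-loop join via a primary index on S
        return st["bR"] + st["rR"] * (st["x"] + 1) + b_rs
    if method == "J3":          # sort-merge join (add cs if sorting is needed)
        return st.get("cs", 0) + st["bR"] + st["bS"] + b_rs
    raise ValueError(method)

st = {"bR": 100, "bS": 400, "rR": 5000, "rS": 10000,
      "js": 1 / 10000, "bfrRS": 10, "x": 4}
print({m: join_cost(m, st) for m in ["J1", "J2", "J3"]})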

Page 109: 1 Algorithms for Query Processing and Optimization.

109

Using Selectivity and Cost Estimates in Query Optimization (10)

Multiple Relation Queries and Join Ordering A query joining n relations will have n-1 join

operations, and hence can have a large number of different join orders when we apply the algebraic transformation rules.

Current query optimizers typically limit the structure of a (join) query tree to that of left-deep (or right-deep) trees.

Left-deep tree: A binary tree where the right child of each non-leaf

node is always a base relation. Amenable to pipelining Could utilize any access paths on the base relation (the right

child) when executing the join.

Page 110: 1 Algorithms for Query Processing and Optimization.

110

10. Semantic Query Optimization

Semantic Query Optimization: Uses constraints specified on the database schema in order to modify one query into another query that is more efficient to execute.

Consider the following SQL query:
  SELECT E.LNAME, M.LNAME
  FROM EMPLOYEE E, EMPLOYEE M
  WHERE E.SUPERSSN=M.SSN AND E.SALARY>M.SALARY

Explanation: Suppose that we had a constraint on the database schema stating that no employee can earn more than his or her direct supervisor. If the semantic query optimizer checks for the existence of this constraint, it need not execute the query at all because it knows that the result of the query will be empty. Techniques known as theorem proving can be used for this purpose.

Page 111: 1 Algorithms for Query Processing and Optimization.

111

Example (1)

An un-optimized relational algebra expression:

πName (σGPA≥3.5 AND Title='Ada Programming Language' AND Students.SSN=Enrollment.SSN AND Enrollment.Course_no=Courses.Course_no (Students × Enrollment × Courses))

Page 112: 1 Algorithms for Query Processing and Optimization.

112

Example (2)

Initial query tree: πName at the root, above a single σ node carrying the entire conjunctive condition (GPA ≥ 3.5 AND Title = 'Ada Programming Language' AND Students.SSN = Enrollment.SSN AND Enrollment.Course_no = Courses.Course_no), above the Cartesian products of the leaf relations Students, Enrollment, and Courses.

Page 113: 1 Algorithms for Query Processing and Optimization.

113

Example (3)

Perform selections as early as possible.

(Query tree: the selections σGPA≥3.5 on Students and σTitle='Ada Programming Language' on Courses are pushed down onto the leaf relations, while the join conditions Students.SSN = Enrollment.SSN and Enrollment.Course_no = Courses.Course_no remain as selections above the Cartesian products, under the final πName.)

Page 114: 1 Algorithms for Query Processing and Optimization.

114

Example (4)

Replace Cartesian products by joins.

(Query tree: each σ + × pair is converted into a join, giving (σGPA≥3.5(Students)) ⋈ Enrollment ⋈ (σTitle='Ada Programming Language'(Courses)), with πName at the root.)

Page 115: 1 Algorithms for Query Processing and Optimization.

115

Example (5)

Perform more restrictive joins first.

(Query tree: the joins are reordered so that the more restrictive join, involving σTitle='Ada Programming Language'(Courses) and Enrollment, is executed before the join with σGPA≥3.5(Students); πName remains at the root.)

Page 116: 1 Algorithms for Query Processing and Optimization.

116

Example (6)

Project out useless attributes early.

(Query tree: projections are pushed down, with πSSN, Name over σGPA≥3.5(Students), πSSN, Course_no over Enrollment, πCourse_no over σTitle='Ada Programming Language'(Courses), and πSSN over the intermediate join result, while πName is applied at the root.)

Page 117: 1 Algorithms for Query Processing and Optimization.

117

Example (7)

The optimized algebra expression is:

πName ((πSSN, Name (σGPA≥3.5 (Students))) ⋈ ((πSSN, Course_no (Enrollment)) ⋈ (πCourse_no (σTitle='Ada Programming Language' (Courses)))))

Projections and selections on the same relation are usually performed using the same scan of the relation.