Page 1:

© 2008 Quest Software, Inc. ALL RIGHTS RESERVED.

Benchmarking Advice & Recommendations

August 2008

Page 2:

Agenda

• This is meant to be more of an open discussion

• No set time per topic - each topic has just enough info to spur questions and/or open a dialog – so talk

• Feel free to ask questions about other topics

• No cell phones with ringer turned on (use vibrate)

• Email only during breaks under penalty of death

• Customers generally fail with BMF due to:
– Lack of preparation (70%)
– Unreasonable expectations (30%)

Page 3:

Benchmarks Require Preparation

Page 4:

Basics
• Architectural diagram of the database server, network and IO setup
• Reviewed the official benchmark specification to fully understand the test – critical step!!!
• Defined goals for satisfactory benchmark performance

– TPS, average transaction time, IO throughput, CPU utilization, memory consumption, network utilization, swapping level, etc.

• Verified the assumed capacity of each and every hardware component
• Select a database benchmarking tool (i.e. load generator) – BMF
• Select a database monitoring/diagnostic tool – TOAD DBA, PA & Spotlight
• Select an operating system monitoring/diagnostic tool – TOAD DBA, Spotlight & Foglight
• Select a database performance resolution/corrective tool – TOAD DBA

Storage
• Number of storage arrays being used
• Are the storage arrays virtualized or shared
• Storage array nature (i.e. SAN, NAS, iSCSI, NFS, etc.)
• Storage array connective bandwidth per storage array and total
• Amount of cache memory per storage array and total
• How many spindles per storage array and total
• Number of processors per storage array and total
• Amount of memory cache per storage array and total
• Storage array caching allocation settings, read vs. write
• Storage array caching size/algorithm for read-ahead settings
• Nature, size, speed and cache of disks per storage array and total
• Number of LUNs available for usage per storage array and total
• RAID level, stripe width and stripe size/length of the LUNs

Database Benchmarking Prep Checklist – Pg 1

Page 5:

Servers
• Number of database servers being used (usually one, unless clustering or replicating)
• Are the database servers virtualized or shared
• Database server architecture/nature (i.e. uni-processor, SMP, DSM, NUMA, ccNUMA, etc.)
• Database server CPU word-size and architecture/nature (i.e. RISC vs. CISC)
• Database server CPU physical count (slots) per database server and total
• Database server CPU logical count (cores) per database server and total
• Database server CPU speed and cache per logical unit (core)
• Hyper-threading turned off if it is available – critical, otherwise it will negatively skew results
• Amount, type and speed of RAM per database server and total
• Number and throughput of HBAs per database server and total
• HBA interconnect nature and speed (i.e. fiber, InfiniBand, 1Gb Ethernet, 10Gb Ethernet, etc.)
• Number and throughput of NICs per database server and total
• Database server interconnect nature and speed (if clustering or replicating)

Operating System
• Operating system word-size
• Operating system basic optimization parameters set or tuned
• Operating system database optimization parameters set or tuned
• Disk array and inter-node Ethernet NICs set to utilize jumbo frames

Network
• Matching cabling and switch/router throughput to fully leverage the NICs
• Disk array and inter-node Ethernet switches set to utilize jumbo frames
• Disk array and inter-node paths on private networks or private VLANs

Database Benchmarking Prep Checklist – Pg 2

Page 6:

Database
• Database version (e.g. partition differently under 10g vs. 11g)
• Database word-size
• Database basic optimization parameters set or tuned (see the sketch below)
• Database specific optimization parameters set or tuned for the given benchmark
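For example, a minimal Oracle-flavored sketch of the "check the word-size and tune the basic parameters" items above; the parameter names are real Oracle parameters, but the values shown are placeholder assumptions, not recommendations from this deck:

-- Confirm version and 32- vs. 64-bit word-size
SELECT banner FROM v$version;

-- Inspect a couple of commonly tuned parameters (SQL*Plus)
SHOW PARAMETER sga_target
SHOW PARAMETER db_file_multiblock_read_count

-- Adjust only after reading the benchmark spec and sizing your hardware;
-- 8G and 16 are illustrative values only
ALTER SYSTEM SET sga_target = 8G SCOPE = BOTH;
ALTER SYSTEM SET db_file_multiblock_read_count = 16 SCOPE = BOTH;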

Benchmark Factory
• Using most recent Benchmark Factory software version available (i.e. currently 5.7.0)
• Using the best available database driver for that database platform (native vs. ODBC)
• Starting a number of agents = max total concurrent users for the benchmark / 900
• Place no more than four concurrent agents for the same test on a single app server
• Customize the Benchmark Factory project meta-data for your specific needs:

– Partition or cluster tables
– Partition or cluster indexes
– Collect optimizer statistics
– Collect performance snapshot (e.g. Oracle Stats Pack or AWR snapshot)
– Run the workload
– Collect performance snapshot again (e.g. Oracle Stats Pack or AWR snapshot) – see the sketch below
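A hedged sketch of the "collect optimizer statistics" and "collect performance snapshot" steps as they might look on Oracle, using DBMS_STATS and DBMS_WORKLOAD_REPOSITORY (the TPCC schema name is an assumption):

-- Gather optimizer statistics on the benchmark schema (schema name assumed)
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'TPCC',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/

-- Take an AWR snapshot immediately before the workload and again right after,
-- so the run is bracketed by two snapshot IDs for later reporting
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;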

Database Benchmarking Prep Checklist – Pg 3

Page 7:

Standard Benchmark Options?

• TPC-A measures performance in update-intensive database environments typical in on-line transaction processing applications. (Obsolete as of 6/6/95)

• TPC-B measures throughput in terms of how many transactions per second a system can perform. (Obsolete as of 6/6/95)

• TPC-D represents a broad range of decision support (DS) applications that require complex, long running queries against large complex data structures. (Obsolete as of 4/6/99)

• TPC-R is a business reporting, decision support benchmark. (Obsolete as of 1/1/2005)

• TPC-W is a transactional web e-Commerce benchmark. (Obsolete as of 4/28/05)

• TPC-C is an on-line transaction processing benchmark. (showing its age – soon to be replaced by TPC-E)

• TPC-E is a new On-Line Transaction Processing (OLTP) workload.

• TPC-H is an ad-hoc, decision support benchmark.


Page 8:

Know Thy Test – Read The Spec !!!


If you don’t know this info, how can you set BMF parameters???

http://tpc.org/tpcc/spec/tpcc_current.pdf

Page 9:

Understand Database Design – TPC-C


Page 10:

Understand Database Design – TPC-H


Page 11:

Data Model if Unsure (TPC-H)


Page 12:

Say Goodbye to Simple Designs – TPC-E


Page 13:

Know Some of the Workload


If you don’t know what the database is being asked to do, then how can you tune the database instance parameters?

2.6 Shipping Priority Query (Q3)

This query retrieves the 10 unshipped orders with the highest value.

2.6.1 Business Question

The Shipping Priority Query retrieves the shipping priority and potential revenue, defined as the sum of l_extendedprice * (1-l_discount), of the orders having the largest revenue among those that had not been shipped as of a given date. Orders are listed in decreasing order of revenue. If more than 10 unshipped orders exist, only the 10 orders with the largest revenue are listed.

2.6.2 Functional Query Definition

Return the first 10 selected rows:

select
    l_orderkey,
    sum(l_extendedprice * (1 - l_discount)) as revenue,
    o_orderdate,
    o_shippriority
from
    customer, orders, lineitem
where
    c_mktsegment = '[SEGMENT]'
    and c_custkey = o_custkey
    and l_orderkey = o_orderkey
    and o_orderdate < date '[DATE]'
    and l_shipdate > date '[DATE]'
group by
    l_orderkey, o_orderdate, o_shippriority
order by
    revenue desc, o_orderdate;

2.6.3 Substitution Parameters

Values for the following substitution parameters must be generated and used to build the executable query text:
1. SEGMENT is randomly selected within the list of values defined for Segments in Clause 4.2.2.13;
2. DATE is a randomly selected day within [1995-03-01 .. 1995-03-31].
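To make the substitution concrete, here is the same Q3 with one legal choice of parameters plugged in – SEGMENT = 'BUILDING' and DATE = 1995-03-15, picked purely for illustration from the ranges above – plus the classic Oracle inline-view idiom for "return the first 10 rows":

select *
  from (select l_orderkey,
               sum(l_extendedprice * (1 - l_discount)) as revenue,
               o_orderdate,
               o_shippriority
          from customer, orders, lineitem
         where c_mktsegment = 'BUILDING'
           and c_custkey  = o_custkey
           and l_orderkey = o_orderkey
           and o_orderdate < date '1995-03-15'
           and l_shipdate  > date '1995-03-15'
         group by l_orderkey, o_orderdate, o_shippriority
         order by revenue desc, o_orderdate)
 where rownum <= 10;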

Page 14:

Time to Introduce BMF to the Equation

• BMF does three things (all per spec):

– Creates ANSI SQL standard database objects (tables, indexes, views)
– Loads those objects with the appropriate amount of data for the scale factor
– Creates the concurrent user workload or stream workload on the db server

• What BMF does NOT do:
– Does not partition, cluster, or apply any other advanced storage parameters
– Does not know about storage arrays & LUNs – so it does not spread IO
– Does not know about db optimization techniques, e.g. “gather stats”
– Does not monitor the benchmark workload – other than to show progress
– Does not diagnose the database tuning/optimization required to improve
– Does not diagnose the operating system tuning parms required to improve
– Does not diagnose the hardware configuration tuning required to improve
– Does not offer a “push one single button” benchmarking solution

• It’s a basic tool required to do the job, but it does not do the job for you – the user has to own & drive the process


Page 15:

Step 1 – Create Static Hold Schema & Load


That way a refresh of the data can occur in a few minutes using CTAS, rather than waiting on a BMF client load!!!
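In other words, let BMF load the data once into a static "hold" schema, then rebuild the run schema from it with CTAS for every fresh run. A minimal sketch of the idea (the TPCC_HOLD and TPCC_RUN schema names are assumptions; the actual scripts appear on the next slides):

-- One-time: BMF loads its objects into TPCC_HOLD and they are never touched again.
-- Per run: throw away the working copy and rebuild it from the hold schema.
DROP TABLE tpcc_run.c_warehouse PURGE;

CREATE TABLE tpcc_run.c_warehouse
NOLOGGING PARALLEL
AS
SELECT * FROM tpcc_hold.c_warehouse;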

Page 16:

Step 2 – Copy Schema, Gather Stats, Run, etc.


Page 17:

Copy User Script


set verify off

drop user &1 cascade;

CREATE USER &1 IDENTIFIED BY "&1"
  DEFAULT TABLESPACE "USERS"
  TEMPORARY TABLESPACE "TEMP"
  PROFILE DEFAULT
  QUOTA UNLIMITED ON "USERS";

GRANT "CONNECT" TO &1;
GRANT "RESOURCE" TO &1;
grant select any table to &1;
ALTER USER &1 DEFAULT ROLE "CONNECT", "RESOURCE";

purge recyclebin;

exit
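The &1 positional parameter lets the same script rebuild any schema. A hypothetical invocation from the OS prompt (the copy_user.sql file name, the system/manager credentials and the tpcc2 schema are examples, not names from the deck) – every &1 inside the script becomes "tpcc2":

sqlplus system/manager @copy_user.sql tpcc2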

Page 18:

Copy/Create Tables


set verify off

CREATE TABLE C_WAREHOUSE
TABLESPACE USERS
NOLOGGING NOCOMPRESS NOCACHE
PARALLEL (DEGREE &1)
NOMONITORING
AS
SELECT * FROM &4..C_WAREHOUSE;

create cluster c_district_cluster (d_id number, d_w_id number)
TABLESPACE USERS
single table
hashkeys 1008000
hash is ((d_w_id * 10) + d_id)
size 1448;

CREATE TABLE C_DISTRICT
CLUSTER c_district_cluster (d_id, d_w_id)
NOCOMPRESS
NOMONITORING
AS
SELECT * FROM &4..C_DISTRICT;

Why did I create a cluster?

Why this table in a cluster?

BMF does not really or easily support doing these kinds of things.

– Spec
– Disclosure reports
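For reference, a quick sanity check that the hash cluster took effect (a sketch assuming DBA access; the 1,008,000 hashkeys presumably correspond to 10 districts per warehouse at the chosen warehouse count, since the hash key is d_w_id * 10 + d_id):

-- Confirm the cluster exists and see its sizing
select cluster_name, hashkeys, key_size
  from dba_clusters
 where cluster_name = 'C_DISTRICT_CLUSTER';

-- Confirm the district table really landed inside it
select table_name, cluster_name
  from dba_tables
 where table_name = 'C_DISTRICT';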

Page 19:


CREATE TABLE C_ORDER
TABLESPACE USERS
PARTITION BY HASH (O_W_ID, O_D_ID, O_C_ID, O_ID) PARTITIONS &3
NOLOGGING NOCOMPRESS NOCACHE
PARALLEL (DEGREE &1)
NOMONITORING
AS
SELECT * FROM &4..C_ORDER;

CREATE TABLE C_ORDER_LINE
TABLESPACE USERS
PARTITION BY HASH (OL_W_ID, OL_D_ID, OL_O_ID, OL_NUMBER) PARTITIONS &3
NOLOGGING NOCOMPRESS NOCACHE
PARALLEL (DEGREE &1)
NOMONITORING
AS
SELECT * FROM &4..C_ORDER_LINE;

CREATE TABLE C_ITEM
TABLESPACE USERS
PARTITION BY HASH (I_ID) PARTITIONS &3
NOLOGGING NOCOMPRESS NOCACHE
PARALLEL (DEGREE &1)
NOMONITORING
AS
SELECT * FROM &4..C_ITEM;

Why did I create this partitioning scheme???

– Spec
– Disclosure reports
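A matching sanity check for the hash-partitioned copies (again a sketch assuming DBA access; the count per table should equal the &3 value passed to the script):

select table_name, count(*) as partition_count
  from dba_tab_partitions
 where table_name in ('C_ORDER', 'C_ORDER_LINE', 'C_ITEM')
 group by table_name;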

Page 20:

Disclosure Reports

http://tpc.org/tpcc/results/tpcc_perf_results.asp

Page 21:

Disclosure Report – Lots of Info


This is where people document exactly what advanced database features and storage parameters they used – this info is invaluable.

Page 22:

Disclosure Report – Appendix B


Page 23:


Try that with BMF

Lessons people have learned: e.g. TPC-H results depend almost entirely on the number of disks and nothing else – you need well over 100 spindles for just a 300 GB test.

Page 24:

Top 10 Benchmarking Misconceptions


Page 25:
Page 26:
Page 27:
Page 28:
Page 29:
Page 30:
Page 31:


Results – Average Response Time

[Chart: Average Response Time (0.00–6.00) plotted against user load (50–500 concurrent users) for Run 1 through Run 5, with a SubSecond reference line.]

Page 32:
Page 33:


Performance Testing Process (using tools)

Apply Top-Down Analysis & Revision

1. Benchmark Factory
– Industry standard benchmark: TPC-C & trace files
– Key metric = average response time

2. Spotlight on RAC
– Record before & after results
– Confirm improvements

3. TOAD with DBA Module
– AWR/ADDM & Stats Pack
– Again record before & after for improvements confirmation
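One way to do the "record before & after" step on Oracle is to bracket each run with AWR snapshots and pull a report between them afterwards. A sketch, where the dbid, instance number and snapshot IDs are placeholders you would look up first:

-- Immediately before and immediately after the workload:
exec dbms_workload_repository.create_snapshot;

-- Afterwards, find the bracketing snapshot IDs...
select snap_id, begin_interval_time
  from dba_hist_snapshot
 order by snap_id;

-- ...and generate a text report between them (placeholder values shown),
-- or simply run @?/rdbms/admin/awrrpt.sql from SQL*Plus
select output
  from table(dbms_workload_repository.awr_report_text(
               1234567890,   -- dbid, see v$database
               1,            -- instance number, see v$instance
               101, 102));   -- begin / end snapshot IDs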

Page 34:


TOAD® with DBA Module

• Expedite typical DBA management & tuning tasks

• Great Productivity Enhancing Features
– Database Health Check

– Database Probe

– Database Monitor

– AWR/ADDM Reports

– UNIX Monitor

– Stats Pack Reports

• See Toad World paper
– Title: “Maximize Database Performance Via Toad for Oracle”

– http://www.toadworld.com/Education/ToadWorldPapersandPodcasts/tabid/82/Default.aspx

• Lets the DBA concentrate on the task at hand – Correcting (i.e. Fixing)

Toad to the Rescue (as usual)

Page 35:


Page 36:


Page 37:

Configure So Toad Has That Performance Data

Page 38:

Wrap Up

• I can email everyone the slides, scripts, projects, etc.

• Time permitting – show the new BMF/TOAD integration

• BMF has a new product management directive:
– # concurrent users sweet spot = 1,000
– # concurrent users max we’ll support = 2,500

• This limit is not because of BMF, but rather because people don’t prepare or have the right expectations

• We cannot afford to keep doing their benchmarking projects for them when they attempt tens of thousands of concurrent users
