A Closer Look inside Oracle ASM


A Closer Look inside Oracle ASM
UKOUG Conference 2007
Luca Canali, CERN IT

Outline
Oracle ASM for DBAs
  Introduction and motivations
  ASM is not a black box
  Investigation of ASM internals
  Focus on practical methods and troubleshooting
ASM and VLDB
  Metadata, rebalancing and performance
Lessons learned from CERN's production DB services

ASM
Oracle Automatic Storage Management
Provides the functionality of a volume manager and filesystem for Oracle (DB) files
Works with RAC
Oracle 10g feature aimed at simplifying storage management
  Together with Oracle Managed Files and the Flash Recovery Area
An implementation of S.A.M.E. methodology
  Goal of increasing performance and reducing cost

ASM for a Clustered Architecture
Oracle architecture of redundant low-cost components

[Diagram: servers, SAN, storage]
This is the architecture deployed at CERN for the Physics DBs; more on https://twiki.cern.ch/twiki/pub/PSSGroup/HAandPerf/Architecture_description.pdf

ASM Disk Groups
Example: HW = 4 disk arrays with 8 disks each
An ASM diskgroup is created using all available disks
The end result is similar to a file system on RAID 1+0
ASM allows mirroring across storage arrays
Oracle RDBMS processes directly access the storage (raw disk access)

[Diagram: ASM diskgroup with two failgroups (Failgroup1, Failgroup2); striping within each failgroup, mirroring across them]

Files, Extents, and Failure Groups
Files and extent pointers
[Diagram: failgroups and ASM mirroring]
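A minimal creation sketch for a diskgroup laid out as described above, with two failgroups mirrored across storage arrays. This is not a command from the presentation: the diskgroup name DATA1, the failgroup names and the disk path patterns are illustrative assumptions.

-- Normal redundancy: ASM keeps two copies of each extent,
-- one per failgroup, i.e. one per storage array
CREATE DISKGROUP DATA1 NORMAL REDUNDANCY
  FAILGROUP array1 DISK '/dev/mpath/itstor417*p1'
  FAILGROUP array2 DISK '/dev/mpath/itstor419*p1';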

ASM Is Not a Black Box
ASM is implemented as an Oracle instance
Familiar operations for the DBA
  Configured with SQL commands
  Info in V$ views
  Logs in udump and bdump
Some secret details hidden in X$ tables and underscore parameters

Selected V$ Views and X$ Tables

View Name         X$ Table           Description
V$ASM_DISKGROUP   X$KFGRP            performs disk discovery and lists diskgroups
V$ASM_DISK        X$KFDSK, X$KFKID   performs disk discovery, lists disks and their usage metrics
V$ASM_FILE        X$KFFIL            lists ASM files, including metadata
V$ASM_ALIAS       X$KFALS            lists ASM aliases, files and directories
V$ASM_TEMPLATE    X$KFTMTA           ASM templates and their properties
V$ASM_CLIENT      X$KFNCL            lists DB instances connected to ASM
V$ASM_OPERATION   X$KFGMG            lists current rebalancing operations
N.A.              X$KFKLIB           available libraries, includes asmlib
N.A.              X$KFDPARTNER       lists disk-to-partner relationships
N.A.              X$KFFXP            extent map table for all ASM files
N.A.              X$KFDAT            allocation table for all ASM disks

A complete list in https://twiki.cern.ch/twiki/bin/view/PSSGroup/ASM_Internals

The complete list:

View Name              X$ Table name            Description
V$ASM_DISKGROUP        X$KFGRP                  performs disk discovery and lists diskgroups
V$ASM_DISKGROUP_STAT   X$KFGRP_STAT             diskgroup stats without disk discovery
V$ASM_DISK             X$KFDSK, X$KFKID         performs disk discovery, lists disks and their usage metrics
V$ASM_DISK_STAT        X$KFDSK_STAT, X$KFKID    lists disks and their usage metrics
V$ASM_FILE             X$KFFIL                  lists ASM files, including metadata/asmdisk files
V$ASM_ALIAS            X$KFALS                  lists ASM aliases, files and directories
V$ASM_TEMPLATE         X$KFTMTA                 lists the available templates and their properties
V$ASM_CLIENT           X$KFNCL                  lists DB instances connected to ASM
V$ASM_OPERATION        X$KFGMG                  lists rebalancing operations
N.A.                   X$KFKLIB                 available libraries, includes asmlib path
N.A.                   X$KFDPARTNER             lists disk-to-partner relationships
N.A.                   X$KFFXP                  extent map table for all ASM files
N.A.                   X$KFDAT                  extent list for all ASM disks
N.A.                   X$KFBH                   describes the ASM cache (buffer cache of ASM, in blocks of 4K, _asm_blksize)
N.A.                   X$KFCCE                  a linked list of ASM blocks, to be further investigated

Additional in 11g:

View Name           X$ Table name    Description
V$ASM_ATTRIBUTE     X$KFENV          ASM attributes; the X$ table also shows 'hidden' attributes
V$ASM_DISK_IOSTAT   X$KFNSDSKIOST    I/O statistics
N.A.                X$KFDFS
N.A.                X$KFDDD
N.A.                X$KFGBRB
N.A.                X$KFMDGRP
N.A.                X$KFCLLE
N.A.                X$KFVOL
N.A.                X$KFVOLSTAT
N.A.                X$KFVOFS
N.A.                X$KFVOFSV
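A quick way to exercise the views listed above is a space-usage report against the *_STAT variants, which avoid triggering disk discovery. A hedged sketch, not taken from the presentation:

-- Diskgroup capacity and redundancy type, without disk discovery
select group_number, name, type, total_mb, free_mb
from v$asm_diskgroup_stat;

-- Per-disk usage and cumulative I/O counters
select group_number, disk_number, failgroup, path, total_mb, free_mb, reads, writes
from v$asm_disk_stat
order by group_number, disk_number;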

ASM Parameters
Notable ASM instance parameters:

*.asm_diskgroups='TEST1_DATADG1','TEST1_RECODG1'
*.asm_diskstring='/dev/mpath/itstor*p*'
*.asm_power_limit=5
*.shared_pool_size=70M
*.db_cache_size=50M
*.large_pool_size=50M
*.processes=100

A longer list:

*.asm_diskgroups='TEST1_DATADG1','TEST1_RECODG1'
*.asm_diskstring='/dev/mpath/itstor*p*'
*.asm_power_limit=5
*.cluster_database=true
*.cluster_database_instances=6
*.shared_pool_size=70M
*.db_cache_size=50M
*.large_pool_size=50M
*.processes=100
+ASM1.instance_number=1 (..repeat for each instance)
*.instance_type='asm'
+ASM1.local_listener='LISTENER_+ASM1' (..repeat for each instance)
*.remote_login_passwordfile='exclusive'
*.user_dump_dest='/ORA/dbs00/oracle/admin/+ASM/udump'
*.core_dump_dest='/ORA/dbs00/oracle/admin/+ASM/cdump'
*.background_dump_dest='/ORA/dbs00/oracle/admin/+ASM/bdump'

More ASM Parameters
Underscore parameters
  Several undocumented parameters
  Typically don't need tuning
  Exception: _asm_ausize and _asm_stripesize
  May need tuning for VLDB in 10g

New in 11g: diskgroup attributes
  V$ASM_ATTRIBUTE, most notable: disk_repair_time, au_size
  X$KFENV shows the underscore attributes
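A hedged sketch of how the 11g attributes might be inspected and changed; the diskgroup name DATA1 and the 4.5h repair window are illustrative values, not recommendations from the talk:

-- List the attributes of diskgroup 1 (11g)
select name, value
from v$asm_attribute
where group_number = 1
order by name;

-- Extend the window during which a failed disk can be brought back
-- before ASM drops it (requires compatible.asm >= 11.1)
alter diskgroup DATA1 set attribute 'disk_repair_time' = '4.5h';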

Query to display underscore parameters:

select a.ksppinm "Parameter", c.ksppstvl "Instance Value"
from x$ksppi a, x$ksppcv b, x$ksppsv c
where a.indx = b.indx and a.indx = c.indx
  and ksppinm like '%asm%'
order by a.ksppinm;

ASM Storage Internals
ASM disks are divided in Allocation Units (AU)
  Default size 1 MB (_asm_ausize)
  Tunable diskgroup attribute in 11g
ASM files are built as a series of extents
  Extents are mapped to AUs using a file extent map
When using normal redundancy, 2 mirrored extents are allocated, each on a different failgroup
RDBMS read operations access only the primary extent of a mirrored couple (unless there is an I/O error)
In 10g the ASM extent size = AU size
11g implements variable extent size for extent# > 20000 (similar to the automatic extent size management in the DB)
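Since the AU size becomes a per-diskgroup attribute in 11g, a hedged creation sketch for a VLDB-oriented diskgroup with larger allocation units; the diskgroup name DATA2, the disk pattern and the 8 MB value are illustrative assumptions, not the configuration used at CERN:

-- Larger allocation units reduce the amount of extent-map metadata
-- for very large files; au_size is fixed at diskgroup creation time
CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY
  DISK '/dev/mpath/itstor*p*'
  ATTRIBUTE 'au_size' = '8M',
            'compatible.asm' = '11.1';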

ASM Metadata Walkthrough
Three examples follow of how to read data directly from ASM. Motivations:
  Build confidence in the technology, i.e. get a feeling of how ASM works
  It may turn out useful one day to troubleshoot a production issue

Example 1: Direct File Access 1/2
Goal: reading ASM files with OS tools, using metadata information from X$ tables

Example: find the 2 mirrored extents of the RDBMS spfile

sys@+ASM1> select GROUP_KFFXP Group#, DISK_KFFXP Disk#, AU_KFFXP AU#, XNUM_KFFXP Extent#
           from X$KFFXP
           where number_kffxp = (select file_number from v$asm_alias where name='spfiletest1.ora');

    GROUP#      DISK#        AU#    EXTENT#
---------- ---------- ---------- ----------
         1         16      17528          0
         1          4      14838          0

Example 1: Direct File Access 2/2
Find the disk path:

sys@+ASM1> select disk_number, path
           from v$asm_disk
           where group_number = 1 and disk_number in (16, 4);

DISK_NUMBER PATH
----------- ------------------------------------
          4 /dev/mpath/itstor417_1p1
         16 /dev/mpath/itstor419_6p1

Read data from disk using dd:

dd if=/dev/mpath/itstor419_6p1 bs=1024k count=1 skip=17528 | strings

Device mapper multipathing is used under Linux:
  Block devices are given a name /dev/mpath/itstor.. to reflect the storage array name
  The name is a symbolic link (handled by device mapper) to /dev/dm-xxx (device mapper block device)
  Device names and device mapper configurations are set up with /etc/multipath.conf

X$KFFXP

Column Name      Description
NUMBER_KFFXP     ASM file number. Join with v$asm_file and v$asm_alias
COMPOUND_KFFXP   File identifier. Join with compound_index in v$asm_file
INCARN_KFFXP     File incarnation id. Join with incarnation in v$asm_file
XNUM_KFFXP       ASM file extent number (mirrored extent pairs have the same extent value)
PXN_KFFXP        Progressive file extent number
GROUP_KFFXP      ASM disk group number. Join with v$asm_disk and v$asm_diskgroup
DISK_KFFXP       ASM disk number. Join with v$asm_disk
AU_KFFXP         Relative position of the allocation unit from the beginning of the disk
LXN_KFFXP        0 -> primary extent, 1 -> mirror extent, 2 -> 2nd mirror copy (high redundancy and metadata)

File extent pointer X$ table. This is the author's interpretation of the table based on investigation of actual instances.

X$KFFXP

Column Name      Description
ADDR             x$ table address/identifier
INDX             row unique identifier
INST_ID          instance number (RAC)
NUMBER_KFFXP     ASM file number. Join with v$asm_file and v$asm_alias
COMPOUND_KFFXP   File identifier. Join with compound_index in v$asm_file
INCARN_KFFXP     File incarnation id. Join with incarnation in v$asm_file
PXN_KFFXP        Progressive file extent number
XNUM_KFFXP       ASM file extent number (mirrored AUs have the same extent value)
GROUP_KFFXP      ASM disk group number. Join with v$asm_disk and v$asm_diskgroup
DISK_KFFXP       Disk number where the extent is allocated. Join with v$asm_disk
AU_KFFXP         Relative position of the allocation unit from the beginning of the disk. The allocation unit size (1 MB) is in v$asm_diskgroup
LXN_KFFXP        0 -> primary extent, 1 -> mirror extent, 2 -> 2nd mirror copy (high redundancy and metadata)
FLAGS_KFFXP      N.K.
CHK_KFFXP        N.K.
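Using LXN_KFFXP, the extent map can also show the primary and mirrored copies side by side. A hedged variation of the Example 1 query, reusing the same spfile alias and diskgroup 1 from above:

-- One row per extent copy: copy# 0 is the primary, 1 the mirror
select xnum_kffxp extent#, lxn_kffxp copy#, disk_kffxp disk#, au_kffxp au#
from x$kffxp
where group_kffxp = 1
  and number_kffxp = (select file_number from v$asm_alias where name = 'spfiletest1.ora')
order by xnum_kffxp, lxn_kffxp;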

Example 2: A Different Way
A different metadata table to reach the same goal of reading ASM files directly from the OS:

sys@+ASM1> select GROUP_KFDAT Group#, NUMBER_KFDAT Disk#, AUNUM_KFDAT AU#
           from X$KFDAT
           where fnum_kfdat = (select file_number from v$asm_alias where name='spfiletest1.ora');

    GROUP#      DISK#        AU#
---------- ---------- ----------
         1          4      14838
         1         16      17528

This is much slower than reading X$KFFXP since it scans all the disks' allocation tables.

X$KFDAT

Column Name (subset)   Description
GROUP_KFDAT            Diskgroup number, join with v$asm_diskgroup
NUMBER_KFDAT           Disk number, join with v$asm_disk
COMPOUND_KFDAT         Disk compound_index, join with v$asm_disk
AUNUM_KFDAT            Disk allocation unit (relative position from the beginning of the disk), join with x$kffxp.au_kffxp
V_KFDAT                Flag: V = this allocation unit is used; F = AU is free
FNUM_KFDAT             File number, join with v$asm_file
XNUM_KFDAT             Progressive file extent number, join with x$kffxp.pxn_kffxp

Disk allocation table X$ table. This is the author's interpretation of the table based on investigation of actual instances.

X$KFDAT

Column Name      Description
ADDR             x$ table address/identifier
INDX             row unique identifier
INST_ID          instance number (RAC)
GROUP_KFDAT      Diskgroup number, join with v$asm_diskgroup
NUMBER_KFDAT     Disk number, join with v$asm_disk
COMPOUND_KFDAT   Disk compound_index, join with v$asm_disk
AUNUM_KFDAT      Disk allocation unit (relative position from the beginning of the disk), join with x$kffxp.au_kffxp
V_KFDAT          V = this allocation unit is used; F = AU is free
FNUM_KFDAT       File number, join with v$asm_file
I_KFDAT          N.K.
XNUM_KFDAT       Progressive file extent number, join with x$kffxp.pxn_kffxp
RAW_KFDAT        Raw format encoding of the disk and file extent information
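Because V_KFDAT flags each allocation unit as used or free, X$KFDAT also lends itself to a quick per-disk space report. A hedged sketch, not from the presentation:

-- Used vs free allocation units per disk, from the disk allocation table
select group_kfdat group#, number_kfdat disk#,
       sum(decode(v_kfdat, 'V', 1, 0)) used_au,
       sum(decode(v_kfdat, 'F', 1, 0)) free_au
from x$kfdat
group by group_kfdat, number_kfdat
order by group_kfdat, number_kfdat;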

Example 3: Yet Another Way
Using the internal package dbms_diskgroup

declare
  fileType varchar2(50);
  fileName varchar2(50);
  fileSz   number;
  blkSz    number;
  hdl      number;
  plkSz    number;
  data_buf raw(4096);
begin
  fileName := '+TEST1_DATADG1/TEST1/spfiletest1.ora';
  dbms_diskgroup.getfileattr(fileName, fileType, fileSz, blkSz);
  dbms_diskgroup.open(fileName, 'r', fileType, blkSz, hdl, plkSz, fileSz);
  dbms_diskgroup.read(hdl, 1, blkSz, data_buf);
  dbms_output.put_line(data_buf);
end;
/

DBMS_DISKGROUP
Can be used to read/write ASM files directly
It's an Oracle internal package
Does not require a RDBMS instance
11g's asmcmd cp command uses dbms_diskgroup

Procedure Name               Parameters
dbms_diskgroup.open          (:fileName, :openMode, :fileType, :blkSz, :hdl, :plkSz, :fileSz)
dbms_diskgroup.read          (:hdl, :offset, :blkSz, :data_buf)
dbms_diskgroup.createfile    (:fileName, :fileType, :blkSz, :fileSz, :hdl, :plkSz, :fileGenName)
dbms_diskgroup.close         (:hdl)
dbms_diskgroup.commitfile    (:handle)
dbms_diskgroup.resizefile    (:handle, :fsz)
dbms_diskgroup.remap         (:gnum, :fnum, :virt_extent_num)
dbms_diskgroup.getfileattr   (:fileName, :fileType, :fileSz, :blkSz)

File Transfer Between OS and ASM
The supported tools (10g):
  RMAN
  DBMS_FILE_TRANSFER
  FTP (XDB)
  WebDAV (XDB)
  They all require a RDBMS instance
In 11g, all the above plus the asmcmd cp command
  Works directly with the ASM instance
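As an illustration of one of the supported 10g tools, a hedged DBMS_FILE_TRANSFER sketch run from an RDBMS instance. The directory object names, the ASM directory path and the target file name reuse names seen earlier in the talk but are assumptions about the layout, not commands from the presentation:

-- Directory objects pointing at an ASM location and an OS location
create directory src_asm_dir as '+TEST1_DATADG1/TEST1/datafile';
create directory dst_os_dir  as '/tmp';

begin
  -- Copies an ASM file to the filesystem (the opposite direction also works)
  dbms_file_transfer.copy_file(
    source_directory_object      => 'SRC_ASM_DIR',
    source_file_name             => 'USERS.268.612033477',
    destination_directory_object => 'DST_OS_DIR',
    destination_file_name        => 'users_copy.dbf');
end;
/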

FTP into ASM diskgroups can be used together with transportable tablespaces.
CERN's experience: the migration and consolidation of 10 DBs from 9i to a multi-TB RAC DB on 10g.

11g asmcmd is improved over the 10g version. Look in $OH/bin and $OH/rdbms/lib (find $ORACLE_HOME -name "*asmcmd*")

Strace and ASM 1/3
Goal: understand strace output when using ASM storage

Example:
read64(15, "#33\0@\"..., 8192, 473128960) = 8192
This is a read operation of 8 KB from FD 15 at offset 473128960
What is the segment name, type, file# and block#?

Strace and ASM 2/3
From /proc/<pid>/fd I find that FD=15 is /dev/mpath/itstor420_1p1
This is disk 20 of diskgroup 1 (from v$asm_disk)
From x$kffxp I find the ASM file# and extent#:
Note: offset 473128960 = 451 MB + 27 * 8 KB

sys@+ASM1> select number_kffxp, xnum_kffxp
           from x$kffxp
           where group_kffxp = 1 and disk_kffxp = 20 and au_kffxp = 451;

NUMBER_KFFXP XNUM_KFFXP
------------ ----------
         268         17

Strace and ASM 3/3
From v$asm_alias I find the file alias for file 268: USERS.268.612033477
From the v$datafile view I find the RDBMS file#: 9
From dba_extents I finally find the owner and segment name relative to the original I/O operation:

sys@TEST1> select owner, segment_name, segment_type
           from dba_extents
           where file_id = 9
             and 27 + 17*1024*1024/8192 between block_id and block_id + blocks;

OWNER SEGMENT_NAME SEGMENT_TYPE
----- ------------ ------------
SCOTT EMP          TABLE

Investigation of Fine Striping
An application: finding the layout of fine-striped files
Explored using strace of an oracle session executing alter system dump logfile ..
Result: round-robin distribution over 8 x 1 MB extents

Fine striping size = 128KB (1MB/8)
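Which files get the fine-striped layout described above is driven by the diskgroup file templates. A hedged query against V$ASM_TEMPLATE (listed in the view table earlier); group_number = 1 is an assumption matching the earlier examples:

-- STRIPE is COARSE or FINE per file-type template
select name, redundancy, stripe
from v$asm_template
where group_number = 1
order by name;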

[Diagram: fine striping layout. Stripes A0..A7 and B0..B7 are distributed round robin across 8 allocation units of 1 MB each]

Metadata Files
ASM diskgroups contain hidden files, not listed in V$ASM_FILE (file# ...)
... 4 x 2 Gb (measured with parallel query)

Implementation Details
Multipathing
  Linux Device Mapper (2.6 kernel)
  Block devices: RHEL4 and 10gR2 allow to skip raw devices mapping
External half of the disk for data disk groups
JBOD config
  No HW RAID; ASM used to mirror across disk arrays
HW:
  Storage arrays (Infortrend): FC controller, SATA disks
  FC (Qlogic): 4 Gb switch and HBAs (2 Gb in older HW)
  Servers are 2x CPUs, 4 GB RAM, 10.2.0.3 on RHEL4, RAC of 4 to 8 nodes

Q&A

Links:
http://cern.ch/phydb
http://twiki.cern.ch/twiki/bin/view/PSSGroup/ASM_Internals
http://www.cern.ch/canali

[Chart: ASM Rebalancing Performance (RAC). Rebalance rate (MB/min) vs diskgroup rebalance parallelism, Oracle 11g vs Oracle 10g]

[Chart: Small Random I/O (8 KB block, 32 GB probe table), 64 SATA HDs (4 arrays, 1 instance). Small read I/O per sec (IOPS) vs workload, number of oracle sessions]

[Chart: Sequential I/O Throughput, 80 GB probe table, 64 SATA HDs (4 arrays and 4 RAC nodes). MB/sec vs workload, number of parallel query slaves]