May 5, 2023

Data Mining: Concepts and Techniques
— UNIT IV & V —
Nitin Sharma
Assistant Professor, Department of CS, IET, Alwar
Books Recommended:
M.H. Dunham, "Data Mining: Introductory & Advanced Topics", Pearson Education
Jiawei Han, Micheline Kamber, "Data Mining: Concepts & Techniques", Elsevier
Sam Anahory, Dennis Murray, "Data Warehousing in the Real World: A Practical Guide for Building Decision Support Systems", Pearson Education
Mallach, "Data Warehousing System", TMH
UNIT IV: Data Warehousing and OLAP Technology: An Overview
What is a data warehouse?
A multi-dimensional data model
Data warehouse architecture
Data warehouse implementation
From data warehousing to data mining
What is a Data Warehouse?
Defined in many different ways, but not rigorously:
A decision support database that is maintained separately from the organization’s operational database
Support information processing by providing a solid platform of consolidated, historical data for analysis.
“A data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management’s decision-making process.”—W. H. Inmon
Data warehousing: The process of constructing and using data warehouses
Data Warehouse—Subject-Oriented
Organized around major subjects, such as customer, product, sales
Focuses on the modeling and analysis of data for decision makers, not on daily operations or transaction processing
Provides a simple and concise view around particular subject issues by excluding data that are not useful in the decision support process

Data Warehouse—Integrated
Constructed by integrating multiple, heterogeneous data sources: relational databases, flat files, on-line transaction records
Data cleaning and data integration techniques are applied to ensure consistency in naming conventions, encoding structures, attribute measures, etc. among the different data sources
E.g., hotel price: currency, tax, whether breakfast is covered, etc. When data is moved to the warehouse, it is converted.
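The conversion step in the hotel-price example above can be sketched as follows. This is a hypothetical illustration: the source names, field names, and exchange rates are all invented.

```python
# Hypothetical integration step: records from two heterogeneous sources use
# different currencies and field names; they are converted to one warehouse
# schema as the data is moved in. Rates and field names are made up.

RATES_TO_USD = {"USD": 1.0, "EUR": 1.1}  # assumed static rates for the example

def to_warehouse_record(raw, source):
    """Normalize a source-specific hotel-price record to the warehouse schema."""
    if source == "crs":          # source A: price in EUR
        amount, currency = raw["price_eur"], "EUR"
    else:                        # source B: price already in USD
        amount, currency = raw["amount"], "USD"
    return {"hotel": raw["hotel"],
            "price_usd": round(amount * RATES_TO_USD[currency], 2)}

rec = to_warehouse_record({"hotel": "Alpha", "price_eur": 100.0}, "crs")
# rec["price_usd"] == 110.0
```

The point is only that naming and encoding differences are reconciled once, at load time, rather than by every query.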
Data Warehouse—Time Variant
The time horizon for the data warehouse is significantly longer than that of operational systems:
Operational database: current value data
Data warehouse data: provide information from a historical perspective (e.g., past 5-10 years)
Every key structure in the data warehouse contains an element of time, explicitly or implicitly, but the key of operational data may or may not contain a "time element"

Data Warehouse—Nonvolatile
A physically separate store of data transformed from the operational environment
Operational update of data does not occur in the data warehouse environment
Does not require transaction processing, recovery, and concurrency control mechanisms
Requires only two operations in data accessing: initial loading of data and access of data
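The nonvolatile property above can be sketched as a store that exposes only those two operations. This is a toy illustration, not a real warehouse engine; the class and data are invented.

```python
# Minimal sketch of nonvolatility: the store supports only the two operations
# named above (initial/bulk loading and read access). There is no update or
# delete path, so no transaction/recovery machinery is needed.

class WarehouseStore:
    def __init__(self):
        self._rows = []

    def load(self, rows):
        """Initial loading of data (append-only)."""
        self._rows.extend(rows)

    def access(self, predicate=lambda r: True):
        """Read-only access to the loaded data."""
        return [r for r in self._rows if predicate(r)]

store = WarehouseStore()
store.load([{"year": 2021, "sales": 10}, {"year": 2022, "sales": 12}])
recent = store.access(lambda r: r["year"] >= 2022)
# recent == [{"year": 2022, "sales": 12}]
```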
Data Warehouse vs. Heterogeneous DBMS
Traditional heterogeneous DB integration: a query-driven approach
Build wrappers/mediators on top of the heterogeneous databases
When a query is posed to a client site, a meta-dictionary is used to translate the query into queries appropriate for the individual heterogeneous sites involved, and the results are integrated into a global answer set
Complex information filtering; queries compete with local processing for resources
Data warehouse: update-driven, high performance
Information from heterogeneous sources is integrated in advance and stored in warehouses for direct query and analysis
Data Warehouse vs. Operational DBMS
OLTP (on-line transaction processing)
Major task of traditional relational DBMS
Day-to-day operations: purchasing, inventory, banking, manufacturing, payroll, registration, accounting, etc.
OLAP (on-line analytical processing)
Major task of data warehouse systems: data analysis and decision making
Distinct features (OLTP vs. OLAP):
User and system orientation: customer vs. market
Data contents: current, detailed vs. historical, consolidated
Database design: ER + application vs. star + subject
View: current, local vs. evolutionary, integrated
Access patterns: update vs. read-only but complex queries
OLTP vs. OLAP

                  OLTP                           OLAP
users             clerk, IT professional         knowledge worker
function          day-to-day operations          decision support
DB design         application-oriented           subject-oriented
data              current, up-to-date,           historical, summarized,
                  detailed, flat relational,     multidimensional,
                  isolated                       integrated, consolidated
usage             repetitive                     ad-hoc
access            read/write,                    lots of scans
                  index/hash on primary key
unit of work      short, simple transaction      complex query
# records         tens                           millions
# users           thousands                      hundreds
DB size           100MB-GB                       100GB-TB
metric            transaction throughput         query throughput, response time
Why Separate Data Warehouse?
High performance for both systems:
DBMS—tuned for OLTP: access methods, indexing, concurrency control, recovery
Warehouse—tuned for OLAP: complex OLAP queries, multidimensional view, consolidation
Different functions and different data:
missing data: decision support requires historical data which operational DBs do not typically maintain
data consolidation: DS requires consolidation (aggregation, summarization) of data from heterogeneous sources
data quality: different sources typically use inconsistent data representations, codes and formats which have to be reconciled
Note: there are more and more systems which perform OLAP analysis directly on relational databases
Characteristics of a Data Warehouse
A DWH can be viewed as an information system with the following attributes:
1. It is a database designed for analytical tasks, using data from multiple applications.
2. It supports a relatively small number of users with relatively long interactions.
3. Its usage is read-intensive.
4. Its content is periodically updated (mostly additions).
5. It contains current and historical data to provide a historical perspective of information.
6. It contains a few large tables.
7. Each query frequently returns a large result set and involves frequent full-table scans and multi-table joins.
8. A DWH is an environment, not a product. It is an architectural construct of information systems that provides users with current and historical decision support information.
9. It is a blend of technologies aimed at the effective integration of operational databases into an analytical environment.
Need for a DWH
1. Decisions need to be made quickly and correctly using all available data.
2. Users are business domain experts, not computer professionals.
3. The amount of data doubles every 18 months, which affects response time and the sheer ability to comprehend its content.
4. Competition is heating up in the area of business intelligence and added information value.
5. The DWH is designed to address the incompatibility of informational and operational transactional systems.
6. The IT infrastructure is changing (e.g., price of computers, speed, storage, network bandwidth, heterogeneous workplace).
Benefits of Data Warehousing
Tangible benefits:
1. Product inventory turnover is improved.
2. Costs of product introduction are decreased with improved selection of target markets.
3. More cost-effective decision making is enabled by separating query processing from running against operational databases.
4. Better business intelligence is enabled by the increased quality and flexibility of the market analysis available.
Intangible benefits:
1. Improved productivity
2. Reduced redundant processing, support, and software to support overlapping decision support applications
3. Enhanced customer relations
4. Enabling business process reengineering: a DWH can provide useful insights into the work processes themselves
The Data Warehouse Delivery Process (stages)
IT Strategy
Business Case Analysis
Education
Business Requirements
Technical Blueprint
Build the Vision
History Load
Ad-hoc Query
Automation
Extending Scope
Requirements Evolution
The Data Warehouse Delivery process
IT Strategy: a DWH project must include an IT strategy for procuring and retaining funding.
Business Case Analysis: it is necessary to understand the level of investment that can be justified and to identify the projected business benefits that should be derived from using the DWH.
Education & Prototyping: organizations will experiment with the concept of data analysis and educate themselves on the value of a DWH. This is valuable and should be considered if this is the organization's first exposure to the benefits of DS information. The prototyping activity can advance the growth of education through working models. Prototyping requires business requirements, a technical blueprint, and an architecture.
Business Requirements: these include the logical model for information within the DWH, the source systems that provide this data (mapping rules), the business rules to be applied to the data, and the query profiles for the immediate requirement.
The Data Warehouse Delivery process
Technical Blueprint: it must identify:
The overall system architecture
The server and data mart architecture for both data and applications
The essential components of the database design
The data retention strategy
The backup and recovery strategy
The capacity plan for H/W and infrastructure
Building the Vision: this is the stage where the first production deliverable is produced. This stage will probably build the major infrastructure components for extracting and loading data, but limit them to the extraction and load of the initial data sources.
History Load: in most cases, the next phase is one where the remainder of the required history is loaded into the DWH. This means that new entities would not be added to the DWH, but additional physical tables would probably be created to store the increased data volumes.
Ad-hoc Query: in this phase we configure an ad-hoc query tool to operate against the DWH. These end-user access tools are capable of automatically generating the database queries that answer any question posed by the user.
The Data Warehouse Delivery process
Automation: the automation phase is where many of the operational management processes are fully automated within the DWH. These would include:
Extracting and loading the data from a variety of source systems
Transforming the data into a form suitable for analysis
Backing up, restoring, and archiving data
Generating aggregations from predefined definitions within the DWH
Monitoring query profiles and determining the appropriate aggregations to maintain system performance
Extending Scope: in this phase, the scope of the DWH is extended to address a new set of business requirements. This involves the loading of additional data sources into the DWH, i.e., the introduction of new data marts.
Requirements Evolution: the most important aspect of the delivery process is that the requirements are never static. Business requirements will constantly change during the life of the DWH, so it is imperative that the process support this and allow these changes to be reflected within the system.
Chapter 3: Data Warehousing and OLAP Technology: An Overview
What is a data warehouse?
A multi-dimensional data model
Data warehouse architecture
Data warehouse implementation
From data warehousing to data mining
From Tables and Spreadsheets to Data Cubes
A data warehouse is based on a multidimensional data model which views data in the form of a data cube
A data cube, such as sales, allows data to be modeled and viewed in multiple dimensions
Dimension tables, such as item (item_name, brand, type), or time(day, week, month, quarter, year)
Fact table contains measures (such as dollars_sold) and keys to each of the related dimension tables
In data warehousing literature, an n-D base cube is called a base cuboid. The top most 0-D cuboid, which holds the highest-level of summarization, is called the apex cuboid. The lattice of cuboids forms a data cube.
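The relationship above between dimensions and cuboids is just the powerset of the dimension set: an n-D cube yields 2^n cuboids. A small sketch, using the four dimensions of the running example:

```python
from itertools import combinations

# Sketch of the statement above: a cube gives one cuboid per subset of its
# dimensions, 2**n in total. The empty subset is the 0-D apex cuboid and the
# full set of dimensions is the n-D base cuboid.

def lattice_of_cuboids(dimensions):
    dims = list(dimensions)
    return [frozenset(c)
            for k in range(len(dims) + 1)
            for c in combinations(dims, k)]

cuboids = lattice_of_cuboids(["time", "item", "location", "supplier"])
# 16 cuboids in total: 1 apex (0-D), 4 one-D, 6 two-D, 4 three-D, 1 base (4-D)
```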
Cube: A Lattice of Cuboids

0-D (apex) cuboid: all
1-D cuboids: time; item; location; supplier
2-D cuboids: time,item; time,location; time,supplier; item,location; item,supplier; location,supplier
3-D cuboids: time,item,location; time,item,supplier; time,location,supplier; item,location,supplier
4-D (base) cuboid: time,item,location,supplier
Conceptual Modeling of Data Warehouses
Modeling data warehouses: dimensions & measures
Star schema: a fact table in the middle connected to a set of dimension tables
Snowflake schema: a refinement of the star schema where some dimensional hierarchy is normalized into a set of smaller dimension tables, forming a shape similar to a snowflake
Fact constellation: multiple fact tables share dimension tables; viewed as a collection of stars, it is therefore also called a galaxy schema
Example of Star Schema

time dimension: time_key, day, day_of_the_week, month, quarter, year
item dimension: item_key, item_name, brand, type, supplier_type
branch dimension: branch_key, branch_name, branch_type
location dimension: location_key, street, city, state_or_province, country

Sales fact table: time_key, item_key, branch_key, location_key
Measures: units_sold, dollars_sold, avg_sales
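The star join implied by the schema above can be sketched with plain dictionaries: the fact table holds only keys and measures, and each dimension table is a keyed lookup. The data values here are invented for illustration.

```python
# Toy star join over the schema above: dimension tables are keyed lookups,
# the fact table holds foreign keys plus measures. Values are made up.

item_dim = {1: {"item_name": "TV", "brand": "A", "type": "electronics"}}
location_dim = {10: {"city": "Toronto", "country": "Canada"}}

sales_fact = [
    {"item_key": 1, "location_key": 10, "units_sold": 5, "dollars_sold": 2500.0},
    {"item_key": 1, "location_key": 10, "units_sold": 3, "dollars_sold": 1500.0},
]

def dollars_by_city(facts):
    """Resolve location_key through the dimension table and sum the measure."""
    totals = {}
    for f in facts:
        city = location_dim[f["location_key"]]["city"]
        totals[city] = totals.get(city, 0.0) + f["dollars_sold"]
    return totals

# dollars_by_city(sales_fact) -> {"Toronto": 4000.0}
```

This is why the star layout keeps the fact table narrow: descriptive attributes live once in the dimension tables, not on every fact row.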
Example of Snowflake Schema

time dimension: time_key, day, day_of_the_week, month, quarter, year
item dimension: item_key, item_name, brand, type, supplier_key
supplier dimension: supplier_key, supplier_type
branch dimension: branch_key, branch_name, branch_type
location dimension: location_key, street, city_key
city dimension: city_key, city, state_or_province, country

Sales fact table: time_key, item_key, branch_key, location_key
Measures: units_sold, dollars_sold, avg_sales
Example of Fact Constellation

time dimension: time_key, day, day_of_the_week, month, quarter, year
item dimension: item_key, item_name, brand, type, supplier_type
branch dimension: branch_key, branch_name, branch_type
location dimension: location_key, street, city, province_or_state, country
shipper dimension: shipper_key, shipper_name, location_key, shipper_type

Sales fact table: time_key, item_key, branch_key, location_key
Measures: units_sold, dollars_sold, avg_sales

Shipping fact table: time_key, item_key, shipper_key, from_location, to_location
Measures: dollars_cost, units_shipped
Cube Definition Syntax (BNF) in DMQL

Cube definition (fact table):
    define cube <cube_name> [<dimension_list>]: <measure_list>
Dimension definition (dimension table):
    define dimension <dimension_name> as (<attribute_or_subdimension_list>)
Special case (shared dimension tables), defined the first time in a cube definition:
    define dimension <dimension_name> as <dimension_name_first_time> in cube <cube_name_first_time>
Defining Star Schema in DMQL
define cube sales_star [time, item, branch, location]:
    dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state, country)
Defining Snowflake Schema in DMQL
define cube sales_snowflake [time, item, branch, location]:
    dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier(supplier_key, supplier_type))
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city(city_key, province_or_state, country))
Defining Fact Constellation in DMQL
define cube sales [time, item, branch, location]:
    dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state, country)

define cube shipping [time, item, shipper, from_location, to_location]:
    dollar_cost = sum(cost_in_dollars), unit_shipped = count(*)
define dimension time as time in cube sales
define dimension item as item in cube sales
define dimension shipper as (shipper_key, shipper_name, location as location in cube sales, shipper_type)
define dimension from_location as location in cube sales
define dimension to_location as location in cube sales
Measures of Data Cube: Three Categories
Distributive: if the result derived by applying the function to n aggregate values is the same as that derived by applying the function to all the data without partitioning
    E.g., count(), sum(), min(), max()
Algebraic: if it can be computed by an algebraic function with M arguments (where M is a bounded integer), each of which is obtained by applying a distributive aggregate function
    E.g., avg(), min_N(), standard_deviation()
Holistic: if there is no constant bound on the storage size needed to describe a subaggregate
    E.g., median(), mode(), rank()
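The three categories above can be checked on a small partitioned data set: sum() combines exactly from partition sums, avg() needs only the bounded pair (sum, count) per partition, and median() needs all the raw values.

```python
import statistics

# Illustrating the three measure categories on a data set split in two.

part1, part2 = [1, 2, 3], [4, 5]
whole = part1 + part2

# Distributive: applying sum() to the partition sums equals sum() of all data.
assert sum([sum(part1), sum(part2)]) == sum(whole)

# Algebraic: avg() is computable from M=2 distributive sub-aggregates
# (sum, count) per partition -- no raw values needed.
subs = [(sum(p), len(p)) for p in (part1, part2)]
avg = sum(s for s, _ in subs) / sum(n for _, n in subs)
assert avg == sum(whole) / len(whole)

# Holistic: the median of partition medians (here, median of [2, 4.5])
# is not the overall median in general; median() needs all the values.
assert statistics.median(whole) == 3
```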
A Concept Hierarchy: Dimension (location)

Schema levels: all > region > country > city > office
Example values: regions Europe, North_America; countries Germany, Spain, Canada, Mexico; cities Frankfurt, Vancouver, Toronto; offices M. Wind, L. Chan
View of Warehouses and Hierarchies
Specification of hierarchies:
Schema hierarchy: day < {month < quarter; week} < year
Set-grouping hierarchy: {1..10} < inexpensive
Multidimensional Data
Sales volume as a function of product, month, and region
Dimensions: Product, Location, Time
Hierarchical summarization paths:
Product:  Industry > Category > Product
Location: Region > Country > City > Office
Time:     Year > Quarter > Month / Week > Day
A Sample Data Cube
A 3-D sales cube with dimensions Date (1Qtr, 2Qtr, 3Qtr, 4Qtr), Product (TV, VCR, PC), and Country (U.S.A., Canada, Mexico); sum cells along each dimension hold aggregates, e.g., one aggregate cell holds the total annual sales of TV in U.S.A.
Cuboids Corresponding to the Cube

0-D (apex) cuboid: all
1-D cuboids: product; date; country
2-D cuboids: product,date; product,country; date,country
3-D (base) cuboid: product,date,country
Browsing a Data Cube
Visualization
OLAP capabilities
Interactive manipulation
Typical OLAP Operations
Roll up (drill-up): summarize data by climbing up a hierarchy or by dimension reduction
Drill down (roll down): reverse of roll-up; from higher-level summary to lower-level summary or detailed data, or introducing new dimensions
Slice and dice: project and select
Pivot (rotate): reorient the cube; visualization; 3-D to a series of 2-D planes
Other operations:
Drill across: involving (across) more than one fact table
Drill through: through the bottom level of the cube to its back-end relational tables (using SQL)
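Two of the operations above, roll-up along a hierarchy and a slice on one dimension, can be sketched on a flat record set. The city-to-country hierarchy and the sales rows are invented for illustration.

```python
# Sketch of roll-up (city -> country along the location hierarchy) and
# slice (fixing a single value on the time dimension). Data is made up.

CITY_TO_COUNTRY = {"Toronto": "Canada", "Chicago": "USA"}  # assumed hierarchy

sales = [
    {"city": "Toronto", "quarter": "Q1", "amount": 100},
    {"city": "Toronto", "quarter": "Q2", "amount": 150},
    {"city": "Chicago", "quarter": "Q1", "amount": 200},
]

def roll_up_to_country(rows):
    """Climb the location hierarchy: aggregate city-level rows to country level."""
    out = {}
    for r in rows:
        key = (CITY_TO_COUNTRY[r["city"]], r["quarter"])
        out[key] = out.get(key, 0) + r["amount"]
    return out

def slice_quarter(rows, quarter):
    """Slice: select a single value on the time dimension."""
    return [r for r in rows if r["quarter"] == quarter]

# roll_up_to_country(sales) -> {("Canada","Q1"): 100, ("Canada","Q2"): 150, ("USA","Q1"): 200}
# slice_quarter(sales, "Q1") -> the two Q1 rows
```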
Fig. 3.10 Typical OLAP Operations
A Star-Net Query Model
Each dimension is a radial line of abstraction levels; each circle on a line is called a footprint.
Shipping Method: AIR-EXPRESS, TRUCK
Customer Orders: ORDER, CONTRACTS
Customer
Product: PRODUCT ITEM, PRODUCT GROUP, PRODUCT LINE
Organization: SALES PERSON, DISTRICT, DIVISION
Promotion
Location: CITY, COUNTRY, REGION
Time: DAILY, QTRLY, ANNUALLY
Chapter 3: Data Warehousing and OLAP Technology: An Overview
What is a data warehouse?
A multi-dimensional data model
Data warehouse architecture
Data warehouse implementation
From data warehousing to data mining
Design of Data Warehouse: A Business Analysis Framework
Four views regarding the design of a data warehouse:
Top-down view: allows selection of the relevant information necessary for the data warehouse
Data source view: exposes the information being captured, stored, and managed by operational systems
Data warehouse view: consists of fact tables and dimension tables
Business query view: sees the perspectives of data in the warehouse from the view of the end-user
Data Warehouse Design Process
Top-down, bottom-up approaches or a combination of both:
Top-down: starts with overall design and planning (mature). It is useful in cases where the technology is mature and well known, and where the business problems that must be solved are clear and well understood.
Bottom-up: starts with experiments and prototypes (rapid). This is useful in the early stage of business modeling and technology development. It allows an organization to move forward at considerably less expense and to evaluate the benefits of the technology before making significant commitments.
From a software engineering point of view:
Waterfall: structured and systematic analysis at each step before proceeding to the next
Spiral: rapid generation of increasingly functional systems, with short turnaround time
Typical data warehouse design process:
Choose a business process to model, e.g., orders, invoices, etc.
Choose the grain (atomic level of data) of the business process
Choose the dimensions that will apply to each fact table record
Choose the measures that will populate each fact table record
Process Architecture
Sources: operational data, external data
Managers: load manager, warehouse manager, query manager
Stores: detailed information, summary information, meta data
Consumers: OLAP tools, data dippers
Data Warehouse: A Multi-Tiered Architecture
Bottom tier: data sources (operational DBs, other sources) feeding Extract/Transform/Load/Refresh processes, with a monitor & integrator and a metadata repository
Middle tier: the data warehouse and data marts (data storage), served by an OLAP server (OLAP engine)
Top tier: front-end tools for analysis, query, reports, and data mining
Data Warehouse: A Multi-Tiered Architecture
1. The transaction or other operational database(s) from which the data warehouse is populated.
2. A process to extract data from this database or these databases and bring it into the DWH. This process must often transform the data into the database structure and internal formats of the DWH.
3. A process to cleanse the data to make sure it is of sufficient quality for the decision-making purposes for which it will be used.
4. A process to load the cleansed data into the DWH database. The four processes from extraction through loading are often referred to collectively as data staging.
5. A process to create any desired summaries of the data: pre-calculated totals, averages, and the like, which are expected to be requested often.
6. Metadata, "data about data". It is useful to have a central information repository to tell users what is in the DWH, where it came from, who is in charge of it, and more.
7. The DWH database itself. This database contains the detailed and summary data of the DWH.
8. Query tools, which usually include an end-user interface for posing questions to the database, in a process called OLAP. They may also include automated tools for uncovering patterns in the data, often referred to as data mining.
9. The user or users for whom the data warehouse exists and without whom it would be useless.
DWH Components
1. Data sourcing, cleanup, transformation & migration tools
2. Metadata repository
3. DWH database technology
4. Data marts
5. Data query, reporting, analysis and mining tools
6. DWH administration & management
7. Information delivery system
8. Operational data store
Building a DWH
Business considerations:
(a) Approach
(b) Organizational issues
Design considerations:
(a) Data control
(b) Meta data
(c) Data distribution
(d) Tools
(e) Performance considerations
(f) Nine decisions in the design of a DWH
Building a DWH
Technical considerations:
1. H/W platforms
2. DWH & DBMS specialization
3. Communication infrastructure
Implementation considerations:
1. Access tools
2. Data extraction, cleanup, transformation & migration
3. Data placement strategies
4. Metadata
5. User sophistication levels
Three Data Warehouse Models
Enterprise warehouse: collects all of the information about subjects spanning the entire organization
Data mart: a subset of corporate-wide data that is of value to a specific group of users. Its scope is confined to specific, selected groups, e.g., a marketing data mart
    Independent vs. dependent (fed directly from the warehouse) data marts
Virtual warehouse: a set of views over operational databases; only some of the possible summary views may be materialized
Data Warehouse Development: A Recommended Approach
Define a high-level corporate data model, then refine the model iteratively while building distributed data marts, which in turn feed a multi-tier data warehouse / enterprise data warehouse.
Data Warehouse Back-End Tools and Utilities
Data extraction: get data from multiple, heterogeneous, and external sources
Data cleaning: detect errors in the data and rectify them when possible
Data transformation: convert data from legacy or host format to warehouse format
Load: sort, summarize, consolidate, compute views, check integrity, and build indices and partitions
Refresh: propagate the updates from the data sources to the warehouse
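The extract/clean/transform/load chain above can be sketched end to end as a small pipeline. The source shapes and field names are invented; real back-end tools would of course be far richer.

```python
# Minimal sketch of the back-end pipeline above: extract from two sources,
# clean (drop rows with a missing measure), transform to the warehouse
# format, and load (sorted append). All data and field names are invented.

def extract(sources):
    for src in sources:          # multiple heterogeneous sources
        yield from src

def clean(rows):
    return [r for r in rows if r.get("amount") is not None]

def transform(rows):
    return [{"amount_usd": float(r["amount"]), "day": r["date"]} for r in rows]

def load(rows, warehouse):
    warehouse.extend(sorted(rows, key=lambda r: r["day"]))  # sort, then append

warehouse = []
legacy = [{"amount": "10", "date": "2023-05-02"}, {"amount": None, "date": "2023-05-01"}]
flat_file = [{"amount": "7", "date": "2023-05-01"}]
load(transform(clean(extract([legacy, flat_file]))), warehouse)
# warehouse now holds 2 rows ordered by day; the row missing its amount was dropped
```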
Metadata Repository
Meta data is the data defining warehouse objects. It stores:
Description of the structure of the data warehouse: schema, views, dimensions, hierarchies, derived data definitions, data mart locations and contents
Operational meta-data: data lineage (history of migrated data and transformation path), currency of data (active, archived, or purged), monitoring information (warehouse usage statistics, error reports, audit trails)
The algorithms used for summarization
The mapping from the operational environment to the data warehouse
Data related to system performance: warehouse schema, view and derived data definitions
Business data: business terms and definitions, ownership of data, charging policies
Data Marting
A data mart is a subset of an organizational data store, usually oriented to a specific purpose or major data subject, that may be distributed to support business needs. Data marts are analytical data stores designed to focus on specific business functions for a specific community within an organization. Data marts are often derived from subsets of data in a data warehouse, though in the bottom-up data warehouse design methodology the data warehouse is created from the union of organizational data marts.
A data mart is a repository of data gathered from operational data and other sources that is designed to serve a particular community of knowledge workers.
Data marts are created for the following reasons:
To speed up queries by reducing the volume of data to be scanned
To structure data in a form suitable for a user access tool
To partition data in order to impose access control strategies
To segment data onto different hardware platforms
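The first two reasons above can be sketched as a derivation step: a mart keeps only the rows and columns its community scans. The department names and fields here are hypothetical.

```python
# Sketch of deriving a dependent data mart from warehouse rows: restrict to
# one subject area (fewer rows) and project away unused columns (narrower
# rows), so the mart's queries scan less data. All values are invented.

warehouse_rows = [
    {"dept": "marketing", "campaign": "X", "spend": 10, "internal_cost_code": 7},
    {"dept": "finance", "campaign": None, "spend": 0, "internal_cost_code": 9},
]

def build_marketing_mart(rows):
    """Restrict to the marketing subject area and keep only the columns it uses."""
    return [{"campaign": r["campaign"], "spend": r["spend"]}
            for r in rows if r["dept"] == "marketing"]

mart = build_marketing_mart(warehouse_rows)
# mart == [{"campaign": "X", "spend": 10}]  -- 1 row, 2 columns
```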
Data Marting
Situations for the creation of data marts, i.e., identify whether:
There is a natural functional split within the organization
There is a natural split of the data
The proposed user access tool uses its own database structures
Any infrastructure issues dictate the use of data marts
There are any access control issues that require data marts to provide Chinese walls
Costs of data marting: data marting strategies incur costs in the following areas:
Hardware and software
Network access
Time-window constraints
Aggregations
Data aggregation is an essential component of a decision support DWH. It allows us to provide cost-effective query performance by avoiding the need to scan substantial volumes of detailed data for every query.
Aggregation strategies rely on the fact that most common queries will analyze either a subset or an aggregation of the detailed data.
The appropriate aggregation will substantially reduce the processing time required to run a query, at the cost of processing and storing the intermediate results.
Aggregations make trends more apparent by providing an overview of the whole picture rather than small parts of it.
Aggregations
Designing summary tables: the primary purpose of using summary tables is to cut down the time it takes to execute queries. The objective is to minimize the volume of data being scanned, by storing as many partial results as possible.
Summary tables are designed using the following steps:
Determine which dimensions are aggregated
Determine the aggregation of multiple values
Aggregate multiple facts into the summary table
Determine the level of aggregation
Determine the extent of embedding dimension data in the summary
Design time into the summary table
Index the summary table
OLAP Server Architectures
Relational OLAP (ROLAP):
Uses a relational or extended-relational DBMS to store and manage warehouse data, plus OLAP middleware
Includes optimization of the DBMS backend, implementation of aggregation navigation logic, and additional tools and services
Greater scalability
Multidimensional OLAP (MOLAP):
Sparse array-based multidimensional storage engine
Fast indexing to pre-computed summarized data
Hybrid OLAP (HOLAP) (e.g., Microsoft SQL Server):
Flexibility, e.g., low level: relational, high level: array
Specialized SQL servers (e.g., Redbrick):
Specialized support for SQL queries over star/snowflake schemas
OLAP Guidelines (Dr. E.F. Codd's Rules)
Dr. E.F. Codd, the "father" of the relational model, formulated a list of 12 guidelines and requirements as the basis for selecting OLAP systems:
1. Multidimensional conceptual view
2. Transparency
3. Accessibility
4. Consistent reporting performance
5. Client/server architecture
6. Generic dimensionality
7. Dynamic sparse matrix handling
8. Multi-user support
9. Unrestricted cross-dimensional operations
10. Intuitive data manipulation
11. Flexible reporting
12. Unlimited dimensions & aggregation levels
MOLAP

In the OLAP world, there are mainly two different types: Multidimensional OLAP (MOLAP) and Relational OLAP (ROLAP). Hybrid OLAP (HOLAP) refers to technologies that combine MOLAP and ROLAP.

MOLAP is the more traditional way of OLAP analysis. In MOLAP, data is stored in a multidimensional cube. The storage is not in the relational database, but in proprietary formats.

Advantages:
- Excellent performance: MOLAP cubes are built for fast data retrieval and are optimal for slicing and dicing operations.
- They can also perform complex calculations. All calculations are pre-generated when the cube is created; hence, complex calculations are not only doable, they return quickly.

Disadvantages:
- Limited in the amount of data it can handle. Because all calculations are performed when the cube is built, it is not possible to include a large amount of data in the cube itself. This is not to say that the data in the cube cannot be derived from a large amount of data; indeed, this is possible. But in that case, only summary-level information will be included in the cube itself.
- Requires an additional investment. Cube technology is often proprietary and may not already exist in the organization.
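The "calculations pre-generated at cube build time" idea can be sketched with a toy sparse cube. This is a simplification, not how a commercial MOLAP engine is implemented: every aggregate is computed once while loading the facts, stored in a sparse mapping keyed by (item, city, year) coordinates with `None` marking an aggregated-away dimension, and later lookups are constant time.

```python
from collections import defaultdict

# Hypothetical facts: (item, city, year, amount).
facts = [("TV", "Delhi", 2004, 100.0), ("TV", "Mumbai", 2004, 75.0),
         ("Radio", "Delhi", 2005, 20.0)]

# "Cube build": each fact contributes to all 2^3 = 8 cuboids.
cube = defaultdict(float)
for item, city, year, amount in facts:
    for i in (item, None):
        for c in (city, None):
            for y in (year, None):
                cube[(i, c, y)] += amount

# Slicing and dicing are now plain dictionary lookups against the
# pre-computed sparse array; nothing is calculated at query time.
print(cube[("TV", None, 2004)])   # TV sales across all cities in 2004
print(cube[(None, None, None)])   # grand total
```

The sparsity also shows why the database explosion problem arises: the pre-computed store grows with the number of populated aggregate cells, not just the number of input facts.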
ROLAP

This methodology relies on manipulating the data stored in the relational database to give the appearance of traditional OLAP's slicing and dicing functionality. In essence, each action of slicing and dicing is equivalent to adding a "WHERE" clause to the SQL statement.

Advantages:
- It can handle large amounts of data. The data size limitation of ROLAP technology is that of the underlying relational database; ROLAP itself places no limitation on data amount.
- It can leverage functionalities inherent in the relational database. A relational database already comes with a host of functionalities, and ROLAP technologies, since they sit on top of the relational database, can leverage them.

Disadvantages:
- Performance can be slow. Because each ROLAP report is essentially a SQL query (or multiple SQL queries) against the relational database, the query time can be long if the underlying data size is large.
- It is limited by SQL functionality. Because ROLAP technology mainly relies on generating SQL statements to query the relational database, and SQL statements do not fit all needs (for example, it is difficult to perform complex calculations using SQL), ROLAP technologies are traditionally limited by what SQL can do. ROLAP vendors have mitigated this risk by building complex out-of-the-box functions into the tool, as well as the ability to allow users to define their own functions.
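Both ROLAP points above can be demonstrated in miniature with SQLite (a stand-in for any relational back end; the table and data are invented for the example): a slice is literally an extra WHERE predicate, and a user-defined function fills a gap that plain SQL leaves, mirroring how ROLAP tools let users register their own functions.

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (item TEXT, city TEXT, year INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)",
                 [("TV", "Delhi", 2004, 100.0), ("TV", "Mumbai", 2004, 75.0),
                  ("Radio", "Delhi", 2005, 20.0)])

# A "slice" on the city dimension is just an extra WHERE clause.
delhi_total = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE city = 'Delhi'").fetchone()[0]
print(delhi_total)  # 120.0

# A user-defined function covers a calculation plain SQL lacks here
# (log-scaled amounts), registered into the SQL dialect at runtime.
conn.create_function("LOG_AMOUNT", 1, lambda x: math.log10(x))
top = conn.execute(
    "SELECT item, LOG_AMOUNT(amount) FROM sales "
    "WHERE item = 'TV' AND city = 'Delhi'"
).fetchone()
print(top)  # ('TV', 2.0)
```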
HOLAP

HOLAP technologies attempt to combine the advantages of MOLAP and ROLAP. For summary-type information, HOLAP leverages cube technology for faster performance. When detail information is needed, HOLAP can "drill through" from the cube into the underlying relational data.
The FASMI test

FAST means that the system is targeted to deliver most responses to users within about five seconds, with the simplest analyses taking no more than one second and very few taking more than 20 seconds. Independent research in the Netherlands has shown that end-users assume a process has failed if results are not received within 30 seconds, and they are apt to hit 'Alt+Ctrl+Delete' unless the system warns them that the report will take longer. Even if they have been warned that it will take significantly longer, users are likely to get distracted and lose their chain of thought, so the quality of analysis suffers. This speed is not easy to achieve with large amounts of data, particularly if on-the-fly and ad hoc calculations are required. Vendors resort to a wide variety of techniques to achieve this goal, including specialized forms of data storage, extensive pre-calculations and specific hardware requirements, but we do not think any products are yet fully optimized, so we expect this to be an area of developing technology. In particular, the full pre-calculation approach fails with very large, sparse applications as the databases simply get too large (the database explosion problem), whereas doing everything on-the-fly is much too slow with large databases, even if exotic hardware is used. Even though it may seem miraculous at first if reports that previously took days now take only minutes, users soon get bored of waiting, and the project will be much less successful than if it had delivered a near-instantaneous response, even at the cost of less detailed analysis. The OLAP Survey has found that slow query response is consistently the most often-cited technical problem with OLAP products, so too many deployments are clearly still failing to pass this test.
ANALYSIS means that the system can cope with any business logic and statistical analysis that is relevant for the application and the user, and keep it easy enough for the target user. Although some pre-programming may be needed, we do not think it acceptable if all application definitions have to be done using a professional 4GL. It is certainly necessary to allow the user to define new ad hoc calculations as part of the analysis and to report on the data in any desired way, without having to program, so we exclude products (like Oracle Discoverer) that do not allow adequate end-user oriented calculation flexibility. We do not mind whether this analysis is done in the vendor's own tools or in a linked external product such as a spreadsheet, simply that all the required analysis functionality be provided in an intuitive manner for the target users. This could include specific features like time series analysis, cost allocations, currency translation, goal seeking, ad hoc multidimensional structural changes, non-procedural modeling, exception alerting, data mining and other application dependent features. These capabilities differ widely between products, depending on their target markets.
The FASMI test

SHARED means that the system implements all the security requirements for confidentiality (possibly down to cell level) and, if multiple write access is needed, concurrent update locking at an appropriate level. Not all applications need users to write data back, but for the growing number that do, the system should be able to handle multiple updates in a timely, secure manner. This is a major area of weakness in many OLAP products, which tend to assume that all OLAP applications will be read-only, with simplistic security controls. Even products with multi-user read-write often have crude security models; an example is Microsoft OLAP Services.
MULTIDIMENSIONAL is our key requirement. If we had to pick a one-word definition of OLAP, this is it. The system must provide a multidimensional conceptual view of the data, including full support for hierarchies and multiple hierarchies, as this is certainly the most logical way to analyze businesses and organizations. We are not setting up a specific minimum number of dimensions that must be handled as it is too application dependent and most products seem to have enough for their target markets. Again, we do not specify what underlying database technology should be used providing that the user gets a truly multidimensional conceptual view.
INFORMATION is all of the data and derived information needed, wherever it is and however much is relevant for the application. We are measuring the capacity of various products in terms of how much input data they can handle, not how many Gigabytes they take to store it. The capacities of the products differ greatly — the largest OLAP products can hold at least a thousand times as much data as the smallest. There are many considerations here, including data duplication, RAM required, disk space utilization, performance, integration with data warehouses and the like.
Cube Operation

Cube definition and computation in DMQL:

define cube sales [item, city, year]: sum(sales_in_dollars)
compute cube sales

Transform it into a SQL-like language (with a new operator cube by, introduced by Gray et al.'96):

SELECT item, city, year, SUM(amount)
FROM SALES
CUBE BY item, city, year

This needs to compute the following group-bys (the cuboid lattice over the three dimensions):

(city, item, year)
(city, item), (city, year), (item, year)
(city), (item), (year)
()
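The lattice above can be materialized with a short sketch. This is a naive emulation of CUBE BY, invented for illustration (the data is hypothetical): it enumerates every subset of the three dimensions and runs a SUM group-by for each, producing all 2^3 = 8 cuboids.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical rows of the SALES table: (item, city, year, amount).
rows = [("TV", "Delhi", 2004, 100.0), ("TV", "Mumbai", 2004, 75.0),
        ("Radio", "Delhi", 2005, 20.0)]
dims = ("item", "city", "year")

cuboids = {}
for k in range(len(dims) + 1):
    for kept in combinations(range(len(dims)), k):
        # One group-by per subset of kept dimensions, SUM(amount).
        agg = defaultdict(float)
        for row in rows:
            key = tuple(row[i] for i in kept)
            agg[key] += row[3]
        cuboids[tuple(dims[i] for i in kept)] = dict(agg)

print(len(cuboids))                  # 8 group-bys, matching the lattice
print(cuboids[("item",)][("TV",)])   # total TV sales: 175.0
print(cuboids[()][()])               # apex cuboid, grand total: 195.0
```

Real cube algorithms (e.g. the multiway array aggregation discussed in the references) share intermediate results between cuboids instead of scanning the data once per group-by.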
Efficient Processing of OLAP Queries

Determine which operations should be performed on the available cuboids:
- Transform drill, roll, etc. into corresponding SQL and/or OLAP operations, e.g., dice = selection + projection

Determine which materialized cuboid(s) should be selected for the OLAP operation:
- Let the query to be processed be on {brand, province_or_state} with the condition "year = 2004", and suppose there are 4 materialized cuboids available:
  1) {year, item_name, city}
  2) {year, brand, country}
  3) {year, brand, province_or_state}
  4) {item_name, province_or_state} where year = 2004
  Which should be selected to process the query?

Explore indexing structures and compressed vs. dense array structures in MOLAP
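One way to reason about the question is a dimension-coverage test: a materialized cuboid can answer the query if every queried dimension appears in it at the same or a finer level of its concept hierarchy. The sketch below is a simplification with assumed hierarchies (item_name rolls up to brand; city rolls up to province_or_state, which rolls up to country), and it deliberately does not model cuboid 4's pre-applied "year = 2004" selection, which in a fuller treatment would also make cuboid 4 usable for this particular query.

```python
# Assumed hierarchies: for each queried level, the set of cuboid
# levels that are the same or finer (and so can be rolled up).
FINER = {
    "brand": {"brand", "item_name"},
    "province_or_state": {"province_or_state", "city"},
    "year": {"year"},
}

def can_answer(query_dims, cuboid_dims):
    """True if every queried dimension is present at a usable level."""
    return all(FINER[q] & set(cuboid_dims) for q in query_dims)

cuboids = {
    1: {"year", "item_name", "city"},
    2: {"year", "brand", "country"},
    3: {"year", "brand", "province_or_state"},
    4: {"item_name", "province_or_state"},
}
query = {"brand", "province_or_state", "year"}  # year needed for "year = 2004"

usable = [i for i, dims in cuboids.items() if can_answer(query, dims)]
print(usable)  # cuboid 2 fails (country is coarser than province_or_state)
```

Among the usable cuboids, the one requiring the least further aggregation (here, cuboid 3, which matches the query levels exactly) would typically be chosen, since cuboid 1 must still roll item_name up to brand and city up to province_or_state.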
Data Warehouse Usage

Three kinds of data warehouse applications:

Information processing
- supports querying, basic statistical analysis, and reporting using crosstabs, tables, charts and graphs

Analytical processing
- multidimensional analysis of data warehouse data
- supports basic OLAP operations: slice-dice, drilling, pivoting

Data mining
- knowledge discovery from hidden patterns
- supports associations, constructing analytical models, performing classification and prediction, and presenting the mining results using visualization tools
From On-Line Analytical Processing (OLAP) to On-Line Analytical Mining (OLAM)

Why online analytical mining?
- High quality of data in data warehouses: a DW contains integrated, consistent, cleaned data
- Available information processing structure surrounding data warehouses: ODBC, OLEDB, Web accessing, service facilities, reporting and OLAP tools
- OLAP-based exploratory data analysis: mining with drilling, dicing, pivoting, etc.
- On-line selection of data mining functions: integration and swapping of multiple mining functions, algorithms, and tasks
An OLAM System Architecture

[Figure: a four-layer OLAM architecture. Layer 1, Data Repository: databases and the data warehouse, populated through data cleaning, data integration, and filtering, accessed via a database API. Layer 2, MDDB: a multidimensional database with metadata, accessed via a data cube API. Layer 3, OLAP/OLAM: an OLAP engine and an OLAM engine sharing the data cube API. Layer 4, User Interface: a user GUI API through which mining queries are issued and mining results returned.]
Data Warehouse Security Responsibility and Confidentiality : The Data Warehouse contains
confidential and sensitive University data. In order to use its data, you must have proper authorization. Your authorization means that you have the authority to use the data and the responsibility to share stewardship of the data with the other users of the collection.
Once authorized, you can access the data that you need to do your job. All authorized users are cautioned, however, that they are entrusted to use the data they retrieve from the Warehouse with care. Confidential data should not be released to others except for those with a "legitimate need to know."
Please remember that you should never share Business Objects queries with other users with the data intact; send the query without the data.
Querying Data with Security Restrictions : If you execute a query requesting data that you are not authorized to access, you will get results which may be incomplete because they are missing the data you are not allowed to access.
If your authorization is limited to a specific set of data, be sure when querying the data that your record selection conditions include your security restrictions. For example, if you are authorized to access just the data for a particular department, one of your record selection conditions should state something like Organization = 'My Organization', where My Organization is the code of your department. This will document why the query gets the results it does, and will also help your query run faster.
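One common way to bake such a restriction in, rather than trusting every query to repeat it, is a restricted view. The sketch below is illustrative only: the table, the department codes, and the view name are all invented, and SQLite stands in for the warehouse database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE budget (organization TEXT, amount REAL)")
conn.executemany("INSERT INTO budget VALUES (?, ?)",
                 [("DEPT_A", 1000.0), ("DEPT_B", 2000.0)])

# Hypothetical department code this user is authorized for.
MY_ORGANIZATION = "DEPT_A"

# Encode the record-selection condition Organization = 'My Organization'
# once, in a view, so every query through it carries the restriction.
conn.execute(
    f"CREATE VIEW my_budget AS "
    f"SELECT * FROM budget WHERE organization = '{MY_ORGANIZATION}'"
)

rows = conn.execute("SELECT organization, amount FROM my_budget").fetchall()
print(rows)  # only the authorized department's rows
```

Queries against `my_budget` then get the "incomplete because unauthorized rows are missing" behaviour described above by construction, and the predicate also gives the optimizer a head start.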
SECURITY

A DWH is by nature an open, accessible system. The aim of a DWH is generally to make large amounts of data easily accessible to the users, thereby enabling the users to extract information about the business as a whole.

It is important to establish early any security and audit requirements that will be placed on the DWH. Clearly, adding security will affect performance, because further checks require CPU cycles and time to perform.

Requirements: security can affect many different parts of the DWH, such as:
- User access, which can be managed by:
  - Data classification: by level of security required, by data sensitivity, by job function
  - User classification: top-down company hierarchy (department, section, group), by role
- Data load
- Legal requirements: it is vital to establish any legal requirements (law) on the data being stored
- Audit requirements: such as connections, disconnections, data access, data change
- Network requirements (routes)
SECURITY

- Data movement: the type of file to be moved and the manner in which it has to be moved (flat file, encrypted/decrypted, summary generation, results in temporary tables)
- Documentation: it is important to get all the security and audit requirements clearly documented, as this will be needed as part of any cost justification. This document should contain all the information gathered on:
  - Data classification
  - User classification
  - Network requirements
  - Data movement and storage requirements
  - All auditable actions
  - Query generation
User Access Hierarchy

[Figure: an example user access hierarchy for "Data Warehouse Inc." Administrators administer the warehouse; a senior analyst and teams of analysts in the Sales and Marketing departments access detailed sales data, detailed customer data, summarized sales data, and reference data according to their roles.]
Backup and Recovery

Backup is one of the most important regular operations carried out on any system. It is especially important in the DWH environment because of the volumes of data involved and the complexity of the system.

Types of backup:
- Complete backup: the entire database is backed up at the same time. This includes all database data files, the control files and the journal files.
- Partial backup: any backup that is not complete.
- Cold backup: a backup taken while the database is completely shut down. In a multi-instance environment, all instances of that database must be shut down.
- Hot backup: any backup that is not cold is considered to be hot.
- Online backup: a synonym for hot backup.
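As a small concrete illustration of a hot/online backup, SQLite's backup API copies a database's pages while the source connection remains open and usable. This is only a sketch of the concept at miniature scale; a warehouse backup would involve the tape or disk infrastructure discussed below.

```python
import sqlite3

# Source database stays open ("hot") while the backup runs.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (42)")
src.commit()

# In practice the target would be a file on backup media.
dst = sqlite3.connect(":memory:")
src.backup(dst)   # complete backup: all pages copied to the target

# Verify the backup is independently readable.
restored = dst.execute("SELECT x FROM t").fetchone()[0]
print(restored)
```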
Backup and Recovery

Hardware: when you are choosing a backup strategy, one of the first decisions is which hardware to use. The choice depends on factors such as the speed at which a backup or restore can be processed, the hardware connection, network bandwidth, the backup software used, and the speed of the server's I/O subsystem and components.

Tape technology:
- Tape media (reliability, life, cost of tape medium/drive, scalability)
- Standalone tape drives (connected directly to servers, as a network-available device, or remotely to another machine)
- Tape stackers: a method of loading multiple tapes into a single tape drive. Only one tape can be accessed at a time, but the stacker automatically dismounts the current tape when it has finished with it and loads the next.
- Tape silos: large tape storage facilities which can store and manage thousands of tapes. These are generally sealed environments with robotic arms for manipulating the tapes.
- Disk-to-disk backups: the backup is performed to disk rather than to tape.
Backup and Recovery

Software (examples: Omniback II, ADSM, Alexandria, Epoch, Networker)
- Performance considerations, such as degree of parallelism and I/O bottlenecks

Requirements: when considering which backup package to use, it is important to check the following criteria:
- What degree of parallelism is possible?
- How scalable is the product as tape drives are added?
- What platforms are supported by the package?
- What tape drives and tape media are supported by the package?
- Does the package support easy access to information about tape contents?

Backup strategies:
- Effect on database design, such as DB partitioning strategies
- Design strategies: the main aim should be to reduce the amount of data that has to be backed up on a regular basis, e.g. read-only tablespaces, automation of backup
Backup and Recovery

Recovery strategies depend on the kind of failure and consist of a set of failure scenarios and their resolutions. Each of the following failure scenarios needs to be covered by documented recovery steps:
1. Instance failure
2. Media failure
3. Loss or damage of a tablespace or data file
4. Loss or damage of a table
5. Loss or damage of a control file
6. Failure during data movement

A plan must be made for the following data movement scenarios:
1. Data load into staging tables
2. Movement from staging to fact table
3. Partition roll-up into larger partitions
4. Creation of aggregations

Testing the strategy: backup and recovery tests need to be carried out on a regular basis, but it is advisable to avoid performing tests at busy times such as end of year, and to run tests at times of low load in the business year.
Backup and Recovery

Disaster recovery: a disaster can be defined as a situation where a major site loss has taken place, or the site is destroyed or damaged beyond immediate repair. It is advisable to decide the criteria for making that judgment before any such situation occurs, as attempting to make the decision in the middle of a crisis is likely to lead to problems. Plan for the following disaster recovery requirements:
- Planning disaster recovery with the minimal system required
- Replacement/standby machine
- Sufficient tape and disk capacity
- Communication links to users
- Communication links to data sources
- Copies of all relevant pieces of software
- Backup of the database
- Application-aware systems administration and operations staff
Tuning the Data Warehouse

Tuning the data warehouse deals with measures such as:
- Average query response times
- Scan rates
- I/O throughput rates
- Time used per query (fixed or ad hoc)
- Memory usage per process

For each user group, consider:
- The number of users in the group
- Whether they use ad hoc queries frequently, occasionally at unknown intervals, or at regular or predictable times
- The average/maximum size of query they tend to run
- The peak time of daily usage

The more unpredictable the load, the larger the queries, or the greater the number of users, the bigger the tuning task.
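The first metric in the list, average query response time, is simple to collect: time the same representative query several times and average. The sketch below is illustrative (the table, query, and repetition count are arbitrary choices), but the pattern applies to any of the fixed queries a tuning exercise would track.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (item TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("TV", float(i)) for i in range(10_000)])

# Run a representative query repeatedly and average the wall-clock time.
timings = []
for _ in range(5):
    start = time.perf_counter()
    conn.execute("SELECT item, SUM(amount) FROM sales GROUP BY item").fetchall()
    timings.append(time.perf_counter() - start)

avg_response = sum(timings) / len(timings)
print(f"average query response: {avg_response * 1000:.2f} ms")
```

In a real warehouse the same harness would be run at peak and off-peak times, since the load profile of each user group shifts the results.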
Testing the data warehouse

Three levels of testing:
- Unit testing: each development unit is tested on its own.
- Integration testing: the separate development units that make up a component of the DWH application are tested to ensure that they work together.
- System testing: the whole DWH application is tested together. The components are tested to ensure that they work properly together and do not cause system bottlenecks.

Developing the test plan:
- Test schedule: metrics for estimating the amount of time required for testing
- Data load:
  - How will the data be generated?
  - Where will the data be generated?
  - How will the generated data be loaded?
  - Will the data be correctly skewed?
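The last question, whether generated data is correctly skewed, matters because uniformly random test data gives unrealistically flat index and aggregate distributions. A minimal sketch of skewed generation, with an invented 70/20/10 product split standing in for whatever distribution profiling of the real data would suggest:

```python
import random

random.seed(0)  # reproducible test data

# Hypothetical skew: 70% of sales are TVs, 20% radios, 10% DVDs.
products = ["TV", "Radio", "DVD"]
weights = [0.7, 0.2, 0.1]

test_rows = [(random.choices(products, weights)[0],
              random.uniform(10, 500))          # uniform amounts for simplicity
             for _ in range(10_000)]

counts = {p: sum(1 for r in test_rows if r[0] == p) for p in products}
print(counts)  # roughly a 7000 / 2000 / 1000 split
```

Loading such rows through the normal load path then exercises the warehouse under a data shape closer to production.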
Testing the data warehouse

Testing backup recovery: to test the recovery of a lost data file, a data file should actually be deleted and recovered from backup. Check that the backup database is working correctly and actually tracks what has been backed up and where it has been backed up to. If that works, check that the information can be retrieved. Check that all the backup hardware is working: tapes, tape drives, controllers, etc. Each of the scenarios below needs to be covered:
1. Instance failure
2. Media failure
3. Loss or damage of a tablespace or data file
4. Loss or damage of a table
5. Loss or damage of a control file
6. Failure during data movement

Testing the operational environment: testing of the DWH operational environment is another key set of tests that will have to be performed. The following aspects need to be tested:
- Security: document what is and is not allowed, and devise a test for each disallowed operation.
- Disk configuration: test thoroughly to identify any potential I/O bottlenecks.
Testing the data warehouse

- Scheduler: given the potential for many of the processes in the DWH to swamp the system resources if allowed to run at the wrong time, scheduling control of these processes is essential to the success of the DWH.
- Management tools (event / system / configuration / backup recovery / database)
- Database management

Testing the database can be broken down into three separate sets of tests:
- Testing the database manager and monitoring tools (creation, running and management of the test database)
- Testing database features (querying, index creation, data load in parallel)
- Testing database performance (test queries with different aggregations, index strategies, degrees of parallelism, different-sized data sets)

Testing the application: logistics of the test (DWH application code, day-to-day operational procedures, backup recovery strategy, query performance, management and monitoring tools, scheduling software)
UNIT IV: Data Warehousing and OLAP Technology: An Overview

- What is a data warehouse?
- A multi-dimensional data model
- Data warehouse architecture
- Data warehouse implementation
- From data warehousing to data mining
- Summary
References (I)

- S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R. Ramakrishnan, and S. Sarawagi. On the computation of multidimensional aggregates. VLDB'96
- D. Agrawal, A. E. Abbadi, A. Singh, and T. Yurek. Efficient view maintenance in data warehouses. SIGMOD'97
- R. Agrawal, A. Gupta, and S. Sarawagi. Modeling multidimensional databases. ICDE'97
- S. Chaudhuri and U. Dayal. An overview of data warehousing and OLAP technology. ACM SIGMOD Record, 26:65-74, 1997
- E. F. Codd, S. B. Codd, and C. T. Salley. Beyond decision support. Computer World, 27, July 1993
- J. Gray, et al. Data cube: A relational aggregation operator generalizing group-by, cross-tab and sub-totals. Data Mining and Knowledge Discovery, 1:29-54, 1997
- A. Gupta and I. S. Mumick. Materialized Views: Techniques, Implementations, and Applications. MIT Press, 1999
- J. Han. Towards on-line analytical mining in large databases. ACM SIGMOD Record, 27:97-107, 1998
- V. Harinarayan, A. Rajaraman, and J. D. Ullman. Implementing data cubes efficiently. SIGMOD'96
References (II)

- C. Imhoff, N. Galemmo, and J. G. Geiger. Mastering Data Warehouse Design: Relational and Dimensional Techniques. John Wiley, 2003
- W. H. Inmon. Building the Data Warehouse. John Wiley, 1996
- R. Kimball and M. Ross. The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling. 2nd ed. John Wiley, 2002
- P. O'Neil and D. Quass. Improved query performance with variant indexes. SIGMOD'97
- Microsoft. OLEDB for OLAP programmer's reference version 1.0. http://www.microsoft.com/data/oledb/olap, 1998
- A. Shoshani. OLAP and statistical databases: Similarities and differences. PODS'00
- S. Sarawagi and M. Stonebraker. Efficient organization of large multidimensional arrays. ICDE'94
- OLAP Council. MDAPI specification version 2.0. http://www.olapcouncil.org/research/apily.htm, 1998
- E. Thomsen. OLAP Solutions: Building Multidimensional Information Systems. John Wiley, 1997
- P. Valduriez. Join indices. ACM Trans. Database Systems, 12:218-246, 1987
- J. Widom. Research problems in data warehousing. CIKM'95