Data Mining:
Concepts and Techniques
— UNIT IV & V —
Nitin Sharma
Assistant Professor, Department of CS, IET, Alwar
Books Recommended:
- M. H. Dunham, "Data Mining: Introductory & Advanced Topics", Pearson Education
- Jiawei Han, Micheline Kamber, "Data Mining: Concepts & Techniques", Elsevier
- Sam Anahory, Dennis Murray, "Data Warehousing in the Real World: A Practical Guide for Building Decision Support Systems", Pearson Education
- Mallach, "Data Warehousing System", TMH
UNIT IV: Data Warehousing and OLAP Technology: An Overview
- What is a data warehouse?
- A multi-dimensional data model
- Data warehouse architecture
- Data warehouse implementation
- From data warehousing to data mining
What is a Data Warehouse?
- Defined in many different ways, but not rigorously.
  - A decision support database that is maintained separately from the organization's operational database.
  - Supports information processing by providing a solid platform of consolidated, historical data for analysis.
- "A data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision-making process."—W. H. Inmon
- Data warehousing: the process of constructing and using data warehouses.
Data Warehouse—Subject-Oriented
- Organized around major subjects, such as customer, product, sales.
- Focuses on the modeling and analysis of data for decision makers, not on daily operations or transaction processing.
- Provides a simple and concise view around particular subject issues by excluding data that are not useful in the decision support process.
Data Warehouse—Integrated
- Constructed by integrating multiple, heterogeneous data sources: relational databases, flat files, on-line transaction records.
- Data cleaning and data integration techniques are applied.
  - Ensure consistency in naming conventions, encoding structures, attribute measures, etc. among different data sources (e.g., hotel price: currency, tax, whether breakfast is covered, etc.).
  - When data is moved to the warehouse, it is converted.
Data Warehouse—Time Variant
- The time horizon for the data warehouse is significantly longer than that of operational systems.
  - Operational database: current value data.
  - Data warehouse data: provide information from a historical perspective (e.g., the past 5-10 years).
- Every key structure in the data warehouse contains an element of time, explicitly or implicitly, whereas the key of operational data may or may not contain a "time element".
Data Warehouse—Nonvolatile
- A physically separate store of data transformed from the operational environment.
- Operational update of data does not occur in the data warehouse environment.
  - Does not require transaction processing, recovery, and concurrency control mechanisms.
  - Requires only two operations in data accessing: initial loading of data and access of data.
Data Warehouse vs. Heterogeneous DBMS
- Traditional heterogeneous DB integration: a query-driven approach.
  - Build wrappers/mediators on top of heterogeneous databases.
  - When a query is posed to a client site, a meta-dictionary is used to translate the query into queries appropriate for the individual heterogeneous sites involved, and the results are integrated into a global answer set.
  - Complex information filtering; queries compete for resources.
- Data warehouse: update-driven, high performance.
  - Information from heterogeneous sources is integrated in advance and stored in warehouses for direct query and analysis.
Data Warehouse vs. Operational DBMS
- OLTP (on-line transaction processing)
  - Major task of traditional relational DBMS.
  - Day-to-day operations: purchasing, inventory, banking, manufacturing, payroll, registration, accounting, etc.
- OLAP (on-line analytical processing)
  - Major task of data warehouse systems.
  - Data analysis and decision making.
- Distinct features (OLTP vs. OLAP):
  - User and system orientation: customer vs. market
  - Data contents: current, detailed vs. historical, consolidated
  - Database design: ER + application vs. star + subject
  - View: current, local vs. evolutionary, integrated
  - Access patterns: update vs. read-only but complex queries
OLTP vs. OLAP

Feature              OLTP                                  OLAP
users                clerk, IT professional                knowledge worker
function             day-to-day operations                 decision support
DB design            application-oriented                  subject-oriented
data                 current, up-to-date, detailed,        historical, summarized,
                     flat relational, isolated             multidimensional, integrated,
                                                           consolidated
usage                repetitive                            ad-hoc
access               read/write, index/hash on prim. key   lots of scans
unit of work         short, simple transaction             complex query
# records accessed   tens                                  millions
# users              thousands                             hundreds
DB size              100MB-GB                              100GB-TB
metric               transaction throughput                query throughput, response
Why Separate Data Warehouse?
- High performance for both systems:
  - DBMS—tuned for OLTP: access methods, indexing, concurrency control, recovery.
  - Warehouse—tuned for OLAP: complex OLAP queries, multidimensional view, consolidation.
- Different functions and different data:
  - Missing data: decision support requires historical data which operational DBs do not typically maintain.
  - Data consolidation: decision support requires consolidation (aggregation, summarization) of data from heterogeneous sources.
  - Data quality: different sources typically use inconsistent data representations, codes and formats which have to be reconciled.
- Note: there are more and more systems which perform OLAP analysis directly on relational databases.
Characteristics of Data Warehouse
A DWH can be viewed as an information system with the following attributes:
1. It is a database designed for analytical tasks, using data from multiple applications.
2. It supports a relatively small number of users with relatively long interactions.
3. Its usage is read-intensive.
4. Its content is periodically updated (mostly additions).
5. It contains current and historical data to provide a historical perspective of information.
6. It contains a few large tables.
7. Each query frequently returns a large result set and involves frequent full table scans and multi-table joins.
8. A DWH is an environment, not a product. It is an architectural construct of information systems that provides users with current and historical decision support information.
9. It is a blend of technologies aimed at the effective integration of operational databases into an environment.
Need of DWH
1. Decisions need to be made quickly and correctly using all available data.
2. Users are business domain experts, not computer professionals.
3. The amount of data doubles every 18 months, which affects response time and the sheer ability to comprehend its content.
4. Competition is heating up in the area of business intelligence and added information value.
5. The DWH is designed to address the incompatibility of informational and operational transactional systems.
6. The IT infrastructure is changing (computer prices, speed, storage, network bandwidth, heterogeneous workplaces).
Benefits of Data Warehousing
- Tangible benefits:
  1. Product inventory turnover is improved.
  2. Costs of product introduction are decreased with improved selection of target markets.
  3. More cost-effective decision making is enabled by separating query processing from running against operational databases.
  4. Better business intelligence is enabled by the increased quality and flexibility of market analysis available.
- Intangible benefits:
  1. Improved productivity.
  2. Reduced redundant processing, support and software for overlapping decision support applications.
  3. Enhanced customer relations.
  4. Enabling business process reengineering: a DWH can provide useful insights into the work processes themselves.
The Data Warehouse Delivery Process
Stages (figure): IT Strategy, Education, Business Case Analysis, Technical Blueprint, Business Requirements, Requirements Evolution, Build the Vision, History Load, Ad-hoc Query, Extending Scope, Automation.
The Data Warehouse Delivery Process
- IT Strategy: the DWH project must include an IT strategy for procuring and retaining funding.
- Business Case Analysis: it is necessary to understand the level of investment that can be justified, and to identify the projected business benefits that should be derived from using the DWH.
- Education & Prototyping: organizations will experiment with the concept of data analysis and educate themselves on the value of a DWH. This is valuable and should be considered if this is the organization's first exposure to the benefits of decision support information. The prototyping activity can advance the growth of education, and is better than working models; prototyping requires business requirements, a technical blueprint, and an architecture.
- Business Requirements: these include:
  - The logical model for information within the DWH.
  - The source systems that provide this data (mapping rules).
  - The business rules to be applied to the data.
  - The query profiles for the immediate requirement.
The Data Warehouse Delivery Process
- Technical Blueprint: it must identify:
  - The overall system architecture.
  - The server and data mart architecture for both data and applications.
  - The essential components of the database design.
  - The data retention strategy.
  - The backup and recovery strategy.
  - The capacity plan for hardware and infrastructure.
- Building the Vision: the stage where the first production deliverable is produced. This stage will probably build the major infrastructure components for extracting and loading data, but limit them to the extraction and load of the initial data sources.
- History Load: in most cases, the next phase is the one where the remainder of the required history is loaded into the DWH. This means that new entities would not be added to the DWH, but additional physical tables would probably be created to store the increased data volumes.
- Ad-hoc Query: in this phase we configure an ad-hoc query tool to operate against the DWH. These end-user access tools are capable of automatically generating the database queries that answer any question posed by the user.
The Data Warehouse Delivery Process
- Automation: the phase where many of the operational management processes are fully automated within the DWH. These include:
  - Extracting and loading the data from a variety of source systems.
  - Transforming the data into a form suitable for analysis.
  - Backing up, restoring and archiving data.
  - Generating aggregations from predefined definitions within the DWH.
  - Monitoring query profiles and determining the appropriate aggregations to maintain system performance.
- Extending Scope: in this phase, the scope of the DWH is extended to address a new set of business requirements. This involves the loading of additional data sources into the DWH, i.e., the introduction of new data marts.
- Requirements Evolution: the most important aspect of the delivery process is that the requirements are never static. Business requirements will constantly change during the life of the DWH, so it is imperative that the process supports this and allows these changes to be reflected within the system.
Chapter 3: Data Warehousing and OLAP Technology: An Overview
- What is a data warehouse?
- A multi-dimensional data model
- Data warehouse architecture
- Data warehouse implementation
- From data warehousing to data mining
From Tables and Spreadsheets to Data Cubes
- A data warehouse is based on a multidimensional data model which views data in the form of a data cube.
- A data cube, such as sales, allows data to be modeled and viewed in multiple dimensions.
  - Dimension tables, such as item (item_name, brand, type) or time (day, week, month, quarter, year).
  - The fact table contains measures (such as dollars_sold) and keys to each of the related dimension tables.
- In the data warehousing literature, an n-D base cube is called a base cuboid. The topmost 0-D cuboid, which holds the highest level of summarization, is called the apex cuboid. The lattice of cuboids forms a data cube.
Cube: A Lattice of Cuboids
For the dimensions time, item, location, supplier:
- 0-D (apex) cuboid: all
- 1-D cuboids: (time), (item), (location), (supplier)
- 2-D cuboids: (time, item), (time, location), (time, supplier), (item, location), (item, supplier), (location, supplier)
- 3-D cuboids: (time, item, location), (time, item, supplier), (time, location, supplier), (item, location, supplier)
- 4-D (base) cuboid: (time, item, location, supplier)
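To make the lattice concrete, here is a minimal Python sketch (an illustration, not part of the original slides) that enumerates every cuboid of an n-D base cuboid as a subset of its dimension list:

```python
from itertools import combinations

def cuboid_lattice(dimensions):
    """Yield every cuboid (group-by attribute subset) of an n-D base cuboid."""
    for k in range(len(dimensions) + 1):        # 0-D apex up to n-D base
        yield from combinations(dimensions, k)  # () is the apex cuboid

dims = ["time", "item", "location", "supplier"]
for cuboid in cuboid_lattice(dims):
    print(len(cuboid), cuboid)                  # 2^4 = 16 cuboids in total
```

An n-D cube thus has 2^n cuboids (more when concept hierarchies add levels per dimension), which is why materializing the full cube quickly becomes expensive.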
Conceptual Modeling of Data Warehouses
- Modeling data warehouses: dimensions & measures.
  - Star schema: a fact table in the middle connected to a set of dimension tables.
  - Snowflake schema: a refinement of the star schema where some dimensional hierarchy is normalized into a set of smaller dimension tables, forming a shape similar to a snowflake.
  - Fact constellations: multiple fact tables share dimension tables, viewed as a collection of stars, and therefore called a galaxy schema or fact constellation.
Example of Star Schema
Sales fact table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales.
Dimension tables:
- time (time_key, day, day_of_the_week, month, quarter, year)
- item (item_key, item_name, brand, type, supplier_type)
- branch (branch_key, branch_name, branch_type)
- location (location_key, street, city, state_or_province, country)
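As an illustration (a minimal sketch, not part of the slides), the same star schema can be written as SQL DDL; here it is created via Python's built-in sqlite3 module, using the table and column names from the figure:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
conn.executescript("""
CREATE TABLE time (time_key INTEGER PRIMARY KEY, day TEXT,
                   day_of_the_week TEXT, month TEXT, quarter TEXT, year INTEGER);
CREATE TABLE item (item_key INTEGER PRIMARY KEY, item_name TEXT,
                   brand TEXT, type TEXT, supplier_type TEXT);
CREATE TABLE branch (branch_key INTEGER PRIMARY KEY,
                     branch_name TEXT, branch_type TEXT);
CREATE TABLE location (location_key INTEGER PRIMARY KEY, street TEXT,
                       city TEXT, state_or_province TEXT, country TEXT);
-- The fact table holds one foreign key per dimension plus the measures.
CREATE TABLE sales_fact (
    time_key     INTEGER REFERENCES time(time_key),
    item_key     INTEGER REFERENCES item(item_key),
    branch_key   INTEGER REFERENCES branch(branch_key),
    location_key INTEGER REFERENCES location(location_key),
    units_sold INTEGER, dollars_sold REAL, avg_sales REAL
);
""")
```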
Example of Snowflake Schema
Sales fact table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales.
Dimension tables (partially normalized):
- time (time_key, day, day_of_the_week, month, quarter, year)
- item (item_key, item_name, brand, type, supplier_key), where supplier_key references supplier (supplier_key, supplier_type)
- branch (branch_key, branch_name, branch_type)
- location (location_key, street, city_key), where city_key references city (city_key, city, state_or_province, country)
Example of Fact Constellation
Sales fact table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales.
Shipping fact table: time_key, item_key, shipper_key, from_location, to_location; measures: dollars_cost, units_shipped.
Shared dimension tables:
- time (time_key, day, day_of_the_week, month, quarter, year)
- item (item_key, item_name, brand, type, supplier_type)
- branch (branch_key, branch_name, branch_type)
- location (location_key, street, city, province_or_state, country)
- shipper (shipper_key, shipper_name, location_key, shipper_type)
Cube Definition Syntax (BNF) in DMQL
- Cube definition (fact table):
    define cube <cube_name> [<dimension_list>]: <measure_list>
- Dimension definition (dimension table):
    define dimension <dimension_name> as (<attribute_or_subdimension_list>)
- Special case (shared dimension tables), defined the first time in a cube definition:
    define dimension <dimension_name> as <dimension_name_first_time> in cube <cube_name_first_time>
Defining Star Schema in DMQL
define cube sales_star [time, item, branch, location]:
    dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state, country)
Defining Snowflake Schema in DMQL
define cube sales_snowflake [time, item, branch, location]:
    dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier(supplier_key, supplier_type))
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city(city_key, province_or_state, country))
Defining Fact Constellation in DMQL
define cube sales [time, item, branch, location]:
    dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state, country)

define cube shipping [time, item, shipper, from_location, to_location]:
    dollar_cost = sum(cost_in_dollars), unit_shipped = count(*)
define dimension time as time in cube sales
define dimension item as item in cube sales
define dimension shipper as (shipper_key, shipper_name, location as location in cube sales, shipper_type)
define dimension from_location as location in cube sales
define dimension to_location as location in cube sales
Measures of Data Cube: Three Categories
- Distributive: the result derived by applying the function to n aggregate values is the same as that derived by applying the function to all the data without partitioning.
  - E.g., count(), sum(), min(), max()
- Algebraic: it can be computed by an algebraic function with M arguments (where M is a bounded integer), each of which is obtained by applying a distributive aggregate function.
  - E.g., avg(), min_N(), standard_deviation()
- Holistic: there is no constant bound on the storage size needed to describe a subaggregate.
  - E.g., median(), mode(), rank()
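A small sketch (plain Python lists standing in for partitions of the data, invented for this example) shows why sum() is distributive, avg() is algebraic, and median() is holistic:

```python
import statistics

partitions = [[3, 1, 4], [1, 5], [9, 2, 6]]
all_data = [x for p in partitions for x in p]

# Distributive: combining per-partition results gives the global result.
assert sum(sum(p) for p in partitions) == sum(all_data)

# Algebraic: avg() is computable from a bounded number (M = 2) of
# distributive sub-aggregates, namely sum() and count().
total = sum(sum(p) for p in partitions)
count = sum(len(p) for p in partitions)
assert total / count == statistics.mean(all_data)

# Holistic: no constant-size per-partition summary suffices for median();
# per-partition medians do not determine the global median in general.
per_part = [statistics.median(p) for p in partitions]
print(per_part, "vs global median:", statistics.median(all_data))
```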
A Concept Hierarchy: Dimension (location)
Levels: all < region < country < city < office. For example:
- all
  - Europe: Germany (Frankfurt, ...), Spain, ...
  - North_America: Canada (Vancouver: L. Chan, ...; Toronto: M. Wind, ...), Mexico, ...
View of Warehouses and Hierarchies
Specification of hierarchies:
- Schema hierarchy: day < {month < quarter; week} < year
- Set-grouping hierarchy: {1..10} < inexpensive
Multidimensional Data
- Sales volume as a function of product, month, and region.
- Dimensions: Product, Location, Time.
- Hierarchical summarization paths:
  - Product: Industry > Category > Product
  - Location: Region > Country > City > Office
  - Time: Year > Quarter > Month > Day (with Week > Day as an alternative path)
A Sample Data Cube
[Figure: a 3-D cube of sales with Date (1Qtr-4Qtr) on one axis, Product (TV, PC, VCR) on another, and Country (U.S.A., Canada, Mexico) on the third; "sum" cells along each dimension hold aggregates, e.g., the total annual sales of TVs in the U.S.A.]
Cuboids Corresponding to the Cube
- 0-D (apex) cuboid: all
- 1-D cuboids: (product), (date), (country)
- 2-D cuboids: (product, date), (product, country), (date, country)
- 3-D (base) cuboid: (product, date, country)
Browsing a Data Cube
- Visualization
- OLAP capabilities
- Interactive manipulation
Typical OLAP Operations
- Roll up (drill-up): summarize data by climbing up a hierarchy or by dimension reduction.
- Drill down (roll down): reverse of roll-up; from higher-level summary to lower-level summary or detailed data, or introducing new dimensions.
- Slice and dice: project and select.
- Pivot (rotate): reorient the cube, visualization, 3-D to a series of 2-D planes.
- Other operations:
  - Drill across: involving (across) more than one fact table.
  - Drill through: through the bottom level of the cube to its back-end relational tables (using SQL).
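As a toy illustration of these operations (a sketch assuming the pandas library; the fact data and column names are invented):

```python
import pandas as pd

# Toy fact data: (quarter, item, country) -> dollars_sold
sales = pd.DataFrame({
    "quarter":      ["Q1", "Q1", "Q2", "Q2", "Q1", "Q2"],
    "item":         ["TV", "PC", "TV", "PC", "TV", "PC"],
    "country":      ["USA", "USA", "USA", "Canada", "Canada", "Canada"],
    "dollars_sold": [100, 250, 120, 240, 80, 210],
})

# Roll-up: climb the location hierarchy (drop 'country' -> all locations).
rollup = sales.groupby(["quarter", "item"])["dollars_sold"].sum()

# Slice: select on one dimension (quarter = 'Q1').
slice_q1 = sales[sales["quarter"] == "Q1"]

# Dice: select on two or more dimensions (a sub-cube).
dice = sales[(sales["quarter"] == "Q1") & (sales["item"] == "TV")]

# Pivot: reorient the cube into a 2-D view.
pivot = sales.pivot_table(index="item", columns="country",
                          values="dollars_sold", aggfunc="sum")
print(rollup, slice_q1, dice, pivot, sep="\n\n")
```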
[Fig. 3.10: Typical OLAP operations (figure illustrating the operations listed above).]
A Star-Net Query Model
[Figure: radial dimension lines meeting at a center; each circle on a line is called a footprint.]
- Customer Orders: CONTRACTS, ORDER
- Shipping Method: AIR-EXPRESS, TRUCK
- Time: DAILY, QTRLY, ANNUALLY
- Product: PRODUCT ITEM, PRODUCT GROUP, PRODUCT LINE
- Location: CITY, COUNTRY, REGION
- Organization: SALES PERSON, DISTRICT, DIVISION
- Promotion
Chapter 3: Data Warehousing and OLAP Technology: An Overview
- What is a data warehouse?
- A multi-dimensional data model
- Data warehouse architecture
- Data warehouse implementation
- From data warehousing to data mining
Design of Data Warehouse: A Business Analysis Framework
Four views regarding the design of a data warehouse:
- Top-down view: allows selection of the relevant information necessary for the data warehouse.
- Data source view: exposes the information being captured, stored, and managed by operational systems.
- Data warehouse view: consists of fact tables and dimension tables.
- Business query view: sees the perspectives of data in the warehouse from the view of the end-user.
Data Warehouse Design Process
- Top-down, bottom-up approaches or a combination of both:
  - Top-down: starts with overall design and planning (mature). Useful where the technology is mature and well known, and where the business problems that must be solved are clear and well understood.
  - Bottom-up: starts with experiments and prototypes (rapid). Useful in the early stage of business modeling and technology development; it allows an organization to move forward at considerably less expense and to evaluate the benefits of the technology before making significant commitments.
- From a software engineering point of view:
  - Waterfall: structured and systematic analysis at each step before proceeding to the next.
  - Spiral: rapid generation of increasingly functional systems, short turn-around time, quick turn-around.
- Typical data warehouse design process:
  - Choose a business process to model, e.g., orders, invoices, etc.
  - Choose the grain (atomic level of data) of the business process.
  - Choose the dimensions that will apply to each fact table record.
  - Choose the measures that will populate each fact table record.
Process Architecture
[Figure: operational data and external data flow through the load manager into the warehouse; the warehouse manager maintains detailed information, summary information, and metadata; the query manager delivers information to end-user tools such as data dippers and OLAP tools.]
Data Warehouse: A Multi-Tiered Architecture
[Figure: Data sources (operational DBs, other sources) feed an Extract / Transform / Load / Refresh stage; Data storage comprises the data warehouse and data marts, with metadata and a monitor & integrator; the OLAP engine (OLAP server) serves Front-end tools for analysis, query/reports, and data mining.]
Data Warehouse: A Multi-Tiered Architecture
1. The transaction or other operational database(s) from which the data warehouse is populated.
2. A process to extract data from this database or these databases and bring it into the DWH. This process must often transform the data into the database structure and internal formats of the DWH.
3. A process to cleanse the data to make sure it is of sufficient quality for the decision-making purposes for which it will be used.
4. A process to load the cleansed data into the DWH database. The four processes from extraction through loading are often referred to collectively as data staging.
5. A process to create any desired summaries of the data: pre-calculated totals, averages, and the like, which are expected to be requested often.
6. Metadata, "data about data": it is useful to have a central information repository to tell users what is in the DWH, where it came from, who is in charge of it, and more.
7. The DWH database itself. This database contains the detailed and summary data of the DWH.
8. Query tools, which usually include an end-user interface for posing questions to the database, in a process called OLAP. They may also include automated tools for uncovering patterns in the data, often referred to as data mining.
9. The user or users for whom the data warehouse exists and without whom it would be useless.
DWH Components
1. Data sourcing, cleanup, transformation & migration tools
2. Metadata repository
3. DWH database technology
4. Data marts
5. Data query, reporting, analysis and mining tools
6. DWH administration & management
7. Information delivery system
8. Operational data store
Building a DWH
- Business considerations:
  (a) Approach
  (b) Organizational issues
- Design considerations:
  (a) Data content
  (b) Metadata
  (c) Data distribution
  (d) Tools
  (e) Performance considerations
  (f) Nine decisions in the design of a DWH
Building a DWH
- Technical considerations:
  1. Hardware platforms
  2. DWH & DBMS specialization
  3. Communication infrastructure
- Implementation considerations:
  1. Access tools
  2. Data extraction, cleanup, transformation & migration
  3. Data placement strategies
  4. Metadata
  5. User sophistication levels
Three Data Warehouse Models
- Enterprise warehouse: collects all of the information about subjects spanning the entire organization.
- Data mart: a subset of corporate-wide data that is of value to a specific group of users. Its scope is confined to specific, selected groups, such as a marketing data mart.
  - Independent vs. dependent (populated directly from the warehouse) data marts.
- Virtual warehouse: a set of views over operational databases; only some of the possible summary views may be materialized.
Data Warehouse Development: A Recommended Approach
[Figure: define a high-level corporate data model; through model refinement, build data marts and an enterprise data warehouse; these evolve into distributed data marts and, ultimately, a multi-tier data warehouse.]
Data Warehouse Back-End Tools and Utilities
- Data extraction: get data from multiple, heterogeneous, and external sources.
- Data cleaning: detect errors in the data and rectify them when possible.
- Data transformation: convert data from legacy or host format to warehouse format.
- Load: sort, summarize, consolidate, compute views, check integrity, and build indices and partitions.
- Refresh: propagate the updates from the data sources to the warehouse.
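A highly simplified sketch of such a back-end pipeline (every function and field name here is hypothetical; real tools add scheduling, monitoring, and error handling):

```python
def extract(sources):
    """Extraction: pull raw records from multiple, heterogeneous sources."""
    for source in sources:
        yield from source()

def clean(record):
    """Cleaning: detect simple errors and rectify them when possible."""
    record["city"] = record.get("city", "").strip().title() or None
    return record if record.get("amount") is not None else None

def transform(record):
    """Transformation: convert the legacy format into the warehouse format."""
    return {"city": record["city"], "dollars_sold": float(record["amount"])}

def load(records, warehouse):
    """Load: sort/consolidate and append into the warehouse store."""
    warehouse.extend(sorted(records, key=lambda r: r["city"] or ""))

legacy_source = lambda: iter([{"city": " delhi ", "amount": "10.5"},
                              {"city": "alwar", "amount": "3"}])
warehouse = []
cleaned = (clean(r) for r in extract([legacy_source]))
load([transform(r) for r in cleaned if r is not None], warehouse)
print(warehouse)
```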
Metadata Repository
Metadata is the data defining warehouse objects. It stores:
- A description of the structure of the data warehouse: schema, views, dimensions, hierarchies, derived data definitions, data mart locations and contents.
- Operational metadata: data lineage (history of migrated data and its transformation path), currency of data (active, archived, or purged), monitoring information (warehouse usage statistics, error reports, audit trails).
- The algorithms used for summarization.
- The mapping from the operational environment to the data warehouse.
- Data related to system performance: warehouse schema, view and derived data definitions.
- Business data: business terms and definitions, ownership of data, charging policies.
Data Marting
- A data mart is a subset of an organizational data store, usually oriented to a specific purpose or major data subject, that may be distributed to support business needs. Data marts are analytical data stores designed to focus on specific business functions for a specific community within an organization. Data marts are often derived from subsets of data in a data warehouse, though in the bottom-up data warehouse design methodology the data warehouse is created from the union of organizational data marts.
- A data mart is a repository of data gathered from operational data and other sources that is designed to serve a particular community of knowledge workers.
- Data marts are created for the following reasons:
  - To speed up queries by reducing the volume of data to be scanned.
  - To structure data in a form suitable for a user access tool.
  - To partition data in order to impose access control strategies.
  - To segment data onto different hardware platforms.
Data Marting
- When deciding whether to create data marts, identify whether:
  - There is a natural functional split within the organization.
  - There is a natural split of the data.
  - The proposed user access tool uses its own database structures.
  - Any infrastructure issues dictate the use of data marts.
  - There are any access control issues that require data marts to provide Chinese walls.
- Costs of data marting: data marting strategies incur costs in the following areas:
  - Hardware and software
  - Network access
  - Time-window constraints
Aggregations
- Data aggregation is an essential component of a decision support DWH. It allows us to provide cost-effective query performance by avoiding the need to repeatedly process substantial volumes of detailed data.
- Aggregation strategies rely on the fact that most common queries will analyze either a subset or an aggregation of the detailed data.
- The appropriate aggregation will substantially reduce the processing time required to run a query, at the cost of processing and storing the intermediate results.
- Aggregations make trends more apparent by providing an overview of the whole picture rather than small parts of the picture.
Aggregations
- Designing summary tables: the primary purpose of summary tables is to cut down the time it takes to execute queries; the objective is to minimize the volume of data being scanned, by storing as many partial results as possible.
- Summary tables are designed using the following steps (see the sketch after this list):
  - Determine which dimensions are aggregated.
  - Determine the aggregation of multiple values.
  - Aggregate multiple facts into the summary table.
  - Determine the level of aggregation.
  - Determine the extent of embedding dimension data in the summary.
  - Design time into the summary table.
  - Index the summary table.
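A minimal sketch of the idea (assuming pandas; the detail data and column names are invented): the summary table is built once, storing partial results, and subsequent queries scan it instead of the detail rows.

```python
import pandas as pd

detail = pd.DataFrame({
    "month":  ["Jan", "Jan", "Feb", "Feb", "Feb"],
    "region": ["N", "S", "N", "S", "S"],
    "sales":  [10.0, 7.5, 12.0, 6.0, 9.0],
})

# Build the summary table once, storing partial results: per-group sums
# and row counts (counts let averages be combined correctly later).
summary = (detail.groupby(["month", "region"])["sales"]
                 .agg(total="sum", n="count")
                 .reset_index())

# Queries now scan the small summary table instead of the detail data.
jan_total = summary.loc[summary["month"] == "Jan", "total"].sum()
print(summary, "\nJan total:", jan_total)
```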
OLAP Server Architectures
- Relational OLAP (ROLAP)
  - Uses a relational or extended-relational DBMS to store and manage warehouse data, plus OLAP middleware.
  - Includes optimization of the DBMS backend, implementation of aggregation navigation logic, and additional tools and services.
  - Greater scalability.
- Multidimensional OLAP (MOLAP)
  - Sparse array-based multidimensional storage engine.
  - Fast indexing to pre-computed summarized data.
- Hybrid OLAP (HOLAP) (e.g., Microsoft SQL Server)
  - Flexibility, e.g., low level: relational; high level: array.
- Specialized SQL servers (e.g., Redbricks)
  - Specialized support for SQL queries over star/snowflake schemas.
OLAP Guidelines (Dr. E. F. Codd's Rules)
Dr. E. F. Codd, the "father" of the relational model, formulated a list of 12 guidelines and requirements as the basis for selecting OLAP systems:
1. Multidimensional conceptual view
2. Transparency
3. Accessibility
4. Consistent reporting performance
5. Client/server architecture
6. Generic dimensionality
7. Dynamic sparse matrix handling
8. Multi-user support
9. Unrestricted cross-dimensional operations
10. Intuitive data manipulation
11. Flexible reporting
12. Unlimited dimensions & aggregation levels
MOLAP
In the OLAP world, there are mainly two different types: Multidimensional OLAP (MOLAP) and Relational OLAP (ROLAP). Hybrid OLAP (HOLAP) refers to technologies that combine MOLAP and ROLAP.

MOLAP is the more traditional way of OLAP analysis. In MOLAP, data is stored in a multidimensional cube. The storage is not in the relational database, but in proprietary formats.

Advantages:
- Excellent performance: MOLAP cubes are built for fast data retrieval and are optimal for slicing and dicing operations.
- They can also perform complex calculations. All calculations have been pre-generated when the cube is created; hence, complex calculations are not only doable, but they return quickly.

Disadvantages:
- It is limited in the amount of data it can handle. Because all calculations are performed when the cube is built, it is not possible to include a large amount of data in the cube itself. This is not to say that the data in the cube cannot be derived from a large amount of data; indeed, this is possible. But in that case, only summary-level information will be included in the cube itself.
- It requires an additional investment. Cube technology is often proprietary and does not already exist in the organization.
ROLAP
This methodology relies on manipulating the data stored in the relational database to give the appearance of traditional OLAP's slicing and dicing functionality. In essence, each action of slicing and dicing is equivalent to adding a "WHERE" clause to the SQL statement (see the sketch below).

Advantages:
- It can handle large amounts of data. The data size limitation of ROLAP technology is the data size limitation of the underlying relational database; in other words, ROLAP itself places no limitation on data amount.
- It can leverage functionalities inherent in the relational database. Relational databases often come with a host of functionalities, and ROLAP technologies, since they sit on top of the relational database, can leverage them.

Disadvantages:
- Performance can be slow. Because each ROLAP report is essentially a SQL query (or multiple SQL queries) against the relational database, the query time can be long if the underlying data size is large.
- It is limited by SQL functionalities. Because ROLAP technology mainly relies on generating SQL statements to query the relational database, and SQL statements do not fit all needs (for example, it is difficult to perform complex calculations using SQL), ROLAP technologies are traditionally limited by what SQL can do. ROLAP vendors have mitigated this risk by building complex out-of-the-box functions into the tool, as well as the ability to let users define their own functions.

HOLAP
HOLAP technologies attempt to combine the advantages of MOLAP and ROLAP. For summary-type information, HOLAP leverages cube technology for faster performance. When detailed information is needed, HOLAP can "drill through" from the cube into the underlying relational data.
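To illustrate the ROLAP point above, that each slice/dice maps to a WHERE clause, here is a toy generator of ROLAP-style SQL (the table and column names are hypothetical; real tools also escape values and handle hierarchies):

```python
def dice_to_sql(fact_table, measure, selections):
    """Translate a dice (multi-dimension selection) into a SQL string.

    selections: dict mapping dimension column -> required value; each
    pair becomes one predicate in the WHERE clause. Illustration only:
    values are interpolated without escaping.
    """
    where = " AND ".join(f"{col} = '{val}'" for col, val in selections.items())
    group_by = ", ".join(selections)
    return (f"SELECT {group_by}, SUM({measure}) AS total "
            f"FROM {fact_table} WHERE {where} GROUP BY {group_by}")

print(dice_to_sql("sales_fact", "dollars_sold",
                  {"quarter": "Q1", "country": "Canada"}))
```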
The FASMI Test
- FAST means that the system is targeted to deliver most responses to users within about five seconds, with the simplest analyses taking no more than one second and very few taking more than 20 seconds. Independent research in the Netherlands has shown that end-users assume a process has failed if results are not received within 30 seconds, and they are apt to hit 'Alt+Ctrl+Delete' unless the system warns them that the report will take longer. Even if they have been warned that it will take significantly longer, users are likely to get distracted and lose their chain of thought, so the quality of analysis suffers. This speed is not easy to achieve with large amounts of data, particularly if on-the-fly and ad hoc calculations are required. Vendors resort to a wide variety of techniques to achieve this goal, including specialized forms of data storage, extensive pre-calculations and specific hardware requirements, but we do not think any products are yet fully optimized, so we expect this to be an area of developing technology. In particular, the full pre-calculation approach fails with very large, sparse applications as the databases simply get too large (the database explosion problem), whereas doing everything on-the-fly is much too slow with large databases, even if exotic hardware is used. Even though it may seem miraculous at first if reports that previously took days now take only minutes, users soon get bored of waiting, and the project will be much less successful than if it had delivered a near-instantaneous response, even at the cost of less detailed analysis. The OLAP Survey has found that slow query response is consistently the most often-cited technical problem with OLAP products, so too many deployments are clearly still failing to pass this test.
- ANALYSIS means that the system can cope with any business logic and statistical analysis that is relevant for the application and the user, and keep it easy enough for the target user. Although some pre-programming may be needed, we do not think it acceptable if all application definitions have to be done using a professional 4GL. It is certainly necessary to allow the user to define new ad hoc calculations as part of the analysis and to report on the data in any desired way, without having to program, so we exclude products (like Oracle Discoverer) that do not allow adequate end-user oriented calculation flexibility. We do not mind whether this analysis is done in the vendor's own tools or in a linked external product such as a spreadsheet, simply that all the required analysis functionality be provided in an intuitive manner for the target users. This could include specific features like time series analysis, cost allocations, currency translation, goal seeking, ad hoc multidimensional structural changes, non-procedural modeling, exception alerting, data mining and other application-dependent features. These capabilities differ widely between products, depending on their target markets.
The FASMI Test
- SHARED means that the system implements all the security requirements for confidentiality (possibly down to cell level) and, if multiple write access is needed, concurrent update locking at an appropriate level. Not all applications need users to write data back, but for the growing number that do, the system should be able to handle multiple updates in a timely, secure manner. This is a major area of weakness in many OLAP products, which tend to assume that all OLAP applications will be read-only, with simplistic security controls. Even products with multi-user read-write often have crude security models; an example is Microsoft OLAP Services.
- MULTIDIMENSIONAL is our key requirement. If we had to pick a one-word definition of OLAP, this is it. The system must provide a multidimensional conceptual view of the data, including full support for hierarchies and multiple hierarchies, as this is certainly the most logical way to analyze businesses and organizations. We are not setting up a specific minimum number of dimensions that must be handled, as it is too application-dependent and most products seem to have enough for their target markets. Again, we do not specify what underlying database technology should be used, providing that the user gets a truly multidimensional conceptual view.
- INFORMATION is all of the data and derived information needed, wherever it is and however much is relevant for the application. We are measuring the capacity of various products in terms of how much input data they can handle, not how many gigabytes they take to store it. The capacities of the products differ greatly: the largest OLAP products can hold at least a thousand times as much data as the smallest. There are many considerations here, including data duplication, RAM required, disk space utilization, performance, integration with data warehouses and the like.
Cube Operation
- Cube definition and computation in DMQL:
    define cube sales [item, city, year]: sum(sales_in_dollars)
    compute cube sales
- Transformed into a SQL-like language (with a new operator cube by, introduced by Gray et al.'96):
    SELECT item, city, year, SUM(amount)
    FROM SALES
    CUBE BY item, city, year
- This needs to compute the following group-bys: (date, product, customer), (date, product), (date, customer), (product, customer), (date), (product), (customer), ()
- [Figure: the lattice of group-bys for (city, item, year), from the apex () through (city), (item), (year) and (city, item), (city, year), (item, year) down to (city, item, year).]
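A sketch of what `compute cube` materializes (assuming pandas; the data is a toy example): one group-by per subset of the dimensions, 2^n group-bys in total:

```python
from itertools import combinations
import pandas as pd

sales = pd.DataFrame({
    "item":   ["TV", "TV", "PC"],
    "city":   ["Delhi", "Alwar", "Delhi"],
    "year":   [2003, 2004, 2004],
    "amount": [100.0, 80.0, 250.0],
})

dims = ["item", "city", "year"]
cube = {(): sales["amount"].sum()}            # the apex cuboid: grand total
for k in range(1, len(dims) + 1):
    for subset in combinations(dims, k):
        cube[subset] = sales.groupby(list(subset))["amount"].sum()

for group_by in cube:                          # the 2^3 = 8 group-bys
    print(group_by)
```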
Efficient Processing of OLAP Queries
- Determine which operations should be performed on the available cuboids:
  - Transform drill, roll, etc. into corresponding SQL and/or OLAP operations, e.g., dice = selection + projection.
- Determine to which materialized cuboid(s) the relevant operations should be applied:
  - Let the query to be processed be on {brand, province_or_state} with the condition "year = 2004", and suppose there are 4 materialized cuboids available:
    1) {year, item_name, city}
    2) {year, brand, country}
    3) {year, brand, province_or_state}
    4) {item_name, province_or_state} where year = 2004
    Which should be selected to process the query? (See the sketch below.)
- Explore indexing structures and compressed vs. dense array structures in MOLAP.
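One way to reason about the selection (a sketch, assuming the usual roll-up rule: a materialized cuboid can answer a query if it stores every queried dimension at the same or a finer level of the concept hierarchy):

```python
# Hierarchies for the example's dimensions, finest level first.
hierarchies = {
    "item":     ["item_name", "brand"],
    "location": ["city", "province_or_state", "country"],
    "time":     ["year"],
}

def level(attr):
    """Return (dimension, level index) for an attribute name."""
    for dim, levels in hierarchies.items():
        if attr in levels:
            return dim, levels.index(attr)
    raise KeyError(attr)

def can_answer(cuboid, query_attrs):
    """True iff the cuboid stores each queried dimension at the same
    or a finer (lower-index) level than the query needs."""
    stored = dict(level(a) for a in cuboid)
    return all(dim in stored and stored[dim] <= needed
               for dim, needed in map(level, query_attrs))

query = ["brand", "province_or_state", "year"]   # with year = 2004
cuboids = {
    1: ["year", "item_name", "city"],
    2: ["year", "brand", "country"],
    3: ["year", "brand", "province_or_state"],
    4: ["item_name", "province_or_state"],       # materialized with
}                                                 # year = 2004 applied
for cid, attrs in cuboids.items():
    # Cuboid 4 already has the year selection baked in, so drop it there.
    needed = [a for a in query if not (cid == 4 and a == "year")]
    print(cid, can_answer(attrs, needed))         # 1, 3, 4 can answer
```

Among the candidates, the coarsest cuboid that still answers the query (here, cuboid 3) usually scans the fewest records, which is why it would typically be chosen.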
Data Warehouse Usage
Three kinds of data warehouse applications:
- Information processing: supports querying, basic statistical analysis, and reporting using crosstabs, tables, charts and graphs.
- Analytical processing: multidimensional analysis of data warehouse data; supports basic OLAP operations: slice-dice, drilling, pivoting.
- Data mining: knowledge discovery from hidden patterns; supports associations, constructing analytical models, performing classification and prediction, and presenting the mining results using visualization tools.
From On-Line Analytical Processing (OLAP) to On-Line Analytical Mining (OLAM)
Why online analytical mining?
- High quality of data in data warehouses: a DW contains integrated, consistent, cleaned data.
- Available information processing infrastructure surrounding data warehouses: ODBC, OLEDB, Web accessing, service facilities, reporting and OLAP tools.
- OLAP-based exploratory data analysis: mining with drilling, dicing, pivoting, etc.
- On-line selection of data mining functions: integration and swapping of multiple mining functions, algorithms, and tasks.
An OLAM System Architecture
[Figure, from top to bottom:]
- Layer 4 (User Interface): user GUI API, taking in the mining query and returning the mining result.
- Layer 3 (OLAP/OLAM): an OLAM engine alongside an OLAP engine, both accessed through a data cube API.
- Layer 2 (Multidimensional database): the MDDB together with its metadata.
- Layer 1 (Data repository): databases and the data warehouse, populated through filtering & integration, data cleaning, and data integration via a database API.
Data Warehouse Security
- Responsibility and confidentiality: the data warehouse contains confidential and sensitive university data. In order to use its data, you must have proper authorization. Your authorization means that you have the authority to use the data and the responsibility to share stewardship of the data with the other users of the collection.
- Once authorized, you can access the data that you need to do your job. All authorized users are cautioned, however, that they are entrusted to use the data they retrieve from the warehouse with care. Confidential data should not be released to others except to those with a "legitimate need to know." Remember that you should never share Business Objects queries with other users with the data intact: send the query without the data (see the documentation on sending and saving Business Objects documents).
- Querying data with security restrictions: if you execute a query requesting data that you are not authorized to access, you will get results which may be incomplete, because they are missing the data you are not allowed to access.
- If your authorization is limited to a specific set of data, be sure when querying the data that your record selection conditions include your security restrictions. For example, if you are authorized to access just the data for a particular department, one of your record selection conditions should state something like "If Organization = 'My Organization'", where My Organization is the code of your department. This documents why the query gets the results it does, and also helps the query run faster.
SECURITY
- A DWH is by nature an open, accessible system: its aim is generally to make large amounts of data easily accessible to the users, thereby enabling the users to extract information about the business as a whole.
- It is important to establish early any security and audit requirements that will be placed on the DWH.
- Clearly, adding security will affect performance, because further checks require CPU cycles and time to perform.
- Requirements: security can affect many different parts of the DWH, such as:
  - User access, which can be managed by:
    - Data classification: by level of security required, by data sensitivity, by job function.
    - User classification: top-down company hierarchy (department, section, group), or by role.
  - Data load.
  - Legal requirements: it is vital to establish any legal requirements (law) on the data being stored.
  - Audit requirements: such as connections, disconnections, data access, data change.
  - Network requirements (routes).
SECURITY
- Data movement: the type of file to be moved and the manner in which the file has to be moved (flat file, encrypted/decrypted, summary generation, results in temporary tables).
- Documentation: it is important to get all the security and audit requirements clearly documented, as this will be needed as part of any cost justification. This document should contain all the information gathered on:
  - Data classification
  - User classification
  - Network requirements
  - Data movement & storage requirements
  - All auditable actions
- Query generation.
User Access Hierarchy
[Figure: an organization chart for "Data Warehouse Inc." with Sales, Marketing, and Administrators branches; each branch contains analysts, senior analysts, and administrators, with access scoped to detailed sales data, summarized sales data, reference data, and detailed customer data.]
Backup and Recovery
- Backup is one of the most important regular operations carried out on any system.
- It is especially important in the DWH environment because of the volumes of data involved and the complexity of the system.
- Types of backup:
  - Complete backup: the entire database is backed up at the same time. This includes all database data files, the control files and the journal files.
  - Partial backup: any backup that is not complete.
  - Cold backup: a backup taken while the database is completely shut down. In a multi-instance environment, all instances of that database must be shut down.
  - Hot backup: any backup that is not cold is considered to be hot.
  - Online backup: a synonym for hot backup.
Backup and Recovery
- Hardware: when choosing a backup strategy, one of the first decisions is which hardware to use. The choice depends on factors such as the speed at which a backup or restore can be processed, the hardware connection, network bandwidth, the backup software used, and the speed of the server's I/O subsystem and components.
- Tape technology:
  - Tape media (reliability, life, cost of the tape medium/drive, scalability).
  - Standalone tape drives (connected directly to servers, as a network-available device, or remotely to another machine).
  - Tape stackers: a method of loading multiple tapes into a single tape drive. Only one tape can be accessed at a time, but the stacker will automatically dismount the current tape when it has finished with it and load the next tape.
  - Tape silos: large tape storage facilities which can store and manage thousands of tapes. These are generally sealed environments with robotic arms for manipulating the tapes.
- Disk-to-disk backups: the backup is performed to disk rather than to tape.
Backup and Recovery
- Software (Omniback II, ADSM, Alexandria, Epoch, Networker):
  - Performance: such as degree of parallelism, I/O bottlenecks.
  - Requirements: when considering which backup package to use, it is important to check the following criteria:
    - What degree of parallelism is possible?
    - How scalable is the product as tape drives are added?
    - What platforms are supported by the package?
    - What tape drives and tape media are supported by the package?
    - Does the package support easy access to information about tape contents?
- Backup strategies:
  - Effect on database design: such as DB partitioning strategies.
  - Design strategies: the main aim should be to reduce the amount of data that has to be backed up on a regular basis, e.g., read-only tablespaces, automation of backup.
Backup and Recovery
- Recovery strategies depend on the kind of failure and consist of a set of failure scenarios and their resolutions. Each of the following failure scenarios needs to be catered for in the recovery steps and must be documented:
  1. Instance failure
  2. Media failure
  3. Loss or damage of a tablespace or data file
  4. Loss or damage of a table
  5. Loss or damage of the control file
  6. Failure during data movement
- A plan must be made for the following data movement scenarios:
  1. Data load into staging tables
  2. Movement from staging to fact tables
  3. Partition roll-up into larger partitions
  4. Creation of aggregations
- Testing the strategy: backup and recovery tests need to be carried out on a regular basis, but it is advisable to avoid performing tests at busy times such as the end of the year, and to run tests at times of low load in the business year.
Backup and Recovery
- Disaster recovery: a disaster can be defined as a situation where a major site loss has taken place, or the site is destroyed or damaged beyond immediate repair. It is advisable to decide the criteria for making that judgment before any such situation occurs, as attempting to make the decision in the middle of a crisis is likely to lead to problems. Deal with the following requirements for disaster recovery:
  - Planning disaster recovery with the minimal system required.
  - Replacement / standby machine.
  - Sufficient tape and disk capacity.
  - Communication links to users.
  - Communication links to data sources.
  - Copies of all relevant pieces of software.
  - Backup of the database.
  - Application-aware systems administration and operations staff.
Tuning the Data Warehouse
Tuning the data warehouse deals with measures such as:
- Average query response times
- Scan rates
- I/O throughput rates
- Time used per query (fixed or ad-hoc)
- Number of users in the group
- Whether users run ad-hoc queries frequently, occasionally at unknown intervals, or at regular or predictable times
- The average / maximum size of query they tend to run
- The peak time of daily usage
- Memory usage per process
The more unpredictable the load, the larger the queries, or the greater the number of users, the bigger the tuning task.
Testing the Data Warehouse
- Three levels of testing:
  - Unit testing: each development unit is tested on its own.
  - Integration testing: the separate development units that make up a component of the DWH application are tested to ensure that they work together.
  - System testing: the whole DWH application is tested together. The components are tested to ensure that they work properly together and that they don't cause system bottlenecks.
- Developing the test plan:
  - Test schedule: metrics for estimating the amount of time required for testing.
  - Data load:
    - How will the data be generated?
    - Where will the data be generated?
    - How will the generated data be loaded?
    - Will the data be correctly skewed?
Testing the Data Warehouse
- Testing backup recovery: to test the recovery of a lost data file, a data file should actually be deleted and recovered from backup. Check that the backup database is working correctly and actually tracks what has been backed up and where it has been backed up to. If that works, then check that the information can be retrieved. Check that all the backup hardware is working: tapes, tape drives, controllers, etc. Each of the scenarios indicated below needs to be catered for:
  1. Instance failure
  2. Media failure
  3. Loss or damage of a tablespace or data file
  4. Loss or damage of a table
  5. Loss or damage of the control file
  6. Failure during data movement
- Testing the operational environment: testing of the DWH operational environment is another key set of tests that will have to be performed. The following aspects need to be tested:
  - Security: document what is and is not allowed, and devise a test for each disallowed operation.
  - Disk configuration: test thoroughly to identify any potential I/O bottlenecks.
Testing the Data Warehouse
- Scheduler: given the potential for many of the processes in the DWH to swamp the system resources if allowed to run at the wrong time, scheduling control of these processes is essential to the success of the DWH.
- Management tools (event / system / configuration / backup recovery / database).
- Database management.
- Testing the database can be broken down into three separate sets of tests:
  - Testing the database manager and monitoring tools (creation, running & management of the test database).
  - Testing database features (querying, creating indexes, loading data in parallel).
  - Testing database performance (test queries with different aggregations, index strategies, degrees of parallelism, different-sized data sets).
- Testing the application.
- Logistics of the test (DWH application code, day-to-day operational procedures, backup recovery strategy, query performance, management & monitoring tools, scheduling software).
Chapter 3: Data Warehousing and OLAP Technology: An Overview
- What is a data warehouse?
- A multi-dimensional data model
- Data warehouse architecture
- Data warehouse implementation
- From data warehousing to data mining
- Summary
References (I)
- S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R. Ramakrishnan, and S. Sarawagi. On the computation of multidimensional aggregates. VLDB'96.
- D. Agrawal, A. E. Abbadi, A. Singh, and T. Yurek. Efficient view maintenance in data warehouses. SIGMOD'97.
- R. Agrawal, A. Gupta, and S. Sarawagi. Modeling multidimensional databases. ICDE'97.
- S. Chaudhuri and U. Dayal. An overview of data warehousing and OLAP technology. ACM SIGMOD Record, 26:65-74, 1997.
- E. F. Codd, S. B. Codd, and C. T. Salley. Beyond decision support. Computer World, 27, July 1993.
- J. Gray, et al. Data cube: A relational aggregation operator generalizing group-by, cross-tab and sub-totals. Data Mining and Knowledge Discovery, 1:29-54, 1997.
- A. Gupta and I. S. Mumick. Materialized Views: Techniques, Implementations, and Applications. MIT Press, 1999.
- J. Han. Towards on-line analytical mining in large databases. ACM SIGMOD Record, 27:97-107, 1998.
- V. Harinarayan, A. Rajaraman, and J. D. Ullman. Implementing data cubes efficiently. SIGMOD'96.
References (II)
- C. Imhoff, N. Galemmo, and J. G. Geiger. Mastering Data Warehouse Design: Relational and Dimensional Techniques. John Wiley, 2003.
- W. H. Inmon. Building the Data Warehouse. John Wiley, 1996.
- R. Kimball and M. Ross. The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling. 2ed. John Wiley, 2002.
- P. O'Neil and D. Quass. Improved query performance with variant indexes. SIGMOD'97.
- Microsoft. OLEDB for OLAP programmer's reference version 1.0. http://www.microsoft.com/data/oledb/olap, 1998.
- A. Shoshani. OLAP and statistical databases: Similarities and differences. PODS'00.
- S. Sarawagi and M. Stonebraker. Efficient organization of large multidimensional arrays. ICDE'94.
- OLAP Council. MDAPI specification version 2.0. http://www.olapcouncil.org/research/apily.htm, 1998.
- E. Thomsen. OLAP Solutions: Building Multidimensional Information Systems. John Wiley, 1997.
- P. Valduriez. Join indices. ACM Trans. Database Systems, 12:218-246, 1987.
- J. Widom. Research problems in data warehousing. CIKM'95.