Data Mining: Data Quality, Missing Values Imputation using Mean, Median and the k-Nearest Neighbor Approach, and Distance Measures

Page 1:

Data Mining

• Data quality
• Missing values imputation using Mean, Median and the k-Nearest Neighbor approach
• Distance Measure

Page 2:

Data Quality

• Data quality is a major concern in Data Mining and Knowledge Discovery tasks.

• Why: Almost all Data Mining algorithms induce knowledge strictly from data.

• The quality of the extracted knowledge depends heavily on the quality of the data.

• There are two main problems in data quality:
– Missing data: the data is not present.
– Noisy data: the data is present but not correct.

• Sources of missing/noisy data:
– Hardware failure.
– Data transmission error.
– Data entry problem.
– Refusal of respondents to answer certain questions.

Page 3:

Effect of Noisy Data on Results Accuracy

Testing data (actual data):

age      income    student   buys_computer
<=30     high      no        ?
>40      medium    yes       ?
31…40    medium    yes       ?

Training data:

age      income    student   buys_computer
<=30     high      yes       yes
<=30     high      no        yes
>40      medium    yes       no
>40      medium    no        no
>40      low       yes       yes
31…40    ?         no        yes
31…40    medium    yes       yes


• If ‘age <= 30’ and income = ‘high’ then buys_computer = ‘yes’

• If ‘age > 40’ and income = ‘medium’ then buys_computer = ‘no’

Discover only those rules whose support (frequency) is >= 2.

Due to the missing value in the training dataset, the prediction accuracy decreases to 66.7%.

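The support counts can be checked directly. Below is a minimal Python sketch (not part of the original slides; the record layout is my own) that counts the support of each candidate rule antecedent in the training table above; the missing income value is exactly what keeps the rule covering the third test record below the support threshold.

```python
# Minimal sketch: counting rule support in the training data above to see
# why the third test record goes unpredicted.
training = [
    ("<=30",  "high",   "yes", "yes"),
    ("<=30",  "high",   "no",  "yes"),
    (">40",   "medium", "yes", "no"),
    (">40",   "medium", "no",  "no"),
    (">40",   "low",    "yes", "yes"),
    ("31…40", None,     "no",  "yes"),   # income is missing in this record
    ("31…40", "medium", "yes", "yes"),
]

def support(age, income):
    """Count training records matching the rule antecedent (age, income)."""
    return sum(1 for a, i, s, c in training if a == age and i == income)

print(support("<=30", "high"))     # 2 -> rule kept (support >= 2)
print(support(">40", "medium"))    # 2 -> rule kept
print(support("31…40", "medium"))  # 1 -> rule discarded; it would be 2 if the
                                   # missing income value were "medium"
```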

Page 4:

Imputation of Missing Data (Basic)

• Imputation is a term that denotes a procedure that replaces the missing values in a dataset with some plausible values, i.e. by considering the relationships among the correlated attributes of the dataset.

Attribute 1   Attribute 2   Attribute 3   Attribute 4
20            cool          high          false
?             cool          high          true
20            cool          high          true
20            mild          low           false
30            cool          normal        false
10            mild          high          true

If we consider only {attribute#2}, the value "cool" appears in 4 records; among the 3 of them whose Attribute 1 value is known, the values are 20, 20 and 30.

Probability of imputing value (20) = 66.7%

Probability of imputing value (30) = 33.3%

Page 5:

Imputation of Missing Data (Basic)

Attribute 1   Attribute 2   Attribute 3   Attribute 4
20            cool          high          false
?             cool          high          true
20            cool          high          true
20            mild          low           false
30            cool          normal        false
10            mild          high          true

For {attribute#4} the value “true” appears in 3 records

Probability of Imputing value (20) = 50%

Probability of Imputing value (10) = 50%


For {attribute#2, attribute#3}, the value pair {"cool", "high"} appears in only 2 records with a known Attribute 1 value (both 20).

Probability of Imputing value (20) = 100%
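A small Python sketch of this basic idea (my own illustration, not code from the slides; the column names a1–a4 are made up): collect the records that match the incomplete record on the chosen attributes, and turn the counts of their known Attribute 1 values into imputation probabilities.

```python
from collections import Counter

# The table above, with None marking the missing Attribute 1 value.
data = [
    {"a1": 20,   "a2": "cool", "a3": "high",   "a4": False},
    {"a1": None, "a2": "cool", "a3": "high",   "a4": True},   # missing a1
    {"a1": 20,   "a2": "cool", "a3": "high",   "a4": True},
    {"a1": 20,   "a2": "mild", "a3": "low",    "a4": False},
    {"a1": 30,   "a2": "cool", "a3": "normal", "a4": False},
    {"a1": 10,   "a2": "mild", "a3": "high",   "a4": True},
]

def candidate_probabilities(incomplete, match_on, target="a1"):
    """Distribution of the target attribute over records that agree with
    the incomplete record on all attributes in match_on."""
    matches = [r[target] for r in data
               if r[target] is not None
               and all(r[k] == incomplete[k] for k in match_on)]
    counts = Counter(matches)
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

missing = data[1]
print(candidate_probabilities(missing, ["a2"]))        # match on Attribute 2 only
print(candidate_probabilities(missing, ["a4"]))        # {20: 0.5, 10: 0.5}
print(candidate_probabilities(missing, ["a2", "a3"]))  # {20: 1.0}
```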

Page 6:

Measuring the Central Tendency

• Mean (algebraic measure):

  $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ (sample mean), and $\mu = \frac{\sum x}{N}$ (population mean)

  – Weighted arithmetic mean:

    $\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$

  – Trimmed mean: chopping extreme values

• Median: a holistic measure

  – Middle value if there is an odd number of values, or the average of the middle two values otherwise

  – Estimated by interpolation (for grouped data):

    $\text{median} = L_1 + \left(\frac{n/2 - (\sum \text{freq})_l}{\text{freq}_{\text{median}}}\right) c$

    where $L_1$ is the lower boundary of the median interval, $(\sum \text{freq})_l$ the total frequency of the intervals below it, $\text{freq}_{\text{median}}$ its frequency, and $c$ its width

• Mode

  – Value that occurs most frequently in the data

  – Unimodal, bimodal, trimodal

  – Empirical formula: $\text{mean} - \text{mode} \approx 3 \times (\text{mean} - \text{median})$
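A small Python illustration (not part of the slides) of these measures; the values reuse the price data from the binning example later in the deck, and the weights are made up.

```python
import statistics

values  = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
weights = [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1]       # hypothetical weights

mean = sum(values) / len(values)
weighted_mean = sum(w * x for w, x in zip(weights, values)) / sum(weights)
median = statistics.median(values)   # average of the two middle values here
mode = statistics.mode(values)       # most frequent value -> 21

print(mean, weighted_mean, median, mode)
```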

Page 7:

Symmetric vs. Skewed Data

• Median, mean and mode of symmetric, positively skewed and negatively skewed data: for symmetric data the three coincide; for positively skewed data mode < median < mean, and the order reverses for negatively skewed data.

Page 8:

Randomness of Missing Data

• Missing-data randomness is divided into three classes:

1. Missing completely at random (MCAR): the probability that an instance (case) has a missing value for an attribute depends neither on the known attribute values nor on the missing attribute itself.

2. Missing at random (MAR): the probability that an instance (case) has a missing value for an attribute depends on the known attribute values, but not on the missing attribute itself.

3. Not missing at random (NMAR): the probability that an instance has a missing value for an attribute may depend on the value of that attribute.

Page 9:

Methods of Treating Missing Data

• Ignoring and discarding data: there are two main ways to discard data with missing values.
– Discard all records that contain missing data (also called discard case analysis).
– Discard only those attributes that have a high level of missing data.

• Imputation using mean/median or mode: one of the most frequently used methods (a statistical technique); a small sketch follows below.
– Replace missing values of numeric (continuous) attributes with the mean or median (the median is robust against noise/outliers).
– Replace missing values of discrete (categorical) attributes with the mode.
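A hedged Python sketch of this idea (my own, not from the slides; the sample columns are made up): impute a numeric column with its median and a discrete column with its mode.

```python
import statistics

ages   = [25, 31, None, 42, 38, None, 29]      # numeric attribute
colors = ["red", "blue", None, "red", "red"]   # discrete attribute

known_ages = [a for a in ages if a is not None]
age_fill = statistics.median(known_ages)

known_colors = [c for c in colors if c is not None]
color_fill = statistics.mode(known_colors)     # most frequent known value

ages   = [age_fill if a is None else a for a in ages]
colors = [color_fill if c is None else c for c in colors]

print(ages)    # missing ages replaced by the median of the known ages
print(colors)  # missing colors replaced by "red", the mode
```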

Page 10:

Methods of Treating Missing Data

• Replace missing values using a prediction/classification model:
– Advantage: it considers the relationship between the known attribute values and the missing values, so the imputation accuracy is very high.
– Disadvantage: if no correlation exists between the missing attribute values and the known attribute values, the imputation cannot be performed.
– Alternative approach: use a hybrid combination of a prediction/classification model and mean/median/mode.
• First try to impute the missing value using the prediction/classification model, and fall back to mean/median/mode otherwise.
– We will study more about this topic in Association Rules Mining.

Page 11:

Methods of Treating Missing Data

• k-Nearest Neighbor (k-NN) approach (best approach):
– k-NN imputes the missing attribute values on the basis of the K nearest neighbors. Neighbors are determined on the basis of a distance measure.
– Once the K neighbors are determined, missing values are imputed by taking the mean/median or mode of the neighbors' known values of the missing attribute.
– Pseudo-code/analysis follows after studying distance measures.

Figure: the record with a missing value is compared against the other records of the dataset to find its nearest neighbors.

Page 12:

Similarity and Dissimilarity

• Similarity
– Numerical measure of how alike two data objects are.
– Higher when objects are more alike.
– Often falls in the range [0,1].

• Dissimilarity
– Numerical measure of how different two data objects are.
– Lower when objects are more alike.
– Minimum dissimilarity is often 0.
– Upper limit varies.

• Proximity refers to either a similarity or a dissimilarity.

Page 13:

Distance Measures

• Remember that the K nearest neighbors are determined on the basis of some kind of "distance" between points.

• Two major classes of distance measures:
1. Euclidean: based on the position of points in some k-dimensional space.
2. Non-Euclidean: not related to position or space.

Page 14:

Scales of Measurement

• Applying a distance measure largely depends on the type of input data.

• Major scales of measurement:

1. Nominal Data (aka nominal scale variables)
• Typically classification data, e.g. m/f.
• No ordering, e.g. it makes no sense to state that M > F.
• Binary variables are a special case of nominal scale variables.

2. Ordinal Data (aka ordinal scale)
• Ordered, but differences between values are not important.
• e.g., political parties on a left-to-right spectrum given labels 0, 1, 2.
• e.g., Likert scales: rank your degree of satisfaction on a scale of 1..5.
• e.g., restaurant ratings.

Page 15:

Scales of Measurement

• Applying a distance function largely depends on the type of input data.

• Major scales of measurement:

3. Numeric Data (aka interval scaled)
• Ordered and equal intervals, measured on a linear scale.
• Differences make sense.
• e.g., temperature (C, F), height, weight, age, date.

Page 16:

Scales of Measurement

• Only certain operations can be performed on certain scales of measurement.

Nominal scale: 1. Equality, 2. Count
Ordinal scale: adds 3. Rank (cannot quantify the difference)
Interval scale: adds 4. Quantify the difference

Page 17:

Some Euclidean Distances

• L2 norm (also common or Euclidean distance):

– The most common notion of “distance.”

• L1 norm (also Manhattan distance)

– distance if you had to travel along coordinates only.

• L2 norm: $d(i,j) = \sqrt{|x_{i1}-x_{j1}|^2 + |x_{i2}-x_{j2}|^2 + \cdots + |x_{ip}-x_{jp}|^2}$

• L1 norm: $d(i,j) = |x_{i1}-x_{j1}| + |x_{i2}-x_{j2}| + \cdots + |x_{ip}-x_{jp}|$

Page 18:

Examples L1 and L2 norms

x = (5,5), y = (9,8)

L2 norm: dist(x,y) = $\sqrt{4^2 + 3^2}$ = 5

L1 norm: dist(x,y) = 4 + 3 = 7
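A short Python sketch (mine, not from the slides) of these norms, reproducing the example above; it also includes the L∞ norm introduced on the next page.

```python
import math

def l1(x, y):
    # Manhattan distance: travel along coordinates only
    return sum(abs(a - b) for a, b in zip(x, y))

def l2(x, y):
    # Euclidean distance
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def linf(x, y):
    # maximum difference over any single dimension
    return max(abs(a - b) for a, b in zip(x, y))

x, y = (5, 5), (9, 8)
print(l1(x, y))    # 7
print(l2(x, y))    # 5.0
print(linf(x, y))  # 4
```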

Page 19:

Another Euclidean Distance

• L∞ norm : d(x,y) = the maximum of the differences between x and y in any dimension.

Page 20:

Non-Euclidean Distances

• Jaccard measure for binary vectors

• Cosine measure = angle between vectors from the origin to the points in question.

• Edit distance = number of inserts and deletes to change one string into another.

Page 21:

Jaccard Measure

• A note about binary variables first:
– Symmetric binary variable
• Both states are equally valuable and carry the same weight, i.e. there is no preference on which outcome should be coded as 0 or 1.
• Like "gender", having the states male and female.
– Asymmetric binary variable
• The outcomes of the states are not equally important, such as the positive and negative outcomes of a disease test.
• We should code the rarer one as 1 (e.g., HIV positive) and the other as 0 (HIV negative).
– Given two asymmetric binary variables, the agreement of two 1s (a positive match) is then considered more important than that of two 0s (a negative match).

Page 22:

Jaccard Measure

• A contingency table for binary data (object i vs. object j):

                 Object j
                   1       0      sum
  Object i   1     a       b      a+b
             0     c       d      c+d
           sum    a+c     b+d      p

• Simple matching coefficient (invariant if the binary variable is symmetric):

  $d(i,j) = \frac{b+c}{a+b+c+d}$

• Jaccard coefficient (non-invariant if the binary variable is asymmetric):

  $d(i,j) = \frac{b+c}{a+b+c}$

Page 23:

Jaccard Measure Example

• Example:
– All attributes are asymmetric binary.
– Let the values Y and P be set to 1, and the value N be set to 0.

Name   Fever   Cough   Test-1   Test-2   Test-3   Test-4
Jack   Y       N       P        N        N        N
Mary   Y       N       P        N        P        N
Jim    Y       P       N        N        N        N

$d(\text{jack}, \text{mary}) = \frac{0+1}{2+0+1} = 0.33$

$d(\text{jack}, \text{jim}) = \frac{1+1}{1+1+1} = 0.67$

$d(\text{jim}, \text{mary}) = \frac{1+2}{1+1+2} = 0.75$
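A small Python sketch (not from the slides) computing the Jaccard dissimilarity d(i,j) = (b + c) / (a + b + c) for the asymmetric binary example above.

```python
records = {
    "jack": "Y N P N N N".split(),
    "mary": "Y N P N P N".split(),
    "jim":  "Y P N N N N".split(),
}

def to_binary(values):
    # Y and P are coded as 1, N as 0
    return [1 if v in ("Y", "P") else 0 for v in values]

def jaccard_dissimilarity(name_i, name_j):
    x, y = to_binary(records[name_i]), to_binary(records[name_j])
    a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)
    b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
    c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
    return (b + c) / (a + b + c)

print(round(jaccard_dissimilarity("jack", "mary"), 2))  # 0.33
print(round(jaccard_dissimilarity("jack", "jim"), 2))   # 0.67
print(round(jaccard_dissimilarity("jim", "mary"), 2))   # 0.75
```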

Page 24:

Cosine Measure

• Think of a point as a vector from the origin (0,0,…,0) to its location.

• Two points' vectors make an angle, whose cosine is the normalized dot product of the vectors:

  $\text{dist}(p_1, p_2) = \theta = \arccos\!\left(\frac{p_1 \cdot p_2}{|p_1|\,|p_2|}\right)$

– Example: if $p_1 \cdot p_2 = 2$ and $|p_1|\,|p_2| = 3$, then $\cos\theta = 2/3$ and $\theta$ is about 48 degrees.
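A hedged Python sketch (mine, not from the slides; the example vectors are made up) computing the cosine of the angle between two vectors and the angle itself.

```python
import math

def cosine(p1, p2):
    dot = sum(a * b for a, b in zip(p1, p2))
    norm1 = math.sqrt(sum(a * a for a in p1))
    norm2 = math.sqrt(sum(b * b for b in p2))
    return dot / (norm1 * norm2)

p1, p2 = (1, 1, 0), (1, 1, 1)          # hypothetical points
cos_theta = cosine(p1, p2)
theta = math.degrees(math.acos(cos_theta))
print(cos_theta, theta)                # ~0.816, ~35.3 degrees
```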

Page 25:

Distance for Ordinal variables

• The value of the ordinal variable f for the i-th object is $r_{if}$, where variable f has $M_f$ ordered states: $r_{if} \in \{1, \ldots, M_f\}$.

• Since each ordinal variable can have a different number of states, map the range of each variable onto [0, 1] so that each variable has equal weight. This can be achieved by replacing each value $r_{if}$ of ordinal variable f by

  $z_{if} = \frac{r_{if} - 1}{M_f - 1}$

• After calculating $z_{if}$, calculate the distance using the Euclidean distance formulas.
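A tiny Python sketch (not from the slides; the ratings are a made-up ordinal variable) of this mapping, after which the usual Euclidean distance applies.

```python
ratings = [1, 2, 3, 4, 5]          # hypothetical ordinal variable, M_f = 5
M_f = max(ratings)

# z_if = (r_if - 1) / (M_f - 1)
z = [(r - 1) / (M_f - 1) for r in ratings]
print(z)                           # [0.0, 0.25, 0.5, 0.75, 1.0]
```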

Page 26:

Edit Distance

• The edit distance of two strings is the number of inserts and deletes of characters needed to turn one into the other.

• Equivalently, d(x,y) = |x| + |y| - 2|LCS(x,y)|.
– LCS = longest common subsequence = the longest string that can be obtained both by deleting characters from x and by deleting characters from y.

Page 27:

Example

• x = abcde ; y = bcduve.

• LCS(x,y) = bcde.
• d(x,y) = |x| + |y| - 2|LCS(x,y)| = 5 + 6 - 2*4 = 3.

• What is left?
• Normalize it into the range [0, 1]. We will study normalization formulas in the next lecture.
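A Python sketch (mine, not from the slides) of the insert/delete edit distance via the LCS relation above; it reproduces the worked example.

```python
from functools import lru_cache

def lcs_length(x: str, y: str) -> int:
    # length of the longest common subsequence of x and y
    @lru_cache(maxsize=None)
    def rec(i: int, j: int) -> int:
        if i == len(x) or j == len(y):
            return 0
        if x[i] == y[j]:
            return 1 + rec(i + 1, j + 1)
        return max(rec(i + 1, j), rec(i, j + 1))
    return rec(0, 0)

def edit_distance(x: str, y: str) -> int:
    # d(x, y) = |x| + |y| - 2 * |LCS(x, y)|
    return len(x) + len(y) - 2 * lcs_length(x, y)

print(edit_distance("abcde", "bcduve"))  # 3
```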

Page 28:

Back to k-Nearest Neighbor (Pseudo-code)

• Missing values imputation using k-NN.
• Input: dataset D, size of K

• for each record x with at least one missing value in D
– for each data object y in D (y ≠ x)
• compute Distance(x, y)
• save the distance and y in a similarity array S
– sort S in ascending order of distance (nearest first)
– pick the top K data objects from S
• Impute the missing attribute value(s) of x on the basis of the known values in S (use mean/median or mode).
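A runnable Python sketch of this pseudo-code (my own, not from the slides; it assumes numeric attributes, Euclidean distance over the known attributes, median imputation, and that neighbors are restricted to fully complete records).

```python
import math
import statistics

def knn_impute(data, k):
    def distance(x, y, dims):
        return math.sqrt(sum((x[d] - y[d]) ** 2 for d in dims))

    completed = [row[:] for row in data]
    for i, x in enumerate(data):
        missing = [d for d, v in enumerate(x) if v is None]
        if not missing:
            continue
        known = [d for d, v in enumerate(x) if v is not None]
        # candidate neighbours: other records with no missing values at all
        neighbours = [y for j, y in enumerate(data)
                      if j != i and all(v is not None for v in y)]
        neighbours.sort(key=lambda y: distance(x, y, known))  # nearest first
        top_k = neighbours[:k]
        for d in missing:
            completed[i][d] = statistics.median(y[d] for y in top_k)
    return completed

data = [
    [1.0, 2.0, 3.0],
    [1.1, 2.1, None],   # missing third attribute
    [0.9, 1.9, 2.8],
    [8.0, 9.0, 10.0],
]
print(knn_impute(data, k=2))  # missing value filled from the two nearest rows
```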

Page 29:

K-Nearest Neighbor Drawbacks

• The major drawbacks of this approach are:
– Choosing an appropriate distance function.
– Considering all attributes when attempting to retrieve similar examples.
– Searching through the whole dataset to find instances of the same type.
– Algorithm cost: ?

Page 30:

Noisy Data

• Noise: random error; the data is present but not correct.
– Data transmission error
– Data entry problem

• Removing noise
– Data smoothing (rounding, averaging within a window).
– Clustering/merging and detecting outliers.

• Data smoothing
– First sort the data and partition it into (equi-depth) bins.
– Then smooth the values in each bin by bin means, bin medians, bin boundaries, etc.

Page 31:

Noisy Data (Binning Methods)

Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34

* Partition into (equi-depth) bins:
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
  - Bin 1: 4, 4, 4, 15
  - Bin 2: 21, 21, 25, 25
  - Bin 3: 26, 26, 26, 34
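A Python sketch (not from the slides) of equi-depth binning with smoothing by bin means and by bin boundaries, reproducing the example above.

```python
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]  # already sorted
depth = 4
bins = [prices[i:i + depth] for i in range(0, len(prices), depth)]

# replace every value in a bin by the (rounded) bin mean
smoothed_by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(b):
    # replace each value by the closer of the bin's min and max
    lo, hi = b[0], b[-1]
    return [lo if v - lo <= hi - v else hi for v in b]

smoothed_by_boundaries = [smooth_by_boundaries(b) for b in bins]

print(smoothed_by_means)       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smoothed_by_boundaries)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```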

Page 32:

Noisy Data (Clustering)

• Outliers may be detected by clustering, where similar values are organized into groups or "clusters".

• Values which fall outside the set of clusters may be considered outliers.

Page 33:

References

– G. Batista and M. Monard, "A Study of K-Nearest Neighbour as an Imputation Method", 2002. (Will be placed in the course folder.)

– "CS345 Lecture Notes" by Jeff D. Ullman, Stanford University. http://www-db.stanford.edu/~ullman/cs345-notes.html

– Vipin Kumar's data mining course offered at the University of Minnesota.

– Official textbook slides of Jiawei Han and Micheline Kamber, "Data Mining: Concepts and Techniques", Morgan Kaufmann Publishers, August 2000.