Page 1

Department of Computer Science, University of Waikato, New Zealand

Geoff Holmes

Data Mining using WEKA

WEKA project and team
Data Mining process
Data format
Preprocessing
Classification
Regression
Clustering
Associations
Attribute selection
Visualization
Performing experiments
New Directions
Conclusion

Page 2

Waikato Environment for Knowledge Analysis

Copyright: Martin Kramer ([email protected])

• PGSF/NERF project has been running since 1994.

• New Java software development from 1998 onwards.

• Project goals:
  • Develop a state-of-the-art workbench of data mining tools
  • Explore fielded applications
  • Develop new fundamental methods

Page 3

WEKA TEAM: Geoff Holmes, Ian Witten, Bernhard Pfahringer, Eibe Frank, Mark Hall, Yong Wang, Remco Bouckaert, Peter Reutemann, Gabi Schmidberger, Dale Fletcher, Tony Smith, Mike Mayo and Richard Kirkby

Members on editorial board of MLJ, programme committees for ICML, ECML, KDD, …

Authors of a widely adopted data mining textbook.

Page 4

Data mining process

Select → Preprocess → Transform → Mine → Analyze & Assimilate

Selected data → Preprocessed data → Transformed data → Extracted information → Assimilated knowledge

Page 5

Data mining software

Commercial packages (cost on the order of 10^6 dollars):
  IBM Intelligent Miner, SAS Enterprise Miner, Clementine

WEKA (free = GPL licence!)
  Java => multi-platform
  Open source – means you get the source code

Page 6

Data format

Rectangular table format (flat file) is very common
Most techniques exist to deal with table format

Row = instance = individual = data point = case = record
Column = attribute = field = variable = characteristic = dimension

Outlook Temperature Humidity Windy Play

Sunny Hot High False No

Sunny Hot High True No

Overcast Hot High False Yes

Rainy Mild Normal False Yes

… … … … …

Page 7

Data complications

Volume of data – sampling; essential attributes
Missing data
Inaccurate data
Data filtering
Data aggregation

Page 8

WEKA’s ARFF format

%
% ARFF file for weather data with some numeric features
%
@relation weather

@attribute outlook {sunny, overcast, rainy}
@attribute temperature numeric
@attribute humidity numeric
@attribute windy {true, false}
@attribute play? {yes, no}

@data
sunny, 85, 85, false, no
sunny, 80, 90, true, no
overcast, 83, 86, false, yes
...
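The ARFF layout above maps directly onto the row/column table view from the data-format slide. A minimal sketch of reading this subset of ARFF (illustrative only; Weka itself provides full ARFF support in Java, and this toy parser ignores sparse data, strings, and quoting):

```python
def parse_arff(text):
    """Parse a minimal ARFF subset: comments, @relation,
    @attribute declarations, and comma-separated @data rows."""
    attributes, rows, in_data = [], [], False
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('%'):
            continue  # skip blanks and comment lines
        lower = line.lower()
        if lower.startswith('@attribute'):
            attributes.append(line.split(None, 2)[1])  # attribute name
        elif lower.startswith('@data'):
            in_data = True
        elif in_data:
            rows.append([v.strip() for v in line.split(',')])
    return attributes, rows

weather = """\
@relation weather
@attribute outlook {sunny, overcast, rainy}
@attribute temperature numeric
@attribute humidity numeric
@attribute windy {true, false}
@attribute play? {yes, no}
@data
sunny, 85, 85, false, no
overcast, 83, 86, false, yes
"""
attrs, data = parse_arff(weather)
# attrs → ['outlook', 'temperature', 'humidity', 'windy', 'play?']
```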

Page 9

Attribute types

ARFF supports numeric and nominal attributes
Interpretation depends on learning scheme
Numeric attributes are interpreted as
  - ordinal scales if less-than and greater-than are used
  - ratio scales if distance calculations are performed
    (normalization/standardization may be required)
Instance-based schemes define distance between nominal values (0 if values are equal, 1 otherwise)
Integers: nominal, ordinal, or ratio scale?
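The distance convention just described can be sketched as a per-attribute sum (an illustrative function, not Weka's implementation; numeric attributes are assumed already normalized to [0, 1]):

```python
def mixed_distance(a, b):
    """Distance between two instances: absolute difference for numeric
    attributes (assumed normalized to [0, 1]); for nominal attributes,
    0 if the values are equal and 1 otherwise."""
    total = 0.0
    for x, y in zip(a, b):
        if isinstance(x, (int, float)) and isinstance(y, (int, float)):
            total += abs(x - y)          # numeric: normalized difference
        else:
            total += 0.0 if x == y else 1.0  # nominal: equal → 0, else 1
    return total

# two weather instances: (outlook, normalized temperature, windy)
d = mixed_distance(('sunny', 0.85, 'false'), ('sunny', 0.80, 'true'))
# d ≈ 0 + 0.05 + 1
```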

Page 10

Missing values

Frequently indicated by out-of-range entries
Types: unknown, unrecorded, irrelevant
Reasons: malfunctioning equipment, changes in experimental design, collation of different datasets, measurement not possible
Missing value may have significance in itself (e.g. missing test in a medical examination)
Most schemes assume that is not the case
“missing” may need to be coded as additional value
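Coding “missing” as an additional value, as suggested above, amounts to a simple substitution (an illustrative sketch; in ARFF files Weka marks missing values with "?"):

```python
def code_missing(rows, marker='?', replacement='missing'):
    """Replace the missing-value marker with an explicit nominal value,
    so that absence itself can carry meaning (e.g. an untaken test)."""
    return [[replacement if v == marker else v for v in row]
            for row in rows]

exams = [['high', '?', 'yes'], ['low', 'normal', 'no']]
coded = code_missing(exams)
# coded[0] → ['high', 'missing', 'yes']
```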

Page 11

Getting to know the data

Simple visualization tools are very useful for identifying problems
Nominal attributes: histograms (Distribution consistent with background knowledge?)
Numeric attributes: graphs (Any obvious outliers?)
2-D and 3-D visualizations show dependencies
Domain experts need to be consulted
Too much data to inspect? Take a sample!

Page 12

Learning and using a model

Learning
  Learning algorithm takes instances of concept as input
  Produces a structural description (model) as output

  Input (concept to learn) → Learning algorithm → Model

Prediction
  Model takes new instance as input
  Outputs prediction

  Input → Model → Prediction

Page 13

Structural descriptions (models)

Some models are better than others
  Accuracy
  Understandability
Models range from “easy to understand” to virtually incomprehensible (easier → harder):
  Decision trees
  Rule induction
  Regression models
  Neural networks

Page 14

Pre-processing the data

Data can be imported from a file in various formats: ARFF, CSV, C4.5, binary
Data can also be read from a URL or from SQL databases using JDBC
Pre-processing tools in WEKA are called “filters”
WEKA contains filters for:
  Discretization, normalization, resampling, attribute selection, attribute combination, …
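A filter transforms a dataset before a learning scheme sees it. As a sketch of one of the filters listed, min–max normalization of a numeric attribute (illustrative only, not Weka's filter API):

```python
def normalize(values):
    """Min-max normalization: rescale a numeric attribute to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant attribute: nothing to scale
    return [(v - lo) / (hi - lo) for v in values]

temps = [85, 80, 83, 70]
norm = normalize(temps)
# the minimum maps to 0.0 and the maximum to 1.0
```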

Page 15

Explorer: pre-processing

Page 16

Building classification models

“Classifiers” in WEKA are models for predicting nominal or numeric quantities
Implemented schemes include:
  Decision trees and lists, instance-based classifiers, support vector machines, multi-layer perceptrons, logistic regression, Bayes’ nets, …
“Meta”-classifiers include:
  Bagging, boosting, stacking, error-correcting output codes, data cleansing, …

Pages 17–26: Explorer: classification/regression (screenshot walkthrough)

Page 27

Clustering data

WEKA contains “clusterers” for finding groups of instances in a dataset
Implemented schemes are:
  k-Means, EM, Cobweb
Coming soon: x-means
Clusters can be visualized and compared to “true” clusters (if given)
Evaluation based on loglikelihood if clustering scheme produces a probability distribution
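Of the schemes listed, k-means is the simplest; a compact 1-D sketch of its assign/update loop (illustrative only, not Weka's implementation, and with a naive deterministic initialization):

```python
def kmeans_1d(points, k, iters=10):
    """Plain k-means on 1-D data (k >= 2): assign each point to its
    nearest centroid, then move each centroid to its cluster mean."""
    srt = sorted(points)
    step = (len(srt) - 1) // (k - 1)
    centroids = [srt[i * step] for i in range(k)]  # spread over the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

cents, clus = kmeans_1d([1, 2, 3, 10, 11, 12], 2)
# two well-separated groups → centroids near 2 and 11
```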

Pages 28–31: Explorer: clustering (screenshot walkthrough)

Page 32

Finding associations

WEKA contains an implementation of the Apriori algorithm for learning association rules
  Works only with discrete data
Allows you to identify statistical dependencies between groups of attributes:
  milk, butter => bread, eggs (with confidence 0.9 and support 2000)
Apriori can compute all rules that have a given minimum support and exceed a given confidence
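The support and confidence figures quoted above are computed as follows (a sketch of the rule-evaluation arithmetic only, not the Apriori search; support here is an absolute transaction count, as on the slide):

```python
def support_confidence(transactions, antecedent, consequent):
    """support  = number of transactions containing antecedent ∪ consequent
       confidence = that count / number containing the antecedent alone"""
    ante = set(antecedent)
    both = ante | set(consequent)
    n_ante = sum(1 for t in transactions if ante <= set(t))
    n_both = sum(1 for t in transactions if both <= set(t))
    return n_both, (n_both / n_ante if n_ante else 0.0)

baskets = [['milk', 'butter', 'bread', 'eggs'],
           ['milk', 'butter', 'bread'],
           ['milk', 'butter', 'eggs'],
           ['milk', 'bread'],
           ['butter', 'eggs']]
sup, conf = support_confidence(baskets, ['milk', 'butter'], ['bread'])
# rule "milk, butter => bread": support 2, confidence 2/3
```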

Page 33

Explorer: association rules

Page 34

Attribute selection

Separate panel allows you to investigate which (subsets of) attributes are the most predictive ones
Attribute selection methods contain two parts:
  A search method: best-first, forward selection, random, exhaustive, race search, ranking
  An evaluation method: correlation-based, wrapper, information gain, chi-squared, PCA, …
Very flexible: WEKA allows (almost) arbitrary combinations of these two
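Information gain, one of the evaluation methods listed, scores an attribute by how much splitting on it reduces the entropy of the class. A sketch (log base 2, illustrative only):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class distribution, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(values, labels):
    """Class entropy minus the weighted entropy of the subsets
    produced by splitting on the attribute's values."""
    n = len(labels)
    split = 0.0
    for v in set(values):
        subset = [l for x, l in zip(values, labels) if x == v]
        split += len(subset) / n * entropy(subset)
    return entropy(labels) - split

outlook = ['sunny', 'sunny', 'overcast', 'rainy']
play    = ['no',    'no',    'yes',      'yes']
gain = info_gain(outlook, play)
# splitting on outlook leaves pure subsets, so the gain is the full
# class entropy (1 bit for this 2/2 split)
```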

Page 35

Explorer: attribute selection

Page 36

Data visualization

Visualization is very useful in practice: e.g. helps to determine difficulty of the learning problem
WEKA can visualize single attributes (1-d) and pairs of attributes (2-d)
To do: rotating 3-d visualizations (Xgobi-style)
Color-coded class values
“Jitter” option to deal with nominal attributes (and to detect “hidden” data points)
“Zoom-in” function

Page 37

Explorer: data visualization

Page 38

Performing experiments

The Experimenter makes it easy to compare the performance of different learning schemes applied to the same data.
Designed for nominal and numeric class problems
Results can be written into file or database
Evaluation options: cross-validation, learning curve, hold-out
Can also iterate over different parameter settings
Significance-testing built in!
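The built-in significance testing compares two schemes over the same cross-validation folds. A sketch of the underlying paired t statistic (illustrative only; the Experimenter applies refinements of this idea, and the example accuracies below are made up):

```python
from math import sqrt

def paired_t(scores_a, scores_b):
    """Paired t statistic over per-fold accuracy differences."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / sqrt(var / n)

# per-fold accuracies of two schemes on the same 5 folds (made-up numbers)
t = paired_t([0.81, 0.79, 0.84, 0.80, 0.82],
             [0.75, 0.74, 0.78, 0.76, 0.77])
# compare |t| against the t distribution with n-1 degrees of freedom
```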

Page 39

Experimenter: setting it up

Page 40

Experimenter: running it

Page 41

Experimenter: analysis

Page 42

New Directions for Weka

New user interface based on work flows
New data mining techniques:
  PACE regression
  Bayesian Networks
  Logistic option trees
New frameworks for very large data sources (MOA)
New applications in the agricultural sector:
  Matchmaker for RPBC Ltd
  Pest control for kiwifruit management
  Crop forecasting

Page 43

Next Generation Weka: Knowledge flow GUI

Page 44

Conclusions

Weka is a comprehensive suite of Java programs united under a common interface to permit exploration and experimentation on datasets using state-of-the-art techniques.

The software is available under the GPL from http://www.cs.waikato.ac.nz/~ml

Weka provides the perfect environment for ongoing research in data mining.