Introduction to Machine Learning with SciKit-Learn


Plan of Study

- Preface
- What is Machine Learning?
- An Architecture for ML Data Products
- What is Scikit-Learn?
- Data Handling and Loading
- Model Evaluation
- Regressions
- Classification
- Clustering
- Workshop

Preface

On teaching Machine Learning...

Is Machine Learning a one semester course?

[Slide: topics spanning Statistics, Artificial Intelligence, and Computer Science — probability, normalization, distributions, smoothing, Bayes' theorem, regression, logits, optimization, planning, computer vision, natural language processing, reinforcement learning, neural models, anomaly detection, entropy, function approximation, data mining, graph algorithms, big data.]

Machine Learning Practitioner Continuum: Hacker ↔ Academic

Machine Learning Practitioner Domains: Statistician ↔ Expert Programmer

Context to Data Mining & Statistics

[Diagram: where machine learning sits relative to data mining, statistics, and computer science, connecting data, the machine or app, and users.]

What is Machine Learning?

Learning by Example

Given a bunch of examples (data), extract a meaningful pattern upon which to act.

Problem Domain                             Machine Learning Class
Infer a function from labeled data         Supervised learning
Find structure of data without feedback    Unsupervised learning
Interact with environment towards goal     Reinforcement learning

How do you make predictions?

What patterns do you see?

What is the Y value?

How do you determine red from blue?

Training data is input to fit a model, which is then used to make predictions on new incoming inputs.

Types of Algorithms by Output

Type of Output                                 Algorithm Category
Output is one or more discrete classes         Classification (supervised)
Output is continuous                           Regression (supervised)
Output is membership in a similar group        Clustering (unsupervised)
Output is the distribution of inputs           Density Estimation
Output is simplified from higher dimensions    Dimensionality Reduction

Classification

Given labeled input data (with two or more labels), fit a function that can determine, for any input, what the label is.

Regression

Given continuous input data, fit a function that can predict a continuous output value from the other input variables.

Clustering

Given data, determine a pattern of associated data points or clusters via their similarity or distance from one another.

“Model” is an overloaded term.

• Model family describes, at the broadest possible level, the connection between the variables of interest.

• Model form specifies exactly how the variables of interest are connected within the framework of the model family.

• A fitted model is a concrete instance of the model form where all parameters have been estimated from data, and the model can be used to generate predictions.

Hadley Wickham (2015), http://had.co.nz/stat645/model-vis.pdf

Dimensions and Features

In order to do machine learning you need a data set containing instances (examples) that are composed of features from which you compose dimensions.

Instance: a single data point or example composed of fields
Feature: a quantity describing an instance
Dimension: one or more attributes that describe a property

from sklearn.datasets import load_digits

digits = load_digits()
X = digits.data    # X.shape == (n_samples, n_features)
y = digits.target  # y.shape == (n_samples,)

Feature Space

Feature space refers to the n dimensions where your variables live (not including a target variable or class). The term is used often in ML literature because in ML all variables are features (usually) and feature extraction is the art of creating a space with decision boundaries.

Target
1. Y ≡ thickness of car tires after some testing period

Variables
1. X1 ≡ distance travelled in test
2. X2 ≡ time duration of test
3. X3 ≡ amount of chemical C in tires

The feature space is R3, or more accurately, the positive orthant of R3, as all the X variables can only be positive quantities.

http://stats.stackexchange.com/questions/46425/what-is-feature-space

Mappings

Domain knowledge about tires might suggest that the speed the vehicle was moving at is important, hence we generate another variable, X4 (this is the feature extraction part):

X4 = X1/X2 ≡ the speed of the vehicle during testing.

This extends our old feature space into a new one, the positive part of R4.

A mapping is a function, ϕ, from R3 to R4:

ϕ(x1, x2, x3) = (x1, x2, x3, x1/x2)
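As a minimal sketch (assuming the three original variables are stored as the columns of a NumPy array), the mapping can be written as a plain function that appends the derived speed feature:

import numpy as np

def phi(X):
    # map (x1, x2, x3) -> (x1, x2, x3, x1/x2): append the derived speed feature
    X = np.asarray(X, dtype=float)
    speed = X[:, 0] / X[:, 1]           # distance / duration
    return np.column_stack([X, speed])

# one row per test: [distance, duration, amount of chemical C]
X = np.array([[1200.0, 10.0, 0.3],
              [ 900.0,  8.0, 0.5]])
print phi(X).shape                      # (2, 4) -- the new feature space R4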

http://stats.stackexchange.com/questions/46425/what-is-feature-space

Your Task

Given a data set of instances of size N, create a model that is fit from the data (built) by extracting features and dimensions. Then use that model to predict outcomes …

1. Data Wrangling (normalization, standardization, imputing)

2. Feature Analysis/Extraction
3. Model Selection/Building
4. Model Evaluation
5. Operationalize Model
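Putting these five steps together, a rough sketch on a toy dataset (the iris data, the scaler, and the SVM used here are illustrative choices, not a prescription):

from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.cross_validation import train_test_split
from sklearn.metrics import classification_report

iris = load_iris()

# 1. data wrangling: standardize the features
X = StandardScaler().fit_transform(iris.data)
y = iris.target

# 2-3. feature analysis and model selection are skipped here; pick an SVM as-is
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = SVC()

# 4. model evaluation on a held-out test set
model.fit(X_train, y_train)
print classification_report(y_test, model.predict(X_test))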

A Tour of Machine Learning Algorithms

Models: Instance Methods

Compare instances in the data set with a similarity measure to find best matches.
- Suffers from the curse of dimensionality.
- Focus on feature representation and similarity metrics between instances.

● k-Nearest Neighbors (kNN)
● Self-Organizing Maps (SOM)
● Learning Vector Quantization (LVQ)
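Of these, kNN ships with scikit-learn; a minimal sketch (classifier and parameters chosen only for illustration):

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
knn = KNeighborsClassifier(n_neighbors=5)   # similarity = Euclidean distance by default
knn.fit(iris.data, iris.target)
print knn.predict(iris.data[:3])            # labels voted by the nearest neighbors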

Models: Regression

Model the relationship of independent variables X to a dependent variable Y by iteratively optimizing the error made in predictions.

● Ordinary Least Squares
● Logistic Regression
● Stepwise Regression
● Multivariate Adaptive Regression Splines (MARS)
● Locally Estimated Scatterplot Smoothing (LOESS)
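For instance, a minimal ordinary least squares sketch on synthetic data (the data here is made up purely for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression

# y = 3x plus noise
X = np.arange(100, dtype=float).reshape(-1, 1)
y = 3 * X.ravel() + np.random.randn(100)

ols = LinearRegression()
ols.fit(X, y)
print ols.coef_, ols.intercept_   # fitted slope and intercept, close to 3 and 0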

Models: Regularization Methods

Extend another method (usually regression) by penalizing complexity, to minimize overfitting.
- simple, popular, powerful
- better at generalization

● Ridge Regression
● LASSO (Least Absolute Shrinkage & Selection Operator)
● Elastic Net

Models: Decision Trees

Model of decisions based on data attributes. Predictions are made by following forks in a tree structure until a decision is made. Used for classification & regression.

● Classification and Regression Tree (CART)
● Decision Stump
● Random Forest
● Multivariate Adaptive Regression Splines (MARS)
● Gradient Boosting Machines (GBM)
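A sketch of a tree ensemble (a Random Forest on the iris data; the number of trees is an arbitrary illustrative choice):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
forest = RandomForestClassifier(n_estimators=100)  # 100 trees, each fit to a bootstrap sample
forest.fit(iris.data, iris.target)
print forest.feature_importances_                  # which features drive the forks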

Models: Bayesian

Explicitly apply Bayes’ Theorem for classification and regression tasks. Usually by fitting a probability function constructed via the chain rule and a naive simplification of Bayes.

● Naive Bayes
● Averaged One-Dependence Estimators (AODE)
● Bayesian Belief Network (BBN)
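For example, Gaussian Naive Bayes in scikit-learn (a sketch only; it assumes continuous features modeled with per-class Gaussians):

from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

iris = load_iris()
nb = GaussianNB()
nb.fit(iris.data, iris.target)
print nb.predict_proba(iris.data[:1])   # posterior class probabilities via Bayes' theorem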

Models: Kernel Methods

Map input data into higher dimensional vector space where the problem is easier to model. Named after the “kernel trick” which computes the inner product of images of pairs of data.

● Support Vector Machines (SVM)
● Radial Basis Function (RBF)
● Linear Discriminant Analysis (LDA)
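A minimal SVM sketch with an RBF kernel (the parameters are illustrative, not tuned):

from sklearn.datasets import load_digits
from sklearn.svm import SVC

digits = load_digits()
svc = SVC(kernel='rbf', gamma=0.001, C=100.)   # kernel trick: inner products in a higher-dimensional space
svc.fit(digits.data[:-10], digits.target[:-10])
print svc.predict(digits.data[-10:])           # predictions for the ten held-out images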

Models: Clustering Methods

Organize data into groups whose members share maximum similarity (defined usually by a distance metric). Two main approaches: centroids and hierarchical clustering.

● k-Means
● Affinity Propagation
● OPTICS (Ordering Points to Identify Cluster Structure)
● Agglomerative Clustering
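A k-means sketch (the number of clusters is an assumption you supply up front):

from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

iris = load_iris()
km = KMeans(n_clusters=3)    # centroid-based: three clusters assumed
km.fit(iris.data)
print km.labels_[:10]        # cluster membership for the first ten instances
print km.cluster_centers_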

Models: Artificial Neural Networks

Inspired by biological neural networks, ANNs are nonlinear function approximators that estimate functions with a large number of inputs.
- System of interconnected neurons that activate
- Deep learning extends simple networks recursively

● Perceptron
● Back-Propagation
● Hopfield Network
● Restricted Boltzmann Machine (RBM)
● Deep Belief Networks (DBN)

Models: Ensembles

Models composed of multiple weak models that are trained independently and whose outputs are combined to make an overall prediction.

● Boosting
● Bootstrapped Aggregation (Bagging)
● AdaBoost
● Stacked Generalization (blending)
● Gradient Boosting Machines (GBM)
● Random Forest

Models: Other

The preceding list is not comprehensive; other algorithm and model classes include:

● Conditional Random Fields (CRF)
● Markovian Models (HMMs)
● Dimensionality Reduction (PCA, PLS)
● Rule Learning (Apriori, Brill)
● More ...

An Architecture for Operationalizing Machine Learning Algorithms

Architecture of Machine Learning Operations

[Diagram: the build phase takes training data and labels, turns them into feature vectors, and feeds them to an estimation algorithm that produces a predictive model; the operational phase turns new data into a feature vector, passes it to the predictive model for a prediction, and feedback flows back into training.]

The Learning Part of Machine Learning

[Diagram: the learning loop — initial fixtures, model building, a service/API, and feedback feeding back into model building.]

Deploying Machine Learning as a Web Service

Annotation Service Example

Architecture Demo: https://github.com/DistrictDataLabs/product-classifier

What is Scikit-Learn?

Extensions to SciPy (Scientific Python) are called SciKits. SciKit-Learn provides machine learning algorithms.
● Algorithms for supervised & unsupervised learning
● Built on SciPy and NumPy
● Standard Python API interface
● Sits on top of C libraries: LAPACK, LibSVM, and Cython
● Open Source: BSD License

Probably the best general ML framework out there.

Where did it come from?

Started as a Google Summer of Code project in 2007 by David Cournapeau, then used as a thesis project by Matthieu Brucher.

In 2010, INRIA pushed the first public release, and sponsors the project, as do Google, Tinyclues, and the Python Software Foundation.

Who uses Scikit-Learn?

Primary Features

- Generalized Linear Models
- SVMs, kNN, Bayes, Decision Trees, Ensembles
- Clustering and Density algorithms
- Cross Validation
- Grid Search
- Pipelining
- Model Evaluations
- Dataset Transformations
- Dataset Loading

A Guide to Scikit-Learn

Object-oriented interface centered around the concept of an Estimator:

“An estimator is any object that learns from data; it may be a classification, regression or clustering algorithm or a transformer that extracts/filters useful features from raw data.”

- Scikit-Learn Tutorial

Scikit-Learn API

The Scikit-Learn Estimator API

class Estimator(object):

    def fit(self, X, y=None):
        """Fits estimator to data."""
        # set state of ``self``
        return self

    def predict(self, X):
        """Predict response of ``X``."""
        # compute predictions ``pred``
        return pred

- fit(X, y) sets the state of the estimator.
- X is usually a 2D numpy array of shape (n_samples, n_features).
- y is a 1D array with shape (n_samples,).
- predict(X) returns the class or value.
- predict_proba() returns a 2D array of shape (n_samples, n_classes).

Estimators

Basic methodology

from sklearn import svm

estimator = svm.SVC(gamma=0.001)

estimator.fit(X, y)

estimator.predict(x)
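The fragment above assumes X, y, and x are already defined; a complete version of the same flow might look like this (the digits data and the single held-out image are illustrative):

from sklearn import svm
from sklearn.datasets import load_digits

digits = load_digits()
X, y = digits.data[:-1], digits.target[:-1]   # hold out the last image

estimator = svm.SVC(gamma=0.001)
estimator.fit(X, y)
print estimator.predict(digits.data[-1:])     # predicted digit for the held-out image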

We’ve already discussed a broad workflow, the following is a development workflow:

Wrapping fit and predict

[Diagram: development workflow — raw data is loaded and transformed, features are extracted, a model is built and evaluated, and feature evaluation feeds back into the loop.]

Transformers

class Transformer(Estimator):

    def transform(self, X):
        """Transforms the input data."""
        # transform ``X`` to ``X_prime``
        return X_prime

from sklearn import preprocessing
from sklearn.preprocessing import Imputer

Xt = preprocessing.normalize(X)   # Normalizer
Xt = preprocessing.scale(X)       # StandardScaler

imputer = Imputer(missing_values='NaN', strategy='mean')
Xt = imputer.fit_transform(X)

Evaluation

Underfitting

Not enough information to accurately model real life. Can be due to high bias, or to a model that is simply too simplistic.

Solution: Cross Validation

Overfitting

A model with too many parameters, or one that is too complex, "memorizes the data" and can't generalize very well to new data.

Solution: Benchmark Testing, Ridge Regression, Feature Analyses, Dimensionality Reduction

Error: Bias vs Variance

Bias: the difference between expected (average) prediction of the model and the correct value.

Variance: how the predictions for a given point vary between different realizations of the model.

http://scott.fortmann-roe.com/docs/BiasVariance.html

Bias vs. Variance Trade-Off

Related to model complexity:

The more parameters added to the model (the more complex it is), the more bias is reduced and the more variance is increased.

Sources of complexity:
- k (nearest neighbors)
- epochs (neural nets)
- # of features
- learning rate

http://scott.fortmann-roe.com/docs/BiasVariance.html

Assess how the model will generalize to an independent data set (i.e. data not in the training set).

1. Divide data into training and test splits
2. Fit the model on training, predict on test
3. Determine accuracy, precision and recall
4. Repeat k times with different splits, then average the scores (e.g. F1)

Cross Validation (classification)

             Predicted Class A    Predicted Class B
Actual A     True A               False B              #A
Actual B     False A              True B               #B
             #P(A)                #P(B)                total

https://en.wikipedia.org/wiki/Precision_and_recall

accuracy = (true positives + true negatives) / total

precision = true positives / (true positives + false positives)

recall = true positives / (false negatives + true positives)

F1 score = 2 * ((precision * recall) / (precision + recall))

Cross Validation in Scikit-Learn

from sklearn import metrics

from sklearn import cross_validation as cv

splits = cv.train_test_split(X, y, test_size=0.2)

X_train, X_test, y_train, y_test = splits

model = ClassifierEstimator()

model.fit(X_train, y_train)

expected = y_test

predicted = model.predict(X_test)

print metrics.classification_report(expected, predicted)

print metrics.confusion_matrix(expected, predicted)

print metrics.f1_score(expected, predicted)

MSE & Coefficient of Determination

In regressions we can determine how well the model fits by computing the mean square error and the coefficient of determination.

MSE = np.mean((predicted-expected)**2)

R2 is a measure of “goodness of fit”: a value ∈ [0,1] where 1 is a perfect fit.

K-Part Cross Validation

from sklearn import metrics

from sklearn import cross_validation as cv

splits = cv.train_test_split(X, y, test_size=0.2)

X_train, X_test, y_train, y_test = splits

model = RegressionEstimator()

model.fit(X_train, y_train)

expected = y_test

predicted = model.predict(X_test)

print metrics.mean_squared_error(expected, predicted)

print metrics.r2_score(expected, predicted)

How to evaluate clusters? Visualization (but only in 2D)
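Beyond 2D visualization, scikit-learn also provides numeric cluster metrics; a small sketch using the silhouette coefficient (values range from -1 to 1, higher is better):

from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

iris = load_iris()
labels = KMeans(n_clusters=3).fit_predict(iris.data)
print silhouette_score(iris.data, labels)   # mean silhouette coefficient over all samples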

Other Evaluation

Unstable Data

Randomness is a significant part of data in the real world but problems with data can significantly affect results:

- outliers
- skew
- missing information
- incorrect data

Solution: seam testing/integration testing

Unpredictable Future

Machine learning models attempt to predict the future as new inputs come in - but human systems and processes are subject to change.

Solution: Precision/Recall tracking over time

Standardized Data Model Demo (Wheat Kernel Sizes)

A Tour of Scikit-Learn

Workshop

Select a data set from: https://archive.ics.uci.edu/ml/index.html

- Lay out the data in our data model
- Choose regression, classification, or clustering, and build the best model you can from it
- Report an evaluation of the model built
- Visualize aspects of your model
- Compare and contrast different algorithms

Submit your code via pull-request to repository!

Advanced Scikit-Learn

sklearn.pipeline.Pipeline(steps)

- Sequentially apply a chain of repeatable transformations followed by a final estimator, so the whole process can be validated at every step.

- Each step (except for the last) must implement Transformer, i.e. the fit and transform methods.

- The Pipeline itself implements the methods of both the Transformer and Estimator interfaces.

Pipelines

The Scikit-Learn Transformer API

class Transformer(Estimator):

    def transform(self, X):
        """Transforms the input data."""
        # transform ``X`` to ``X_prime``
        return X_prime

The most common use for the Pipeline is to combine multiple feature extraction methodologies into a single, repeatable processing step.

- FeatureUnion
- SelectKBest
- TruncatedSVD
- DictVectorizer
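As a hedged sketch of that pattern (the components and parameters below are chosen only for illustration), a FeatureUnion glues several extraction steps into a single Transformer inside a Pipeline:

from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_selection import SelectKBest
from sklearn.svm import LinearSVC

iris = load_iris()

features = FeatureUnion([
    ('svd', TruncatedSVD(n_components=2)),   # low-dimensional projection of the inputs
    ('kbest', SelectKBest(k=1)),             # best single feature by ANOVA F-score
])

model = Pipeline([('features', features), ('svm', LinearSVC())])
model.fit(iris.data, iris.target)
print model.score(iris.data, iris.target)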

An example of a distance based ML pipeline:

https://github.com/mclumd/shaku/blob/master/Shaku.ipynb

Pipelined Feature Extraction

Pipelined Model

>>> from sklearn import linear_model
>>> from sklearn.metrics import mean_squared_error
>>> from sklearn.preprocessing import PolynomialFeatures
>>> from sklearn.pipeline import make_pipeline
>>> model = make_pipeline(PolynomialFeatures(2), linear_model.Ridge())
>>> model.fit(X_train, y_train)
Pipeline(steps=[('polynomialfeatures', PolynomialFeatures(degree=2,
    include_bias=True, interaction_only=False)), ('ridge', Ridge(alpha=1.0,
    copy_X=True, fit_intercept=True, max_iter=None, normalize=False,
    solver='auto', tol=0.001))])
>>> mean_squared_error(y_test, model.predict(X_test))
3.1498887586451594
>>> model.score(X_test, y_test)
0.97090576345108104

Grid Search

- Prevent overfitting/collinearity by penalizing the size of the coefficients: minimize the penalized residual sum of squares, min_w ||Xw − y||² + α||w||².

- Said another way, shrink the coefficients toward zero.

- Here α > 0 is a complexity parameter that controls shrinkage: the larger α, the more robust the model is to collinearity.

- Said another way, this is the bias/variance tradeoff: the larger the ridge alpha, the higher the bias and the lower the variance.

Ridge Regression

Ridge Regression

>>> clf = linear_model.Ridge(alpha=0.5)

>>> clf.fit(X_train, y_train)

Ridge(alpha=0.5, copy_X=True, fit_intercept=True, max_iter=None,

normalize=False, solver='auto', tol=0.001)

>>> print mean_squared_error(y_test, clf.predict(X_test))

8.34260312032

>>> clf.score(X_test, y_test)

0.92129741176557278

We can search for the best parameter using RidgeCV, which is a form of grid search that uses a more efficient form of leave-one-out cross-validation.

>>> import numpy as np

>>> n_alphas = 200

>>> alphas = np.logspace(-10, -2, n_alphas)

>>> clf = linear_model.RidgeCV(alphas=alphas)

>>> clf.fit(X_train, y_train)

>>> print clf.alpha_

0.0010843659686896108

>>> clf.score(X_test, y_test)

0.92542477512171173

Choosing alpha

Error as a function of alpha

>>> clf = linear_model.Ridge(fit_intercept=False)

>>> errors = []

>>> for alpha in alphas:

... splits = tts(dataset.data, dataset.target('Y1'), test_size=0.2)

... X_train, X_test, y_train, y_test = splits

... clf.set_params(alpha=alpha)

... clf.fit(X_train, y_train)

... error = mean_squared_error(y_test, clf.predict(X_test))

... errors.append(error)

...

>>> axe = plt.gca()

>>> axe.plot(alphas, errors)

>>> plt.show()

Questions, Comments?