Playing with features for learning and prediction. Jongmin Kim, Seoul National University.


Page 1:

Playing with features for learning and prediction

Jongmin Kim, Seoul National University

Page 2:

Problem statement

• Predicting outcome of surgery

Page 3:

Predicting outcome of surgery

• Ideal approach

[Diagram: training data → ? → predicted outcome of surgery]

Page 4:

Predicting outcome of surgery

• Initial approach
– Predicting partial features

• Predict which features?

Page 5:

Predicting outcome of surgery

• Four surgeries: DHL + RFT + TAL + FDO

– flexion of the knee (min / max)
– dorsiflexion of the ankle (min)
– rotation of the foot (min / max)
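A minimal sketch of how such min/max summary features might be extracted, assuming each trajectory is a 1-D numpy array of joint angles over one gait cycle (the function name and sample data are hypothetical, not from the talk):

```python
import numpy as np

def gait_features(knee_flexion, ankle_dorsiflexion, foot_rotation):
    """Summary features from per-frame joint-angle trajectories (degrees)."""
    return np.array([
        knee_flexion.min(), knee_flexion.max(),    # flexion of the knee (min / max)
        ankle_dorsiflexion.min(),                  # dorsiflexion of the ankle (min)
        foot_rotation.min(), foot_rotation.max(),  # rotation of the foot (min / max)
    ])

# Hypothetical example: one gait cycle sampled at 100 frames.
t = np.linspace(0, 1, 100)
x = gait_features(60 * np.sin(np.pi * t),      # knee flexion
                  10 * np.cos(2 * np.pi * t),  # ankle dorsiflexion
                  5 * np.sin(2 * np.pi * t))   # foot rotation
print(x)  # 5-dimensional feature vector for one patient
```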

Page 6:

Predicting outcome of surgery

• Are these good features?

• Number of training examples
– DHL+RFT+TAL: 35 examples
– FDO+DHL+TAL+RFT: 33 examples

Page 7:

Machine learning and features

[Diagram: Data → Feature representation → Learning algorithm]

Page 8:

Features in motion

• Joint position / angle
• Velocity / acceleration
• Distance between body parts
• Contact status
• …
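A minimal sketch of how the velocity, acceleration, and distance features above might be computed, assuming joint positions are stored as a (frames, joints, 3) numpy array (the function name and frame rate are illustrative assumptions):

```python
import numpy as np

def motion_features(joints, dt=1.0 / 30):
    """joints: (T, J, 3) array of joint positions over T frames."""
    vel = np.gradient(joints, dt, axis=0)   # per-frame velocity
    acc = np.gradient(vel, dt, axis=0)      # per-frame acceleration
    diff = joints[:, :, None, :] - joints[:, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)    # (T, J, J) distances between body parts
    return vel, acc, dist

vel, acc, dist = motion_features(np.random.rand(120, 15, 3))
print(vel.shape, acc.shape, dist.shape)  # (120, 15, 3) (120, 15, 3) (120, 15, 15)
```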

Page 9:

Features in computer vision

[Image examples: SIFT, HoG, Textons, Spin images, RIFT, GLOH]

Page 10:

Machine learning and features

Page 11:

Outline

• Feature selection
– Feature ranking
– Subset selection: wrapper, filter, embedded
– Recursive Feature Elimination
– Combination of weak priors (boosting)
– AdaBoost (classification) / joint boosting (classification) / gradient boosting (regression)

• Prediction results with feature selection

• Feature learning?

Page 12:

Feature selection

• Alleviating the effect of the curse of dimensionality

• Improving prediction performance
• Faster and more cost-effective prediction
• Providing a better understanding of the data

Page 13:

Subset selection

• Wrapper

• Filter

• Embedded
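A hedged sketch of the three flavors on toy scikit-learn data (illustrative, not the talk's surgery data): a filter scores features independently of any model, a wrapper such as Recursive Feature Elimination repeatedly fits a model and drops the weakest features, and an embedded method selects features as a side effect of training.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=30,
                           n_informative=5, random_state=0)

# Filter: score each feature on its own (here, an ANOVA F-test).
filt = SelectKBest(f_classif, k=5).fit(X, y)

# Wrapper: RFE lets the model itself "wrap" the subset search.
wrap = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)

# Embedded: L1 regularization zeroes out weights during training.
emb = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

print("filter keeps:  ", filt.get_support(indices=True))
print("wrapper keeps: ", wrap.get_support(indices=True))
print("embedded keeps:", (emb.coef_ != 0).any(axis=0).nonzero()[0])
```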

Page 14:

Feature learning?

• Can we automatically learn a good feature representation?

• Known as: unsupervised feature learning, feature learning, deep learning, representation learning, etc.

• Hand-designed features (by humans):
1. need expert knowledge
2. require time-consuming hand-tuning

• When it’s unclear how to hand-design features: automatically learned features (by machine)

Page 15:

Learning Feature Representations

• Key idea:
– Learn the statistical structure or correlations of the data from unlabeled data
– The learned representations can be used as features in supervised and semi-supervised settings

Page 16:

Learning Feature Representations

[Diagram: Input (image / features) → Encoder → Output features; a Decoder forms the feed-back / generative / top-down path, while the Encoder is the feed-forward / bottom-up path]

Page 17:

Learning Feature Representations

• e.g. Predictive Sparse Decomposition [Kavukcuoglu et al., ’09]

[Diagram: input patch x → encoder σ(Wx) → sparse features z → decoder Dz → reconstruction of x; W: encoder filters, σ(.): sigmoid function, D: decoder filters, with an L1 sparsity penalty on z]
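A rough numpy sketch of the idea (not the authors' code): the decoder D reconstructs the patch from a sparse code z, while the feed-forward encoder σ(Wx) predicts that code. For brevity this collapses the paper's alternating optimization into plain gradient steps on a sparse-autoencoder-style loss, so treat it as a sketch of the objective, not the method.

```python
import numpy as np

def sigma(a):                            # encoder nonlinearity (tanh stand-in)
    return np.tanh(a)

rng = np.random.default_rng(0)
n, k, lam, lr = 64, 128, 0.1, 0.01       # patch dim, code dim, sparsity weight, step
D = rng.standard_normal((n, k)) * 0.1    # decoder filters
W = rng.standard_normal((k, n)) * 0.1    # encoder filters

for _ in range(200):
    x = rng.standard_normal(n)           # stand-in for an image patch
    z = sigma(W @ x)                     # fast feed-forward sparse code
    # Simplified loss: ||Dz - x||^2 + lam * |z|_1, with z tied to sigma(Wx).
    recon_err = D @ z - x
    D -= lr * np.outer(recon_err, z)                # gradient step on decoder
    code_grad = D.T @ recon_err + lam * np.sign(z)  # d(loss)/dz
    W -= lr * np.outer(code_grad * (1 - z**2), x)   # chain rule through tanh
```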

Page 18:

Stacked Auto-Encoders

[Diagram: Input image → Encoder/Decoder → Features → Encoder/Decoder → Features → Encoder/Decoder → Class label; each stage is an autoencoder]

[Hinton & Salakhutdinov Science ‘06]
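A minimal PyTorch sketch of the greedy layer-wise idea (illustrative; the layer sizes and training details are assumptions, not those of Hinton & Salakhutdinov): each layer is trained as an autoencoder on the codes produced by the layer below it.

```python
import torch
import torch.nn as nn

dims = [784, 256, 64]             # input -> features -> features (assumed sizes)
X = torch.rand(512, dims[0])      # stand-in for unlabeled images
encoders = []

inp = X
for d_in, d_out in zip(dims[:-1], dims[1:]):
    enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
    dec = nn.Sequential(nn.Linear(d_out, d_in), nn.Sigmoid())
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
    for _ in range(100):          # train this layer's autoencoder to reconstruct
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(inp)), inp)
        loss.backward()
        opt.step()
    encoders.append(enc)
    inp = enc(inp).detach()       # its codes become the next layer's input
```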

Page 19:

At Test Time

[Diagram: Input image → Encoder → Features → Encoder → Features → Encoder → Class label]

[Hinton & Salakhutdinov Science ‘06]

• Remove the decoders
• Use the feed-forward path
• Gives a standard (convolutional) neural network

• Can fine-tune with backprop
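Continuing the sketch above (it reuses the hypothetical `encoders`, `dims`, and `X` from that block): drop the decoders, stack the encoders into one feed-forward network, add a classifier layer on top, and fine-tune the whole stack with backprop. The label tensor is a stand-in.

```python
y = torch.randint(0, 10, (512,))                    # stand-in class labels
net = nn.Sequential(*encoders, nn.Linear(dims[-1], 10))  # encoders only + classifier
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for _ in range(100):                                # fine-tune with backprop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(net(X), y)
    loss.backward()
    opt.step()
```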

Page 20:

Status & plan

• Understanding the data / survey of learning techniques…

• Plan:
– November: finish experiments
– December: write the paper
– January: submit to SIGGRAPH
– August: present in the U.S.

• But before all of that….

Page 21:

Deep neural nets vs. boosting

• Deep nets:
– a single, highly non-linear system
– a “deep” stack of simpler modules
– all parameters are subject to learning

• Boosting & forests:
– a sequence of “weak” (simple) classifiers that are linearly combined to produce a powerful classifier
– subsequent classifiers do not exploit the representations of earlier classifiers; the result is a “shallow” linear mixture
– typically the features are not learned
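For the boosting side of the comparison, a hedged sketch on toy scikit-learn data, using AdaBoost for classification and gradient boosting for regression, the two variants named in the outline (the datasets and settings here are illustrative):

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingRegressor

Xc, yc = make_classification(n_samples=300, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(Xc, yc)
# Each weak learner is fit in sequence and linearly combined; it never
# reuses internal representations from earlier learners.
print("AdaBoost train accuracy:", clf.score(Xc, yc))

Xr, yr = make_regression(n_samples=300, n_features=10, random_state=0)
reg = GradientBoostingRegressor(n_estimators=100, random_state=0).fit(Xr, yr)
print("Gradient boosting train R^2:", reg.score(Xr, yr))
```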

Page 22:

Deep neural nets vs. boosting