CS 60050 Machine Learning

Transcript of CS 60050 Machine Learning

Page 1: CS 60050  Machine Learning

CS 60050 Machine Learning

Page 2: CS 60050  Machine Learning

What is Machine Learning?

Adapt to / learn from data
To optimize a performance function

Can be used to:
Extract knowledge from data

Learn tasks that are difficult to formalise

Create software that improves over time

Page 3: CS 60050  Machine Learning

When to learn
Human expertise does not exist (navigating on Mars)
Humans are unable to explain their expertise (speech recognition)
Solution changes in time (routing on a computer network)
Solution needs to be adapted to particular cases (user biometrics)

Learning involves
Learning general models from data
Data is cheap and abundant. Knowledge is expensive and scarce.
Customer transactions to computer behaviour
Build a model that is a good and useful approximation to the data

Page 4: CS 60050  Machine Learning

Applications
Speech and hand-writing recognition
Autonomous robot control
Data mining and bioinformatics: motifs, alignment, …
Playing games
Fault detection
Clinical diagnosis
Spam email detection
Credit scoring, fraud detection
Web mining: search engines
Market basket analysis, …

Applications are diverse but methods are generic

Page 5: CS 60050  Machine Learning

Polynomial Curve Fitting

Page 6: CS 60050  Machine Learning

0th Order Polynomial

Page 7: CS 60050  Machine Learning

1st Order Polynomial

Page 8: CS 60050  Machine Learning

3rd Order Polynomial

Page 9: CS 60050  Machine Learning

9th Order Polynomial

Page 10: CS 60050  Machine Learning

Over-fitting

Root-Mean-Square (RMS) Error:
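A standard definition, assuming the usual sum-of-squares training error $E(\mathbf{w}) = \tfrac{1}{2}\sum_{n=1}^{N}\{y(x_n, \mathbf{w}) - t_n\}^2$ over $N$ points with targets $t_n$:

$$E_{\mathrm{RMS}} = \sqrt{2E(\mathbf{w}^\star)/N}$$

Dividing by $N$ lets data sets of different sizes be compared on an equal footing, and the square root puts the error on the same scale as the targets.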

Page 11: CS 60050  Machine Learning

Data Set Size:

9th Order Polynomial

Page 12: CS 60050  Machine Learning

Data Set Size:

9th Order Polynomial
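Since the slides illustrate this effect with figures, here is a minimal NumPy sketch of the same idea; the underlying function sin(2πx), the noise level, and the data set sizes are illustrative assumptions, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # noisy samples of an assumed underlying function sin(2*pi*x)
    x = np.sort(rng.uniform(0.0, 1.0, n))
    t = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, n)
    return x, t

def rms_error(w, x, t):
    # root-mean-square error of the polynomial with coefficients w
    return np.sqrt(np.mean((np.polyval(w, x) - t) ** 2))

x_test, t_test = make_data(100)            # held-out data

for n_train in (10, 100):                  # small vs. larger training set
    x_tr, t_tr = make_data(n_train)
    w = np.polyfit(x_tr, t_tr, deg=9)      # 9th-order least-squares fit
    print(f"N={n_train:3d}  train RMS={rms_error(w, x_tr, t_tr):.3f}"
          f"  test RMS={rms_error(w, x_test, t_test):.3f}")
```

With the small training set the 9th-order fit typically drives the training RMS near zero while the test RMS grows large (over-fitting); with the larger training set the two errors come much closer together.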

Page 13: CS 60050  Machine Learning

Model Selection

Cross-Validation
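A minimal sketch of model selection by cross-validation, assuming the same kind of synthetic curve-fitting data as in the previous sketch; the fold count and candidate polynomial orders are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 30))
t = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, 30)   # assumed noisy target

def cv_rms(x, t, degree, n_folds=5):
    # S-fold cross-validation estimate of RMS error for one polynomial order
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, n_folds)
    errs = []
    for k in range(n_folds):
        val = folds[k]
        tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        w = np.polyfit(x[tr], t[tr], deg=degree)
        errs.append(np.sqrt(np.mean((np.polyval(w, x[val]) - t[val]) ** 2)))
    return float(np.mean(errs))

scores = {d: cv_rms(x, t, d) for d in range(10)}
best = min(scores, key=scores.get)          # order with lowest held-out RMS
print("selected polynomial order:", best)
```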

Page 14: CS 60050  Machine Learning


Page 15: CS 60050  Machine Learning

Example 2: Speech recognition

Data representation: features from spectral analysis of speech signals (two in this simple example).

Task: Classification of vowel sounds in words of the form “h-?-d”

Problem features:

Highly variable data with same classification.

Good feature selection is very important.

Speech recognition is often broken into a number of smaller tasks like this.

Page 16: CS 60050  Machine Learning
Page 17: CS 60050  Machine Learning

Example 3: DNA microarrays

DNA from ~10000 genes attached to a glass slide (the microarray).

Green and red labels attached to mRNA from two different samples.

mRNA is hybridized (stuck) to the DNA on the chip and green/red ratio is used to measure relative abundance of gene products.

Page 18: CS 60050  Machine Learning
Page 19: CS 60050  Machine Learning

DNA microarrays

Data representation: ~10000 green/red intensity levels, ranging from 10 to 10000.

Tasks: Sample classification, gene classification, visualisation and clustering of genes/samples.

Problem features:

High-dimensional data but relatively small number of examples.

Extremely noisy data (noise ~ signal).

Lack of good domain knowledge.

Page 20: CS 60050  Machine Learning

Projection of 10000 dimensional data onto 2D using PCA effectively separates cancer subtypes.
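A minimal sketch of such a projection, assuming a samples-by-genes matrix; the random matrix here merely stands in for real expression data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10000))          # 60 samples x ~10000 genes (placeholder)

Xc = X - X.mean(axis=0)                   # centre each gene
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                         # coordinates on the first two PCs
print(Z.shape)                            # (60, 2) -- ready for a 2-D scatter plot
```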

Page 21: CS 60050  Machine Learning

Probabilistic models

A large part of the module will deal with methods that have an explicit probabilistic interpretation:

Good for dealing with uncertainty

e.g. is a handwritten digit a three or an eight?

Provides interpretable results

Unifies methods from different fields

Page 22: CS 60050  Machine Learning

Many of the following slides are borrowed from material by Raymond J. Mooney, University of Texas at Austin.

Page 23: CS 60050  Machine Learning

Designing a Learning System
Choose the training experience.
Choose exactly what is to be learned, i.e. the target function.
Choose how to represent the target function.
Choose a learning algorithm to infer the target function from the experience.

(Diagram: Environment/Experience → Learner → Knowledge → Performance Element)

Page 24: CS 60050  Machine Learning

Sample Learning Problem

Learn to play checkers from self-play

We will develop an approach analogous to that used in the first machine learning system, developed by Arthur Samuel at IBM in 1959.

Page 25: CS 60050  Machine Learning

Training Experience
Direct experience: Given sample input and output pairs for a useful target function.
Checker boards labeled with the correct move, e.g. extracted from a record of expert play.

Indirect experience: Given feedback which is not direct I/O pairs for a useful target function.
Potentially arbitrary sequences of game moves and their final game results.

Credit/Blame Assignment Problem: How to assign credit or blame to individual moves given only indirect feedback?

Page 26: CS 60050  Machine Learning

Source of Training Data
Provided random examples outside of the learner’s control.
Negative examples available, or only positive?

Good training examples selected by a “benevolent teacher.”
“Near miss” examples.

Learner can query an oracle about the class of an unlabeled example in the environment.

Learner can construct an arbitrary example and query an oracle for its label.

Learner can design and run experiments directly in the environment without any human guidance.

Page 27: CS 60050  Machine Learning

Training vs. Test Distribution

Generally assume that the training and test examples are independently drawn from the same overall distribution of data.
IID: Independently and identically distributed.

If examples are not independent, collective classification is required.

If the test distribution is different, transfer learning is required.

Page 28: CS 60050  Machine Learning

Choosing a Target Function
What function is to be learned, and how will it be used by the performance system?
For checkers, assume we are given a function for generating the legal moves for a given board position and want to decide the best move.
Could learn a function: ChooseMove(board, legal-moves) → best-move.
Or could learn an evaluation function, V(board) → R, that gives each board position a score for how favorable it is. V can be used to pick a move by applying each legal move, scoring the resulting board position, and choosing the move that results in the highest-scoring board position, as sketched below.
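A minimal sketch of this move-selection rule; legal_moves, apply_move, and V_hat are assumed helper functions, not part of the slides:

```python
def choose_move(board, legal_moves, apply_move, V_hat):
    """Return the legal move whose resulting position scores highest under V_hat."""
    return max(legal_moves(board),
               key=lambda m: V_hat(apply_move(board, m)))
```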

Page 29: CS 60050  Machine Learning

Ideal Definition of V(b)
If b is a final winning board, then V(b) = 100

If b is a final losing board, then V(b) = –100

If b is a final draw board, then V(b) = 0

Otherwise, V(b) = V(b′), where b′ is the highest-scoring final board position that can be reached starting from b and playing optimally until the end of the game (assuming the opponent also plays optimally).
Can be computed using a complete minimax search of the finite game tree.
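A minimal sketch of that minimax computation, assuming hypothetical helpers is_final, final_value (returning 100, –100 or 0), and successors; to_move is +1 for the learner and –1 for the opponent:

```python
def ideal_V(board, to_move, is_final, final_value, successors):
    # Complete minimax search to the end of the game (non-operational in practice).
    if is_final(board):
        return final_value(board)
    values = (ideal_V(b2, -to_move, is_final, final_value, successors)
              for b2 in successors(board, to_move))
    return max(values) if to_move == +1 else min(values)
```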

Page 30: CS 60050  Machine Learning

Approximating V(b)

Computing V(b) is intractable since it involves searching the complete exponential game tree.

Therefore, this definition is said to be non-operational.

An operational definition can be computed in reasonable (polynomial) time.

Need to learn an operational approximation to the ideal evaluation function.

Page 31: CS 60050  Machine Learning

Representing the Target Function

Target function can be represented in many ways: lookup table, symbolic rules, numerical function, neural network.

There is a trade-off between the expressiveness of a representation and the ease of learning.

The more expressive a representation, the better it will be at approximating an arbitrary function; however, more examples will be needed to learn an accurate function.

Page 32: CS 60050  Machine Learning

Linear Function for Representing V(b)
In checkers, use a linear approximation of the evaluation function.

bp(b): number of black pieces on board b

rp(b): number of red pieces on board b

bk(b): number of black kings on board b

rk(b): number of red kings on board b

bt(b): number of black pieces threatened (i.e. which can be immediately taken by red on its next turn)

rt(b): number of red pieces threatened

$$V(b) = w_0 + w_1\,bp(b) + w_2\,rp(b) + w_3\,bk(b) + w_4\,rk(b) + w_5\,bt(b) + w_6\,rt(b)$$
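A minimal sketch of this linear evaluation function; features is an assumed helper that extracts the six board features listed above:

```python
def V_hat(board, w, features):
    # Weighted linear combination of the six checkers board features.
    bp, rp, bk, rk, bt, rt = features(board)
    return (w[0] + w[1] * bp + w[2] * rp + w[3] * bk
                 + w[4] * rk + w[5] * bt + w[6] * rt)
```

Such a V_hat can be plugged directly into the move-selection sketch from the “Choosing a Target Function” slide.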

Page 33: CS 60050  Machine Learning

Obtaining Training Values

Direct supervision may be available for the target function.
<<bp=3, rp=0, bk=1, rk=0, bt=0, rt=0>, 100>

(win for black)

With indirect feedback, training values can be estimated using temporal difference learning (used in reinforcement learning, where supervision is a delayed reward), as in the sketch below.
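A minimal sketch of one simple way to generate such training values from a single game trace: each non-final position gets the current estimate of its successor, and the final position gets the game outcome. The helpers and the exact rule are assumptions in the spirit of the slide, not its prescribed method:

```python
def training_values(game_boards, outcome, V_hat):
    """game_boards: positions where the learner moved, in order.
    outcome: +100 win, -100 loss, 0 draw. Returns (board, V_train) pairs."""
    examples = []
    for i, b in enumerate(game_boards):
        if i + 1 < len(game_boards):
            # V_train(b) <- V_hat(successor of b), the temporal-difference-style estimate
            examples.append((b, V_hat(game_boards[i + 1])))
        else:
            # final position is labeled with the actual game result
            examples.append((b, outcome))
    return examples
```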

Page 34: CS 60050  Machine Learning

Lessons Learned about Learning

Learning can be viewed as using direct or indirect experience to approximate a chosen target function.

Function approximation can be viewed as a search through a space of hypotheses (representations of functions) for one that best fits a set of training data.

Different learning methods assume different hypothesis spaces (representation languages) and/or employ different search techniques.

Page 35: CS 60050  Machine Learning

Various Function Representations

Numerical functions
Linear regression
Neural networks
Support vector machines

Symbolic functions
Decision trees
Rules in propositional logic
Rules in first-order predicate logic

Instance-based functions
Nearest-neighbor
Case-based

Probabilistic graphical models
Naïve Bayes
Bayesian networks
Hidden Markov Models (HMMs)
Probabilistic Context-Free Grammars (PCFGs)
Markov networks

Page 36: CS 60050  Machine Learning

Various Search Algorithms

Gradient descent
Perceptron
Backpropagation

Dynamic programming
HMM learning
PCFG learning

Divide and conquer
Decision tree induction
Rule learning

Evolutionary computation
Genetic Algorithms (GAs)
Genetic Programming (GP)
Neuro-evolution

Page 37: CS 60050  Machine Learning

Evaluation of Learning Systems

Experimental
Conduct controlled cross-validation experiments to compare various methods on a variety of benchmark datasets.
Gather data on their performance, e.g. test accuracy, training time, testing time.
Analyze differences for statistical significance (see the sketch below).

Theoretical
Analyze algorithms mathematically and prove theorems about their:
Computational complexity
Ability to fit training data
Sample complexity (number of training examples needed to learn an accurate function)
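A minimal sketch of the experimental methodology above, assuming scikit-learn and SciPy; the dataset, the two methods, and the paired t-test are illustrative choices, not prescribed by the slides:

```python
import numpy as np
from scipy import stats
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cv = KFold(n_splits=10, shuffle=True, random_state=0)     # same folds for both methods

tree = DecisionTreeClassifier(random_state=0)
logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

acc_tree = cross_val_score(tree, X, y, cv=cv)             # test accuracy per fold
acc_logreg = cross_val_score(logreg, X, y, cv=cv)

t_stat, p_value = stats.ttest_rel(acc_tree, acc_logreg)   # paired test over the folds
print(f"tree {acc_tree.mean():.3f}  logreg {acc_logreg.mean():.3f}  p = {p_value:.3f}")
```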

Page 38: CS 60050  Machine Learning

History of Machine Learning

1950s:
Samuel’s checker player
Selfridge’s Pandemonium

1960s:
Neural networks: Perceptron
Pattern recognition
Learning in the limit theory
Minsky and Papert prove limitations of the Perceptron

1970s:
Symbolic concept induction
Winston’s arch learner
Expert systems and the knowledge acquisition bottleneck
Quinlan’s ID3
Michalski’s AQ and soybean diagnosis
Scientific discovery with BACON
Mathematical discovery with AM

Page 39: CS 60050  Machine Learning

History of Machine Learning

1980s:
Advanced decision tree and rule learning
Explanation-Based Learning (EBL)
Learning and planning and problem solving
Utility problem
Analogy
Cognitive architectures
Resurgence of neural networks (connectionism, backpropagation)
Valiant’s PAC Learning Theory
Focus on experimental methodology

1990s:
Data mining
Adaptive software agents and web applications
Text learning
Reinforcement Learning (RL)
Inductive Logic Programming (ILP)
Ensembles: Bagging, Boosting, and Stacking
Bayes net learning

Page 40: CS 60050  Machine Learning

History of Machine Learning

2000s:
Support vector machines
Kernel methods
Graphical models
Statistical relational learning
Transfer learning
Sequence labeling
Collective classification and structured outputs
Computer systems applications:
Compilers
Debugging
Graphics
Security (intrusion, virus, and worm detection)
E-mail management
Personalized assistants that learn
Learning in robotics and vision