Transcript of Speech, Perception, & AI

Page 1: Speech, Perception, & AI

Speech, Perception, & AI

Artificial Intelligence

CMSC 25000

March 5, 2002

Page 2: Speech, Perception, & AI

Agenda

• Speech
  – AI & Perception
  – Speech applications
• Speech Recognition
  – Coping with uncertainty
• Conclusions
  – Exam admin
  – Applied AI talk
  – Evaluations

Page 3: Speech, Perception, & AI

AI & Perception

• Classic AI
  – Disembodied reasoning: think of an action
    • Expert systems
    • Theorem provers
    • Full-knowledge planners
• Contemporary AI
  – Situated reasoning: thinking & acting
    • Spoken language systems
    • Image registration
    • Behavior-based robots

Page 4: Speech, Perception, & AI

Speech Applications

• Speech-to-speech translation

• Dictation systems

• Spoken language systems

• Adaptive & Augmentative technologies
  – E.g. talking books

• Speaker identification & verification

• Ubiquitous computing

Page 5: Speech, Perception, & AI

Speech Recognition

• Goal:
  – Given an acoustic signal, identify the sequence of words that produced it
  – Speech understanding goal:
    • Given an acoustic signal, identify the meaning intended by the speaker
• Issues:
  – Ambiguity: many possible pronunciations
  – Uncertainty: what signal, what word/sense produced this sound sequence

Page 6: Speech, Perception, & AI

Decomposing Speech Recognition

• Q1: What speech sounds were uttered?
  – Human languages: 40-50 phones
    • Basic sound units: b, m, k, ax, ey, … (ARPAbet)
    • Distinctions categorical to speakers
      – Acoustically continuous
    • Part of knowledge of language
      – Build per-language inventory
  – Could we learn these?

Page 7: Speech, Perception, & AI

Decomposing Speech Recognition

• Q2: What words produced these sounds?
  – Look up sound sequences in dictionary
  – Problem 1: Homophones
    • Two words, same sounds: too, two
  – Problem 2: Segmentation
    • No “space” between words in continuous speech
    • “I scream”/“ice cream”, “Wreck a nice beach”/“Recognize speech”
• Q3: What meaning produced these words?
  – NLP (But that’s not all!)

Page 8: Speech, Perception, & AI

Signal Processing

• Goal: Convert impulses from the microphone into a representation that
  – is compact
  – encodes features relevant for speech recognition
• Compactness: Step 1 (see the sketch below)
  – Sampling rate: how often to sample the signal
    • 8 kHz, 16 kHz (44.1 kHz = CD quality)
  – Quantization factor: how much precision per sample
    • 8-bit, 16-bit (encoding: u-law, linear, …)
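To make the two knobs concrete, here is a minimal sketch (standard-library `wave` module only; the file name `utterance.wav` is a hypothetical stand-in) that reads a PCM WAV file and reports its sampling rate and quantization depth:

```python
import wave

# Hypothetical file name; any PCM-encoded WAV file will do.
with wave.open("utterance.wav", "rb") as w:
    rate = w.getframerate()    # sampling rate: samples per second, e.g. 8000 or 16000
    width = w.getsampwidth()   # quantization: bytes per sample, 1 = 8-bit, 2 = 16-bit
    n = w.getnframes()         # total number of samples per channel

print(f"{rate} Hz, {8 * width}-bit, {n / rate:.2f} s of audio")
print(f"one 10 ms frame = {rate // 100} samples")   # e.g. 80 samples at 8 kHz
```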

Page 9: Speech, Perception, & AI

(A Little More) Signal Processing

• Compactness & feature identification
  – Capture mid-length speech phenomena
    • Typically “frames” of 10 ms (80 samples at 8 kHz), overlapping
  – Vector of features: e.g. energy at some frequency
  – Vector quantization (see the sketch below):
    • n-feature vectors live in an n-dimensional space
      – Divide it into m regions (e.g. 256)
      – All vectors in a region get the same label, e.g. C256
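A minimal numpy sketch of the vector-quantization step: each feature vector gets the label of its nearest codebook entry. The feature dimensionality, the random codebook, and the random frames are stand-ins for illustration; in a real front end the codebook would come from clustering (e.g. k-means) over training frames:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 12       # e.g. energy in 12 frequency bands (illustrative choice)
codebook_size = 256   # m regions, as on the slide

# Stand-ins: in practice the codebook comes from clustering training frames,
# and `frames` comes from the signal-processing front end.
codebook = rng.standard_normal((codebook_size, n_features))
frames = rng.standard_normal((100, n_features))      # 100 feature vectors

# Vector quantization: replace each frame by the index of its nearest codeword.
dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
labels = dists.argmin(axis=1)    # integers in 0..255: the VQ symbol sequence

print(labels[:10])
```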

Page 10: Speech, Perception, & AI

Speech Recognition Model

• Question: Given the signal, what words?
• Problem: uncertainty
  – Capture of sound by microphone, how phones produce sounds, which words make phones, etc.
• Solution: Probabilistic model
  – P(words|signal) = P(signal|words)P(words)/P(signal)
  – Idea: maximize P(signal|words) * P(words) (spelled out below)
    • P(signal|words): acoustic model; P(words): language model
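Spelled out, this is the usual noisy-channel derivation; P(signal) can be dropped from the maximization because it does not depend on the candidate word sequence W:

```latex
\begin{align*}
\hat{W} &= \arg\max_{W} P(W \mid \text{signal}) \\
        &= \arg\max_{W} \frac{P(\text{signal} \mid W)\,P(W)}{P(\text{signal})} \\
        &= \arg\max_{W} \underbrace{P(\text{signal} \mid W)}_{\text{acoustic model}}
           \,\underbrace{P(W)}_{\text{language model}}
\end{align*}
```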

Page 11: Speech, Perception, & AI

Language Model

• Idea: some utterances are more probable than others
• Possible solution: PCFGs???
  – Pro: probabilities; Con: little context effect
    • “Dog bites man” as probable as “Man bites dog”
• Standard solution: “n-gram” model (see the sketch below)
  – Typically tri-gram: P(w_i | w_i-1, w_i-2)
    • Collect training data
    • Smooth with bi- & uni-grams to handle sparseness
  – Product over words in the utterance
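A minimal sketch of a tri-gram language model smoothed by linear interpolation with bi- and uni-gram estimates. The interpolation weights and the sentence-start handling are simplifications for illustration, not tuned values:

```python
from collections import Counter

def train(sentences):
    """Count uni-, bi-, and tri-grams from tokenized training sentences."""
    uni, bi, tri = Counter(), Counter(), Counter()
    for sent in sentences:
        words = ["<s>", "<s>"] + sent + ["</s>"]
        for i in range(2, len(words)):
            uni[words[i]] += 1
            bi[(words[i-1], words[i])] += 1
            tri[(words[i-2], words[i-1], words[i])] += 1
    return uni, bi, tri

def p_word(w, w1, w2, uni, bi, tri, lambdas=(0.6, 0.3, 0.1)):
    """P(w | w2 w1): interpolate tri-, bi-, and uni-gram estimates."""
    l3, l2, l1 = lambdas          # illustrative weights, not tuned on held-out data
    total = sum(uni.values())
    p_tri = tri[(w2, w1, w)] / bi[(w2, w1)] if bi[(w2, w1)] else 0.0
    p_bi = bi[(w1, w)] / uni[w1] if uni[w1] else 0.0
    p_uni = uni[w] / total if total else 0.0
    return l3 * p_tri + l2 * p_bi + l1 * p_uni

def p_sentence(sent, model):
    """Product over words in the utterance, as on the slide."""
    uni, bi, tri = model
    words = ["<s>", "<s>"] + sent + ["</s>"]
    p = 1.0
    for i in range(2, len(words)):
        p *= p_word(words[i], words[i-1], words[i-2], uni, bi, tri)
    return p

model = train([["man", "bites", "dog"], ["dog", "bites", "man"]])
print(p_sentence(["dog", "bites", "man"], model))
```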

Page 12: Speech, Perception, & AI

Acoustic Model

• P(signal|words)
  – words -> phones, then phones -> vector quantization labels
• Words -> phones
  – Pronunciation dictionary lookup
    • Multiple pronunciations?
      – Probability distribution
        » Dialect variation: tomato
        » + Coarticulation
      – Product along path (see the sketch below)

[Figure: pronunciation networks for “tomato”: one with dialect variation only (ey vs aa, 0.5 each, between t-ow-m and t-ow), and one adding coarticulation (ow vs ax on the first vowel, weighted 0.2/0.8)]
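A minimal sketch of the pronunciation-lookup level, assuming a hypothetical two-variant entry for “tomato” with the 0.5/0.5 dialect weights from the figure; P(phones|word) is the product of branch probabilities along the chosen path, here pre-collapsed to one number per variant:

```python
# Hypothetical pronunciation table: each variant of "tomato" carries the product
# of its branch probabilities from a network like the one in the figure above.
PRONUNCIATIONS = {
    "tomato": [
        (("t", "ow", "m", "ey", "t", "ow"), 0.5),   # dialect variant with [ey]
        (("t", "ow", "m", "aa", "t", "ow"), 0.5),   # dialect variant with [aa]
    ],
}

def p_phones_given_word(phones, word):
    """P(phone sequence | word): return the probability attached to the
    matching pronunciation path, or 0 if the word can't produce these phones."""
    for variant, prob in PRONUNCIATIONS.get(word, []):
        if tuple(phones) == variant:
            return prob
    return 0.0

print(p_phones_given_word(["t", "ow", "m", "aa", "t", "ow"], "tomato"))   # 0.5
```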

Page 13: Speech, Perception, & AI

Acoustic Model

• P(signal|phones)
  – Use a Hidden Markov Model (HMM)
    • Transition probabilities, output probabilities
    • Probability of a sequence: sum of the probabilities of its paths (see the forward sketch below)
• Combine into one big HMM

[Figure: phone HMM with states Onset, Mid, End, Final; self-loop/advance transition probabilities 0.3/0.7, 0.9/0.1, 0.4/0.6; each state has its own output distribution over VQ labels, e.g. Onset: C1 0.5, C2 0.2, C3 0.3]
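The “sum of prob of paths” quantity is what the forward algorithm computes. A minimal numpy sketch for a discrete-output HMM over VQ labels; the 3-state model at the bottom uses stand-in numbers, not the values from the figure:

```python
import numpy as np

def forward(obs, pi, A, B):
    """P(observation sequence | HMM), summed over all state paths.
    obs: VQ symbol indices; pi: initial state probs;
    A[i, j]: P(state j | state i); B[i, k]: P(symbol k | state i)."""
    alpha = pi * B[:, obs[0]]            # path probabilities ending in each state at t = 0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # extend every partial path by one step
    return alpha.sum()                   # marginalize over the final state

# Stand-in 3-state model with 4 VQ symbols (illustrative numbers only).
pi = np.array([1.0, 0.0, 0.0])
A = np.array([[0.3, 0.7, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
B = np.array([[0.5, 0.2, 0.3, 0.0],
              [0.0, 0.2, 0.7, 0.1],
              [0.1, 0.0, 0.4, 0.5]])

print(forward([0, 2, 2, 3], pi, A, B))
```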

Page 14: Speech, Perception, & AI

Viterbi Algorithm

• Find the BEST word sequence given the signal
  – Best P(words|signal)
  – Take the HMM & the VQ label sequence
    • => word sequence (with its probability)
• Dynamic programming solution (see the sketch below)
  – Record the most probable path ending at each state i
    • Then the most probable path from i to the end
  – O(bMn)
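A minimal sketch of the Viterbi recurrence, using the same (pi, A, B) conventions as the forward sketch above: keep, for each state, the probability of the best path ending there, then backtrace. The small 2-state model is a stand-in for illustration:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable state path for a VQ symbol sequence, plus its probability.
    delta[t, j] = probability of the best path ending in state j at time t."""
    T, S = len(obs), len(pi)
    delta = np.zeros((T, S))
    back = np.zeros((T, S), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A     # extend each best path by one transition
        back[t] = scores.argmax(axis=0)        # remember the best predecessor of each state
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]           # best final state, then backtrace
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return list(reversed(path)), float(delta[-1].max())

# Tiny stand-in 2-state model with 2 VQ symbols.
pi = np.array([1.0, 0.0])
A = np.array([[0.6, 0.4],
              [0.0, 1.0]])
B = np.array([[0.7, 0.3],
              [0.2, 0.8]])
print(viterbi([0, 1, 1], pi, A, B))   # best state path and its probability
```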

Page 15: Speech, Perception, & AI

Learning HMMs

• Issue: Where do the probabilities come from?

• Solution: Learn from data
  – Signal + word pairs
  – Baum-Welch algorithm learns efficiently (see the sketch below)

• See CMSC 35100
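For the curious, a compact Baum-Welch (EM) sketch for a discrete-output HMM trained on a single short sequence of VQ labels. It omits the scaling / log-space arithmetic and the multi-sequence training that real recognizers need:

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=20, seed=0):
    """Estimate HMM parameters (pi, A, B) from one sequence of VQ labels by EM.
    Minimal sketch: no scaling, so only suitable for short sequences."""
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    A = rng.random((n_states, n_states)); A /= A.sum(axis=1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(axis=1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    T = len(obs)
    for _ in range(n_iter):
        # E-step: forward (alpha) and backward (beta) probabilities.
        alpha = np.zeros((T, n_states)); alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        beta = np.zeros((T, n_states)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        likelihood = alpha[-1].sum()
        gamma = alpha * beta / likelihood              # P(state at t | obs)
        xi = np.zeros((T - 1, n_states, n_states))     # P(state at t, state at t+1 | obs)
        for t in range(T - 1):
            xi[t] = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / likelihood
        # M-step: re-estimate parameters from the expected counts.
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return pi, A, B

pi, A, B = baum_welch([0, 1, 1, 2, 0, 1, 2, 2], n_states=3, n_symbols=3)
```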

Page 16: Speech, Perception, & AI

Does it work?

• Yes:
  – 99% on isolated single digits
  – 95% on restricted short utterances (air travel)
  – 80+% on professional news broadcasts
• No:
  – 55% on conversational English
  – 35% on conversational Mandarin
  – ?? on noisy cocktail parties

Page 17: Speech, Perception, & AI

Speech Recognition as Modern AI

• Draws on a wide range of AI techniques
  – Knowledge representation & manipulation
    • Optimal search: Viterbi decoding
  – Machine learning
    • Baum-Welch for HMMs
    • Nearest neighbor & k-means clustering for signal identification
  – Probabilistic reasoning / Bayes rule
    • Manage uncertainty in the signal, phone, and word mapping
• Enables real-world applications

Page 18: Speech, Perception, & AI

Final Exam

• Thursday, March 14: 10:30-12:30
  – One “cheat sheet”
• Cumulative
  – Higher proportion on the 2nd half of the course
• Style: like the midterm, apply concepts
  – No coding necessary
• Review session: ???
• Review materials: will be available

Page 19: Speech, Perception, & AI

Seminar

• Tomorrow: Wednesday, March 6, 2:30pm
• Location: Ry 251
• Title:
  – Computational Complexity of Air Travel Planning
• Speaker: Carl de Marcken, ITA Software
• Very abstract abstract:
  – Application of AI techniques to real-world problems
  – aka: how to make money doing AI