Bang-Xuan Huang Department of Computer Science & Information Engineering
Syntactic And Sub-lexical Features For Turkish Discriminative Language Models
ICASSP 2010. Ebru Arısoy, Murat Saraçlar, Brian Roark, Izhak Shafran
Bang-Xuan Huang
Department of Computer Science & Information Engineering
National Taiwan Normal University
Outline
• Introduction
• Sub-lexical language models
• Feature sets for DLM
  – Morphological Features
  – Syntactic Features
  – Sub-lexical Features
• Experiments
• Conclusions and Discussion
Introduction
• In this paper we make use of both sub-lexical recognition units and discriminative training in Turkish language models.
• Turkish is an agglutinative language.
• Its agglutinative nature leads to a high number of out-of-vocabulary (OOV) words, which degrade ASR accuracy.
• To handle the OOV problem, vocabularies composed of sub-lexical units have been proposed for agglutinative languages.
Notes: "agglutinative" means most words are formed by joining morphemes together. Syntactic (句法, "syntax") features apply at the sentence level, e.g. 今天 下午 需要 開會 ("a meeting is needed this afternoon"); lexical features apply at the word level (e.g. the article "a").
• DLM is a complementary approach to the baseline language model.
• In contrast to the generative language model, it is trained on acoustic sequences with their transcripts to optimize discriminative objective functions, using both positive (reference transcriptions) and negative (recognition errors) examples.
• DLM is a feature-based language modeling approach; therefore, each candidate hypothesis in the DLM training data is represented as a feature vector of the acoustic input, x, and the candidate hypothesis, y.
[Figure: the feature vector Φ(x, y) = (Φ_0(x, y), Φ_1(x, y), …, Φ_i(x, y), …) computed for a sentence x and each candidate hypothesis y, e.g. from an N-best list or lattice.]
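A discriminative language model of this form can be trained, for example, with a structured perceptron over N-best lists. The sketch below is only an illustration: the unigram/bigram hypothesis features are hypothetical stand-ins, not the paper's actual feature set, but they show how a weight vector w scores candidates via w · Φ(x, y).

```python
from collections import defaultdict

def phi(x, y):
    """Toy feature map: unigram and bigram counts of the hypothesis y."""
    feats = defaultdict(float)
    toks = y.split()
    for t in toks:
        feats["uni:" + t] += 1.0
    for a, b in zip(toks, toks[1:]):
        feats["bi:" + a + "_" + b] += 1.0
    return feats

def score(w, feats):
    return sum(w[f] * v for f, v in feats.items())

def perceptron_train(data, epochs=5):
    """data: list of (input x, reference transcription, N-best candidates)."""
    w = defaultdict(float)
    for _ in range(epochs):
        for x, ref, nbest in data:
            # model's current best hypothesis under w
            best = max(nbest, key=lambda y: score(w, phi(x, y)))
            if best != ref:
                # promote reference (positive) features,
                # demote the erroneous hypothesis (negative) features
                for f, v in phi(x, ref).items():
                    w[f] += v
                for f, v in phi(x, best).items():
                    w[f] -= v
    return w
```

On each error, the weights move toward the reference's features and away from the recognizer's current pick, which mirrors the positive/negative training described above.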
Sub-lexical models
• In this approach, the recognition lexicon is composed of sub-lexical units instead of words.
• Grammatically-derived units, stems, affixes or their groupings, and statistically-derived units, morphs, have both been proposed as lexical items for Turkish ASR.
• Morphs are learned statistically from words by the Morfessor algorithm. Morfessor uses a Minimum Description Length principle to learn a sub-word lexicon in an unsupervised manner.
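As an illustration of sub-lexical segmentation (a toy sketch in the spirit of Minimum Description Length, not the Morfessor algorithm itself), the following assigns each morph a code length of −log of its relative frequency over a hypothetical morph inventory, and picks the lowest-cost split by dynamic programming:

```python
import math

# Hypothetical Turkish morph counts; not taken from the paper's data.
morph_counts = {"ev": 50, "ler": 40, "im": 30, "de": 35, "evler": 5}
total = sum(morph_counts.values())
cost = {m: -math.log(c / total) for m, c in morph_counts.items()}

def segment(word):
    """Viterbi search over all splits of `word` into known morphs."""
    n = len(word)
    best = [0.0] + [math.inf] * n          # best[i] = min cost of word[:i]
    back = [0] * (n + 1)                   # back-pointer to split position
    for i in range(1, n + 1):
        for j in range(i):
            piece = word[j:i]
            if piece in cost and best[j] + cost[piece] < best[i]:
                best[i] = best[j] + cost[piece]
                back[i] = j
    if math.isinf(best[n]):
        return [word]                      # unsegmentable: keep whole word
    morphs, i = [], n
    while i > 0:
        morphs.append(word[back[i]:i])
        i = back[i]
    return morphs[::-1]

print(segment("evlerimde"))  # ['ev', 'ler', 'im', 'de'] ("in my houses")
```

Note that the frequent morphs ev + ler beat the rarer whole unit evler on total code length, which is the intuition behind MDL-style segmentation.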
Feature sets for DLM
– Morphological Features
– Syntactic Features
– Sub-lexical Features
• Clustering of sub-lexical units: Brown et al.'s algorithm; minimum edit distance (MED)
• Long distance triggers
Feature sets for DLM
• Root (原型, "base form")
  ex: able => dis-able, en-able, un-able, comfort-able-ly, …
• Inflectional groups (IG)
• Brown et al.'s algorithm
  – semantically-based, syntactically-based
• Minimum edit distance (MED)
  – the minimum number of edit operations (insertion, deletion, substitution) needed to transform one string into another
• Ex: intention -> execution
  del 'i'        => ntention
  sub 'n' to 'e' => etention
  sub 't' to 'x' => exention
  ins 'u'        => exenution
  sub 'n' to 'c' => execution
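The five steps above (1 deletion, 3 substitutions, 1 insertion) can be computed with the standard dynamic-programming algorithm for edit distance; a minimal sketch with unit costs:

```python
def med(s, t):
    """Minimum edit distance between strings s and t,
    with unit costs for insertion, deletion, and substitution."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                           # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                           # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[m][n]

print(med("intention", "execution"))  # 5, matching the five steps above
```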
Feature sets for DLM
• Long distance triggers
• Considering initial morphs as stems and non-initial morphs as suffixes, we assume that the existence of a morph can trigger another morph in the same sentence.
• We extract all the morph pairs between the morphs of any two words in a sentence as the candidate morph triggers.
• Among the possible candidates, we select only the pairs whose morphs co-occur to serve a specific function.
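The candidate-pair extraction step can be sketched as follows. This is an illustrative reading of the description above: the morph-segmented toy sentence is hypothetical, and the subsequent selection of functionally related pairs (e.g. by a significance test) is omitted.

```python
from itertools import combinations

def candidate_triggers(sentence):
    """sentence: a list of words, each given as its list of morphs
    (the initial morph acting as stem, the rest as suffixes)."""
    pairs = set()
    # pair every morph of one word with every morph of any later word
    for w1, w2 in combinations(sentence, 2):
        for m1 in w1:
            for m2 in w2:
                pairs.add((m1, m2))
    return pairs

# toy morph-segmented sentence: "ev+ler git+ti"
print(sorted(candidate_triggers([["ev", "ler"], ["git", "ti"]])))
# [('ev', 'git'), ('ev', 'ti'), ('ler', 'git'), ('ler', 'ti')]
```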
Conclusions and Discussion
• The main contributions of this paper are:
  (i) syntactic information is incorporated into Turkish DLM;
  (ii) the effect of language modeling units on DLM is investigated;
  (iii) morpho-syntactic information is explored when using sub-lexical units.
• It is shown that DLM with basic features yields more improvement for morphs than for words.
• Our final observation is that the large number of features masks the expected gains of the proposed features, mostly due to the sparseness of observations per parameter.
• This will make feature selection a crucial issue for our future research.
Weekly report
• Generate word graph
• Recognition result
              character   word
ML_training     83.54     76.24
MPE_iter1       84.83     77.77
• MDLM-D + prior
Sigma                  Train      Test       Dev
900    Train_best      0.937      0.855      0.862
       Dev_best        0.923      0.857      0.864
1600   Train_best      0.939      0.856      0.863
       Dev_best        0.924      0.857      0.865
2500   Train_best      0.940      0.856      0.864
       Dev_best        0.935      0.858      0.866
3600   Train_best      0.941      0.857      0.864
       Dev_best        0.932      0.858      0.866
8100   Train_best      0.941561   0.857374   0.864554
       Dev_best        0.933      0.858      0.866