
Page 1

This work is licensed under a Creative Commons Attribution 4.0 International License.

Applications in NLP

Marco Kuhlmann, Department of Computer and Information Science

Neural Networks and Deep Learning (2019)

Page 2

What is natural language processing?

• Natural language processing develops techniques for the analysis and interpretation of natural language.

• Natural language processing is an interdisciplinary research area involving computer science, linguistics, and cognitive science. related names: language technology, computational linguistics

Page 3

The Google Search index contains hundreds of billions of webpages and is well over 100,000,000 gigabytes in size.

Google, How Search Works

Page 4

The Knowledge Gap

[Diagram: natural language processing turns unstructured data (text) into structured data (a database), bridging the knowledge gap between a frustrated analyst 😢 and a happy analyst 😊.]

Page 5

Source: MacArthur Foundation

This Stanford University alumnus co-founded educational technology company Coursera.

SELECT DISTINCT ?x WHERE {
  ?x dbo:almaMater dbr:Stanford_University .
  dbr:Coursera dbo:foundedBy ?x .
}

SPARQL query against DBPedia

Page 6

Syntactic structure, semantic relations

[Diagram: in the sentence ‘Koller co-founded Coursera’, ‘Koller’ is the subject and ‘Coursera’ the object of ‘co-founded’; this corresponds to the triple dbr:Coursera dbo:foundedBy dbr:Daphne_Koller.]

Page 7

Page 8

A very brief history

• 1950s: Fully automatic translation of Russian into English

• 1960s: Diaspora after funding cuts (ALPAC report)

• 1970s: Conceptual ontologies and chatterbots

• 1980s: Systems based on complex sets of hand-written rules

• 1990s: The surge of statistical techniques

• 2000s: Large corpora. Machine translation once more

• 2010s: Dominance of machine learning, neural networks

Page 9

Ambiguity causes combinatorial explosion

[Figure: the sentence ‘I want to live in peace’, with each word annotated with its candidate part-of-speech tags (PRON, VERB, NOUN, ADJ, ADV, ADP, PART); the per-word candidates multiply into a combinatorially large number of possible tag sequences.]

‘I only want to live in peace, plant potatoes, and dream!’ – Moomin

Page 10

Relative frequencies of tags per word

[Figure: the same sentence, with each candidate tag annotated with its relative frequency for that word in the training data; for most words one tag strongly dominates (values such as 99.97%, 100.00%, 92.92%, 83.87%), while some words show a genuine split (e.g. 63.46% vs. 35.13%).]

Data: UD English Treebank (training data)

Page 11

Overview

• Introduction to natural language processing

• Introduction to word embeddings

• Learning word embeddings

Page 12

Introduction to word embeddings

Page 13

Words and contexts

What do the following sentences tell us about garrotxa?

• Garrotxa is made from milk.

• Garrotxa pairs well with crusty country bread.

• Garrotxa is aged in caves to enhance mold development.

Sentences taken from the English Wikipedia

Page 14

The distributional principle

• The distributional principle states that words that occur in similar contexts tend to have similar meanings.

• ‘You shall know a word by the company it keeps.’ Firth (1957)

Page 15

Word embeddings

• A word embedding is a mapping of words to points in a vector space such that nearby words (points) are similar in terms of their distributional properties. distributional principle: have similar meanings

• This idea is similar to the vector space model of information retrieval, where the dimensions of the vector space correspond to the terms that occur in a document. points = documents, nearby points = similar topic

Lin et al. (2015)

Page 16

Co-occurrence matrix

(rows: target words, columns: context words)

            butter   cake   cow   deer
  cheese        12      2     1      0
  bread          5      5     0      0
  goat           0      0     6      1
  sheep          0      0     7      5
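A minimal sketch of how such a matrix can be collected from raw text (the toy corpus and the window size below are invented for illustration):

    from collections import Counter, defaultdict

    def cooccurrence_counts(sentences, window=2):
        """Count how often each context word occurs within `window`
        positions of each target word."""
        counts = defaultdict(Counter)
        for tokens in sentences:
            for i, target in enumerate(tokens):
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[target][tokens[j]] += 1
        return counts

    # Toy corpus, invented for illustration.
    corpus = [
        "the cheese is made with butter and cream".split(),
        "goat and sheep graze next to the cow".split(),
    ]
    print(cooccurrence_counts(corpus)["cheese"])
    # Counter({'the': 1, 'is': 1, 'made': 1})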


Page 18

From co-occurrences to word vectors

[Figure: the target words cheese, bread, and sheep plotted as points in a two-dimensional space whose axes correspond to the context words ‘cake’ and ‘cow’.]

Page 19

Sparse vectors versus dense vectors

• The rows of co-occurrence matrices are long and sparse. length corresponds to the number of context words, on the order of 10⁴

• State-of-the-art word embeddings are short and dense. length on the order of 10²

• The intuition is that such vectors may be better at capturing generalisations, and easier to use in machine learning.

Page 20

Simple applications of word embeddings

• finding similar words (see the sketch after this list)

• answering ‘odd one out’-questions

• computing the similarity of short documents
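A hedged sketch of these three applications, assuming the Gensim library and pre-trained word2vec-format vectors (the file name below is hypothetical):

    from gensim.models import KeyedVectors

    # Load pre-trained vectors; "vectors.bin" is a placeholder for any
    # word2vec-format file (e.g. the Google News vectors).
    wv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

    # Finding similar words.
    print(wv.most_similar("cheese", topn=5))

    # Answering an 'odd one out' question.
    print(wv.doesnt_match(["bread", "cheese", "butter", "tractor"]))

    # Similarity of two short documents (all words must be in the vocabulary).
    doc1 = "doctors perform surgery".split()
    doc2 = "surgeons operate on a patient".split()
    print(wv.n_similarity(doc1, doc2))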

Page 21

Recognising textual entailment

Two doctors perform surgery on patient.

Entail Doctors are performing surgery.

Neutral Two doctors are performing surgery on a man.

Contradict Two surgeons are having lunch.

Example from Bowman et al. (2015)

Page 22

Obtaining word embeddings

• Word embeddings can be easily trained from any text corpus using available tools (see the training sketch below). word2vec, Gensim, GloVe

• Pre-trained word vectors for English, Swedish, and various other languages are available for download. word2vec, Swectors, Polyglot project, spaCy
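For example, training skip-gram vectors with Gensim (version 4 or later is assumed; the corpus below is a toy example) might look like this:

    from gensim.models import Word2Vec

    # A toy corpus: a list of tokenised sentences, invented for illustration.
    sentences = [
        "garrotxa is made from goat milk".split(),
        "the cheese is aged in caves".split(),
        "garrotxa pairs well with crusty bread".split(),
    ]

    # sg=1 selects the skip-gram algorithm; sg=0 would select continuous bag-of-words.
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

    print(model.wv.most_similar("garrotxa", topn=3))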

Page 23

Compositional structure of word embeddings

[Figure: the word vectors for man, woman, king, and queen; the offset from man to woman is roughly parallel to the offset from king to queen.]
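With pre-trained vectors loaded in Gensim (the file name is again hypothetical), the offset structure in the figure can be probed by vector arithmetic:

    from gensim.models import KeyedVectors

    wv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

    # king - man + woman ≈ ?  (positive terms are added, negative terms subtracted)
    print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
    # With typical pre-trained English vectors, the top answer is 'queen'.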

Page 24

Limitations of word embeddings

• There are many different facets of ‘similarity’. Is a cat more similar to a dog or to a tiger?

• Text data does not reflect many ‘trivial’ properties of words. more ‘black sheep’ than ‘white sheep’

• Text data does reflect human biases in the real world. king – man + woman = queen, doctor – man + woman = nurse

Goldberg (2017)

Page 25

Questions about word embeddings

• How to measure association strength? positive pointwise mutual information

• How to measure similarity? cosine similarity

• How to learn word embeddings from text? matrix factorisation, direct learning of the low-dimensional vectors

Page 26

Pointwise mutual information

• Raw counts favour pairs that involve very common contexts. the cat, a cat will receive higher weight than cute cat, small cat

• We want a measure that favours contexts in which the target word occurs more often than other words.

• A suitable measure is pointwise mutual information (PMI):

\mathrm{PMI}(x, y) = \log \frac{P(x, y)}{P(x)\,P(y)}

Example from Goldberg (2017)

Page 27

Pointwise mutual information

• We want to use PMI to measure the associative strength between a word 𝑤 and a context 𝑐 in a data set 𝐷:

\mathrm{PMI}(w, c) = \log \frac{P(w, c)}{P(w)\,P(c)}

• We can estimate the relevant probabilities by counting:

\mathrm{PMI}(w, c) = \log \frac{\#(w, c)/|D|}{\bigl(\#(w)/|D|\bigr) \cdot \bigl(\#(c)/|D|\bigr)} = \log \frac{\#(w, c) \cdot |D|}{\#(w) \cdot \#(c)}

Page 28

Positive pointwise mutual information

• Note that PMI is infinitely small for unseen word–context pairs, and undefined for unseen target words.

• In positive pointwise mutual information (PPMI), all negative and undefined values are replaced by zero:

• Because PPMI assigns high values to rare events, it is advisable to apply a count threshold or smooth the probabilities.

\mathrm{PPMI}(w, c) = \max\bigl(\mathrm{PMI}(w, c),\, 0\bigr)
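A minimal numpy sketch of the PPMI computation, applied to the co-occurrence counts from the earlier slide:

    import numpy as np

    # Co-occurrence counts (rows: cheese, bread, goat, sheep;
    # columns: butter, cake, cow, deer).
    C = np.array([[12, 2, 1, 0],
                  [ 5, 5, 0, 0],
                  [ 0, 0, 6, 1],
                  [ 0, 0, 7, 5]], dtype=float)

    total = C.sum()
    p_wc = C / total                              # joint probabilities P(w, c)
    p_w = C.sum(axis=1, keepdims=True) / total    # marginal P(w)
    p_c = C.sum(axis=0, keepdims=True) / total    # marginal P(c)

    with np.errstate(divide="ignore"):            # log(0) = -inf for unseen pairs
        pmi = np.log(p_wc / (p_w * p_c))

    ppmi = np.maximum(pmi, 0)                     # clip negative/undefined values to zero
    print(np.round(ppmi, 2))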

Page 29

Questions about word embeddings

• How to measure association strength? positive pointwise mutual information

• How to measure similarity? cosine similarity

• How to learn word embeddings from text? matrix factorisation, direct learning of the low-dimensional vectors

Page 30

Distance-based similarity

• If we can represent words as vectors, then we can measure word similarity as the distance between the vectors.

• Most measures of vector similarity are based on the dot product or inner product from linear algebra.

Page 31

The dot product

𝒗 = (+2, +2), 𝒘 = (+2, +1):   𝒗 · 𝒘 = +6

𝒗 = (+2, +2), 𝒘 = (−2, +2):   𝒗 · 𝒘 = 0

𝒗 = (+2, +2), 𝒘 = (−2, −1):   𝒗 · 𝒘 = −6

Page 32

Problems with the dot product

• The dot product will be higher for vectors that represent words that have high co-occurrence counts or PPMI values.

• This means that, all other things being equal, the dot product of two words will be greater if the words are frequent.

• This makes the dot product problematic because we would like a similarity metric that is independent of frequency.

Page 33

Cosine similarity

• We can fix the dot product as a metric by scaling down each vector to the corresponding unit vector:

• This length-normalised dot product is the cosine similarity, whose values range from −1 (opposite) to +1 (identical). cosine of the angle between the two vectors

\cos(\boldsymbol{v}, \boldsymbol{w}) = \frac{\boldsymbol{v}}{|\boldsymbol{v}|} \cdot \frac{\boldsymbol{w}}{|\boldsymbol{w}|} = \frac{\boldsymbol{v} \cdot \boldsymbol{w}}{|\boldsymbol{v}|\,|\boldsymbol{w}|} = \frac{\sum_{i=1}^{n} v_i w_i}{\sqrt{\sum_{i=1}^{n} v_i^2}\,\sqrt{\sum_{i=1}^{n} w_i^2}}
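A small numpy sketch of cosine similarity, applied to the example vectors from the dot-product slide:

    import numpy as np

    def cosine(v, w):
        """Length-normalised dot product; ranges from -1 to +1."""
        return np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))

    v = np.array([2.0, 2.0])
    print(cosine(v, np.array([ 2.0,  1.0])))   # ≈ 0.95, similar direction
    print(cosine(v, np.array([-2.0,  2.0])))   # 0.0, orthogonal
    print(cosine(v, np.array([-2.0, -1.0])))   # ≈ -0.95, opposite direction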

Page 34

Overview

• Introduction to natural language processing

• Introduction to word embeddings

• Learning word embeddings

Page 35

Learning word embeddings

Page 36

Learning word embeddings

• Word embeddings from neural language models

• word2vec: continuous bag-of-words and skip-gram

• Word embeddings via singular value decomposition

• Contextualised embeddings – ELMo and BERT

Page 37

Neural networks as language models

[Figure: a neural language model: the input words ‘the cat sat on the’ pass through an input layer and a hidden layer, and a softmax layer predicts the next word, here ‘mat’.]

Page 38

Word embeddings via neural language models

• The neural language model is trained to predict the probability of the next word being 𝑤, given the preceding words:

• Each column of the matrix 𝑾 is a dim(𝒉)-dimensional vector that is associated with some vocabulary item 𝑤.

• We can view this vector as a representation of 𝑤 that captures its compatibility with the context represented by the vector 𝒉.

P(w \mid \text{preceding words}) = \mathrm{softmax}(\boldsymbol{W}\boldsymbol{h})_{w}
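A minimal sketch of such a fixed-window neural language model. PyTorch is an assumption here; the slides do not prescribe a framework, and the layer sizes are invented:

    import torch
    import torch.nn as nn

    class FixedWindowLM(nn.Module):
        """Predict the next word from the previous `context_size` words."""
        def __init__(self, vocab_size, embedding_dim, hidden_dim, context_size):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embedding_dim)
            self.hidden = nn.Linear(context_size * embedding_dim, hidden_dim)
            # One row of this layer's weight matrix per vocabulary item; these
            # rows play the role of the word representations discussed above.
            self.output = nn.Linear(hidden_dim, vocab_size)

        def forward(self, context_ids):                       # (batch, context_size)
            e = self.embed(context_ids).flatten(start_dim=1)  # concatenated embeddings
            h = torch.tanh(self.hidden(e))                    # context vector h
            return torch.log_softmax(self.output(h), dim=-1)  # log P(w | preceding words)

    model = FixedWindowLM(vocab_size=10_000, embedding_dim=100, hidden_dim=200, context_size=4)
    log_probs = model(torch.tensor([[1, 2, 3, 4]]))           # one four-word context
    print(log_probs.shape)                                    # torch.Size([1, 10000])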

Page 39

Network weights = word embeddings

[Figure: the weight matrix between the hidden layer (the context representation) and the softmax layer; each column is the word representation of one vocabulary item: cat, mat, on, sat, the.]

Intuitively, words that occur in similar contexts will have similar word representations.

Page 40

Training word embeddings using a language model

• Initialise the word vectors with random values. typically by uniform sampling from an interval around 0

• Train the network on large volumes of text. word2vec: 100 billion words

• Word vectors will be optimised to the prediction task. Words that tend to precede the same words will get similar vectors.

Page 41

Google’s word2vec

• Google’s word2vec implements two different training algorithms for word embeddings: continuous bag-of-words and skip-gram.

• Both algorithms obtain word embeddings from a binary prediction task: ‘Is this an actual word–context pair?’

• Positive examples are generated from a corpus. Negative examples are generated by taking 𝑘 copies of a positive example and randomly replacing the target word with some other word (sketched below).
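A minimal sketch of this negative-example generation scheme (the vocabulary and the positive example are invented; actual word2vec implementations sample replacement words from a smoothed unigram distribution rather than uniformly):

    import random

    def negative_examples(positive, vocabulary, k=5):
        """Make k copies of a positive (c1, w, c2) example, replacing the
        target word w with a randomly chosen other word."""
        c1, target, c2 = positive
        others = [w for w in vocabulary if w != target]
        return [(c1, random.choice(others), c2) for _ in range(k)]

    vocabulary = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
    positive = ("the", "cat", "sat")            # (c1, w, c2) drawn from the corpus
    print(negative_examples(positive, vocabulary, k=3))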

Page 42

The continuous bag-of-words model

[Figure: the continuous bag-of-words model: the context vectors 𝑐1 and 𝑐2 are summed, the sum is combined with the target word vector 𝑤 through a dot product, and a sigmoid yields 𝑃(observed? | 𝑐1 𝑤 𝑐2).]

Page 43

The skip-gram model

[Figure: the skip-gram model: the target word vector 𝑤 is combined with each context vector 𝑐1 and 𝑐2 through separate dot products and sigmoids, and the results are multiplied to give 𝑃(observed? | 𝑐1 𝑤 𝑐2).]
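A hedged numpy sketch of the two scoring functions in the figures above; the vectors are random stand-ins, whereas real implementations learn them by gradient descent on the binary prediction task:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    d = 4                                             # toy embedding width
    w = rng.normal(size=d)                            # target word vector
    c1, c2 = rng.normal(size=d), rng.normal(size=d)   # context word vectors

    # Continuous bag-of-words: sum the context vectors, one dot product, one sigmoid.
    p_cbow = sigmoid(np.dot(w, c1 + c2))

    # Skip-gram: one dot product and sigmoid per context word, then multiply.
    p_skip = sigmoid(np.dot(w, c1)) * sigmoid(np.dot(w, c2))

    print(p_cbow, p_skip)                             # P(observed? | c1 w c2) under each model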

Page 44

Word embeddings via singular value decomposition

• The rows of co-occurrence matrices are long and sparse. Instead, we would like to have word vectors that are short and dense. length on the order of 10² instead of 10⁴

• One idea is to approximate the co-occurrence matrix by another matrix with fewer columns. in practice, use tf-idf or PPMI instead of raw counts

• This problem can be solved by computing the singular value decomposition of the co-occurrence matrix.

Page 45

Singular value decomposition

[Figure: singular value decomposition of the co-occurrence matrix: 𝑴 = 𝑼𝜮𝑽, where 𝑴 is the target-word × context-word matrix (𝑤 × 𝑐), 𝑼 is 𝑤 × 𝑤, 𝜮 is 𝑤 × 𝑐, and 𝑽 is 𝑐 × 𝑐.]

Landauer and Dumais (1997)

Page 46

Truncated singular value decomposition

[Figure: truncated singular value decomposition: only the 𝑘 largest singular values are kept, so 𝑴 (𝑤 × 𝑐) is approximated with 𝑼 of size 𝑤 × 𝑘, 𝜮 of size 𝑘 × 𝑐, and 𝑽 of size 𝑐 × 𝑐; 𝑘 = embedding width.]

Page 47

Truncated singular value decomposition

[Figure: the same photograph reconstructed from truncated singular value decompositions of width 200, 100, 50, 20, 10, and 5. Photo credit: UMass Amherst]

Page 48

Word embeddings via singular value decomposition

• Each row of the (truncated) matrix 𝑼 is a 𝑘-dimensional vector that represents the ‘most important’ information about a word (see the sketch below). columns ordered in decreasing order of importance

• A practical problem is that computing the singular value decomposition for large matrices is expensive. but has to be done only once!
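A small numpy sketch of the procedure, using an invented toy PPMI matrix; 𝑘 is kept tiny here, whereas in practice it would be on the order of 10²:

    import numpy as np

    # A toy PPMI-weighted co-occurrence matrix (rows: target words).
    M = np.array([[2.1, 1.2, 0.0, 0.0],
                  [1.5, 1.9, 0.0, 0.0],
                  [0.0, 0.0, 1.8, 1.1],
                  [0.0, 0.0, 1.6, 2.0]])

    U, S, Vt = np.linalg.svd(M, full_matrices=False)   # singular values in S, largest first

    k = 2                                              # embedding width
    embeddings = U[:, :k] * S[:k]                      # one k-dimensional vector per target word
    print(embeddings)

Whether to scale the columns of 𝑼 by the singular values, as above, or to use the rows of 𝑼 directly is a design choice that varies between implementations.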

Page 49

Connecting the two worlds

• The two algorithmic approaches that we have seen take two seemingly very different perspectives: ‘count-based’ and ‘neural’.

• However, a careful analysis reveals that the skip-gram model implicitly factorises a (shifted) PPMI matrix. Levy and Goldberg (2014)

Page 50

Contextualised embeddings: ELMo and BERT

• In the bag-of-words and skip-gram models, each word vector is obtained from a local prediction task.

• In ELMo and BERT, each token is assigned a representation that is a function of the entire input sentence.

• In ELMo, the final vector for a word is a linear combination of the internal layers of a deep bidirectional recurrent neural network (LSTM); BERT instead builds on the Transformer architecture (see the usage sketch below).

Muppet character images from The Muppet Wiki
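As a hedged illustration of obtaining contextualised token vectors from a pre-trained BERT model (this assumes the Hugging Face transformers and PyTorch libraries, which the slides do not prescribe):

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("The play premiered yesterday", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # One vector per (sub-word) token, each a function of the entire input sentence.
    print(outputs.last_hidden_state.shape)    # e.g. torch.Size([1, 6, 768])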

Page 51

The ELMo architecture

[Figure: the ELMo architecture: token embeddings for ‘<BOS> The play premiered yesterday <EOS>’ feed into bi-LSTM layer 1 and bi-LSTM layer 2, each consisting of a forward and a backward chain of LSTM cells over the sentence.]

Page 52

Figure 1: The Transformer - model architecture.

3.1 Encoder and Decoder Stacks

Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [11] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension dmodel = 512.

Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.

3.2 Attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

Figure from Vaswani et al. (2017)

Page 53

Learning word embeddings

• Word embeddings from neural language models

• word2vec: continuous bag-of-words and skip-gram

• Word embeddings via singular value decomposition

• Contextualised embeddings – ELMo and BERT

Page 54

Overview

• Introduction to natural language processing

• Introduction to word embeddings

• Learning word embeddings