Question Answering
Gideon Mann
Johns Hopkins University
[email protected]
Information Retrieval Tasks

Retired General Wesley Clark
– How old is General Clark?
– How long did Clark serve in the military?
– Will Clark run for President?
Ad-Hoc Queries

Prior work has been concerned mainly with answering ad-hoc queries:
"General Clark"
– Typically a few words long, not an entire question
– What is desired is general information about the subject in question
Answering Ad-Hoc Queries

Main focus of Information Retrieval for the past 2-3 decades
Solutions:
– Vector-based methods
– SVD, query expansion, language modeling
– Return a page as an answer
The resulting systems are extremely useful:
– Google, Altavista
Traditional IR

[Diagram: Query + Document Collection → Document Retrieval → Document Ranking]
But not all queries are Ad-Hoc!

How old is General Clark?
Does not fit well into the ad-hoc paradigm:
– "How" and "is" are not relevant terms for retrieval
– Potentially useful cues in the question are ignored by a traditional ad-hoc retrieval system
Documents are not Facts

Traditional IR systems return pages
– Useful when only a vague information need has been identified
Insufficient when a fact is desired:
– How old is General Clark? 58
– How long did Clark serve in the military? 36 years
– Will Clark run for president? Maybe
Question Answering as Retrieval

Given a document collection and a question:
A question answering system should retrieve a short snippet of text which exactly answers the question asked.
Question Answering

[Diagram: Query + Document Collection → Document Retrieval → Document Ranking → Answer Extraction (sentence ranking) → Ranked Answers]
QA as a Comprehension Task

For perfect recall, a system must find the answer even if it appears only once in the collection.
In essence, this forces the QA system to function as a text understanding system.
Thus QA may be interesting not only for retrieval, but also as a test of understanding.
QA as a Stepping Stone

Current QA is focused on fact extraction
– Answers appear verbatim in text: How old is General Clark?
How can we answer questions whose answers don't appear exactly in the text?
– How long has Clark been in the military?
– Will Clark run for President?
Maybe build on the facts extracted by low-level QA
QA Methods

Two main categories of methods for question answering:
– Answer Preference Matching
– Answer Context Matching
Lecture Outline

1. Answer Preferences: Question Analysis, Type Identification, Learning Answer Typing
2. Answer Context: Learning Context Similarity, Alignment, Surface Text Patterns
Answer Type Identification

From the question itself, infer the likely type of the answer:
– How old is General Clark? (How old)
– When did Clark retire? (When)
– Who is the NBC war correspondent? (Who)
NASSLI!

April 12 Deadline
– Could be extended…
– Mail [email protected] to ask for more time
Answer Type Identification

From the question itself, infer the likely type of the answer:
– How old is General Clark? (How old → Age)
– When did Clark retire? (When → Date)
– Who is the NBC war correspondent? (Correspondent → Person)
Wh-Words

Who      →  Person, Organization, Location
When     →  Date, Year
Where    →  Location
In what  →  Location
What     →  ??

Difficult to enumerate all possibilities, though:
What is the service ceiling for a PAC750?
WordNet

[Diagram: WordNet hypernym hierarchy containing wingspan, length, diameter, radius, altitude, and ceiling]
WordNet for Answer Typing

[Diagram: the same WordNet hierarchy, with wingspan, length, diameter, radius, altitude, and ceiling all mapped to the answer type NUMBER]

What is the service ceiling for a PAC750?
Lecture Outline

1. Answer Preferences: Question Analysis, Type Identification, Learning Answer Typing
2. Answer Context: Learning Context Similarity, Alignment, Surface Text Patterns
Answer Typing gives the Preference…

From answer typing, we have the preferences imposed by the question.
But in order to use those preferences, we must have a way to detect potential candidate answers.
Some are Simple…

Number  →  [0-9]+
Date    →  ($month) ($day) ($year)
Age     →  0 – 100
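As a rough illustration (a sketch, not the tagger set from the talk; the names `TAGGERS` and `find_candidates` are invented), the simple types above can be detected with a few regular expressions:

```python
import re

# Hypothetical regex taggers for the "simple" answer types above.
TAGGERS = {
    "Number": re.compile(r"\b\d+\b"),
    "Date": re.compile(
        r"\b(?:January|February|March|April|May|June|July|August|"
        r"September|October|November|December)\s+\d{1,2},?\s+\d{4}\b"),
    # Ages are just small numbers; real systems also check context.
    "Age": re.compile(r"\b\d{1,2}\b"),
}

def find_candidates(sentence: str, answer_type: str) -> list[str]:
    """Return all substrings of `sentence` matching the given answer type."""
    return [m.group(0) for m in TAGGERS[answer_type].finditer(sentence)]

print(find_candidates("General Clark turns 58 this December 23, 2002.", "Age"))
# ['58', '23'] -- over-generates: typing alone is not enough
```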
… Others Complicated

Who shot Martin Luther King?
– Person preference: requires a Named Entity Identifier
Who saved Chrysler from bankruptcy?
– Not just confined to people…
– Need a tagger to identify appropriate candidates
Use WordNet for Type Identification

"What 20th century poet wrote Howl?"

[Diagram: WordNet hierarchy communicator → writer → poet; candidate set: Ginsberg, Frost, Rilke]
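A minimal sketch of this kind of WordNet lookup using NLTK (the talk does not specify an implementation; NLTK and the helper `is_a` are assumptions here, and the WordNet corpus must be downloaded first):

```python
# Requires: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def is_a(candidate: str, target: str) -> bool:
    """True if any noun sense of `candidate` has `target` among its hypernyms."""
    targets = set(wn.synsets(target, pos=wn.NOUN))
    for syn in wn.synsets(candidate, pos=wn.NOUN):
        # closure() walks the hypernym chain all the way to the root.
        if targets & set(syn.closure(lambda s: s.hypernyms())):
            return True
    return False

# "poet" lies below "writer" (and "communicator") in WordNet, so a
# PERSON candidate known to be a poet satisfies the question's type.
print(is_a("poet", "writer"))        # True
print(is_a("poet", "communicator"))  # True
print(is_a("ceiling", "writer"))     # False
```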
Simple Answer Extraction

How old is General Clark? (type: Age)

General Clark, from Little Rock, Arkansas, turns 58 after serving 36 years in the service, this December 23, 2002.

The Age tagger marks "58" in the sentence as the candidate of the preferred type.
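A toy end-to-end sketch of this pipeline (all names and patterns here are invented for illustration; a real system would use trained question classifiers and named-entity taggers):

```python
import re

# Map question patterns to answer types, then run the matching tagger.
QUESTION_TYPES = [
    (re.compile(r"^how old\b", re.I), "Age"),
    (re.compile(r"^when\b", re.I), "Date"),
]
TAGGERS = {
    "Age": re.compile(r"\bturns\s+(\d{1,3})\b"),   # contextual age pattern
    "Date": re.compile(r"\b[A-Z][a-z]+\s+\d{1,2},\s+\d{4}\b"),
}

def answer(question: str, sentence: str) -> list[str]:
    for pattern, qtype in QUESTION_TYPES:
        if pattern.search(question):
            tagger = TAGGERS[qtype]
            return [m.group(m.lastindex or 0) for m in tagger.finditer(sentence)]
    return []

s = ("General Clark, from Little Rock, Arkansas, turns 58 after serving "
     "36 years in the service, this December 23, 2002.")
print(answer("How old is General Clark?", s))  # ['58']
print(answer("When did this happen?", s))      # ['December 23, 2002']
```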
Lecture Outline

1. Answer Preferences: Question Analysis, Type Identification, Learning Answer Typing
2. Answer Context: Learning Context Similarity, Alignment, Surface Text Patterns
Learning Answer Typing

What is desired is a model which predicts P(type | question).
Usually there are a variety of possible types:
– Who → Person ("Who shot Kennedy?" Oswald)
– Who → Organization ("Who rescued Chrysler from bankruptcy?" The Government)
– Who → Location ("Who won the Superbowl?" New England)
What training data?

Annotated Questions
– "Who shot Kennedy" [PERSON]
Problems:
– Expensive to annotate
– Must be redone every time the tag set is revised
Trivia Questions!

Alternatively, use unannotated trivia questions:
– Q: "Who shot Kennedy"
– A: Lee Harvey Oswald
Run your type-tagger over the answers to get tags:
– A: Lee Harvey Oswald [PERSON]
MI Model

From the tags, you can build a mutual-information (MI) model:
– Predict from the question head word

MI(QuestionHeadWord, TypeTag) = P(TypeTag | QuestionHeadWord) / P(TypeTag)

– From this you can judge the fit of a question/word pair
– (Mann 2001)
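A minimal sketch of how such MI scores could be estimated from (head word, tag) pairs harvested from trivia questions; the counting scheme and toy data are assumptions, not Mann's implementation:

```python
from collections import Counter

# (question head word, tagged answer type) pairs harvested from
# trivia question/answer data as described above (toy examples).
pairs = [("who", "PERSON"), ("who", "PERSON"), ("who", "ORGANIZATION"),
         ("when", "DATE"), ("when", "DATE"), ("where", "LOCATION")]

head_tag = Counter(pairs)
head = Counter(h for h, _ in pairs)
tag = Counter(t for _, t in pairs)
n = len(pairs)

def mi(head_word: str, type_tag: str) -> float:
    """MI(head word, tag) = P(tag | head word) / P(tag)."""
    p_tag_given_head = head_tag[(head_word, type_tag)] / head[head_word]
    p_tag = tag[type_tag] / n
    return p_tag_given_head / p_tag

print(mi("who", "PERSON"))  # 2.0 -- "who" prefers PERSON
print(mi("who", "DATE"))    # 0.0 -- no affinity observed
```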
MaxEnt Model

Rather than using the head word alone, train on the entire set of words, and build a Maximum Entropy model to combine features suggested by the entire phrase:
"What was the year in which Castro was born?"
(Ittycheriah et al. 2001)
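A maximum-entropy classifier over all question words is equivalent to multinomial logistic regression; here is a minimal sketch using scikit-learn (an assumption of this sketch, with toy training data; the paper's features were far richer):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy (question, answer type) data of the kind produced by tagging
# trivia answers.
questions = ["Who shot Kennedy", "Who rescued Chrysler from bankruptcy",
             "When did Clark retire",
             "What was the year in which Castro was born",
             "Where is the Eiffel Tower"]
types = ["PERSON", "ORGANIZATION", "DATE", "DATE", "LOCATION"]

# Multinomial logistic regression == a maximum entropy model over
# bag-of-words features drawn from the whole question.
model = make_pipeline(CountVectorizer(lowercase=True),
                      LogisticRegression(max_iter=1000))
model.fit(questions, types)

print(model.predict(["What year did the war end"]))  # likely ['DATE']
```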
Maybe you don't even need training data!

Look at occurrences of words in text, and at what types occur next to them.
Use these co-occurrence statistics to determine the appropriate type of answer.
(Prager et al. 2002)
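One plausible reading of this idea, as a sketch rather than Prager et al.'s actual method: in a type-tagged corpus, count which type tags appear adjacent to a question term:

```python
from collections import Counter

# Tagged corpus tokens as (word, type-or-None) pairs -- toy data.
corpus = [("service", None), ("ceiling", None), ("20000", "NUMBER"),
          ("feet", None), ("its", None), ("ceiling", None),
          ("18000", "NUMBER"), ("born", None), ("1756", "DATE")]

def neighbor_types(word: str) -> Counter:
    """Count type tags occurring immediately next to `word` in the corpus."""
    counts = Counter()
    for i, (w, _) in enumerate(corpus):
        if w != word:
            continue
        for j in (i - 1, i + 1):
            if 0 <= j < len(corpus) and corpus[j][1]:
                counts[corpus[j][1]] += 1
    return counts

print(neighbor_types("ceiling"))  # NUMBER dominates -> answer type NUMBER
```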
Lecture Outline

1. Answer Preferences: Question Analysis, Type Identification, Learning Answer Typing
2. Answer Context: Learning Context Similarity, Alignment, Surface Text Patterns
Is Answer Typing Enough?

Even when you've found the correct sentence and know the type of the answer, a lot of ambiguity still remains.
Some experiments show that an answer-bearing sentence typically contains around two to three candidates of the appropriate type.
For high-precision systems, this is unacceptable.
Answer Context

Who shot Martin Luther King?
– Answer preference: "Who"
– Answer context: "shot Martin Luther King"
Using Context

Many systems simply look for an answer of the correct type in a context which seems appropriate:
– Many matching keywords
– Perhaps using query expansion
Another Alternative

If the question is "Who shot Kennedy":
– Search for all exact phrase matches: "X shot Kennedy"
– And simple alternations: "Kennedy was shot by X"
(Brill et al. 2001)
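A toy sketch of this kind of rewrite generation (the single rewrite rule here is invented; Brill et al.'s system used a larger hand-built set):

```python
import re

def rewrites(question: str) -> list[str]:
    """Turn 'Who VERBed OBJ?' into search strings with an answer slot X."""
    m = re.match(r"Who (\w+) (.+?)\??$", question)
    if not m:
        return []
    verb, obj = m.groups()
    return [f'"X {verb} {obj}"',          # exact phrase match
            f'"{obj} was {verb} by X"']   # simple passive alternation

print(rewrites("Who shot Kennedy?"))
# ['"X shot Kennedy"', '"Kennedy was shot by X"']
```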
Beyond…

The first step beyond simple keyword matching is to use relative position information.
One way of doing this is to use alignment information.
Lecture Outline

1. Answer Preferences: Question Analysis, Type Identification, Learning Answer Typing
2. Answer Context: Learning Context Similarity, Alignment, Surface Text Patterns
Local Alignment

Question: Who shot Kennedy?
Sentence: "Jack assassinated Oswald, the man who shot Kennedy, and was Mrs. Ruby's Husband."

– Three potential candidates by type: Jack, Oswald, Mrs. Ruby
– The sentence contains matching context and the question head word
– An anchor word ties the question to the sentence
– Each candidate yields a potential alignment; consider one alignment at a time

Three alignment features:
1. Dws: distance between the question head word and the anchor in the sentence (2 in this alignment)
2. Dwq: distance between the question head word and the anchor in the question (1 in this alignment)
3. R: has the head word changed position relative to the anchor? (here the head word position flipped)
Build a Statistical Model

Pr(answer | question, sentence) =
      Pr(Dws | answer, question, sentence)
    × Pr(Dwq | answer, question, sentence)
    × Pr(R   | answer, question, sentence)

– If unsure about the type preference, a term for it can be added as well.

In essence, this local alignment model gives a robust method for using the context of the question to pick out the correct answer from a given sentence containing an answer.
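A minimal sketch of scoring candidates with this model; the probability tables and per-candidate feature values below are invented for illustration, not estimates from real data:

```python
# Score each candidate answer by the product of the three alignment
# feature likelihoods (toy probability tables; a real system would
# estimate these from question/answer training pairs).
p_dws = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}   # Pr(Dws = d)
p_dwq = {0: 0.5, 1: 0.3, 2: 0.2}           # Pr(Dwq = d)
p_r   = {True: 0.3, False: 0.7}            # Pr(head word position flipped)

def score(dws: int, dwq: int, flipped: bool) -> float:
    return p_dws.get(dws, 0.01) * p_dwq.get(dwq, 0.01) * p_r[flipped]

# Hypothetical (Dws, Dwq, R) values for the three PERSON candidates.
candidates = {"Jack": (5, 1, True), "Oswald": (2, 1, False),
              "Mrs. Ruby": (4, 1, True)}
best = max(candidates, key=lambda c: score(*candidates[c]))
print(best)  # 'Oswald'
```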
Surface Text Patterns

– Categorize the question by what kind of data it is looking for
– Use templates to build specialized models
– Use the resulting "surface text patterns" for searching
Birthday Templates

W. A. Mozart    1756
I. Newton       1642
M. Gandhi       1869
V. S. Naipaul   1932
Bill Gates      1955
Web Search to Generate Patterns

Web pages with "Mozart" and "1756"
  → Sentences with "Mozart" and "1756"
  → Substrings with "Mozart" and "1756"
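A small sketch of the substring-to-pattern step (assumed mechanics following the pipeline above: the seed name and year are replaced by slots, and only the span between them is kept):

```python
import re

def patterns(sentence: str, name: str, year: str) -> list[str]:
    """Replace the seed name/year in a sentence with slots to get a pattern."""
    if name not in sentence or year not in sentence:
        return []
    generalized = sentence.replace(name, "<NAME>").replace(year, "<ANSWER>")
    # Keep only the substring spanning the two slots plus a little context.
    m = re.search(r"\S*\s*<NAME>.*?<ANSWER>\s*\S*", generalized)
    return [m.group(0)] if m else []

print(patterns("Mozart was born in 1756 in Salzburg.", "Mozart", "1756"))
# ['<NAME> was born in <ANSWER> in']
```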
How Can We Pick Good Patterns?

– Frequent ones may be too general
– Infrequent ones are not that useful
– Want precise, specific ones
– Use held-out templates to evaluate patterns
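A sketch of the held-out evaluation (the scoring mechanics here are assumptions): instantiate each learned pattern with held-out names and measure how often it extracts the correct year:

```python
import re

# Held-out (name, year) template pairs and a toy corpus.
held_out = {"I. Newton": "1642", "M. Gandhi": "1869"}
corpus = ("I. Newton was born in 1642. M. Gandhi was born in 1869. "
          "I. Newton died in 1727. M. Gandhi died in 1948.")

def precision(pattern: str) -> float:
    """Fraction of a pattern's matches that yield the correct held-out year."""
    correct = total = 0
    for name, year in held_out.items():
        regex = (pattern.replace("<NAME>", re.escape(name))
                        .replace("<ANSWER>", r"(\d{4})"))
        for m in re.finditer(regex, corpus):
            total += 1
            correct += (m.group(1) == year)
    return correct / total if total else 0.0

print(precision("<NAME> was born in <ANSWER>"))  # 1.0 -- precise
print(precision(r"<NAME>\D+<ANSWER>"))           # 0.5 -- too general
```

High-precision patterns survive; overly general ones, which also match death dates and other years, are discarded.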