
Page 1

Page 2

Representational and inferential foundations for possible large-scale information extraction and question-answering from the web

Stuart Russell
Computer Science Division
UC Berkeley

Page 3

Goal

• A system that knows everything on the Web*
  – Answer all questions
  – Discover patterns
  – Make predictions
• Raw data → useful knowledge base
• Requires: NLP, vision, speech, learning, DBs, knowledge representation and reasoning
• Berkeley: Klein, Malik, Morgan, Darrell, Jordan, Bartlett, Hellerstein, Franklin, Hearst++

Page 4

Past projects: PowerSet

• “Building a natural language search engine that reads and understands every sentence on the Web.”

• Parsing/extraction technology + crowdsourcing to generate collections of x R y triples

• Example:
  – Manchester United beat Chelsea
  – Chelsea beat Manchester United

• Bought by Microsoft in 2008, merged into Bing

Page 5

Current projects: UW Machine Reading

• Initially based on bootstrapping text patterns (see the sketch below)
  – Born(Elvis,1935) => "Elvis was born in Tupelo" => "Obama was born in Hawaii" => "Obama's birthplace was Hawaii" => …
• [Google: Best guess for Elvis Presley Born is January 8, 1935]
  – Inaccurate, runs out of gas, learned content shallow, 99% of text ignored
• Moving to incorporate probabilistic knowledge, inference using Markov logic
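The bootstrapping loop is easy to caricature in a few lines. Below is a minimal, hypothetical Python sketch (a toy three-sentence corpus, naïve regex patterns, no confidence scoring): every harvested fact becomes a seed for new patterns, which is exactly why the method can drift and run out of gas.

# Toy pattern bootstrapping (hypothetical data; no scoring or filtering).
import re

corpus = [
    "Elvis was born in Tupelo",
    "Obama was born in Hawaii",
    "Obama's birthplace was Hawaii",
]
seeds = {("Elvis", "Tupelo")}          # known Born(x, y) facts
patterns, facts = set(), set(seeds)

for _ in range(2):                     # a couple of bootstrap rounds
    # 1. Induce patterns from sentences mentioning a known fact.
    for x, y in facts:
        for s in corpus:
            if x in s and y in s:
                patterns.add(s.replace(x, r"(\w+)").replace(y, "(.+)"))
    # 2. Apply patterns to harvest new candidate facts.
    for p in patterns:
        for s in corpus:
            m = re.fullmatch(p, s)
            if m:
                facts.add((m.group(1), m.group(2)))

print(facts)   # seeds plus, e.g., ('Obama', 'Hawaii')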

Page 6

Current Projects: NELL (CMU)

• Bootstrapping approach to learning facts from the web using text patterns (642,797 so far)
• Initial ontology of basic categories and typed relations
• Examples:
  – the_chicken is a type of meat 100.0%
  – coventry_evening_telegraph is a blog 99.0%
  – state_university is a sports team also known as syracuse_university 93.8%
  – orac_values_for_mushrooms is a fungus 100.0%
  – Hank Paulson is the CEO of Goldman 100.0%

Page 7

Problems

• Language (incl. speech act pragmatics)
  – … Jerry Brown, who has been called the first American in space
• Uncertainty
  – Reference uncertainty is ubiquitous
  – Bootstrapping can converge or diverge; exacerbated by "accepting" uncertain facts, naïve probability models
• Universal ontological framework (O(1) work)
  – Taxonomy, events, compositional structure, time…
  – Compositional structure of objects and events
  – Knowledge, belief, other agents
  – Semantic content below lexical level (must be learned)
    • E.g., buy = sell⁻¹, ownership, transfer, etc.

Page 8

Technical approach

• The Web is just evidence; compute P(World | web) ∝ P(web | World) P(World) (see the sketch below)
• What is the domain of the World variable?
  – Complex sets of interrelated objects and events
• How does it cause the Web variable?
  – Pragmatics/semantics/syntax (and copying!)
• Uncertainty about
  – What objects exist
  – How they're related
  – What phrases/images refer to what real objects
• => Open-universe, first-order probabilistic language
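As a toy rendering of that equation (hypothetical worlds and probabilities; a real model puts the World variable over unbounded sets of objects and relations):

# Toy rendering of P(World | web) ∝ P(web | World) P(World).
prior = {                              # P(World), made up
    "Elvis born in Tupelo": 0.6,
    "Elvis born in Hawaii": 0.3,
    "Elvis born in Memphis": 0.1,
}

def likelihood(world, web_text):
    # P(web | World): stand-in for pragmatics/semantics/syntax
    return 0.9 if world.split()[-1] in web_text else 0.05

web = "Elvis was born in Tupelo, Mississippi."
unnorm = {w: likelihood(w, web) * p for w, p in prior.items()}
z = sum(unnorm.values())
posterior = {w: v / z for w, v in unnorm.items()}
print(posterior)   # mass concentrates on the Tupelo world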

Pages 9–10

Brief history of expressiveness

               atomic     propositional   first-order/relational
  logic                   5th C B.C.      19th C
  probability  17th C     20th C          21st C


(be patient!)

Page 11

Pages 12–14

Herbrand vs full first-order

Given Father(Bill,William) and Father(Bill,Junior), how many children does Bill have?

Herbrand semantics: 2 (distinct ground terms denote distinct objects, and no unnamed objects exist)
First-order logical semantics: between 1 and ∞ (William and Junior may co-refer, and there may be children with no names)


Page 15

Possible worlds

• Propositional (Boolean, ANNs, Bayes nets)
• First-order closed-universe (DBs, Prolog)
• First-order open-universe

[Figures: example possible worlds over constants A, B, C, D for each case]

Pages 16–17

Open-universe models in BLOG

• Construct worlds using two kinds of steps, proceeding in topological order (see the sketch after this list):
  – Dependency statements: set the value of a function or relation on a tuple of (quantified) arguments, conditioned on parent values
  – Number statements: add some objects to the world, conditioned on what objects and relations exist so far
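A minimal sketch of that construction process in Python (a hypothetical model with made-up distributions, standing in for an actual BLOG program):

# Toy BLOG-style world construction: number statements add objects,
# dependency statements set values conditioned on what exists so far.
import random

def construct_world():
    world = {}
    # Number statement: how many aircraft exist?
    world["aircraft"] = list(range(random.randint(0, 3)))
    # Dependency statement: set Altitude(a) for each existing object
    world["altitude"] = {a: random.uniform(0, 40_000) for a in world["aircraft"]}
    # Number statement conditioned on existing objects:
    # each aircraft generates 0-2 radar blips
    world["blips"] = [(a, i) for a in world["aircraft"]
                      for i in range(random.randint(0, 2))]
    return world

print(construct_world())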

Page 18

Technical basics

Theorem: Every well-formed* BLOG model specifies a unique proper probability distribution over open-universe possible worlds; equivalent to an infinite contingent Bayes net

Theorem: BLOG inference algorithms (rejection sampling, importance sampling, MCMC) converge to correct posteriors for any well-formed* model, for any first-order query
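For intuition on the sampling semantics, here is a toy open-universe query answered by rejection sampling in Python (all distributions made up; this stands in for BLOG's inference engine, not its actual implementation):

# Rejection sampling for a toy open-universe model: a number statement
# draws how many objects exist, a dependency statement colors each one.
import random

def sample_world():
    n = random.randint(1, 4)                        # number statement
    colors = [random.choice("RGB") for _ in range(n)]   # dependency statements
    return n, colors

def posterior_n(trials=100_000):
    # P(#objects | evidence: at least one object is red)
    counts = {}
    for _ in range(trials):
        n, colors = sample_world()
        if "R" in colors:                           # reject inconsistent worlds
            counts[n] = counts.get(n, 0) + 1
    total = sum(counts.values())
    return {n: round(c / total, 3) for n, c in sorted(counts.items())}

print(posterior_n())   # evidence favours larger n: more objects, more chances of a red one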

Page 19

Example: Citation Matching

[Lashkari et al 94] Collaborative Interface Agents, Yezdi Lashkari, Max Metral, and Pattie Maes, Proceedings of the Twelfth National Conference on Articial Intelligence, MIT Press, Cambridge, MA, 1994.

Metral M. Lashkari, Y. and P. Maes. Collaborative interface agents. In Conference of the American Association for Artificial Intelligence, Seattle, WA, August 1994.

Are these descriptions of the same object?

Core task in CiteSeer, Google Scholar, over 300 companies in the record linkage industry

Pages 20–26

(Simplified) BLOG model

#Researcher ~ NumResearchersPrior();
Name(r) ~ NamePrior();
#Paper(FirstAuthor = r) ~ NumPapersPrior(Position(r));
Title(p) ~ TitlePrior();
PubCited(c) ~ Uniform({Paper p});
Text(c) ~ NoisyCitationGrammar(Name(FirstAuthor(PubCited(c))), Title(PubCited(c)));


Evidence: lots of citation strings
Query: Who wrote what? Which paper is being cited in this string? Are these two people the same?
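A hypothetical forward-sampling sketch of a model with this shape, with toy stand-ins for NamePrior, TitlePrior, and NoisyCitationGrammar:

# Toy forward-sampling of the citation model (all priors made up).
import random

random.seed(0)

def sample_world():
    # #Researcher / Name(r): a few researchers, names may collide
    researchers = [random.choice(["Y. Lashkari", "M. Metral", "P. Maes"])
                   for _ in range(random.randint(1, 3))]
    # #Paper(FirstAuthor = r) / Title(p)
    return [(r, random.choice(["Collaborative Interface Agents",
                               "Software Agents"]))
            for r in researchers]

def noisy_citation(author, title):
    # NoisyCitationGrammar stand-in: random abbreviation / case damage
    a = author if random.random() < 0.5 else author.split()[-1]
    t = title if random.random() < 0.7 else title.lower()
    return f"{a}. {t}."

papers = sample_world()
# PubCited(c) ~ Uniform({Paper p}); Text(c) from the noisy grammar
citations = [noisy_citation(*random.choice(papers)) for _ in range(4)]
print(citations)   # distinct noisy strings may cite the same underlying paper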

Page 27

Citation Matching Results

Four data sets of ~300–500 citations, referring to ~150–300 papers

[Bar chart: error (fraction of clusters not recovered correctly), ranging 0–0.25, on the Reinforce, Face, Reason, and Constraint data sets, comparing Phrase Matching [Lawrence et al. 1999], Generative Model + MCMC [Pasula et al. 2002], and Conditional Random Field [Wellner et al. 2004]]

Page 28

Example: multitarget tracking

#Aircraft(EntryTime = t) ~ NumAircraftPrior();

Exits(a, t)
  if InFlight(a, t) then ~ Bernoulli(0.1);

InFlight(a, t)
  if t < EntryTime(a) then = false
  elseif t = EntryTime(a) then = true
  else = (InFlight(a, t-1) & !Exits(a, t-1));

State(a, t)
  if t = EntryTime(a) then ~ InitState()
  elseif InFlight(a, t) then ~ StateTransition(State(a, t-1));

#Blip(Source = a, Time = t)
  if InFlight(a, t) then ~ NumDetectionsCPD(State(a, t));

#Blip(Time = t) ~ NumFalseAlarmsPrior();

ApparentPos(r)
  if (Source(r) = null) then ~ FalseAlarmDistrib()
  else ~ ObsCPD(State(Source(r), Time(r)));
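A toy forward simulation of this generative story (hypothetical parameters; NumAircraftPrior, StateTransition, NumDetectionsCPD, and ObsCPD replaced by simple stand-ins):

# Toy simulation of the radar model above (made-up numbers throughout).
import random

T = 10
aircraft = []                                   # dicts: entry, exit, pos
blips = []                                      # (time, apparent position)

for t in range(T):
    for _ in range(random.randint(0, 2)):       # #Aircraft(EntryTime = t)
        aircraft.append({"entry": t, "exit": None, "pos": random.uniform(0, 100)})
    for a in aircraft:
        if a["entry"] > t or a["exit"] is not None:   # InFlight(a, t)?
            continue
        if random.random() < 0.1:               # Exits(a, t) ~ Bernoulli(0.1)
            a["exit"] = t
            continue
        a["pos"] += random.gauss(1.0, 0.3)      # StateTransition stand-in
        if random.random() < 0.9:               # detection prob. stand-in
            blips.append((t, a["pos"] + random.gauss(0, 0.5)))   # ObsCPD stand-in
    for _ in range(random.randint(0, 1)):       # #Blip(Time = t): false alarms
        blips.append((t, random.uniform(0, 100)))

print(f"{len(aircraft)} aircraft, {len(blips)} blips (including false alarms)")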

Page 29

Example: cybersecurity Sybil defence

#Person ~ LogNormal[6.9, 2.3]();
Honest(x) ~ Boolean[0.9]();
#Login(Owner = x) ~ if Honest(x) then 1 else LogNormal[4.6, 2.3]();
Transaction(x,y) ~ if Owner(x) = Owner(y) then SibylPrior()
                   else TransactionPrior(Honest(Owner(x)), Honest(Owner(y)));
Recommends(x,y) ~ if Transaction(x,y) then
                    if Owner(x) = Owner(y) then Boolean[0.99]()
                    else RecPrior(Honest(Owner(x)), Honest(Owner(y)));

Evidence: lots of transactions and recommendations
Query: Honest(x)

Page 30

Example: Global seismic monitoring

• CTBT bans testing of nuclear weapons on earth
  – Allows for outside inspection of 1000 km²
• Need 9 more ratifications for "entry into force", including US, China
• US Senate refused to ratify in 1998
  – "too hard to monitor"

Page 31

254 monitoring stations

Page 32

Page 33

Vertically Integrated Seismic Analysis

• The problem is hard:
  – ~10,000 "detections" per day, 90% false
  – CTBT system (SEL3) finds 69% of significant events, plus about twice as many spurious (nonexistent) events
  – 16 human analysts find more events, correct existing ones, throw out spurious events, generate LEB ("ground truth")
  – Unreliable below magnitude 4 (1 kT)

Pages 34–43

[Figure-only slides; no recoverable text]

Page 44

#SeismicEvents ~ Poisson[time_duration * event_rate];
IsEarthQuake(e) ~ Bernoulli(.999);
EventLocation(e) ~ If IsEarthQuake(e) then EarthQuakeDistribution()
                   Else UniformEarthDistribution();
Magnitude(e) ~ Exponential(log(10)) + min_magnitude;
Distance(e,s) = GeographicalDistance(EventLocation(e), SiteLocation(s));
IsDetected(e,p,s) ~ Logistic[site-coefficients(s,p)](Magnitude(e), Distance(e,s));
#Arrivals(site = s) ~ Poisson[time_duration * false_rate(s)];
#Arrivals(event = e, site = s) = If IsDetected(e,s) then 1 else 0;
Time(a) ~ If (event(a) = null) then Uniform(0, time_duration)
          else IASPEI(EventLocation(event(a)), SiteLocation(site(a)), Phase(a)) + TimeRes(a);
TimeRes(a) ~ Laplace(time_location(site(a)), time_scale(site(a)));
Azimuth(a) ~ If (event(a) = null) then Uniform(0, 360)
             else GeoAzimuth(EventLocation(event(a)), SiteLocation(site(a))) + AzRes(a);
AzRes(a) ~ Laplace(0, azimuth_scale(site(a)));
Slow(a) ~ If (event(a) = null) then Uniform(0, 20)
          else IASPEI-slow(EventLocation(event(a)), SiteLocation(site(a))) + SlowRes(site(a));
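The detection term IsDetected(e,p,s) is a per-station logistic in event magnitude and distance; a hypothetical rendering with made-up coefficients:

# Hypothetical stand-in for IsDetected(e,p,s): per-station logistic
# detection probability as a function of magnitude and distance.
import math

def p_detect(magnitude, distance_deg, coeff=(-8.0, 2.5, -0.03)):
    # coeff = (intercept, weight on magnitude, weight on distance); made up
    b0, b1, b2 = coeff
    z = b0 + b1 * magnitude + b2 * distance_deg
    return 1.0 / (1.0 + math.exp(-z))

print(p_detect(4.0, 30))   # moderate event, 30 degrees away: often detected
print(p_detect(2.5, 90))   # small, far event: rarely detected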

Page 45

Fraction of LEB events missed

Page 46

Fraction of LEB events missed

Page 47

Event distribution: LEB vs SEL3

Page 48

Event distribution: LEB vs NET-VISA

Page 49

Open questions

• Efficient inference
• Model construction: creating useful new categories and relations
• HCI: What are answers when existence is uncertain?
• Making use of partially extracted or unextracted information – "data spaces" (Franklin, Halevy)
• Proper modeling of availability/absence of evidence

Page 50

Summary

• Basic components (accurate parsing, first-order and modal probabilistic logics, universal ontology) are mostly in place; NLP is moving back towards combined syntax/semantics
• Vertically integrated probabilistic models can be much more effective than bottom-up pipelines
• The Web is Very Big
  – Does not imply we can only use trivial methods
  – Does not imply that trivial methods will suffice
  – Won't happen for free

Page 51

Page 52

Example of using extra detections

Page 53

NEIC event (3.0) missed by LEB

Page 54

NEIC event (3.7) missed by LEB

Page 55

NEIC event (2.6) missed by LEB

Page 56

TREC 9 Results (2000)