Visual world studies of spoken language processing: Interpreting language in the context of goal-directed behavior
Michael K. Tanenhaus, University of Rochester

Page 1

Visual world studies of spoken language processing: Interpreting language in the context of goal-directed behavior

Michael K. Tanenhaus
University of Rochester

Page 2

Overview

Part 1: Logic of the Visual World Approach (slides 4-27)

Part 2: Referential Domains (slides 28-64)
- dynamic updating
- action-based affordances
- referential context
- speaker perspective

Part 3: VW with Artificial Languages (slides 65-77)
- eye movements
- brain imaging

Page 3

Collaborators

Craig Chambers
Sarah Brown-Schmidt
Daphna Heller
Kate Pirog

Page 4

Part 1 (slides 4-27)

Logic and motivation for the visual world approach

Page 5

Action-based Visual World Paradigm

Participants perform actions in response to instructions.
Language is "about" the world.
Speakers and addressees have well-defined behavioral goals.
Fixations provide a window into processing.

"Put the apple on the towel in the box"

[Apparatus diagram: eye-tracking device, microphone, video display, CPU, VCR]

Page 6

Why spoken language is important

• Talking and listening are primary
  – All societies have spoken language
  – All children learn to converse, initially by talking about the world.
  – Conversation is (arguably) the primary site of language use

Page 7

Ignoring speech can encourage scientific balkanization, making it easier to ignore duality of patterning:

speech perception --> word recognition --> syntactic processing --> sentence interpretation --> discourse representation

Duration: cap/captain (in strong position, the vowel in cap is longer)
But the information structure of the discourse can affect duration:

Put the cap above the captain. Now put the cap/captain…

Page 8

Why use eye movements to study spoken language?

• Consider what spoken language is like from two perspectives
  – Language unfolding over time as a sequence of transient acoustic events.
  – Conversation as a joint activity between two or more people.

Page 9

Time Course: Spoken words unfold as a series of transient acoustic events extending over a few hundred ms, without reliable cues to word boundaries.

Imagine reading this page through a two-letter aperture, the text scrolling past without spaces separating the words, at a variable rate one could not control, with the visual features for each letter arriving asynchronously.

Consequences:
Asynchronous cues (VOT/vowel duration)
Lexical representations acoustically similar to the unfolding input must serve as a temporary memory of the input and as a set of candidate hypotheses.

"Put the apple on the beaker…"  /  "Put the apple on the towel…"
(candidates as the word unfolds: beetle, beacon, beak, beep…)

Page 10

Categorical Perception

[Figure: identification (% /pa/) and discrimination as a function of VOT along a /b/-/p/ continuum]

• Sharp identification of speech sounds on a continuum
• Discrimination poor within a phonetic category
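
A minimal, illustrative sketch (not part of the original slides) of the pattern in the figure: identification follows a steep logistic over VOT, so the labeling difference between two equally spaced steps is large across the /b/-/p/ boundary and small within a category. The boundary location, slope, and VOT values are assumed for illustration.

```python
import math

BOUNDARY_MS = 25.0   # assumed /b/-/p/ category boundary (illustrative)
SLOPE = 0.8          # assumed steepness of the identification function

def p_pa(vot_ms):
    """Probability of labeling a token with this VOT as /pa/ (logistic ID function)."""
    return 1.0 / (1.0 + math.exp(-SLOPE * (vot_ms - BOUNDARY_MS)))

def labeling_difference(vot1, vot2):
    """Crude labeling-based proxy for the discriminability of a VOT pair."""
    return abs(p_pa(vot1) - p_pa(vot2))

print(round(labeling_difference(5, 15), 2))    # within /ba/: ~0
print(round(labeling_difference(20, 30), 2))   # across the boundary: large
print(round(labeling_difference(35, 45), 2))   # within /pa/: ~0
```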

Page 11

Spoken language comprehension

Paradigms have required de-contextualized language, e.g., cross-modal priming:

The lawyer represented the doctor who testified that the bug frightened him.

Page 12

Interactive conversation

1: *ok, ok I got it* ele…ok
2: alright *hold on* I got another easy piece
1: *I got a* well wait I got a green piece RIGHT above that
2: above this piece?
1: well not exactly right above it
2: it can't be above it
1: it's to the… it doesn't wanna fit in with the cardboard
2: it's to the right, right?
1: yup
2: w- how? *where*
1: *it's* kinda line up with the two holes
2: line 'em right next to each other?
1: yeah, vertically
2: vertically, meaning?
1: up and down

Gibberish?

Page 13

A sheet separates the two subjects (1 and 2).

Page 14

Page 15

Interactive conversation

(Same transcript as page 12.)

Page 16

Spoken Language in a Different Tradition

The prototypical experiment requires participants to interact in a goal-driven task.

Director: "Take the shape with the, uhm, the circle above the triangle and, uh…"
Matcher: "You mean the one that looks like a dancer with a fat leg?"
Director: "Yeah, the dancer."

Page 17

Why eye movements?

• ballistic measure
• can be used with continuous speech/natural tasks
• natural response measure
• low threshold response
• subject is unaware
• does not require meta-linguistic judgment
• closely time-locked to speech
• plausible linking hypothesis:

Probability of an eye movement at time t is a function of the activation of possible alternatives, plus some delay for programming and execution (~200 ms).

Page 18

Eye movements allow us to study spoken language in rich contexts with the precision necessary to examine time-course.

Is this a good thing?

No (confuses language with non-linguistic stuff): We should be studying how we construct/generate linguistic representations. In the visual world, we introduce task-specific strategies, etc.

Yes: There is always a context, and it always matters; understanding how the system works in a rich environment can be more informative about basic principles than studying the system in an impoverished environment (analogy with vision; bottom-up or top-down).

We can manipulate action/goals/language.

Page 19

Allopenna, Magnuson & Tanenhaus (1998)

[Head-mounted eye tracker: eye camera and scene camera]

"Pick up the beaker"

Page 20

Fixation Proportions over Time

[Schematic: on each trial ("Look at the cross. Click on the beaker."), fixations are coded over time and converted to proportions of fixations to the target (beaker), the cohort competitor (beetle), and an unrelated item (carriage); ~200 ms marked on the time axis.]

Page 21

Fixation Proportions summed over an interval

[Schematic: trial-by-trial fixations are converted to proportions of fixations to the target, cohort, and unrelated items over time, then averaged within a time window of interest.]
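
The following is a minimal sketch, not the authors' analysis code, of how the proportion curves and interval sums described on these slides are typically computed: each trial records which object is fixated at every time sample; averaging over trials gives a proportion-over-time curve per object, and averaging that curve within a window gives the summed measure. The object labels and the 10 ms sampling step are illustrative assumptions.

```python
import numpy as np

OBJECTS = ["target", "cohort", "unrelated"]
SAMPLE_MS = 10  # assumed sampling step

def proportion_curves(trials, n_samples):
    """trials: list of length-n_samples sequences, each entry the label of the fixated object."""
    counts = {obj: np.zeros(n_samples) for obj in OBJECTS}
    for trial in trials:
        for t, fixated in enumerate(trial):
            if fixated in counts:
                counts[fixated][t] += 1
    return {obj: c / len(trials) for obj, c in counts.items()}

def window_mean(curve, start_ms, end_ms):
    """Mean fixation proportion within [start_ms, end_ms)."""
    i, j = start_ms // SAMPLE_MS, end_ms // SAMPLE_MS
    return curve[i:j].mean()

# Toy usage: three 1-second trials in which the listener starts on an unrelated
# object, briefly considers the cohort, then settles on the target.
n = 1000 // SAMPLE_MS
trial = ["unrelated"] * (n // 4) + ["cohort"] * (n // 4) + ["target"] * (n // 2)
curves = proportion_curves([trial, trial, trial], n)
print(round(window_mean(curves["target"], 200, 1000), 2))
```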

Page 22

Allopenna et al. results

Page 23

Linking hypothesis

• Activation is converted to probabilities using the Luce (1959) choice rule:

  S_i = exp(k * a_i)
  L_i = S_i / Σ_j S_j

  S: response strength for each item
  a: activation
  k: free parameter; determines the amount of separation between activation levels (set to 7)

• The choice rule assumes each alternative is equally probable given no information; when the initial instruction is "look at the cross" or "look at picture X", we scale the response probabilities to be proportional to the amount of activation at each time step:

  d_i = max_act_i / max_act_overall
  R_i = d_i * L_i

[Figures: TRACE activations over processing cycles, and predicted fixation probabilities over time since target onset (scaled to ms), for the referent (e.g., "beaker"), cohort (e.g., "beetle"), rhyme (e.g., "speaker"), and unrelated item.]
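
A minimal sketch of the linking hypothesis stated above, under two assumptions: k = 7 as on the slide, and the scaling term read as (maximum activation at the current time step) / (maximum activation overall); the toy activation values only stand in for TRACE output.

```python
import numpy as np

def predicted_fixation_probs(act, k=7.0):
    """act: (n_timesteps, n_alternatives) activations; returns predicted fixation probabilities R."""
    S = np.exp(k * act)                              # S_i = exp(k * a_i)
    L = S / S.sum(axis=1, keepdims=True)             # L_i = S_i / sum_j S_j  (Luce choice rule)
    d = act.max(axis=1, keepdims=True) / act.max()   # scaling: max activation at t / max overall
    return d * L                                     # R_i = d * L_i

# Toy usage: referent rises fastest, cohort rises then falls, rhyme rises late, unrelated stays low.
t = np.linspace(0, 1, 6)
act = np.column_stack([t, 0.6 * np.sin(np.pi * t), 0.3 * t, 0.05 * np.ones_like(t)])
print(np.round(predicted_fixation_probs(act), 2))
```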

Page 24

"Put the apple on the towel in the box"

[Display labels: referent (the apple), garden-path goal (the towel), goal (the box)]

Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy (1995), Science; Spivey et al. (2002), Cognitive Psychology

Page 25

"Put the apple on the towel in the box."

[Figure: One-Referent Context, Ambiguous Instruction. Proportion of trials with fixations over time (sec) to the target referent, the distractor, the correct destination, and the garden-path destination.]

Page 26

"Put the apple on the towel in the box."
"Put the apple that's on the towel in the box."

[Figure: proportion of trials with eye movements to the incorrect goal, for the Ambiguous vs. Unambiguous instruction.]

Page 27

Part 2 (slides 27-64)

Referential domains

Page 28

Many, if not most, linguistic expressions must be interpreted with respect to contextually-defined parameters or domains.

The interpretation of definite noun phrases is a paradigmatic example.

Page 29

Definiteness and Referential Domains

Definite referring expressions assume a uniquely identifiable referent with respect to a circumscribed referential domain.

Hand me the (*an) empty martini glass.
Hand me an (*the) empty jar.

Page 30

How do addressees establish and update referential domains in real-time comprehension?

Listeners (and speakers) dynamically update referential domains.

Are domains influenced by:
- real-world properties of potential referents
- goal-based constraints
- considerations of interlocutor perspective

Do these factors also affect syntactic ambiguity resolution?

Page 31

Point of Disambiguation

"Pick up the empty martini glass ..."

[Displays: Late Match vs. Early Match]

Page 32

Point of Disambiguation

"Pick up the empty martini glass ..." *

[Displays: Late Match vs. Early Match]

Page 33

Point of Disambiguation

"Pick up the empty martini glass ..." *

[Displays: Late Match vs. Early Match]

* Assumes listeners will immediately interpret pre-nominal adjectives such as "empty" contrastively (i.e., there should be one empty X and one non-empty X). See Sedivy, Tanenhaus, Chambers & Carlson (1999), Cognition, for supporting evidence.

Page 34

"Pick up the empty martini glass ..."

[Figure: Late Match condition; proportion of fixations to Target, Competitor, and Other over time since onset of the determiner (ms).]

Page 35

"Pick up the empty martini glass ..."

[Figures: proportion of fixations to Target, Competitor, and Other over time since onset of the determiner (ms), for the Late Match and Early Match conditions.]

Page 36

(Same figures as page 35: Late Match and Early Match fixation proportions.)

Page 37

Referential domains are dynamically updated taking into account action-specific affordances. (Chambers et al., JML, 2002)

[Display: big can (potential target), small can (potential target), bowl (unique competitor), cube (small OR big)]

"Pick up the cube. Now put it in the/a can."

[Figure: eye-movement latencies (ms) to the target, Definite vs. Indefinite, in the Two-Compatible-Targets (small cube) condition.]

Page 38

(Same display and results as page 37: Chambers et al. (2002), Two-Compatible-Targets condition.)

Page 39

Referential domains are dynamically updated taking into account action-specific affordances. (Chambers et al., JML, 2002)

[Display: big can (potential target), small can (potential target), bowl (unique competitor), cube (small OR big)]

"Pick up the cube. Now put it in the/a can." / "Could you put it in the/a can?"

[Figure: eye-movement latencies (ms) to the target, Definite vs. Indefinite, comparing the One-Compatible-Target (big cube) and Two-Compatible-Targets (small cube) conditions.]

Size matters, but only when you have to do it. (Chambers et al., in press, Cosmopolitan)

Page 40

"Put the apple on the towel in the box."
"Put the apple that's on the towel in the box."

[Figure: proportion of trials with eye movements to the incorrect goal, for the Ambiguous vs. Unambiguous instruction.]

Page 41

Multiple referents eliminate the goal garden-path

"Put the apple on the towel in the box."

[Figure: proportion of trials with eye movements to the incorrect goal, Ambiguous vs. Unambiguous instruction, One-Referent Context.]

Page 42

Multiple referents eliminate the goal garden-path

"Put the apple on the towel in the box."

[Figure: proportion of trials with eye movements to the incorrect goal, Ambiguous vs. Unambiguous instruction, One-Referent vs. Two-Referent Contexts.]

Page 43

Pour the egg in the bowl over the flour.
Pour the egg that's in the bowl over the flour.

Chambers et al. (2004), JEP:LMC

[Display: target egg, two compatible eggs, true location, false location]

[Figure: Ambiguous vs. Unambiguous ("that's") instructions, One-Compatible vs. Two-Compatible conditions.]

Page 44

(Same display and results as page 43: Chambers et al. (2004), two compatible eggs.)

Page 45

Pour the egg in the bowl over the flour.
Pour the egg that's in the bowl over the flour.

Chambers et al. (2004), JEP:LMC

[Display: target egg, one compatible egg, true location, false location]

Page 46

Pour the egg in the bowl over the flour.
Pour the egg that's in the bowl over the flour.

Chambers et al. (2004), JEP:LMC

[Display: target egg, true location, false location]

[Figure: Ambiguous vs. Unambiguous ("that's") instructions, One-Compatible vs. Two-Compatible conditions, with non-significant (ns) comparisons marked.]

Page 47

Pour the egg in the bowl over the flour.
Pour the egg that's in the bowl over the flour.

Chambers et al. (2004), JEP:LMC

[Display: target egg, true location, false location]

[Figure: Ambiguous vs. Unambiguous ("that's") instructions, One-Compatible vs. Two-Compatible conditions.]

Page 48

Pour requires a liquid theme: could the effects be restricted to information coded within lexical representations?

"Put the whistle (that's) on the clipboard in the box"

On some trials the participant was handed an instrument, e.g., a hook.

[Figure: looks to the competitor goal, Ambiguous vs. Unambiguous instruction, with vs. without the instrument.]

Page 49

Hypothetical context for the utterance "After John put the pencil below the big apple, he put the apple on top of the towel. Then he put it next to the whistle," used to illustrate the implausibility of standard assumptions about context-independent comprehension. Note that the small (red) apple is intended to be a more prototypical apple than the large (green) apple.

Page 50

Dynamically updated, pragmatically defined contexts can affect the earliest moments of syntactic ambiguity resolution.

Aside about modularity

Strongest evidence to date against Fodorian encapsulation.

Fodor's modules allow feedback between subsystems, but not from outside the language system:

"To show that that system is penetrable (hence informationally unencapsulated), you would have to show that its processes have access to information that is not specified at any of the levels of representation that the language input system computes…" (Fodor, p. 77)

Page 51

Can dynamically updated context take into account the speaker's perspective and actions, or is initial processing egocentric?

Page 52

Joint action or "radical" egocentrism?

Work in the language-as-action tradition, including work in formal and computational semantics and pragmatics, assumes that interlocutors seek and use information about each other's likely intentions, commitments, and knowledge, distinguishing between information that is in common ground and information that is privileged for each participant. Recently, however, several lines of investigation have suggested that whether knowledge is mutual or private may have little effect on real-time language processing.

Page 53

What about considerations of speaker and addressee information states, likely knowledge, private vs. shared knowledge, commitments, etc.?

How do we account for a wide range of phenomena without these notions (e.g., questions vs. declarative questions vs. declaratives)?

Is the coffee ready?
The coffee is ready.
The coffee is ready?

Page 54

Keysar, Barr, Balin & Brauner (1998): "Pick up the small candle"

Participants considered the privileged small candle. BUT the object in privileged ground is a better fit to the description than the shared object.

Page 55

Hanna, Tanenhaus & Trueswell (2003): "Now put the blue triangle on the red one"

Target and competitor are the same. Found an effect of common ground.

Page 56

Hanna, Tanenhaus & Trueswell (2003): "Now put the blue triangle on the red one"

Problem 1: infelicitous instruction. Problem 2: global ambiguity. Perhaps listeners integrate this information only as a last resort?

(Nadig & Sedivy 2002 suffer from the same problem)

Page 57

Referential domains and perspective (Heller, Grodner & Tanenhaus)

"Put the big duck on the bottom"

[Displays: One-contrast vs. Two-contrast conditions, with objects in Shared vs. Privileged ground]

egocentric: late POD; perspective: early POD

Page 58

Referential domains and perspective

"Put the big duck on the bottom"

Page 59

Referential domains in interactive conversation

Tasks need to be truly collaborative.
Language must be created by the interlocutors (non-scripted).
Need a real-time measure.

This is a methodological nightmare.

Page 60

Questions study
Brown-Schmidt, Gunlogson & Tanenhaus (under review)

[Gameboard with sub-areas A, B, C and players PA, PB]

Animals: horses, cows, pigs
Accessories: hat, shoes, glasses

No adjacent pair in a row or column can have the same animal and/or the same accessories.
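
A minimal sketch (an illustration, not the authors' materials code) of the adjacency constraint on the gameboard; the grid representation and the reading of "and/or" as ruling out both repeated animals and repeated accessories between neighbors are assumptions.

```python
ANIMALS = {"horse", "cow", "pig"}
ACCESSORIES = {"hat", "shoes", "glasses"}

def valid_board(board):
    """board: dict mapping (row, col) -> (animal, accessory); empty cells omitted."""
    for (r, c), (animal, acc) in board.items():
        for nr, nc in ((r + 1, c), (r, c + 1)):      # right and down neighbors
            if (nr, nc) in board:
                n_animal, n_acc = board[(nr, nc)]
                if animal == n_animal or acc == n_acc:
                    return False
    return True

# Toy usage on a 2x2 corner of a board
ok = {(0, 0): ("horse", "hat"), (0, 1): ("cow", "shoes"),
      (1, 0): ("pig", "glasses"), (1, 1): ("horse", "hat")}
bad = dict(ok, **{(0, 1): ("horse", "shoes")})       # repeats "horse" next to (0, 0)
print(valid_board(ok), valid_board(bad))             # True False
```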

Page 61


Page 62

Proportion of questions ("What's above the cow with the hat?")

Common Ground: .00 (.01)
Speaker's Privileged Ground: .06 (.11)
Addressee's Privileged Ground: .94 (.11)

Page 63

Referential domains in interactive conversation

[Figure: proportions (0-0.8) in the Baseline and Noun regions for Shared, Addressee-Private, and Speaker-Private items.]

Page 64

Perspective with questions

Late point-of-disambiguation trials:
(E) What's in the bottom middle? (Participant: A horse with glasses)
(E) What's above the cow with the hat?

Early point-of-disambiguation trials:
(E) What's in the bottom middle? (Participant: A pig with shoes)
(E) What's above the cow with the hat?

[Figure: proportion of target-competitor fixations from -200 to 1400 ms relative to "cow" and "hat", showing the Late POD target advantage and the Early POD target advantage.]

Page 65

Part 3 (slides 65-77)

Visual World and Artificial Languages

Page 66

Part 3: Combining the visual world with artificial languages

Magnuson, Tanenhaus, Aslin & Dahan (2003), JEP:Gen
Creel, Aslin & Tanenhaus (2006, JEP:LMC; 2006, JML; in press, Cognition)
Wonnacott, Newport & Tanenhaus (Cognitive Psychology, in press)
Pirog, Tanenhaus & Aslin (submitted)

Why use artificial languages?
Control over:
- distributional properties of stimuli
- learning combined with processing
- semantics
- task environment

Artificial languages show signature effects in lexical processing (and sentence processing), with minimal bleeding from the native language.

Page 67

Artificial lexicon paradigm
Magnuson, Tanenhaus, Aslin & Dahan (2003), JEP:Gen

• 15 participants learned a 16-word lexicon
• Words refer to shapes
  – 7 contiguous cells randomly filled in a 5x5 grid
  – Random word → picture mapping for each subject
• Four sets like: pibo, pibu, dibo, dibu
• Allows high- and low-frequency (HF vs. LF) items with HF or LF neighbors
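
A minimal sketch of the lexicon design just described, for illustration only: four sets of four CVCV items in which words share onsets (cohorts, e.g., pibo-pibu) or differ only in onset (e.g., pibo-dibo), crossed with high vs. low training frequency. The onset/body inventories beyond the pibo set and the frequency counts are assumptions.

```python
from itertools import product

ONSET_PAIRS = [("p", "d"), ("t", "b"), ("k", "g"), ("m", "n")]            # assumed onsets
BODIES = [("ibo", "ibu"), ("ota", "oti"), ("ade", "adi"), ("uko", "uki")]  # assumed word bodies

def build_lexicon():
    """Return four sets of four words, e.g., ['pibo', 'pibu', 'dibo', 'dibu']."""
    lexicon = []
    for (on1, on2), (end1, end2) in zip(ONSET_PAIRS, BODIES):
        lexicon.append([on + end for on, end in product((on1, on2), (end1, end2))])
    return lexicon

def assign_frequency(word_set):
    """Cross HF/LF targets with HF/LF neighbors within a set (illustrative counts)."""
    hf, lf = 20, 5  # presentations during training
    return dict(zip(word_set, (hf, lf, hf, lf)))

sets = build_lexicon()
print(sets[0])                   # ['pibo', 'pibu', 'dibo', 'dibu']
print(assign_frequency(sets[0]))
```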

Page 68

Replicated cohort and rhyme effects

[Figures: Day 1 and Day 2]

Page 69

Effects modulated by target and competitor frequency: high-frequency targets

Page 70

Absent neighbors compete

Page 71

Summary

• After short training:
  – Replicated cohort and rhyme effects
  – Replicated frequency effects (Dahan, Magnuson & Tanenhaus, 2001)
  – Measured the time course of the interaction of item and neighbor frequency
• Effects not limited to displayed items

Page 72

fMRI with an artificial language

• Lexicon
  – 8 CVCV items refer to novel shapes
  – 8 CVCVCV items refer to changes that happen to the shapes
    • 4 motion (horizontal oscillation, vertical oscillation, contract/expand, rotate)
    • 4 surface (black, white, marble, speckle)
  – CVCVCV items form 4 cohort pairs
    • motion-motion, surface-surface, motion-surface (x2)

Page 73

Experiment Details

• Training
  – 1 hr/day for 3 days prior to scan
    • "show" task: participant sees shape + change, hears name of shape and change
    • "tell" task: participant hears name of shape and change, sees shape and change, must indicate match / no match (shown correct pairing as feedback)
  – All participants at ceiling levels of accuracy (> 98%) on day 3 (inside mock magnet)

Page 74

Experiment Details

• Test Session
  – Modified version of the "tell" task
    • Computer-paced
    • Reduced feedback
    • Participants still highly accurate (95% correct)
  – MT localizer scan
  – Structural scan

Page 75

[Trial timeline: start of trial → auditory interval (3-7 s) → visual interval (2-4 s) → response interval + feedback (3 s) → ITI (1-9 s); fixation cross (+) displayed.]
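
A minimal sketch of how one trial's jittered timing could be drawn for the event-related design above; uniform sampling within each range is an assumption made for illustration.

```python
import random

def draw_trial_timing(rng=random):
    """Draw one trial's interval durations (seconds) for the jittered design."""
    return {
        "auditory_s": rng.uniform(3, 7),
        "visual_s": rng.uniform(2, 4),
        "response_feedback_s": 3.0,
        "iti_s": rng.uniform(1, 9),
    }

trial = draw_trial_timing()
print({k: round(v, 1) for k, v in trial.items()},
      "total:", round(sum(trial.values()), 1), "s")
```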

Page 76

• Blue: more active for motion events than texture events
• Yellow: more active for motion words than texture words
• Green: area of overlap

Page 77

Lunch!