Emoscill Outline

Overview

• Motivations
  – The role of top-down processing during perception
  – Mood influences during perception

• Study design and aims

• Results so far
  – Mood induction check
  – Behavioral Data
  – ROI analysis


Preview/redux of this section

• Top-down feedback conveying a priori knowledge can facilitate visual perception

• We are examining a particular top-down mechanism in which low spatial frequency (LSF) information about a stimulus is rapidly conveyed to prefrontal regions. This LSF information triggers predictions about stimulus identity, which guide ongoing processing in object recognition areas of inferior temporal cortex.

• The purpose of the present study is to examine whether differences in participant mood facilitate or inhibit this top-down mechanism. We are also interested in any other mood effects on brain function during perception.

Overview

• Motivations
  – The role of top-down processing during perception
  – Mood influences during perception

• Study design and aims

• Results so far
  – Mood induction check
  – Behavioral Data
  – ROI analysis

How Does the Brain Recognize Objects?

One line of research addressing this question has focused on stimulus-driven analyses in the ventral visual stream.

Basic features of the visual stimulus are extracted in V1.

This information is refined to higher levels of abstraction along the ventral visual stream.

A visual representation of the input is formed (Tanaka, 1996) and matched with a representation stored in memory.

(Poggio & Bizzi, 2004)

However, this stimulus-driven narrative is incomplete and cannot fully explain our sophisticated visual ability (Poggio & Bizzi, 2004).

How Does the Brain Recognize Objects?

Some feats achieved by the visual brain…

The brain…

• can distinguish salient 'figures' from 'ground'

• can recognize occluded objects with ease

• can identify novel objects as exemplars of previously learned categories

• can do all of this in well under half a second


These challenges would overwhelm a purely data-driven system operating without any a priori assumptions.

The role of top-down processing

• It has long been suggested that the brain must use a priori knowledge (predictions) to guide perception. Behavioral paradigms have provided empirical support (e.g., Biederman, 1973; Palmer, 1975; among many others)

• The feedback connections found throughout the ventral stream provide an anatomical basis for this facilitation (e.g., Rempel-Clower & Barbas, 2000)

• Recently, functional imaging techniques have shed light on specific mechanisms for various top-down facilitation processes (e.g., Engel, Fries, & Singer, 2001; Ruff et al., 2006; Kveraga et al., 2007)

Top-down facilitation based on low spatial frequencies

• Bar (2003) proposed a specific top-down mechanism for facilitating object recognition based on low spatial frequencies (LSFs) extracted from visual stimuli

• Subsequent empirical support is described in Bar et al. (2006) and Kveraga, Boshyan, & Bar (2007)

The Proposal

An illustration of the top-down facilitation model: a partially processed, LSF image of the visual input is rapidly projected to OFC from early visual regions, while detailed, slower analysis of the visual input proceeds along the ventral visual stream. This 'gist' image activates predictions about candidate objects similar to the image in their LSF appearance, and these predictions are fed back to ventral object recognition regions to facilitate bottom-up processing.

The Proposal

Magnocellular cells of the dorsal visual stream may convey information from early visual areas to OFC, consistent with their sensitivity to coarse visual information and rapid conduction speeds. In contrast, the parvocellular cells of the ventral visual stream are known to convey detailed visual information more slowly.

The Proposal

Parvocellular cells:
• Fine spatial detail
• Slower conduction speeds
• Color sensitive
• Insensitive to luminance

Magnocellular cells:
• Coarse spatial detail
• Fast conduction speeds
• Achromatic
• Sensitive to luminance

Overview

• Motivations
  – The role of top-down processing during perception
  – Mood influences during perception

• Study design and aims

• Results so far
  – Mood induction check
  – Behavioral Data
  – ROI analysis

LSF-based Top-Down Facilitation and Mood

• In the proposed LSF top-down model, the ability of the brain to engage in associative processing is key

• When the global information extracted from the visual stimulus reaches OFC, the brain must essentially answer the question "What is this like?" That is (less anthropomorphically), LSF information reaching OFC may trigger the retrieval of associated object representations from memory

• Previous research suggests that mood may affect such associative processing

Mood and Associative Processing

• Positive mood promotes, and negative mood impairs, associative processing (Storbeck & Clore, 2008; Bless et al., 1996; Isen et al., 1985; Challis & Krane, 1988)

• This literature originally inspired the hypothesis that positive mood would enhance, and negative mood inhibit, the top-down facilitation described above by influencing the associative, predictive processing occurring in OFC

• However, a recent study (Huntsinger, Clore, & Bar-Anan, 2010) suggests the proposed link between mood and associations is not so clear-cut. And so far, our data suggest greater response amplitudes in OFC and ventral visual areas, and greater OFC/ventral stream phaselocking, under conditions of negative mood

Overview

• Motivations
  – The role of top-down processing during perception
  – Mood influences during perception

• Study design and aims

• Results so far
  – Mood induction check
  – Behavioral Data
  – ROI analysis

Preview/redux of this section

• Subjects performed a simple object recognition task in which they had to decide whether or not objects presented on a projector screen would fit in a typical shoebox. This task was chosen simply because it required subjects to pay attention to the stimuli.

• Object stimuli were primed by a near-identical stimulus specially designed to stimulate either the magnocellular cells believed to transmit information to OFC ('M-biased' stimuli) or the parvocellular cells of the ventral stream ('P-biased' stimuli; details to follow)

• Three variables were manipulated
  – Participant mood (positive or negative)
  – Prime type (M-biased, P-biased, or none)
  – Stimulus onset asynchrony (SOA; the number of milliseconds between the onset of the M or P prime and the 'ordinary' stimulus, 50 or 100 ms)

Trial Timecourse (stimuli modified for visibility)

5 total trial types:
• M primed, 100 ms SOA
• P primed, 100 ms SOA
• M primed, 50 ms SOA
• P primed, 50 ms SOA
• Control condition (no prime)

[Figure: trial timeline: fixation (500 ms) → M- or P-biased image (100 or 50 ms) → unbiased image (1200 ms) → response period (500 ms) → intertrial interval (500-1500 ms)]
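For concreteness, the trial structure above can be written down as a small configuration. This is only a sketch with hypothetical names, not the experiment code; the durations are taken directly from the timeline.

```python
# Hypothetical encoding of the trial structure described above.
# Durations are in milliseconds, taken from the timeline figure.
TRIAL_SEGMENTS = [
    ("fixation", 500),
    ("prime", "SOA"),        # M- or P-biased image, shown for 50 or 100 ms
    ("target", 1200),        # unbiased image
    ("response", 500),
    ("iti", (500, 1500)),    # intertrial interval, jittered
]

TRIAL_TYPES = [
    {"prime": "M", "soa_ms": 100},
    {"prime": "P", "soa_ms": 100},
    {"prime": "M", "soa_ms": 50},
    {"prime": "P", "soa_ms": 50},
    {"prime": None, "soa_ms": None},   # control condition (no prime)
]
```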

M- and P-biased stimuli

M-biased stimuli
• Target differs from background in luminance but not color
• Should selectively stimulate (to an extent) the magnocellular cells thought to convey LSF information to OFC and, by extension, drive top-down processing

P-biased stimuli
• Target differs from background in color but not luminance
• Should selectively stimulate the parvocellular cells that dominate the ventral stream and, by extension, drive bottom-up processing

Full-spectrum targets
• Target differs from background in both color and luminance; no M or P bias

*Subjective luminance and color contrast detection thresholds were determined for each subject prior to the main task
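As an illustration only (not the actual stimulus code), M- and P-biased images of this kind could be generated from a binary object mask roughly as follows; the contrast values here stand in for the per-subject thresholds mentioned above.

```python
import numpy as np

def make_biased_stimulus(mask, bias, lum_contrast=0.1, col_contrast=0.1):
    """Return an RGB image in [0, 1]; mask is a 2D boolean target mask."""
    h, w = mask.shape
    img = np.full((h, w, 3), 0.5)              # mid-gray background
    if bias == "M":
        # Luminance contrast only: achromatic target, drives magno cells
        img[mask] = 0.5 + lum_contrast
    elif bias == "P":
        # Color contrast at (nominally) equal luminance: drives parvo cells
        img[mask, 0] = 0.5 + col_contrast      # more red
        img[mask, 1] = 0.5 - col_contrast      # less green
    else:
        # Full spectrum: both luminance and color contrast, no M/P bias
        img[mask] = [0.5 + lum_contrast, 0.5 - col_contrast, 0.5]
    return img
```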

The Proposal

Parvocellular cells:
• Fine spatial detail
• Slower conduction speeds
• Color sensitive
• Insensitive to luminance

Magnocellular cells:
• Coarse spatial detail
• Fast conduction speeds
• Achromatic
• Sensitive to luminance

Task Schedule (abbreviated)

1. Subject inserted into the MEG
2. Luminance and color contrast thresholds determined
3. Subjects complete PANAS #1 (a standardized mood measure)
4. Main task begins, composed of 5 runs, each consisting of:
   1) Mood induction period (1 min 30 sec of positive or negative music and images) with valence and arousal ratings (see next slide)
   2) 25 'shoebox' trials
   3) Steps 1 & 2 repeated 3 additional times, for 100 trials per run (thus, the entire task consists of 500 trials, 100 per condition)
5. PANAS #2
6. End of experiment

Valence and Arousal Ratings

Before and after every mood induction period, participants rated their valence (subjective feelings of positivity or negativity) and arousal (subjective energy level) on a scale of 0-10

MEG Acquisition Details

• 306-channel Neuromag Vectorview whole-head system (Elekta Neuromag Oy) housed in a three-layer magnetically shielded room (Imedco AG).

• Participant head position was monitored using four head-position indicator (HPI) coils affixed to the subject's head. The positions of the HPI coils, as well as those of multiple points on the scalp, were digitized with a magnetic digitizer (Polhemus FastTrack 3D) in a head coordinate frame defined by anatomical landmarks.

• Eye blinks were monitored with four electrooculogram (EOG) electrodes positioned above and beside the subjects' eyes. Data collected by the MEG, EOG, and HPI sensors were sampled at 600 Hz, band-pass filtered in the range of 0.1-200 Hz, and stored for offline analysis.

• So far, MEG data have been analyzed with the MNE software package (Hämäläinen 2005).
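The slides only state that the MNE package was used; as a rough sketch of the corresponding preprocessing in MNE-Python (the file name, trigger channel, and event codes below are hypothetical):

```python
import mne

# Load the Neuromag recording (hypothetical file name)
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
raw.filter(l_freq=0.1, h_freq=200.0)   # match the 0.1-200 Hz acquisition band

# Epoch around stimulus onset (hypothetical event codes)
events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"m_100": 1, "m_50": 2, "p_100": 3, "p_50": 4, "control": 5}
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.6,
                    baseline=(None, 0), reject=dict(eog=150e-6), preload=True)

evoked = epochs["control"].average()   # evoked response for one condition
```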

Overview

• Motivations
  – The role of top-down processing during perception
  – Mood influences during perception

• Study design and aims

• Results so far
  – Mood induction check
  – Behavioral Data
  – ROI analysis

Aim 1: Determine whether mood affects top-down information flow

• As outlined, positive mood may facilitate LSF-based top-down predictions, while negative mood may hamper this mechanism. (As mentioned, preliminary results suggest that the reverse may in fact be true.)

• To test this, we can begin by comparing current flow in OFC and ventral stream areas across mood conditions.
  – Increased early OFC amplitude for positive mood would suggest an increased prefrontal role during recognition.
  – Increased ventral stream activity with negative mood (with little prefrontal activity) would suggest greater processing demands in the absence of top-down facilitation.

• Ideal finding: greater early visual→OFC and OFC→ventral stream phaselocking for positive mood, and greater early visual→ventral stream phaselocking for negative mood

Aim 2: Characterize the time course of M and P information flow

• The model proposes that magnocellular cells convey LSF information to OFC while parvocellular cells convey finer-grained information along the ventral stream

• We have already used MEG to show that LSF stimuli elicit a rapid response in OFC (Bar et al., 2006).

• Demonstrating similar early preferential activation of OFC by m-biased stimuli would support the proposed role of m cells in conveying information quickly to OFC

Aim 3: Analyze the effect of mood on alpha power

• EEG studies have shown that negative mood is associated with decreased alpha power across a range of electrode sites (Kuhbandner et al., 2009; Everhart et al., 2003)

• Localizing the cortical generators of this effect with MEG would be interesting in its own right

• We have observed a consistent (although nonsignificant) trend for subjects induced into a negative mood to display faster reaction times. Perhaps this is linked to alpha in some way?


Overview

• Motivations

• Study design and aims

• Results so far (N = 21; positive subjects = 10, negative subjects = 11)
  – Mood induction check
  – Behavioral Data
  – ROI analysis

Positive mood induction (MI) subjects consistently reported more positive valence than did negative MI subjects (2 [mood] × 5 [run] ANOVA, main effect of mood: p < 1e-5).

Analysis of PANAS scores confirmed that the groups did not differ in valence prior to the experiment.

Valence key: 10 = most positive, 5 = neutral, 0 = most negative

[Figure: mean valence ratings by run]

Positive MI subjects consistently reported higher arousal than did negative MI subjects (2 [mood] × 5 [run] ANOVA on arousal ratings, main effect of mood: p < .05).

Arousal key: 10 = highest arousal, 5 = neutral, 0 = lowest arousal

[Figure: mean arousal ratings by run]
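A minimal sketch of this mood induction check (2 between-subject mood levels × 5 within-subject runs), assuming a long-format table with hypothetical column names and using pingouin's mixed ANOVA:

```python
import pandas as pd
import pingouin as pg

# Hypothetical file with columns: subject, mood ("pos"/"neg"),
# run (1-5), valence (0-10), arousal (0-10)
df = pd.read_csv("mood_ratings.csv")

# 2 (mood, between) x 5 (run, within) mixed ANOVA on each rating
aov_valence = pg.mixed_anova(data=df, dv="valence", within="run",
                             between="mood", subject="subject")
aov_arousal = pg.mixed_anova(data=df, dv="arousal", within="run",
                             between="mood", subject="subject")
print(aov_valence.round(4))   # look for the main effect of mood
```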

Overview

• Motivations

• Study design and aims

• Results so far (N = 21; positive subjects = 10, negative subjects = 11)
  – Mood induction check
  – Behavioral Data
  – ROI analysis


Reaction Time

• Mixed-model ANOVA: 2 (Mood: positive or negative) × 2 (Prime: M or P) × 2 (SOA: 50 or 100 ms); one way to fit this model is sketched below

• Main effect of prime type (faster for M primes, p < .05)

• Main effect of SOA (faster for 100 ms SOA, p < 1e-6)

• No interactions or main effect of mood (although negative mood subjects tended to be faster across conditions)
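One way to fit the 2 × 2 × 2 design above is a mixed-effects regression with a random intercept per subject, a stand-in for the mixed-model ANOVA rather than the exact analysis used; the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format file: one row per trial, with columns
# subject, mood (pos/neg), prime (m/p), soa (50/100), rt (ms)
rt = pd.read_csv("reaction_times.csv")

# Fixed effects for mood (between-subjects), prime and SOA
# (within-subjects), plus all interactions; random intercept per subject
model = smf.mixedlm("rt ~ C(mood) * C(prime) * C(soa)",
                    data=rt, groups=rt["subject"]).fit()
print(model.summary())
```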


Overview

• Motivations

• Study design and aims

• Results so far (N = 21; positive subjects = 10, negative subjects = 11)
  – Mood induction check
  – Behavioral Data
  – ROI analysis (evoked responses & phaselocking)

ROI Selection – early visual area V1

Selection steps:
• For a first pass, we simply used the automatic FreeSurfer parcellation of V1

[Figure: left hemisphere, medial view; right hemisphere, medial view]

ROI Selection – Ventral Stream

Selection steps:
• Loaded the automatic FreeSurfer fusiform parcellation
• Circled the peak signal-to-noise ratio within the fusiform anatomical label between 100 and 200 ms post-stimulus (all subjects grouped together, not separated by mood)

[Figure: left hemisphere, ventral view; right hemisphere, ventral view]

ROI Selection – OFC

Selection steps:
• Loaded the automatic FreeSurfer OFC parcellation (medial and lateral areas)
• Circled the peak signal-to-noise ratio within the OFC anatomical label between 100 and 200 ms post-stimulus (all subjects grouped together, not separated by mood)

[Figure: left hemisphere, ventral view; right hemisphere, ventral view]
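A rough sketch of how ROIs like these can be defined and their time courses extracted in MNE-Python, assuming the FreeSurfer parcellations described above; the subject name, path, and label name are hypothetical, and the per-trial source estimates are assumed to exist already.

```python
import mne

def roi_timecourse(stcs, src, subject, label_name, subjects_dir):
    """Extract per-trial ROI time courses from source estimates.

    stcs: per-trial SourceEstimates, e.g. from
    mne.minimum_norm.apply_inverse_epochs(...); src: the source space
    used by the inverse operator.
    """
    # Load anatomical labels from the automatic FreeSurfer parcellation
    labels = mne.read_labels_from_annot(subject, parc="aparc",
                                        subjects_dir=subjects_dir)
    label = [lab for lab in labels if lab.name == label_name][0]
    # mean_flip averages within the label, sign-flipping by dipole orientation
    return mne.extract_label_time_course(stcs, label, src, mode="mean_flip")

# Hypothetical usage:
# fusiform_ts = roi_timecourse(stcs, src, "subject01", "fusiform-lh",
#                              "/path/to/subjects_dir")
```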

[Figure legend: y-axis = tesla/cm; yellow shading = timepoints with significant uncorrected t-tests between groups; cluster p val = p value from a Monte Carlo test (not discussed); red and blue shading = 95% confidence intervals. Only control trials are shown for V1 and the other ROIs. Intended as a broad overview of the sorts of analyses done so far.]

*One negative mood subject was excluded because mean OFC amplitude over the time window of interest was > 3 SD above the group mean

Functional OFC↔fusiform phaselocking by hemisphere and mood: possibly greater phaselocking for negative mood subjects in the right hemisphere (stats pending)
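A minimal sketch of how phaselocking between two ROI time courses can be quantified, using the standard phase-locking value (Lachaux et al.), which may or may not be the exact measure used here; it assumes band-pass filtered trial arrays for each ROI.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV across trials at each timepoint.

    x, y: arrays of shape (n_trials, n_times), already band-pass filtered
    to the frequency band of interest (e.g., one for OFC, one for
    fusiform). Returns an (n_times,) array of values in [0, 1].
    """
    phx = np.angle(hilbert(x, axis=-1))   # instantaneous phase, per trial
    phy = np.angle(hilbert(y, axis=-1))
    # Magnitude of the mean phase-difference vector across trials
    return np.abs(np.mean(np.exp(1j * (phx - phy)), axis=0))
```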

Present state of analysis

• As of now, we can:
  – Define ROIs and extract trial-by-trial or averaged data
  – Do rudimentary time-series analysis by identifying response peaks and testing for amplitude or latency effects via ANOVA
  – Compute power and phaselocking statistics for individual ROIs and calculate p values using cluster-mass Monte Carlo tests (Maris & Oostenveld, 2007; see the sketch after this list)
  – Create whole-brain 'dSPM maps' (essentially, signal-to-noise z scores) using the MNE software, although the statistical interpretation of these maps seems to be up for debate within the community

• Goals for the analysis:
  – Create whole-brain analyses identifying the effects of mood, prime, SOA, and their interactions on current amplitude or frequency power, without being overwhelmed by the multiple comparisons over space and time
  – Analyze the ROI time-series data in a more sensitive manner
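A minimal sketch of the cluster-mass Monte Carlo test mentioned above (Maris & Oostenveld, 2007), using MNE-Python's implementation on placeholder data for the two mood groups:

```python
import numpy as np
from mne.stats import permutation_cluster_test

# Placeholder data: (n_subjects, n_times) ROI amplitude time courses
rng = np.random.default_rng(0)
pos_group = rng.standard_normal((10, 400))   # 10 positive mood subjects
neg_group = rng.standard_normal((11, 400))   # 11 negative mood subjects

# Cluster-mass permutation F-test between groups over time
t_obs, clusters, cluster_pvals, h0 = permutation_cluster_test(
    [pos_group, neg_group], n_permutations=1000, tail=1, seed=42)

for cluster, p in zip(clusters, cluster_pvals):
    print(cluster, p)   # each cluster's time extent and Monte Carlo p value
```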