Mélange: Space Folding for Multi-Focus Interaction

Description

Interaction and navigation in large geometric spaces typically require a sequence of pan and zoom actions. This strategy is often ineffective and cumbersome, especially when trying to study several distant objects. We propose a new distortion technique that folds the intervening space to guarantee visibility of multiple focus regions. The folds themselves show contextual information and support unfolding and paging interactions. Compared to previous work, our method provides more context and distance awareness. We conducted a study comparing the space-folding technique to existing approaches, and found that participants performed significantly better with the new technique.

Transcript of Mélange: Space Folding for Multi-Focus Interaction

Mélange: Space Folding for Multi-Focus Interaction

Niklas Elmqvist, Nathalie Henry, Yann Riche, Jean-Daniel Fekete

CHI 2008 – Florence, Italy

Motivation

• Large visual spaces in information visualization
  – Social networks

• Several focus points
  – High precision
  – High detail
  – Overview?

• Multi-focus interaction


Multi-Focus Interaction

Example: Planning a trip to Florence

[Slide figure: map of the trip, with an annotation asking about the distance between the two foci]


Solution: Split-Screen

• Show the foci as two different views


Mélange

Demonstration!


Generalizing Requirements

G1: Guaranteed focus visibility
• Montreal and Florence

G2: Surrounding context visibility
• Local area around Montreal and Florence

G3: Intervening context awareness
• Relative positions, detours (interesting places?)

G4: Distance awareness
• Approximate trip distance

Related Work: Space Distortion

• Fisheye views
  – [Furnas 1986, Shoemaker 2007]

• Rubber sheet
  – [Sarkar 1993, Munzner 2003]

Related Work: Semantic Distortion

• DOITrees [Card 2002]
• SpaceTree [Plaisant 2002]


Space-folding using Mélange


Basic Idea

Visualizing Folds

[Figure: fold visualization with labels – open fold page, closed fold pages, mouse cursor, focus A, focus B]

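The fold can be pictured as an accordion: the space between the two foci is pressed into a small number of fold pages that recede into the screen, so both foci stay flat and fully visible while the intervening space remains present, only compressed. The following is a minimal illustrative sketch, not the paper's actual formulation; the parameters pageCount, pageDepth, and compressedWidth, and the zig-zag mapping itself, are assumptions made for this example. It maps a 1-D coordinate lying between two foci onto a folded position (screen x plus depth z):

```java
// Illustrative sketch only: fold the 1-D interval between two focus regions
// into an accordion of pages that recede into depth. Coordinates outside the
// interval stay flat; the interval's on-screen width shrinks to compressedWidth.
public final class FoldSketch {

    // Returns {screenX, z}. An even pageCount keeps the exit of the fold flat
    // (z returns to 0), so the mapping is continuous at both boundaries.
    public static double[] fold(double x, double foldStart, double foldEnd,
                                int pageCount, double pageDepth,
                                double compressedWidth) {
        if (x <= foldStart) return new double[] { x, 0.0 };
        if (x >= foldEnd)   return new double[] { x - (foldEnd - foldStart) + compressedWidth, 0.0 };

        double t = (x - foldStart) / (foldEnd - foldStart); // 0..1 inside the fold
        double pagePos = t * pageCount;                     // which fold page we are on
        int page = (int) Math.floor(pagePos);
        double local = pagePos - page;                      // 0..1 within that page

        // Pages alternate between receding into depth and coming back out.
        double z = (page % 2 == 0) ? local * pageDepth : (1.0 - local) * pageDepth;
        double screenX = foldStart + t * compressedWidth;
        return new double[] { screenX, -z };                // negative z recedes into the screen
    }

    public static void main(String[] args) {
        // Example: fold the 1000-unit gap between two foci down to 100 units on screen.
        for (double x = 0; x <= 1200; x += 200) {
            double[] p = fold(x, 100, 1100, 4, 50, 100);
            System.out.printf("x=%6.1f -> screenX=%6.1f  z=%6.1f%n", x, p[0], p[1]);
        }
    }
}
```

Because this mapping is continuous at the page boundaries, a shape that straddles a fold stays visually connected, which is one reason the shape splitting described on the Implementation slide can look seamless.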

Implementation

• Java platform built using OpenGL
  – JOGL bindings

• Scene graph with geometrical shapes
  – Arcs, circles, rectangles (textured), etc.

• Focus points (one primary) controlled by user and/or application

• Mélange canvas guarantees visibility
  – Seamlessly splits shapes into subshapes (see the sketch below)

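Below is a minimal sketch of the shape-splitting step from the last bullet, assuming a hypothetical scene-graph API; the names Shape, FoldRegion, MelangeCanvasSketch, clipped, and splitAcrossFolds are illustrative and not taken from the actual code base. The idea is to cut each shape at fold boundaries so that every piece lies entirely in one region (a flat area or a single fold) and can be drawn with that region's transform:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical scene-graph types; names are illustrative only.
interface Shape {
    double minX();                          // horizontal extent of the shape
    double maxX();
    Shape clipped(double from, double to);  // sub-shape restricted to [from, to]
}

final class FoldRegion {
    final double start, end;                // world-space interval folded away
    FoldRegion(double start, double end) { this.start = start; this.end = end; }
}

final class MelangeCanvasSketch {
    // Cuts a shape at fold boundaries so that every resulting piece lies either
    // entirely in a flat region or entirely inside one fold. Assumes the folds
    // are sorted by start position and do not overlap.
    static List<Shape> splitAcrossFolds(Shape shape, List<FoldRegion> folds) {
        List<Shape> parts = new ArrayList<>();
        double cursor = shape.minX();
        for (FoldRegion fold : folds) {
            if (fold.end <= cursor || fold.start >= shape.maxX()) continue;      // no overlap
            if (fold.start > cursor) parts.add(shape.clipped(cursor, fold.start)); // flat piece
            parts.add(shape.clipped(Math.max(fold.start, cursor),
                                    Math.min(fold.end, shape.maxX())));            // folded piece
            cursor = Math.min(fold.end, shape.maxX());
        }
        if (cursor < shape.maxX()) parts.add(shape.clipped(cursor, shape.maxX())); // trailing flat piece
        return parts;
    }
}
```

Splitting up front keeps the rendering path simple in this sketch: every sub-shape is an ordinary primitive drawn under one region's transform, which is one plausible way to realize the "seamless" behavior the slide describes.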

Evaluation

Evaluation - Overview

• Controlled experiment
• Based on social network analysis: compare the immediate context of two distant connected nodes

MatLink: social network analysis using matrices [Henry & Fekete @ Interact 2007]


Evaluation - Overview

• 1 trial consisted of 3 tasks
  – (T1) Find the connected twin of the source node
  – (T2) Estimate the distance between the two nodes
  – (T3) Estimate the number of contextual features

[Figure: trial layout showing the source node, distractor nodes, and the connected twin node]



[Figure: trial layout showing the source node, the target node, and contextual features, with one screen as the unit of distance]


Evaluation - Factors

• 3×3×2×2 within-subject design (36 conditions)

• 3 presentation techniques, with pan-and-zoom (PZ) interaction
  – Single Viewport (SV)
  – Split-Screen Viewport (SSV)
  – Mélange (M)

• 3 off-screen distances: 4, 8, or 16 screens (distance between the two nodes)

• 2 distractor densities: low or high, 1 or 2 per screen (number of false targets between the two nodes)

• 2 contextual-feature densities: few (≤5) or many (>5) (number of contextual features between the two nodes)


Evaluation - Hypotheses

• Mélange’s presentation of context does not interfere with navigation

• Mélange’s presentation of context provides a significant improvement in:
  – Distance awareness
  – Contextual features awareness

Evaluation – Results Outline

• Time
  – Finding the twin of the source node (T1): no significant difference in completion time between techniques
  – Estimating the distance between the nodes (T2): Mélange is significantly faster than both Single Viewport and Split-Screen Viewport (F(2,22) = 8.695, p ≤ .05)


[Figure: combined completion time for T2 + T3 by technique; F(2,22) = 9.855, p < 0.001]

Evaluation – Results Outline

• Correctness
  – Estimating the distance between the nodes (T2): Mélange is significantly better than Split-Screen Viewport and similar to Single Viewport


Results Summary

• Mélange provides
  – G1: Guaranteed focus visibility
  – G2: Surrounding context visibility

• Mélange is significantly more efficient in supporting contextual tasks
  – G3: Intervening context awareness
  – G4: Distance awareness

• Mélange is not slower than the other techniques for navigation



Conclusions and Future Work

• Presentation technique for supporting multi-focus interaction
• Folds 2D space into 3D to guarantee visibility of focus points
• Evaluation shows significant improvement over the state of the art
• Future: interaction for large spaces, real-world applications


Questions?

• Contacts (@lri.fr):
  – Niklas Elmqvist – elm
  – Nathalie Henry – nhenry
  – Yann Riche – riche
  – Jean-Daniel Fekete – fekete

Project website: http://www.lri.fr/~elm/projects/melange.html

Thanks to our drummer, Pierre


Related Work

Technique                                         G1   G2   G3   G4
Zoom and pan [Appert 2006, Igarashi 2000]         -    -    -    -
Split-screen [Shoemaker 2007]                     Y    Y    -    -
Fisheye views [Furnas 1986, Shoemaker 2007]       Y    P    Y    -
Rubber sheet [Sarkar 1993, Munzner 2003]          P    P    Y    P
Semantic distortion [Card 2002, Plaisant 2002]    Y    Y    Y    -

(Y = supported, P = partially supported, - = not supported)