
Seeking common ground while reserving differences in gesture elicitation studies

Huiyue Wu 1,2 & Jiayi Liu 1 & Jiali Qiu 1 & Xiaolong (Luke) Zhang 3

Received: 27 March 2018 / Revised: 10 October 2018 / Accepted: 6 November 2018 / Published online: 21 November 2018
© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract
Gesture elicitation studies have been frequently conducted in recent years for gesture design. However, most elicitation studies adopt the frequency ratio approach to assign the top gestures derived from end-users to the corresponding target tasks, which may cause the results to get caught in local minima, i.e., the gestures discovered in an elicitation study are not the best ones. In this paper, we propose a novel approach of seeking common ground while reserving differences in gesture elicitation research. To verify this point, we conducted a four-stage case study on the derivation of a user-defined mouse gesture vocabulary for web navigation and provide new empirical evidence for our proposed method, including: 1) gesture disagreement is a serious problem in elicitation studies, e.g., the chance for participants to produce the same mouse gesture for a given target task without any restriction is very low, below 0.26 on average; 2) offering a set of gesture candidates can improve consistency; and 3) benefiting from the hindsight effect, some unique but highly teachable gestures produced in the elicitation study may also have a chance to be chosen as top gestures. Finally, we discuss how these findings can be applied to inform gesture-based interaction design.

Keywords: User-defined gestures · Elicitation study · Mouse gesture · Web navigation

Multimedia Tools and Applications (2019) 78:14989–15010
https://doi.org/10.1007/s11042-018-6853-0

This work was supported by the National Natural Science Foundation of China under Grant Nos. 61772564 and 61202344, and by funding from the China Scholarship Council (CSC).

* Huiyue Wu, [email protected]

1 The School of Communication and Design, Sun Yat-sen University, Guangzhou, China
2 Guangdong Key Laboratory for Big Data Analysis and Simulation of Public Opinion, Guangzhou, China
3 College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA


1 Introduction

With computational power steadily growing and the rapid development of HCI (Human-Computer Interaction), computing has moved beyond the conventional WIMP (Window, Icon, Menu, and Pointing device) desktop paradigm, and input devices have moved beyond the traditional mouse and keyboard. For example, more and more systems offer a compelling way for end-users to employ their own gestures for interaction tasks such as manipulating virtual objects [6, 23] and interacting with mobile phones [11, 25], large displays [24], and smart home systems [4, 12, 14, 28, 31, 33].

Although gesture-based applications have been widely explored in recent years, there are still some challenges concerning the naturalness of gestures. For example, most existing systems only offer a limited number of gestures due to the immaturity of gesture recognition technologies, and many of those gestures, usually pre-defined by HCI experts, are arbitrarily associated with their corresponding tasks [32]. In addition, end-users often have little opportunity to participate in the design and development process. Consequently, such systems may not be able to recognize some gestures users naturally perform. These limitations have impeded broad adoption of gesture-based interaction in applications.

Natural human-computer interaction based on gesture input requires an accurate understanding of the intentions and preferences of end users. The elicitation study, a method developed in the field of participatory design, has been applied in a variety of emerging interactive systems to collect requirements and expectations by involving end-users in gesture design processes. However, traditional elicitation studies often suffer from the vocabulary disagreement problem [8], i.e., end-users' gesture proposals are often biased by their own preferences or by their experiences with prior user interfaces, such as WIMP or touch-based user interfaces. Therefore, it is unrealistic to expect that end-users will intuitively produce the same gesture for a given target task. This gesture disagreement problem can lead to a risk of poor system usability and low user acceptance.

To enhance gesture-based user interface design, we propose a new approach to seek common ground while reserving differences among user gesture preferences in gesture elicitation studies. Unlike existing elicitation studies, which only select the single gesture most favored by users for a target task, our approach designs gestures by considering the top gesture as well as other popular gestures. We argue that this approach can keep the process from being trapped in local minima, that is, from failing to uncover gestures that are better suited to the specified target tasks. To validate our method, we conducted a series of experimental studies on the development of a set of user-defined mouse gestures for web navigation. We believe that our findings can benefit research and design practices for gesture-based interfaces.

The rest of this paper is structured as follows. We first review related work and then introduce our research motivation. After we describe our experimental studies and report their results, we discuss our findings. Finally, we conclude the paper with the contributions of our research and possible future research directions.

2 Related work

Our work primarily concerns research related to the application of mouse gestures in web browsers and elicitation studies on gesture-based interaction in HCI. Thus, our review here focuses on work in these areas.


2.1 Mouse gesture-based systems for human-computer interaction

Various studies have examined issues concerning the efficiency of mouse gestures. By comparing task completion times of three methods (mouse flick gestures, mouse clicks, and shortcut keys), Dulberg et al. [5] found that mouse flick gestures are faster than mouse clicks and as fast as shortcut keys, while not requiring homing activities, i.e., moving the hand between mouse and keyboard. Another study by Moyle et al. [18], which analyzed mouse flick gestures in a realistic web navigation setting, showed that mouse flick gestures for page "back" and "forward" actions are significantly faster than traditional navigation controls, and that users' subjective ratings of mouse gestures are also positive.

Some research has investigated the usability of mouse gestures. In a comprehensive study of the learnability, controllability, error tolerance, and self-descriptiveness of mouse gestures, Paschke [22] found that most participants liked mouse gesture-based navigation controls, in particular the ability to individualize gestures based on their own personal preferences and habits. To maximize the efficiency of mouse gestures, Seo [26] conducted a cognitive response test, evaluated to what extent mouse gestures matched their respective functional meanings, and based on the experimental results argued that mouse gesture design should be aligned with a user's mental model to reduce cognitive load in interaction.

In addition to the above-mentioned academic studies, mouse gestures have also been widely applied in commercial browsers in recent years, such as Opera,1 Mozilla Firefox,2 Google Chrome,3 360,4 and QQ.5 However, the mouse gestures in these browsers vary significantly, and the same target task is controlled by different gestures in different browsers. This inconsistency in gesture choice may indicate the lack of a good understanding of relevant user behaviors and the need for general design guidance on mouse gestures.

2.2 Elicitation studies resulting in a single canonical gesture set

To gain a better understanding of what types of gestures are most popular with end-users, some researchers have used elicitation methods to explore the regularities in end-users' behaviors and preferences. For example, Nielsen et al. [21] proposed a formal procedure for the elicitation and evaluation of user-defined gestures for hands-free computer interaction in ubiquitous computing. Wobbrock et al. [30] introduced an agreement rate formula to calculate the frequency ratio of gestures elicited from end-users and measure the level of end-users' consensus. Based on the agreement rate formula, the top gestures were selected and then assigned to the corresponding tasks. This approach has been widely used by HCI researchers for surface computing [9, 13, 27], mobile interaction [11, 25], Virtual/Augmented Reality [23], omnidirectional video [24], and TV control in a living room [12, 28, 33].
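For reference, the per-task agreement score of [30] can be written as follows (notation ours): with P_t the set of all gesture proposals elicited for task t and P_i the groups of identical proposals within P_t,

A_t = \sum_{P_i \subseteq P_t} \left( |P_i| / |P_t| \right)^2

For example, 30 proposals that split into groups of 18, 6, 5, and 1 identical gestures give A_t = (18/30)^2 + (6/30)^2 + (5/30)^2 + (1/30)^2 ≈ 0.43; the more evenly the proposals scatter across groups, the lower the score.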

Research has provided empirical evidence on the benefits of involving gesture elicitation in design practice. Morris et al. [16], by comparing user-authored and researcher-authored surface gestures, found that user-defined gestures are more acceptable than those created solely by system developers. Similarly, the findings by Nacenta et al. [19]

1 http://www.opera.com
2 https://www.mozilla.org/en-US/firefox/new/
3 http://www.googlechromer.cn/
4 http://chrome.360.cn/
5 http://browser.qq.com/


indicated that user-defined gestures are easier to remember and learn, and more interesting to use, when compared with gestures pre-designed by system developers.

Although gesture elicitation studies have been widely used in a variety of emerging application fields, they still have some limitations. For example, most studies adopted the consensus formula by Wobbrock et al. [30] to choose the top gesture, i.e., the gesture with the highest frequency, from the group of gestures elicited from end-users for a target task, and finally generated a single canonical set of elicited gestures. One limitation of those studies is the probable rejection of some potentially popular gestures with lower agreement scores in the early stage of gesture design. Another limitation is that traditional elicitation studies often suffer from the "legacy bias" problem [17]. For example, in our prior work [31], half of 24 participants preferred a Swiping Right gesture for a "Turn to next channel" task while the other half preferred a Swiping Left gesture for the same task. A closer interview disclosed that participants who proposed the Swiping Right gesture were inspired by the traditional remote control, while participants who proposed the Swiping Left gesture were influenced by multi-touch screens.

Recently, several researchers [2, 3, 10] applied the priming and production techniques proposed by Morris et al. [17] to offset legacy bias. In their method, participants were required to design at least three gestures for a specified target task. However, the practical effectiveness of this method might be limited because participants found it difficult to design so many gestures at a time for each task, especially when they were not familiar with the gesture design space. For example, in Chan et al. [2], some participants said that they already had a gesture in mind, so designing three gestures at a time became a burden.

2.3 Elicitation studies resulting in two or more gesture sets

To reduce the risk of rejecting potentially popular gestures in the early stage of gesture design, some researchers proposed to derive two or more gesture sets rather than a single gesture set for a system. Extending Nielsen et al.'s work [21], Löcken et al. [14] developed two consistent sets of free-hand gestures for controlling a music player, one consisting of only dynamic gestures and the other of only static gestures. Using Wobbrock et al.'s agreement rate formula [30], Wu et al. [31] derived two sets of user-defined freehand gestures for TV controls in smart homes. Compared with Wobbrock et al.'s and Löcken et al.'s work, Wu et al. proposed to reserve the top two gesture candidates for a given task; e.g., the Swiping Left and the Swiping Right gestures were both reserved for a "Turn to next channel" task, and which one to use in practice depends on end-users' preferences. Wu et al. also provided a flexible toolkit for non-expert users to define personalized mappings between the alternative gestures and the target tasks according to their preferences.

Compared with basic elicitation studies that generate only a single canonical gesture set, the advantage of the methods mentioned above is the reservation of multiple gesture candidates in the early stage of the design process. These methods may result in gesture sets that are more likely to be discoverable by and memorable to a large user base, which in turn leads to improved system performance and user satisfaction.

However, those studies only choose the top two gestures by using the basic frequency ratio approach. Therefore, they cannot completely avoid the gesture disagreement problem either and still face the risk of rejecting some unique but potentially popular gestures. Currently, there is a lack of general design guidance on the minimum number of gestures that should be considered according to the ranking of agreement scores.


In summary, to design gesture-based interaction systems that are more natural and user-friendly, we need to consider the gesture disagreement problem and explore how to get end-users more involved in the design process. The aim of this research is to provide empirical evidence on this problem and to help lay a theoretical foundation for gesture-based user interface design.

3 Research motivation

Traditional WIMP-based interaction technologies rely on mouse movement and button clicks. However, those technologies suffer from the distance and targeting constraints governed by Fitts' Law [7]. In comparison, the mouse gesture is an effective alternative to common mouse-driven interactions in traditional GUIs [5, 15, 18, 22, 26]. A mouse gesture is performed by pressing a mouse button and simultaneously making a motion, such as drawing a pigtail.
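For context, Fitts' Law is commonly written in its Shannon formulation as

MT = a + b \log_2(D/W + 1)

where MT is the movement time, D the distance to the target, W the target width, and a and b empirically fitted constants. A mouse gesture largely sidesteps the D and W terms because it does not require the pointer to travel to, and land within, a distant target.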

Since spontaneous gestures are performed naturally and intuitively during human communication, the use of mouse gestures as control elements might provide an opportunity to simplify interaction with a web browser on a more natural basis [5, 18, 22, 26]. Many studies have been conducted in recent years, and more and more commercial web browsers have begun to support mouse gesture-based interaction. The advantages of mouse gestures include: 1) minimizing Fitts' Law constraints on time-to-target; theoretically, a movement of one pixel is sufficient to perform what would otherwise be a long-distance mouse movement-and-click task, and the longer the movement saved, the larger the efficiency gain; and 2) supporting personalized customization, i.e., end-users can generate personalized gestures based on their own preferences or mental models.

Our research methodology includes a brainstorming session and four experimental studies. The brainstorming session was designed to identify the core tasks that should be supported by mouse gestures. The four experiments focus on the design and evaluation of mouse gestures for the given tasks.

4 Requirement analysis and function definition

To design a user-friendly system, we should first collect data from end-users about their actual needs and expectations and determine the most needed tasks in a mouse-gesture-based web browser. To determine the set of most-needed core tasks, we collected interaction tasks from four popular web browsers that support mouse gestures: Google Chrome, Mozilla Firefox, 360 Browser, and QQ Browser. In addition, we reviewed previous research on mouse-gesture-based interaction applications [5, 18, 22, 26]. By collecting and merging the repetitive mouse interaction tasks provided by the four commercial web browsers and prior research, we obtained a set of 21 navigation tasks.

Next, we recruited 50 participants (28 females and 22 males) for a semi-structured interview. They had various backgrounds, but all were heavy web-browser users. Eighteen of them had prior experience with mouse-gesture-based interaction. During the interview, the 21 collected navigation tasks were used as the basic information and participants were asked the following questions:


- What are the most needed tasks when interacting with a web browser?
- Which tasks are appropriate to perform by using mouse gestures?
- What are the advantages and disadvantages of using mouse gestures in web browsing?
- What is the maximum number of mouse gestures a web browser should provide?

Based on the results collected from the interviews, we developed a list of the 8 most-needed core tasks in a mouse-gesture-based web browser. Table 1 shows these 8 tasks. The two left columns of the table identify each task with a sequential number and a task name. The two right columns indicate how popular each task was among participants. We chose these 8 tasks because they were the only ones selected by more than 50% of participants.
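As an illustration of this selection step (a sketch in Python; the names and structure are ours, and only the 8 retained tasks from Table 1 are listed), the filtering simply keeps tasks chosen by more than half of the 50 interviewees:

# Sketch of the task-selection step: keep tasks chosen by more than 50% of interviewees.
# Vote counts are the Table 1 values; the 13 rejected candidate tasks are omitted here.
task_votes = {
    "Page forward": 47, "Page backward": 47, "Refresh": 44,
    "Open a new TAB": 41, "Close the current TAB": 41,
    "Switch to previous TAB": 38, "Switch to next TAB": 38,
    "Minimize the window": 30,
}
N_INTERVIEWEES = 50

core_tasks = {t: v for t, v in task_votes.items() if v / N_INTERVIEWEES > 0.5}
for task, votes in sorted(core_tasks.items(), key=lambda kv: -kv[1]):
    print(f"{task:<24s}{votes:3d}  {votes / N_INTERVIEWEES:6.1%}")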

In this section, we collected requirements from actual end-users and determined the most-needed tasks that a mouse-gesture-based web browser should provide. The results of this session laid the foundation for our subsequent experimental studies.

5 Experimental studies

We conducted four experiments. The first experiment elicited mouse-gesture designs for the identified core tasks without limiting what gestures participants could choose. Its results served as the gesture candidates in the second and third experiments, whose goal was to investigate how the choice of the best gestures may converge when participants are given a fixed set of options. The last experiment compared the mouse-gesture designs derived from the previous three experiments with the mouse gestures offered by 4 commercial browsers. In this section, we present these four experiments in detail.

5.1 Experiment 1

In this experiment, we first collected further information about the most needed core tasks for a mouse gesture-based web browser to validate the results derived from the previous brainstorming session. Then, based on the most needed tasks that a mouse gesture-based system should support, we asked users to freely design a mouse gesture for each given target task. Our focus here was on understanding the most intuitive mouse gesture patterns users preferred for different web navigation tasks and then applying these findings to inform gesture-based interface design.

Table 1 Core tasks

No.  Task name                 Frequency  Percentage
1    Page forward              47         94.0%
2    Page backward             47         94.0%
3    Refresh                   44         88.0%
4    Open a new TAB            41         82.0%
5    Close the current TAB     41         82.0%
6    Switch to previous TAB    38         76.0%
7    Switch to next TAB        38         76.0%
8    Minimize the window       30         60.0%


5.1.1 Participants

We recruited 30 university students (14 males and 16 females) from a Chinese university. They came from different majors and professional backgrounds and their ages were between 18 and 33 (M = 24.7, SD = 2.95). All participants were familiar with web browsing. However, none of them had any experience with mouse gesture interaction before this study. We chose these subjects for two reasons: 1) they represent typical heavy web-browser users, who may demand more efficient ways to interact with browsers, and 2) they are usually open to new technologies like gesture-based interaction and willing to learn them.

5.1.2 Apparatus

We conducted our experiment in a usability lab. To prevent any hints introduced by typical WIMP-based UI elements in existing web browsers, we used PowerPoint slides instead of a real web browser in our study. Application scenarios and target tasks were presented to participants with text, pictures, and animated GIF images through PowerPoint slides on an Apple iMac screen. The iMac had a 3.4 GHz CPU, 8 GB memory, and a 1 TB hard disk.

Screen-capture software was used to record participants' mouse movement trajectories and what they said during the experiment.

5.1.3 Procedure

Participants were first briefed about the experiment and then went through a consent process. During the experiment, participants were told to imagine using a web browser equipped with a mouse-gesture recognition engine to execute the navigation tasks listed in Table 1.

After participants saw the animated GIF images illustrating the transition effect between the initial and final state of a given task presented on a PowerPoint slide, they were asked to perform the mouse gesture they preferred to produce the expected result on the slide.

To better understand a participant's design rationale, we employed a "think-aloud" technique, which asked the participant to articulate why a specific gesture for a target task was chosen and performed. To study what gesture patterns would emerge, we did not provide any hint about what mouse gestures could be used.

After finishing all 8 target tasks, participants were asked to answer a post-test questionnaire covering their demographic information (e.g., age, gender, and education background) and the matching scores between each mouse gesture candidate and its corresponding target task on a 5-point Likert scale (1 – very bad, 5 – very good). The study lasted about 2 h.

5.1.4 Results

In this section, we first briefly explain how we classified and grouped the mouse gesture data, and then present the agreement scores among the mouse gestures designed by participants.

Data processing With 30 participants and 8 target tasks, we collected 240 mouse gestures in total. By analyzing the characteristics of the gesture set, we found that some gestures that shared a commonality could be merged; for example, [gesture] and [gesture] could actually be merged into one group of identical gestures.


Five HCI researchers with expertise in user interface design and gesture interaction were invited to group and merge the mouse gestures for each target task cooperatively. They grouped gestures with the exact same shape and/or the same trajectory into a single gesture.

For gestures with similar characteristics in shape and/or trajectory, the five researchers replayed the corresponding video files and discussed whether and how to group the mouse gestures based on the verbal explanations given by participants during the experiment.

As a result, we obtained 50 groups of identical gestures for the 8 target tasks, as shown in Table 2.

Agreement scores Based on the collected user-defined mouse gestures listed in Table 2, we calculated the agreement score for each task by using the consensus formula [30]. Figure 1 shows the agreement scores of all 8 tasks. The higher the agreement score of a task, the more likely users are to choose the same gesture for the task.
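A minimal sketch of this computation (Python; function and variable names are ours), using the sizes of the identical-gesture groups per task as input:

# Sketch: per-task agreement score from the sizes of identical-gesture groups,
# following the consensus formula of Wobbrock et al. [30].
def agreement_score(group_sizes):
    """group_sizes: number of proposals in each group of identical gestures for one task."""
    total = sum(group_sizes)
    return sum((n / total) ** 2 for n in group_sizes)

def mean_agreement(groups_by_task):
    """groups_by_task: mapping task name -> list of group sizes for that task."""
    return sum(agreement_score(g) for g in groups_by_task.values()) / len(groups_by_task)

# Hypothetical example: 30 proposals split into 9 groups of identical gestures.
print(round(agreement_score([6, 4, 4, 4, 3, 3, 3, 2, 1]), 3))  # 0.129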

5.1.5 Discussion of experiment 1

In this experiment, we first further verified the most needed tasks derived from the previous brainstorming session and then derived a set of user-defined gestures for these tasks.

Table 2 The 50 groups of identical mouse gestures for the 8 target tasks (red dot represents the start point)

[Table body: for each of the 8 target tasks, the table lists the elicited gesture trajectories together with the number of participants who proposed each one; the gesture drawings are omitted here.]


The findings of this experiment indicate a higher divergence in our study compared to previous elicitation studies. The mean agreement score for the derived mouse gesture vocabulary was 0.258, less than the mean agreement rates for mobile interaction (0.26, Ruiz et al. [25]), surface computing (0.32, Wobbrock et al. [30]), and TV-based applications (0.42, Vatavu [28]). The lowest agreement score (0.129) appeared for Task 8, Minimize the window. As shown in Table 2, 9 different gesture patterns were suggested by the 30 participants for this task, reflecting a serious gesture disagreement problem in gesture design.

Due to the low agreement scores of the target tasks and the variety of gesture candidates for each task, we could not determine a single canonical set of user-defined mouse gestures that the majority of the participants agreed on. Therefore, we kept all promising mouse gesture candidates to be refined and validated in the following experiments. We hoped that providing users with a set of mouse gesture options might alleviate the gesture disagreement problem and improve the agreement rate with the benefit of hindsight, i.e., participants may change their minds after seeing some unique but more attractive mouse gestures proposed by other participants.

The results of this experiment revealed the most intuitive gesture patterns that end-users preferred to use when they interacted with a web browser. This information laid the foundation for the remaining design process.

5.2 Experiment 2

Although we have learned about the most needed tasks for a mouse gesture-based web browser and the gestures users commonly prefer for those tasks, it is still unclear whether those mouse gestures would work well in practice, because of the open-ended nature of the elicitation procedure used in our experiment.

Fig. 1 Agreement scores of the 8 target tasks. The horizontal axis represents the agreement score


Thus, we conducted a subsequent experiment to validate the popularity of the user-elicited mouse gesture set. Specifically, we aimed to know whether offering multiple gesture choices could help improve consistency among end-users.

5.2.1 Participants and apparatus

For consistency, we recruited the same 30 participants involved in the previous experiment and conducted the experiment in the same test environment as described in Experiment 1. Here, instead of designing gestures, participants were asked to verbally choose the best mouse gesture for each task.

5.2.2 Procedure

Before the experiment, all participants were briefly introduced to the experimental background, the scenario description, and the experiment requirements. After the consent process, participants were shown a list of the 8 core target tasks identified previously.

For each task, participants were presented with a set of candidate mouse gestures compiled from all the gesture designs proposed for that task in Experiment 1. The effects of a task and the corresponding candidate mouse gestures were presented as animated GIF images on PowerPoint slides. A text description was also provided to explain each task and the corresponding mouse gestures.

Fig. 2 Change of agreement scores. The horizontal axis represents the agreement score


After seeing the effect of a task and all the candidate mouse gestures for it, participants were asked to choose the best gesture for the task based on the extent to which they believed a mouse gesture matched the task and whether the gesture was easy to perform. A "think-aloud" method was used to record why participants chose a specific mouse gesture for a task. The experiment lasted about 60–80 min.

5.2.3 Results

Here, we present our results on the change in agreement scores of the 8 target tasks, the change in the number of identical gesture groups, and the change of top gestures.

Change of agreement scores We observed an increase in the agreement scores of all 8 tasks. As shown in Fig. 2, the overall average agreement score rose from 0.258 to 0.386, an increase of 49.6%.

Change of the number of identical gesture groups We also observed a marked decrease in the number of identical gesture groups that participants chose in this experiment. Experiment 1 yielded 50 groups of identical gestures, but in Experiment 2 the number dropped to 36, a 28% decrease.

Change of top gestures We also found changes in the top gestures for some tasks. While the top gestures for Tasks 1, 2, 3, 5, and 8 stayed the same, the top gestures for Tasks 4, 6, and 7 changed, as shown in Fig. 3.

Fig. 3 Change of top gestures for Tasks 4, 6, and 7


For Task 4, the top gesture in Experiment 1, [gesture], was chosen by 30% of the participants. In Experiment 2, however, a new top gesture emerged, [gesture], which was favored by 46.7% of the participants. Some participants stated that although this gesture never came to their mind in Experiment 1, it was more iconic and easier to remember.

For Task 6, the top gesture [gesture] (26.7%) was replaced by [gesture] (40%). Similarly, the top gesture [gesture] for Task 7 (26.7%) was overridden by [gesture] (40%).

5.2.4 Discussion of experiment 2

In this experiment, we had some interesting findings. First, with a set of given mouse gestures, end-users' gesture choices may converge better (e.g., a reduced number of gesture options and an improved agreement rate) compared with the unrestricted setting of Experiment 1. This may be because end-users only need to recognize a good gesture, rather than imagine one. This finding is consistent with the "recognition rather than recall" principle by Nielsen [20] and Budiu [1] for interaction design. Second, the basic elicitation methodology by Wobbrock et al. [30], which chooses top gestures based on the frequency ratio, does not necessarily guarantee the popularity of the gestures: we observed changes in the top gestures for three tasks when users were given choices. Third, some potential gestures that would be eliminated under the traditional elicitation methodology because of their low agreement scores may be highly teachable and consequently become top gestures. For example, the gesture [gesture] was proposed by only 5 participants for Task 4 in Experiment 1 (not a top gesture), but was preferred by 14 participants in Experiment 2 (top gesture) because it was easy to learn and use.

5.3 Experiment 3

To further study how users choose top gestures when provided with candidate gestures, we conducted a third experiment. Different from Experiment 2, this experiment recruited participants who knew nothing about mouse gestures and their corresponding tasks.

5.3.1 Participants and apparatus

A total of 30 participants (ages: M = 27.1, SD = 3.42) were recruited for this experiment. Sixteen were male and fourteen were female. None of these participants had any prior experience with mouse gesture-based interactive technologies, and none of them had participated in the previous two experiments. As in Experiment 2, participants were asked to verbally choose the best gesture for each task.

The test environment was the same as that used in Experiments 1 and 2.

5.3.2 Procedure

For consistency, this study used the same method and materials as described in Experiment 2. After seeing all the mouse gesture candidates for a given target task on PowerPoint slides, participants were asked to choose the best gesture for the task. A "think-aloud" method was


also used to solicit why participants chose a specific mouse gesture for a task. It took about 60–80 min for each participant to complete the experiment.

5.3.3 Results

In this experiment, we found the same patterns of participants' preferred top gestures for Tasks 1, 2, 3, 5, and 8 as described in Experiments 1 and 2.

Fig. 4 Top gestures for Tasks 1, 2, 3, 5, and 8

Fig. 5 Changes of top gestures among three experiments


As shown in Fig. 4, [gesture] was chosen as the top gesture for Task 1, Page forward, by 18 participants in Experiment 1, 23 in Experiment 2, and 25 in Experiment 3; in total, it was thus chosen 66 times out of 90 (73.3%) as the favorite gesture for Task 1. Similarly, for Tasks 2, 3, 5, and 8, the corresponding frequency ratios are 73.3%, 55.6%, 46.7%, and 31.1%, respectively.

Interestingly, for Tasks 4, 6, and 7, participants chose top gestures totally different from those chosen by participants in Experiments 1 and 2 (Fig. 5).

In Experiment 1, [gesture], [gesture], and [gesture] were chosen as the top gestures for Tasks 4, 6, and 7, respectively. In Experiment 2, [gesture] was selected as the top gesture for Task 4, [gesture] for Task 6, and [gesture] for Task 7. Compared with Experiment 2, the top gestures for Tasks 4, 6, and 7 were replaced by [gesture], [gesture], and [gesture], respectively, in Experiment 3.

5.3.4 Discussion of experiment 3

In this experiment, we recruited 30 new participants. The results show that the top gestures for Tasks 1, 2, 3, 5, and 8 remained the same as those in Experiments 1 and 2. However, compared to Experiments 1 and 2, the top gestures for Tasks 4, 6, and 7 changed once again in this experiment (Fig. 5). On one hand, some top gestures derived in Experiment 1 were seldom or never selected by participants in Experiments 2 and 3, and some top gestures chosen by participants in Experiment 2 were no longer the favorite gestures of participants in Experiment 3. On the other hand, some unique gestures proposed by very few participants in Experiment 1 became top gestures in Experiment 2 or Experiment 3. After the three experiments, we still could not determine a single canonical set of mouse gestures that the majority of participants agreed on for Tasks 4, 6, and 7. Therefore, we decided to keep all three observed top gestures for each of Tasks 4, 6, and 7 for further study.

5.4 Experiment 4

The aim of this study was to compare usability and user preference between prior mouse gesture sets and the proposed user-defined mouse gesture set. As mentioned above, Google Chrome, Mozilla Firefox, 360, and QQ are four popular web browsers. Both the 360 and QQ web browsers offer a built-in mouse gesture set, but mouse gestures are not automatically available in Chrome and Firefox. Therefore, the most downloaded add-ons, "Gestures for Chrome" and "Gesturefy", were installed in Google Chrome and Mozilla Firefox, respectively.

As seen from Table 3, the 5 mouse gesture sets had the same mouse gestures for Task 1 and Task 2. However, Gestures for Chrome, Gesturefy, 360, and QQ did not offer a mouse gesture for Task 8, Minimize the window. Therefore, we only compared mouse gestures for Tasks 3, 4, 5, 6, and 7 in this study. By merging the repetitive mouse gestures provided by the four commercial web browsers and the user-defined gesture set for each task, we obtained a new experimental mouse gesture set (see the last column of Table 3).


5.4.1 Participants and apparatus

We recruited 24 participants (13 females, 11 males) for this study. Their ages were between 18 and 31 (M = 24, SD = 0.72). None of these participants had any prior experience with mouse gesture-based interactive technologies, and none of them had participated in any of the previous 3 experiments. Similar to Experiments 2 and 3, participants were asked to verbally choose the best gesture for each task. We conducted the experiment in the same usability lab as described in the previous three experiments.

5.4.2 Procedure

During the experiment, participants were shown, for each task, a list of gesture candidates in the form of animated GIF images prepared in advance and presented through PowerPoint slides. After seeing all the candidate mouse gestures for a given target task, participants were asked to rate each gesture on Likert scales, based on the extent to which the gesture matches the task (1 – does not match at all, 5 – matches very well) and its usability (1 – very hard to use, 5 – very easy to use). A "think-aloud" method was used to elicit the reasons a rating was given. The experiment lasted between 80 and 120 min per participant.

5.4.3 Results

We statistically analyzed the data on matching and usability using the Friedman test, a non-parametric test on ranks that serves as the counterpart of a one-way repeated-measures ANOVA.
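As a sketch of this analysis (Python with SciPy; the ratings array below is illustrative, not our data), the test takes each participant's ratings of the candidate gestures for one task and checks whether the candidates are ranked differently:

# Sketch: Friedman test over Likert ratings for one task.
# Rows are participants, columns are the candidate gestures for that task (illustrative values).
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(24, 6))  # 24 participants x 6 candidate gestures, scores 1-5

# friedmanchisquare expects one sequence per condition (here, per candidate gesture).
stat, p = friedmanchisquare(*ratings.T)
print(f"chi2({ratings.shape[1] - 1}) = {stat:.3f}, p = {p:.3f}")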

For the matching between gestures and their target tasks, a significant difference was found among the gesture candidates for all 5 tasks: Task 3 – χ2(5) = 56.005, p < .001; Task 4 – χ2(5) = 62.695, p < .001; Task 5 – χ2(3) = 50.623, p < .001; Task 6 – χ2(3) = 11.925, p = .008; Task 7 – χ2(3) = 9.152, p = .027. For each task, one gesture was ranked higher than all the other gestures: [gesture] for Task 3, [gesture] for Task 4, [gesture] for Task 5, [gesture] for Task 6, and [gesture] for Task 7.

Table 3 Mouse gesture sets provided by the 4 popular commercial web browsers and the user-defined mouse gesture set proposed in this paper (red dot represents the start point)

[Table body: for each of the 8 tasks, the gestures offered by Gestures for Chrome, Gesturefy, 360, and QQ, the gesture from our work, and the merged experimental gesture set; the gesture drawings are omitted here. None of the four commercial browsers offers a gesture for Task 8, Minimize the window.]


In terms of usability, our analysis shows that participants significantly favored some of our gestures in all tasks except Task 5: Task 3 – χ2(5) = 58.935, p < .001; Task 4 – χ2(5) = 47.822, p < .001; Task 5 – χ2(3) = 7.783, p = .051; Task 6 – χ2(3) = 24.222, p < .001; Task 7 – χ2(3) = 31.089, p < .001. The gestures ranked as the easiest to perform are: [gesture] for Task 3, [gesture] for Task 4, [gesture] for Task 6, and [gesture] for Task 7.

Finally, participants' overall opinions on their favorite gestures converged fairly well for Tasks 3, 4, and 5. Thirteen participants (54.2%) chose [gesture] for Task 3, 16 participants (66.7%) picked [gesture] for Task 4, and 22 participants (91.7%) selected [gesture] for Task 5. These results were consistent with those obtained in Experiment 2. For Task 6, [gesture] and [gesture] each received 9 votes (37.5%); similarly, for Task 7, [gesture] and [gesture] were preferred by 9 participants each.

5.4.4 Discussion of experiment 4

In this experiment, we generated an experimental gesture set by mixing the user-defined mouse gestures with the 4 sets of mouse gestures offered by popular commercial web browsers. Then, 24 participants who had not taken part in the previous 3 experiments were asked to choose the best mouse gesture from the experimental gesture set for each target task, in terms of how well the gesture matches the task and how easy the gesture is to perform. The experimental results verify the validity of the proposed user-defined gesture set. For the matching between gestures and their target tasks, all the most popular mouse gestures selected in this experiment were designed by participants in the previous 3 experiments, including [gesture] for Task 3, [gesture] for Task 4, [gesture] for Task 5, [gesture] for Task 6, and [gesture] for Task 7; none of them was offered by the 4 popular commercial web browsers. Similarly, 4 out of the 5 gestures ranked as the easiest to perform were also designed by participants in our studies, including [gesture] for Task 3, [gesture] for Task 4, [gesture] for Task 6, and [gesture] for Task 7; none of these was offered by the 4 popular commercial web browsers either.

In addition, the results of this experiment confirmed that gesture disagreement is a very common problem in gesture design. It appeared in each of the four experiments, no matter how many participants we recruited and whether or not we replaced participants across experiments. After the 4 user studies, we still could not determine a single canonical gesture set for the target tasks. This further verifies the strategy proposed in this paper: seeking common ground while reserving differences in gesture elicitation studies.

6 Discussion

Based on the results obtained from the studies, we derived several guidelines for gesture design.

6.1 Be aware of the gesture disagreement problem

Designers should keep in mind that vocabulary disagreement is a serious problem in gesture design. In our experiment, the chance for end-users to produce the same mouse gesture for a given target task without any restriction was low, below 0.26 on average, and none of the 8


target tasks had an agreement score above 0.5. The lowest agreement score was observed for Task 8: nine different gesture patterns were produced by participants for it, and the top gesture was chosen by only 6 of the 30 participants.

6.2 Be aware of unique but potentially popular gestures in elicitation studies

In our study, the top gestures for Tasks 4, 6, and 7 changed three times across Experiments 1, 2, and 3. We found that some unique gestures still have a chance to become top gestures. For example, [gesture] was produced by only a few participants in Experiment 1 but was chosen as the top gesture in Experiment 2. Follow-up interviews revealed that participants liked this gesture because it suggests inserting something new and is easy to perform with a mouse when interacting with the web browser. Therefore, using a frequency ratio approach to choose a top gesture for a given target task does not necessarily guarantee the popularity of the derived gestures. System designers should be aware of such unique but potentially popular gestures in elicitation studies.

6.3 Be aware of the benefits of hindsight effects in gesture design

In the basic elicitation studies by Wobbrock et al. [30], participants were required to freely design gestures based on their own knowledge without any hints. However, due to limited time and experimental conditions, participants may not always recall the best gesture for a target task, and this may cause elicitation studies to get caught in local minima and fail to uncover interactions that may be better suited for a given target task. After seeing an offered set of gesture candidates proposed by other designers, especially gestures that are highly teachable, participants may be more likely to change their minds and adopt them. In practice, designers may prompt participants to think more broadly about what kinds of gestures could be used by providing demonstrations, videos, or their own examples of a variety of possible ways of using the target technology.

6.4 Implications of mouse gesture design for web navigation

Based on the results of the four experiments, a set of user-defined gestures was suggested for the 8 core target tasks for web navigation. The proposed gesture set exhibits several characteristics, indicating how participants' mental models affect their design and choice of mouse gestures for web navigation:

- Mouse gestures preferred by users often consist of a simple single stroke (mouse movement trajectory). Therefore, accurate recognition of single-stroke mouse gestures is critical to the success of future mouse-gesture-based interaction. System designers can borrow popular algorithms for single-stroke gesture recognition; for example, the $1 algorithm can obtain over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates [29]. A simplified sketch of such a single-stroke matcher is given after this list.

- Mouse gestures can be aligned with participants' prior experience and habits. Although we provided no hints or elements from traditional user interfaces during the study, participants still designed mouse gestures based on their prior interaction experiences. For example, when asked why they chose the gesture [gesture] for Task 3, Refresh, participants said that it resembled the "Refresh" icon in a web browser.


- Mouse gestures selected by users may carry clear social meaning. For example, [gesture] was chosen by 9 participants in Experiment 1 and [gesture] was chosen by 14 participants in Experiment 2 for the same task (Task 4, Open a new TAB). However, the reason underlying participants' preferences is the same: both gestures mean adding something new in real-world scenarios and can therefore be well matched with the target task.

- Mouse gestures preferred by participants are not only simple and easy to perform, but also highly identifiable and easy to remember. Although most gestures provided by the 4 commercial web browsers are easy to perform (Table 3), they all consist of line segments and are more likely to cause confusion for end-users. Compared to the mouse gestures provided by the 4 commercial web browsers, the user-defined gestures were rated by participants in Experiment 4 as not only easier to perform but also more closely matching their target tasks.
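Following up on the first point above, the sketch below (Python; our own simplification, not the recognizer used by any browser and not the full algorithm of [29]) illustrates the core steps of a $1-style single-stroke matcher: resample the captured mouse trajectory to a fixed number of points, translate it to its centroid, scale it to a unit box, and pick the template whose resampled points are closest on average. The full $1 recognizer additionally rotates strokes to an indicative angle and refines the match with a golden-section search, both of which this sketch omits.

# Simplified single-stroke matcher inspired by the $1 recognizer [29]; our own sketch.
# Omits $1's rotation normalization and golden-section search refinement.
import math

N_POINTS = 64  # number of points after resampling

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=N_POINTS):
    """Resample a stroke to n roughly equidistant points along its path."""
    pts = list(pts)
    interval = path_length(pts) / (n - 1)
    out, acc = [pts[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the inserted point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(pts):
    """Translate the centroid to the origin and scale the bounding box to a unit square."""
    pts = resample(pts)
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    pts = [(x - cx, y - cy) for x, y in pts]
    w = (max(x for x, _ in pts) - min(x for x, _ in pts)) or 1.0
    h = (max(y for _, y in pts) - min(y for _, y in pts)) or 1.0
    return [(x / w, y / h) for x, y in pts]

def recognize(stroke, templates):
    """templates: dict of name -> list of (x, y) points. Returns (best name, mean distance)."""
    candidate = normalize(stroke)
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        t = normalize(tmpl)
        d = sum(math.dist(a, b) for a, b in zip(candidate, t)) / N_POINTS
        if d < best_d:
            best, best_d = name, d
    return best, best_d

# Hypothetical usage: match a captured mouse trajectory against two straight-line templates.
templates = {"right": [(0, 0), (100, 0)], "down": [(0, 0), (0, 100)]}
print(recognize([(0, 1), (40, 2), (80, -1), (120, 0)], templates))  # expected to match 'right'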

6.5 Seeking common ground while reserving differences by involving end-users in gesture elicitation studies

Our research provides a valuable reference for addressing some design challenges in gesture elicitation studies. Traditional elicitation studies often derive gestures from end-users by asking them to design gestures freely in an a priori stage, and then have designers select the top gestures in an a posteriori stage. In many situations, the selected "top gestures" cannot represent end-users' real intentions. Compared with prior studies, our research emphasizes co-design procedures between designers and end-users in practice.

More specifically, our research proposes a method in which designers play more complex roles in design. They need to host brainstorming sessions to identify core target tasks, organize multi-stage elicitation studies to identify different sets of meaningful gestures, and develop a strategy to seek common ground while reserving differences among different stakeholders' requirements and rights.

7 Conclusion and future work

Gesture disagreement is a serious problem in gesture design. Traditional elicitation studies usually adopt the frequency ratio to select the top gesture derived from end-users and assign it to a target task directly. These methods face the risk of getting caught in local minima and ignoring some unique but potentially popular gestures. In this paper, we propose the approach of seeking common ground while reserving differences in elicitation studies. To verify this proposal, we conducted a series of experimental studies on user-defined mouse gestures for web navigation.

Our contributions begin with the derivation of the core task set for a typical mouse-gesture-based web browser. Based on the core task set, an elicitation study was conducted to investigate end-users' preferences for and attitudes towards possible mouse gestures for web navigation. After a thorough statistical and qualitative analysis of the experimental results, we proposed an interactive mouse gesture vocabulary for interactions between end-users and web browsers.


The results of our experimental studies provide new empirical evidence on the existence of the gesture disagreement problem and on the benefits of the hindsight effect obtained by offering multiple gesture candidates in user elicitation studies. We also propose some guidelines that can be applied to inform gesture-based interaction design.

There are some limitations in our study. First, similar to previous elicitation studies, this study focuses on understanding end-users' mental models and design rationale in mouse gesture design, regardless of technology. In future work, we plan to evaluate the recognition accuracy and interaction efficiency of the resulting user-defined mouse gesture set and compare it with existing standard mouse gesture sets in a comparison experiment. Second, participants in all four experiments were limited to a single ethnic background. It is known that elicited gestures for the same commands can vary with the ethnic and cultural background of participants [12, 28, 31]. Thus, to generalize the results of our research, participants with more diverse ethnic and cultural backgrounds need to be considered.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Budiu R (2018) Memory recognition and recall in user interfaces. Retrieved March 6, 2018 from https://www.nngroup.com/articles/recognition-and-recall/
2. Chan E, Seyed T, Stuerzlinger W, Yang XD, Maurer F (2016) User elicitation on single-hand microgestures. In: ACM CHI'16. 3403–3411
3. Chen Z, Ma XC, Peng ZY, Zhou Y, Yao MG, Ma Z, Wang C, Gao ZF, Shen MW (2018) User-defined gestures for gestural interaction: extending from hands to other body parts. Int J Human-Comput Inter 34(3):238–250
4. Choi E, Kwon S, Lee D, Lee H, Chung MK (2014) Towards successful user interaction with systems: focusing on user-derived gestures for smart home systems. Appl Ergon 45:1196–1207
5. Dulberg M, Amant RS, Zettlemoyer LS (1999) An imprecise mouse gesture for the fast activation of controls. Human-Computer Interaction – INTERACT'99. 1–10
6. Feng ZQ, Yang B, Li Y, Zheng YW, Zhao XY, Yin JQ, Meng QF (2013) Real-time oriented behavior-driven 3D freehand tracking for direct interaction. Pattern Recogn 46:590–608
7. Fitts PM (1954) The information capacity of the human motor system in controlling the amplitude of movement. J Exp Psychol 47:381–391
8. Furnas GW, Landauer TK, Gomez LM, Dumais ST (1987) The vocabulary problem in human-system communication. Commun ACM 30(11):964–971
9. Grijincu D, Nacenta MA, Kristensson PO (2014) User-defined interface gestures: dataset and analysis. ITS. 25–34
10. Hoff L, Hornecker E, Bertel S (2016) Modifying gesture elicitation: Do kinaesthetic priming and increased production reduce legacy bias? In: TEI'16. 86–91
11. Kray C, Nesbitt D, Rohs M (2010) User-defined gestures for connecting mobile phones, public displays, and tabletops. MobileHCI'10. 239–248
12. Kühnel C, Westermann T, Hemmert F, Kratz S (2011) I'm home: defining and evaluating a gesture set for smart-home control. Int J Human-Comput Stud 69:693–704
13. Kurdyukova E, Redlin M, André E (2012) Studying user-defined iPad gestures for interaction in multi-display environment. IUI'12. 93–96
14. Löcken A, Hesselmann T, Pielot M, Henze N, Boll S (2011) User-centered process for the definition of freehand gestures applied to controlling music playback. Multimedia Systems 18(1):15–31
15. Midgley L, Vickers P (2006) Sonically-enhanced mouse gestures in the Firefox browser. Proceedings of the 12th International Conference on Auditory Display. 187–193
16. Morris MR, Wobbrock JO, Wilson AD (2010) Understanding users' preferences for surface gestures. GI'10. 261–268


17. Morris MR, Danielescu A, Drucker S, Fisher D, Lee B, Schraefel MC, Wobbrock JO (2014) Reducing legacy bias in gesture elicitation studies. Interactions 21(3):40–45
18. Moyle M, Cockburn A (2003) The design and evaluation of a flick gesture for "back" and "forward" in web browsers. Australasian User Interface Conference on User Interfaces (AUIC 2003). 39–46
19. Nacenta MA, Kamber Y, Qiang YZ, Kristensson PO (2013) Memorability of pre-designed & user-defined gesture sets. CHI'13. 1099–1108
20. Nielsen J (2018) 10 usability heuristics for user interface design. Retrieved March 6, 2018 from https://www.nngroup.com/articles/ten-usability-heuristics/
21. Nielsen M, Störring M, Moeslund T, Granum E (2004) A procedure for developing intuitive and ergonomic gesture interfaces for HCI. Gesture-based Communication in Human–Computer Interaction. 105–106
22. Paschke JD (2011) A usability study on mouse gestures. Study Thesis, Institute of Software Ergonomics, University of Koblenz, Germany
23. Piumsomboon T, Billinghurst M, Clark A, Cockburn A (2013) User-defined gestures for augmented reality. CHI'13. 955–960
24. Rovelo G, Vanacken D, Luyten K, Abad F, Camahort E (2014) Multi-viewer gesture-based interaction for omni-directional video. CHI'14. 4077–4086
25. Ruiz J, Li Y, Lank E (2011) User-defined motion gestures for mobile interaction. CHI'11. 197–206
26. Seo HK (2013) Mouse gesture design based on mental model. J Kor Inst Ind Eng 39(3):163–171
27. Valdes C, Eastman D, Grote C, Thatte S, Shaer O, Mazalek A, Ullmer B, Konkel MK (2014) Exploring the design space of gestural interaction with active tokens through user-defined gestures. CHI'14. 4107–4116
28. Vatavu RD (2012) User-defined gestures for free-hand TV control. EuroITV'12. 45–48
29. Wobbrock JO, Wilson AD, Li Y (2007) Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. UIST'07. 159–168
30. Wobbrock JO, Morris MR, Wilson AD (2009) User-defined gestures for surface computing. CHI'09. 1083–1092
31. Wu HY, Wang JM, Zhang XL (2015) User-centered gesture development in TV viewing environment. Multimed Tools Appl 75(2):733–760
32. Yee W (2009) Potential limitations of multi-touch gesture vocabulary: differentiation, adoption, fatigue. Proceedings of the 13th International Conference on Human Computer Interaction. 291–300
33. Zaiţi IA, Pentiuc SG, Vatavu RD (2015) On free-hand TV control: experimental results on user-elicited gestures with leap motion. Pers Ubiquit Comput 19:821–838

Huiyue Wu received the Ph.D. degree in computer science from the Institute of Software, Chinese Academy of Sciences, Beijing, China, in 2010. He is currently an Associate Professor at Sun Yat-sen University, Guangzhou, China. His research interests include vision-based interfaces, gestural interaction, and user-centered design. He has published more than 30 papers in international journals and conference proceedings.


Jiayi Liu is a graduate student at Sun Yat-sen University, Guangzhou, China. Her research interests include human-computer interaction, interaction design, and usability engineering. She obtained a Bachelor of Design Science from Sun Yat-sen University, Guangzhou, China, in 2016.

Jiali Qiu is a graduate student at Sun Yat-sen University, Guangzhou, China. Her research interests include human-computer interaction, interaction design, and usability engineering. She obtained a Bachelor of Design Science from Sun Yat-sen University, Guangzhou, China, in 2016.


Xiaolong (Luke) Zhang received the Ph.D. degree in information science from the University of Michigan, Ann Arbor, Michigan, in 2003. He is currently an associate professor at the College of Information Sciences and Technology at the Pennsylvania State University, University Park, PA, where he directs the Knowledge Visualization Laboratory. His research interests are in the areas of human-computer interaction, information visualization, and visual analytics.
