Towards Multimodal Adaptive User Interfaces


MASARYK UNIVERSITY

FACULTY OF INFORMATICS

Towards Multimodal Adaptive

User Interfaces

PH.D. THESIS PROPOSAL

Zdenek Eichler

Brno, January 2014


Declaration

Hereby I declare that this paper is my original authorial work, which I have worked out on my own. All sources, references and literature used or excerpted during elaboration of this work are properly cited and listed in complete reference to the due source.

Advisor: doc. Ing. Jiří Sochor, CSc.
Co-advisor: RNDr. Radek Ošlejšek, Ph.D.


Acknowledgement

I would like to thank my girlfriend, since she is my muse, my inspiration. She taught me the importance of emotions in communication, particularly through the following statement: It’s Not What You Say — It’s How You Say It!


Contents

1 Introduction
2 State of the art
   2.1 Adaptable User Interfaces
   2.2 Adaptive User Interfaces
       2.2.1 Adaptive User Interface in ERP System
   2.3 Utilisation of Human-Factors Methods and Devices
       2.3.1 Eye-tracking
       2.3.2 Electroencephalography
   2.4 Multimodal User Interfaces
   2.5 Multimodal Adaptive User Interfaces
3 Achieved Results
   3.1 OpenOffice.org Interceptor (OOI)
   3.2 Boulevard
       3.2.1 Boulevard in OpenOffice.org
       3.2.2 Proof-of-Concept Usability Study
   3.3 Boulevard in ERP system
       3.3.1 Adaptive Disclosure with Ephemeral Visualization
4 Aims of the Thesis
   4.1 Emotiv EPOC
   4.2 The Eye Tribe
   4.3 Plan of Research
       4.3.1 Research Methodology
   4.4 Proposed Plan of Work
A List of Author’s Results in the Field
   A.1 Published Publications
   A.2 Submitted Publications
   A.3 Participation in Projects
   A.4 Patent
   A.5 Citations
B Attached Publications


Chapter 1

Introduction

With the improvement of technology, Human-Computer Interaction (HCI) is becoming more important. The user interface has been recognized as one of the most important elements of software. End-user applications, such as spreadsheets and word processors, are growing in terms of offered functionality, which induces a growing complexity of user interfaces. Most users use only a small fraction of the provided functionality, and the functionality used by particular users differs greatly. This is sometimes referred to as software bloat, creeping featurism or feature war. One means of addressing this issue is user interface personalization. A properly personalized user interface improves users’ satisfaction and performance compared to traditional, manually designed “one size fits all” interfaces.

There are two basic kinds of personalization: adaptable and adaptive. Both approaches try to improve usability by personalizing the software away from its default configuration. The adaptable approach means personalization performed by the user, whereas the adaptive approach stands for personalization performed by the computer. Adaptable personalization is widely integrated in today’s applications, e.g. Microsoft Office and OpenOffice.org. However, only a small number of users personalize, mostly because it is too hard to accomplish.

We focus on adaptive user interfaces (AUIs), where the user interface is personalized by the computer without direct user intervention. There is usually some kind of “intelligence” involved. The main advantage of the adaptive approach over the adaptable one is that no skills are required from the user to do the personalization. Such adaptive personalization can be performed more often, without consuming the user’s time and energy. Both adaptable and adaptive approaches have their advantages and disadvantages; however, we aim to provide the best of both worlds, since we try to support both approaches.

Leaders in computing innovations, such as Microsoft, noticed and recognized the phenomenon, too. They have made many attempts to improve user interfaces. In Microsoft Office 97, the smart agent “Clippy”, the well-known AUI and a target of various jokes, was introduced. After that, “Clippy” was replaced by Smart Menus in Microsoft Office 2003. Then Microsoft conducted extensive, unpublished research whose outcome was the Ribbon User Interface, introduced in Microsoft Office 2007. It is a static user interface with contextual behavior, no longer a WIMP-based (Window, Icon, Menu and Pointing device) user interface.

The ability of computers to interpret, process, and simulate human affects is growing. Computers are able to read and interpret electroencephalography (EEG) and electrocardiography (ECG). Nowadays, computers can determine the user’s current emotional state even through observation of the user’s face or voice. When computers consider the user’s current emotional state, the interaction between human and computer will become more natural. So-called affective computing is becoming more widespread and is expected to be even more prevalent in the near future, which is supported by the recent introduction of Intel RealSense. As a leader in computing innovations, the company designed a 3D depth and 2D camera module so small that it can be used in tablets, ultrabooks, notebooks and other mobile devices in the future. RealSense also includes algorithms to derive the user’s emotional state from facial expressions captured by the 3D camera. The importance of the user’s emotions can be illustrated by the following statement:

The Kometa Brno won again.

Was this statement intended to be perceived as pleased or disappointed? This cannot be decided without additional information, such as the facial expression or tone of voice of the person who said it.

Eye tracking is used to determine where someone is looking (the point of gaze). It utilizes a device (an eye tracker) and an appropriate method to capture and interpret eye movements, either for analyzing the user’s behavior or as a real-time input to the human-computer dialogue. Eye tracking can be used in various applications, typically described as active or passive. Passive applications include analysis of design or layout, whereas active applications involve device control, for example aiming in games or hands-free typing (usually by users with disabilities).

In our previous work we have developed a novel adaptive interface called “Boulevard”, a panel container with a user interface automatically personalized to an individual user. Our system automatically observes the user’s behavior and personalizes the user interface to the preferred functionality and the preferred interaction style used to activate a particular user command. Parameters applied to user commands are also considered and personalized.


Our main research goal is discussed in this proposal. In brief, our goal is to enhance Boulevard by utilizing affective computing and eye tracking, since considering the user’s emotional state and point of gaze may play an important role in user interface adaptation.

This proposal is organized as follows: in Chapter 2 we present and discuss the basic approaches in the field based on published related work. Chapter 3 describes our results in the field and Chapter 4 proposes our research.


Chapter 2

State of the art

Today’s desktop applications are usually very complex and offer much functionality. Unfortunately, the excess of provided functionality was identified as a substantial source of dissatisfaction by many users [1]. Users typically use only a small subset of the offered functionality, and there are considerable differences between users in the functionality they prefer, as described in various studies (e.g. [2, 3]). Such disproportion between provided and actually used functionality induces a need for personalization of user interfaces.

Ceaparu et al. conducted a study on determining the causes of users’ frustration [4]. They presented very discouraging results, showing that users experience frustration on a frequent basis. The applications in which the frustrating experiences happened most frequently were identified as web browsing, e-mail, and word processing. The most reported causes of frustration were error messages, dropped network connections, long download times, and hard-to-find features. The time lost due to the reported issues ranged between 47% and 53% of the time spent on a computer.

Our interest focuses on word-processing applications, as they are used by many users and various types of users, from beginners to professionals. It is estimated that Microsoft Office is used by 500 million users, and OpenOffice.org version 3 has been downloaded more than a hundred million times. More importantly, today’s word processors are complex applications and the above-mentioned phenomenon affects them as well, as reported in an interesting study focused on Microsoft Office users and bloatware, conducted by McGrenere and Moore, with the poignant title Are We All in the Same Bloat? [1]. Another relevant study, conducted by Kaufman and Weed, is called “Too much of a good thing?” Identifying and resolving bloat in the user interface [2].

We also focus on Enterprise Resource Planning (ERP) systems, since today’s ERP systems are often large-scale complex systems covering a huge number of different tasks performed across an entire organization [5, 6]. However, it is widely acknowledged that ERP systems suffer from complex user interfaces, which negatively affects their usability [7]. Singh and Wesson [8] identified the lack of usability metrics and methodologies that address ERP specifics as one of the possible causes of the above-mentioned phenomenon. They proposed a set of heuristics which can be used to assess the usability of ERP systems. It has also been reported that small improvements in the usability of an ERP system can bring large benefits [4, 7]. An evaluation of ERP system usability was done by Faisal et al. [9], who conducted a study on ERP users. One of the presented findings is that a poorly arranged user interface affects user performance and efficiency.

There are two basic kinds of personalization: adaptive and adaptable. An application which provides a way for the user to customize content and, for example, the layout of toolbars and menus is referred to as adaptable. This approach is widely used today, due to its easy implementation and the fact that users have everything under control. The other approach is automatic customization performed by the application itself in order to increase the user’s satisfaction: observing the user’s behavior followed by automatic personalization. The main reason why such an approach is not widely used is usually its bad reception by users.

2.1 Adaptable User Interfaces

Adaptable personalization is widely integrated in today’s applications, for example Microsoft Office and OpenOffice.org. Although many existing applications support personalization, many users do not personalize. A study on how many users use personalization in word processors, and how many know how to do it, concluded that 92% of users personalize; however, in that study even a change of zoom level was considered personalization, not only changes to the user interface layout [10]. Another study was conducted on UNIX users and how they personalize [11]. The results showed that only a small number of users personalize, mostly because it is too hard to accomplish.

Personalization can be difficult in many applications [12], and a less experienced user may not know how to do it, or even that such a thing is possible. In a study focused on word processor users it was found that programmers use personalization much more than secretaries [13]. This introduces a paradox: less experienced or handicapped users need to reduce complexity, and thus need personalization more than other users, but they usually do not know how to do it or it is too hard to accomplish. Personalization is typically not a simple task and consumes the user’s time and energy; therefore, the user may decide to avoid it [11].

2.2 Adaptive User Interfaces

Adaptive user interfaces (AUIs) are defined as systems that adapt their displays and available actions to the user’s current goals and abilities by monitoring the user status, the system state and the current situation [14]. However, such a goal is hard to accomplish. Studies such as [15, 16, 17, 18, 19] reported that the benefits of AUIs disappear when usability design principles are violated. We believe that the results of the above-mentioned studies should not be interpreted as a general argument against the idea of adaptive user interfaces, but rather as a criticism of the existing adaptive interface implementations.

Most AUI research has focused on menu adaptation, where the adaptation mechanisms are based on reordering or removing menu items. Mitchell and Shneiderman [20] developed a system that reorders menu items using their relative frequencies of usage. Another approach to menu adaptation has been termed Split Menus [21], where the most frequently used items are moved to the top of the menu, which is considered a prominent area for menu items. Other menu items are placed below the “split”. A predictive model of the performance of multiple menu variants was also developed by Cockburn et al. [22]. It is based on Fitts’ law [23] and the Hick-Hyman law [24, 25], and also covers the above-mentioned Split Menus.
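For reference, the two laws take the following standard forms (the specific way the Cockburn et al. model combines them is not reproduced here). Fitts’ law predicts the movement time $MT$ to acquire a target of width $W$ at distance $D$; the Hick-Hyman law predicts the decision time $T$ among $n$ equally probable alternatives; $a$ and $b$ are empirically fitted constants:

\[ MT = a + b \log_2\!\left(\frac{D}{W} + 1\right), \qquad T = a + b \log_2(n + 1). \]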

The aforementioned examples appeared as prototypes; however, a few examples can be found in widely-used software applications — for instance, Microsoft Word 2000 was empowered with the “Smart Menu” technology, which automatically hides unused menu items. Unfortunately, not much is known about the implementation details of the adaptive mechanisms of the above-mentioned adaptive interfaces.

Naturally, there were also attempts at toolbar adaptation, e.g. by Gajos et al. [26]. They compared three variants of adaptation called Split Interface, Moving Interface and Visual Popout Interface.

A semi-automatically generated, dialogue-driven recommendation system based on product ontologies was introduced and evaluated by Kaindl et al. [27]. Their tests indicate that such an approach can lead to results competitive with a manually created user interface in terms of click-out rate, while requiring much less manual effort.


AUIs have been criticized for violating various usability principles, like controllability, predictability, transparency and unobtrusiveness. Bakalov et al. proposed an approach to controlling adaptive behavior in recommender systems, which allows users to get an overview of personalization effects and to view and edit the user profile that is used for personalization [28]. The results of the user study showed that such an approach helps to solve the transparency and controllability issues. However, a sufficient level of trust between the user and an AUI was not established. Another study, conducted by Gajos et al., evaluated how the predictability and accuracy of an AUI affect the performance of the UI [29]. The results show that increasing predictability and accuracy led to strongly improved satisfaction. Also, increasing accuracy resulted in improved performance and higher utilization of the adaptive interface.

We continue with examples of complex adaptive user interfaces. Aida is an adaptive system for interactive drawing and CAD systems [30]. The system organized the contents of toolbars and menus, where the user had two possible ways to add new items: first, by using the desired command from the command line; secondly, the system detected the user’s behavior also in the drawing area, where some repeated operations were added to the user interface as a macro.

An interesting adaptive user interface for Microsoft Excel, FLEXCEL [31], was developed for the same reason we have chosen to develop for OpenOffice.org — to study the user interface on real software, not a simple prototype. The key features were: an adaptive toolbar; creating new menu items corresponding to existing commands but applied with user-defined parameters; and a teaching agent which provides tips and hints to the user.

FLEXCEL is probably the most advanced adaptive interface in an existing application; however, it was not well accepted by users. One of the reported reasons was that users do not personalize appropriately: for example, when naming new menu items, they sometimes just quickly typed some random characters. FLEXCEL was driven by a LISP rule-based system empowered by CLOS (Common Lisp Object System).

A very interesting adaptive user interface is SUPPLE [32]. Its main purpose is to automatically generate user interfaces adapted to the user’s abilities. The results show that such an approach improved the speed, accuracy and satisfaction of users with motor impairments compared to manufacturers’ default user interfaces.

Since both the adaptive and adaptable approaches have their advantages and disadvantages, there is a mixed-initiative solution to interface customization, where the goal is to maximize the advantages while minimizing the disadvantages. Such a mixed-initiative user interface was studied and evaluated by Andrea Bunt [33]. She designed and implemented the MICA (Mixed-Initiative Customization Assistance) system. The results indicate that users prefer MICA’s support to customizing independently, that MICA’s support positively impacts performance with the interface (in terms of task time), and that MICA reduces customization time.

A typical property of most AUIs is that they adapt the user interface the moment a new pattern or preferred functionality is recognized, without considering the user’s emotional state.

2.2.1 Adaptive User Interface in ERP System

The fact that ERP systems suffer from usability issues has been mentioned above. Most of the usability issues come from overly complex user interfaces. Fortunately, ERP systems comply with the fundamental rule for the feasibility of personalization: there is a considerable difference in the frequency of use of particular functionality. Utilization of AUIs in ERP systems seems promising for several reasons: from the economic point of view, a properly personalized user interface could reduce task times and thus increase productivity. Furthermore, precise traditional customization of an ERP system by individual developers is very costly and is usually performed repeatedly to achieve good results.

To date, adaptive user interfaces have been studied mostly on mock-ups or simple prototypes, sometimes on either word processors or spreadsheets. However, the idea of adaptive personalization of ERP systems for improving their overall usability has already been mentioned, for instance by Singh et al. [8, 34, 7]. Furthermore, to our knowledge only one AUI prototype has been developed [35] for an existing ERP system, namely SAP Business One. The AUI prototype used content adaptation, presentation adaptation and navigation adaptation in order to address the above-mentioned usability issues. An evaluation was conducted: the results revealed that the presented AUI provided usability benefits, particularly learnability and satisfaction.

2.3 Utilisation of Human-Factors Methods and Devices

Human factors engineering is the application of knowledge about human capabilities to the design of systems. Human factors researchers usually use eye tracking, EEG (electroencephalography) and ECG (electrocardiography). From such inputs it is also possible to deduce the subject’s behavior and psychological state using emotion recognition algorithms. The above-mentioned devices and methods are usually used for studies and evaluations. However, such devices are increasingly being used as input devices (controllers). At first they were used by people with disabilities who are not able to use traditional modalities such as mouse and keyboard. Nowadays, there are systems that integrate such modalities, originally developed for people with disabilities, in order to increase the usability of the system for common users.

A very interesting study on human factors in the design of adaptive user interfaces was conducted by Elena Zudilova-Seinstra [36], who suggested criteria for the user model guiding and controlling the adaptation process. She also evaluated the impact of particular human factors on interface adjustments done manually by users.

There are several ways to infer the affective state of the user. The most precise is EEG; however, other unobtrusive techniques are also utilized, e.g. posture tracking with a depth camera [37]. Such an approach is used, for example, by Microsoft Kinect and Intel RealSense.

2.3.1 Eye-tracking

An eye tracker is a device used to determine what the user is looking at by observing their eye movements (useful, e.g., for determining prominent areas of a user interface — hot zones). Eye tracking (gaze tracking) is a technology that consists of capturing the eye gaze and calculating the point the user is looking at. As eye tracking is becoming more affordable, it is easier to capture human visual attention.

We distinguish between two basic types of eye tracker: table-mounted and head-mounted. Table-mounted systems consist of infrared LEDs and tracking cameras, where the entire device is usually attached to a monitor. Head-mounted (mobile) systems usually utilize two cameras and an infrared LED, which are attached to the user’s head. The first camera captures the user’s gaze (eye camera) and the second captures the scene (field camera) in order to derive the position of the user’s head. The accuracy of a professional eye tracker is usually around 0.5° and the sampling rate between 120 Hz and 600 Hz.

Low-cost eye tracking can also be performed with a low-resolution web camera. Such an approach was recently significantly improved in terms of accuracy and robustness by Fabian Timm and Erhardt Barth [38].

We discuss here various areas of eye gaze research in order to show that eye-tracking based information plays an important role in various human cognitive activities. An eye tracker can be used in several ways, for example for evaluation purposes or simply as an input device for controlling an application. However, it is also possible to interpret the user’s intention in multimodal understanding and adapt the system’s behavior according to the user’s interest and attitude estimated from the eye tracker.

Utilization of an eye tracker as an input device was demonstrated, e.g., by Jacob [39] and Ware et al. [40], who designed user interfaces controlled by an eye tracker. The eye tracker has also been successfully used for monitoring the level of perceptual load and the level of cognitive load during tasks [41]. An eye tracker is usually used for conducting user studies, where it provides additional quantitative information. An example of such a relevant user study is [42], where Chen and Pu explored users’ actual visual search patterns in a ranked list and other layout designs. The results show that users did adapt their search pattern to different layout designs.

2.3.2 Electroencephalography

Electroencephalography (EEG) provides a one-way, non-invasive interface between the brain and the computer – a Brain-Computer Interface (BCI). The stream of data recorded from the user’s brain must be processed by an algorithm designed to decode the user’s intentions or emotional state. Utilization of EEG as a computer controller has been proposed by various authors, e.g. by Solovey [43].

Usage of EEG to detect human satisfaction was demonstrated, e.g., by Esfahani et al. [44]. Their experimental results show an accuracy of 79.2% in detecting the human satisfaction level using an EEG headset which records brain activity from 14 locations on the scalp. Another study [45] evaluated the detection accuracy of one of the first BCIs intended for personal use: the Emotiv EPOC. The provided results indicate that the Emotiv EPOC performs its function as a BCI with an acceptable level of accuracy. The Emotiv EPOC headset was also evaluated by Wright in his dissertation [46], where he evaluated the accuracy of the measured levels of excitement, engagement and frustration in order to establish the validity of the Emotiv EPOC Affectiv suite. He found that self-reported levels of engagement and excitement correlated with the levels measured by the headset. Frustration, however, did not. Wright implemented an Attentive User Interface, which distinguishes between attentional states of the user (rest, moving, thinking and busy) and regulates notifications in instant messaging. A similar approach was also used by Chen et al. [47], who identified the appropriate notification in 83% of cases.


Another approach to emotion recognition was used by Gilroy et al., who utilised surface electromyography (EMG) and galvanic skin response (GSR) [48]. The results (both qualitative and quantitative) suggest that their particular multimodal fusion approach is consistent with physiological indicators of emotion.

2.4 Multimodal User Interfaces

According to Oviatt [49], multimodal user interfaces “process two or more combined user input modes (such as speech, pen, touch, manual gesture, gaze, and head and body movements) in a coordinated manner with multimedia system output. They are a new class of interfaces that aim to recognize naturally occurring forms of human language and behavior, and which incorporate one or more recognition-based technologies (e.g. speech, pen, vision).”

In this section we focus on multimodal user interfaces, particularly those using EEG or eye tracking as additional input modalities. An interesting study which combines both approaches (EEG and eye tracking) was performed by Putze et al. [50]. They used eye tracking and EEG as additional input modalities for event selection in a video stream. From eye tracking they derived the spatial location of a perceived event, and from patterns in the EEG signal they derived its temporal location within the video stream. They achieved a temporal accuracy of 91% and a spatial accuracy of 86%.

Another interesting multimodal adaptive user interface is the SmartKom project [51], a multimodal dialogue system that combines speech, gesture, and facial expressions.

Real-time EEG-based applications were developed and evaluated by Sourina et al. [52]. They developed an emotional avatar, music therapy, an EEG music player which plays music according to the user’s emotional state, and an interactive storytelling application.

Another multimodal interface combining a BCI and an eye tracker has been developed and evaluated by Zander et al. [53]. However, they focused on handicapped users who are still able to control eye movements. They concluded that supplementing an eye tracker with a BCI resulted in a more robust and intuitive device for touch-less interaction. Another hybrid BCI that combines an eye tracker and a BCI has been successfully developed for text-entry operations [54]. Using such an approach, they reduced the number of false-positive selections.


2.5 Multimodal Adaptive User Interfaces

It is obvious that considering human abilities and properties is important in the design of every user interface, and even more important in adaptive user interfaces. This new approach usually utilizes brain activity or gaze data as an additional source of information to augment and adapt the interface in conjunction with standard devices, instead of controlling it directly with the brain or eyes. There are various methods used in adaptive user interfaces, from changing the properties of the interface to adapting the task to the user’s ability level. By considering the user’s cognitive states, such as interest, workload, frustration or fatigue, it is possible to adapt the user interface to the user’s psychological state, which may have an impact on the performance of the AUI. The new trend of utilizing eye trackers in adaptive user interfaces is evidenced, e.g., by the recently published Springer book Eye Gaze in Intelligent User Interfaces [55].

In a recent paper, Tan et al. [37] claim that intelligent user interfaces can benefit from having knowledge of the user’s emotion. Similar claims can be found in various papers (e.g. in [41]) and dissertations (e.g. [56]).

Utilization of adaptation in information visualization using an eye tracker was recently demonstrated by Ben Steichen et al. [57]. They developed and evaluated visualization systems that can adapt to each individual user in real time. They found such a system, using machine learning and an eye tracker, significantly better than a one-size-fits-all model.

Another interesting multimodal AUI was developed by Shiwei Cheng et al. [58]: a product recommendation driven by eye tracking and based on an interactive genetic algorithm, which used an eye tracker as the fitness function. The paper employed eye-movement data as an implicit indicator to predict the user’s preferences, and the average accuracy of product recommendation was 87.5%.

Also quite interesting is AdELE (A Framework for Adaptive E-Learning through Eye Tracking [59]), which, however, uses only an eye tracker.

All of the above-mentioned adaptive user interfaces are mockups or simple one-purpose applications, or do not perform adaptation in real time. To our knowledge, there is no adaptive user interface implemented in a complex software application which uses an eye tracker and EEG for real-time adaptation.


Chapter 3

Achieved Results

In our previous work we have focused on traditional adaptive user interfaces, where advanced inputs such as EEG or an eye tracker were not considered. However, we have already designed, developed and basically evaluated a prototype of an adaptive user interface, which is integrated into real complex applications, namely OpenOffice.org and OpenERP. An implementation for SAP ERP is under development.

3.1 OpenOffice.org Interceptor (OOI)

The most difficult part of implementing any adaptive user interface into an existing application is the user activity logger. There was no suitable logger that would satisfy our needs, so we developed a research framework that enables the use of OpenOffice.org as a platform for HCI research, particularly for conducting user studies or prototyping intelligent user interfaces [60]. OOI uses an innovative hybrid logging technique which provides high-level, rich and accurate information about issued user commands, command parameters and the interaction styles used (whether a menu, toolbar or dialog was used). The hybrid approach [61] combines both logging approaches: user interface events (low-level events like cursor movements) and user commands (capturing issued user commands at the level of the underlying function call). Further complex processing of logged user interface events to infer user commands, which most loggers require these days, is not necessary with our hybrid logging technique.
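As an illustration, one hybrid log record might carry the following information. This is a hypothetical sketch in Java: the type and field names are ours, not the actual OOI types (the “.uno:” dispatch URL format, however, is how OpenOffice.org identifies user commands).

    import java.util.Map;

    /** Hypothetical shape of one hybrid log record; not the actual OOI types. */
    public record CommandLogEntry(
            long timestamp,
            String commandUrl,              // e.g. ".uno:Bold", an OpenOffice.org dispatch URL
            Map<String, Object> parameters, // command parameters, e.g. a color or a font size
            InteractionStyle style) {       // inferred by correlating low-level UI events

        public enum InteractionStyle { MENU, TOOLBAR, POPUP_MENU, KEYSTROKE, DIALOG }
    }

The command-level hook yields the command and its parameters directly, so the low-level event stream only needs to disambiguate the interaction style, which is what removes the post-hoc inference step mentioned above.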

3.2 Boulevard

Our system for adaptive user interface personalization, named Boulevard [62], utilizes various kinds of personalization to support the user, e.g. personalization to the user’s preferred commands, preferred interaction style, and frequently used parameter values applied to user commands. A more detailed description of the adaptation techniques used in Boulevard follows:

Discovering prominent functionality — we proposed a novel algorithm for computing user command prominence. Simply speaking, we consider the most frequently used commands to be the most prominent. However, recently used commands are also considered in order to personalize to short-time usage patterns. The user command prominence is computed using Formula 3.1, where |x| expresses the count of activations of user command x and T expresses the total count of all command activations; this part of the formula represents the frequency-of-usage factor. The rest of the formula is related to the recency of usage. Each user command may appear in the queue of size q at multiple positions. Let Px be the set containing the positions of occurrence of user command x in the queue; the topmost position is 1 (the most recently used user command) and the lowermost position is q. The parameter w ∈ [0, 1] represents the weight of the relative-frequency-of-usage factor (long-time usage patterns). The rank of a user command x, rank(x) ∈ [0, 1], expresses the prominence of the particular user command x for the user. (A code sketch of this computation is given after this feature list.)

\[
\operatorname{rank}(x) = w\,\frac{|x|}{T} + (1-w)\,\frac{\sum_{p_i \in P_x} (q - p_i + 1)}{\sum_{i=1}^{q} i} \tag{3.1}
\]

The “Sweeping-back” feature — provides an intelligent organisation of semantically similar user commands. Besides the frequency and/or recency of usage, our system organises user commands in the Boulevard considering semantic relationships as well. For example, “Bold”, “Italic” and “Underline” represent a group of font-style related commands, while “Align-left”, “Align-right”, “Center” and “Justify” represent the text alignment group.

Adaptive representation — presenting user commands according to the interaction style used to issue them (in a classic WIMP application: a menu, a toolbar button, a pop-up menu, a keystroke or a dialog). Presumably, users associate user commands with their visual representation, e.g. “Print” as a toolbar icon with a printer symbol, or “New” as a menu item in the upper part of the “File” menu category. Such adaptation supports the visual association of a user command with the interaction style preferred by the user.

15

Page 19: Towards Multimodal Adaptive User Interfaces

3. ACHIEVED RESULTS

Recommending parameter values for user commands — recommending the most frequently and/or recently used values to apply with user commands, e.g. the most frequently used colors or font sizes. Currently, such parameters are offered to the user as individual commands applied directly with the proposed parameter, e.g. a red text color button, a blue text color button, etc. See the next-to-last row of the Boulevard depicted in Fig. 3.2.

Adaptive disclosure — Boulevard is able to personalize, apart from menu and toolbar items, also dialogs. However, only the frequently and/or recently used parts of dialogs are presented. In fact, it is an adaptive variant of the well-established interaction technique sometimes referred to as progressive disclosure, which sequences complex dialogs into several parts (i.e. from basic to advanced features) which can be disclosed progressively. Such an approach is used in order to manage visual clutter and to reduce the overwhelming number of presented features. The fundamental difference from common progressive disclosure is that our variant is adaptive.

Non-destructive behavior — in contrast to most adaptive user interfaces these days, Boulevard does not modify the original part of the interface destructively (e.g. menu or toolbar contents or structure), so that the user is able to clearly understand which part of the user interface is static and which part is dynamic (adaptive). Seemingly, this approach could reduce user disorientation and improve the perception of the user interface. Also, the user is not forced to use the adaptive part of the user interface.
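To make Formula 3.1 concrete, the following is a minimal, illustrative re-implementation of the prominence computation in Java; the class and method names are ours for exposition, not taken from the Boulevard implementation.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    /** Illustrative re-implementation of Formula 3.1 (not the Boulevard source). */
    public class CommandProminence {
        private final Map<String, Integer> counts = new HashMap<>(); // |x| per command
        private final Deque<String> queue = new ArrayDeque<>();      // position 1 = most recent
        private final int q;       // recency queue size
        private final double w;    // weight of the frequency factor, w in [0, 1]
        private int total = 0;     // T, total count of all command activations

        public CommandProminence(int queueSize, double frequencyWeight) {
            this.q = queueSize;
            this.w = frequencyWeight;
        }

        public void commandIssued(String command) {
            counts.merge(command, 1, Integer::sum);
            total++;
            queue.addFirst(command);
            if (queue.size() > q) {
                queue.removeLast();   // drop the least recently used entry
            }
        }

        public double rank(String command) {
            double frequency = total == 0 ? 0.0
                    : counts.getOrDefault(command, 0) / (double) total;  // |x| / T
            int recencySum = 0;
            int position = 1;                        // topmost queue position is 1
            for (String c : queue) {
                if (c.equals(command)) {
                    recencySum += q - position + 1;  // recent occurrences weigh more
                }
                position++;
            }
            double norm = q * (q + 1) / 2.0;         // sum of 1..q
            return w * frequency + (1 - w) * (recencySum / norm);
        }
    }

For example, with q = 10 and w = 0.5, a command that accounts for 20% of all activations and occupies the two most recent queue positions gets rank = 0.5 · 0.2 + 0.5 · (10 + 9)/55 ≈ 0.273.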

The system architecture is outlined in Figure 3.1. Boulevard uses the above-mentioned logger for tracking user activity in OpenOffice.org Writer. The logged information is handled by a forward-chaining expert system written in CLIPS [63], a well-known and widely-used expert system shell with a forward-chaining inference engine which supports rule-based, object-oriented and procedural programming paradigms. The interoperation between Java and the CLIPS engine is provided by CLIPSJNI (CLIPS Java Native Interface). The output of the expert system is visually interpreted by a presentation layer based on the Swing widget toolkit for Java.
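As an illustration of this pipeline, the wiring between the logger and the CLIPS engine could look roughly as follows. This sketch assumes the CLIPSJNI Environment API; the rules file name and the fact template are hypothetical, not Boulevard’s actual rules.

    import net.sf.clipsrules.jni.Environment;

    /** Hypothetical wiring of logger output into the CLIPS engine via CLIPSJNI. */
    public class AdaptationEngine {
        private final Environment clips = new Environment();

        public AdaptationEngine() {
            clips.load("boulevard-rules.clp"); // illustrative rules file name
            clips.reset();
        }

        /** Called by the logger whenever a user command is intercepted. */
        public void onCommand(String commandUrl, String interactionStyle) {
            // Assert the logged command as a fact; the deftemplate is hypothetical.
            clips.assertString(String.format(
                    "(command-issued (url \"%s\") (style %s))", commandUrl, interactionStyle));
            clips.run(); // fire rules that recompute the adaptive panel contents
        }
    }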


Figure 3.1: Boulevard architecture (the Writer adaptive user interface extension’s logger feeds performed user commands from OpenOffice.org Writer into the expert system, whose output defines the adaptive user interface contents rendered by the presentation layer)

3.2.1 Boulevard in OpenOffice.org

Boulevard (depicted in Fig. 3.2) is represented as a panel container window containing the personalised user interface of the application. Since the panel container centralises the most preferred functionality of the application, the user’s effort required to find a command in complex menus or toolbars is reduced. In contrast to pull-down menus, the functionality provided by our adaptive panel container is directly available without the need to open and navigate within a menu — this feature is sometimes referred to as the “one-click” approach.

3.2.2 Proof-of-Concept Usability Study

The goal of the study was to verify the basic concepts of our system. Both quantitative and qualitative measures were used. The study focused on task time and error rate analysis, measured on selecting particular user commands using three different interaction styles, namely toolbar, menu and Boulevard. The study provided promising results: Boulevard was found comparable to the toolbar in quantitative measurements (although no statistically significant difference was found) and faster than menus. According to the qualitative measures, Boulevard was found to be a better interaction style than toolbar and menu by most of the participants. Furthermore, most participants reported that they would like to use Boulevard.


Figure 3.2: Boulevard in OpenOffice Writer word processor

3.3 Boulevard in ERP system

For the reasons mentioned in the previous chapter (the usability of ERP systems is widely acknowledged as problematic), we have developed an implementation of Boulevard in an ERP system. At first, we designed a mockup which was evaluated by SAP ERP users [64]. After that we designed and developed an implementation in OpenERP [65]; an implementation for SAP ERP is emerging. Not all adaptive features of Boulevard in the word processor are utilized, since screens in ERP systems are typically more complex than most dialogs in word processors. For the same reason, some new features are utilized.

3.3.1 Adaptive Disclosure with Ephemeral Visualization

We decided to utilize a new feature suitable for the complex user interfaces in ERP systems. This feature should reduce visual search time and accelerate interaction in complex screens (transactions) while maintaining spatial consistency. Ephemeral visualization was introduced by Findlater et al. [66]. They reported reduced search time in complex screens while maintaining spatial consistency. Using ephemeral visualization, predicted prominent fields become visible immediately, while the rest of the fields (those not predicted as prominent) fade in gradually but stop at 75% visibility, so that the user is able to easily distinguish between prominent and non-prominent fields. Ephemeral visualization (the implementation in OpenERP is depicted in Figure 3.3) preserves the original positions of the user interface components, which conforms to the above-mentioned non-destructive principle and maintains the user’s orientation in the user interface. In a study on ephemeral-based menus, such menus were found faster than static menus when the accuracy of the predicted items is high and, more importantly, not significantly slower when it is low [66]. We find this property very beneficial for enterprise applications, where excesses in task times are undesirable. The prediction of field prominence is based on our algorithm for determining prominent functionality, which considers both long-term and short-term usage patterns.

Figure 3.3: Ephemeral Visualization in OpenERP [65]
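To convey the fade-in timing concretely: our OpenERP implementation is web-based, so the following Swing sketch is purely illustrative of the described behavior (the class name and timing constants are ours).

    import java.awt.AlphaComposite;
    import java.awt.BorderLayout;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import javax.swing.JComponent;
    import javax.swing.JPanel;
    import javax.swing.Timer;

    /** Illustrative ephemeral fade-in: predicted fields render fully opaque at once,
     *  while the remaining fields fade in gradually and stop at 75% visibility. */
    class EphemeralField extends JPanel {
        private static final float TARGET_ALPHA = 0.75f; // non-prominent fields stop here
        private float alpha;

        EphemeralField(JComponent field, boolean predictedProminent) {
            super(new BorderLayout());
            setOpaque(false);
            add(field, BorderLayout.CENTER);
            alpha = predictedProminent ? 1.0f : 0.0f;
            if (!predictedProminent) {
                Timer timer = new Timer(40, null);  // ~25 steps over roughly one second
                timer.addActionListener(e -> {
                    alpha = Math.min(TARGET_ALPHA, alpha + 0.03f);
                    repaint();
                    if (alpha >= TARGET_ALPHA) {
                        ((Timer) e.getSource()).stop();
                    }
                });
                timer.start();
            }
        }

        @Override
        public void paint(Graphics g) {
            Graphics2D g2 = (Graphics2D) g;
            g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, alpha));
            super.paint(g2); // children are painted through the translucent composite
        }
    }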

We have also developed advanced navigation within such a form, where the user is navigated more effectively through the predicted fields. During focus traversal, the non-predicted fields are skipped. Such optimized interaction reduces the time needed to navigate the form. However, it is still possible to move to a non-predicted field using a different keystroke that strictly respects the original field order on screen. A sketch of such a traversal policy follows.
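Again a Swing-flavored, illustrative sketch, this time of the skipping traversal (not the OpenERP implementation; all names are ours). Tab cycles through the predicted fields only, in their original on-screen order:

    import java.awt.Component;
    import java.awt.Container;
    import java.awt.FocusTraversalPolicy;
    import java.util.List;
    import java.util.function.Predicate;

    /** Focus traversal that visits only the fields predicted as prominent. */
    class PredictedFieldsTraversalPolicy extends FocusTraversalPolicy {
        private final List<Component> fields;          // all fields, original form order
        private final Predicate<Component> predicted;  // the prominence prediction

        PredictedFieldsTraversalPolicy(List<Component> fields,
                                       Predicate<Component> predicted) {
            this.fields = fields;
            this.predicted = predicted;
        }

        /** Scan from index 'from' in direction 'step', wrapping, skipping non-predicted. */
        private Component scan(int from, int step) {
            int n = fields.size();
            for (int k = 1; k <= n; k++) {
                Component c = fields.get((((from + k * step) % n) + n) % n);
                if (predicted.test(c)) {
                    return c;
                }
            }
            return null; // no predicted field at all
        }

        @Override public Component getComponentAfter(Container root, Component c) {
            return scan(fields.indexOf(c), 1);
        }
        @Override public Component getComponentBefore(Container root, Component c) {
            return scan(fields.indexOf(c), -1);
        }
        @Override public Component getFirstComponent(Container root) {
            return scan(-1, 1);
        }
        @Override public Component getLastComponent(Container root) {
            return scan(fields.size(), -1);
        }
        @Override public Component getDefaultComponent(Container root) {
            return getFirstComponent(root);
        }
    }

A secondary keystroke that traverses fields in the strict original order would simply bypass this policy.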

We conducted a brief questionnaire study in which six professional SAP ERP users evaluated a mockup of the above-introduced user interface in SAP ERP. At that time the real implementation in OpenERP was not yet available. We found access to ERP users far more complicated than to word processor users. Subjects were asked to rate statements on a five-point Likert scale, and the participants showed a promisingly positive response to the Boulevard AUI.


Chapter 4

Aims of the Thesis

The aim of our research is to contribute to adaptive user interfaces, particularly to develop and evaluate adaptive user interfaces considering the user’s emotional state, workload and current point of gaze. On the proposed interface we will study the factors that influence the performance of an AUI in terms of qualitative and quantitative measures. We plan to use EEG and an eye tracker for evaluation and even for real-time adaptation purposes. As the main output, we expect to answer the question of whether it is beneficial to use inputs such as EEG and an eye tracker for user interface adaptation, which will be demonstrated on a real-life software application such as OpenOffice.org or OpenERP. For this purpose the already developed adaptive user interface will be used.

The primary source for detecting the user’s emotional state will be EEG; however, there are other techniques, like analyzing the user’s face through a camera. Even utilization of an eye tracker for determining the user’s emotional state is possible, though not as precise as EEG (used e.g. in [41]). We plan to use the following equipment:

4.1 Emotiv EPOC

Emotiv EPOC is a consumer-grade EEG headset which utilizes 16 sensors that capture electrical signals from the scalp. The captured data is wirelessly transmitted to a computer, where it is processed by the Emotiv software. This headset has already been used and evaluated in various studies discussed in Section 2.3.2.

4.2 The Eye Tribe

The Eye Tribe is an affordable table-mounted eye tracker. It provides a 60 Hz sampling rate and an accuracy of 0.5° – 1°. This is not a very accurate or fast eye tracker; however, when the user sits approximately 60 cm away from the screen, this accuracy corresponds to an on-screen average error of 0.5 to 1 cm, which is enough for our purposes.
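The on-screen error figure follows from simple trigonometry: a gaze error of $\theta$ at viewing distance $d$ corresponds to an on-screen offset of $s = d\tan\theta$, so

\[ s_{0.5^\circ} = 60\,\mathrm{cm} \cdot \tan 0.5^\circ \approx 0.52\,\mathrm{cm}, \qquad s_{1^\circ} = 60\,\mathrm{cm} \cdot \tan 1^\circ \approx 1.05\,\mathrm{cm}, \]

which matches the 0.5–1 cm range stated above.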

4.3 Plan of Research

First, we plan to focus on techniques related to measuring emotions, workload and the user’s gaze; therefore, we will initially use the appropriate devices for off-line evaluation purposes only. There has been a study focused on how the predictability and accuracy of an AUI affect the performance of the UI [29]. Therefore, we will conduct a study comparing adaptive, adaptable and static user interfaces using EEG and an eye tracker. In particular, we want to study the differences between adaptive, adaptable and static user interfaces in various situations, such as routine versus non-routine tasks and different emotional states of the user (Excitement, Engagement, Meditation and Frustration). The first tested hypothesis will be:

Hypothesis 1 An AUI is perceived as less acceptable than both a static and an adaptable user interface when the user is busy or frustrated.

We will continue with research on how the timing of user interface adaptation influences the performance, usability and reception of the AUI by users, since we consider proper timing of user interface adaptation to be critical. For this purpose we will enhance the current expert system which controls the adaptation with data sources from EEG, and appropriate rules will be developed. Simply speaking, we ask whether the user’s current emotional state influences how the user perceives the user interface adaptation. We continue with our next hypothesis:

Hypothesis 2 The user interface adaptation is perceived as less acceptable when the user is tired, busy or frustrated.

To determine how the user’s current area of interest affects the perception of user interface adaptation and its impact on performance, an eye tracker will be used. This adaptive technique is also based on proper timing of adaptation; however, the current point of gaze from the eye tracker is used instead of the mental state from EEG. Also in this case, our expert system will be extended with the appropriate data sources and rules. The next hypothesis follows:

Hypothesis 3 Dynamic user interface adaptation is more acceptable when the user’s gaze is focused on the AUI than when the user’s gaze is focused elsewhere.


After implementing and evaluating the above-mentioned approaches individually, we will evaluate a mixed approach, where the timing of the performed adaptation will be based on both the mental state and the point of gaze.

We will also focus on the visualization of adaptation. For this purpose, various animation techniques will be developed and evaluated in order to determine the impact of expressive (aggressive) versus modest animation on the user’s attention. A combination of EEG and the eye tracker will also be utilized in order to determine whether expressive animation is better when the user is not busy or tired, and whether modest animation is more appropriate when the user is busy or tired.

Hypothesis 4 A more expressive and noticeable visualization (animation) of user interface adaptation is better perceived by the user than a modest one when the user is not busy or frustrated.

We will utilize the eye tracker and EEG for various adaptation techniques which we have already designed, and for their evaluation. However, we do not present them all here, since we expect that they will be re-evaluated after we implement and evaluate the above-mentioned ones. Thus we present only a few instances, such as eye tracker-based adaptive disclosure: adaptive disclosure (presented in Section 3.2) originally presents only the frequently used parts of dialogs in the personalized user interface, and we propose a variant where data from an eye tracker will be considered for determining which parts of a dialog are important for the user.

We will also investigate the utilization of a Brain-Computer Interface (EEG-based) as an input for the AUI to detect the user’s satisfaction with a user interface adaptation and use it as feedback for correcting and improving the personalization, in order to maximize the user’s satisfaction with the user interface.

4.3.1 Research Methodology

The most important methods for our project are user studies, which can be basically divided into qualitative and quantitative approaches. The quantitative approach is based on collecting data which can be directly measured, usually speed, time, errors and distance. The qualitative approach is intended to extract the user’s satisfaction, which cannot be observed from the quantitative measures. However, in our case we will also collect data from the EEG headset and the eye tracker, which will provide the means to measure some otherwise qualitative data using a quantitative method.

A comprehensive handbook of eye tracking methodology [67] has recently been published. We found this handbook very useful and will follow the fundamentals presented in the book in our research, in particular how to design and evaluate an eye tracker-based study and how to interpret eye tracker data for an AUI.

The user’s emotional state and workload can be determined by various methods and devices. Since the Emotiv EPOC and its software suite have been reported to provide sufficient accuracy [45, 46], we primarily plan to rely on the Emotiv EPOC headset and the algorithms used in the Emotiv software suite. However, the Emotiv EPOC also provides raw EEG data, so utilization of another, possibly more appropriate and reliable algorithm is viable. Should an algorithm or technology emerge that is accurate enough to extract information such as the user’s emotional and mental state from the user’s face or voice, the EEG headset may not be used at all. Unfortunately, recently introduced systems like Intel RealSense and others have not yet been appropriately evaluated.

User studies may be conducted on a mock-up, a prototype, or a real software application. The advantage of mock-ups or software prototypes is that they are usually developed in a short time, and that the implementation of logging facilities (input for adaptation) and the AUI itself is not a demanding task. However, the limitation is that prototypes do not represent real applications known by users and usually offer only limited functionality. That is why studies conducted on prototypes may be limited in terms of external validity.

Particularly for AUIs, real, widely-used and widely-known software offers more relevant and promising results in terms of external validity. The limitations of software prototypes have also been discussed in more detail by Thomas and Krogsæter in [68]. The need to study AUIs on realistic tasks was also mentioned by Findlater et al. [69]. We have already developed a suitable research framework for performing user studies on real complex applications, namely OpenOffice.org and OpenERP.

4.4 Proposed Plan of Work

• Spring 2014

– Thesis proposal submission (January 2014).

– Thesis proposal defense and state exam (May 2014).


– Conduct a first user study using EEG and Eye-Tracker.

– Paper submission – Evaluation of Adaptive, Adaptable and Static User Interface using EEG and Eye-tracker.

– Design of the adaptive user interface using advanced inputs such as EEG and eye tracker.

• Autumn 2014

– Implementation of the above-mentioned adaptive user interface.

– Basic evaluation of the implemented interface.

– Paper submission – Multimodal Adaptive User Interface.

• Spring 2015

– Modification of adaptive user interface based on conducted userstudy.

– Implement other adaptive features of adaptive user interface.

– Conduct advanced evaluation of the user interface.

– Journal submission – Multimodal Adaptive User Interface.

• Autumn 2015

– Doctoral thesis preparation.

– Doctoral thesis submission.


Bibliography

[1] Joanna McGrenere and Gale Moore. Are we all in the same bloat? In Graphics Interface ’00, pages 187–196, 2000.

[2] Leah Kaufman and Brad Weed. Too much of a good thing?: identifying and resolving bloat in the user interface. SIGCHI Bull., 30:46–47, October 1998.

[3] Martin Dostal. On the differences in usage of word processing applications. In Constantine Stephanidis, editor, HCI International 2011 – Posters’ Extended Abstracts, volume 173 of Communications in Computer and Information Science, pages 133–137. Springer Berlin Heidelberg, 2011.

[4] Irina Ceaparu, Jonathan Lazar, Katie Bessiere, John Robinson, and Ben Shneiderman. Determining causes and severity of end-user frustration. International Journal of Human-Computer Interaction, 17(3):333–356, 2004.

[5] Marie-Claude Boudreau. Learning to use ERP technology: A causal model. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS’03) – Track 8 – Volume 8, HICSS ’03, Washington, DC, USA, 2003. IEEE Computer Society.

[6] Arik Ragowsky and David Gefen. What makes the competitive contribution of ERP strategic. SIGMIS Database, 39(2):33–49, April 2008.

[7] Akash Singh and Janet Wesson. Improving the usability of ERP systems through the application of adaptive user interfaces. In ICEIS (4), pages 208–214, 2009.

[8] Akash Singh and Janet Wesson. Evaluation criteria for assessing the usability of ERP systems. In SAICSIT Conf., pages 87–95, 2009.

[9] Chaudhry Muhammad Nadeem Faisal, Muhammad Shakeel Faridi, Zahid Javed, and Muhammad Shahid. Users’ adoptive behavior towards the ERP system. Inquiry, 8:9, 2012.


[10] Stanley R. Page, Todd J. Johnsgard, Uhl Albert, and C. Dennis Allen. User customization of a word processor. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Common Ground, CHI ’96, pages 340–346, New York, NY, USA, 1996. ACM.

[11] Wendy E. Mackay. Triggers and barriers to customizing software. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Reaching Through Technology, CHI ’91, pages 153–160, New York, NY, USA, 1991. ACM.

[12] Leah Findlater and Joanna McGrenere. A comparison of static, adaptive, and adaptable menus. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’04, pages 89–96, New York, NY, USA, 2004. ACM.

[13] M. B. Rosson. The effects of experience on learning, using, and evaluating a text-editor. Unpublished manuscript, 1983.

[14] Ling Rothrock, Richard Koubek, Frederic Fuchs, Michael Haas, and Gavriel Salvendy. Review and reappraisal of adaptive interfaces: Toward biologically inspired paradigms. Theoretical Issues in Ergonomics Science, 3(1):47–84, 2002.

[15] Kristina Höök. Designing and evaluating intelligent user interfaces. In Proceedings of the 4th International Conference on Intelligent User Interfaces, IUI ’99, pages 5–6, New York, NY, USA, 1999. ACM.

[16] Anthony Jameson. Adaptive interfaces and agents. In The Human-Computer Interaction Handbook, pages 305–330. L. Erlbaum Associates Inc., Hillsdale, NJ, USA, 2003.

[17] R. J. Keeble and R. D. Macredie. Assistant agents for the world wide web intelligent interface design challenges. Interacting with Computers, 12(4):357–381, 2000.

[18] Thomas Kühme. A user-centered approach to adaptive interfaces. In Proceedings of the 1st International Conference on Intelligent User Interfaces, IUI ’93, pages 243–245, New York, NY, USA, 1993. ACM.

[19] Ben Shneiderman. Direct manipulation for comprehensible, predictable and controllable user interfaces. In Proceedings of the 2nd International Conference on Intelligent User Interfaces, IUI ’97, pages 33–39, New York, NY, USA, 1997. ACM.


[20] J. Mitchell and B. Shneiderman. Dynamic versus static menus: Anexploratory comparison. SIGCHI Bull., 20(4):33–37, April 1989.

[21] Andrew Sears and Ben Shneiderman. Split menus: effectively usingselection frequency to organize menus. ACM Trans. Comput.-Hum.Interact., 1:27–51, March 1994.

[22] Andy Cockburn, Carl Gutwin, and Saul Greenberg. A predictive model of menu performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, pages 627–636, New York, NY, USA, 2007. ACM.

[23] P. M. Fitts. The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6):381–391, 1954.

[24] W. E. Hick. On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1):11–26, 1952.

[25] R. Hyman. Stimulus information as a determinant of reaction time. Journal of Experimental Psychology, 45(3):188–196, March 1953.

[26] Krzysztof Z. Gajos, Mary Czerwinski, Desney S. Tan, and Daniel S. Weld. Exploring the design space for adaptive graphical user interfaces. In AVI '06: Proceedings of the Working Conference on Advanced Visual Interfaces, pages 201–208, New York, NY, USA, 2006. ACM.

[27] Hermann Kaindl, Elmar P. Wach, Ada Okoli, Roman Popp, Ralph Hoch, Werner Gaulke, and Tim Hussein. Semi-automatic generation of recommendation processes and their GUIs. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, IUI '13, pages 85–94, New York, NY, USA, 2013. ACM.

[28] Fedor Bakalov, Marie-Jean Meurs, Birgitta König-Ries, Bahar Sateli, René Witte, Greg Butler, and Adrian Tsang. An approach to controlling user models and personalization effects in recommender systems. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, IUI '13, pages 49–56, New York, NY, USA, 2013. ACM.

[29] Krzysztof Z. Gajos, Katherine Everitt, Desney S. Tan, Mary Czerwinski, and Daniel S. Weld. Predictability and accuracy in adaptive user interfaces. In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pages 1271–1274, New York, NY, USA, 2008. ACM.


[30] J. A. Côté-Muñoz. AIDA: An adaptive system for interactive drafting and CAD applications. In M. Schneider-Hufschmidt, T. Kühme, and U. Malinowski, editors, Adaptive User Interfaces: Principles and Practice, pages 225–240. North-Holland, Amsterdam, 1993.

[31] Reinhard Oppermann, editor. Adaptive User Support: Ergonomic Design of Manually and Automatically Adaptable Software. L. Erlbaum Associates Inc., Hillsdale, NJ, USA, 1994.

[32] SUPPLE: Automatic Generation of Personalizable User Interfaces.http://www.cs.washington.edu/ai/supple/.

[33] Andrea Bunt. Mixed-initiative support for customizing graphical user interfaces. PhD thesis, University of British Columbia, 2007.

[34] Akash Singh and Janet Wesson. The design of adaptive interfaces for enterprise resource planning systems. In ICEIS (4), pages 281–286, 2011.

[35] Akash Singh. Designing adaptive user interfaces for enterprise resource planning systems for small enterprises. PhD thesis, Nelson Mandela Metropolitan University, 2013.

[36] Elena Zudilova-Seinstra. On the role of individual human abilities in the design of adaptive user interfaces for scientific problem solving environments. Knowledge and Information Systems, 13(2):243–270, 2007.

[37] Chiew Seng Sean Tan, Johannes Schöning, Kris Luyten, and Karin Coninx. Informing intelligent user interfaces by inferring affective states from body postures in ubiquitous computing environments. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, IUI '13, pages 235–246, New York, NY, USA, 2013. ACM.

[38] Fabian Timm and Erhardt Barth. Accurate eye centre localisation by means of gradients. In Leonid Mestetskiy and José Braz, editors, VISAPP, pages 125–130. SciTePress, 2011.

[39] Robert J. K. Jacob. Eye tracking in advanced interface design. In Virtual Environments and Advanced Interface Design, pages 258–288. Oxford University Press, Inc., New York, NY, USA, 1995.

[40] Colin Ware and Harutune H. Mikaelian. An evaluation of an eye tracker as a device for computer input. In Proceedings of the SIGCHI/GI Conference on Human Factors in Computing Systems and Graphics Interface, CHI '87, pages 183–188, New York, NY, USA, 1987. ACM.

[41] Siyuan Chen, Julien Epps, and Fang Chen. Automatic and continuous user task analysis via eye activity. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, IUI '13, pages 57–66, New York, NY, USA, 2013. ACM.

[42] Li Chen and Pearl Pu. Eye-tracking study of user behavior in recommender interfaces. In Paul De Bra, Alfred Kobsa, and David Chin, editors, User Modeling, Adaptation, and Personalization, volume 6075 of Lecture Notes in Computer Science, pages 375–380. Springer Berlin Heidelberg, 2010.

[43] E. T. Solovey. Using your brain for human-computer interaction. ACM UIST 2009 Symposium on User Interface Software and Technology, 2009.

[44] Ehsan Tarkesh Esfahani and V. Sundararajan. Using brain–computer interfaces to detect human satisfaction in human–robot interaction. International Journal of Humanoid Robotics, 8(1):87–101, 2011.

[45] Grant S. Taylor and Christina Schmidt. Empirical evaluation of the Emotiv EPOC BCI headset for the detection of mental actions. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 56(1):193–197, 2012.

[46] Franklin Pierce Wright. EmoChat: Emotional instant messaging with the EPOC headset. PhD thesis, University of Maryland, Baltimore County, 2010.

[47] Daniel Chen and Roel Vertegaal. Using mental load for managing interruptions in physiologically attentive user interfaces. In CHI '04 Extended Abstracts on Human Factors in Computing Systems, CHI EA '04, pages 1513–1516, New York, NY, USA, 2004. ACM.

[48] Stephen W. Gilroy, Marc O. Cavazza, and Valentin Vervondel. Evaluating multimodal affective fusion using physiological signals. In Proceedings of the 16th International Conference on Intelligent User Interfaces, IUI '11, pages 53–62, New York, NY, USA, 2011. ACM.

[49] Sharon Oviatt. Advances in robust multimodal interface design. IEEE Comput. Graph. Appl., 23(5):62–68, September 2003.


[50] Felix Putze, Jutta Hild, Rainer Kärgel, Christian Herff, Alexander Redmann, Jürgen Beyerer, and Tanja Schultz. Locating user attention using eye tracking and EEG for spatio-temporal event selection. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, IUI '13, pages 129–136, New York, NY, USA, 2013. ACM.

[51] Wolfgang Wahlster, Norbert Reithinger, and Anselm Blocher. SmartKom: Multimodal communication with a life-like character. In Proceedings of the 7th European Conference on Speech Communication and Technology, volume 3, pages 1547–1550. ISCA, September 2001.

[52] Olga Sourina, Yisi Liu, Qiang Wang, and MinhKhoa Nguyen. EEG-based personalized digital experience. In Constantine Stephanidis, editor, Universal Access in Human-Computer Interaction. Users Diversity, volume 6766 of Lecture Notes in Computer Science, pages 591–599. Springer Berlin Heidelberg, 2011.

[53] Thorsten O. Zander, Matti Gaertner, Christian Kothe, and Roman Vilimek. Combining eye gaze input with a brain–computer interface for touchless human–computer interaction. International Journal of Human-Computer Interaction, 27(1):38–51, 2010.

[54] X. Yong, M. Fatourechi, R. K. Ward, and G. E. Birch. The design of a point-and-click system by integrating a self-paced brain-computer interface with an eye-tracker. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 1(4):590–602, 2011.

[55] Yukiko I. Nakano, Cristina Conati, and Thomas Bader, editors. Eye Gaze in Intelligent User Interfaces. Springer London, 2013.

[56] A. Girouard. Towards Adaptive User Interfaces Using Real-Time fNIRS. BiblioBazaar, 2011.

[57] Ben Steichen, Giuseppe Carenini, and Cristina Conati. User-adaptive information visualization: Using eye gaze data to infer visualization tasks and user cognitive abilities. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, IUI '13, pages 317–328, New York, NY, USA, 2013. ACM.

[58] Shiwei Cheng, Xiaojian Liu, Pengyi Yan, Jianbo Zhou, and Shouqian Sun. Adaptive user interface of product recommendation based on eye-tracking. In Proceedings of the 2010 Workshop on Eye Gaze in Intelligent Human Machine Interaction, EGIHMI '10, pages 94–101, New York, NY, USA, 2010. ACM.


[59] Víctor Manuel García-Barrios, Christian Gütl, Alexandra Preis, Keith Andrews, Maja Pivec, Felix Mödritscher, and Christian Trummer. AdELE: A framework for adaptive e-learning through eye tracking. In Proceedings of I-KNOW 2004, pages 609–616, 2004.

[60] Martin Dostal and Zdenek Eichler. A research framework for performing user studies and rapid prototyping of intelligent user interfaces under the OpenOffice.org suite. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems. ACM, 2011.

[61] Martin Dostal and Zdenek Eichler. A hybrid approach to user activity instrumentation in software applications. In Proceedings of the Human-Computer Interaction International Conference, CCIS. Springer, 2011.

[62] Martin Dostal and Zdenek Eichler. Fine-grained adaptive user interface for personalization of a word processor user interface: Principles and a preliminary study. In Proceedings of the Human-Computer Interaction International Conference, CCIS. Springer, 2011.

[63] Joseph C. Giarratano. CLIPS Reference Manual, Volume II - Advanced Programming Guide, 2007. Accessed 11 November 2009.

[64] Zdenek Eichler and Martin Dostal. Adaptive user interface personalization in ERP systems. In Witold Abramowicz, John Domingue, and Krzysztof Węcel, editors, Business Information Systems Workshops, Lecture Notes in Business Information Processing, pages 49–60. Springer Berlin Heidelberg, 2012.

[65] Dominik Michna. Adaptivní uživatelské rozhraní v ERP systému [Adaptive user interface in an ERP system] [online]. Master's thesis, Masaryk University, Faculty of Informatics, 2013 [cit. 2014-01-19].

[66] Leah Findlater, Karyn Moffatt, Joanna McGrenere, and Jessica Dawson. Ephemeral adaptation: The use of gradual onset to improve menu selection performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09, pages 1655–1664, New York, NY, USA, 2009. ACM.

[67] Kenneth Holmqvist, Marcus Nyström, Richard Andersson, Richard Dewhurst, Halszka Jarodzka, and Joost van de Weijer. Eye Tracking: A Comprehensive Guide to Methods and Measures. Oxford University Press, 2011.

[68] Christopher G. Thomas and Mette Krogsæter. An adaptive environment for the user interface of Excel. In IUI '93: Proceedings of the 1st International Conference on Intelligent User Interfaces, pages 123–130, New York, NY, USA, 1993. ACM.

[69] Leah Findlater and Joanna McGrenere. Impact of screen size on performance, awareness, and user satisfaction with adaptive graphical user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pages 1247–1256, New York, NY, USA, 2008. ACM.


Appendix A

List of Author’s Results in the Field

A.1 Published Publications

1. Zdenek Eichler and Martin Dostal. Adaptive user interface personalization in ERP systems. In Witold Abramowicz, John Domingue, and Krzysztof Węcel, editors, Business Information Systems Workshops, Lecture Notes in Business Information Processing, pages 49–60. Springer Berlin Heidelberg, 2012.

2. Martin Dostal and Zdenek Eichler. Fine-grained adaptive user interface for personalization of a word processor user interface: Principles and a preliminary study. In Proceedings of the Human-Computer Interaction International Conference, CCIS. Springer, 2011.

3. Martin Dostal and Zdenek Eichler. A hybrid approach to user activity instrumentation in software applications. In Proceedings of the Human-Computer Interaction International Conference, CCIS. Springer, 2011.

4. Martin Dostal and Zdenek Eichler. A research framework for performing user studies and rapid prototyping of intelligent user interfaces under the OpenOffice.org suite. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems. ACM, 2011.

A.2 Submitted Publications

• Tomáš Jirsík, Martin Husák, Pavel Čeleda, and Zdenek Eichler. Cloud-based Security Research Testbed: A DDoS Use Case. Submitted to the NOMS conference.

• Martin Dostal and Zdenek Eichler. Expert System for Adaptive Personalization of a Word Processor's User Interface. Submitted to International Journal on Artificial Intelligence Tools.


A.3 Participation in Projects

• Cybernetic Proving Ground, project of the Ministry of the Interior of the Czech Republic, identification code: VG20132015103. Deputy head of the visualization group.

A.4 Patent

U.S. Patent Application, unpublished (filing date July 1, 2013), Dostal, Eichler.

A.5 Citations

The paper "Fine-grained adaptive user interface for personalization of a word processor user interface: Principles and a preliminary study" has been cited in the following recent paper:

Alotaibi, M. B. (2013). Adaptable and Adaptive E-Commerce Interfaces: An Empirical Investigation of User Acceptance. Journal of Computers, 8(8).


Appendix B

Attached Publications

1. Zdenek Eichler and Martin Dostal. Adaptive user interface personalization in ERP systems. In Witold Abramowicz, John Domingue, and Krzysztof Węcel, editors, Business Information Systems Workshops, Lecture Notes in Business Information Processing, pages 49–60. Springer Berlin Heidelberg, 2012.

My contribution to this publication was the idea of Adaptive Disclosure with Ephemeral Visualization. I also created the proposed mockup, explored the possibilities of a real implementation in SAP ERP, and conducted the presented brief user study.

2. Martin Dostal and Zdenek Eichler. Fine-grained adaptive user interface for personalization of a word processor user interface: Principles and a preliminary study. In Proceedings of the Human-Computer Interaction International Conference, CCIS. Springer, 2011.

I developed the presented implementation, contributed to the basic ideas behind the presented AUI, and conducted the presented user study.

3. Martin Dostal and Zdenek Eichler. A hybrid approach to user activity instrumentation in software applications. In Proceedings of the Human-Computer Interaction International Conference, CCIS. Springer, 2011.

I developed the presented software and contributed to the proposed hybrid approach to logging user activity.

4. Martin Dostal and Zdenek Eichler. A research framework for performing user studies and rapid prototyping of intelligent user interfaces under the OpenOffice.org suite. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems. ACM, 2011.

I developed the presented software and proposed an example implementation of adaptive menus in OpenOffice.org.
