
Towards the ubiquitous visualization: Adaptive user-interfaces based on the Semantic Web

Ramón Hervás, José Bravo

    Castilla-La Mancha University, Paseo de la Universidad, 13071 Ciudad Real, Spain

    a r t i c l e i n f o

    Article history:

Received 3 February 2010

Received in revised form 22 August 2010

    Accepted 24 August 2010

    Available online 15 September 2010

    Keywords:

    Ambient Intelligence

    Information visualization

    Information retrieval

    Context-awareness

    Ontology

    Intelligent user interfaces

    a b s t r a c t

    This manuscript presents an infrastructure that contributes to ubiquitous information. Advances in

    Ambient Intelligence may help to provide us with the right information at the right time, in an appropri-

    ate manner and through the most suitable device for each situation. It is therefore crucial for such devices

    to have contextual information; that is, to know the person or persons in need of information, the envi-

    ronment, and the available devices and services. All of this information, in appropriate models, can pro-

    vide a simplified view of the real world and let the system act more like a human and, consequently, more

    intelligently. A suitable context model is not enough; proactive user interface adaptation is necessary to

    offer personalized information to the user. In this paper, we present mechanisms for the management of

    contextual information, reasoning techniques and adaptable user interfaces to support visualization ser-

    vices, providing functionality to make decisions about what and how available information can be

    offered. Additionally, we present the ViMos framework, an infrastructure to generate context-powered

    information visualization services dynamically.

© 2010 Elsevier B.V. All rights reserved.

Interacting with Computers 23 (2011) 40-56. doi:10.1016/j.intcom.2010.08.002

Corresponding author. Tel.: +34 926295300x6332; fax: +34 926295354.

E-mail addresses: [email protected] (R. Hervás), [email protected] (J. Bravo).

    1. Introduction

    The real world is wide and complicated, and the human brain

    requires complex cognitive processes to understand it. In fact, we

are used to creating models that describe the environment while hiding

its complexity to some degree. Computer systems also require

    models that describe the real world and abstract from these diffi-

    culties in order to understand it (at least in part) thus acting more

    like humans. Consequently, numerous applications can be devel-

oped to facilitate people's daily lives. A large amount of information

from humans' everyday lives can be recognized: newspapers, sales,

    mail, office reports, and so on. All this information can be managed

    by an intelligent environment offering the contents needed, when

    needed, no matter where we are.

The services mentioned above require a high-quality method of visualizing information. Our objective is to offer the desired infor-

    mation at the right time and in a proper way. Advances in the

    Semantic Web combined with context-awareness systems and

    visualization techniques can help us accomplish our main goal.

    Applications capable of managing a model of context, represented

    by an ontology describing parts of the surrounding world, assist us

    by offering information from heterogeneous data sources in an

    integrated way. This will reduce the interaction effort (it is possible

to deduce part of the information needed to analyze the user situation) and generate information views according to the user and

the display's characteristics. The generation of user-interfaces

based on the user's situation requires advanced techniques to

    adapt content at run-time. It is necessary to automate the visuali-

    zation pipeline process, transforming the selected raw data into vi-

sual contents and adapting them to the final user interface.

    This paper is structured as follows: Section 2 is dedicated to the

    modeling of context-aware information applying advances in

    Semantic Web languages. Section 3 introduces information visual-

    ization services in pervasive environments. Section 4 presents our

    infrastructure to generate ontology-powered user interfaces

    dynamically, retrieving information based on the users situation

    and adapting their visual form to the display. A case study is de-

scribed in Section 5, analyzing infrastructure functionality for this particular case. In Section 6 we evaluate the infrastructure. Sec-

    tions 7 and 8 include related work that uses context for generating

    and adapting user interfaces and the contributions and discussions.

    Finally, Section 9 concludes the paper.

    2. Context-awareness through the Semantic Web

Only by understanding the world around us can applications be

developed that are capable of making daily activities easier.

Users' actions can be anticipated by looking at the situations they

are in (Schilit et al., 1994). Context is, by nature, broad, complex

    and ambiguous. We need models to represent reality or, more


    precisely, to characterize the context as a source of information.

    These models define the context factors relevant to the user, his

    or her environment and situation. At the same time, it is possible

    to share this real world perception between different applications

    and systems (Henricksen et al., 2002).

    Recently, Semantic Web languages have been used for context

    modeling, for example the CONON model (Gu et al., 2004), that

implements mechanisms for representing, manipulating and accessing contextual information; the SoaM Architecture (Vazquez

Gómez et al., 2006), a Web-based environment reactivity model

    that uses orchestration to coordinate existing smart objects in per-

    vasive scenarios in order to achieve automatic adaptation to user

    preferences; and the COBRA Architecture (Chen et al., 2003), an

    infrastructure based on multi-agents that share contextual infor-

    mation. In general, there are benefits associated with the rich

    expressiveness of modeling languages such as OWL and RDF and

    their semantic axioms and standardization. Despite the well-

    known benefits of these languages, they were not originally de-

    signed to model intelligent environments. For this reason, there

    are some difficulties in modeling contextual information: distin-

    guishing between different information sources, allowing for

    inconsistencies, temporal aspects of the information, information

    quality, and privacy and security policies.

    By adapting Semantic Web technologies to context-aware envi-

    ronments, we can implement solutions to these problems. We pre-

    sented context management strategies based on the Semantic Web

in previous publications (Hervás et al., in press). Thus, the follow-

    ing sections of this paper focus on the design decisions, the con-

    straints and capabilities of the user interface generator to realize

    the design, and prototypes for a particular service: information

    visualization.

    3. Pervasive information visualization

    In our daily life, we manage and analyze a great variety of per-

    sonal information such as calendars, news, emails and digital doc-

uments. Many actions and decisions are based on information we obtain from various and heterogeneous sources. In fact, informa-

    tion is a ubiquitous part of our everyday tasks. As a result, advances

    in the visualization of information may be a great support for the

    development and acceptance of the Ambient Intelligence paradigm

    (ISTAG, 2001).

    Visualization in smart environments has been studied from dif-

    ferent perspectives. For example, public displays have received

    considerable interest in recent years. Demand is increasing for

    ubiquitous and continuous access to information and for interac-

    tive and embedded devices. Typically, public displays can enhance

    user collaboration and coordinate multi-user and multi-location

    activities. Displayed information may offer an overview of work-

    flow, revealing its status, enabling communication between the

users and the management of contingencies or unexpected situations (José et al., 2006). Toward this end, we can find significant

contributions in the literature. Gross et al. (2006) introduced

    Media Spaces, a collaborative space with displays that connect

    two physical spaces through video and audio channels. Other

    authors have presented proposals based on wall displays (Baud-

    isch, 2006; Vogl, 2002) including interesting advances in interac-

    tion techniques for public displays. Most of these proposals

    include adaptive mechanisms based on contextual parameters.

For example, Muñoz et al. (2003) developed a display-based coor-

    dination system for hospitals, adapting their behavior based on

    user tasks, their status and environmental contingencies. Another

    study (Mitchell and Race, 2006) adapts the displayed information

    depending on the space characteristics (distinguishing between

transient spaces, social spaces, public or open spaces, or informative spaces).

    The transition from collaborative desktop computers to public

    displays brings up a wide range of research questions in user inter-

    faces and information visualization areas. Using applications de-

    signed for desktop computers in public displays may be

    problematic. One important difference is the spontaneous and

    the sporadic nature of public displays, but the main question is

    how to adapt to a variety of situations, multiple users and a wide

range of required services. Focusing on the use of public displays, we can identify several differences to keep in mind:

    Wide size ranges and capabilities: element visualization

    depends on the absolute position in the interface. The visual

perception of elements differs between the middle and the cor-

ners of the display. Moreover, visual capabilities (such as size, resolu-

    tion, brightness, and contrast) affect the final interface view.

Interaction paradigms: The classic Windows-Icons-Menus-

Pointers (WIMP) paradigm requires reconsideration if visualiza-

    tion is to operate coherently because this paradigm is funda-

    mentally oriented toward processing a single stream of user

    input with respect to actions undertaken on relatively small,

    personal screens. Innovative interaction techniques and para-

    digms are thus necessary, such as implicit interaction (Schmidt,

    2000), touching approaches (Bravo et al., 2006), and gesture rec-

    ognition (Wang, 2009). It is important to analyze the particular

characteristics of available interaction paradigms when develop-

    ing user interfaces. In particular, interaction flow is an essential

    issue to take into account. One study presents a classification of

interaction according to the interaction flow (Vincent and Francis,

    2006); the authors distinguish between three types. One-way

    interaction includes applications that only need to be able to

    receive content from users. Two-way interaction requires that

    data can be sent to the display landscape from users and vice

    versa. Finally, a high degree of interactions occurs when appli-

    cations require permanent interaction between the display

    landscape and the users in both directions. Another classifica-

    tion that focuses on the tasks that users wish to perform during

information visualization organizes interaction more abstractly, for example, prepare, plan, explore, present, overlay, and re-ori-

    ent (Hibino, 1999).

    Multi-user: multiple parallel inputs and, consequently, multiple

    parallel outputs may be allowed in public displays. Further-

more, the users' social organization is an important information

    source, distinguishing between single users, group users or

    multi-group users.

    Privacy vs. content: these concepts are sometimes contradictory

    in public displays. Whenever applications are designed to offer

    information to users through public displays, the visualization

    of personalized contents may endanger the privacy of users.

    However, inflexible levels of privacy assurance typically make

    it difficult to offer a broad set of relevant contents. Nowadays,

most public presentation visualization services lack context-awareness, thus making a valid compromise

between privacy and personalized contents impossible.

    On the other hand, it is important to consider the dynamic and

    continuous evolution of pervasive environments. The kind of users,

    available displays, and visualization requirements, are only some

    of the characteristics that may change with time. Consequently,

    it is necessary to reconsider the design process and to provide pro-

    active mechanisms to generate user interfaces dynamically. By

    representing context information, the environment will be able

to react to situation changes and determine the services to be

    displayed.

    In this section, we have identified and described several charac-

teristics that context-sensitive user interfaces must analyze in selecting which contents should be offered and in which visual


    form. However, these characteristics are not always directly obser-

    vable, but user interfaces focus on observable behaviors. Advances

    in context-awareness improve the connection between human

    behavior and its computational representation, for example, by

    abstracting or compounding unobservable characteristics from ob-

    servable ones. The next section introduces our ontological context

    model in order to enhance implicit communication between the

user's immediate environment and the generated user interfaces.

    4. Ontology-powered information visualization. Our proposal

    The main challenges to generate context-driven visualization

services at run-time are: (a) to determine relevant changes in

    the context elements and the correlation between these changes

and the reconfiguration of the displayed user interfaces, (b) how to

    make the heterogeneous visualization services interoperable in or-

    der to work together uniformly; several services should share

    information and complement one another, and (c) how to integrate

    the set of services in order to display them into a homogeneous

    view. By modeling the world around the applications and users,

    in a simplified way, through Semantic Web languages (such as

OWL, RDF and SWRL), we can solve these problems.

    4.1. User context and visualization model

    We have defined the context model from two perspectives:

    Information Visualization and Ambient Intelligence issues. The

    first perspective pertains to perceptive and cognitive issues, gra-

    phic attributes, and data properties. The second one recognizes

    environmental characteristics to improve information visualiza-

tion. On the one hand, the environmental issues are described through three OWL ontologies: (a) User Ontology, describing the

    user profile, their situation (including location, activities, roles,

    and goals, among others) and their social relationships, (b) Device

    Ontology, that is the formal description of the relevant devices and

    their characteristics, associations and dependencies, and (c) Phys-

    ical Environment Ontology, defining the space distribution. The

principal elements of these ontologies are shown in Fig. 1.

Fig. 1. Principal concepts and properties of the context model.
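The three sub-models can be pictured as plain data structures. The following C# sketch (C# being the implementation language used later for ViMos) mirrors some principal concepts of Fig. 1; the type and property names are illustrative simplifications, not the actual OWL vocabulary of the ontologies.

using System.Collections.Generic;

// Illustrative mirror of the User, Device and Physical Environment ontologies;
// all names are simplified stand-ins for the OWL classes and properties of Fig. 1.
public record PhysicalSpace(string Name, List<PhysicalSpace> SubSpaces);

public record Device(
    string Id,
    string Kind,                // e.g. "PublicDisplay", "NFCPhone"
    PhysicalSpace Location,
    List<string> Capabilities); // e.g. "touch", "NFC", "1080p"

public record User(
    string Name,
    PhysicalSpace LocatedIn,    // counterpart of a locatedIn-style property
    string CurrentActivity,
    List<string> Roles,
    List<User> SocialRelations);

In the real system these concepts live in OWL and are populated and queried through COIVA rather than held in memory like this.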

    These models represent the main elements of the context, the

    three elements described above and the service model. This formal

    description is intended to be generic enough to support a variety of

    services that intelligent environments provide to users. This study

    focuses on personalized information visualization, which is a par-

    ticular, required service for the development of many activities,

    whether they are intended for work or leisure, social or personal



    use or for daily or infrequent deployment. As such, we also propose

    an ontological definition of information visualization concepts and

    properties.

    Information visualization is a multi-disciplinary area, so it is

    hard to construct an ontological representation for it. For this rea-

    son, we have identified the most important concepts, classifying

them according to the criteria for constructing a taxonomy (Hervás

et al., 2008) to guide the process of building the corresponding ontology, called PIVOn (pervasive information visualization ontol-

    ogy, shown in Fig. 2). We have organized the ontology elements as

    follows:

    The relationship between information visualization issues and

the relevant elements of the context: The information visualiza-

tion process should not be limited to the visual data

    representation, but should rather be understood as a service

    offered to one or more users with specific characteristics and

    capabilities, all immersed in an environment, and presented

    through devices of different features and functionalities.

    Metaphors and patterns: The way in which information is pre-

sented should facilitate rapid comprehension and synthesis, mak-

ing use of design principles based on human perception and

cognition. One way to achieve these principles is through

    patterns.

    Visualization pipeline: The model represents the main elements

    involved in the visual mapping. Data sets are transformed into

    one or more visual representations, which are chosen to be dis-

    played to the user, along with associated methods or interaction

    techniques.

    Methods and interaction Paradigms: It is possible to interact

    with the visualization service by many different paradigms

    and techniques. The model has to represent these two features

    for providing the needed mechanisms to offer consistent infor-

    mation according to the devices that interact with the environ-

    ment. Displays and other devices can be involved in the

    interaction processes through pointers, infrared sensors, Radio

    Frequency Identification (RFID) or Near Field Communication(NFC) devices, and so on.

    Structure and characteristics of the view: Information is not

    usually displayed in isolation. On the one hand, visualization

    devices have graphical capabilities for displaying various types

    of contents at once. Moreover, providing a set of related con-

    tents makes the knowledge transmission easier and provides

    more information than the separated addition of all the consid-

    ered contents.

    Related social aspects: The visualization can be optimized

    depending on the social groups of its users. At this point, it is

    possible to observe the relationship between this model and

    the user model. The latter represents the relationships in the

    group, specifying the objectives and tasks, individual or

    grouped. Moreover, the user model reflects the fact that the

    individual users or groups can be located at the same place or

    in different places.

Data characteristics: Returning to the process of trans-

forming the data sets into their visual representation, studying

    the data characteristics can improve the process: data source,

type, data structure, expiration, truth and importance. Data

sources present challenges mainly because of



their diversity, volume, dynamic nature and ambiguity. By under-

standing the nature of the data, we can provide mechanisms

    that help the visualization process. Regarding the data source,

we consider some data types: text, databases, images, video

    and contextual data (typically obtained from sensors, but it

    may be inferred).

    Scalability: Another core concept is the scalability. Usually, fil-

ter methods are necessary to scale the data, reducing the amount, defining the latency policy or adapting the complexity.

These concepts are input variables; we can also analyze sca-

lability as an output variable that determines the capability of

    visualization representation and visualization tools to effec-

tively display massive data sets, called visual scalability (Eick

and Karr, 2002). There are some factors directly related to

    visual scalability: display characteristics such as resolution,

    communication capabilities, and size; visual metaphors, cogni-

    tive capabilities, and interaction techniques (Thomas and Cook,

    2005). It is plausible that the amount of information required

could be reduced by increasing the number of views and, there-

fore, the amount of interaction. There are various techniques for

information scalability. The model describes some of them:

zooming, paging, filtering, latency and scalability of complexity.

Fig. 2. Information visualization ontology: a simplified representation.
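As a compact illustration of the last two groups of elements (data characteristics and scalability), the sketch below expresses them as C# types; the member names are assumptions derived from the list above, not PIVOn's actual identifiers.

using System;

// Illustrative counterparts of the "data characteristics" and "scalability"
// elements listed above; member names are assumptions, not PIVOn identifiers.
public enum DataType { Text, Database, Image, Video, ContextualData }

public enum ScalabilityTechnique { Zooming, Paging, Filtering, Latency, Complexity }

public record DataCharacteristics(
    string Source,
    DataType Type,
    DateTime? Expiration,  // when the data stop being valid
    double Truth,          // confidence placed in the data
    double Importance);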

    PIVOn conceptualizes a considerable number of concepts and

    relationships. However, it may not be complete for use in a partic-

    ular environment. This is why special attention has been paid to

    avoiding the inclusion of elements that are inconsistent in certain

    domains. In addition, we offer mechanisms needed to extend the

    model in order to satisfy the new requirements and integrate them

    with other ontological models. In general, the context ontologies

    are adequately generic and have sufficient detail to represent con-

    cepts involved in many typical scenarios related to Ambient Intel-

    ligence, particularly those that take a user-centered perspective.

However, this model's generality requires undertaking a special-

    ization process that includes the domain-specific concepts for each

    concrete application. Thus, our context model includes general

    concepts and relationships, and as such, it serves as a guide for tak-ing into account relevant aspects of context in order to obtain a

    specific context model depending on application needs. For exam-

    ple, the user profile (as an expansion of FOAF ontology) includes

    the concept of cognitive characteristic; depending on the context-

sensitive application to be developed, this concept should be spe-

    cialized, for example, by relating this concept to an OWL or RDF

    vocabulary that includes cognitive characteristics. Authors such as

    Abascal et al. (2008) and Golemati et al. (2006) have studied user

    cognitive characteristics that affect interaction with Ambient Intel-

ligence services. These kinds of taxonomy can be easily integrated into

our context model to enable a definition of adaptive behavior based

    on them. The same specialization process has been necessary to de-

    velop the prototypes described in Section 5. These interaction-re-

lated concepts have been expanded to describe characteristics of the different interaction techniques (in this case, touch screens

    and mediated interaction through Near Field Communication). In

addition, the pattern concept in the visualization ontology takes val-

    ues of specific patterns developed in our prototypes.

As described in previous works (Hervás, 2009), the COIVA archi-

    tecture manages contextual information. This architecture, in addi-

    tion to providing a specialization mechanism, supports the

    dynamic maintenance of context information. COIVA includes a

    reasoning engine that hastens the start-up process, enabling the

    automatic generation of ontological individuals. Moreover, con-

    text-aware architectures tend to generate excessive contextual

information at run-time. The reasoning engine can support the def-

inition of update and deletion policies, thereby keeping the context

model accurate and manageable. This reasoning engine is based on description logics and behavior rules in Semantic Web Rule

Language (SWRL; Horrocks et al., 2004) in order to endow the archi-

    tecture with inference capabilities. To support the highly dynamic

    nature of Ambient Intelligence, COIVA enables adaptive behavior at

    two stages: at design-time and at run-time. In anticipation of this

    requirement, we decided to refactor the mechanism to monitor

    reasoning rules by dynamic context-event handlers, which develop

    a reaction to context changes. The active rules are obtained from

plain text files, and each gathered rule is handled independently. Moreover, COIVA includes an abstraction engine that fits raw con-

    text data into the context model, which is needed to abstract and

    compound context information and reduce redundancy and ambi-

    guities. An important limitation is that COIVA does not directly

    manage the sources of raw data, i.e., sensors or data collections

    to be transformed into ontological individuals in the context mod-

    el. COIVA has been designed under the premise of generality, and

    thus, any transformation of the model is highly dependent on the

    environment in which the services are deployed and the applica-

    tion domain itself. Thus, it is necessary that data collections are

    annotated with meta-data (or another technique used to associate

    semantics to data) and that the user context is acquired. Section 5

    describes how context and data information is generated and ac-

    quired in particular prototypes as well as how this information is

    transformed to ontological individuals and then used in the user

    interface generation and adaptation processes.
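The behavior-rule handling described above (active rules gathered from plain text files, each handled independently by a dynamic context-event handler) can be sketched as follows. This is only illustrative plumbing, since the real rules are SWRL expressions evaluated by the reasoning engine; every name here (RuleEngine, LoadRules, OnContextChanged) is hypothetical.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Hypothetical sketch of rule loading and context-event dispatch.
// Real COIVA rules are SWRL; here a "rule" is just its text plus a compiled handler.
public sealed class RuleEngine
{
    private readonly List<(string RuleText, Action<string> Handler)> _handlers = new();

    // Each non-empty, non-comment line of the file is one active rule,
    // handled independently when a context change arrives.
    public void LoadRules(string path, Func<string, Action<string>> compile)
    {
        foreach (var line in File.ReadAllLines(path)
                                 .Select(l => l.Trim())
                                 .Where(l => l.Length > 0 && !l.StartsWith("#")))
        {
            _handlers.Add((line, compile(line)));
        }
    }

    // Called whenever an ontological individual is added, updated or removed.
    public void OnContextChanged(string changedIndividual)
    {
        foreach (var (_, handler) in _handlers)
            handler(changedIndividual);  // each rule reacts on its own
    }
}

A caller would load the rule file once at start-up and then invoke OnContextChanged whenever the abstraction engine asserts or updates an individual.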

    4.2. Visualization mosaics

    Our framework called visualization mosaics (ViMos) generates

    user interfaces dynamically. ViMos is an information visualization

    service that applies context-awareness to provide adapted infor-

    mation to the user through embedded devices in the environment.

    The displayed views are called mosaics, because they are formed

    by independent and related pieces of information creating a two

    dimensional user interface. These pieces of information are devel-

    oped as user interface widgets with the principal objective of pre-

    senting diverse contents. In this sense, they have several associated

techniques of scalability to adapt themselves according to which contents to display and the available area in the user interface

    for the given piece of information. ViMos includes a library of these

    pieces in order to display multiple kinds of data (e.g., plain text,

    images, multimedia, and formatted documents) by using different

    visualization techniques (e.g., lists of elements and 2D trees) and

    providing adaptive techniques to fit the visual form (e.g., zoom,

    pagination, and scrolling).

    Initially, we simply have several sets of information. By analyz-

ing users' situations (described in the context model), the best sets

    are selected. Each item of content has several associated character-

    istics (such as optimum size or elasticity). These characteristics can

    be described in the visualization information ontology and make

    the final generation process of the mosaic possible, adapting the

user interface according to the situation. We can thus make interfaces more dynamic and adaptive and improve the quality of content.
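A minimal sketch of such a self-describing, self-adapting piece of information, assuming a C#/WPF code base like the one described in the next subsection; the interface members (OptimumSize, Elasticity, AdaptTo) are hypothetical names for the characteristics mentioned above.

using System;

// Hypothetical abstraction of a ViMos information piece (widget).
public interface IInformationPiece
{
    (double Width, double Height) OptimumSize { get; }
    double Elasticity { get; }  // how far the piece may deviate from its optimum size

    // Adapt the piece to the area granted by the mosaic, applying a
    // scalability technique (zoom, pagination, scrolling, ...) as needed.
    void AdaptTo(double width, double height);
}

public sealed class PagedTextPiece : IInformationPiece
{
    private readonly string _text;
    public PagedTextPiece(string text) => _text = text;

    public (double Width, double Height) OptimumSize => (400, 300);
    public double Elasticity => 0.5;

    public void AdaptTo(double width, double height)
    {
        // Rough adaptation: fewer pixels -> smaller chunks spread over more pages.
        int charsPerPage = Math.Max(200, (int)(width * height / 50));
        int pages = Math.Max(1, (int)Math.Ceiling(_text.Length / (double)charsPerPage));
        Console.WriteLine($"Rendering {pages} page(s) of text in a {width}x{height} area.");
    }
}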

The mosaic generation process is based on Garrett's proposals

(Garrett, 2002) for developing Web sites and hypermedia applica-

tions by identifying the elements of user experience. Garrett's pro-

    posals focus on design-time development, while ViMos generates

    user interfaces at run-time. However, the process has similar steps

    in both cases: analysis of the user situation and the visualization

    objectives, content requirements, interaction design, information

    design and, finally, visual design. The COIVA architecture provides

    the information needed to generate the user interface dynamically.

    The principal characteristics of the ViMos framework can be

    summarized as follows:

ViMos is a framework that can analyze contextual information about users and their environments in order to improve the


    quantity as well as the quality of the offered information, at the

right time and using the most suitable device.

The information and its visual form are auto-described by an

    ontological model that represents relevant attributes based on

    knowledge representation and information visualization. The

    formal model enables interoperability between the heteroge-

    neous services and the combination of diverse application

    domains.

    ViMos includes mechanisms to dynamically adapt and person-

    alize the interface views whenever the users need them.

    Toward this end, high-level controls libraries have been devel-

    oped, letting the user interface be proactive.

    Integration of well-known design patterns in order to improve

    the final views offered to the user. Pattern selection is driven

    dynamically by the analysis of the contextual information.

Abstract interaction layers to support the diversity of techniques,

methods and paradigms applied in Ambient Intelligence.

ViMos includes mechanisms to consider important social fac-

    tors in intelligent environments, switching the traditional indi-

    vidualist interaction to a group communication that is assisted

    by visualization devices in the environment.

    The organization of the visualization mosaics has been designed

    following these principles, based on the proposals of Norman

    (1993), Tversky et al. (2002) and Thomas and Cook (2005):

    Appropriateness principle: The visual representation should

    provide neither more nor less information than what is needed

    for the task at hand. Additional information may be distracting

    and makes the task more difficult. The contextual situation of

the users determines the contents to show in a mosaic view. Every content has an ontological definition about the informa-

    tion that it includes based on the PIVOn model. A matching

between this definition and the current user's context model

    offers a quantitative measure of relevance about each item of

    content.

    Naturalness principle: Experiential cognition is most effective

    when the properties of the visual representation most closely

    match the information being represented. This principle sup-

    ports the idea that new visual metaphors are only useful for

representing information when they match the user's cognitive

    model of the information. Purely artificial visual metaphors can

actually hinder understanding. ViMos's view generation is pat-

tern-driven. ViMos relates several design patterns to each view

role, an important ontological concept obtained through several situation attributes.

    Matching principle: Representations of information are most

    effective when they match the task to be performed by the user.

    Effective visual representations should correspond to tasks in

    ways that suggest the appropriate action. The selected contents

    in a ViMos view and their visual design depend on the user-task

    ontological concept. Combining the previously described mech-

    anisms to achieve the appropriateness and naturalness princi-

    ples, ViMos matches the task performed by the user to the

    displayed view.

    Congruence principle: The structure and content of the external

    representation should correspond to the desired structure and

    content of the internal representation. The pieces of information

    that include each kind of content are organized in a taxonomic

    structure that preserves their independence and models the

    semantic relationships among items of content. This organiza-

    tion is a metaphor for the cognitive representation of the infor-

mation, easing information assimilation and consciousness.

Apprehension principle: The structure and content of the exter-

    nal representation should be readily and accurately perceived

    and comprehended. The proactive behavior enabled by the con-

    text-aware architecture supports the suitable changes in con-

    tents and its organization based on the user and surrounding

    events.

    The ViMos architecture comprises several functional modules,

    implemented in Microsoft.NET. The business logic has been devel-

    oped using the C# language and the user interface layer with Win-

dows Presentation Foundation.1

1 Windows Presentation Foundation. http://msdn.microsoft.com/es-es/library/ms754130.aspx.

    The generation of user interface views can be described through

    a stepwise process (Fig. 3) using the ViMos modules and the COIVA

functionalities.

Fig. 3. Generation process of ontology-powered visualization mosaics.

Acquisition of the context: At the start of the service, ViMos

    obtains the sub-model required for the concrete visualization

    service from the COIVA architecture. The sub-model includes

    OWL classes and properties that contain valid individuals; the

    sub-model is refreshed at run-time by deleting elements with

    individuals that have disappeared and by including elements

    that have transformed into new individuals after data acquisi-

    tion. In this way, the run-time management of the ontologies

    is optimized. After extracting the sub-model, the context broker

    maintains it and updates it based on visualization requirements

and situational changes that occur when an individual changes

its value. Moreover, the context broker keeps temporal



    references about the requests made by the visualization service.

    In this way, whenever a service makes a request about context

    information, the context broker offers an incremental response;

    that is, it provides newly acquired, modified or inferred individ-

    uals based on the request.

    Selection of candidate data: The significant items of content to

    be offered to the user are selected based on the criteria defined

for a specific visualization service. The selection mechanism consists of obtaining a quantitative measure of significance of

    each item of content based on the context-model instances

    and retrieving those that exceed a certain threshold. This

    threshold is determined according to the display characteristics

    that are described in the devices ontology.

    Selection of the design pattern: Several factors affect pattern

    selection, for example, the role of the visualization service, the

    social and group characteristics of the audience and the quan-

    tity of the candidate data.

    Selection of information pieces: The ViMos broker selects the

    container widgets (information pieces) that are appropriate

    for visualizing the candidate data analyzing the characteristics

    of the data described in the visualization ontology.

    Mosaic design: All information pieces include adaptability

    mechanisms in order to adjust themselves to the selected pat-

    tern proactively. The adaptability mechanisms consist of zoom

    policies, latency, pagination and scrolling.

    Incorporation of awareness elements: ViMos recognizes

    abstract interactions, that is, general events that cause changes

    in an information piece or in the general view (e.g., next ele-

    ment, previous element, view element, and discard element).

    The device model includes interaction techniques available in

    a specific display. This information enables the inclusion of ele-

    ments that help users interact with the visualization service.
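Taken together, the steps above form a pipeline. The sketch below strings them together in C#; all types and selection criteria are deliberately simplified placeholders, since the real modules operate on the OWL sub-model and SWRL rules rather than on in-memory lists.

using System.Collections.Generic;
using System.Linq;

// Hypothetical, heavily simplified mirror of the six-step generation process.
public record ContentItem(string Name, double Significance);
public record Piece(ContentItem Content, string Widget);
public record Mosaic(string Pattern, List<Piece> Pieces, List<string> AwarenessHints);

public static class MosaicGenerator
{
    public static Mosaic Generate(
        IEnumerable<ContentItem> subModelContents,      // 1. context sub-model already acquired from COIVA
        double displayThreshold,
        bool someoneInteracting,
        IEnumerable<string> interactionTechniques)
    {
        // 2. Selection of candidate data: items whose significance exceeds the display threshold.
        var candidates = subModelContents.Where(c => c.Significance >= displayThreshold).ToList();

        // 3. Selection of the design pattern (placeholder criteria).
        string pattern = someoneInteracting ? "document viewer" : "news panel";

        // 4. Selection of information pieces: map each candidate to a container widget.
        var pieces = candidates.Select(c => new Piece(c, "richTextViewer")).ToList();

        // 5. Mosaic design: each piece would apply zoom, pagination or scrolling policies here.

        // 6. Incorporation of awareness elements, one per available interaction technique.
        var hints = interactionTechniques.Select(t => "interaction hint for " + t).ToList();

        return new Mosaic(pattern, pieces, hints);
    }
}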

    5. Information visualization services for collaborative groups

    5.1. The scenario

    This scenario involves groups of users that share interests and

    agendas, working collaboratively and having a dynamic informa-

    tion flow. The prototype supports the daily activities of research

    groups by means of information visualization, using the public dis-

    plays in the environment. This specific prototype can be applied to

    similar scenarios that involve people working together, for exam-

    ple, in an office.

    The prototype environment is equipped with several public dis-

    plays, including plasma and LCD TVs with a screen size between 32

    and 50 in. and touch screens of 21 in. The interaction with TVs is

mediated through NFC mobile phones; displays bear several NFC tags with associated actions that depend on the displayed visuali-

    zation service at run-time. Thus, tag functionality changes dynam-

    ically. Whenever users touch a tag, their mobile device sends the

    associated information via Bluetooth (if available) or GPRS connec-

    tion to the context server. The visualization service uses two ded-

icated servers, namely, the COIVA server to manage the context

    model and the ViMos server to generate user interfaces. The user

    interfaces are sent using WiFi-VGA extenders that enable the wire-

less transmission of VGA signals to the public displays.

    The main objective of the visualization service is to provide

    quick and easy access for users to share information and to coordi-

    nate collaborative activities. Specifically, the prototype imple-

    ments six services: user location and user state, work-in-progress

    coordination, document recommendation, events and deadlines,

    meeting support, and group agenda management.

    We previously commented that neither COIVA nor ViMos di-

    rectly treat the acquisition of raw data from sensors and content

    collections. For this reason, we have developed several mecha-

    nisms to gather data and transform them into contextual individ-

uals for this prototype. First, we capture the users' location and

    actions through NFC interaction with tagged objects in the envi-

    ronment. Second, contents are annotated with meta-data (e.g.,

    author, title, keywords, and document type) at the moment of

    inclusion in the repositories. Finally, we implemented two soft-

    ware components: a collaborative agenda to facilitate user activi-

    ties while we acquire schedule information and a document

supervisor that, with the user's approval, gathers information about which documents a user

views or modifies. Table 1 shows

some examples of sources of raw data, gathered data and mappings to valid entities in the context model.

Table 1

Examples of data acquired from sensors.

Type of sensor | Gathered data | Meaning | Generated/updated individuals

Touch screen (hardware sensor) | c: interactive content; a: application | Someone is interacting with application a; someone is performing the task associated with content c in a | Pivon:interacting, Pivon:InteractionMethod, User:Task

NFC (hardware/software sensor) | t: tag ID; b: Bluetooth address; a: application | The user related to b is interacting with application a; the user related to b is performing the task associated with sensor t | Pivon:interacting, Pivon:InteractionMethod, User:Task, User:locatedIn, User:userAvailability

Document monitor (software sensor) | d: document; a: application; dv: device | The user related to dv is interacting with application a; the user related to dv is editing or reviewing document d | Pivon:interacting, Pivon:InteractionMethod, User:Task, User:locatedIn, Foaf:document

Document repository (repository) | d: document; u: user ID | The user logged in as u is interacting with application a; the user logged in as u is interested in document d | Pivon:interacting, Pivon:InteractionMethod, User:Task, User:locatedIn, Foaf:interest
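As a concrete illustration of the NFC row of Table 1, the following sketch turns one touch event into the kind of individuals listed there; the event fields, the triple-like output and the "busy" availability value are assumptions about how an acquisition component could feed COIVA, not its actual API.

using System.Collections.Generic;

// Hypothetical acquisition step: one NFC touch -> context-model updates.
public record NfcTouch(string TagId, string BluetoothAddress, string Application);

public static class NfcAcquisition
{
    // Returns simple (subject, property, value) statements to be asserted
    // as ontological individuals/properties in the context model.
    public static IEnumerable<(string Subject, string Property, string Value)> ToIndividuals(
        NfcTouch touch,
        string userOfPhone,      // user resolved from the Bluetooth address
        string displayLocation,  // location of the display carrying the tag
        string associatedTask)   // task associated with the touched tag
    {
        yield return (userOfPhone, "Pivon:interacting", touch.Application);
        yield return (userOfPhone, "Pivon:InteractionMethod", "NFC");
        yield return (userOfPhone, "User:Task", associatedTask);
        yield return (userOfPhone, "User:locatedIn", displayLocation);
        yield return (userOfPhone, "User:userAvailability", "busy");  // assumed value
    }
}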

    Fig. 4 shows a user interacting with the visualization service

    and a personalized mosaic. When the users begin interacting with



    the display (using NFC technology, in this case), the environment

    recognizes them, analyzes their situation and infers their informa-

    tion needs and current tasks. Additionally, COIVA uses the user

    interactions to update instances of the context model: for example,

    assigning the display location to the user location property. All

    these behaviors and functionalities are defined using SWRL. In

    the next subsections we detail the ViMos functionalities and mech-

    anisms launched to generate these visualization services.

5.2. Proactive information retrieval

The information that is retrieved in order to be displayed to users is selected on the basis of three principal criteria: the ex-

    pected functionalities of the specific visualization service, the con-

    textual situation of users closer to the visualization device, and the

    behavioral rules defined for the specific visualization service. The

first criterion takes precedence over the others. The second and third

    criteria generate a collection of contents and a quantitative mea-

    sure of relevance for each. Additionally, we promote or penalize

    the selected contents based on user interactions. The final formula

    used to set the measure of relevance has been obtained from

    experimentation. We cannot guarantee that it can be applied to

    other similar systems; nevertheless, this formula provides encour-

    aging results about the relevance of contents as discussed in Sec-

    tion 6.

Focusing on the example in Fig. 4, we can describe in detail the mechanisms to select the displayed contents and the formula

    (shown in next subsections).

    5.2.1. Explicit requirements of the specific visualization service

    Requirements can be associated with a particular display in order

    to affect the different visualization services offered to users; these

requirements are independent of the context situation and have pri-

    ority over other factors. These requirements define the default con-

    figuration and general functionalities for inclusion in the service

    displays. The definition of these conditions primarily depends on

    the principal functionalities associated with a particular display. In

    the example, the requirements define the main role of the service,

    such as reviewing personal documents individually as well as col-

laboratively. In addition, there is certain mandatory content, namely, the location of all known users in the environment.

    5.2.2. Existence of certain individuals in the ontological context model

Fig. 5 shows the relevant context sub-model in this exam-

    ple and the OWL individuals that influence the content selection.

    The context captures the location of the users GoyoCasero (line

    5) and AlbertMunoz (line 13), in the same place as the display cur-

rently showing the visualization service (lines 18-20). All items of

    content related to these users have been pre-selected and are fil-

    tered based on their context. Concretely, GoyoCasero is the user

    who is interacting with the display (line 9); thus, their items of

    content are preferred. Additionally, the context definition de-

scribes that GoyoCasero supervises AlbertMunoz's work (line 8)

and consequently, AlbertMunoz's work-in-progress is selected. Finally, the context framework keeps information about the last

    content looked up by the user (line 11), independently from their

    location. This information determines the most important content

that is shown in the main area of the mosaic.

Fig. 5. Individuals and sub-model involved in the information retrieval of the scenario.

    We obtain a quantitative measure of significance based on the

    existence of certain individuals in the context model. This measure

is inversely proportional to the distance between the content ele-

ment and the user class, that is, the number of relationships between

these two ontological classes. For example, GoyoCasero (i.e., an individ-

ual in the user class) is the author (i.e., a relationship) of CFPuc-

ami2010 (i.e., an individual of a content class). The distance between

both classes is one. We can see another example related to Fig. 5:

GoyoCasero is located in room 2; this room also includes Alber-

tMunoz, who is the author of MasterMemoryV2.1. Thus, the distance between the user GoyoCasero and this document is three.
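This class-to-class distance can be obtained with a breadth-first search over the relationships of the context model, as in the sketch below; representing the model as an adjacency dictionary is an assumption made for illustration, since the real model is an OWL graph accessed through COIVA.

using System.Collections.Generic;

// Hypothetical distance computation: shortest number of relationships
// between two individuals (e.g., a user and an item of content).
public static class OntologicalDistance
{
    public static int Between(
        string from, string to,
        IReadOnlyDictionary<string, List<string>> relations)  // individual -> directly related individuals
    {
        var visited = new HashSet<string> { from };
        var queue = new Queue<(string Node, int Depth)>();
        queue.Enqueue((from, 0));

        while (queue.Count > 0)
        {
            var (node, depth) = queue.Dequeue();
            if (node == to) return depth;
            if (!relations.TryGetValue(node, out var neighbours)) continue;
            foreach (var n in neighbours)
                if (visited.Add(n))
                    queue.Enqueue((n, depth + 1));
        }
        return int.MaxValue;  // not connected
    }
}

With GoyoCasero related to CFPucami2010 through the author relationship the search returns 1, and with the chain GoyoCasero, room 2, AlbertMunoz, MasterMemoryV2.1 it returns 3, matching the distances in the example above.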

    This criterion for selecting contents quantifies the relevance of

    the content for a user with the value RC2 in [0, 1]. The distance be-

    tween the content and the user (Dc) inversely affects the measure

of relevance; the b factor determines to what extent the distance Dc af-

fects RC2 and takes a value from 0.1 to 1. In our case, the exper-

    imental tests imply that b = 0.5. Fi promotes or penalizes relevance

    based on previous user interactions. Explicit rejection of a content

    results in Fi = 0.25, and explicit interaction results in Fi = 1.50. This

formula takes into account the relevance of the content in any pre-

vious visualization service launched to a specific user, which is RC2,T-1; this factor models the human tendency to continue a task

even though his/her context situation may have changed. The

weight of RC2,T-1 is determined by a, which takes a value from 1 to N. Our prototype sets a = 3.

    Fig. 4. A user interacting with the visualization service via NFC. Ontological individuals and inferred information.

RC2 = (a · Fi · (1/Dc)^b + RC2,T-1) / (a + 1)

RC2: Relevance based on criterion 2; RC2,T-1: Last relevance for this

user and content; Dc: Distance between user and content classes;

Fi: Interaction factor; a, b: Adjustment factors.
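Assuming the reconstruction of the formula given above, the second criterion can be computed as in this sketch, using the prototype's reported settings as defaults (a = 3, b = 0.5, Fi in {0.25, 1, 1.5}).

public static class RelevanceCriterion2
{
    // RC2 = (a * Fi * (1/Dc)^b + RC2 at T-1) / (a + 1), as reconstructed above.
    public static double Compute(
        double interactionFactor,  // Fi: 0.25 after rejection, 1.5 after explicit interaction, 1 otherwise
        int distance,              // Dc: relationships between the user and content classes (>= 1)
        double previousRelevance,  // RC2 at T-1 for this user and content
        double a = 3.0,
        double b = 0.5)
    {
        double distanceTerm = System.Math.Pow(1.0 / distance, b);
        return (a * interactionFactor * distanceTerm + previousRelevance) / (a + 1.0);
    }
}

For instance, a content item at distance 3 with no previous relevance and no explicit interaction (Fi = 1) would score 3 · 1 · (1/3)^0.5 / 4 ≈ 0.43 under these assumptions.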

    5.2.3. Behavior rules

    Content personalization cannot only be based on the existence

    of certain individuals in the context model. It is necessary to in-

    clude more complex mechanisms to select candidate contents to

    be offered. Concretely, mechanisms based on SWRL rules, powered

    by built-in constructors and XQuery operations that enable selec-

    tion by the particular value of an ontological instance and applying

    math, Boolean, string or date operations. The listing shows three

behavioral rules. The first one is used to offer contents whose dead-

line is close to the current date. The view in Fig. 4 includes

    information about an upcoming event; concretely, the content is

    an image that is augmented with contextual data, for example,

    the deadline date and the representative name of the event. The

second and the third rules help to quantify content relevance and select the final items of content to be shown. The rule in lines 8

    and 9 modifies content relevance based on the user interacting

    with the display. The last rule promotes the items of content whose

supervisor and author are located in the same place (lines 12-15)

    (see List 1).

    The rules generate a relevance level in the interval [0, 10]. This

    criterion for selecting contents based on rules is quantified by the

    value RC3 and takes a value in [0, 1]. The formula is very similar to

    the second criterion, but in this case, the distance between classes

    is not relevant.

RC3 = (a · Fi + RC3,T-1) / (a + 1)

RC3: Relevance based on criterion 3; RC3,T-1: Last relevance for this

user and content; Fi: Interaction factor; a: Adjustment factor.
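The matching sketch for the rule-based criterion, again assuming the reconstruction above:

public static class RelevanceCriterion3
{
    // RC3 = (a * Fi + RC3 at T-1) / (a + 1), as reconstructed above; no distance term.
    public static double Compute(double interactionFactor, double previousRelevance, double a = 3.0)
        => (a * interactionFactor + previousRelevance) / (a + 1.0);
}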

    5.3. Automatic generation of user interface views

    After ViMos selects the items of content to show, it launches the

    automatic design process. First of all, the expected functionalities

    of the service and the context determine the design pattern. The

    example mainly uses two patterns: the news panel pattern and

    the document viewer pattern. The news panel pattern is a well-

    known design that is used in many web portals. The user interface

is divided into columns (typically three of them) and rows. The sizes

of the areas are similar because there are no criteria to elevate the

significance of one content item over the others. This is the design pattern

    applied in Fig. 4 (left). This pattern is chosen when no one is explic-

itly interacting with the display and there are no planned events at this moment; thus, it could be considered the default pattern. In

    other prototypes, for example, visualization services for academic

conferences or for a classroom, this pattern would be applied during

    breaks to offer general information. The document viewer pattern is

    applied when a user is interacting with the mosaic, and the context

    model can determine a principal related document. That is the case

    in this scenario. The criteria for selecting the pattern in this proto-

    type are simple and depend on very few contextual elements; how-

ever, it is possible to define more complex criteria using SWRL rules.

    Once the design pattern is set, ViMos must create the visual form of

    the selected content items.

The description of the content is retrieved from the PIVOn mod-

    el in order to determine the nature of the information. The ViMos

library includes abstract widgets that implement adaptability in

    order to match the visual form of the content with the design pat-

    tern. The example includes four kinds of content:

    Location component: The data are a set of the locationIn individ-

uals whose range is the User concept and have associated Image

    items. The most suitable information piece to generate the

visual form is the piece called multiImageViewer. This piece shows a set of images vertically or horizontally. It adapts the

    images to the available area and includes a footnote (the value

    of the user individual).

    Reminder component: In this case, the selected datum is an

individual of the Event class associated with an Image, a textual

    description, a deadline and several involved users. The selected

    piece is called richTextViewer. It implements zoom and flow text

    techniques to adapt content, typically, arbitrary textual data

    and associated images.

    Work-In-Progress component: The nature of the data is similar

    to the reminder component, but it includes several blocks of

data. This abstract widget is known as multiRichTextViewer and

    it includes an adaptive list of richTextViewer pieces.

MainDocument component: This is a simple element that includes a document as original data. The selected piece is a

    general document viewer that adapts the visualization to the

    assigned area.

    Fig. 6 shows the particular data transformation from the onto-

    logical individuals to the final visual form.
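That mapping step can be pictured as a simple dispatch from the kind of content to the piece named above; the enum and the "documentViewer" identifier for the general document viewer are assumptions.

// Hypothetical mapping from the kind of a selected content item to the
// ViMos piece used to render it, following the four cases above.
public enum ContentKind { LocationSet, Reminder, WorkInProgress, MainDocument }

public static class PieceSelector
{
    public static string For(ContentKind kind) => kind switch
    {
        ContentKind.LocationSet    => "multiImageViewer",
        ContentKind.Reminder       => "richTextViewer",
        ContentKind.WorkInProgress => "multiRichTextViewer",
        ContentKind.MainDocument   => "documentViewer",  // placeholder name for the general document viewer
        _ => "richTextViewer"
    };
}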

    The last step is to incorporate the awareness elements depend-

    ing on the technology able to interact with the displays. This pro-

    totype is configured to work with touch screens as well as with

    NFC interaction; the characteristics of these interaction techniques

    are conceptualized in the device model as part of the ontological

    specialization process performed in this prototype. For example,

    the device model includes information about the kind and avail-

able number of NFC tags for each display, the kind of interaction that the touch screen accepts (such as single-touch, double-touch


    and multi-touch) and so on. Based on this information, ViMos in-

    cludes awareness elements to facilitate user interaction, for exam-

    ple, by framing interactive contents in the case of touch screens or

    using labels to describe actions associated with each NFC tag.
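This last step can be pictured as a dispatch on the interaction techniques declared for the display in the device model; the sketch below is an assumption about how the two behaviors just described (framing for touch screens, labels for NFC tags) could be selected.

using System.Collections.Generic;

// Hypothetical selection of awareness elements from the device model.
public static class AwarenessDecorator
{
    public static IEnumerable<string> For(string interactionTechnique, int nfcTagCount = 0)
    {
        switch (interactionTechnique)
        {
            case "touch":
                yield return "frame interactive contents";
                break;
            case "NFC":
                for (int i = 1; i <= nfcTagCount; i++)
                    yield return $"label describing the action of tag {i}";
                break;
        }
    }
}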

    5.4. Interoperability

    5.4.1. Interoperability through instances of the shared model

    In order to illustrate the services that share contextual informa-

    tion to improve the output, we present the following specific

    scenario: John is a professor at the university and takes part in a research

    group. He is at a conference to present a paper, and he has a meeting

    with his colleague Albert who is also present at this event.

    The personal agenda is part of the user model represented in

COIVA. It is usual for users' agendas to complement each other.

    The definition of the agenda elements and their visual representa-

    tion can be very diffuse but the semantic model provides enough

    shared knowledge to understand it. In our scenario, John has an

agenda that is associated with upcoming activities. Furthermore, the congress organizers define the conference program, which is

    another kind of agenda. The integration of both these agendas

    serves not only to offer the user a complete picture of the activities

    to be carried out with less effort, but to provide additional informa-

    tion to the system; this information would allow new inferences

    and a set of instances providing a wider context. List 2 shows

    individuals that have been generated by combining both agendas

and including meta-context information.

    List 1. Rules that modify content relevance according to the user context.

    Fig. 6. ViMos visualization pipeline process from the ontological elements to the final view.

    5.4.2. Interoperability between heterogeneous visualization services

    An important contribution of the COIVA and ViMos infrastruc-

    ture involves the semantics that describe the data sets and the

    mechanisms for sharing this information. In fact, semantic-based

    technologies have been surveyed and used as a medium for inte-

    gration and interoperability (Noy, 2004). So far, we have seen that

    several visualization services implemented within the ViMos

    framework can integrate information from other models and

    pieces of information from other services into their views.

    This section emphasizes the ability of COIVA to work with other

    information visualization services different from the ViMos frame-

work, such as Web pages. It is not new that an ontological model can serve to enrich contents and enable more intelligent Web behavior; this is actually the basis of the Semantic Web. What

    should be emphasized is that the context models for intelligent

    environments, especially when formalized with languages of the

    Semantic Web, can increase the significance of the information

    published. To illustrate this, the next scenario is presented:

Robert is a colleague from another university. He gives oral lectures, but most of the students cannot attend them in person. This is why he offers the students the opportunity to view his lectures through his Web page, either in real time or at any time afterwards. The students connect to the Web and can follow the slides or any other materials presented in class. In addition, he makes his mobile phone available for telephone consultations, but only when he is in his office and not busy.

Once again, we can integrate two types of information generated and managed with COIVA: contextual information and the data sets described by the visualization service model. Fig. 7 (left) shows an example in which Robert's research group Web site obtains information from COIVA to determine the current status of each member. Additionally, Fig. 7 (right) shows a Web site displaying the slide currently being presented in class.
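By way of illustration, an external Web page could obtain such status information with a plain SPARQL query over an export of the COIVA context model. The Turtle snippet and the coiva: vocabulary below are hypothetical; a minimal, self-contained sketch:

    # Sketch: an external Web page asking COIVA for members' current status.
    # The coiva: vocabulary and the sample data are illustrative only.
    from rdflib import Graph, Namespace

    COIVA = Namespace("http://example.org/coiva#")

    sample_export = """
    @prefix coiva: <http://example.org/coiva#> .
    coiva:Robert a coiva:Person ;
        coiva:memberOf coiva:ResearchGroup1 ;
        coiva:hasCurrentStatus "in office, not busy" .
    """

    g = Graph()
    g.parse(data=sample_export, format="turtle")

    status_query = """
    SELECT ?member ?status WHERE {
        ?member a coiva:Person ;
                coiva:memberOf coiva:ResearchGroup1 ;
                coiva:hasCurrentStatus ?status .
    }
    """
    for member, status in g.query(status_query, initNs={"coiva": COIVA}):
        print(member, "is currently", status)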

    6. Evaluation

    We evaluated the prototypes through interviews and user stud-

    ies. Twenty-one users (11 men, 10 women) participated in the

    experiment during a period of two weeks. The experiments were

    incorporated into their daily activities to simulate actual situa-

tions. The time each user spent testing our prototypes was, on average, 35 min per day. The population included seven engi-

    neering undergraduates, four Ph.D. candidates, two professors,

    and eight users that are not linked with the university, between

    the ages of 20 and 61. The users associated with our university

    were familiar with the technology and the tasks, while the other

users were not familiar with this kind of system, and 50% of them had

    no familiarity with the task to perform. The objective was to vali-

date the system from three perspectives: (a) the relevance of the retrieved information, assessed with standard retrieval metrics, (b) agreement with the auto-generated user interfaces, and (c) the usefulness of the visualization services in daily activities:

    Adaptability of content according to context information: all

prototypes implement autonomous mechanisms to adapt views to the context situation. The prototype considers the situation, the user's profile and their specific needs, among other consider-

    ations. Users have tested the prototypes and have expressed

    their agreement or disagreement with the displayed contents,

    as shown in Table 2. We have applied basic statistical classifica-

    tion to evaluate the relevance of the offered contents; con-

    cretely, precision and recall measures (van Rijsbergen, 2004).

    In our system, Precision represents the number of items of rele-

    vant content retrieved in a mosaic view divided by the total

    number of items retrieved for a particular situation. Recall is

    the number of relevant documents retrieved in a mosaic view

    divided by the total number of existing relevant content items

in the system. Additionally, we include two well-known mea-

    sures: the Fall-out, that is, the proportion of offered irrelevant

    items out of all irrelevant items available in the system, and

    the F-score as a measure that combines precision and recall.

Precision = |{Rc} ∩ {Oc}| / |{Oc}|
Recall = |{Rct} ∩ {Oc}| / |{Rct}|
Fall-out = |{nRc} ∩ {Oc}| / |{nRct}|
F-score = (2 × Precision × Recall) / (Precision + Recall)

Rc: relevant contents; Oc: offered contents; Rct: total relevant contents; nRc: irrelevant contents; nRct: total irrelevant contents.

Table 2 shows the results per user and in aggregate. The total number of content items in the repositories was 189. Aggregating the individual users' tests, out of a total of 188 items returned to the different users, 163 were adequate for them. These results give a precision of 86.7%, an average recall of 86.2%, a fall-out of only

    List 2. Individuals obtained from agenda matching.


0.7% and an F-score of 86.5%. Based on these values, we can determine that the overall relevance of the offered contents was, on average, around 86%.
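For reproducibility, the measures in Table 2 can be computed directly from the per-user counts; a minimal sketch follows, assuming the total repository size of 189 items reported above (the function name is ours, not part of ViMos):

    # Sketch: computing the Table 2 measures from per-user counts.
    # oc = offered contents, rc = relevant offered contents, rct = total relevant
    # contents for that user, total = total content items in the repositories.
    def retrieval_measures(oc: int, rc: int, rct: int, total: int = 189):
        precision = rc / oc
        recall = rc / rct
        fallout = (oc - rc) / (total - rct)   # offered irrelevant / all irrelevant
        fscore = 2 * precision * recall / (precision + recall)
        return precision, recall, fallout, fscore

    # User 1 in Table 2: Oc = 12, Rc = 9, Rct = 10
    print(retrieval_measures(12, 9, 10))
    # -> approximately (0.750, 0.900, 0.017, 0.818), matching the table row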

Performing tasks using the prototypes versus using traditional methods:

    We have measured the time and interaction effort required for

    queries of shared documents through the visualization service.

    We evaluated two dimensions of this problem. The first dimen-

    sion is about the quantitative effort for tasks. The experiments

    consist of accessing a personal and random document using

    the traditional procedure (the document may be stored in the

    local host or in the network) and using the visualization service.

    In this test, we do not analyze context-based adaptation; rather,

    we measured the effort needed to interact with a public display

    using NFC and using a personal computer. Fig. 8 shows the

results for the ten users. Moreover, we measured the time both at the moment of accessing the particular document and during navigation down to the final page. Concretely, the time to search for and access documents was 40-50% faster

    via this service due to automatic information retrieval based

    on context. However, reviewing the document content was

slower and more difficult, mainly due to the interaction

    technique used: NFC mobile phones. The user has to touch a

    tag with the cell phone to advance a page in the document,

    and the NFC device has a lag of 1.4 s due to Bluetooth commu-

    nication. The second dimension focuses on user experience

    related to productivity. These items are defined in the MoBiS-

    Q questionnaire (Vuolle et al., 2008). The users evaluated the

use of the visualization service in daily collaborative tasks over 7 days, requiring access to personal and shared documents. The users gave high ratings to the control and gathering of information, to coordination and ubiquitous work, and to satisfaction with the system. They gave lower ratings to the ease of task performance and to the reduction of time for complex collaborative tasks. We divided the population based on technological familiarity and on whether they performed the evaluated tasks daily. Focusing on the groups with and without technological knowledge, we analyzed the variance based on this independent factor through a one-way ANOVA model with α = 0.05. We observed a statistically significant difference between the groups divided according to technological familiarity with respect to most of the questions; higher P-values were obtained with regard to reductions in time, the ease of performing tasks and general satisfaction. However, the comparisons between groups divided by task familiarity do not yield significant results at α = 0.05 or α = 0.01.
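Such a comparison can be reproduced with a standard one-way ANOVA; the sketch below uses scipy with purely illustrative rating vectors (the study's actual questionnaire data are not reproduced here):

    # Sketch: one-way ANOVA comparing two familiarity groups on one question.
    # The rating values below are illustrative, not the study's actual data.
    from scipy import stats

    familiar = [4, 5, 4, 4, 5, 4, 3, 4, 5, 4, 4]     # users familiar with the technology
    not_familiar = [5, 4, 5, 5, 4, 5, 5, 4, 5, 5]    # users without technological familiarity

    f_stat, p_value = stats.f_oneway(familiar, not_familiar)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
    # The groups differ significantly at alpha = 0.05 when p_value < 0.05.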

Agreement with inference and usability issues: it is known that a user interface able to adapt itself to the current situation of

    the user to better suit user needs and desires improves usability

    (Dey, 2001). However, we consider it necessary to test the gen-

    eral aspects of usability in our prototypes. Thus, we have

adapted some of Shackel's (2009) proposals as well as the MoBiS-Q questionnaire in order to design an opinion poll with

    16 questions on the visualization characteristics of the mosaics,

    allowing us to evaluate user experiences with the information

    visualization prototypes. Fig. 9 shows the evaluation and

    results. The average agreement was 80.77% and the rating aver-

    age was 4.09 out of 5. Again, we analyzed groups of users

    divided by technological and task familiarity and applied a

    one-way ANOVA. We have rejected the hypothesis that the

    groups were equivalent regarding technological familiarity

and task experience; we obtained significantly high P-values for questions related to scalability techniques, the minimization of user memory needs and the ease of navigation, both in the groups divided according to technological experience and in the groups divided according to task familiarity. In general, the evaluation of usability is influenced by these two factors, as

Fig. 7. Busy status inferred through COIVA engines and published on a personal Web site (left), and the current class slide shown on the subject's Web site (right).

Table 2
Results of the adaptability of content according to context information.

          Oc    Rc    Rct   Prec    Recall   Fall-out   F-score
User 1    12     9    10    0.750   0.900    0.017      0.818
User 2    11    11    11    1.000   1.000    0.000      1.000
User 3    12     9    11    0.750   0.818    0.017      0.783
User 4     8     7     9    0.875   0.778    0.006      0.824
User 5     9     9     9    1.000   1.000    0.000      1.000
User 6    14    11    12    0.786   0.917    0.017      0.846
User 7     9     8     9    0.889   0.889    0.006      0.889
User 8    10     9    11    0.900   0.818    0.006      0.857
User 9    15    12    14    0.857   0.857    0.011      0.857
User 10   11    11    11    1.000   1.000    0.000      1.000
User 11    7     7     8    1.000   0.875    0.000      0.933
User 12    9     8     8    0.889   1.000    0.006      0.941
User 13    6     3     9    0.500   0.333    0.017      0.400
User 14    6     6     7    1.000   0.857    0.000      0.923
User 15   10     9     9    0.900   1.000    0.006      0.947
User 16    6     5     6    0.833   0.833    0.005      0.833
User 17    9     8     8    0.889   1.000    0.006      0.941
User 18    5     5     7    1.000   0.714    0.000      0.833
User 19    6     4     5    0.667   0.800    0.011      0.727
User 20    9     7     9    0.778   0.778    0.011      0.778
User 21    5     5     6    1.000   0.833    0.000      0.909
Total    188   163   189    0.867   0.862    0.007      0.865


we observed that the group of users with low levels of technological experience and task familiarity provided the highest ratings.

    7. Related work and contributions

    There are three general bodies of research relevant to our work:

    the design of context-sensitive user interfaces, the automatic gen-

    eration of context-aware user interfaces and the development of

    run-time adaptive user-interfaces based on context. We provide

    an overview of the most relevant findings in these areas and sum-

    marize the differences between them and our work, including the

primary contributions of our approach.

    Our system determines which contents are appropriate for a

user and automatically generates a pattern-driven user interface. Both the selection of contents and the generation of the user interface use ontological contextual information to enhance these processes.

    We are not aware of any readily available system that generates

    and adapts complex user interfaces by means of contextual infor-

    mation at run-time. However, there have been a number of prior

    systems and proposals that partially use contextual information

    for user interface creation, most of which use this information dur-

    ing the design-time process. Jung and Sato (2005) introduce a con-

    ceptual model for designing context-sensitive visualization

    systems through the integration of mental models in the develop-

    ment process. Clerckx and Coninx (2005) focus their research on

integrating context information into the user interface in early design

    stages. They take into account the distinctions between user inter-

face, functional application issues and context data. We agree with the need for such distinctions; however, their proposal is weak in that it has difficulty predicting all possible contextual changes during the design-time process. This problem also influences the usefulness of the proposal of Luyten et al. (2006), which is a model-based meth-

    od for developing user interfaces for Ambient Intelligence. This

    method leads to a definition of a situated task in an environment

    and provides a simulation system to visualize context influences

    on deployed user interfaces. There also have been several attempts

    to establish general languages for describing context-aware inter-

    faces such as UsiXML (Limbourg et al., 2005), which is based on

    transforming abstract descriptions that incorporate context into

    user interfaces, and CATWALK (Lohmann et al., 2006), which was

    designed to support the definition of various graphical user inter-

    face patterns using XSLT templates and CSS style sheets. These ap-

    proaches are complex and difficult to use as they require

specialized tools for user interface designers; in some cases, modeling user interfaces and associated context-aware behaviors is

    more difficult than coding them. Moreover, systems that make

    use of style sheet transformations, such as XSLT and CSS, are not

    rich enough to support a wide range of media and content charac-

    terization (Sebe and Tian, 2007). In summary, prior research on

    context-sensitive user interfaces was principally motivated by

    the desire to improve existing development processes. This ap-

proach, at least in today's design context, makes the designer's

    work difficult by requiring a large amount of upfront effort. More-

    over, as we noted previously, a context-aware user interface must

    address changes in user behavior dynamically because it is difficult

    to predict adaptive requisites at design-time.

    With respect to related works on the automatic generation of

context-aware user interfaces, there are relevant studies that should be noted. The Amigo project (Ressel et al., 2006) includes

    Fig. 8. Summarized results about using ViMos for tasks.

    Fig. 9. Summarized results about the user experience with ViMos.


    methods to personalize the logic of the menu structure in intelli-

    gent home environments by means of an ontological description

    of the available services. The functionalities of software compo-

    nents for different devices are bound into one operation environ-

    ment to give the user the feeling of interacting with a solid

    system. News@hand (Cantador et al., 2008) is a news system that

    makes use of semantic technologies to provide online news recom-

mendation services through the ontological description of contents and user preferences; recommendations are displayed in an auto-

    generated user interface that contains a paginated list of items.

    Abascal et al. (2008) discuss adaptive user interfaces oriented to

    the needs of elderly people living in intelligent environments and

    propose an interface based on coherent multimedia text messages

    that appear on a TV screen. Gilson et al. (2008) make use of domain

    knowledge in the form of ontologies to generate information visu-

    alizations from domain-specific web pages. Multimedia retrieval

    and control interfaces for Ambient Intelligence environments have

    also been widely studied. The Personal Universal Controller (Nic-

    hols et al., 2002) represents one result of such studies. This system

builds a real user interface at run-time on a mobile device in order

    to unify the control of complex appliances such as TVs and DVD

    players. Wang et al. (2006) design and implement a personalized

    digital dossier to present media-rich collections, including video,

images and textual information, each of which is generated as

    independent windows that are simultaneously displayed. The

    Huddle system (Nichols et al., 2006) provides user interfaces for

    dynamically assembled collections of audio-visual appliances.

    These prior studies consider the autonomous generation of user

    interfaces but assume static behavior at run-time. Additionally,

    some of these systems focus only on particular aspects of user

    interfaces, for example, menus (Ressel et al., 2006), lists of items

    (Cantador et al., 2008) and message boxes (Abascal et al., 2008).

    Other studies analyze user interfaces in very limited application

    domains, such as multimedia control (Nichols et al., 2002), multi-

media retrieval (Nichols et al., 2006), social information (Vázquez and López de Ipiña, 2008), and digital dossiers of artworks (Wang

et al., 2006). The SUPPLE system (Gajos, 2008) is a notable exception; it can automatically generate complex interfaces adapted to a person's device, tasks, preferences, and abilities through formally

    defining the interface generation process as an optimization prob-

    lem. We have found many interesting similarities between this

    system and our approach, especially with respect to the contextual

    aspects that must be taken into account. However, SUPPLE requires

    a formal and articulate description of each widget in an interface.

    As a result, despite the automatic generation process, the final cre-

    ation of these model-based user interfaces requires a large amount

    of upfront effort. Additionally, the generated interfaces are focused

    on dialog box-like interfaces, as this is a style that may be inappro-

    priate for Ambient Intelligence user interfaces.

    In general and in contrast to our adaptive approach, all previ-

ously described works only consider input data and static contextual information. Very few systems consider autonomous run-time

    adaptation based on context, and most of them apply run-time

    adaptability to specific and delimited domains and/or basic user

    interfaces. For example, Ardissono et al. (2004) apply recommen-

    dation techniques in personal program guides for digital TV

    through a dynamic context model that handles user preferences

    based on user viewing behavior. The ARGUR system (Hartmann

    et al., 2008) is an exception because it is motivated by the desire

    to create multi-domain interfaces. ARGUR is based on mapping

    context elements to input elements in the user interface; for exam-

ple, the user's agenda may suggest a date and time for departure as

    the input of a travel agency web page. Typically, it is very difficult

    to establish a one-to-one relationship between input elements and

contextual characteristics. In fact, we have found a many-to-many relationship to be more common in our adaptive system. That is,

    several contextual elements affect several user interface compo-

    nents. In addition, there are also adaptive user interfaces that base

    their behavior on particular components of the context. The adapt-

    ability of interfaces to different kinds of device is a common chal-

    lenge. Butter et al. (2007) developed an XUL-based user interface

framework to allow mobile applications to support different

    screen resolutions and orientations. The SECAS project (Chaari

et al., 2007) includes a generic XML user interface vocabulary to provide adaptive multi-terminal capabilities based on the descrip-

    tion of each panel of the interface, visualization adaptation and

    navigation among panels.

    In summary, our context-aware system for autonomous gener-

ation and run-time adaptation of user interfaces takes into account

    the above-described work and makes the following contributions.

    Automatic user interface generation to reduce design effort. The

    ViMos framework automatically generates user interfaces at run-

    time and thus reduces design effort. The designer does not need

knowledge of programming or design. Only detailed knowledge of the application domain is required to specialize the context

    model for visualization; in addition, the user needs to define the

    dynamic behavior through SWRL. For this goal, there are many

available tools, including, for example, Protégé.2

    Dynamic multi-modal complex user interfaces. The ViMos

    framework generates complex user-interfaces based on differ-

    ent design patterns and provides a complete set of components

    to visualize data of different natures as well as include several

    kinds of scalability strategies. Although ViMos does not provide

    a mechanism to generate user interfaces for any specific pur-

    pose or task, it offers interfaces for a wide user interface sub-

    type: information presentation. The implementation of ViMos

    has been explored in several domains, including collaborative

    groups, medical environments (Bravo et al., 2009a) and public

    services (Bravo et al., 2009b). Adaptive user interfaces with respect to context changes at run-

    time. ViMos provides mechanisms to readapt visualization ser-

vices across the run-time life of the applications. Adaptive context-aware user interfaces should implement a mechanism to

    enhance their behavior based on user needs due to the impossi-

    bility of detecting all needs at design-time. These mechanisms

    can be automatic or human-generated. Advances in machine

    learning can improve context-aware systems; such advances

    (e.g., incorporating a learning mechanism into ViMos as a func-

    tional engine) comprise a significant area of future study. At

    present, ViMos provides a human-generated mechanism to

    adapt visualization service behavior by changing the set of

SWRL rules at run-time through refactoring techniques.

    Complex ontological-powered context modeling. ViMos

includes a context model composed of four ontologies:

    users, devices, environment, and visualization services. The pro-

posed model is detailed and can be considered complete, though it must be specialized to particular application domains before

    implementation. A general model that encompasses any appli-

    cation domain requires a complex structure, but even with such

    a structure, it still cannot be universal. Our strategy is to isolate

    the principal elements across all user-centered context-aware

applications and provide mechanisms to rapidly specialize the

    context model and prototype applications. This approach pro-

    vides mechanisms to formalize the conceptualization of context

    aspects, thereby enhancing semantic cohesion and knowledge-

    representation capabilities. Moreover, we provide a high-level

    abstraction model that relates the ontologies. This taxonomical

2 The Protégé Ontology Editor and Knowledge Acquisition System. http://protege.stanford.edu/.


    organization enables exchange between adaptive services and

    specialization in particular domains.

    Overall, these contributions help address the three main chal-

    lenges discussed in Section 4. First, the formal model and the

    mechanisms to specify application behavior at run-time enable

    the identification of relevant changes in context, the correlation

    between these changes and the subsequent reconfiguration of user

    interfaces. Second, the ontological description of visualization ser-vices make possible the interoperation of information presentation

    in user interfaces; contents managed by an application can be

    shared, as can include descriptions and information about how

    contents should be visualized. Finally, ViMos offers model and pat-

    tern-driven mechanisms to adapt contents to different user

    interfaces.

    8. Discussion

    The first core question raised by this paper is whether systems

    like ViMos are practical. Our proposed automatic context-aware

    user interface generator focuses on a particular kind of interface,

    namely, information presentation. Thus, the primary requisite for

    practically using ViMos is that the principal task to be performed

    by users interacting with ViMos involves obtaining personalized

    information. It is important to keep in mind the two main high-le-

    vel components of our system: the semantic-powered representa-

    tion of context and behavior and the mechanisms to generate and

    adapt user interfaces. The potential of ViMos emerges whenever

    we exploit these two characteristics in highly dynamic environ-

    ments that greatly affect an applications behavior as well as when

    using complex and heterogeneous information sources that require

    the adaptation of user interfaces at run-time. The prototype de-

    scribed in this paper illustrates the use of this system. However, Vi-

    Mos increases the level of design effort under static user interfaces

    and interfaces that are not highly influenced by context. For these

    cases, we have shown that the two high-level components de-

    scribed above can provide interesting functionalities for certainkinds of applications by using them independently. The proposed

    automatic user interface generation mechanism can be applied

    for rapid prototyping purposes in dynamic and non-contextual-

    influenced user interfaces. In addition, the context model and the

    context management system can provide effective information

    sourcing to improve external user interfaces that were not

    created by ViMos or other kinds of services requiring contextual

    information.

    In addition, in the introduction of this paper, we focused on the

    challenge of visualizing the right information to the right person in

    the right place. Our proposal offers a partial solution for achieving

    this goal. In the development and testing of ViMos, we highlighted

    several primary problems that affect this kind of system. First, it is

    difficult to model the relationship between user context and infor-mation needs. This is a many-to-many relationship and depends on

    factors that may not be directly observable. For example, users

    usually change the performed task whenever they feel tired,

    though this fact may remain unnoticed under ViMos. As a result,

    our system may generate inappropriate contents. Last-minute

    changes in the user planning, personal circumstances and the gen-

    eral unpredictability of human nature are unobservable factors

    that make the definition of application behavior difficult. In order

    to address this problem, ViMos includes a historical record that

    uses a meta-context engine to detect repeated information content

    that is rejected by the user. In this case, the rules that induced the

    visualization of these contents incur penalties. In the same way,

    this historical record temporally promotes the rules that generate

    appropriate content because we have observed that when someinformation needs are satisfied, other similar needs emerge. This

    promotion and penalization mechanism considers the dynamic

    and evolving task of searching for information; however, we be-

    lieve that this mechanism can be improved through machine

    learning techniques that automatically change the behavior rule

    set instead of promoting and penalizing existed rules.

    Second, a future challenge involves bridging the semantic gap

    between the extraction of the underlying raw feature data and

    the need for a semantic description of contents in order to retrieveand generate their visual form. In fact, ontology population (that is,

    transforming data into ontological instances when new content is

    retrieved from the Web or other kinds of sources) is an open re-

    search challenge. This issue motivated our decision to develop

    the context model through Semantic Web languages. Thus, future

    advances in the semantic description of content can be easily com-

    bined with our current context model. Meanwhile, we also plan to

    analyze how to enhance the semantic description of textual con-

    tent in ViMos through language analyzers that extract and catego-

    rize relevant document terms and then compare these terms with

    ontological individuals generated by the context model using fuzzy

    metrics. This is another future work with respect to ViMos.

    Finally, we started from the general idea that the automatic

    generation of user interfaces creates less aesthetically pleasing

    interfaces than those created by human designers. However, the

    user interfaces generated using Windows Presentation Foundation

    technology have obtained high evaluations from users. We do not

    intend to replace human designers, as handcrafted user interfaces

    are always more desirable and attractive because they reflect the

    creativity and experience of designers. However, we believe that

    ViMos generates sufficiently attractive interfaces from the users

    perspective.

    9. Conclusions

    This paper presents an infrastructure to support information

    adaptability for users by describing context information with

    Semantic Web languages. The context information is represented

    by a general model, which describes the world around the applica-tions and users, in a simplified way. Also, we presented a specific

    model to describe how the raw data are transformed into a view,

    as well as their scalability, interaction and relationships.

    This approach allows the initial automatic generation of user

    interfaces at run-time with the necessary dynamism for adapting

    users needs according to their context. A simple language, such

    as SWRL, to define which context changes should create new views

    in the display, has proven sufficient. Users have expressed a high-

    level of acceptance of the manner in which they can access

    information. In general, the inference mechanisms have selected

    the required documents with a general rate of acceptance of

    80.77%. The visual representation of the items of content and their

    integration into mosaics has been also positively evaluated. ViMos

    successfully adapts different types and amounts of data to the userinterface through scalability techniques at run-time. Additionally,

    the application of well-known design patterns to display the mosa-

    ics helps to use the service, giving a similar view that provided by

    common desktop applications.

    Apart from the content and its visualization, this work empha-

    sizes the way in which information can be associated with its visu-

    alization, supporting interoperability and sharing of data between

    different applications and domains.

    In the introduction, we stated the ideal scenario in which users

    receive ex