Information Visualization Engine
(iVE)
by
PETER JOHN VINTON
B.S., University of California, 1985
A thesis submitted to the Graduate Faculty of the
University of Colorado at Colorado Springs
in partial fulfillment of the
requirements for the degree of
Master of Science
Department of Computer Science
2004
This thesis for the Master of Science degree by Peter John Vinton
has been approved for the
Department of Computer Science
by
_________________________________ Chair, Marijke Augusteijn
_________________________________ Jugal Kumar Kalita
_________________________________ Edward Chow
© Copyright by Peter John Vinton 2004. All Rights Reserved.
CONTENTS
CHAPTER
I. Introduction..............................................................................................1
Introduction of the Problem.................................................................................1
Motivation............................................................................................................2
Solving the Problem............................................................................................2
Result Summary...................................................................................................3
II. Background Research............................................................................4
Value Bars...........................................................................................................4
Information visualization using 3D interactive animation..................................5
TileBars................................................................................................................6
Cone Trees...........................................................................................................6
The Perspective Wall...........................................................................................7
Generalized Fisheye Views.................................................................................8
SemNet................................................................................................................9
Database Navigation..........................................................................................10
The Information Visualizer................................................................................12
InfoCrystal.........................................................................................................12
Summary and Comparison................................................................................13
III. Information Visualization Engine (iVE)............................................15
iVE, a User-Friendly Approach to Low-level Information Visualization.........15
Components of iVE...........................................................................................17
The Information Visualization Design......................................................17
Low-Level Vision..................................................................................18
High-Level Vision.................................................................................19
GUI Design............................................................................................21
iVE GUI Architecture............................................................................23
The Program/Algorithms...........................................................................25
Language Selection, Software Architecture, Software Setup................25
Software Narrative.................................................................................27
Pseudo-Code..........................................................................................28
Algorithm/Modules of Interest..............................................................32
IV. iVE Visualizations................................................................................36
iVE Visualization Explanations.........................................................................36
V. iVE Evaluation......................................................................................50
Usability Evaluation Design Background.........................................................50
Evaluation Design......................................................................................53
Evaluation Results.....................................................................................54
Evaluation Form........................................................................................56
VI. Conclusions and Future Research......................................................58
Accomplishment................................................................................................58
Problems Faced and Solutions...........................................................................58
Improving iVE...................................................................................................59
Impact of Research............................................................................................61
References....................................................................................................62
FIGURES
Figure
1 iVE GUI Architecture..............................................................................................24
2 iVE Main GUI..........................................................................................................38
3 iVE Tutorial.............................................................................................................39
4 Actual iVE Results...................................................................................................40
5 A Least Relevant Visualization and Associated URL Document.............................42
6 A Median Relevant Visualization and Associated URL Document........................43
7 A Most Relevant Visualization and Associated URL Document............................45
Chapter 1
Introduction
Introduction of the Problem
Information visualization is the concept of shifting the interpretation of
information from the cognitive domain to the perceptual domain. It is similar to the idea
that "a picture is worth a thousand words".
Information visualization can occur in any of the phases of the information
retrieval process. Visualizations of the information retrieval process can be divided into
four areas:
1. The information environment can be visualized. This could be what is seen on the
standard desktop of a personal computer, perhaps as simple as the icons displayed
on the desktop.
2. The information space itself can be visualized. For example, if a user has a set of
digitized journals on his/her personal computer, the information space could be
graphically visualized as a set of books on a bookshelf.
3. Navigating through the information space can be visualized. For example, this could
be done by scrolling left and right and up and down through the virtual bookshelf.
4. The resulting information units can be visualized. For example, this could be done by
a virtual list of journals produced by a virtual librarian.
However, a problem exists: how does a user know which journal is the most
relevant?
Motivation
As stated in the previous section, information visualization can be implemented
during the many phases of the information retrieval process. The focus of this thesis is in
applying information visualization to a collection of resulting information units (i.e.
collection of papers) to determine which information unit is the most relevant.
This can be a very time consuming process if the user is using a “pick and see
method” on the World Wide Web (WWW). However, by shifting the interpretation of
information to the perceptual domain a user can save a significant amount of time when
searching for relevant information.
Solving the Problem
The goal of this thesis, in terms of image processing and neural networks, is to
create a feature vector visualization to be inputted into the human neural network. The
feature vector visualization created is a graphical image of an information unit and will
give the user the means to quickly determine relevancy. The application developed to
create this feature vector visualization is called the Information Visualization Engine
(iVE, pronounced "ivy"). This feature vector visualization is based on a basic
understanding of human psychology/physiology and the concept of good GUI design.
Result Summary
The results come from an evaluation conducted on iVE and are contained in Section
5. The evaluation consisted of observations of participants using iVE and of statistical
information gathered from evaluation forms filled out by the participants. The evaluation
encompasses two categories of experiments: a usability evaluation of a single tool, and a
controlled experiment comparing two or more tools (see Section 5).
Chapter 2
Background Research
The following is a list of relevant papers and a synopsis of their content.
Value Bars
An information visualization and navigation tool of multi-attribute listings,
Richard Chimera [1]
A Value Bar displays information in the form of a long thin vertical bar. This bar
is sectioned, with each section corresponding to some unit of information (i.e. a file, a
document, etc.). The height/size of each section represents a weight with respect to some
attribute.
For example, if a user had a list of files and was interested in the size attribute of
the file, this tool would create a long thin vertical bar next to the list of files. The
size/height of each section that makes up this thin vertical bar would represent the size
attribute weight for the corresponding file.
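The proportional sectioning described above can be sketched in a few lines; this is an illustrative reconstruction with hypothetical file sizes, not Chimera's actual implementation:

```python
def value_bar_sections(values, bar_height):
    """Divide a bar of bar_height pixels into sections whose heights are
    proportional to the given attribute values (e.g. file sizes)."""
    total = sum(values)
    return [bar_height * v / total for v in values]

# Hypothetical file sizes in bytes, mapped onto a 300-pixel Value Bar.
sizes = [1000, 3000, 6000]
print(value_bar_sections(sizes, 300))  # -> [30.0, 90.0, 180.0]
```

The largest file gets the tallest section, so the user can judge relative sizes at a glance instead of reading the numbers.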
Information visualization using 3D interactive animation
George G. Robertson, Stuart K. Card, and Jock D. Mackinlay [2]
Information visualization is the concept of creating visual objects to more easily
retrieve and assimilate information, thereby shifting the interpretation of information to
the perceptual domain. The whole process from initially retrieving the information to
attaining the specific information was discussed. The four main areas to accomplishing
this goal (on a computer) were:
1. Create a large workspace: create more screen space by using a concept of rooms and
a denser screen space by using animation and 3D.
2. Offload work to agents: querying/searching using search agents, organizing using
clustering agents, and interacting using interactive objects.
3. Maximize real-time interaction rates: rapid interaction using a cognitive coprocessor
scheduler and governor to tune the system for the human perceptual system.
4. Visually abstract information to speed pattern detection: information visualizations
using concepts of hierarchical structure (i.e. cone tree), linear structure (i.e. perspective
wall), continuous data (i.e. data sculpture), and spatial data (i.e. office floor plan).
TileBars
Visualization of terms distribution information in full-length document access,
Marti A. Hearst [3]
A TileBar is a horizontal bar of tiles used to show the features of a
document. The shade, location, and size of these tiles represent the frequency of the
desired search string. The tiles themselves represent a neighborhood of words, such as a
paragraph or a chapter. The overall length of this bar represents the
relative length of the document. The TileBar is specifically designed for text documents
and focuses on items such as term frequency, term distribution, and subtopic boundaries.
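The tile-shading idea can be illustrated with a short sketch; the tile granularity (one paragraph per tile) and the 0-to-1 shade scale here are hypothetical, not Hearst's actual design:

```python
def tile_shades(tiles, term):
    """For each tile (a block of text, e.g. a paragraph), count occurrences
    of the search term. A shade of 0.0 means the term is absent; 1.0 marks
    the tile where it is most frequent."""
    counts = [tile.lower().split().count(term.lower()) for tile in tiles]
    peak = max(counts) or 1  # avoid division by zero when the term never appears
    return [c / peak for c in counts]

paragraphs = ["the cat sat", "cat cat dog", "no match here"]
print(tile_shades(paragraphs, "cat"))  # -> [0.5, 1.0, 0.0]
```

Rendering each value as a gray level produces the row of light and dark tiles that lets the user spot term distribution without reading the document.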
Cone Trees
Animated 3D visualizations of hierarchical information, George G. Robertson,
Jock D. Mackinlay, and Stuart K. Card [4]
The Cone Tree is a visualization tool that presents large information spaces in a
picture-like manner. The idea is to take advantage of the human perceptual system for
recognizing patterns in the data.
Cone Trees represent hierarchical information in the form of a root node that has
leaf nodes. Leaf nodes with the same root node form a ring, resulting in an overall shape
of a cone. In addition, each leaf node in the ring may have its own ring of leaf nodes.
The Cone Tree uses colors, shading, the third dimension, and movement (i.e. rotating the
tree) to shift some of the cognitive load to the human perceptual system. This design also
allows more data to be displayed on a two dimensional monitor.
The Perspective Wall
Detail and context smoothly integrated, Jock D. Mackinlay, George G. Robertson,
and Stuart K. Card [5]
The Perspective Wall efficiently deals with large information spaces. It takes into
account the biological characteristics of the human eye. There is a region of the eye that
perceives details and is in focus while the surrounding area is unfocused and has low
resolution.
One can imagine the Perspective Wall as a scroll of paper that is stretched over
the face of a box. The section of paper on the face of the box is in focus. The remaining
sections on each side of the box fade off to focal points in the backdrop. By using a
mouse or some other device, different parts of the scroll can be positioned onto the face
of the box. This scrolling action is done in a smooth and consistent fashion so that object
constancy remains intact.
Some linear feature of the information is used to organize the information on the
wall. For example, a set of documents with timestamps/dates could be organized so that
the oldest documents are on the far left end of the wall and the most current documents
are on the far right end of the wall.
Another reason for this design was the limited space on a computer monitor. By
having the less focused left and right sides of the wall fade off into the distance, a large
amount of information can be displayed. Information objects also remain in context with
the rest of the information space. The vertical direction can be used to display
hierarchical information.
Generalized Fisheye Views
Generalized Fisheye Views, George W. Furnas [6]
'Fisheye Views' is a viewing strategy dealing with large information spaces. The
concept is to show the closest area (i.e. area of interest) of the information space in great
detail while at the same time keeping a sense of its relation to the whole global
information space.
This Fisheye View occurs naturally to humans. Examples of this natural
occurrence are:
- Newspaper articles focus on many local stories where the only 'distant stories' are the
ones of greater importance.
- In the management domain, people know their immediate section heads and managers
in some detail but know very little about people in other departments.
Fisheye Views are naturally occurring so it seemed appropriate to apply this idea
to large information spaces. To formalize this concept a "Degree of Interest" function
was created, which is based on "a priori" importance and distance. The result of this
function determines the most interesting points that need to be displayed.
The results of the Fisheye View were successful when compared to a flat view of
an information space. One example cited involved viewing source code. Compared to a
flat view, the Fisheye View was more informative: superfluous lines of code were
removed, leaving only the significant contextual information. Only the significant
'while', 'if', and 'switch' statements of the algorithm surrounding the source code lines of
interest were shown.
SemNet
Three-dimensional graphic representations of large knowledge bases, Kim M.
Fairchild, Steven E. Poltrock, George W. Furnas [7]
SemNet is a three-dimensional graphical user interface designed to help people
interact with large information spaces. It is hypothesized that three items must be
recognized for a user to comprehend an information space:
1. Identities of individual elements in the knowledge base.
2. Relative position of an element within a network context.
3. Explicit relationships between elements.
SemNet uses a three-dimensional directed graph to visually represent a knowledge
base. SemNet positions the knowledge elements of a knowledge base using mapping
functions (i.e. heuristics, multidimensional scaling) specific to the knowledge base and
connects these elements using colored arcs. It is also possible for knowledge elements to
be positioned by the user if the user has information not contained in the knowledge base.
The result is a grouping of knowledge elements with arcs between them. SemNet also
uses a "generalized fisheye view" when there are too many arcs and knowledge elements
to be comprehensible.
To navigate through the knowledge base, five methods were studied:
1. Relative movement: like a helicopter navigating through a landscape.
2. Absolute movement: having an overall map of the knowledge base.
3. Teleportation: immediately being able to go back to a position in the knowledge base
previously visited.
4. Hyperspace movement: moving between knowledge elements by a certain attribute.
5. Moving in space: moving the knowledge base as a whole rather than navigating
through it.
Debugging the knowledge base was performed by tracing a query through the arcs
and knowledge elements to verify their relation.
Database Navigation
An office environment for the professional, Robert Spence, Mark Apperley [8]
Professional people spend much of their time working with information. The two
main tasks dealing with information handling are:
1. Specification of the information.
2. Retrieval of the information item.
A proposed computer system design, called "Office of the Professional", was
researched with the goal to help the professional in information handling. The design of
the "Office of the Professional" took into account the following human factors:
1. Lack of enthusiasm to learn a complex language.
2. Appearance of their personal office.
3. Limitations set by human memory.
4. Highly developed spatial memory in humans.
5. Highly developed "search by visual scan" in humans.
A system consisting of virtual office objects was created. Besides obvious objects
such as a calculator and an in-basket, a complex index system was developed. This index
system (e.g. an index system for a set of journals) consisted of a bifocal display.
The bifocal display was divided into a section of focus in the middle of the display
surrounded by a section of less focus. The overall idea is similar to the "Generalized
Fisheye View" where the items of interest are more detailed than the surrounding items.
Higher zoom capabilities were also available depending on the type of information space
that was being navigated. A point and touch system for selecting the information items
was also studied since this was the most intuitive form of selecting items.
The Information Visualizer
A 3D user interface for information retrieval, J. D. Mackinlay, G. G. Robertson,
S. K. Card [9]
The Information Visualizer is a user interface that addresses the overall process of
acquiring relevant information. In addition to the main goal of information retrieval, the
cost of information from secondary storage to immediate use was also taken into account.
The three main components of the Information Visualizer are:
1. 3D/Rooms: These rooms accommodate 'Locality of Reference' and clustering,
resulting in larger and denser immediate information storage.
2. The Cognitive Coprocessor: An animated user interface architecture. This user
interface creates an animated view of the virtual information world and handles items
related to human physiology and psychology (i.e. perceptual processing time constant,
immediate response time constant, object constancy…).
3. Information Visualizations: Structure-oriented browsers for using different sets of
information; Cone Trees for hierarchical information and the Perspective Wall for linear
information.
InfoCrystal
A visual tool for information retrieval and management, Anselm Spoerri [10]
The InfoCrystal is an information visualization tool that attempts to show the
connection between different concepts in an information space. The InfoCrystal is based
on the concept of Venn Diagrams and uses shaped objects, colors, and location of objects
to represent the information space.
For example, assume there are four documents and three concepts of interest. Let
us also assume only one of the documents contains all three concepts. The InfoCrystal
would have an external shape of a triangle with each vertex representing a different
concept. Inside the external triangle would be four geometric shapes (i.e. circle, square,
rectangle, pentagon) corresponding to the four documents. The geometric shape
representing the document containing all three concepts would be mapped to the center of
the external triangle. The other geometric shapes would be pulled toward one vertex or
another depending on how much of each concept they contained.
In terms of a Venn diagram, the center of the InfoCrystal would correspond to
the area where all three (concept) circles intersect.
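The pulling of a document toward the concept vertices can be sketched as a weighted (barycentric) average; the triangle coordinates and concept weights below are hypothetical, not Spoerri's actual layout algorithm:

```python
def crystal_position(vertices, weights):
    """Place a document inside the InfoCrystal triangle as the weighted
    average of the concept vertices. A document containing all concepts
    equally lands at the center; one dominated by a single concept is
    pulled onto that concept's vertex."""
    total = sum(weights)
    x = sum(v[0] * w for v, w in zip(vertices, weights)) / total
    y = sum(v[1] * w for v, w in zip(vertices, weights)) / total
    return (x, y)

# Hypothetical triangle with one vertex per concept.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
print(crystal_position(tri, [0, 1, 0]))  # -> (1.0, 0.0), concept 2's vertex
print(crystal_position(tri, [1, 1, 1]))  # equal weights land at the center
```

The document containing all three concepts maps to the triangle's centroid, matching the center-of-the-Venn-diagram intuition above.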
Summary and Comparison
The visualization tools mentioned in the above papers fall into three categories.
The first category, under which the bulk of the tools fall (2.2 Information visualization
using 3D interactive animation, 2.4 Cone Trees, 2.5 The Perspective Wall, 2.6
Generalized Fisheye Views, 2.7 SemNet, 2.8 Database Navigation, and 2.9 The
Information Visualizer), is at a different scope than iVE. These category 1 tools deal
with the overall information space and are concerned with issues such as how to navigate
and organize the information space. In contrast, iVE is only concerned with information
units once the user reaches the area of interest in the information space.
The second category of tools, 2.10 InfoCrystal, is concerned with multiple
concepts. iVE on the other hand is concerned with the comparison and relevancy of
information units under one concept.
The third category of tools, 2.1 Value Bars and 2.3 TileBars, is concerned with
the comparison of information units and is the category which iVE falls under as well.
Though these two tools allow comparison of information units, they did not seem to take
into account the properties of light, the human visual system, or general concepts of good
GUI design. These tools also appeared to visualize only one attribute or feature: Value
Bars is concerned with file size and TileBars is concerned with word
frequency/distribution. The domain iVE focuses on is concept relevancy, meaning that
the user is able to determine which document is most relevant with reference to some
topic of interest. In addition, visualization items such as color, shape, and labeling were
not used in the visualizations of these other two tools. The use and reasoning behind
these visualization items in iVE are further explained in Section 3. This being the case,
iVE has incorporated very few, if any, features from these other tools.
Chapter 3
Information Visualization Engine (iVE)
iVE, a User-Friendly Approach to Low-level Information Visualization
Information visualization can be applied to any or all of the phases of the
information retrieval process. The information world where this information retrieval
process occurs can be thought of as consisting of four areas. The first is the information
environment, the second is the information space, the third is the navigation through the
information space, and the fourth is the display of the information units.
To further clarify these four areas of the information world, the following two
examples are given:
Example 1: Journals on a Perspective Wall:
The first area, the information environment, is visualized by what is seen on the
standard desktop on a personal computer. In the case of the Perspective Wall, this could
be as simple as an icon amongst many other icons on the desktop.
The second area, the body of data, is visualized by the Perspective Wall. The
Perspective Wall can be thought of as a bookshelf, where journals on the bookshelf are
categorized by date, going from left to right, the oldest journals on the left end and the
most current journal on the right end. Journals on the same vertical would have
approximately the same date.
The third area, the navigation through the Perspective Wall, is a visualization of
the method used to go to different sections of the Perspective Wall. In the case of the
Perspective Wall, the wall itself scrolls by from left to right or right to left depending on
which section is desired to be viewed.
The fourth area, the display of the information units, is visualized by the journals
sitting on this wall.
Example 2: Documents on the World Wide Web:
The first area, the information environment, is visualized by what is seen on the
standard desktop on a personal computer. In the case of the World Wide Web, this is an
icon amongst many other icons on the desktop.
The second area, the body of data, is not really visualized on the WWW; it is just
understood that a user can access a large nebulous body of data.
The third area, the navigation through the WWW, is visualized by the GUI
(Graphical User Interface) of the web browser.
The fourth area, the display of the information units, is visualized by the listing of
URLs resulting from the user's web search.
But a problem exists: after a set of journals or a set of URLs is collected,
which of the journals or URLs is the most relevant? iVE resolves this problem.
Components of iVE
iVE can be thought of as consisting of two major parts:
1. The information visualization design.
2. The program/algorithm.
The information visualization design is the design of the visualization that
represents the information unit. The program/algorithm is the code that produces this
visualization.
The Information Visualization Design
The visualization design of iVE was created by taking into account three areas of
study:
1. Low-Level components of the human visual system.
2. High-Level components of the human visual system.
3. GUI design.
Low-Level components of the human visual system are the characteristics and
properties of light and the biological components of the human visual system that react to
light [11].
High-Level components of the human visual system are the cognitive processes
involved in "seeing" this light. This involves how the mind interprets
what is seen and how we think about what is being seen. This level relates to the
psychology of the mind and how we learn, amongst other things.
GUI design is the design features in creating a GUI that contains the visualization
and takes into account the High and Low Level components.
Low-Level Vision
iVE has taken into account the various characteristics of light and how the
biological components of the human visual system react to light.
The receptors of the human visual system consist of rods and cones. The cones of
the human eye are what allow people to see color and to determine the overall brightness
of images. But these cones are not equally sensitive to all colors: cones are more
sensitive to red and less sensitive to blue. This being the case, red objects appear closer
than blue objects and draw more attention [12]. iVE therefore uses red to represent the
most pertinent parameter in its visualization, blue to represent the least pertinent
parameter, and colors between red and blue to represent parameters of intermediate
importance.
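This red-to-blue ordering can be sketched as a simple linear interpolation; the thesis does not specify iVE's exact color values, so the RGB mapping below is only illustrative:

```python
def importance_color(rank, n_params):
    """Map a parameter's importance rank (0 = most important) onto a
    red-to-blue scale: red for the most pertinent parameter, blue for the
    least, intermediate hues in between. Returns an (r, g, b) byte triple."""
    t = rank / (n_params - 1)  # 0.0 = pure red ... 1.0 = pure blue
    return (round(255 * (1 - t)), 0, round(255 * t))

# Four parameters, as in iVE's visualization.
print([importance_color(r, 4) for r in range(4)])
# -> [(255, 0, 0), (170, 0, 85), (85, 0, 170), (0, 0, 255)]
```

Keeping the scale strictly ordered from red to blue preserves the "red draws attention" property for the most pertinent parameter.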
Due to the structure of the human eye, only a small part of the image people see is
in focus and in high resolution. The cones of the human eye are concentrated in a small
central region of the retina called the fovea. The part of the image that falls on the fovea
is therefore the part of the image in focus and in high resolution. iVE has
taken this into account and has created an image whose size is small enough to be in
focus at a single glance.
High-Level Vision
iVE has taken into account the many aspects and theories of high-level vision.
High-level vision, as stated above, is concerned with how the human visual system works
with the mind, such as memory, learning, and perception.
The human memory system can be divided into three areas [13]. The first area is the
sensory buffer area which stores information input from the senses, such as vision and
touch. Information is retained in this area of memory for about 2 seconds. The second
area is short-term memory or working memory. Information is retained in this area for
about 18 to 20 seconds unless it is purposely rehearsed. The third area is long-term
memory which can last a lifetime. iVE is concerned with the sensory buffer area and
short-term/working memory and their respective theories.
MILLER'S THEORY
Miller's theory is that short-term/working memory works best with 7 ± 2 small
chunks of data [14].
For instance it is easier to retain
(617) 459-1223
than
6174591223
and it is easier to retain
DEC IBM GMC
than
DECIBMGMC
Taking this into account, iVE only displays 4 parameters/features of an
information unit.
NUMBER OF COLORS
The use of color was already explained in the Low-Level Vision Section, but from
the high-level vision point of view it appears best to limit the number of colors to
five [15]. iVE basically uses only four colors, one for each parameter/feature.
FITTS'S LAW
Fitts's Law states that the speed at which a person can maneuver to different parts
of an image is based on the distance, size, and relative precision of the objects in the
image. This being the case, it is quicker to maneuver to objects positioned in a
circular/perimeter orientation than to objects displayed in a list fashion [16]. Taking into
account future enhancements of iVE, where a user may select/click on one of the feature
dots to bring up a similar image of sub-features, iVE uses a circular image.
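Fitts's Law is commonly written in the Shannon form T = a + b · log2(D/W + 1), where D is the distance to the target and W its width; the device constants a and b in the sketch below are hypothetical, chosen only to show that farther targets take longer to reach:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    """Predicted movement time under Fitts's Law (Shannon formulation).
    The index of difficulty log2(D/W + 1) grows with distance and shrinks
    with target size; a and b are device-dependent constants."""
    return a + b * math.log2(distance / width + 1)

# A target twice as far away (same size) takes longer to acquire.
near = fitts_time(distance=100, width=20)
far = fitts_time(distance=200, width=20)
print(near < far)  # -> True
```

Placing the feature dots on a circle keeps their distances from the center roughly equal, which by this model equalizes and shortens pointing times compared to a vertical list.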
BRAIN POWER
Vision accounts for approximately 40% of brain functioning. This functioning
has yet to be rivaled or duplicated by anything man-made [17]. This is probably the
reason why people are naturally and implicitly visually perceptive creatures. Because of
this processing power, people are able to pick out and see shapes extremely well.
Examples of this are the ability to recognize faces and the ability to pick out objects
against a cluttered background [18]. iVE exploits this important aspect of the human
visual system: the overall shape/pattern created by the feature dots is what the user
compares across the resulting visualizations. The rule of thumb for the visualization is
that if all the feature dots reside on the delimiting circle of the visualization, then this
image and its corresponding information unit is the most relevant document.
GUI Design
iVE has also taken into account several general features related to good GUI
design [19]:
First, the background color of any GUI should not distract the user or be hard on
the user’s eyes. In this case, boring is good. iVE therefore has a gray background for its
GUIs [20].
Second, since iVE is based on the concepts described in the High-Level Vision
Section and the Low-Level Vision Section, iVE is intuitive and easy to interpret, which is
another quality of a good GUI design.
Third, a straightforward tutorial is provided to the user so the user can be up and
running quickly.
Fourth, iVE has pertinent use of color by taking into account the various aspects
of Low-Level Vision and High-Level Vision.
Fifth, iVE provides reasonable feedback by providing a window that allows the
user to see the data that iVE is processing.
And sixth, since not all people are able to see in color, labels have been used as
secondary indicators.
Though GUI design is a bit of an art form, iVE implements most of the general
features of good GUI design.
iVE GUI Architecture
The iVE GUI architecture consists of three levels, shown in Figure 1.
When the application is executed, the first level of GUIs appears. There are two Level 1
GUIs: the Main GUI which accepts the user’s search phrase, and an accompanying
DOS/Perl shell GUI which displays the data as it is being processed along with any
warnings and errors.
After processing has completed the second level GUI appears. This Level 2 GUI
displays the visualization results as well as corresponding URL buttons. The user is then
free to select any of these URL buttons to bring up the third level of GUIs.
The Level 3 GUIs consist of the URL documents brought up in the browser.
Figure 1 iVE GUI Architecture
Level 1a: Main GUI, used for entering the search phrase.
Level 1b: DOS/Perl shell, accompanies the Main GUI; displays data/warnings while
processing.
Level 2: iVE visualization results GUI; displays visualizations and active URL buttons
after processing is completed. The active buttons are used to look at the actual
documents.
Level 3: URL documents displayed by clicking on the active URL buttons from the
iVE visualization results GUI.
The Program/Algorithms
Language Selection, Software Architecture, Software Setup
iVE was written in Perl because of its built-in features for extracting URL
documents (i.e., user agents) and its pattern-matching ability. Perl also works relatively
well on a Microsoft Windows platform. In addition, Perl/Tk provides the utilities for
creating the GUIs and the information visualizations.
The experimental setup is a personal computer running a Windows operating
system (Windows 98 or higher) and connected to the Internet. The software
must be run in a directory that contains a Windows Explorer executable.
The software is flat and sequential; the program is approximately 2 MB in size.
The flow of the software is strictly sequential, with a linear time complexity of O(n). A
generalized view of the flow can be seen in the following diagram.
SOFTWARE FLOW
1. Query and search the WWW for potentially relevant information units
2. Collect the information units for feature extraction
3. Extract features for each information unit
4. Calculate the visual parameters for each information unit
5. Present the visualizations in a user-friendly GUI for the user to access
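The steps above can be sketched as a minimal sequential pipeline. This is an illustrative Python sketch rather than the Perl actually used; the stage names and bodies are stand-ins (the search-engine query is stubbed with fixed documents), meant only to show the strictly sequential, O(n) flow.

```python
def collect(phrase):
    """Stages 1-2: query the search engine and collect information units
    (stubbed here with fixed documents instead of a live WWW search)."""
    return ["machu picchu hiking guide", "hiking gear list and more words"]

def extract(doc):
    """Stage 3: extract features for one information unit."""
    return {"words": len(doc.split())}

def visual(feat, all_feats):
    """Stage 4: visual parameter as a ratio against the maximum."""
    mx = max(f["words"] for f in all_feats)
    return feat["words"] / mx

def run_pipeline(phrase):
    """Each stage runs once, in order, over the n collected units,
    giving the strictly sequential flow described above."""
    units = collect(phrase)
    feats = [extract(u) for u in units]
    return [visual(f, feats) for f in feats]  # handed to the results GUI
```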
Software Narrative
When the application is first executed, a GUI appears to accept search terms.
Once the search terms are entered and the Search button is selected, a user agent (a built-
in Perl object) connects to a search engine on the WWW. The user agent then submits
the search terms to the search engine and retrieves the resultant URL pages. The URL
pages are then stored in a data structure on the local computer.
Using Perl's pattern matching utilities, search words are tagged in each resultant
URL page. A visualization is then created using these tagged words in combination with
three other features of the document. The four features of the visualization are:
1. search phrases:
This feature is the total number of search phrases in the URL document. To be counted
as a search phrase, all words of the search phrase must exist within a certain neighborhood
of words.
2. search words:
This feature is the total number of occurrences of each search word in the document.
3. links:
This feature is the total number of relevant WWW links in the document. To be counted
as a relevant link, at least one of the search words must be in the link.
4. words:
This feature is the total number of words in the document.
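The four features can be sketched in Python as follows. This is a simplified illustration, not iVE's Perl code: the function name and regexes are my own, and the phrase check here is a plain window test anchored at the first search word, rather than the convolution-style algorithm described later in the Pseudo-Code section.

```python
import re

def extract_features(html, search_words):
    terms = [w.lower() for w in search_words]
    # 3. links: count hrefs that contain at least one search word
    hrefs = [h.lower() for h in re.findall(r'href="([^"]*)"', html, re.I)]
    links = sum(1 for h in hrefs if any(t in h for t in terms))
    # strip markup first (the no_tags step), then split into words
    words = re.sub(r"<[^>]*>", " ", html).lower().split()
    # 2. search words: total occurrences of every search word
    word_hits = sum(words.count(t) for t in terms)
    # 1. search phrases: all search words within one neighborhood,
    #    anchored (for simplicity) at occurrences of the first word
    neighborhood = len(terms) + 3
    phrases = sum(1 for i, w in enumerate(words)
                  if w == terms[0]
                  and all(t in words[i:i + neighborhood] for t in terms))
    # 4. words: total number of words in the document
    return {"phrases": phrases, "search_words": word_hits,
            "links": links, "words": len(words)}
```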
Pseudo-Code
The pseudo-code for iVE is as follows and is sectioned by the modules of
the code:
Main
{
create gui for user to enter search terms and view the tutorial;
call driver subprogram;
loop;
}
The driver subprogram below calls all the necessary subprograms used in
creating the information visualization images. Each of the subprograms is further
described following this driver subprogram.
sub driver
{
do_search;
get_urls;
get_htmls;
initialize_parameter_image_hashes;
initialize_image_parameters;
splitting_out_keywords;
no_tags;
analysis_num_words;
analysis_search_word_count;
analysis_num_phrases;
analysis_num_links;
find_max_parameter_value;
create_ratios;
trouble_shoot;
another_gui;
}
Pseudo-code/description of each module called by the driver subprogram:
sub do_search
{
create user agent;
connect to WWW;
submit search words to search engine;
collect results of search;
}
sub get_urls
{
extract pertinent URLs from the collected results;
}
sub get_htmls
{
have the user agent retrieve each extracted URL's document;
}
sub initialize_parameter_image_hashes
{
initialize hash tables used in creating the image;
}
sub initialize_image_parameters
{
initialize parameters for creating the image;
}
sub splitting_out_keywords
{
put each search word in its own index of an array;
determine the number of search words entered by the user;
}
sub no_tags
{
filter out all html tags, leaving only document content;
}
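A regex-based version of this filter might look like the following. This is a Python sketch of the idea (the real module uses Perl's pattern matching), and a regex pass like this is approximate; it is good enough for word counting, not full HTML parsing.

```python
import re

def no_tags(html):
    """Drop script/style blocks outright, then any remaining markup,
    leaving only the document content used for word counting."""
    html = re.sub(r"<(script|style)\b.*?</\1\s*>", " ", html,
                  flags=re.I | re.S)
    html = re.sub(r"<[^>]+>", " ", html)       # remaining tags
    return re.sub(r"\s+", " ", html).strip()   # collapse whitespace
```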
sub analysis_num_words
{
determine the number of words in each html document;
}
sub analysis_search_word_count
{
determine the number of search words in each html document;
}
sub analysis_num_phrases
{
determine the number of phrases in each html document;
//all search words taken together are considered a phrase
}
sub analysis_num_links
{
determine the number of related url links in each html document;
}
sub find_max_parameter_value
{
determine the maximum value of each feature;
//will be used in creating images that are relative to each other.
}
sub create_ratios
{
for each document determine the percentage of each feature relative to the max value of
each feature;
//again, these values are used when creating each image so that all images are relative to each other.
}
sub trouble_shoot
{
output each filtered document to a file as well as the corresponding features, for use in
troubleshooting;
}
sub another_gui
{
creates the visualization for each document as well as an active link to each document;
}
sub tutorial_gui
{
creates the tutorial gui;
//activated when the user clicks on the corresponding button.
}
Algorithm/Modules of Interest
The only module of interest is the "analysis_num_phrases". All the other
modules contain standard algorithms that use standard data and control structures.
"analysis_num_phrases" pseudocode
The following algorithm determines the number of search-phrase occurrences in a
document using a technique similar to convolution in image processing: each document
can be thought of as a one-dimensional image, and each word in the document as a pixel
(i.e., a word pixel).
First, each word of the document is stored in documentWordArray[ ] and each
word of the search phrase is stored in searchWordArray[ ]. Corresponding to the
documentWordArray[ ] is the documentWordTagArray[ ], which is used to store a
number tag for each word. This array is initialized to all zeroes. Corresponding to the
searchWordArray[ ] is the uniqueNumberArray[1, 3, 5, 7, 17], which contains the
number tags assigned when search words are found in the documentWordArray[ ]. These
numbers were chosen because they produce a distinctive summed value.
Currently the algorithm can accept a search phrase of up to five words, corresponding to
the five numbers in the uniqueNumberArray[ ].
Next, the phraseDeterminer is calculated by summing the unique numbers
corresponding to the search words. For example, if the search phrase is climbing (1),
mount (3), everest (5), the phraseDeterminer is 1 + 3 + 5 = 9.
The neighborhood is then calculated as numberOfSearchWords + 3.
This equation is based on the idea that filler words like to and and can be
interjected between the search words. A larger neighborhood, however (greater than 5
words for a search phrase of two words), runs the risk of a repeated search word being
found in the neighborhood, resulting in a missed search phrase (see next
paragraph).
Last, the number of search phrase occurrences is determined. Starting at the
beginning of documentWordTagArray[ ] a neighborhood of contiguous indexes is
summed. This summed value is compared to the phraseDeterminer and if equal the
numberOfPhrases for the document is incremented. This neighborhood is then shifted
one array location right (toward the end of the array) and the process is repeated until the
end of the documentWordTagArray[ ] is reached.
//analysis_num_phrases
uniqueNumberArray = [1, 3, 5, 7, 17]
-put each word of the Search Phrase into searchWordArray[ ]
-put each word of the document in documentWordArray[ ]
-initialize the corresponding documentWordTagArray[ ] to contain zeroes
for i = 0, i < numberOfWordsInDocument, i = i+1
    for j = 0, j < numberOfSearchWords, j = j+1
        if documentWordArray[i] = searchWordArray[j]
            documentWordTagArray[i] = uniqueNumberArray[j]
        end if
    end for
end for
//neighborhood is the range of indices of documentWordTagArray[ ]
//to search for a phrase
neighborhood = numberOfSearchWords + 3
//phraseDeterminer is the value used to determine if a phrase
//has been found in a neighborhood of words
phraseDeterminer = 0
for k = 0, k < numberOfSearchWords, k = k+1
    phraseDeterminer = phraseDeterminer + uniqueNumberArray[k]
end for
//determine the number of phrases
for m = 0, m < numberOfWordsInDocument, m = m+1
    if documentWordTagArray[m] not equal 0
        phraseSum = 0   //reset for each neighborhood
        for n = m, n < m + neighborhood, n = n+1
            phraseSum = phraseSum + documentWordTagArray[n]
        end for
        if phraseSum = phraseDeterminer
            numberOfPhrases = numberOfPhrases + 1
        end if
    end if
end for
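For reference, the same algorithm in Python. This is an illustrative translation, not iVE's Perl source; note that the running sum (phraseSum) must start at zero for every neighborhood, or every phrase after the first would be missed.

```python
def analysis_num_phrases(document_words, search_words):
    unique_numbers = [1, 3, 5, 7, 17]          # up to five search words
    tags = [0] * len(document_words)
    for i, word in enumerate(document_words):  # tag each matching word
        for j, term in enumerate(search_words):
            if word == term:
                tags[i] = unique_numbers[j]
    neighborhood = len(search_words) + 3
    determiner = sum(unique_numbers[:len(search_words)])
    phrases = 0
    for m in range(len(tags)):
        if tags[m] != 0:
            # the window sum starts at zero for each neighborhood
            phrase_sum = sum(tags[m:m + neighborhood])
            if phrase_sum == determiner:
                phrases += 1
    return phrases
```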
Chapter 4
iVE Visualizations
iVE Visualization Explanations
Figure 2 Main GUI
Figure 2 is the Main GUI that appears once the iVE application is started.
Figure 3 iVE Tutorial
Figure 3 is the Tutorial GUI that appears when the Tutorial Button in the Main
GUI is clicked. The information in this GUI should be read before submitting a search in
order to understand the results of the iVE visualization.
Figure 4 Actual iVE Results
This figure illustrates the results of an actual iVE query; this particular GUI was
the result of entering the search phrase machu picchu hiking. The perceptual information
is immediately evident: look for the result that has the most dots on or near the outer
circle. There is also significant cognitive information upon further examination,
perceived by noting which type of dot is closest to the outer circle (each dot represents a
different feature of the document, as stated in Figure 3 iVE Tutorial).
Figures 5 through 7, Least, Median, and Most Relevant iVE Results and their
corresponding URL documents, respectively:
The next set of figures shows what iVE considers the least relevant URL
document (figure 5), the median relevant URL document (figure 6), and the most relevant
URL documents (figure 7). The individual visualizations were cropped from figure 4
Actual iVE Results. The URL document below the visualization is the corresponding
URL document activated by clicking on the associated URL button. There is some
subjectivity as to exactly which of the most relevant documents is the most relevant, but
there is a definite objective difference between the least relevant documents and the most
relevant documents.
Figure 2 iVE Main GUI, appears when iVE application is started
Figure 3 iVE Tutorial, activated by the Tutorial Button in the Main GUI
Figure 4 Actual iVE Results, resulting from entering the search term machu picchu hiking in the Main GUI
Figure 4 cont.
Figure 5 A Least Relevant Visualization and Associated URL Document, this visualization is cropped from Figure 4 Actual iVE Results, third from the bottom
Figure 6 A Median Relevant Visualization and Associated URL Document, this visualization is cropped from Figure 4 Actual iVE Results, sixth from the top
Figure 6 cont.
Figure 7 A Most Relevant Visualization and Associated URL Document, this visualization is cropped from Figure 4 Actual iVE Results, second from the bottom
Figure 7 cont.
Figure 7 cont.
Figure 7 cont.
Figure 7 cont.
Chapter 5
iVE Evaluation
Usability Evaluation Design Background
Usability evaluation for GUIs and websites is somewhat standardized: GUIs and
websites are usually evaluated with respect to some task being performed. For example,
people planning to buy a book first search for the book, review the results of
the search, and then select and purchase the book. Each of these actions can have many
levels or steps.
Looking at just the purchasing step, a typical process is for the customer to first
select a book, add it to some sort of shopping cart, check out, go to a purchasing form,
and then confirm the purchase. There is usually a separate GUI associated with each of
these sub-steps. As you can see, from the viewpoint of website/GUI design, the overall
accomplishment of a task is quite involved.
In this task-based domain the items usually evaluated are: ease of navigating
through the website, ability to redo/undo actions, smooth flow from GUI to GUI,
knowledge of relative location within the website [21], etc. In regard to information
visualization, however, there is currently no standardized usability testing [22], and
usability testing in this domain is considered an ad-hoc process [23]. There are several
reasons for this. First, information visualization is still a very young area of research;
during the years 1995-1997 only 6% of the papers about information visualization dealt
with usability testing [24]. Also, how a person interprets a visualization depends on
factors such as their socio-demographic profile, cognitive abilities, competency in
understanding the visualization, knowledge base [25], amount of training [26], etc.
Trying to take all these items into account in a standardized usability test is a daunting
task and has yet to be done.
Although there is no standardized usability testing for information visualization,
some papers have categorized and structured testing into certain groups.
One paper categorizes usability testing for information visualization into three main
groups [27]:
1. Visual representation usability, referring to the expressiveness and quality of the
resulting image.
2. Interface usability, related to the set of interaction mechanisms provided to users
so they can interact with the data through the visualization.
3. Data usability, devoted mainly to the quality of data for supporting users’ tasks.
In addition, a recent paper breaks down testing of information visualization into four
types of experiments/studies [28]:
1. Controlled experiments comparing design elements, such as specific GUI widgets
(scroll bars, alpha sliders, types of buttons, etc.).
2. Usability evaluation of a tool by itself where feedback is provided by users.
3. Controlled experiments consisting of comparing two or more tools such as
comparing types of tree visualization tools.
4. Case studies of tools in a realistic setting where users are able to spend a lot of
time with a tool in an actual environment of use (i.e. not just in a lab) and then
give feedback.
There are also some general usability-testing concepts:
First, tests should be simple enough to be accomplished in a short period of time,
and the items or questions in the test need to be specific enough to measure performance.
The same input data should be used for each evaluation to provide some uniformity and
standardization [29][30].
Second, a captive audience should be used [31]. In general an online survey form is not
a good idea since people will sometimes fill out the survey before using the website/GUI
or, after using the website/GUI, will often leave the website/GUI without filling out the
survey.
Third, the evaluator performing the test should watch what people actually do
rather than rely on their feedback after the test [32]. People in general tend to bend the
truth toward what they think the evaluator wants to hear; in telling the evaluator what
they did, people are actually reporting what they remember doing; and people will
rationalize their actions, when all that is known for certain is what they actually did.
Taking all the above factors into account, any statistical results from a usability test
should be interpreted with caution.
It should also be noted that iVE focuses on a very specific and narrow segment of the
information retrieval task; it is, in a sense, a one-two punch in contrast with the whole
15-round fight of the full information retrieval task.
Evaluation Design
Taking all these factors into account, a usability test for iVE has been tailored.
First, no questions are asked of participants directly during the evaluation. Second,
observations are recorded on how the participants respond to and interact with the
visualization tools. And third, an evaluation sheet is given to the participants after they
have used the tool. The evaluation questions combine two experiment categories: a
usability evaluation of the tool itself, and a controlled experiment comparing iVE to
another tool (www.kartoo.com). All users give each tool the same input, and there are
only 10 questions on the evaluation sheet to keep the evaluation short and hopefully
concise.
Evaluation Results
The actual evaluation form can be seen in the next section. The statistical data
collected for iVE and kartoo are listed below:
Rating Scale: 5 – good / works well ... 1 – needs work
iVE kartoo
1. Tutorial: 4.00 4.42
2. Visual Organization: 4.00 4.54
3. Meaningful Use of Color: 3.82 4.00
4. Data Interaction: 3.64 4.64
5. Learning Curve: 3.91 4.63
6. Visualization Content: 3.36 4.55
7. Visualization Comparison: 3.81 3.72
8. Use in Other Domains: 3.72 4.09
9. Speed of Use: 3.63 3.91
10. Overall Functionality: 3.91 3.91
There were 11 participants in this study; all were college students.
As the statistical data show, the participants preferred kartoo in almost every
category, yet there appears to be a contradiction between liking a tool and the tool's
functionality: the ‘10. Overall Functionality’ ratings of iVE and kartoo are equal, and the
one category where participants preferred iVE was ‘7. Visualization Comparison’. Both
of these categories are more functional in nature.
The general comments noted about kartoo were: “…it was more interactive”, “…
the background coloring is better”, “…it was more fun to use” (because of its animation).
The general comments noted about iVE were: “…it was more old-school” and “…it was
more precise”.
Overall, in this demographic, the entertainment factor appears to have been a
determining factor in rating a visualization tool. And again, statistical information about
information visualization testing should be taken with caution.
Observations of the participants using iVE revealed the following functional
deficiencies: the scrolling resolution was too coarse; the tutorial would benefit from
some simple pictorial comparisons at the beginning; participants often hit the Enter key
instead of clicking the Submit button; and the participants preferred having all the
visualizations visible in one window instead of scrolling within the window to see them
all.
Evaluation Form
Comparison / Usability Evaluation
RATING SCALE: 5 – GOOD / WORKS WELL ... 1 – NEEDS WORK
1. Tutorial: Was the tutorial or help file understandable/informative [20, p. 229]?
iVE Rating: __________ kartoo Rating: __________
2. Visual Organization: Were the visualizations presented in an orderly manner and were the visualizations evident (one visualization did not hide or obscure another) [18, slide 10]?
iVE Rating: __________ kartoo Rating: __________
3. Meaningful Use of Color: Did the colors help in interpreting/examining the visualization [18, slide 11]?
iVE Rating: __________ kartoo Rating: __________
4. Data Interaction: Were you able to examine/interact with the data the visualization represented [15, section 1, para. 4]?
iVE Rating: __________ kartoo Rating: __________
5. Learning Curve: Was it easy to learn how to use the visualization tool?
iVE Rating: __________ kartoo Rating: __________
6. Visualization Content: By just looking at the visualization, with no further action, did the visualization provide useful characteristics of the document?
iVE Rating: __________ kartoo Rating: __________
7. Visualization Comparison: Did the visualization provide you with a means of comparing one document to another?
iVE Rating: __________ kartoo Rating: __________
8. Use in Other Domains: Based on the design of the visualization could it be applied to other areas, such as determining which photographs are more relevant, or which bacteria are the most relevant (probable cause), or which drugs are the most relevant (remedy for certain conditions)?
iVE Rating: __________ kartoo Rating: __________
9. Speed of Use: Was the image quick to interpret [15, section 2]?
iVE Rating: __________ kartoo Rating: __________
10. Overall Functionality: Did the visualization tool save you time in determining which document was more relevant?
iVE Rating: __________ kartoo Rating: __________
Chapter 6
Conclusions and Future Research
Accomplishment
Based on the evaluation study, it appears iVE has accomplished the task of
conveying document relevancy through information visualization. iVE is quick to
interpret and gives the user meaningful cognitive information; the user is able to quickly
select and view just the most relevant documents.
Problems Faced and Solutions
The following two standardization issues were encountered:
1. One problem encountered is that URLs do not have a standardized format: some URLs
end in a '/', some do not; some start with 'www.', some do not; and so on. By analyzing
valid URLs that did not appear in the visualization, it was apparent how each URL
differed from what was already being 'pattern-matched'. To compensate, another pattern
was added for matching (though there are probably more variations).
2. There are no standardized usability tests for information visualization, so though an
evaluation was performed, its statistical results should be viewed with some caution.
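The URL-format problem in item 1 could also be reduced by normalizing each URL before matching. This is a sketch of the idea in Python (the function name and regex are my own; iVE instead added extra match patterns):

```python
import re

def normalize_url(url):
    """Canonicalize the variants noted above: drop the scheme and a
    leading 'www.', lower-case the host, and strip a trailing slash."""
    m = re.match(r"(?:https?://)?(?:www\.)?([^/]+)(/.*)?$", url, re.I)
    if not m:
        return url
    host = m.group(1).lower()
    path = (m.group(2) or "").rstrip("/")
    return host + path
```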
Improving iVE
Reviewing the results of the evaluation, there are several areas where iVE can
improve and several areas where there appears to have been some confusion; the
apparent confusion is based on the participants’ comments and actions.
The areas of improvement are covered first, followed by a discussion of the areas
of confusion. For clarity, each evaluation area is italicized and followed by the
corresponding question number used in the evaluation. Many of these areas are
somewhat coupled, so an improvement in one area may improve another; for example, a
better tutorial (1.) could improve the speed of use (9.) as well as the learning curve (5.).
The tutorial (1.) and learning curve (5.) areas can be improved by putting the
visualization examples at the beginning of the tutorial instead of the end. Only two types
of visualizations should be shown, one visualization representing a relevant document
and the other visualization representing an irrelevant document. Several sets of these
visualizations could be shown to give the user a quick visualization training session.
The visual organization (2.) and speed of use (9.) can be improved by first
discarding the scrollbar paradigm in favor of displaying all the visualizations in one
window. A hierarchical ordering could be designed wherein the most important
visualization resides on an inner delimiting ring and visualizations of less importance
reside on concentric outer rings. The visualizations near the center of this display would
have a localized reddish background to draw the user’s eye to the more relevant
documents, while visualizations farther from the center would have colors graded toward
the blue end of the spectrum to downplay their relevancy. This design would also be in
agreement with Fitts’ Law.
Furthermore, participants did not mentally form a shape from the positions of the
feature dots and use it as a relevancy factor. This overall shape could be made more
explicit by enveloping all the indicator dots in a single outline and filling the
non-feature-dot areas within the outline with a color different from the background,
resulting in an image whose shape is easier to see.
An additional enhancement to iVE, not directly related to the evaluation, would
be to make the visualizations expandable and collapsible so that more cognitive
information could be captured in the visualization. For example, a user would click on
one of the existing high-level feature dots to bring up another visualization of the same
design that breaks the selected feature down into sub-features.
The first area of confusion is visualization content (6.). The result in this area
directly conflicts with the area of visualization comparison (7.), where iVE had a higher
rating than kartoo. The question to pose is: how can a tool with less visualization content
produce better visualization comparison? Due to this confusion, an improvement in this
area may not be necessary.
The next two areas of confusion are data interaction (4.) and meaningful use of
color (3.). Based on participants’ comments such as “…it was more interactive”, “…I
like the blue background color”, “…it was more fun to use”, and “…it’s just like neurons
firing”, and based on participants moving the cursor around the kartoo visualizations to
view the animation, it appears there was confusion between data interaction and
visualization animation/action, and between meaningful use of color and aesthetically
pleasing color. Again, due to this confusion, it may not be necessary to change iVE in
these areas.
In the area of use in other domains (8.), no comments or actions were observed.
But since any feature that is measurable or quantifiable can be visualized by iVE, and
since iVE is already better in the area of visualization comparison, improvement in this
area may not be necessary.
A drawback to iVE is that it has only been tested in the domain of the written word.
iVE should theoretically work in any domain, but further testing is needed for
confirmation. A challenge exists in finding meaningful features to extract in other
domains.
Impact of Research
The impact of this research is that iVE can significantly reduce the time a user
spends searching for relevant information. There could also be a side effect: the overall
shape of the iVE visualization could reveal other qualities/groupings of information
units, much as molecules with similar shapes have similar properties.
References
CHAPTER 2
1 R. Chimera, May 1992, "Value Bars: An Information Visualization and Navigation
Tool for Multi-attribute Listings", In Proceedings of the ACM SIGCHI Conference on
Human Factors in Computing Systems, pp. 293-294
2 G. G. Robertson, S. K. Card, J. D. Mackinlay, "Information visualization using 3D
interactive animation”, Edited by Paul S. Jacobs, publisher Lawrence Erlbaum
Associates, ISBN 0-8058-1189-3.
3 M. A. Hearst, 1995, "TileBars: Visualization of term distribution information in full-
length document access", Association for Computing Machinery
4 G. G. Robertson, J. D. Mackinlay, S. K. Card, 1991, "Cone Trees: Animated 3D
visualizations of hierarchical information", Proceedings of SIGCHI'91, pp. 189-194
5 Mackinlay J.D., Robertson G.G., Card S.K., 1991, "The Perspective Wall: Detail and
context smoothly integrated", In Proceedings of SIGCHI'91, pp. 173-179
6 G. W. Furnas, 1986, "Generalized Fisheye Views", Proceedings of SIGCHI'86, pp. 16-
23
7 S. E. Poltrock, G. W. Furnas, K. M. Fairchild, 1988, "SemNet: Three-Dimensional
Graphic Representations of Large Knowledge Bases", Cognitive Science and its
Application for Human-Computer Interface, R. Guindon (Editor), Lawrence Erlbaum,
New Jersey
8 R. Spence, M. Apperly, 1982, "Data base navigation: an office environment for the
professional", Behaviour and Information Technology 1, (1), 43-54
9 J. D. Mackinlay, G. G. Robertson, S. K. Card, "The Information Visualizer: A 3D User
Interface for Information Retrieval", Xerox Palo Alto Research Center
10 A. Spoerri, Nov 1993, "InfoCrystal: A visual tool for information retrieval and
management", Proceedings of Information Knowledge and Management '93,
Washington, D.C.
CHAPTER 3
11 F. Nicolls, Oct. 11, 2004, The Human Visual System, University of Cape Town,
section 1, http://dsp7.ee.uct.ac.za/~nicolls/lectures/eee401f/hvs.pdf
12 J. Landay, 1997, University of California, Berkeley,
http://bmrc.berkeley.edu/courseware/cs160/fall97/lectures/09-24-97/sld016.htm
13 Federal Aviation Administration, The Human Visual System, FAA Human Factors
Awareness Web Course,
http://www.hf.faa.gov/Webtraining/Cognition/Information/info_model1.htm
14 J. Landay, 1999, University of California, Berkeley,
http://bmrc.berkeley.edu/courseware/cs160/fall99/lectures/human-abilities/
sld034.htm
15 J. Landay, 1997, University of California, Berkeley,
http://bmrc.berkeley.edu/courseware/cs160/fall97/lectures/09-24-97/sld023.htm
16 J. Landay, 1999, University of California, Berkeley,
http://bmrc.berkeley.edu/courseware/cs160/fall99/lectures/human-abilities/
sld038.htm and sld039.htm
17 Federal Aviation Administration, The Human Visual System, FAA Human Factors
Awareness Web Course,
http://www.hf.faa.gov/Webtraining/VisualDisplays/HumanVisSys1a.htm
18 F. Nicolls, Oct. 11, 2004, The Human Visual System, University of Cape Town,
section 2, http://dsp7.ee.uct.ac.za/~nicolls/lectures/eee401f/hvs.pdf
19 Carl Zetie, 1995, "Practical User Interface Design", ch. 5, McGraw-Hill
20 Carl Zetie, 1995, "Practical User Interface Design", p. 199, McGraw-Hill
CHAPTER 5
21 Dal Sasso Freitas C.M., Luzzardi P.R.G., Cava R.A., Winckler M.A.A., Pimenta
M.S., Nedel L.P., Evaluating Usability of Information Visualization Techniques, ch.
3.2, http://www.inf.ufrgs/cg/publications/carla/FreitasEtAHHC.pdf
22 Dal Sasso Freitas C.M., Luzzardi P.R.G., Cava R.A., Winckler M.A.A., Pimenta
M.S., Nedel L.P., Evaluating Usability of Information Visualization Techniques, ch.
1, http://www.inf.ufrgs/cg/publications/carla/FreitasEtAHHC.pdf
23 Plaisant C., 2004, The Challenge of Information Visualization Evaluation, Human-
Computer Interaction Laboratory, University of Maryland, p. 3,
http://www.cs.ubc.ca/~tmm/courses/cpsc533c-04-fall/readings/plaisant04eval.pdf
24 Fabrikant S.I., 2001, Evaluating the Usability of the Scale Metaphor for Querying
Semantic Spaces, Department of Geography, University of Santa Barbara, ch. 1,
http://www.geog.ucsb.edu/~sara/html/research/cosit01/fabrikant_cosit01.pdf
25 Fabrikant S.I., 2001, Evaluating the Usability of the Scale Metaphor for Querying
Semantic Spaces, Department of Geography, University of Santa Barbara, ch. 3,
http://www.geog.ucsb.edu/~sara/html/research/cosit01/fabrikant_cosit01.pdf
26 Plaisant C., 2004, The Challenge of Information Visualization Evaluation, Human-
Computer Interaction Laboratory, University of Maryland, p. 2,
http://www.cs.ubc.ca/~tmm/courses/cpsc533c-04-fall/readings/plaisant04eval.pdf
27 Dal Sasso Freitas C.M., Luzzardi P.R.G., Cava R.A., Winckler M.A.A., Pimenta
M.S., Nedel L.P., Evaluating Usability of Information Visualization Techniques, ch.
1, http://www.inf.ufrgs/cg/publications/carla/FreitasEtAHHC.pdf
28 Plaisant C., 2004, The Challenge of Information Visualization Evaluation, Human-
Computer Interaction Laboratory, University of Maryland, p. 2,
http://www.cs.ubc.ca/~tmm/courses/cpsc533c-04-fall/readings/plaisant04eval.pdf
29 Dal Sasso Freitas C.M., Luzzardi P.R.G., Cava R.A., Winckler M.A.A., Pimenta
M.S., Nedel L.P., Evaluating Usability of Information Visualization Techniques, ch.
2, http://www.inf.ufrgs/cg/publications/carla/FreitasEtAHHC.pdf
30 Plaisant C., 2004, The Challenge of Information Visualization Evaluation, Human-
Computer Interaction Laboratory, University of Maryland, p. 3,
http://www.cs.ubc.ca/~tmm/courses/cpsc533c-04-fall/readings/plaisant04eval.pdf
31 Nielsen J., Aug. 18, 2004, Usability Testing: Get a Captive Audience + No Online
Surveys, para. 5,
http://www.masternewmedia.org/news/2004/08/usability_testing_get_a_captive.htm
32 Nielsen J., Aug. 18, 2004, Usability Testing: Get a Captive Audience + No Online
Surveys, para. 2,
http://www.masternewmedia.org/news/2004/08/usability_testing_get_a_captive.htm