
Geovisual evaluation of public participation in decision making: The grapevine

Robert Aguirre, Timothy Nyerges

Department of Geography, University of Washington, Box 353550, Seattle, WA 98195, USA

Article info

Article history:

Received 18 December 2008

Received in revised form 11 December 2010
Accepted 27 December 2010
Available online 1 January 2011

Keywords:

Grapevine

Geovisual analytics

Public participation

Decision making

Spatio-temporal events

Human–computer–human interaction

Journal of Visual Languages and Computing 22 (2011) 305–321
doi:10.1016/j.jvlc.2010.12.004
This paper has been recommended for acceptance by S.-K. Chang.
Corresponding author: R. Aguirre.

Abstract

This article reports on a three-dimensional (time–space) geovisual analytic called a "grapevine." People often use metaphors to describe the temporal and spatial structure of online discussions, e.g., "threads" growing as a result of message exchanges. We created a visualization to evaluate the temporal and spatial structure of online message exchanges based on the shape of a grapevine naturally cultivated in a vineyard. Our grapevine visualization extends up through time with features like buds, nodes, tendrils, and leaves produced as a result of message posting, replying, and voting. Using a rotatable and fully interactive three-dimensional GIS (Geographic Information System) environment, a geovisual analyst can evaluate the quality of deliberation in the grapevine visualization by looking for productive patterns in fine-grained human–computer–human interaction (HCHI) data and then sub-sampling the productive parts for content analysis. We present an example of how we used the technique in a study of participatory interactions during an online field experiment about improving transportation in the central Puget Sound region of Washington called the Let's Improve Transportation (LIT) Challenge. We conclude with insights about how our grapevine could be applied as a general purpose technique for evaluation of any participatory learning, thinking, or decision making situation.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

The last decade and a half has seen significant progress in tool building for online participatory interaction. Cyberinfrastructure tools now exist that are capable of supporting large numbers of people over wide areas in participatory thinking, learning, and decision making activities [1–6]. Especially promising is the potential use of cyberinfrastructure to scale the "analytic–deliberative" decision making process, as advocated by the National Research Council (NRC), to larger numbers of people participating from wider regional areas [1–3]. Analysis, the systematic application of specific theories and methods for interpreting data and drawing conclusions about phenomena, is one way of knowing what course of action to take in decision making. Deliberation, any process for communicating or raising and collectively considering issues, is another way of knowing what course of action to take. In analytic–deliberative decision making, analysis and deliberation are used together as complementary ways of knowing.

Over the same decade and a half that has seen significant progress in tool building for online participatory interaction, social scientists evaluating public participation in decision making topics like transportation improvement have not been reporting similar levels of progress when it comes to the quality of interaction. Evaluators have not always found meaningful and diverse interaction when non-experts communicate with experts and executives in a public meeting, and the skepticism is not limited to use of online tools [3]. If online systems can be built to scale participatory interactions out to greater numbers of people over a wider regional area, how can they also be built to enhance the quality and the outcome of those participatory interactions?

A common explanation for the lack of quality interaction has to do with how public meetings are convened, or how carefully meetings have been structured to ensure meaningful participation. A surprising assumption one often hears is that people are not as engaged online as they are in a face-to-face situation, presumably no matter how carefully public participation has been structured. A number of findings from long-term surveys like the Digital Future Report suggest otherwise, demonstrating significant increases in Internet use by American households engaged in online communities dedicated to social or political issues or in learning outside of the classroom [7]. For example, about 80 percent of Americans surveyed use the Internet, one of the highest rates in the world. A large and growing percentage of those surveyed were members of online communities related to social causes, although it should be noted that relatively low percentages believed the Internet was a tool for public influence in terms of giving people more of a say in what the government does. Still, about one-third of those surveyed in the Digital Future Report agree or strongly agree that by using the Internet they could have more political power, and a large and growing percentage say that going online can help people better understand politics. We tend to view the question of whether to structure an online or a face-to-face situation as merely one of methodological preference. Online tools that can work consistently and repeatedly to support representative and broadly based samples of people in structured public participation situations rather than conventional public meetings can certainly be made useful. The fundamental research question is really about exactly what 'kinds' of online situation work better than others.

A seldom-given explanation for the lack of quality interaction during public participation situations has to do with evaluation itself. Are the evaluation methods used by social scientists able to distinguish productive from unproductive interactions at a fine-grained level? In any given participatory situation there may have been many short or isolated periods in which a productive deliberation emerged, but the observations and methods used for evaluation, e.g., summative self-report measures like questionnaires or post-situation interviews, were not able to pick them out [2,3].

Probably the most overlooked advantage in selecting an online system to convene a participatory learning, thinking, or decision making situation is that the system itself can act as a fine-grained data collection tool. An online system can unobtrusively log almost everything that voluntary participants are doing with the system. Ideally, these observations are triangulated with appropriately sub-sampled self-report data from online questionnaires and interviews. With this kind of mixed-method approach using event log data, evaluators are better prepared to find productive interactions hidden amongst generally unproductive ones, a pattern that may be symptomatic of situations where large groups of lay people from a wide regional area are asked to voluntarily work together to make choices about highly technical information.

There is also a strategic advantage in using online systems to convene participatory interactions. Tying social science evaluation more closely to participant interaction data recorded in a system event log also ties the needs of evaluators more closely to the world of the designers and developers of the original tool, tightening the feedback loop, so to speak. Designers and developers, in turn, might learn to use evaluation results to decide which features were useful and which were not. Even if closer interaction between developers, users, and evaluators results in a series of disagreements about who controls whose needs or whose work drives the whole enterprise, we see that as progress, and the results can certainly have a stimulating effect. We expect that entangling the data and methods of social science evaluators with the choices that developers make and the needs that users have will not only result in more advanced systems but in measurably better ones.

In addressing the fundamental question of how to scale public participation to larger numbers of people over wider regional areas using cyberinfrastructure, there are at least two major challenges. One major challenge is improving evaluation. Valid social science research methods are not always followed, especially when it comes to sampling. Self-report assessments collected from a participant by the actual convener of the public participation situation at the end of the process, which depending on the research design are vulnerable to the biases or sympathies of the conveners themselves, might simply be unable to distinguish exactly where a deliberation was suddenly productive amidst uneven or declining activity.

Another major challenge is to make sure that the results of evaluation, both positive and negative, get back to the designers and developers to improve the system itself. There should be, but seldom is, meaningful interactive feedback between the people who designed and developed an online system, the people who use the system in ways that work best for their own specific purposes, and the people who evaluate the process and the outcomes. Investigating the use of cyberinfrastructure to improve the quality and scale of analytic–deliberative decision making is an inherently interdisciplinary project synthesizing three very different domains of research: system design and development, participant use, and social science evaluation. The three different domains of research work together like a "virtual" organization, whether the different groups of people recognize it or not.

1.1. Designing and developing an online tool

For projects like ours with the resources to build a custom application, we started from scratch in the domain of system design and development using a conceptual model of "best process" for structuring public participation in decision making [1–3]. A tool was designed and developed by the Participatory GIS for Transportation (PGIST) project using recommendations from the National Research Council about broadly based analytic–deliberative decision making [1–3]. The base software platform has proven flexible enough for multiple uses and has since been modified for a Voicing Climate Concerns (VCC) project about the regional impacts of global climate change on the Oregon coast [8]. The Web portal houses a collection of participatory tools and instructions organized in a sequence with a workflow engine. The design and development team built the Web portal so that participants would have access to a certain set of tools at a certain time to complete an objective or "Step."

Fig. 1. Side view and top-down view of the grapevine visualization showing the central geographic location of the PPGIS server relative to participant locations.

The LIT Challenge was designed as a month-long agenda of five steps. Each step contained two or more sub-steps, twelve in all. For brevity, we focus on our expectations and instructions to participants at the step level. Each step in the LIT Web portal had either a generally "deliberative" or "analytic" objective. To start, participants enter information about their travel path and then voice values and concerns about improving transportation in the central Puget Sound region. A moderator performs a synthesis of concerns using a special online tool by grouping them into a set of common themes. Participants review the common themes and then vote on whether they agree that the themes adequately represent the original concerns (LIT Step 1). After voicing their concerns and voting, participants review and weigh different factors useful as criteria for selecting a transportation improvement package (LIT Step 2). Each participant then creates a unique package with a set of geospatial analysis tools, first selecting projects from a large spatial inventory of proposed projects all over the central Puget Sound region and then selecting funding mechanisms to cover the cost (LIT Step 3). After each participant creates a transportation improvement package, an off-line process is used to synthesize all of the participants' contributions and identify six diverse packages. Participants are asked to deliberate about the six packages and then vote on the six packages in order of preference (LIT Step 4). Finally, after the most preferred package is selected, participants review and endorse a final report to agency decision makers and technical specialists about the outcomes of the decision making process and the final package recommendation (LIT Step 5).
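For orientation, the five-step agenda just described can be written down as a simple data structure. The sketch below is illustrative only: the step names are paraphrased from the description above (and from the step label that appears later in Table 1), not taken from the LIT Web portal itself, which organized steps and sub-steps with its own workflow engine.

```python
# Illustrative sketch of the LIT Challenge agenda; names are paraphrased,
# not the LIT Web portal's actual step identifiers.
LIT_STEPS = [
    {"step": 1, "objective": "Discuss concerns",
     "summary": "Enter travel path, voice concerns, review and vote on "
                "moderator-synthesized themes"},
    {"step": 2, "objective": "Weigh factors",
     "summary": "Review and weigh criteria for selecting an improvement package"},
    {"step": 3, "objective": "Create a package",
     "summary": "Select projects from a spatial inventory and funding mechanisms"},
    {"step": 4, "objective": "Deliberate and vote on six packages",
     "summary": "Discuss six synthesized packages and rank them by preference"},
    {"step": 5, "objective": "Review and endorse the final report",
     "summary": "Endorse the report and the final package recommendation"},
]

for step in LIT_STEPS:
    print(f"LIT Step {step['step']}: {step['objective']}")
```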

1.2. Convening an online field experiment

The LIT Challenge was a month-long, online and asynchronous decision making situation in late 2007 involving more than 200 community participants from a three-county area around Seattle, WA, who were asked to be part of a citizen advisory group. In total, 246 participants registered for the experiment. Of the 246 registrants, 179 qualified for payment based on geographic criteria, representing the group we call our quota participants. On average, only about half of the 179 quota participants were active in the LIT Challenge at any one time, ranging from a high of 60 percent to a low of 40 percent by the end of the experiment. The participants' collaborative task was to decide on the best transportation improvement package involving more than $20 billion for the central Puget Sound region. The research question motivating the LIT online field experiment was, "What Internet platform designs and capabilities, particularly including GIS technology, can improve public participation in 'analytic–deliberative' transportation decision making within large groups?"

The project used a quasi-experimental research design to test use of the LIT Web portal, crossing a 'field' study with a laboratory 'experiment' [9]. Online field experiments balance the advantage of a controlled experimental situation with the advantage of observing people interacting in a natural situation over a long period of time, equally important for validity [9–11]. Current federal and state transportation laws mandate public participation in decisions about long-range planning, capital improvement programming, and major investment studies. Thus, it was given that public participation in decision making about improving transportation was both necessary and desirable, something that admittedly may not apply in every participatory situation. As a result, the project enjoyed key collaboration with public agencies in the central Puget Sound region in developing the transportation improvement programming substance of the experiment.

The month-long LIT Challenge experimental decision process (15 October 2007–13 November 2007) was timed to coincide with a 6 November 2007 ballot initiative asking voters to support a $17.8 billion regional transportation improvement package for the central Puget Sound region of Washington State, including King, Snohomish, and Pierce counties. Fig. 1 displays the unfiltered grapevine visualization georeferenced to the three-county area of the online field experiment. Further below we explain how we used a rotatable and fully interactive version of the three-dimensional time–space grapevine visualization to evaluate the quality of deliberative interactions during the LIT Challenge.

In Fig. 1, a 2D top-down static view of the grapevine shows the central location of the LIT Web portal server on the University of Washington campus relative to the locations of all the registered participants throughout the central Puget Sound region. Participant locations were self-reported by informed and consenting voluntary human subjects as either a geocoded home address or home zip code. Based upon online questionnaire responses, most of our participants were actively interested in the topic of improving transportation and felt comfortable using online tools. In other words, we were convening a fairly natural participant use situation and expected high levels of activity.

1.3. Evaluating an online field experiment

We evaluated the quality and scale of public participation in the online field experiment using a combination of client–server event log data; online questionnaires administered before, during, and after the experiment itself; a sample of voluntary screen recordings; and finally, a sample of in-depth face-to-face interviews. Over the course of the month-long LIT Challenge, the LIT Web portal logged over 120,000 client–server interaction events. We downloaded the Web portal event database, coded the interaction events by whether they were the result of analytic or deliberative HCHI activities, and then applied our grapevine technique using 3D GIS software.

For the remainder of the paper we turn to the development and use of the three-dimensional time–space visualization itself, organized as follows. In Section 2, we preface our explanation of the grapevine with some background about the origins of the three-dimensional geovisualization. We then provide a brief literature review outlining how our grapevine technique synthesized three methodologies including sequential data analysis, social network analysis, and time–space geography. In Section 3, we describe the grapevine and all its organic-looking features, which were designed to be like the anatomy of an actual grapevine plant. We break the complex visual structure into its component features including a main stem, nodes, buds, tendrils, and leaves. We also explain the three different types of human–computer–human interaction events that can generate the component features of the grapevine and why they are important in a visual representation of the quality of deliberation. In Section 4, we report our findings using the grapevine technique in terms of filtering and analyzing productive clusters of deliberation in the LIT Challenge. We then suggest how the grapevine can be used for wider application to any participatory situation. Finally, in Section 5 we conclude by considering how the results of evaluation using geovisual analytic techniques like the grapevine can provide feedback on best practices for system design and system use to improve the quality and scale of public participation in decision making.

2. The grapevine as an evaluation method

The purpose in creating a three-dimensional space–time visualization was to balance the power of computing to process fine-grained interaction event data with the human process of spatial thinking [12] using a GIS, under conditions where it was difficult or undesirable to use powerful parametric statistical techniques. An unanticipated outcome of our evaluation phase was that despite our best efforts we could not use parametric statistical techniques because some basic assumptions could not be met. Most of the difficulties stemmed from the fact that with well over 200 participants and almost as many different event types, the majority of data sets for input into statistical packages were large and sparse matrices with unknown data category frequency distributions, notoriously difficult for parametric statistical methods to deal with.

One interesting non-parametric statistical technique we discovered was a categorical analysis called "configural frequency analysis," a method designed to focus on people rather than variables [13]. With this method we could analyze categorical profiles of participants based on differences in interactions with the LIT Web portal or differences based on self-reported data from questionnaires and interviews. The problem was that the primary research question motivating the LIT Challenge research design did not match an exploratory investigation into how different personal or demographic characteristics generally affected the quality and scale of online participation, even if suggestive.

Given our expertise using GIS software, we felt that we could develop a geovisualization for analysis of human–computer–human interaction data. Our geovisualization would be capable of displaying very large amounts of data in a form that human spatial thinking could make sense of without unanticipated or unintended confusion about patterns. To come up with the grapevine idea, we looked at three different bodies of research including exploratory sequential data analysis [14], social network analysis [15,16], and time–space or time geography emphasizing visualization [17–23].

2.1. Sequential: exploratory sequential data analysis (ESDA)

The analysis of sequences has a long past and there have been many statistical techniques offered. For instance, a geographic treatment by Getis [24] described a quantitative method for exploring adjacent categories of things that he suggests could be used to study land use types as observed from the window of a moving train as easily as it could be used to study soil type cross-sections in physical geography. Contemporary sequential analysis techniques have become quite sophisticated. For example, Magnusson's [25] "T-patterns" method can find recurring sequential patterns of event types that are not necessarily adjacent but repeat in the same order hidden in the midst of other event types.

Exploratory Sequential Data Analysis (ESDA) represents another set of techniques for characterizing the sequential structure of events in time, and it is widely used in human–computer interaction. The side view graphic in Fig. 1 represents a sequential structure of events, but our emphasis is as much on the location of the interaction events in geographic space as it is on their occurrence in time. ESDA is explained in much greater detail in Sanderson and Fisher [14]. The general purpose HCI spectrum for ESDA is composed of eight event types, ranging from a fine-grained mouse-click or keyboard press that lasts a millisecond, to coarse-grained social activities lasting hours, to project activities lasting months. Depending on the theoretical research question, one researcher might tend to code every single mouse-click and keyboard press whereas another researcher might want to lump fine-grained activity into a single long act like "using the computer." Scientists who start out preferring to ask cognitive, behavioral, or social theoretical questions will generally choose different ranges of the HCI activity spectrum to code. Cognitive and behavioral researchers prefer high-frequency and short-interval observations like eye movements or mouse-clicks. Social researchers prefer low-frequency and long-interval observations like deliberating or coming to a consensus. The HCI spectrum does not reduce every important difference between HCI researchers to merely recognizing one part of the spectrum while being blind or deaf to others, since there are statistical and grammatical method differences as well [26]. However, there are trade-offs and overlaps between behavioral, cognitive, and social theoretical approaches to the study of HCI. A spectrum of HCI time intervals has also been discussed by other authors [27,28], who list micro-scale acts affecting human cognitive and behavioral performance with visual displays of information on a computer. We used a similar idea in thinking about how to infer client–server events against a continuous spectrum of more "analytic" versus more "deliberative" HCHI activities.

Researchers in fields ranging from computer science and communications to geography and sociology have used ESDA techniques to interpret coded sequences of HCI events in single user, group face-to-face, and group online experimental settings. ESDA techniques have also been incorporated into a software application called MacShapa [29]. In the late 1990s, Jankowski and Nyerges [30] used MacShapa to analyze dozens of hours of videotaped group decision making by systematically coding consecutive 30-s intervals of time into discrete categories of decision making activity, and then seeing if certain expected sequences of coded activity tend to occur more frequently. Researchers have applied sequential techniques to verbal protocol and client-side interaction log data in order to analyze small group use of a geovisualization tool for the purpose of usability and cognitive testing [31,32]. Other researchers like Tanimoto, Hubbard, and Winn [33] and Keel [34] have reported on the design and development of visualizations to support group sense-making and learning using computational agents to unobtrusively gather data about sequences of individual user activities, and then perform an analysis and return the results back to participants as a form of instant feedback.
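To give a concrete flavor of this style of analysis, the sketch below counts first-order transitions between coded activity categories in a time-ordered sequence, in the spirit of the interval-coding work cited above. It is a generic illustration with made-up category labels, not a reconstruction of MacShapa or of any specific coding scheme from these studies.

```python
from collections import Counter, defaultdict

def transition_counts(coded_sequence):
    """Count how often each coded activity category is immediately
    followed by another category in a time-ordered sequence."""
    counts = defaultdict(Counter)
    for current, following in zip(coded_sequence, coded_sequence[1:]):
        counts[current][following] += 1
    return counts

# Hypothetical codes for consecutive intervals of group activity
# (labels are illustrative only).
intervals = ["browse_map", "post_message", "reply", "vote",
             "browse_map", "post_message", "reply", "reply", "vote"]

for source, followers in transition_counts(intervals).items():
    for target, count in followers.items():
        print(f"{source} -> {target}: {count}")
```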

2.2. Social: social network analysis

Social network analysis has a long past in sociometry and has become a popular method for analyzing relationships between people, data, computers, and concepts on the Internet. The early origins of social network analysis in sociometry and sociology are discussed elsewhere [15,16]. A social network analysis can infer social roles (e.g., centrality in communication, broker, leader, follower, etc.) based on the frequency of who communicates with whom and for how long. Researchers in a number of computer fields outside of sociology have used social network analysis of event log data for process mining in business enterprise systems because event logs are "process-aware" [15,16]. Unlike e-mail traffic, event logs record the step within a structured process at which an event took place, e.g., whether participants exchanged a message while working on LIT Step 1 or LIT Step 2. Process mining is a method for exploring how people can use the same tool but work together in unique sequences of work, based on actual executions by human users as opposed to simply describing an expected process based on a formal workflow diagram. A process mining approach to social network analysis visualizes not only the frequency of interpersonal relationships but also the roles of people, data, computers, and concepts within the context of tasks in a "process." Process mining with event logs can illustrate how users working together tend to deviate from the normal workflow, which is important information when a system has been designed to be flexible. In a sense, process mining combines an exploratory sequential data analysis of an unfolding process of interaction with a social network analysis of emerging social roles and relationships.
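As a rough illustration of how a process-aware event log lends itself to this kind of analysis, the sketch below tallies a who-replied-to-whom edge list keyed by process step from a handful of simplified, hypothetical event records; a real analysis could pass the weighted edges to a dedicated social network package to compute roles and centrality.

```python
from collections import Counter

# Hypothetical, simplified reply events recovered from a process-aware
# event log: (sender, recipient, step in the structured process).
reply_events = [
    ("user_17", "user_42", "Step 1"),
    ("user_42", "user_17", "Step 1"),
    ("user_17", "user_42", "Step 1"),
    ("user_08", "user_42", "Step 4"),
]

# Weighted edges of the emergent social network, kept separate per step so
# that deviations from the expected workflow remain visible.
edges = Counter(reply_events)

for (sender, recipient, step), weight in sorted(edges.items()):
    print(f"{step}: {sender} -> {recipient} (weight {weight})")
```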

Although the identification of roles in a process can provide useful insights, the influence of temporal and spatial context is largely unaccounted for when charting social networks. Social network analyses rest on a mere snapshot of social relations across a featureless physiographic space called a "sociogram." Geographic constraints affecting the exchange of information and messages over the Internet might seem irrelevant, or at least not nearly as relevant as geographic constraints affecting the movement of people and goods. However, in a social network analysis the context for decision making situations like transportation improvement, which includes constraints and opportunities associated with the geographic features of the terrain itself (e.g., geographic locations, neighborhoods with certain social or political characteristics, ease of access to public transportation), is ignored. It is impossible to investigate durable, unavoidable, or persistent geographic factors behind emergent online social networks when the participants are removed from a real social and geographic context. Zook et al. [35] surveyed current literature on the role of geographic locations in the study of the Internet and make a supportive case for the role of geography. More than a decade earlier, Wallace [36] had expressed the need to adapt social network analysis to real geographic space as "sociogeographic networks," which he defines as spatially focused nets of social interaction. Despite these and other exceptions, we find it curious that it is difficult to find much published research in the field of human–computer interaction that infers a sociogeographic network based on the physical locations of server and client computers.

Our reading of the social network analysis literature, particularly social network analysis based on event logs, led us to consider how to factor real geographic locations into social networks that emerge during shared processes of decision making work. We considered the advantages and disadvantages of plotting proximity based on frequency and type of interaction, as in social network analysis, versus proximity based on geographic location, and concluded that we would still be able to use social network analysis techniques with our grapevine technique by sub-sampling interactions at various cross-sections of the grapevine, which indicate not only actual time but also the step in a structured process. For instance, the top-down view in Fig. 1 conveys the idea of a "network" and could mean a network of people interacting through a Web portal, a network of computers interacting as clients and servers, a network of spatial locations based on proximity and geographic context, and most importantly, all three simultaneously. Advanced development of the grapevine technique in a 3D GIS might involve a procedure or set of calculations to visually "reproject" or warp geographic relationships based on an evolving social network space, allowing the analyst to visually toggle back and forth between geographic and social space.
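One way to prototype the "reproject or warp" idea is to blend each participant's display position between geographic coordinates and coordinates produced by a social-network layout, with a slider controlling the mix. The sketch below uses plain linear interpolation and invented coordinates; the choice of network layout algorithm is left open.

```python
def blend_positions(geo_xy, net_xy, t):
    """Blend geographic coordinates with social-network layout coordinates.

    geo_xy, net_xy: dicts mapping participant id -> (x, y)
    t: 0.0 shows pure geographic space, 1.0 shows pure social-network space.
    """
    return {
        pid: ((1 - t) * geo_xy[pid][0] + t * net_xy[pid][0],
              (1 - t) * geo_xy[pid][1] + t * net_xy[pid][1])
        for pid in geo_xy
    }

# Invented coordinates: projected geographic positions versus a layout
# derived from communication frequency.
geo = {"user_17": (0.0, 0.0), "user_42": (10.0, 5.0), "user_08": (4.0, 9.0)}
net = {"user_17": (2.0, 2.0), "user_42": (2.5, 2.2), "user_08": (6.0, 3.0)}

print(blend_positions(geo, net, 0.5))  # halfway between the two spaces
```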

2.3. Space–time: time geography

Hagerstrand [19] introduced the classical space–time (actually "time–space") concept of time geography during a 1969 presidential address to the European Congress of the Regional Science Association. He visualized the time–space concept by plotting an individual's X and Y coordinate location and Z timestamp in a time–space volume in order to trace hypothetical movement across geographic space and up through time. On the basis of patterns in such visualizations, he believed inferences could be made about constraints on mobility, goal-oriented behavior, avoidance-oriented behavior, or the social and cognitive influence of new information on mobility-related decisions. Visualization of the time–space concept was, according to Hagerstrand [19], a technique serving a larger "point of view" that focused on the disaggregate fate of individual human beings in complex systems.

Hagerstrand's time–space concept has experienced an empirically motivated rebirth over the last decade [20–23,37]. Unlike Pred's [38] theoretical interest in time geography as a complement to structuration theory, recent interest in time–space geography is related to the widespread availability of Global Positioning System (GPS) devices. A recent compilation of research using Hagerstrand's time geography can be found in an edited volume by Miller [22]. An important factor in the recent empirical turn in time geography has been the ability to collect travel paths with mobile GPS devices and then plot the data in three-dimensional space using GIS software. In this respect, our capability to use the LIT Web portal to capture hundreds of thousands of unobtrusive observations of client–server interactions is similar to the ability to collect unobtrusive observations of participants' locations using GPS.
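For readers unfamiliar with the classic space–time visualization, the sketch below plots a single hypothetical path with two spatial coordinates and a vertical time axis using matplotlib's 3D toolkit; the coordinates are invented and the figure only conveys the general form of such plots, not anything produced for this study.

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection on older matplotlib

# Hypothetical space-time path: x and y are planar coordinates, t is hours elapsed.
x = [0.0, 0.0, 2.0, 2.0, 5.0]
y = [0.0, 1.0, 1.0, 4.0, 4.0]
t = [0, 2, 3, 5, 8]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # time is extruded on the vertical axis
ax.plot(x, y, t, marker="o")
ax.set_xlabel("x (space)")
ax.set_ylabel("y (space)")
ax.set_zlabel("time (hours)")
plt.show()
```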

An emerging investigation by geographers using time geography is how the use of mobile devices, the Internet, or geospatial information technology can alter the choices that an individual has to make about where they need to go next. Mobile devices and information technologies do not erase unavoidable constraints of geography and time as much as they simply remove the individual's need to confront them in the first place. For instance, with a mobile device like a cell phone a person is free to choose not to make a long trip and instead communicate remotely while traveling to another destination. Yu and Shaw [23] designed a set of adjusted time–space prisms to visualize new constraints that apply when an individual mixes physical and virtual activities together.

Although the grapevine side view in Fig. 1 resembles Hagerstrand's classical time–space visualizations in terms of extruded time data referenced to geographic space, time geography concepts were limited as a basis for analyzing the movement and exchange of information between computers. The measurable units of analysis in time geography studies are people moving in geographic space over continuous periods of time, not packets of information moving in cyberspace between people and computers at mostly fixed locations over long but intermittent periods of time. Contemporary time geography research has not been able to specify much about the behavioral, cognitive, or social influence of information events across space and through time beyond how it specifically influences travel behavior. In addition, with the exception of animal movement studies that use GPS data from multiple animals to visualize time–space patterns and infer social and behavioral concepts [39], time geography has not been used to investigate very large groups over long periods of time spanning months or years.

3. Creating a grapevine with event data

The client–server interaction "event" represented our proxy for analytic or deliberative HCHI "activity" (see Table 1). In any investigation, there are units of analysis that simply carry data and then there are units of analysis pertinent to theory. In geospatial data modeling, an event is a data element standing for something that exists in geographic space but only for a limited amount of time [40–44]. Yuan and Hornsby [43] suggest that just as data "objects" represent static geographic "entities," data events represent ephemeral geographic "occurrents" that happen in space for a certain period of time and then go away. An event represents a transaction or exchange of information via the Internet between a client browser application on a computer somewhere and the LIT server on the University of Washington campus. Thus the LIT Web portal event represents an interaction between computers, one that we use to infer the occurrence of HCHI between people in real geographic space and time. All observations of client–server interaction events are recorded by the LIT Web portal itself in the system event log. Unlike conventional server event logs, nearly everything that a participant requested a browser to do had to be executed directly by the server. We tested this using multiple user screen recordings of every step in the LIT Web portal. Thus, we felt comfortable that client–server interaction events were being reliably recorded by the LIT Web portal log. The main challenge was examining whether client–server events were a meaningful proxy for the analytic and deliberative HCHI activities of actual participants in real space and time, pertinent to the theory that participation in an analytic–deliberative process of interaction can improve decision making.

Table 1. A representative example of how we made inferences about HCHI activity based on client–server event data logged by the LIT Web portal. When an individual "event" is logged by the LIT Web portal, it is tagged with additional information at multiple levels of granularity, helping us to infer specifically where in the analytic–deliberative process the event occurred.

Six levels of granularity in event data logged by the LIT Web portal:

CCTAgent.setCommentVotingId=1087375
1. "Event": The server logs a record that it successfully executed a key script and method for a specific client identified by a User ID, giving it a unique Event ID and including additional information about the target and any user-generated content as a result of executing the script and method. Based on the fact that the user generated vote content as a "1" and not a "0," and because the log includes the target of the vote, we can infer that a specific human participant voted to "agree" with another participant's comment, identified as comment No. 1087375.

c0-e1=number:1086435
2. "Paired-Event": Now we also know that the target of the voting event was a comment event that had been logged earlier by the server as Event ID 1086435, from which we can find out the specific user ID of the participant who posted the original message, when they posted it, and where they posted it from (i.e., their self-reported home address or zip code).

ioid=1107097
3. "Technique": Because the event was logged with the information object ID (ioid) 1107097, we know the participant voted while doing a sequence of events associated with browsing all of the messages in Step 1 that the moderator felt fit into a "Governance and Funding" theme.

activityId=1078294
4. "Method": Activity ID 1078294 also tells us that the participant voted while working within "Step 1c: Review summaries." If browsing the "Governance and Funding" theme were possible in more than one LIT Step or sub-Step, we would be able to distinguish where the participant was working when they voted.

contextId=1078302
5. "Session": The Web portal supports multiple steps within an experiment. Context ID 1078302 tells us that the participant voted to agree while working within "Step 1: Discuss concerns."

workflowId=1078232
6. "Situation": The LIT Web portal can support multiple experiments. Workflow ID 1078232 tells us that the participant voted in the "Final LIT" experiment within the LIT Web portal.

The LIT Web portal logged 120,396 client–server interaction events during the LIT Challenge. Client applications of registered participants logged 120 different categories of events, which we call event types. Every record of a client–server interaction event is logged with a unique identifier as well as other attributes including the time, the registered user ID, the specific script and method (event type) called, the LIT Step and sub-Step in the Web portal where the script and method was requested, the unique ID of whatever content the client requested from the server, and the unique ID of any user-generated content the client posted to the server (see Table 1). The frequency of the 120 participant event types varied widely. For example, the event type indicating a participant deleted his or her own transportation concern occurred only once, whereas the event type indicating a participant voted on a post or a reply occurred over 2000 times.
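The sketch below shows one way a single logged event and its granularity levels (Table 1) might be represented for analysis. The identifier values for the paired event, information object, activity, context, and workflow are taken from the example in Table 1; the class definition, the field names, the user ID, the event ID, and the timestamp are ours, not the LIT Web portal's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class LoggedEvent:
    """A client-server interaction event, organized by the granularity
    levels illustrated in Table 1 (field names adapted for readability)."""
    event_id: int                   # unique ID of this event (hypothetical value below)
    user_id: str                    # registered participant (hypothetical)
    timestamp: datetime             # when the server executed the request (hypothetical)
    event_type: str                 # script and method called (120 distinct types)
    target_event_id: Optional[int]  # "Paired-Event": e.g., the comment being voted on
    ioid: int                       # "Technique": information object being browsed
    activity_id: int                # "Method": LIT sub-step, e.g., Step 1c
    context_id: int                 # "Session": LIT step, e.g., Step 1
    workflow_id: int                # "Situation": experiment instance
    content: Optional[str]          # user-generated content, e.g., a vote of "1"

vote_event = LoggedEvent(
    event_id=9999999,                          # hypothetical
    user_id="user_17",                         # hypothetical
    timestamp=datetime(2007, 10, 18, 14, 30),  # hypothetical
    event_type="CCTAgent.setCommentVoting",    # script/method, per Table 1
    target_event_id=1086435,                   # comment event being voted on (Table 1)
    ioid=1107097, activity_id=1078294,
    context_id=1078302, workflow_id=1078232,   # IDs from Table 1
    content="1",                               # "1" = agree, "0" = disagree
)
print(vote_event.event_type, "->", vote_event.target_event_id)
```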

Having distinguished 120 different participant event types, we then categorized event types by whether we felt they were the result of an analytic or deliberative HCHI activity or sequence of acts. To match our theoretical expectations, we relied primarily on the definitions of analysis, deliberation, and broadly based deliberation provided by the NRC [1,2]. In addition, we looked at findings in HCI, experimental psychology, and other fields of study to try to understand a cognitive basis of difference between analytic and deliberative HCHI activity in terms of what the human mind may be doing in perceiving, processing, and making sense of incoming auditory/verbal or visual/pictorial information [45–47].

A sequence of HCHI acts for sending or exchanging a message, usually in the form of words, is a deliberative activity. For instance, an event type indicating a participant used the concern tool to write several sentences about the social inequities of using tolls to pay for an expensive transportation improvement project falls on the deliberative side of the spectrum. On the other hand, a sequence of HCHI acts like clicking a link to look at a map of a transportation improvement project is an analytic activity. We focused first on distinguishing the most important and clearly deliberative HCHI activities and identified five. One deliberative HCHI activity was passively viewing someone else's message and presumably reading it, which was somewhat problematic to infer from client–server events and thus was not used to a great extent in our grapevine technique. The other four HCHI activities included actively sending a message using one of four different tools: (1) type your concern, (2) type your comment on someone else's concern, (3) type your post, and finally, (4) type a reply to someone else's post.

Even though we elaborated criteria for distinguishing analytic and deliberative HCHI activities, we did not at this stage belabor the distinction by looking into the subtleties of each and every HCHI activity possible with the tools of the LIT Web portal. For instance, using the tool for adding a transportation-related term into the LIT glossary or using the tool for tagging one's own concern with a set of keywords might be a little more on the deliberative side of the HCHI spectrum, because they involve communicating to others with text content. However, for simplicity, we decided that any event type other than the five most clearly deliberative types, with the exception of voting, was an "analytic" activity.

Voting to agree or disagree with a message represented the only popular HCHI activity in the LIT Web portal that seemed both analytic and deliberative simultaneously. In the deliberative democracy literature, a vote is generally considered an analytic act [6]. The content of a vote is a binary or rank variable like one would expect as an analytic judgment of a phenomenon. However, like a deliberative activity, voting on a message can reinforce a shared understanding about values and concerns, or simply act as optional shorthand for the text message, "I agree with what you are saying." Therefore, we decided to think of event types indicating voting on concerns, concern comments, posts, or replies in the LIT Challenge as deliberative activities that uniquely straddled both the analytic and deliberative side of the HCHI spectrum.
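A minimal sketch of the resulting coding rule, assuming placeholder event-type names (the LIT event log actually recorded script-and-method identifiers): five message-sending activities are coded deliberative, the four voting types are flagged as straddling both sides, and every remaining event type defaults to analytic.

```python
# Placeholder event-type names; the LIT event log recorded script-and-method
# identifiers rather than these labels.
DELIBERATIVE_TYPES = {
    "post_concern",           # type your concern
    "comment_on_concern",     # type your comment on someone else's concern
    "post_message",           # type your post
    "reply_to_post",          # type a reply to someone else's post
    "view_message",           # passive viewing/reading (used only sparingly)
}
VOTING_TYPES = {
    "vote_on_concern", "vote_on_concern_comment",
    "vote_on_post", "vote_on_reply",
}

def code_event_type(event_type: str) -> str:
    """Code an event type as deliberative, voting (straddles both), or analytic."""
    if event_type in DELIBERATIVE_TYPES:
        return "deliberative"
    if event_type in VOTING_TYPES:
        return "voting"      # treated as deliberative while retaining its analytic character
    return "analytic"        # default for every other event type

print(code_event_type("reply_to_post"))    # deliberative
print(code_event_type("browse_gis_map"))   # analytic
```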

Fig. 2. A static representation of the grapevine displaying all analytic–deliberative activity (A), analytic activity only (B), and deliberative activity only (C). A visual analyst would use a rotatable 3D display and be able to zoom in and out of the grapevine.

3.1. The grapevine metaphor

Software designers and online moderators achieve an analytic–deliberative balance by getting participants to deliberate with each other about a specific set of analytical procedures, essentially making the participants latch their attention onto the details of the analysis itself and in the process attempting to discourage tangential conversations. By continually adjusting and encouraging certain kinds of activity, a moderator can move participatory interactions as a whole towards generating productive "clusters" of deliberation about the analysis used for decision making.

The grapevine visualization uses the familiarity of a recognizable plant shape in nature to help the human eye evaluate large amounts of fine-grained data about the quality and scale of the analytic–deliberative process in public participation decision making. In nature, grapevines can often be found growing wild up the sides of fences or just about any free-standing structure. However, the grapes that these wild natural vines produce are not well suited for human consumption. As opposed to a wild grapevine, cultivated grapevines in a vineyard are situated so that they naturally latch onto a specifically arranged set of support structures like a metal cable or wooden stakes. Then the grapevine plant itself is continually pruned and coerced to grow in a certain way that will generate productive grape clusters for harvesting. Thus the challenge for the viticulturist is to balance vegetative growth (energy spent to spread out new stems and leaves) with reproductive growth (energy spent to produce a certain abundance of grape clusters). The viticulturist achieves this balance by training the grapevine and constantly pruning its growth.

The grapevine visualization is based on the metaphor of grapes in nature not only because of the familiar plant-like shape but also because the metaphor captures the "spirit" of participatory interaction [49]. Wild grapevines in nature are like the deliberations that can unfold in any given online community. They can grow off in every direction based upon whatever happens to be of interest to whomever the participants happen to be at the time. However, let us assume a decision has to be made about courses of action aimed at changing existing situations into preferred ones, in other words a "design" goal, but in this case a goal that affects the population of a particular area [48]. The challenge in convening a productive broadly based analytic–deliberative decision making process about design is to balance participants' breadth of deliberation (energy spent broadening a diverse spectrum of topics, like the vegetative growth of a grapevine in nature) with participants' depth of focused deliberative insight (energy spent addressing simplifying assumptions or omissions in a particular analysis, like the reproductive growth of a grapevine in nature).

3.2. A supporting analytic structure

In considering the grapevine visualization, the first thing to understand is that in an analytic–deliberative process the deliberative activities are supposed to focus on analysis, which supports the nature of the discussion. In other words, deliberation is not just a deliberation about anything that comes to mind. The analytic support structure is a sequence of analytical activities. Core sequences of analytic activity like browsing and selecting GIS maps thus provide a structure so that deliberative activities like posting messages can have something specific to talk about.

For the LIT online field experiment, the developers created a set of GIS-based transportation planning analysis steps for participants to browse and select. Participants create a package by selecting from a set of improvement projects and then adding a preferred funding mechanism. Participants were given a choice of 19 major categories of proposed road or transit improvement projects and 15 major categories of funding options. Participants were assisted in what choice to make with a "Tax Calculator" tool that opened in a new tab in a browser. For example, participants could browse the funding option category "Gas tax increase" and select from five options: 2 cents per gallon, 6 cents per gallon, 12 cents per gallon, 16 cents per gallon, or 20 cents per gallon. Using the "Tax Calculator," participants could enter personal travel and household financial information in order to estimate how much money they would be responsible for on a yearly basis given the choice of funding options. After doing this analysis in a step by step way using the tools available in the LIT online system, participants were then asked to deliberate about the transportation improvement programming analysis itself, in order to then agree upon one preferred set of projects and funding mechanisms.
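As a back-of-the-envelope illustration of the kind of estimate the "Tax Calculator" produced for the gas-tax options listed above (our arithmetic only; the actual tool also handled other funding mechanisms and household details):

```python
def annual_gas_tax(gallons_per_year: float, cents_per_gallon: float) -> float:
    """Estimated additional gas tax paid per year, in dollars."""
    return gallons_per_year * cents_per_gallon / 100.0

# A household buying roughly 600 gallons a year, under the 12-cents-per-gallon option.
print(f"${annual_gas_tax(600, 12):.2f} per year")  # prints $72.00 per year
```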

The dense structure in blue in Fig. 2 represents the participants' analytic interactions, i.e., browsing and selecting information for the purpose of performing some operation. The analytic interactions with the LIT Web portal in Fig. 2 represent the support structure without which participants would have nothing to latch onto in terms of discussion. The organic-looking grapevine itself represents deliberative message exchange and is described in more detail in Table 2 (see also Fig. 3). The green grapevine structure in Fig. 2 represents messaging activity and does not include analytic activities like browsing and then selecting messages. The grapevine features in Figs. 1–5 were processed and displayed in a fully interactive 3D GIS environment, ESRI's ArcGIS 3D Analyst or ArcScene.

Table 2. Seven main features of the grapevine, in terms of what each represents as a visualization of event-based HCHI activity and expected patterns in productive versus unproductive growth. See also Fig. 3.

A. Main stem
   What it represents: A running average of the locations of the last 10 participants who generated a message.
   Productive: Stem twists back and forth because of rapid message turn-taking from participants at different locations.
   Unproductive: Main stem grows straight up with little twisting because of a lack of rapid message exchange or lack of geographic diversity.

B. Node and internode
   What it represents: A message added along the main stem from a particular location and point in time. Nodes can generate buds if there is a reply.
   Productive: Many large nodes with short internodes, because participants are rapidly posting messages and voting to agree or disagree.
   Unproductive: Few or mostly small nodes are generated, because participants are not posting messages or voting on each other's messages.

C. Bud
   What it represents: A message that at least one other participant replied to with their own message. Buds generate shoots and leaves.
   Productive: Many large buds are generated because many participants are replying to each other's messages.
   Unproductive: Few mostly small buds, or a greater proportion of nodes to buds, because participants are not replying to each other's messages.

D. Tendril
   What it represents: A vote to agree or disagree with the message in a node, bud, or leaf. A tendril grows from a node, bud, or leaf to the specific time and location of the voting participant.
   Productive: Nodes with many tendrils, both short and long, branching out in all directions at a relatively low angle, indicating rapid and geographically diverse voting responses.
   Unproductive: Nodes with a few short tendrils branching out in only a few directions at a relatively high angle, indicating delayed and non-geographically diverse voting responses.

E. Shoot
   What it represents: A reply to a bud. A shoot grows from a bud and ends in a leaf at the time and location of the responding participant.
   Productive: Many shoots, both short and long, branching out in all directions at a relatively low angle to the bud.
   Unproductive: Few or no shoots branching out in only a few directions at a high angle relative to the bud.

F. Leaf
   What it represents: A message sent as a reply. A leaf is generated from a bud and exists at the end of a shoot.
   Productive: Many large leaves, because participants voted to agree or disagree with a reply.
   Unproductive: Few or small leaves, because few participants voted to agree or disagree with a reply.

G. Cluster
   What it represents: A cluster of shared understanding, the proverbial fruits of an analytic–deliberative process. A synthesis of sense and meaning in message exchange, best harvested from productive areas of a grapevine.
   Productive: Participants balance their discussion energies between posting their own messages or new topics, with replying to each other's messages and focusing on their shared understanding about something in particular.
   Unproductive: Participants spend too much discussion energy posting their own messages about unrelated topics, rather than replying to others or discussing their shared understanding about something.

Fig. 3. The features of a geovisual grapevine in comparison to the anatomy of the natural grapevine it was specifically designed to look like, including the main stem (A), nodes (B), buds (C), tendrils (D), shoots (E), leaves (F), and clusters (G) (see Table 2).

Fig. 4. Another static display of the grapevine showing the main stem when all other features are turned off (A), nodes turned on (B), nodes displayed proportional to the number of votes (C), and buds displayed proportional to the number of replies (D). Dashed line in blue represents the end of LIT Step 1. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 5. More static displays of the grapevine showing the stem with nodes displayed proportional to the number of votes (A), node tendril features turned on (B), and nodes, leaves, and leaf tendril features turned on (C).

R. Aguirre, T. Nyerges / Journal of Visual Languages and Computing 22 (2011) 305–321314

out of nodes and latch onto the analytic support struc-ture; and finally (5) shoots that grow from buds and endin a leaf (Fig. 3). By recognizing productive patterns innodes, buds, leaves, main stem, and tendrils with visualcues, a visual analyst can find the most productiveclusters of shared understanding to ‘‘harvest’’ for furtheranalysis. One can get a feel for the grapevine by zoomingin and rotating it in an interactive three-dimensionalenvironment. Using the fully interactive environment,any visual analyst will be able to visually recognize andrank sections of the grapevine if they know what patternsto look for, that is, guided by visual cues as a mentalpicture of what productive versus unproductive activitylooks like. In addition, in order to validate visual analystrankings of each visual cue, described below, we devel-oped a unique computer calculation for each (Table 3).

3.3. Nodes connected by a main stem

The first major features of the grapevine are nodes on a main stem. A node represents when a user posts a message and the main stem of the grapevine grows from one node to the next (Fig. 4). Whenever a participant posts a new message it creates a new node. The node is a dynamic point event that we plotted in three-dimensional time–space in ESRI's ArcScene using the self-reported location of the participant in latitude and longitude coordinates and the time (Pacific Standard Time) that the participant posted the message. However, the grapevine's main stem is not just a plot of the location of new messages in time and space wherever they occurred. The main stem was something that we generated to represent the changing center of gravity where the

Table 3. Results of using the grapevine technique, comparing visual cue ranks based on human spatial thinking skills with ranks derived from computer calculations; in each Cue and Mean cell, the human rank is given first, followed by the computer-calculated rank in parentheses. Messages exchanged during the days representing the top 12 most productive clusters were selected for further content analysis.

Date     Cluster   Cue 1     Cue 2     Cue 3     Cue 4     Cue 5     Cue 6       Mean          Diff.
12-Nov   33        29 (25)   29 (27)   29 (NA)   30 (29)   27 (26)   5b (5a)     28.8 (26.8)   −2.1
11-Nov   32        21 (20)   21 (23)   27 (NA)   28 (28)   19 (19)   5a (5a)     23.2 (22.5)   −0.7
10-Nov   31        28 (27)   27 (30)   28 (NA)   29 (27)   25 (27)   5a (5a)     27.4 (27.8)   0.4
9-Nov    30        26 (26)   28 (26)   24 (24)   22 (26)   26 (28)   5a (5a)     25.2 (26.0)   0.8
8-Nov    29        8 (8)     12 (11)   14 (16)   16 (15)   9 (6)     5a (5a)     11.8 (11.2)   −0.6
7-Nov    28        20 (21)   17 (19)   9 (10)    10 (12)   17 (21)   5a (4a)     14.6 (16.6)   2.0
6-Nov    27        14 (14)   15 (15)   18 (19)   18 (7)    16 (13)   4b (4a)     16.2 (13.6)   −2.6
5-Nov    26        18 (15)   14 (17)   15 (14)   15 (6)    18 (14)   4a (4a)     16.0 (13.2)   −2.8
4-Nov    25        25 (24)   24 (22)   19 (21)   23 (22)   23 (22)   4a (4a)     22.8 (22.2)   −0.6
3-Nov    24        11 (13)   10 (12)   7 (8)     3 (10)    4 (16)    4a (4a)     7.0 (11.8)    4.8
2-Nov    23        7 (7)     11 (13)   4 (5)     4 (9)     8 (9)     4a (3c)     6.8 (8.6)     1.8
1-Nov    22        10 (16)   23 (25)   25 (NA)   24 (25)   21 (23)   3c (3c)     20.6 (22.3)   1.7
31-Oct   21        9 (10)    22 (20)   17 (20)   17 (20)   20 (20)   3c (3c)     17.0 (18.0)   1.0
30-Oct   20        30 (NA)   30 (29)   30 (NA)   27 (30)   28 (NA)   3b (3c)     29.0 (29.5)   0.5
29-Oct   19        4 (5)     13 (7)    10 (6)    6 (5)     6 (7)     3c (3c)     7.8 (6.0)     −1.8
28-Oct   18        24 (28)   25 (28)   22 (23)   26 (21)   24 (29)   3b (3a)     24.2 (25.8)   1.6
27-Oct   17        19 (18)   18 (16)   26 (NA)   25 (24)   17 (25)   3b (3a)     21.0 (20.8)   −0.3
26-Oct   16        15 (12)   16 (14)   16 (11)   14 (4)    14 (4)    3a (3a)     15.0 (9.0)    −6.0
25-Oct   15        16 (17)   20 (21)   21 (18)   19 (17)   18 (15)   2b (2b)     18.8 (17.6)   −1.2
24-Oct   14        27 (29)   26 (24)   23 (22)   21 (23)   22 (12)   2b (2b)     23.8 (22.0)   −1.8
23-Oct   13        23 (23)   9 (8)     12 (13)   11 (18)   11 (18)   2b (2b)     13.2 (16.0)   2.8
22-Oct   12        6 (6)     4 (5)     8 (9)     7 (13)    5 (5)     1c (1c)     6.0 (7.6)     1.6
21-Oct   11        17 (19)   8 (9)     13 (17)   13 (19)   12 (10)   1c (1c)     12.6 (14.8)   2.2
20-Oct   10        22 (22)   19 (18)   11 (15)   12 (14)   15 (24)   1c (1c)     15.8 (18.6)   2.8
19-Oct   9         2 (2)     2 (2)     1 (2)     1 (2)     1 (2)     1b (1c)     1.4 (2.0)     0.6
18-Oct   8         1 (1)     1 (1)     3 (1)     2 (1)     2 (1)     1b (1b)     1.8 (1.0)     −0.8
17-Oct   7         12 (9)    7 (10)    20 (12)   20 (16)   10 (11)   1b (1b)     13.8 (11.6)   −2.2
16-Oct   6         13 (11)   6 (4)     5 (4)     9 (11)    13 (17)   1b (1b)     9.2 (9.4)     0.2
15-Oct   5         3 (4)     3 (3)     2 (3)     5 (3)     3 (3)     1b (1b)     3.2 (3.2)     0.0
14-Oct   4         5 (3)     5 (6)     6 (7)     8 (8)     7 (8)     1b (1a/b)   6.2 (6.4)     0.2


most recent messages were coming from. In other words, the latitude and longitude locations of nodes along the main stem are a running average of the locations from which participants added a message. We purposely made the twisting and coiling of the main stem 'moderately' sensitive to message turn-taking behavior by calculating each new node based on an average of the latitudes and longitudes of the last 10 messages (Fig. 4). So, for instance, if participants from exactly the same location (e.g., a zip code centroid) generated 15 posts in a row, by the time of the tenth post the main stem would have drifted up and over in time–space until it was directly over that zip code location. If participants interact with rapid turn-taking from locations that are widely dispersed or on opposite sides of a population center, the main stem would twist and turn back and forth with a dense collection of nodes (Fig. 4). If participants were not interacting at all, the grapevine would appear unproductive and display a barren and straight main stem with only a few nodes separated in time by long internode gaps.
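A minimal sketch of the running-average construction just described, assuming messages arrive as (latitude, longitude, timestamp) triples in posting order. The 10-message window comes from the text; averaging over however many messages exist before the window fills is our assumption.

from collections import deque

def main_stem(messages, window=10):
    """messages: iterable of (lat, lon, timestamp) triples in posting order.
    Returns the (lat, lon, timestamp) node positions along the main stem."""
    recent = deque(maxlen=window)   # only the last `window` message locations count
    stem = []
    for lat, lon, t in messages:
        recent.append((lat, lon))
        avg_lat = sum(p[0] for p in recent) / len(recent)
        avg_lon = sum(p[1] for p in recent) / len(recent)
        stem.append((avg_lat, avg_lon, t))
    return stem

# Fifteen posts from one hypothetical zip-code centroid: by the tenth post the stem
# has drifted until it sits directly over that location, as in the example above.
posts = [(47.61, -122.33, float(hour)) for hour in range(15)]
print(main_stem(posts)[-1])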

Over a wide regional area, without controls on who is participating, one could imagine that the main stem would hover over the most populated areas. On the other hand, if there was some effort at recruiting participants from less populated areas and capping registration from the most populated areas, then when the main stem began to shift from one area to the next it could indicate that the topics being discussed had some sort of ''regional'' basis. However, the geographic patterns to be expected are probably not as simple as regions. For instance, participants who live near major highways might have a generally different point of view than those who live far from major highways, thus changes in the growth of the main stem due to greater participation by people living near highways would probably require some computational awareness to help a human geovisual analyst recognize when such a pattern was occurring. Nonetheless, simpler geographic patterns such as a back and forth contest over time with respect to the center of gravity of a discussion between participants living near coastal areas versus those living in more inland areas would be relatively easy to pick out, especially if the participant recruitment strategy was specifically designed to look at those differences by balancing differences in population density.

A productive grapevine will not only display an abundance of nodes, but also large nodes, all along the main stem, indicating lots of voting activity (Fig. 5). In our 3D GIS, we displayed the size of each node in proportion to how many votes a post or concern received (Fig. 5). In the LIT Web portal, voting on a post or concern meant voting to either agree or disagree. High levels of agreement or disagreement are displayed exactly the same way, as a


large node, unless we change the color of the node to indicate the ratio of positive to negative voting.
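As an illustration of this display rule (not the authors' ArcScene symbology), a node's symbol size can be made proportional to its vote count and its color tied to the agree/disagree ratio; the scaling constants and the red-to-green ramp below are arbitrary choices of ours.

def node_symbol(agree_votes, disagree_votes, base_size=2.0, size_per_vote=0.5):
    """Return (symbol_size, rgb_color) for a node given its agree/disagree votes."""
    total = agree_votes + disagree_votes
    size = base_size + size_per_vote * total          # proportional to voting activity
    ratio = agree_votes / total if total else 0.5     # 1.0 = all agree, 0.0 = all disagree
    # Simple ramp: disagree-heavy nodes shade toward red, agree-heavy toward green.
    color = (int(255 * (1 - ratio)), int(255 * ratio), 0)
    return size, color

print(node_symbol(agree_votes=12, disagree_votes=3))  # a larger, mostly green node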

3.4. Nodes with tendrils

Tendrils grow up and out from the site of a node or a leaf to the time and location at which a participant voted. Most nodes and leaves will produce at least one tendril. Participants could also vote on a reply or a concern comment and generate an additional set of tendrils off of leaves (Fig. 5). Any node that no other participant voted on or replied to is essentially a 'dead' node. In order to avoid cluttering the grapevine with lines going every which way, we displayed tendrils with a semi-transparent thin green line (Fig. 5). A productive cluster should have a proliferation of tendrils growing in an open pattern, branching off at a low angle rather than a high angle to the node and extending out in all different directions (see also Fig. 3). Tendrils that branch off at a low angle indicate a relatively rapid voting response. Overall, a healthy mixture of long and short tendrils branching out at low angles in many different directions from a large node means participants from many different locations voted on a post or concern and did so relatively rapidly.
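One way to operationalize the ''low angle'' reading of a tendril is sketched below. This is our interpretation rather than the paper's formula: the angle depends on how the time axis is scaled against the geographic axes, and the kilometers-of-height-per-hour constant here is purely an assumption.

import math

def tendril_angle(node, vote, km_per_hour_of_height=10.0):
    """Elevation angle (degrees) of a tendril from a node (lat, lon, t) to a vote (lat, lon, t).
    With time on the vertical axis, a low angle means a rapid, geographically distant vote."""
    (lat1, lon1, t1), (lat2, lon2, t2) = node, vote
    km_per_deg = 111.0  # rough equirectangular conversion, adequate within one metro region
    dx = (lon2 - lon1) * km_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * km_per_deg
    horizontal = math.hypot(dx, dy)
    vertical = max(t2 - t1, 0.0) / 3600.0 * km_per_hour_of_height  # hours -> "km" of height
    return math.degrees(math.atan2(vertical, horizontal))

node = (47.61, -122.33, 0.0)      # hypothetical post from Seattle at t = 0 seconds
vote = (47.25, -122.44, 1800.0)   # hypothetical vote from Tacoma half an hour later
print(tendril_angle(node, vote))  # prints a small angle: a quick vote from far away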

3.5. Buds with leaves

A productive pattern in participant discussion occurs when participants actually reply to each other's messages. When this occurs we say that the node, the site of the original message, has developed into a bud giving rise to a reply displayed as a shoot with a leaf (Fig. 5). All buds develop from nodes but not all nodes generate buds. We displayed the size of each bud (i.e., post or concern) proportional to how many replies (i.e., reply or concern comment) it received. A productive grapevine will have an abundance of large buds distributed along the main stem, indicating that there was relatively sustained reply activity.

Fig. 6. A screenshot of ESRI 3D GIS environment illustrating the visual analyst ''game'' of rearranging views representing daily clusters of deliberation, from most productive at upper left to least productive at lower right, then recording the rank order.

4. Our technique using the grapevine

The first procedure in our method of using the grapevine technique was to filter the overview of deliberative activity down to the bare minimum necessary for each visual cue, by turning off all non-essential layers. Daily activity was a ''natural break'' in the distribution of nodes and represented the best unit for statistical comparison of events and closer inspection of content. Thus the unit of comparison was a small window in ArcScene centered on a single day of activity, beginning with Day 4 and ending with Day 33, totaling 30 windows. The second procedure was to rank each cluster with the 30 small view windows in ArcScene. The visual analysis task was to visually rearrange all 30 view windows like a puzzle from most productive to least productive according to each visual cue (Fig. 6). The order in which the view windows were arranged for each visual cue became the rankings in Table 3, ranked from 1 to 30. Fig. 6 shows the result of a visual analyst's work with cue 1, rearranging daily deliberative activity from most productive to least productive looking at the number and size of nodes.
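A hedged sketch of this windowing step: events are binned into daily view windows and summarized so that a computed counterpart to a cue (here, roughly cue 1: the number of new posts and the amount of voting per day) can be ranked. The exact cue formulas used in the study are not given in this section, so the summary and dates below are only illustrative.

from collections import defaultdict
from datetime import date

def daily_summaries(posts, votes):
    """posts, votes: lists of (event_id, day) pairs. Returns {day: (post_count, vote_count)}."""
    post_count = defaultdict(int)
    vote_count = defaultdict(int)
    for _eid, day in posts:
        post_count[day] += 1
    for _eid, day in votes:
        vote_count[day] += 1
    days = sorted(set(post_count) | set(vote_count))
    return {day: (post_count[day], vote_count[day]) for day in days}

def rank_days(summaries):
    """Rank days 1..n from most to least productive by (post count, vote count)."""
    ordered = sorted(summaries, key=lambda d: summaries[d], reverse=True)
    return {day: rank for rank, day in enumerate(ordered, start=1)}

# Hypothetical two-day example: the busier day receives rank 1.
posts = [(1, date(2007, 10, 18)), (2, date(2007, 10, 18)), (3, date(2007, 10, 19))]
votes = [(4, date(2007, 10, 18)), (5, date(2007, 10, 19))]
print(rank_days(daily_summaries(posts, votes)))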

In some ways, the grapevine might be considered a more primitive or even unreliable form of evaluation because it depends on human spatial thinking skills; on the other hand, for that very reason it could be considered superior. The spatial processing abilities of the human brain make the visualization work, thus it is a synergy between the power of computing and the power of human spatial thinking skills. We expect any visual analyst evaluator with a bit of practice can learn to recognize and rank chosen sections of the rotatable three-dimensional grapevine visualization from least productive to most productive. The question is, how reliably can a human being interpret patterns in an admittedly complex-looking visualization?

For every visual cue that a human analyst would use to rank a section of the grapevine, we developed a simple computer calculation to mimic human judgment. By comparing human rank order with rank order derived from computer calculations, we could validate human spatial thinking performance and determine if there were any significant differences using simple non-parametric statistical tests. A reasonable statistical test of agreement between multiple raters is to use a coefficient of concordance or community of judgment like Kendall's W-statistic [50]. However, to compare human versus computer-based sets of ratings we decided to use a Marginal Homogeneity (MH) test. The MH test can distinguish if there is a significant difference between two samples. We calculated computer ranks for each visual cue and compared human visual rankings versus computer calculated rankings. The MH test results indicated that the human visual analyst's rankings and the calculated rankings were not significantly different, and in fact almost identical. Testing visual analyst performance ranking clusters of deliberative activity with the grapevine under different conditions remains an area for further research. After verifying human judgment of productive clusters, we selected content from within the highest ranking clusters.
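For readers who want to reproduce this kind of check, the sketch below computes Kendall's coefficient of concordance W [50] for a set of rank lists and, as a simpler two-rater check, SciPy's Kendall tau. The Marginal Homogeneity test actually used in the study is not reproduced here, and the example ranks are made up rather than taken from Table 3.

from scipy.stats import kendalltau

def kendalls_w(rankings):
    """Kendall's W for m raters each ranking the same n items (no tie correction)."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]   # rank sum per item
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

human = [1, 2, 3, 4, 5, 6]       # hypothetical human ranks for six daily clusters
computer = [2, 1, 3, 4, 6, 5]    # hypothetical computer-calculated ranks
print(kendalls_w([human, computer]))   # close to 1.0 indicates strong agreement
tau, p = kendalltau(human, computer)
print(tau, p)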

4.1. Recognizing productive clusters of deliberation

One of the first things we noticed using the grapevine technique was that nodes were not evenly distributed along the main stem (Fig. 4). There was a higher abundance of nodes associated with activity in LIT Step 1, whereas there was a lot of bare stem after LIT Step 1, indicating declining deliberative activity except for one distinctive surge of activity. Activity increased during LIT sub-Steps 3c and 4a when participants were deciding which projects were best for the central Puget Sound region and which funding options should be used to pay for them. Among quota or paid human subjects (n=179) the number of people actively participating declined from 60 percent to 40 percent, which gave the late surge that occurred mainly during Steps 3c and 4a (days 23 and 24 of the experiment) a unique importance.

Six of the top dozen clusters of deliberation were associated with LIT Step 1 and six were associated with later steps, totaling 209 separate exchanges plus thousands of votes. Relying primarily on the computer calculations of visual cues, we sub-selected 45 above average deliberative exchanges and processed the text content using a demo version of a software tool called Connexor [51]. The Connexor software parsed the 45 messages into 17,145 individual elements of content. Each individual content element was tagged with detailed information including a unique ID, the cluster and day the element was generated by a user, the numerical order of the element in the sentence, its word baseform, and its syntactic relation, syntax, and morphology.
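The per-element record described above can be pictured as something like the following sketch; the field names and tag values are ours, not Connexor's actual output format.

from dataclasses import dataclass

@dataclass
class ContentElement:
    uid: int                 # unique ID
    cluster: int             # cluster in which the element was generated
    day: str                 # calendar day of the exchange
    order_in_sentence: int   # numerical order of the element in its sentence
    baseform: str            # word baseform, e.g. "bicycle" for "bikes"
    syntactic_relation: str  # e.g. "subject" or "object"
    syntax: str              # illustrative syntax tag
    morphology: str          # illustrative morphology tag

elements = [
    ContentElement(1, 33, "12-Nov", 2, "toll", "object", "N", "SG"),
    ContentElement(2, 33, "12-Nov", 5, "package", "subject", "N", "SG"),
]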

4.2. Results from geovisual analysis

We felt that most participant references to transportation-related features, objects, concepts, and occurrences in the central Puget Sound region would be expressed as nouns, either as the subject or object of the sentence or of a prepositional phrase [52,53]. Thus distilling clusters of deliberation into elements of meaning meant filtering messages down to the most frequently occurring noun baseforms used as subjects or objects of meaning. We used the Connexor tags to select nouns and discovered that during the most highly productive discussions participants mentioned 3728 unique nouns representing 1155 noun baseforms. For example, the words ''bike'' and ''bicycle'' are just different forms of the same baseform ''bicycle.'' We considered a number of methodological issues in our content analysis, such as whether pulling subject and object nouns out of the sentence really captured the sense and meaning of participant shared understanding, as well as whether we should have included gerunds since they could also be the subject or object of a sentence. Nonetheless, we decided that a noun-based comparison of deliberative message exchanges was an appropriate start. After distilling the discussion into noun baseforms, we re-examined the original context in which noun baseforms were mentioned and re-read message exchanges to get a sense for what participants meant.
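A small sketch of this distillation step, assuming each parsed element has already been reduced to a (baseform, part-of-speech, role) triple; the tag strings checked here are illustrative, and the frequency threshold stands in for the percentile cut-offs reported in the next subsection.

from collections import Counter

def baseform_frequencies(elements):
    """elements: iterable of (baseform, part_of_speech, role) tuples.
    Counts baseforms of nouns used as the subject or object of a sentence."""
    return Counter(base for base, pos, role in elements
                   if pos == "noun" and role in ("subject", "object"))

def above_threshold(freqs, min_count):
    """Keep only baseforms mentioned at least min_count times."""
    return {base: n for base, n in freqs.items() if n >= min_count}

tagged = [("bicycle", "noun", "subject"), ("bicycle", "noun", "object"),
          ("bus", "noun", "object"), ("improve", "verb", "predicate")]
freqs = baseform_frequencies(tagged)
print(freqs.most_common())           # [('bicycle', 2), ('bus', 1)]
print(above_threshold(freqs, 2))     # {'bicycle': 2}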

4.3. Results from content analysis

The content analysis indicates a major shift in the qualitative nature of shared understanding after LIT Step 1, when participants did a geospatial analysis. The 99th percentile of noun baseforms in terms of frequency of occurrence (frequency of 30 or more) in the first six clusters before the end of LIT Step 1 included the following ten words (Fig. 7): bicycle (91), bus (73), transit (62), pedestrian (40), car (39), bicyclist (34), transportation (33), people (31), road (31), and traffic (31). The 99th percentile of noun baseforms (frequency of 16 or more) in the last six clusters after the end of LIT Step 1 included the following six words (Fig. 8): toll (36), project (32), package (23), people (20), improvement (16), and tax (16). After re-reading the original messages with the 99th percentile noun baseforms in mind, it became clear that the shift in the frequency of noun baseforms did indicate a shift in the sense and meaning of the discussion. When asked to deliberate about values and concerns for improving transportation during LIT Step 1, participants spent a lot of energy discussing alternate modes of transportation like bikes, buses, or rail in a broad discussion about ways of reducing car traffic, thus reducing the need for expensive roads and transit projects. However, when asked to review planning factors, create a regional transportation improvement package using a geospatial analysis, and then deliberate about the results of the analysis to select the best package, participants focused on calling out social and economic inequalities inherent in selecting different funding options, like taxes or tolls, as ways of fairly redistributing the cost of transportation

Fig. 7. Noun baseforms mentioned by participants during message exchanges in LIT Step 1, in terms of frequency of occurrence. Only noun baseforms above the 90th percentile (>7, gray line) are shown. Noun baseforms above the 99th percentile (>30, white line) are highlighted in white.

Fig. 8. Noun baseforms mentioned by participants during message exchanges after LIT Step 1, in terms of frequency of occurrence. Noun baseforms above the 90th percentile (>5, gray line) are shown. Noun baseforms above the 99th percentile (>15, orange line) are highlighted in orange. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)


improvement among those living and traveling in the central Puget Sound region.

The noun baseform ''package'' was used only once during the most productive deliberative exchanges of LIT Step 1 (Fig. 7). However, ''package'' was the 3rd most frequently mentioned noun baseform (23 times) during the most productive exchanges after the end of LIT Step 1

(Fig. 8). The dramatic increase in the noun ''package'' is an indicator that the shift in shared understanding we observed after the end of LIT Step 1 was the result of participant experience with geospatial tools in the LIT Web portal, and not a result of participant experience with things outside of the LIT Web portal. As corroborating evidence, many of the most productive deliberative


exchanges contributing to the topical shift in shared understanding occurred simultaneously with an enormous spike in analytic activities of using GIS maps to create a transportation package during LIT Step 3.

The content analysis of the most productive deliberative exchanges, identified by grapevine visual analysis, provides evidence that participant use of geospatial analysis tools in later steps of the LIT Web portal improved the quality of deliberation even though it may have ironically decreased the scale of deliberation and participant interest, as compared with other parts of the LIT Web portal like LIT Step 1. We believe that the moderated analytic–deliberative process in later steps of the LIT Challenge successfully ''trained'' participant deliberative energies to latch onto the analytic support structure provided by the transportation package analysis, particularly Step 3 of the LIT Web portal. By engaging in analytic HCHI activity with the LIT Web portal, participants refocused the growth of deliberation on simplifying assumptions and omissions in the funding options component of the analysis, rather than continuing to contribute to the growth of a broad and unattached discussion about personal values and concerns that had little or nothing to do with the assumptions of the simulated public agency transportation analysis. If we had investigated activity data alone, without combining a content analysis to help us distill sense and meaning, we might have concluded that LIT Step 1 was clearly the most successful part of the entire process and should be emulated in future design and development, since our human subjects spent a great deal of energy deliberating about a broad and largely activist spectrum of concerns. Participant activity as measured by number of participants and time spent declined after Step 1 and even faltered at various points during Steps 2, 3, 4 and 5. However, from a certain quality of interaction perspective, even though the earlier LIT Steps were more wildly productive in terms of message posting and exchanging, the geospatial exercises in the LIT Challenge along with persistent moderation trained participant energies to deliberate about the analysis itself despite lower overall productivity as measured by messaging. In many ways, the most important deliberation of the entire LIT Challenge occurred in the midst of declining overall participation near the end of the experiment during LIT Step 4.

Natural participant deliberative energies should be nurtured, but the growth of discussion should also be pruned and trained so that it does not grow off in every direction and ultimately collapse. Instead, the growth of deliberation should be trained to latch onto and make use of analytic support activities, in order to focus deliberative energy to produce clusters of deliberation about an analysis. By supporting the analytic–deliberative process with geospatial analysis tools, the LIT Web portal seemed to refocus broadly based participant energies on deliberating about the analysis, resulting in a more useful outcome from the standpoint of agency decision makers and technical specialists who use similar analyses in transportation planning.

After using a grapevine-based content analysis to identify shifts in shared understanding during a broadly based analytic–deliberative process, we feel we are in a better position to provide public agencies with empirically based evaluations of whether, in the use of analysis to make actual decisions, these agencies may have seriously miscalculated something about the breadth and depth of concern among stakeholders. To most agency planning specialists, the transportation package analysis in the LIT Web portal would probably appear fairly straightforward in terms of its funding options, as it was modeled after the process used by the Puget Sound Regional Council, the metropolitan planning organization of the central Puget Sound region. Evaluation of the analytic–deliberative process in the LIT Web portal provided us with empirical evidence of the breadth and depth of a particular stakeholder concern in the central Puget Sound region about unknown, unintended, or unanticipated social inequities being generated by transportation planning agencies as a result of selection of certain funding options. Further investigation into relationships between words and modifying words using natural language processing, concept map visualizations, or other methods integrating content and discourse analysis would reveal much more about the sense and meaning of qualitative shifts in shared understanding as a result of participant experience with the LIT Web portal.

5. Conclusion

Development and exploration of HCHI using the grapevine technique contributes to a growing literature in visual and geovisual analytics about insightful techniques to examine collaborative decision processes with geospatial technology. Thomas and Cook [54] and Andrienko et al. [55] have made calls for more research about the use of visualizations of information to support analysis and deliberation about complex problems. Considering how other geovisualization techniques have been used for exploratory analysis of spatio-temporal data [56–58], we created a custom geovisual analytic technique that balanced the computing power of a GIS to display large amounts of fine-grained event data, with the human process of spatial thinking or visual reasoning to identify the emergent patterns.

Creating a grapevine requires an understanding of GIS and some basic effort in data processing. However, the effort required is no more than any other type of GIS analysis, in our estimation. The grapevine required no statistical assumptions to use. One of the things we wanted to do with our geovisual analytic technique, quickly and reliably, was to examine and recognize the most productive daily clusters of HCHI activity as areas of focus for content analysis. Human spatial thinking performance in matching and ranking observed patterns based on expected patterns was not statistically different than computer-based ranks. Further testing will help identify where spatial thinking skills are most challenged when it comes to recognizing productive patterns in the grapevine visualization.

The grapevine technique also contributes to the literature about the broadly based analytic–deliberative


decision making process. An important hypothesis behind analytic–deliberative decision making is that decisions are better when they come from a combination of analysis and deliberation rather than from analysis alone. The findings suggest that indeed when deliberation is structured by a specific analysis the two ways of knowing can enhance each other. However, we also found that convening an analytic–deliberative process in order to engage a broadly based lay public with technical information (or with technical specialists and decision-making executives), even when done in a convenient online tool with human moderators and a structured process of public participation, tends to produce neither a continuous nor stable quality of deliberation. The analytic–deliberative process unfolds in fits and starts of productive deliberation mixed within and among relatively unproductive interactions. Though the process may start fast it can undergo measurable decline in terms of participation and interest. For those with experience trying to implement the NRC's recommendations and convene a broadly based analytic–deliberative process this may appear as an all-too-familiar and discouraging pattern.

Our results with the grapevine suggest that uneven growth and quality of deliberation in an analytic–deliberative process may not be indicative of a systematic failure to engage participants. An analytic–deliberative process that participants generally considered confusing or boring might still have contained sporadic and stimulating episodes of broadly based deliberation useful for improving analysis in decision making about public health, public safety, and the environment. The grapevine method may, therefore, be particularly useful if broadly based analytic–deliberative decision making processes tend to be sporadically productive in ways that only a fine-grained formative type of assessment can distinguish.

As further steps, comparative research projects that study the quality and scale of public participation in analytic–deliberative decision making under varying conditions are needed to shed light on largely untested expectations expressed in the participatory democracy literature [3,6,59]. Results that show the effect of different combinations of public participation tools and recruitment strategies in different decision making conditions would be particularly valuable. Specifying the theoretical differences between ''analytic'' and ''deliberative'' event types along an analytic–deliberative HCHI spectrum, for the purpose of coding a client–server event log, is also needed. Further investigations might confirm that though broadly based analytic–deliberative processes appear to the untrained eye as unproductive and less than engaging, evaluation of fine-grained interaction data can locate the irregular episodes of exceptional public deliberation about analysis that will improve decision making.

Acknowledgments

This research was partially supported by National Science Foundation Information Technology Research Program Grant no. EIA 0325916, National Science Foundation Geography and Spatial Sciences Grant no. BCS-0921688, and National Oceanic and Atmospheric Administration Grant no. NA07OAR4310410. Support of the National Science Foundation and the National Oceanic and Atmospheric Administration is gratefully acknowledged. We would also like to acknowledge our two anonymous reviewers and members of various research teams including Michalis Avraam, Piotr Jankowski, Michael Patrick, Kevin Ramsey, Zhong Wang, Matt Wilson and Guirong Zhou; and especially Tanveer Randhawa and Kanwar Buttar for work in development of automated grapevine applications. The authors are solely responsible for the content.

References

[1] National Research Council, Understanding Risk: Informing Decisions in a Democratic Society, National Academy Press, Washington, DC, 1996.
[2] National Research Council, Decision Making for the Environment: Social and Behavioral Science Research Priorities, National Academy Press, Washington, DC, 2005.
[3] National Research Council, Public Participation in Environmental Assessment and Decision Making, National Academy Press, Washington, DC, 2008.
[4] T. Nyerges, Scaling-up as a grand challenge for public participation GIS, Directions Magazine (2005) ⟨www.directionsmag.com⟩.
[5] T. Nyerges, P. Jankowski, D. Tuthill, K. Ramsey, Participatory GIS support for collaborative water resource decision making: results of a field experiment, Annals of the Association of American Geographers 96 (4) (2006) 699–725.
[6] J. Gastil, P. Levine (Eds.), The Deliberative Democracy Handbook, Jossey-Bass, San Francisco, CA, 2005.
[7] Digital Future Report, 2010, ⟨http://www.digitalcenter.org⟩.
[8] Let's Improve Transportation, 2007, ⟨www.letsimprovetransportation.org⟩.
[9] D.T. Cook, D.T. Campbell, Quasi-experimental Design: Design and Analysis Issues for Field Settings, Rand McNally, Skokie, IL, 1979.
[10] D. Brinberg, J. McGrath, Validity and the Research Process, Sage, Thousand Oaks, 1985.
[11] J.A. Konstan, Y. Chen, Online field experiments: lessons from CommunityLab, in: Proceedings of the Third Annual Conference on e-Social Science Conference, Ann Arbor, MI, 2007.
[12] National Research Council, Learning to Think Spatially, National Academy Press, Washington, DC, 2006.
[13] A. von Eye, Configural Frequency Analysis—A Program for 32 Bit Windows Operating Systems Program Manual, Michigan State University, East Lansing, MI, 2008.
[14] P. Sanderson, C. Fisher, Exploratory sequential data analysis: foundations, Human–Computer Interaction 9 (1994) 251–317.
[15] W. van der Aalst, B. van Dongen, J. Herbst, L. Maruster, G. Schimm, A. Weijters, Workflow mining: a survey of issues and approaches, Data and Knowledge Engineering 47 (2) (2003) 237–267.
[16] W. van der Aalst, H.A. Reijers, M. Song, Discovering social networks from event logs, Computer Supported Cooperative Work 14 (2005) 549–593.
[17] N. Andrienko, G. Andrienko, P. Gatalsky, Exploratory spatio-temporal visualization: an analytical review, Journal of Visual Languages and Computing 14 (6) (2003) 503–541.
[18] K.P. Hewagamage, M. Hirakawa, An interactive visual language for spatiotemporal patterns, Journal of Visual Languages & Computing 12 (3) (2001) 325–349.
[19] T. Hagerstrand, What about people in regional science?, Papers of the Regional Science Association 24 (1970) 6–21.
[20] M.P. Kwan, Time, information technologies and the geographies of everyday life, Urban Geography 23 (2002) 471–482.
[21] M.P. Kwan, Feminist visualization: re-envisioning GIS as a method in feminist geographic research, Annals of the Association of American Geographers 92 (2002) 645–661.
[22] H.J. Miller (Ed.), Springer Science, Dordrecht, The Netherlands, 2007.
[23] H. Yu, S.-L. Shaw, Revisiting Hagerstrand's time-geographic framework for individual activities in the age of instant access, in: H.J. Miller (Ed.), Societies and Cities in the Age of Instant Access, Springer Science, Dordrecht, The Netherlands, 2007, pp. 103–118.
[24] A. Getis, A method for the study of sequences in geography, Papers of the Regional Science Association (1966) 87–92.
[25] M.S. Magnusson, Discovering hidden time patterns in behavior: T-patterns and their detection, Behavior Research Methods, Instruments, & Computers 32 (1) (2000) 93–110.
[26] G. Olson, M.J.D. Herbsleb, H.H. Rueter, Characterizing the sequential structure of interactive behaviors through statistical and grammatical techniques, Human–Computer Interaction 9 (3/4) (1994) 427–472.
[27] T. Nyerges, T.J. Moore, R. Montejano, M. Compton, Interaction coding systems for studying the use of groupware, Human–Computer Interaction 13 (2) (1998) 127–165.
[28] W.D. Gray, D.A. Boehm-Davis, Milliseconds matter: an introduction to microstrategies and to their use in describing and predicting interactive behavior, Journal of Experimental Psychology: Applied 6 (4) (2000) 322–335.
[29] P. Sanderson, Release notes for MacSHAPA 1.0.3, Crew System Ergonomics Information Analysis Center (CSERIAC), Wright-Patterson Air Force Base, OH, 1995.
[30] P. Jankowski, T. Nyerges, GIS-supported collaborative decision making: results of an experiment, Annals of the Association of American Geographers 91 (1) (2001) 48–70.
[31] A. MacEachren, Moving geovisualization toward support for group work, in: J. Dykes, J.A. MacEachren, M.J. Kraak (Eds.), Exploring Geovisualization, Elsevier, 2005, pp. 445–461.
[32] D. Haug, A.M. MacEachren, F. Hardisty, The challenge of analyzing geovisualization tool use: taking a visual approach, Pennsylvania State University, University Park, PA, 2001.
[33] S. Tanimoto, S. Hubbard, W. Winn, Automatic textual feedback for guided inquiry learning, in: Proceedings of the International Artificial Intelligence in Education (AIED) Society Conference, 2005.
[34] P. Keel, EWall: a visual analytics environment for collaborative sense-making, Information Visualization 6 (2007) 48–63.
[35] M. Zook, M. Dodge, Y. Aoyama, New digital geographies: information, communication, and place, in: S. Brunn, S.L. Cutter, J.W. Harrington Jr. (Eds.), Geography and Technology, Kluwer Academic Publishers, The Netherlands, 2004.
[36] R. Wallace, A fractal model of HIV transmission on complex socio-geographic networks: towards analysis of large data sets, Environment and Planning A 25 (1993) 137–148.
[37] N.R. Hedley, A. Lee, C.H. Drew, E. Arfin, Hagerstrand revisited: interactive space–time visualizations of complex spatial data, Informatica 23 (2) (1999) 155–168.
[38] A. Pred, Structuration, biography formation, and knowledge: observations on port growth during the late mercantile period, Environment and Planning D: Society and Space 2 (3) (1984) 251–275.
[39] J.M. Gudmundsson, M. van Kreveld, B. Speckmann, Efficient detection of motion patterns in spatio-temporal data sets, Technical Report UU-CS-2005-044, Institute of Information and Computing Sciences, Utrecht University, 2005.
[40] M. Worboys, Event-oriented approaches to geographic phenomena, International Journal of Geographical Information Science 19 (1) (2005) 1–28.
[41] M. Worboys, K. Hornsby, From objects to events: GEM, the geospatial event model, in: Proceedings, Geographic Information Science 2004, Springer Verlag, Adelphi, MD, 2004, pp. 327–344.
[42] D.J. Peuquet, N. Duan, An event-based spatiotemporal data model (ESTDM) for temporal analysis of geographical data, International Journal of Geographical Information Systems 9 (1) (1995) 7–24.
[43] M. Yuan, K.S. Hornsby, Computation and Visualization for Understanding Dynamics in Geographic Domains: A Research Agenda, CRC Press, Boca Raton, FL, 2007.
[44] K.S. Hornsby, M. Yuan, Understanding Dynamics of Geographic Domains, CRC Press, Boca Raton, FL, 2008.
[45] S.K. Card, J.D. Mackinlay, B. Shneiderman, Information Visualization: Using Vision to Think, Morgan-Kaufmann, San Francisco, CA, 1998.
[46] R.E. Mayer (Ed.), Cambridge University Press, New York, 2005.
[47] D. Billman, G. Convertino, J. Shrager, J.P. Massar, P. Pirolli, Collaborative intelligence analysis with CACHE and its effects on information gathering and cognitive bias, Palo Alto Research Center, paper presented at the Human Computer Interaction Consortium Workshop, Snow Mountain, CO, 2006.
[48] C. Steinitz, On scale and complexity and the needs for spatial analysis, paper presented at Spatial Concepts in GIS and Design, 2008, ⟨http://ncgia.ucsb.edu/projects/scdg⟩.
[49] N.S. Contractor, D.S. Siebold, Theoretical frameworks for the study of structuring processes in group decision support systems: adaptive structuration theory and self-organizing systems theory, Human Communication Research 19 (4) (1993) 528–563.
[50] M.G. Kendall, B.B. Smith, The problem of m rankings, The Annals of Mathematical Statistics 10 (3) (1939) 275–287.
[51] Connexor, 2008, ⟨www.connexor.eu⟩.
[52] D.M. Mark, Spatial representation: a cognitive view, in: D.J. Maguire, M.F. Goodchild, D.W. Rhind, P. Longley (Eds.), Geographical Information Systems: Principles and Applications, 2nd Edition, Wiley, New York, 1999.
[53] D.M. Mark, A. Skupin, B. Smith, Features, objects, and other things: ontological distinctions in the geographic domain, Lecture Notes in Computer Science 2205 (2001) 488–502.
[54] J.J. Thomas, K.A. Cook, Illuminating the Path: The Research and Development Agenda for Visual Analytics, National Visualization and Analytics Center, Richland, WA, 2005.
[55] G. Andrienko, N. Andrienko, P. Jankowski, D. Keim, M.-J. Kraak, A. MacEachren, S. Wrobel, Geovisual analytics for spatial decision support: setting the research agenda, International Journal of Geographical Information Science 21 (8) (2007) 839–857.
[56] A. MacEachren, M. Gahegan, W. Pike, Visualization for constructing and sharing geo-scientific concepts, PNAS Early Edition, 2003.
[57] C.D. Hundhausen, Using end-user visualization environments to mediate conversations: a 'Communicative Dimensions' framework, Journal of Visual Languages & Computing 16 (3) (2005) 153–185.
[58] P. Compieta, S. Di Martino, M. Bertolotto, F. Ferrucci, T. Kechadi, Exploratory spatio-temporal data mining and visualization, Journal of Visual Languages and Computing 18 (3) (2007) 255–279.
[59] R. Kingston, Public participation GIS and the internet, in: T. Nyerges, H. Couclelis, R. McMaster (Eds.), Handbook of GIS and Society Research, Sage Publications, London, forthcoming.