Scavenger Hunt: An Empirical Method for Mobile Collaborative Problem-Solving

The scavenger hunt prototyping model avoids some challenges of complex field studies, while supporting significant mobility and collaboration testing in a more controlled environment.

Michael Massimi, University of Toronto
Craig H. Ganoe and John M. Carroll, Pennsylvania State University

Pervasive computing enables field researchers to accomplish efficient fieldwork in teams that they once performed in isolation, over several trips, or not at all. Field researchers can deduce new information from findings they make while in the field, and apply it immediately to the situation at hand. This is especially important in fields where the time or resources to conduct several studies aren’t available. This domain can be termed mobile collaborative problem-solving.

Because collaboration systems that support mobile users can be costly and difficult to create, designers need tools to ensure later iterations of their systems will function as expected. Mobile collaborative problem-solving is a young domain of study, and not much is known about how people can collaborate effectively in the field using computers. Moreover, applications that are found to be effective in a laboratory study might not actually be effective in the setting where they will be used (for example, laptops for use by patrolling police officers). Evaluating mobile collaborative systems requires methods for studying team use of these systems in realistic yet controlled settings. Before deploying a mobile collaborative problem-solving system, early evaluation methods can help identify problem areas in the user experience. By troubleshooting design problems while the product is still in development, designers can save time and money.

We suggest a scavenger hunt model that provides a viable prototyping method. A scavenger hunt is a typical mobile activity that both adults and children can perform. Participants are divided into teams and given a list of items, often unrelated and obscure. The first team to collect all the listed items within a given time limit wins the game. We use the essential elements of this game format—a timed task, teamwork, and mobility—to create a prototyping method for mobile collaborative problem-solving systems. These elements of the scavenger hunt mimic several field challenges in the lab. For example, a search for a missing person, an educational field trip to a nature preserve, or repeated trips to the same area for field research all share these characteristics. In these situations, the data capture and the data analysis feed into each other immediately. This tight coupling of data capture and analysis is useful in other situations as well.

The scavenger hunt empirical tool lets programmers and system designers study the effectiveness of their mobile collaborative problem-solving environments in a setting that offers laboratory-like controls while mimicking the real-world problems facing mobile users (see the “Finding a Medium between Laboratory and Field Studies” sidebar for some other work in this area). It presents users with a well-defined problem that only the group can solve and simultaneously requires them to navigate a public area. By observing participants during a scavenger hunt trial, we learn more about the field problems they’ll encounter regarding software, group dynamics, infrastructure, and mobility.



Interactive prototyping in mobile collaborative environments

Linchuan Liu and Peter Khooshabeh describe the advantages and disadvantages of paper versus interactive-prototyping techniques.1 Specifically, they suggest that fidelity (look and feel) and automation (amount of human intervention) are important dimensions for gauging a prototyping technique’s success. Our scavenger hunt prototyping and empirical analysis technique offers excellent fidelity but only moderate automation.

Liu and Khooshabeh also argue that interactive prototypes are necessary parts of the design process and that interactive-prototyping methodologies are important for the product’s success. Furthermore, they argue that “[a]lthough prototyping has been used with great success in obtaining usability data during the design of traditional UIs, its use in ubicomp has not been thoroughly investigated.”1 As an empirical tool, the scavenger hunt methodology advances progress toward a realistic prediction model about the success of a mobile collaborative problem-solving environment.

Previous work has explored the individual areas of mobile collaboration and collaborative problem-solving. Their fusion, however, requires designers to rethink their strategies when creating software for field users. Because problem solving is the overarching goal, designers must equip systems with rapid data-acquisition techniques, easy data retrieval, and unobtrusive analysis tools. Users should be able to spend their time piecing together information rather than navigating an interface. They should be able to easily share the data or hide it at their discretion. At the same time, the design must consider user mobility; users will be traveling through an environment that also requires monitoring and an occasional response.

Finding a Medium between Laboratory and Field Studies

Researchers have explored both the use and usefulness of laboratory versus field studies in mobile human-computer interaction. Jesper Kjeldskov and Connor Graham examined 102 research papers in mobile HCI published between 2000 and 2002 and found that only 41 percent involved some kind of user evaluation (71 percent laboratory, 19 percent field, and 10 percent through survey).1 At least in some cases, this tendency toward laboratory studies appears to be justified.

Kjeldskov and his colleagues compared laboratory and field evaluations of their MobileWard prototype to support hospital morning procedures for nurses.2 Of 37 usability problems identified in the study, 36 occurred in the laboratory setting while only 23 occurred in the field. Researchers spent 34 staff hours on the laboratory evaluation of six users and 65 staff hours on the field evaluation of another six users. Although this might seem to be a strong argument for conducting most mobile-application evaluations in the lab, the application being evaluated was for a single user and geared toward data collection and retrieval.

Melanie Kellar and her colleagues designed and conducted a field study in which pairs of participants collaborated with their software in the scavenger-hunt-like City Chase (www.thecitychase.com).3 They argue that their field study let them observe six external factors that are difficult to control and that impact both research into and adoption of mobile technologies:

• software—failures of the software being tested and wireless connectivity failures;
• materials—lack of a home base for items such as paperwork and equipment;
• social considerations—influence by the public (interactions and self-consciousness);
• weather/environment—rain, wind, and sun (one study noted, “tree sap dripped onto equipment”);
• audio and video—background noise and shaky and poor video angles; and
• mobility—observation and note-taking difficulties in crowded areas, while crossing streets, and while moving in general.

Although we agree that these factors come into play in field-study settings, whether most of them would have much influence on software user interface design (rather than the ability to collect field data) is less clear.

Khai Truong and his colleagues make a strong case for prototyping mobile computing systems with users in mind.4 Too often, they claim, programming tools and systems are “device-centric, rather than user-centric.” We believe that the scavenger hunt can be used in two ways: as a tool to assure that users will find mobile collaboration systems useful and as a lens for studying individual and group planning. We attempt to look beyond the device and offer a method for examining how people plan and act in the field.

REFERENCES

1. J. Kjeldskov and C. Graham, “A Review of Mobile HCI Research Methods,” Proc. Int’l Conf. Human Computer Interaction with Mobile Devices and Services (Mobile HCI), ACM Press, 2003, pp. 317–335.

2. J. Kjeldskov et al., “Is It Worth the Hassle? Exploring the Added Value of Evaluating the Usability of Context-Aware Mobile Systems in the Field,” Proc. Int’l Conf. Human Computer Interaction with Mobile Devices and Services (Mobile HCI), ACM Press, 2004, pp. 61–73.

3. M. Kellar et al., “It’s a Jungle Out There: Practical Considerations for Evaluation in the City,” Proc. Conf. Human Factors in Computing Systems (CHI 05), ACM Press, 2005, pp. 1533–1536.

4. K.N. Truong et al., “How Do Users Think about Ubiquitous Computing?” Proc. Conf. Human Factors in Computing Systems (CHI 04), ACM Press, 2004, pp. 1317–1320.


A blog-based mobile collaborative problem-solving system

Evaluating the scavenger hunt methodology required developing software that was representative of current trends in information sharing. The ultimate goal was to use a scavenger hunt to unearth the software’s design flaws. As experimenters, we weren’t evaluating the users or even necessarily the product, but rather how well the scavenger hunt format highlighted the problems our users faced. On the other hand, designers using this tool will be more interested in the flaws the scavenger hunt uncovers than the method itself.

A rapid examination of current information-sharing tools led us to create a blog-style collaboration and problem-solving system. Many organizations use blogs to distribute information to interested parties. For field agents working on a project over an extended period, blogs offer a chronological way to structure data and findings. They’re also relatively easy to program and maintain. This approach also expedited data collection because blogging software already contains much of the necessary user information (posting time, name of the user submitting the post, and so on).

Our blog was simple because we didn’t want users to become too involved in understanding its features during the user studies. Users added posts using a small onscreen (soft) keyboard on the handheld computer. New posts were appended to the end of the blog. However, users could rearrange a post’s position relative to other posts by pressing “up” or “down” buttons. They could also edit their posts in place (that is, they didn’t need to submit an edit request). After editing, users could save their changes by pressing a “save” button next to the post. Finally, users had to request updates to the blog manually—pressing a “refresh” button updated the blog to the most current version.
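To make the interaction model concrete, here is a minimal Python sketch of the post operations just described: append-only adds, in-place edits, up/down reordering, and manual refresh. The class and method names are illustrative inventions; the deployed system was a PHP Web application, described later.

```python
from copy import deepcopy

# Illustrative model only; the study's blog was a PHP Web application.
class BlogModel:
    def __init__(self):
        self.posts = []                       # ordered list of [author, text]

    def add(self, author, text):
        self.posts.append([author, text])     # new posts append to the end

    def edit(self, index, new_text):
        self.posts[index][1] = new_text       # edited in place, then "saved"

    def move(self, index, direction):
        # The "up"/"down" buttons swap a post with its neighbor.
        j = index - 1 if direction == "up" else index + 1
        if 0 <= j < len(self.posts):
            self.posts[index], self.posts[j] = self.posts[j], self.posts[index]

class HandheldView:
    """Each handheld shows a snapshot; updates were pulled, not pushed."""
    def __init__(self, blog):
        self.blog = blog
        self.view = []

    def refresh(self):                        # the manual "refresh" button
        self.view = deepcopy(self.blog.posts)
```

Note that a HandheldView goes stale until refresh is called; this pull-based behavior is the same one the pilot group later complained about.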

Empirical environment

An accurate methodology for studying the usability of collaborative tools in pervasive environments will give participants a realistic representation of a collaborative task. To be as realistic as possible, the methodology must support several requirements:

• The task is well defined. It doesn’t inundate users with information, nor is its premise so scant that users can’t adequately create a plan of action. The problems in our scenarios have a specific goal.
• The environment allows participant monitoring and rapid data collection.
• The task is important enough to participants that they won’t get discouraged and disengage from the task.
• The task requires both individual and collaborative work within the problem-solving system.
• Participants must have a limited amount of time in which to complete the task. Given unlimited time, participants might not exploit the tools to their fullest potential.
• The pieces of information that users manipulate must be independently meaningful, but collectively powerful enough to accomplish the goal. Independent meaning is necessary so that the users can understand the relationships between pieces of information.
• The distribution of the pieces of information has no predefined logic—that is, no one person knows everything. Users should acquire the pieces of information semirandomly before sharing them.
• No piece of information deductively implies another piece. So, users can’t complete the goal simply by discovering a single piece of knowledge. Instead, they must compile and analyze a constellation of facts.

In our scavenger hunt method, participants search for clues that form the basis of a logic puzzle. The clues are located throughout an academic building. Each participant carries a handheld computer with wireless Internet access and a walkie-talkie, as well as a slip of paper with a transliterated version of Einstein’s famous riddle (we altered the riddle to prevent participants who might have previously encountered it from immediately recognizing it). The riddle’s premise is as follows:


In a parking lot, five cars of different models are parked next to each other. The owner of each car has a different profession. The five car owners each play a different sport, listen to a different type of music, and eat a different food. The question: Who eats lo mein?
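The transliterated riddle is a zebra-style constraint puzzle: each clue eliminates some assignments of attributes to cars, and only the full set of clues pins down a unique answer. The article does not reproduce its clue set, so the Python sketch below uses an invented three-car analogue purely to show the structure. No single clue yields the answer, matching the requirement that no piece of information deductively implies another.

```python
from itertools import permutations

# Invented, scaled-down analogue of the parking-lot riddle; the study's
# actual five-car puzzle and its clues are not published in the article.
cars = ["sedan", "coupe", "van"]   # fixed left-to-right parking positions

for profs in permutations(["doctor", "chef", "pilot"]):
    for foods in permutations(["lo mein", "pizza", "sushi"]):
        if profs.index("chef") != cars.index("van"):
            continue               # clue 1: the chef drives the van
        if foods.index("sushi") != cars.index("coupe"):
            continue               # clue 2: the sushi eater drives the coupe
        if profs.index("doctor") + 1 != foods.index("pizza"):
            continue               # clue 3: the doctor parks just left of the pizza eater
        # Exactly one assignment survives all three clues:
        # {'sedan': ('pilot', 'lo mein'), 'coupe': ('doctor', 'sushi'),
        #  'van': ('chef', 'pizza')} -- so the pilot eats lo mein.
        print(dict(zip(cars, zip(profs, foods))))
```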

Participants have one hour to walk through the building looking for clues. On finding a clue, participants use their handheld computer to record the clue and broadcast it to their teammates. A Web browser on the handheld computer displays the blog containing all the clues. Once participants feel they have enough clues to solve the riddle, they return to the starting point and verbally report their answer to the experimenter.

User study parameters

All study participants were students (ages 15 through 17) involved in a summer program for gifted youth. They participated in the study as an extracurricular activity. Although participants stated that they didn’t have extensive experience with mobile devices, they rapidly adjusted to the input methods of the handheld computers we gave them. We conducted four trials with different groups of three students each. The first trial was a pilot study, and the other three were full studies.

We placed 24 clues on brightly colored paper (see figure 1) and distributed them strategically throughout an academic building with 7,700 square feet of public space on three floors. Participants could easily access all clues on foot. In each trial, participants needed 18 of the clues to solve the puzzle. The remaining six clues served as distractors, which ensured that the participants were actually analyzing the information they collected, as opposed to simply copying it and deferring analysis until later. In the pilot trial, two of the distractors were irrelevant—they provided information that participants didn’t need to answer the riddle. Another two were premises—the actual riddle described earlier. The remaining two distractors were duplicates—they were simply copies of clues that were elsewhere in the building. In subsequent full trials, we replaced the irrelevant clues and premises with four more duplicates to make the task easier to complete.

We gave each participant a Hewlett-Packard iPAQ h5450 handheld computer running Microsoft Windows Mobile. An integrated IEEE 802.11b adapter connected the handheld computers to the building-wide wireless network via a virtual-private-network client. We wrote the blog software in PHP and used a MySQL backend database to collect data and represent clues. The Internet browser was Microsoft Internet Explorer. The blog software lets users add, delete, edit, and move posts up or down on the Web page, and tracks these changes.
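The article does not give the blog’s schema, but the behavior it describes (ordered posts plus tracked add, delete, edit, and move operations) suggests a storage layout along the following lines. This is a hypothetical sketch using Python’s built-in sqlite3 module in place of the actual PHP/MySQL implementation.

```python
import sqlite3

# Hypothetical analogue of the described storage: ordered posts plus a
# change log. The study's real PHP/MySQL schema is not published.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (
    id       INTEGER PRIMARY KEY,
    author   TEXT NOT NULL,
    body     TEXT NOT NULL,
    position INTEGER NOT NULL      -- manual ordering via the up/down buttons
);
CREATE TABLE changes (             -- one row per tracked user action
    id      INTEGER PRIMARY KEY,
    post_id INTEGER,
    action  TEXT CHECK (action IN ('add', 'delete', 'edit', 'move')),
    at      TEXT DEFAULT CURRENT_TIMESTAMP
);
""")

def add_post(author, body):
    # New posts go to the end of the blog, as in the deployed system.
    pos = conn.execute("SELECT COALESCE(MAX(position), 0) + 1 FROM posts").fetchone()[0]
    cur = conn.execute("INSERT INTO posts (author, body, position) VALUES (?, ?, ?)",
                       (author, body, pos))
    conn.execute("INSERT INTO changes (post_id, action) VALUES (?, 'add')",
                 (cur.lastrowid,))
    conn.commit()
    return cur.lastrowid
```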

Camera operators followed one or two participants in each group, recording their walks through the building. We asked all participants to think aloud as they worked with their handheld computers so that we could better gauge their reactions to the task. At the end of the task, we videotaped interviews with the participants. Participants then completed questionnaires containing 24 items with a seven-point Likert-type response measure and three open-ended questions.

Trial breakdown

The pilot group consisted of three male participants. As mentioned earlier, this group encountered irrelevant clues as distractors, which we later replaced with duplicates. Furthermore, we didn’t give this group the premise at the outset; instead, we disguised it as an additional clue in the field, as we discuss in more detail later. The group commented on the lack of automatic refreshing, the interface’s slow scrolling speed, and general frustration with the software. They divided the building’s floors among themselves, assigning one person to each floor. The participant who discovered and entered a clue was accountable for understanding its contents and rapidly responding to teammates’ questions about the clue’s content using walkie-talkies. The group also delegated particular tasks to individual team members.


Figure 1. A sample clue. Clues are mounted to walls and tables throughout the building.


As a direct result of our observation of participants during the pilot study, we modified the empirical method for the subsequent three full study groups. At the beginning of the pilot study experiment, we simply asked the participants “Who eats lo mein?” and let them determine the rest of the riddle on their own. Their responses in the questionnaire showed that they found the task very difficult. This indicated that to make immediate productive use of a handheld computer in a mobile and collaborative environment, users must have a cognitive scaffolding of sorts—that is, a clear goal and well-understood instructions. Similarly, well-structured tasks can make better use of handheld computers than ill-defined ones.

The first group in the full study consisted of only two people, as one participant failed to arrive. Both participants were female. One participant seemed especially dispassionate about the task but also exhibited knowledge of which clues she had encountered earlier in the task. She simply skipped over these clues and rapidly discarded distractors. However, she was not thorough in exploring the building, and therefore her team failed to find several clues. They didn’t split up the building in terms of work, but instead wandered individually with little communication as to who would be responsible for which clue. They moved clues that had relevance to one another closer together on the Web blog as well.

The second full group consisted of three male participants. They immediately divided the building among themselves and began collecting clues on their assigned floors. They created an aggregate post that served as a “good copy” of their knowledge thus far. They attempted to draw a picture in one of the blog posts using textual symbols as a substitute for lines (that is, ASCII art), but seemed to disregard it after a time as useless. In this group, one of the participants seemed to assume a leadership role. This participant began to assign tasks to the other two team members, while he maintained the good-copy post and thought aloud the most.

Like the second group, the third and final full group consisted of three male participants. This group also split the floors among themselves. They immediately recognized the type of problem when they were given the premise. After collecting most of the clues, this group accidentally deleted the post that contained their aggregated knowledge. To remedy this, they sent one person out to gather all the information again while the other two tried their best to complete the puzzle.

From these observations, we noticed particular flaws in our blogging software that we would have otherwise likely disregarded. The accidental deletion of clues is an excellent example. As observers, we saw this as a red flag—we needed to design a way for the users to retrieve lost posts.
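One plausible remedy, continuing the hypothetical sqlite3 sketch from earlier, is to soft-delete posts rather than remove their rows, so that a restore action can recover them. This is offered only as an illustration of the kind of fix the observation suggests, not as the design the authors adopted.

```python
import sqlite3

# Soft deletion: a 'deleted' flag hides posts without destroying them, so
# an accidental deletion like the third group's is recoverable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT, "
             "deleted INTEGER DEFAULT 0)")

def delete_post(post_id):
    conn.execute("UPDATE posts SET deleted = 1 WHERE id = ?", (post_id,))

def restore_post(post_id):
    conn.execute("UPDATE posts SET deleted = 0 WHERE id = ?", (post_id,))

def visible_posts():
    # The blog page would render only posts that aren't flagged deleted.
    return conn.execute("SELECT id, body FROM posts WHERE deleted = 0").fetchall()
```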

Results and user responses

Of the three full studies and one pilot study, only the second group successfully answered the overall riddle. Table 1 shows responses to the postexercise questionnaire.

Participants’ responses to the questionnaire make clear the flaws in our blogging software. Items with responses near the extremes (+3 or –3) indicate areas that are especially important to address. Although we’ve only used this tool with a single product (a custom blog), our run indicates that the tool was successful in uncovering design flaws that might otherwise have gone unnoticed until deployment. We look forward to trying the scavenger hunt prototyping tool with other products to determine its flexibility as a method.

The data gathered from the scavenger hunt on how to improve our blog came primarily in three forms:

• The questionnaires let us maintain consistency across trials and provided background information on our participants in a formal manner.
• The experimenters’ observations let us monitor users’ progress at any given point in the run. Observations also helped us to understand subtleties in the design that caused emotional responses (such as anger or frustration). By observing users in a constrained, realistic environment, we can capture items that aren’t readily apparent in questionnaires or interviews.
• Semistructured interviews, however, let us direct our line of inquiry at the end of each session on the basis of what had occurred in that session. This let us tease out responses from the participants that, again, weren’t readily observable in a questionnaire. Participants also suggested valuable new features or changes to our blogging software that we hadn’t considered. Because we conducted group interviews, participants could build on each other’s suggestions for improvements, leading to more specific or alternative designs.


The scavenger hunt method’s value lies in its ability to compromise between several extremes. It gives experimenters a good amount of control, but not so much that users are guided to an outcome. Users instead are free to push the software to the boundaries of its use. Furthermore, the scavenger hunt method doesn’t require a trip to the target locale where the software is to be deployed. However, it still offers a fair amount of ecological validity by mimicking the situations faced by workers in the field.

Discussion

Helen Cole and Danaë Stanton offer insight from three case studies that leverage mobile devices for collaboration: KidStory, Hunting of the Snark, and Ambient Wood.2 Because, as they note, people “use their own mobility and the mobility of artifacts to coordinate their collaboration with one another,” it is important to investigate how mobility impacts collaborative problem-solving. Their work does not focus on problem solving specifically, but instead examines collaboration in adjacent areas: learning environments (Ambient Wood), children’s entertainment (Hunting of the Snark), and storytelling (KidStory). Their use of location-based information in Ambient Wood and Hunting of the Snark parallels our own use of clues in the scavenger hunt.

As in Hunting of the Snark, we modeled our prototyping tool as a game. This lets us leverage people’s preexisting knowledge about games. They know that the task is timed and that they need to overcome an obstacle to achieve a goal. Because games are familiar and fun, users feel more comfortable than when in a stilted, strictly controlled laboratory environment. The sense of challenge also encourages participants to remain engaged and active. They are motivated to win throughout the trial.

Because the scavenger hunt acts as an early-stage evaluation tool for iterating through designs of mobile collaborative problem-solving software, developers would likely have to run multiple scavenger hunts each time they change the software. Changes suggested by scavenger hunts will influence the product’s design in future iterations. Eventually, scavenger hunts will fail to reveal new flaws in the software. Then, developers will need to perform actual field tests or deploy the software. In either case, we hope they can benefit from the software tests and debugging in a realistic environment.

Although the scavenger hunt method effectively mimics the challenges of working in the field, it is by no means perfect. Its use can mean settling for a watered-down version of the actual task to be completed. Usability flaws are always possible if the circumstances aren’t identical. What’s more, in the laboratory you can’t compensate for environmental problems resulting from working in the field. Users might be confused by the task or react negatively to the time limit or competitiveness that can arise in the game-like circumstances. The scavenger hunt, like any laboratory-based simulation of real-life conditions, doesn’t perfectly mimic the realities workers in the field face. However, its low overhead and realistic setting make it an attractive substitute for field conditions.


Table 1. Postexercise questionnaire responses. Ratings range from –3 (strongly disagree) to +3 (strongly agree). Average response calculated from eight participants in three full studies (pilot study data not included).

No. | Question text | Average response
1 | Prior to today, I had extensive experience with handheld computers. | –1.625
2 | Prior to today, I had extensive experience with mobile telephones (cell phones). | –1.625
3 | I found it relatively easy to enter clue information into my handheld computer. | 0.625
4 | I felt I was able to control the sharing and receiving of clues efficiently. | –0.125
5 | At all times, I was generally aware of the location of each of my teammates. | 1.125
6 | Most of the time I was unsure what each of my teammates was doing. | 0
7 | I became frustrated because I worked on a task and later found out that someone else had already completed it. | –0.5
8 | I found it easy to rearrange the order of the clues as displayed on my handheld. | –2
9 | I became frustrated trying to share clues with my teammates. | 0.625
10 | I thought this task was difficult to complete. | 1
11 | I thought the handheld computer helped me organize information effectively. | –1.75
12 | I would have liked a space to work on the puzzle that was not visible by my teammates. | –0.875
13 | The handheld computer distracted me from other tasks. | –1.5
14 | I would have liked to draw pictures during the task. | 2.75
15 | I would have liked to make a table during the task. | 3
16 | I would probably only use a handheld computer if I were working alone. | 0
17 | I would not want to use a handheld computer for group work at school. | –1
18 | I feel all team members contributed equally to the completion of the task. | 2.75
19 | If I were to complete this task again, I would want to use a handheld computer with the same software I used today. | –2.25
20 | If I were to complete this task again, I would want to use a handheld computer more than paper and pencil. | –1.125
21 | If I were to complete this task again, I would want to use a mobile phone more than a handheld computer. | –0.5


The scavenger hunt is a lightweight, cost-effective way to prototype a project while maintaining conditions similar to those faced in the field. Further work in this area will strive to improve the collaborative problem-solving software for mobile users and to refine the scavenger hunt model for evaluating design. Our method improved rapidly from initial experimental design, to the pilot study, to our user studies. We believe that more user studies will help us refine the empirical method. We’re interested in hearing from practitioners and designers who use scavenger hunts to test their products and hope we’ve provided a foundation that developers can build on to create better prototyping tools.

ACKNOWLEDGMENTS

We thank Matthew Dalius, Michael Race, and Brent Schooley for technical assistance; Janet Montgomery for testing the riddle and feedback; Matthew Peters and Cecelia Merkel for recruitment assistance; and the members of Penn State’s Computer-Supported Collaboration and Learning lab for their support and accommodation.

REFERENCES

1. L. Liu and P. Khooshabeh, “Paper or Interactive? A Study of Prototyping Techniques for Ubiquitous Computing Environments,” Proc. Conf. Human Factors in Computing Systems (CHI), ACM Press, 2003, pp. 1030–1031.

2. H. Cole and D. Stanton, “Designing Mobile Technologies to Support Co-Present Collaboration,” Personal and Ubiquitous Computing, vol. 7, no. 6, 2003, pp. 365–371.

THE AUTHORS

Michael Massimi is a graduate student with the Dynamic Graphics Project in the Department of Computer Science at the University of Toronto. His research interests include assistive technologies for the cognitively impaired, computer-supported collaborative work, ubiquitous computing, and human-computer interaction. He has a BS in computer science from the College of New Jersey. He’s a student member of IEEE and the ACM. Contact him at [email protected].

Craig H. Ganoe is a senior research associate with the College of Information Sciences and Technology at Pennsylvania State University. His research interests include human-computer interaction, in particular applied to mobile computer-supported cooperative work, community computing, and collaborative learning. He has an MS in computer science from Virginia Tech. He’s the information director for the ACM Transactions on Computer-Human Interaction and a member of the ACM. Contact him at [email protected].

John M. Carroll is the Edward M. Frymoyer Chair Professor of Information Sciences and Technology at Pennsylvania State University. His research interests include methods and theory in human-computer interaction, particularly as applied to networking tools for collaborative learning and problem solving, and the design of interactive information systems. He serves on several editorial boards for journals, handbooks, and series and is editor in chief of the ACM Transactions on Computer-Human Interaction. He is a fellow of the ACM, the IEEE, and the Human Factors and Ergonomics Society. Contact him at [email protected].
