Detecting Concealment of Intent in Transportation Screening: A Proof of Concept



IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 10, NO. 1, MARCH 2009 103


Judee K. Burgoon, Douglas P. Twitchell, Matthew L. Jensen, Thomas O. Meservy, Mark Adkins, John Kruse, Amit V. Deokar, Gabriel Tsechpenakis, Shan Lu, Dimitris N. Metaxas, Jay F. Nunamaker, Jr., and Robert E. Younger

Abstract—Transportation and border security systems have a common goal: to allow law-abiding people to pass through security and detain those people who intend to harm. Understanding how intention is concealed and how it might be detected should help in attaining this goal. In this paper, we introduce a multidisciplinary theoretical model of intent concealment along with three verbal and nonverbal automated methods for detecting intent: message feature mining, speech act profiling, and kinesic analysis. This paper also reviews a program of empirical research supporting this model, including several previously published studies and the results of a proof-of-concept study. These studies support the model by showing that aspects of intent can be detected at a rate that is higher than chance. Finally, this paper discusses the implications of these findings in an airport-screening scenario.

Index Terms—Concealment, kinesic analysis, message feature mining, security, speech act profiling.

Manuscript received March 6, 2007; revised May 7, 2008 and October 6, 2008. First published February 3, 2009; current version published February 27, 2009. This work was supported in part by the U.S. Air Force Office of Scientific Research under the U.S. Department of Defense University Research Initiative Grant F49620-01-1-0394 and in part by the U.S. Department of Homeland Security under Cooperative Agreement N66001-01-X-6042. The Associate Editor for this paper was D. Zeng.

J. K. Burgoon is with the Center for Identification Technology Research, The University of Arizona, Tucson, AZ 85719-4427 USA (e-mail: [email protected]).

D. P. Twitchell is with the School of Information Technology, Illinois State University, Normal, IL 61790-5150 USA (e-mail: [email protected]).

M. L. Jensen is with the Management Information Systems Division, Price College of Business, University of Oklahoma, Norman, OK 73019 USA (e-mail: [email protected]).

T. O. Meservy is with the Department of Management Information Systems, University of Memphis, Memphis, TN 38152 USA (e-mail: [email protected]).

M. Adkins is with Accenture, Sahuarita, AZ 85629 USA (e-mail: [email protected]).

J. Kruse is with AK Collaborations, Sahuarita, AZ 85629 USA (e-mail: [email protected]).

A. V. Deokar is with the College of Business and Information Systems, Dakota State University, Madison, SD 57042 USA (e-mail: [email protected]).

G. Tsechpenakis is with the Center for Computational Sciences, University of Miami, Coral Gables, FL 33146 USA (e-mail: [email protected]).

S. Lu is with Topcon Medical Systems, Inc., Paramus, NJ 07652 USA (e-mail: [email protected]).

D. N. Metaxas is with the Computational Biomedicine Imaging and Modeling Center, Rutgers University, Piscataway, NJ 08854 USA (e-mail: [email protected]).

J. F. Nunamaker, Jr. is with the Center for the Management of Information, The University of Arizona, Tucson, AZ 85719-4427 USA (e-mail: [email protected]).

R. E. Younger is with Space and Naval Warfare (SPAWAR) Systems Center, San Diego, CA 92152-5001 USA (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TITS.2008.2011700

I. INTRODUCTION

SAFEGUARDING the homeland against deception and infiltration by adversaries who may be planning hostile actions poses one of the most daunting challenges of the 21st century. Achieving high information assurance is complicated not only by the speed, complexity, volume, and global reach of communications and information exchange that current information technologies now afford but by the fallibility of humans to detect hostile intent as well. All too often, the people protecting our borders, transportation systems, and public spaces are handicapped by untimely and incomplete information, overwhelming flows of people and materiel, and the limits of human vigilance.

The interactions and complex interdependencies of information systems and social systems render the problem difficult and challenging. We simply do not have the wherewithal to specifically identify every potentially dangerous individual around the world. Although completely automating concealment detection is an appealing prospect, the complexity of detecting and countering hostile intentions defies a fully automated solution. A more promising approach is to integrate improved human detection with automated tools for behavioral analysis that augment other biometric systems, the end goal being a system that singles out individuals for further scrutiny in a manner that reduces false positives and false negatives.

Transportation and border security systems have a common goal: to allow law-abiding people to pass through checkpoints and detain those people with hostile intent. These systems employ a number of security measures that are aimed at accomplishing this goal. The methods and technologies described in this paper are intended to be useful, someday, in prescreening, primary screening, and secondary screening activities. We begin by distinguishing between deception and intent detection, and by reviewing alternative detection methods that have been investigated. Next, we introduce a theoretical model of intent concealment and review empirical evidence from a program of study that has been undertaken at The University of Arizona, Tucson, to understand deception, concealment, and their detection. These studies show that verbal and nonverbal automated methods have the potential to be useful in detecting deception, including concealment. We also report the methods and the results of a proof-of-concept study showing the potential of automated kinesic analysis in tracking behaviors indicative of concealment. Finally, we discuss how these methods might be used in an airport-screening scenario to augment current security measures.

1524-9050/$25.00 © 2009 IEEE


II. DECEPTION, CONCEALMENT, INTENT, AND BEHAVIOR

Deception is defined as a message that is knowingly transmitted with the intent to foster false beliefs or conclusions [1]. Thus, concealment is one strategy a deceiver may use to mislead. Over the past several years, the Center for the Management of Information, The University of Arizona, has conducted over a dozen experiments to study deception with over 3500 subjects [2]–[5]. These experiments have been instrumental in understanding the factors influencing deception and have guided the building of automated tools for detecting deception and the creation of training for security personnel [6], [7]. Inasmuch as a person will most likely be deceptive about hostile intentions, research in deception detection has led to the question of whether concealed malicious intent can be inferred from deception cues in communication.

The quest for the perfect lie detector or truth serum has resulted in only a few modest successes. The most common and probably the most controversial method of deception detection is the use of the polygraph, which is commonly known as the “lie-detector test.” In a summary of laboratory tests, Vrij [8] reports that the polygraph is about 82% accurate at identifying deceivers. The National Academy of Sciences, however, concluded that such figures often overestimate actual results, particularly in personnel screening [9]. Although it is not admissible in court, the polygraph is useful in some investigations for identifying potential suspects and is relatively accurate compared to other methods. Nevertheless, as a screening tool, the polygraph suffers from being an invasive and time-consuming procedure that requires highly trained examiners to interpret results. These qualities render it an infeasible option for use in most everyday situations or large-scale screenings.

Other techniques, such as statement validity analysis methods, criteria-based content analysis (CBCA), and reality monitoring (RM), are based on the content of interviews with subjects rather than on physiological arousal, as with the polygraph. These methods, which require a structured interview with the subject suspected of being deceptive, are still intrusive, although not physically invasive. They also require highly trained interviewers to conduct the interview and highly skilled analysts to review and interpret the statements. Finally, they lack the prospect of returning real-time or rapid results. In a face-to-face study of 73 nursing students, Vrij et al. [10] found that the use of CBCA and RM to detect deception was successful at rates of 79.5% and 64.1%, respectively.

Not all deception detection methods are invasive or intrusive. Layered voice analysis, for example, is a technique that analyzes stress-related vocal changes as a measure of arousal and can be conducted without subject awareness from audio recordings. However, despite claims that the technique is roughly equivalent in accuracy to the polygraph [11], systematic research has challenged the reliability of the instruments [12], [13].

Shortcomings in technological detection are matched by human inability to detect deception at rates that are better than chance [14]. Several possible reasons have been given for the lack of detection accuracy, including truth bias, visual distraction, situational familiarity, and idiosyncratic behaviors that cloud true deception cues (see [14, pp. 79–81 and 98–99] for more details).

Searching for deceptive cues in behavior has led us to examine one of the roots of deception—intentions that the sender wants to conceal. Intentions may be manifest by any number of behaviors, and a single behavior could indicate a number of different intentions. The relationship between observable behavior and multiple intentions renders the task of identifying concealment of intent particularly difficult. Hence, our current research focuses on understanding these mappings and leveraging experience in the area of deception detection to produce methods that identify whether the true intent of a person is being concealed.

III. THEORETICAL FOUNDATION

Several theories and models offer useful perspectives on the linkage between concealment and overt behavioral manifestations that elicit trust or suspicion. Three theories that are particularly germane—interpersonal deception theory (IDT), expectancy violations theory (EVT), and signal detection theory (SDT)—are integrated to produce a model of suspicious and trust-eliciting verbal and nonverbal communication. Additionally, we are developing a theory-guided taxonomy for clustering verbal and nonverbal behaviors into appropriate groupings of suspicious and nonsuspicious behaviors to identify those who have the highest probability of harboring concealed malicious intent.

IDT is a key theory for mapping behavioral cues into general behavioral characteristics of deception [15]. IDT depicts the process-oriented nature of interpersonal deception and the multiplicity of preinteractional and interactional factors that are thought to influence it. Among its relevant precepts is the assumption that deception is a strategic activity subject to a variety of tactics for evading detection. It also recognizes the influence of receiver behaviors on sender displays, and it views deception as a dynamic and iterative process and as a game of moves and countermoves that enable senders to make ongoing adaptations that further hamper detection. Consequently, a theory of suspicious and trust-eliciting behavior must take into account a variety of moderator variables, each of which may spawn a different behavioral profile.

EVT is concerned with what nonverbal and verbal behavior patterns are considered normal or expected, what behaviors constitute violations of expectations, and what consequences violations create [16]. Its proponents contend that specific behavioral cues are less useful than knowing whether a sender’s behavior conforms to or violates expected behavioral patterns and that receivers are more likely to be attuned to such violations. In other words, it is more useful to classify communication according to whether it includes behavioral anomalies, deviations from a baseline, or discrepancies among indicators. Behavioral patterns that include deviations and anomalies are predicted to influence receiver judgments of credibility and deceit. The theory distinguishes between positive and negative violations. Positive violations may actually foster perceived trustworthiness and credibility, whereas negative violations should foster suspicion.


BURGOON et al.: DETECTING CONCEALMENT OF INTENT IN TRANSPORTATION SCREENING 105

The process of interpreting different verbal and nonverbal cues and clustering them together in the form of behavioral characteristics to contrast with the expected behavioral characteristics is nontrivial and challenging, considering the large variation in the behaviors of different human beings.

Finally, a threshold for deriving the level of suspicion or trust is based on SDT. Developed by Green and Swets [17], SDT defines two sets of probabilities in a signal detection task, in which two possible stimulus types must be discriminated. In this context, the two stimulus types are truthfulness and deception.

According to SDT, the output of such a binary test is based on the value of a decision variable, which, in the context of concealment identification, is the suspicion level. The threshold value of the decision variable is called the criterion. For humans, the selection of a criterion is related not only to the value of actual stimuli but also to their psychological characteristics. In other words, the criterion is a function of perceived stimuli, which, in the context of concealment detection, are the behavioral profile deviations. SDT calculation methods described in [18] can be used to study the distribution of the values of the suspicion-level variable across the behavioral profile deviations to determine the appropriate criterion for the final decision making.
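As a concrete illustration of these SDT calculations, the sketch below computes the standard sensitivity index d′ and criterion c from a detector's hit and false-alarm rates; the rates themselves are invented for illustration, not figures from the paper:

```python
from statistics import NormalDist

# Hypothetical outcome rates for a concealment detector: the fraction
# of deceptive senders correctly flagged (hits) and the fraction of
# truthful senders wrongly flagged (false alarms).
hit_rate, false_alarm_rate = 0.80, 0.20

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

# Sensitivity d': separation between the two stimulus distributions.
d_prime = z(hit_rate) - z(false_alarm_rate)

# Criterion c: the threshold on the suspicion-level decision variable;
# c > 0 indicates a conservative (suspicion-averse) observer.
criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))

print(f"d' = {d_prime:.3f}, c = {criterion:.3f}")
```

With symmetric hit and false-alarm rates, as here, the criterion comes out unbiased (c = 0), while d′ measures how discriminable deceivers are from truth tellers regardless of where the threshold is placed.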

We have integrated these multidisciplinary theories and models into a single systemic framework that guides our experimental work and tool development. The model, shown in Fig. 1, is a decision model for judging how trustworthy an individual is on a trust–suspicion spectrum, based on demonstrated behavioral cues. The actual intent of the individual can be considered as input for the model, which is demonstrated in the form of the behavioral cues, either verbal or nonverbal. These behaviors include linguistic, content, metacontent, kinesic, proxemic, chronemic, and paralinguistic cues. The behaviors are influenced by the interaction of sender and receiver actions, cognition, and their mutual influence.

Linguistic cues include features like word selection, phrasing, and sentence structure. An example is a person demonstrating other-centeredness by consistently not referring to himself or herself. Content/theme cues are taken from the meaning of the sender’s words. Metacontent cues relate to content, but can be described independent of the specifics. For example, the quantity of details measured with RM is a metacontent feature. Kinesic cues are found in the way a person moves. Proxemic cues relate to spacing patterns (e.g., sitting, standing) and interpersonal distance from other people and other objects. For example, sitting in the back row of a meeting might indicate disinterest in the meeting. Chronemic cues concern a person’s use of time. For example, a person might establish dominance by arriving late to a meeting. Paralinguistic cues are obtained from the vocal channel. They include such features as pitch, speaking tempo, and dysfluent speech.

The observed behavioral characteristics of the sender can be compared with the normal or expected characteristics stored in a repository. Unexpected deviations may indicate concealment of malicious intent.

Ideally, the immediate behavioral characteristics are compared with the individual’s historical characteristics across multiple episodes within a given context. When individual-level histories are not available, only the second set of expectations can be utilized. This second set of expectations is composed of a general profile of expected behavior across people within the same scenario. For example, when guilty suspects are questioned face-to-face, they may show a combination of verbal brevity, vocal tension, and overcontrol of movement. The result is that such individuals typically look more tense, unpleasant, aroused, and submissive than those with nothing to hide [19].

The deviation between the observed behavioral characteristics and the expected individual and group characteristics in either positive or negative directions affects judgments of suspiciousness. By setting a proper threshold on this deviation measure, and given a certain context, the automated tool will be able to indicate the probability that the sender is suspicious or trustworthy.
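A minimal sketch of such a deviation measure, assuming a hypothetical baseline profile of per-cue means and standard deviations; the cue names, numbers, and threshold are illustrative, not values from the paper:

```python
import math

# Hypothetical expected (baseline) behavioral profile for a screening
# context: (mean, standard deviation) per cue. All numbers are invented.
baseline = {
    "gestures_per_min": (12.0, 3.0),
    "words_per_sec":    (2.5, 0.6),
    "response_latency": (0.8, 0.3),
}

def suspicion_score(observed: dict) -> float:
    """Root-mean-square of per-cue z-scores: how far observed behavior
    deviates from the expected profile in either direction."""
    zs = [(observed[cue] - mean) / sd for cue, (mean, sd) in baseline.items()]
    return math.sqrt(sum(z * z for z in zs) / len(zs))

THRESHOLD = 2.0  # assumed criterion on the deviation measure

observed = {"gestures_per_min": 3.0, "words_per_sec": 1.2, "response_latency": 1.9}
score = suspicion_score(observed)
print(score, "suspicious" if score > THRESHOLD else "unremarkable")
```

Because the z-scores are squared, positive and negative violations both raise the score, mirroring the model's point that deviations in either direction affect judgments of suspiciousness.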

IV. METHODS FOR DETECTING CONCEALMENT

To begin examining the effectiveness of our model, we developed three automated methods for capturing cues to deception. Two of these (message feature mining and speech act profiling) capture verbal cues, whereas the other (kinesic analysis) captures nonverbal cues. This section describes each of these methods. The verbal methods have the potential to aid transportation security by giving screeners, particularly secondary screeners, feedback concerning potential concealment in security interactions. To work effectively in this context, these verbal deception and concealment detection methods would require effective speech recognition software. We do not discuss automatic speech recognition in this paper; however, current speech-recognition technologies should be sufficient for the requirements of the verbal detection methods since neither requires complete word recognition, and the probabilistic nature of both means that any speech recognition errors are simply (but, of course, undesirably) added to the total error of the system.

A. Message Feature Mining

Message feature mining [7] is a method for classifying messages as deceptive or truthful based on content-independent message features. It can be divided into two major steps—extracting features and classifying messages—each with its own substeps.

Extracting features includes choosing appropriate features that are present in the messages to be classified, determining the granularity of feature aggregation, and calculating the features on the desired text. Of these steps, the most difficult is the selection of appropriate features, given the potentially infinite number of possible features. Choosing features that are most appropriate for classifying deception or concealment requires knowledge of the deception domain. A number of general features have been identified and may be useful in many contexts. Table I (adapted from [3]) shows four example features and how they are calculated.
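To make the extraction step concrete, the sketch below computes a handful of content-independent features in the spirit of Table I; the feature names, word lists, and formulas are illustrative stand-ins, not the paper's actual features:

```python
import re

# Illustrative word lists (assumptions, not taken from the paper).
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
MODAL_TERMS  = {"maybe", "perhaps", "possibly", "might", "could"}

def extract_features(message: str) -> dict:
    """Compute simple content-independent features for one message."""
    words = re.findall(r"[a-z']+", message.lower())
    n = len(words) or 1  # guard against empty messages
    return {
        "word_count": len(words),
        # Other-centeredness: low self-reference may signal distancing.
        "self_reference_ratio": sum(w in FIRST_PERSON for w in words) / n,
        # Uncertainty: hedging terms per word.
        "modal_ratio": sum(w in MODAL_TERMS for w in words) / n,
        "avg_word_length": sum(map(len, words)) / n,
    }

feats = extract_features("Maybe someone left the bag there; I really could not say.")
print(feats)
```

Each feature is computed per message here; in practice, the granularity of aggregation (utterance, message, or whole interaction) is itself one of the design decisions described above.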

Classifying messages involves designating ground truth in a randomly selected training set, preparing data for automatic classification, choosing an appropriate classification method, training and testing the model, and evaluating the results.


Fig. 1. Model of concealment based on observed behavioral cues.

TABLE I
EXAMPLE FEATURES

After the data are ready for classification, appropriate classification methods must be chosen from among several possible methods, each with its own advantages and disadvantages. Previous studies have successfully used a number of statistical methods as well as machine-learning techniques (e.g., [2]).
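As a toy illustration of the classification step, the sketch below trains a nearest-centroid classifier on invented feature vectors; it stands in for, and is far simpler than, the statistical and machine-learning methods the cited studies actually used:

```python
# Nearest-centroid classification over feature vectors: each class is
# summarized by the mean of its training vectors, and a new message is
# assigned to the class with the closest centroid.
def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(samples):
    """samples: {label: list of feature vectors} -> {label: centroid}"""
    return {label: centroid(rows) for label, rows in samples.items()}

def classify(model, vec):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vec))

# Invented training data: [self_reference_ratio, modal_ratio] per message,
# with ground-truth labels assigned as in a training set.
training = {
    "truthful":  [[0.12, 0.02], [0.10, 0.03], [0.14, 0.01]],
    "deceptive": [[0.02, 0.09], [0.01, 0.11], [0.03, 0.08]],
}
model = train(training)
print(classify(model, [0.02, 0.10]))  # nearer the deceptive centroid
```

In a real evaluation, the trained model would be tested on held-out messages and scored against their ground-truth labels, as the substeps above describe.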

B. Speech Act Profiling

Speech act profiling [20] is a method of analyzing and visualizing conversations and participants’ behavior according to how they converse rather than the subject of the conversation.

Speech act theory posits that any utterance (usually a sentence) contains a propositional content part c and an illocutionary force f [21]. The propositional content is the meaning of the words that created the utterance. For example, the statement “it’s cold in here” has the propositional content that the room or area where the speaker is located is cold. The illocutionary force, or speech act, is what the speaker does when speaking. In this case, the speaker asserts that something about the world is true. If the same utterance “it’s cold in here” were uttered by a general in the army to a private, however, it might be an order to turn up the thermostat rather than just an assertion.

Speech acts are important in deception detection for two reasons. First, they are the means by which deception is transmitted, and second, they provide a mechanism for studying deception in conversations in a content-independent manner. Deceptive speakers may express more uncertainty in their messages than truth tellers [19], and this uncertainty can be detected in the type of speech acts speakers use. For example, uncertain speakers should tend to use more opinions, maybe expressions, and questions than truth tellers.

Methods have been created to automatically identify and extract speech acts [22]. One approach uses a manually annotated corpus of conversations to train n-gram language models and a hidden Markov model (HMM), which, in turn, identify the most likely sequence of speech acts in a conversation. Rather than classifying each utterance as a single speech act, speech act profiling extracts a list of all possible speech acts and their probabilities from the HMM. It then aggregates these probabilities together and subtracts from them a “normal” conversation profile (created from the training corpus) to create a profile for an entire conversation. An example profile is shown in Fig. 2.
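The aggregation step can be sketched as follows. The tags mirror those discussed in the text (sd, sv, maybe), but the per-utterance probability distributions and the "normal" profile are invented; in the real system they would come from the HMM tagger and the training corpus, respectively:

```python
from collections import Counter

# Each utterance contributes a full probability distribution over
# speech-act tags (invented values standing in for HMM output).
utterance_probs = [
    {"sd": 0.6, "sv": 0.3, "maybe": 0.1},  # mostly a statement
    {"sd": 0.2, "sv": 0.5, "maybe": 0.3},  # opinion-like, hedged
    {"sd": 0.1, "sv": 0.4, "maybe": 0.5},  # uncertain
]

def profile(prob_dists):
    """Average the per-utterance distributions into one conversation profile."""
    total = Counter()
    for dist in prob_dists:
        total.update(dist)  # adds probability mass per tag
    n = len(prob_dists)
    return {tag: p / n for tag, p in total.items()}

# A "normal" conversation profile (illustrative values).
normal = {"sd": 0.5, "sv": 0.3, "maybe": 0.2}

conv = profile(utterance_probs)
deviation = {tag: conv.get(tag, 0) - normal.get(tag, 0) for tag in normal}
print(deviation)
```

A conversation that deviates toward opinions (sv) and maybe expressions and away from statements (sd), as this toy one does, would show the uncertainty pattern associated with deceivers.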

Fig. 2(a) is a speech act profile created from all of the utterances from a single multiplayer online game, StrikeCom [23]. One of the players—Space1—has been told to deceptively steer the group away from bombing the correct locations (the goal of the game) as well as to conceal her intentions. In this particular game, the profile indicates that the participant playing Space1 is uncertain compared to the other participants—Air1 and Intel1—as indicated by the greater number of MAYBE/ACCEPT-PARTs (maybe) and OPINIONs (sv), as magnified in Fig. 2(b), and fewer STATEMENTs (sd), as magnified in Fig. 2(c). For example, the following excerpts from one game illustrate a pattern of uncertain language. Early in the game, Space1 hedges the comment “i got a strike on c2” with the comment “but it says that it can be wrong...” Later, Space1 qualifies her advocacy of grid space e3 with “i have a feeling.” In reality, there was no target at e3, and Space1 was likely attempting to deceive the others as instructed.

Fig. 2. (a) Sample speech act profile showing the uncertain behavior by the deceiver as indicated by the (b) greater number of MAYBE/ACCEPT-PARTs (maybe) and OPINIONs (sv) and (c) fewer STATEMENTs (sd).

C. Kinesic Analysis

1) Location Estimation of the Head and the Hands: Kinesic analysis operates by identifying the nonverbal behavior that has been linked with deception. Sample behaviors include a cessation of gesturing, postural shifts, and nervous fidgeting [19]. Central to the recognition of these nonverbal signals is the ability to recognize and track body parts such as the head and the hands in a video. This issue has been investigated in the past (see [24]), and our use of “blob analysis” provides a useful approach to examining human movement [25], [26]. Using color analysis, eigenspace-based shape segmentation, and Kalman filters, we have been able to track the position, the size, and the angle of different body parts with great accuracy. Fig. 3 shows a single frame of a video that has been subjected to blob analysis. The ellipses in the figure represent the body parts’ position, size, and angle.

Blob analysis extracts hand and face regions using the color distribution from an image sequence. A 3-D lookup table (LUT) is prepared to set the color distribution of the face and the hands. This 3-D LUT is created in advance using skin-color samples.

Fig. 3. Blobs capture the location of the head and the hands.

Fig. 4. Sample blob measurements and blob data.

After extracting the hand and face regions from an image sequence, the system computes elliptical “blobs” identifying candidates for the face and the hands. The 3-D LUT may incorrectly identify candidate regions that are similar to the skin color; however, these candidates are disregarded through fine segmentation and the comparison of the subspaces of the face and hand candidates. Thus, the most facelike and handlike regions in a video sequence are identified. From the blobs, the left hand, the right hand, and the face can be tracked continuously. For a detailed description of this process, see [27].

From each blob, a number of measurements are recorded for each frame in an image sequence. As demonstrated in Fig. 4, the center of the blob is captured as x and y coordinates. These coordinates are based on the pixels contained in each frame. Furthermore, the lengths of the major and minor axes of the ellipse are recorded in pixels. Finally, the angle of the major axis of the blob is recorded. Fig. 4 contains a small example data stream from a single blob [28].
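A per-frame blob record of this kind might look like the following sketch (the field names are ours, not from the paper); a simple derived quantity is the frame-to-frame speed of the blob center, which the kinesic analysis uses later:

```python
from dataclasses import dataclass

@dataclass
class BlobMeasurement:
    """One blob's measurements for one video frame (illustrative field names)."""
    frame: int     # frame index within the sequence
    cx: float      # x coordinate of the blob center, in pixels
    cy: float      # y coordinate of the blob center, in pixels
    major: float   # length of the ellipse's major axis, in pixels
    minor: float   # length of the ellipse's minor axis, in pixels
    angle: float   # angle of the major axis, in degrees

def center_speed(prev: BlobMeasurement, curr: BlobMeasurement) -> float:
    """Euclidean displacement of the blob center between consecutive frames."""
    return ((curr.cx - prev.cx) ** 2 + (curr.cy - prev.cy) ** 2) ** 0.5
```

A stream of such records, one per blob per frame, is the raw input to the behavioral-state calculation described in Section VI.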

From the positions and the movements of the hands and the face, we can make further inferences about the trunk and relations to other people and objects. This allows the identification of gestures, posture, and other body expressions [27], [29].

Although methods based on color analysis offer a great deal of precision in tracking the head and the hands, there are drawbacks to such an approach. First, the process is more time intensive than other gesture-recognition methods, and the color analysis requires complex initialization that is not found with other methods [30]. Furthermore, the process can be disrupted by significant occlusion, such as a subject wearing gloves. However, in a controlled setting, such as indoors at an airport checkpoint, color analysis offers an effective foundation on which behavioral analysis is possible.

V. PREVIOUS EXPERIMENTAL FINDINGS

A. Message Feature Mining

Message feature mining was first tested in a desert survival experiment [2]. The experiment placed groups of two in a situation where they ranked 12 items according to how important the items are to survival in the desert. Before beginning the task, all of the subjects were given expert advice on how to survive in the desert, and one member of a subset of the groups was instructed to deceive. Group members discussed the items and attempted to come to a consensus on how to rank them. The deceptive members were encouraged to change the group's consensus contrary to their own opinions without the other group members discovering the deception.

The data consisted of all messages sent by the participants. Each message was manually classified as deceptive or truthful based on whether the participant was instructed to be deceptive. Using message feature mining, the deceiver was correctly identified in 70%–80% of the cases, with variability in accuracy rates coming from the classification method [3], [31].
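As a rough sketch of the kind of per-message features such a system mines (the actual feature set and classifiers are given in [3] and [31]; the cue lexicons below are our own illustrative stand-ins):

```python
import re

# Illustrative cue lexicons; deception research uses much richer feature sets.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
HEDGES = {"maybe", "perhaps", "probably", "possibly", "guess"}

def message_features(text: str) -> dict:
    """Extract a few per-message linguistic features for a downstream classifier."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)  # avoid division by zero on empty messages
    return {
        "word_count": len(words),
        "first_person_ratio": sum(w in FIRST_PERSON for w in words) / n,
        "hedge_ratio": sum(w in HEDGES for w in words) / n,
    }
```

Feature vectors like these would then be fed to standard classifiers; comparing several classification methods is what produced the spread in accuracy rates.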

Message feature mining has also been used to examine deception in face-to-face interviews [32], [33]. Both of these studies required some experiment participants to commit a mock crime and later be interviewed by a trained interrogator. The interviews were then transcribed, and the statements from the innocent participants were compared with the statements from the guilty participants.

B. Speech Act Profiling

To test speech act profiling's ability to aid in deception detection, Twitchell et al. [34] identified those speech acts that are related to uncertainty and summed their proportions for each participant in each conversation. They found that the deceptive participants in the conversations were significantly more uncertain than their fellow players. An example of uncertain speech acts from this study was previously shown in Fig. 2. These results show that using speech act profiling as part of deception detection in text-based synchronous conversations is promising.
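The scoring step can be sketched as below; the tag names are placeholders for whichever dialogue-act tags [34] treats as uncertainty-related:

```python
# Placeholder set of uncertainty-related dialogue-act tags (an assumption).
UNCERTAIN_ACTS = {"maybe", "hedge", "opinion"}

def uncertainty_proportion(tagged_utterances: list[tuple[str, str]]) -> float:
    """Proportion of a participant's utterances whose speech-act tag signals uncertainty,
    given (utterance_text, tag) pairs for one participant in one conversation."""
    tags = [tag for _, tag in tagged_utterances]
    return sum(t in UNCERTAIN_ACTS for t in tags) / max(len(tags), 1)
```

By the finding in [34], a deceptive participant would tend to score higher on this proportion than the other players in the same conversation.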

C. Kinesic Analysis

Kinesic analysis has been successfully used to identify deception in a number of experiments involving mock thefts [28], [33]. In each experiment, a randomly selected group was required to perform a mock crime, and then both innocent and guilty participants were interviewed by trained interviewers. Kinesic analysis was performed on the segments of video containing the descriptions of the crime, and these analyses yielded accuracy rates of up to 71% [28].

VI. CURRENT EXPERIMENTAL FINDINGS

Past work in kinesic analysis examined which cues may be diagnostic when judging an interaction. Although this contribution is useful, more must be done to capture behavioral norms and deviations from norms that are critical for detecting concealment (as shown in Fig. 1). To this end, a proof-of-concept study was conducted to investigate the automatic identification of agitation, overcontrol, and relaxation. Agitation and overcontrol are frequently associated with deception [8]. Related to agitation are manifestations of nervousness and fear [35]. One example of nervous behavior is fidgeting [36]. Liars may be aware of behavioral cues, such as fidgeting, that might reveal their deception. In an effort to suppress deceptive cues and appear truthful, liars may overcompensate and dramatically reduce all behavior [8], [19]. Such tenseness and overcontrol can be seen in decreased head movements [37], leg movements [38], and hand and arm movements [10].

Fig. 5. Noticeable differences exist in the changes of velocities of the hands and the head between the states.

Using our model presented in Fig. 1, a baseline of behavior was established for agitated, relaxed, and overcontrolled behavioral states using a set of three professional actors and two student participants. Recorded video segments were subjected to blob analysis, and the data from each video frame, as well as the velocity of the hands' movements, the frequency of the hands touching the face, and the frequency of the hands coming together, were used to calculate the behavioral state being displayed. Fig. 5 graphically displays some sample data and sample frames from the video segments. The figure shows the velocity of the head and the hands for each frame of the video. In the agitated state, the change in the head and hand positions is rapid and frequent. In the controlled state, the change in the head and hand positions is slow and infrequent, and the relaxed state shows moderate changes in position and velocity.

The data captured in the blob analysis from the video clips were then used to calculate the behavioral state associated with the movement. The state is calculated as

State = [W1 · F1 + W2 · (F2 + F3)] · F0    (1)

where F1 is the variance of the head velocity Vhead, i.e., F1 = var(Vhead), and Fi = var(Vhand(i))/var(Phand(i)), i = 2, 3, with Vhand(i) and Phand(i) denoting a hand's velocity and position, respectively.

Also, W1 and W2 are weights associated with the head and hand parameters, which are defined as

W1 = 1 / var^2(Phead)    (2)

where Phead is the position of the head, and

W2 = 1 / (f_hand-face + f_hand-hand)    (3)

where f_hand-face is the frequency of a hand touching the face, and f_hand-hand is the frequency of the hands touching each other.

The weights defined in (2) and (3) have the following meaning. From our observations, we could not discern whether a subject was agitated or relaxed from the head movements, which are usually rapid in both cases. Thus, when the head moves abruptly and often, we do not take it into consideration in our results. Also, the more often two blobs merge into one, i.e., the more often the hands touch each other or a hand touches the face, the less information we have about the hand movements outside these events (time segments), and thus, the respective positions and velocities are less useful.

Finally, the parameter F0 is used as a normalization factor, i.e.,

F0 = f_hand-face / D_hand-face    (4)

where D_hand-face is the duration (the number of frames) of the event "hand on face." After normalization to the range between 0.0 and 1.0, we can obtain a rough estimate of the state, as shown in Table II [27].

TABLE II: BEHAVIORAL STATES OF ACTORS
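Taken together, (1)–(4) amount to the following computation. This sketch assumes scalar per-frame position and velocity series (the paper's 2-D positions would first be reduced, e.g., to coordinate-wise series), so it illustrates the formula rather than reproducing the authors' exact code:

```python
import numpy as np

def behavioral_state(v_head, p_head, v_hands, p_hands,
                     f_hand_face, f_hand_hand, d_hand_face):
    """Raw behavioral-state score from (1), with weights (2)-(3) and factor (4).

    v_head, p_head: per-frame head velocity/position series.
    v_hands, p_hands: pairs of per-frame series, one per hand.
    f_hand_face, f_hand_hand: counts of hand-face and hand-hand touch events.
    d_hand_face: total duration, in frames, of the "hand on face" events.
    """
    F1 = np.var(v_head)                                   # variance of head velocity
    F2, F3 = (np.var(v) / np.var(p) for v, p in zip(v_hands, p_hands))
    W1 = 1.0 / np.var(p_head) ** 2                        # eq. (2)
    W2 = 1.0 / (f_hand_face + f_hand_hand)                # eq. (3)
    F0 = f_hand_face / d_hand_face                        # eq. (4)
    return (W1 * F1 + W2 * (F2 + F3)) * F0                # eq. (1)
```

The resulting score would then be rescaled to the 0.0–1.0 range and compared against the state thresholds of Table II.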

Using these state thresholds, the state was accurately determined for the two student participants and the three professional actors displaying agitated, controlled, and relaxed behaviors.

Clearly, automatically judging behavioral states is very difficult. Although this proof-of-concept study is simplistic in calculating behavioral states from the observed movement, it does show that such an approach may be possible. To gain more acceptable results, a more flexible model must be created for behavioral state determination, and more features and cues should be included in the model.

VII. FUTURE APPLICATION TO TRANSPORTATION SECURITY

Both verbal and nonverbal methods for concealment and deception detection have the potential to be useful in transportation security. In airport and border screening, at least one instance of face-to-face verbal interaction occurs. With adequate speech recognition (which is continuously improving), message feature mining and speech act profiling, along with other methods such as vocalic analysis, could be combined to aid primary screeners in detecting those persons who are attempting to conceal hostile intent. However, since relatively little is often said in a primary screening, the verbal methods should have greater success in a secondary screening, where an interview-style conversation occurs. The nonverbal methods described above could also help during the primary and secondary screening activities but could additionally be of great service during prescreening and postscreening surveillance.

Fig. 6. Airport security layers. (Bolded boxes) Layers amenable to nonverbal analysis. (Double-bordered boxes) Layers that could be subject to both verbal and nonverbal analyses.

A possible application for these methods is in an airport scenario. Aviation security is perhaps the most familiar transportation security scenario. Most security systems are implemented in layers. No single layer of security is expected to thwart all attempts at a security breach. Instead, each layer lowers the probability that a breach will occur. Aviation security operates on this principle. The layers begin with law-enforcement agencies searching out and apprehending potential threats to aviation security. They end with such measures as reinforced cockpit doors, flight-crew training, and professional air marshals. Between these layers lies security in the airport itself.

Fig. 6 depicts some of the major layers that are involved in passenger screening at airports. The first layer, ticket purchase, might occur inside or outside the airport, and the transaction only sometimes involves a verbal exchange. This verbal exchange has the potential to be subject to analysis; however, any analysis would likely have to be done post hoc, since training ticket agents to use a deception detection system would be extremely challenging. After a ticket purchase in the U.S., passenger information obtained during the ticket-purchase process is sent to a system that prescreens the passenger. This system checks the passenger information against watch lists of known threats to U.S. security and assigns the travelers security risk ratings based on confidential criteria. Some of the information sent to the prescreening system is also collected at check-in time, i.e., the next layer.

Check-in provides another layer where passengers are screened using the prescreening system, and verbal analysis could be used here but is unlikely to be, for the same reasons as at the ticket-purchase layer. Nonverbal analysis, however, could begin at this point. Cameras placed around the airport could track the movements of individuals at most layers of the security process. Surveillance while travelers are waiting immediately before screening could provide an opportunity for identifying concealment using blob analysis. At this point, travelers are slowed and often stopped in their progress toward their gate, reducing the need to track forward movement and allowing a more controlled environment. Screening areas could be designed so that only one person is in the field of view of the camera at one time. Additionally, the anxiety of being screened may elicit more of the behaviors of interest from a suspect than other areas of the airport would. Although prescreening may be a productive area, surveillance and nonverbal analysis could continue through the remainder of the process until boarding.

Fig. 7. Process of augmenting verbal deception detection in the secondary-screening portion of Fig. 6.

Unlike verbal analysis, nonverbal analysis does not necessarily require the airline or security agents to interact with the concealment-detection system. Instead, trained experts in a control room could use the system to create alerts when suspicious activity occurs, which would require further manual analysis to ascertain any threat.

The ability to detect concealment or deception becomes even more critical when a suspicious individual has been identified during primary screening, which may happen at a ticketing counter, at a metal detector, or through preliminary verbal or nonverbal analysis. Often, such an individual is asked to undergo some form of secondary screening. The secondary screening usually takes the form of an interview-style conversation, where the suspicious individual is asked several pointed questions. During the interview, the agent must decide the validity of the individual's responses and whether the individual should be allowed to proceed. Because even trained interviewers have difficulty detecting deception, it would be useful for the interviewer to be augmented with unbiased feedback concerning the deceptive potential of the interviewees. Fig. 7 illustrates how the verbal and nonverbal methods discussed in this paper could augment human agents in interviewing suspicious individuals in secondary screening.

During the secondary screening, if the interviewee does not admit guilt to any offense, the interviewer must determine whether the interviewee is concealing hostile intent. A number of interviewing methods are used by law enforcement and others to identify hostile intent. These interviewing methods can be augmented by a computerized concealment-detection system. The system uses automatic speech recognition to convert the speech into usable text and then feeds that text and the audio/video stream into a concealment-detection system. These behavioral input streams are run through methods such as message feature mining, speech act profiling, and kinesic analysis, each of which provides interviewing agents with feedback to help them determine the credibility of the subject. For example, if the interviewee attempts to equivocate in response to the questions posed or if the nonverbal behavior of the individual is overly rigid, the concealment-detection system could detect the uncertainty expressed and alert the interviewer. The interviewer could then use this information to pursue a more extensive line of questioning than he or she would have done otherwise. Although the system provides some input, the human agent still makes the final judgment concerning what to do with the subject.

The security scenario at border crossings is similar to the airport scenario. Instead of ticketing, check-in, screening, and boarding, border crossings often have only primary and secondary screenings. Verbal and nonverbal analyses could augment human abilities during these screenings.

VIII. CONCLUSION

The concealment-detection methods described in this paper could be implemented as part of a comprehensive system for preventing hostile threats to transportation systems and, more generally, to homeland security. Each part of the comprehensive system represents a layer of security that reduces the probability of threatening actions. As part of an integrated system, concealment detection, as described here, should be tested both independently and together with other security measures.

It is evident that the concealment-detection methods are largely still exploratory in nature. Much needs to be done to improve and validate the concealment-detection model and methods.

The development of a fusion engine to combine the remaining nonverbal cues with previously identified text-based and kinesic indicators is one way to strengthen reliability in concealment detection and is depicted in the model in Fig. 1. The data streams from each type of indicator of concealment, both verbal and nonverbal, should be combined for more robust detection of concealment.

One issue in field-testing any concealment-detection system in airports is the relative rarity of perpetrators who are engaged in concealment with malicious intent. Border crossings in the U.S., on the other hand, are replete with offenders attempting to smuggle narcotics or themselves across the border. Numerous arrests are made each day at U.S. borders. This high level of criminal activity represents a potentially valuable data-collection opportunity to study concealment. To this end, it would be useful to conduct studies and establish baselines for specific behaviors at border crossings. The data gathered would be extremely rich in behavioral cues and would provide another ecologically valid test bed, where subjects possess relatively high motivation due to serious possible consequences. Many of the lessons learned in the border context could then be transferred to the airport context, where criminal activities are less abundant but potentially more dangerous.

With data from contextually valid sources, such as interviews with people who have high motivation to deceive, one could investigate the cues that security and law-enforcement officers use to determine the probability of concealed hostile intent.


Rich data sets may also be available for training and testing machine-learning tools in applicable settings, where contextual constraints such as lighting, space, and equipment issues are present.

Although the idea of attaining high information assurance by automatically detecting deception and concealment seems appealing, a much more realistic goal is the development of a tool to assist humans in their judgment of these behaviors. The creation of such a tool may be possible through adherence to a theoretically based model and the use of realistic data sets. Although the proof-of-concept study presented here is a small first step, we believe that our approach shows promise in understanding the detection of concealment.

History has shown that transportation systems are particularly vulnerable to security threats, and although much has been done to mitigate threats to the transportation system, more can still be done. Concealment of physical objects has been and continues to be a major priority, but concealment of intent is an area that may also be fruitful for increasing security. Concealment detection focusing on behavioral characteristics that are automatically tracked using computer-vision techniques, message feature mining, and speech act profiling could be an effective means of adding extra transportation security.

REFERENCES

[1] D. Buller and J. Burgoon, "Interpersonal deception theory," Commun. Theory, vol. 6, pp. 203–242, 1996.

[2] L. Zhou, J. K. Burgoon, J. F. Nunamaker, Jr., and D. P. Twitchell, "Automated linguistics based cues for detecting deception in text-based asynchronous computer-mediated communication: An empirical investigation," Group Decis. Negot., vol. 13, pp. 81–106, Jan. 2004.

[3] L. Zhou, D. P. Twitchell, T. Qin, J. K. Burgoon, and J. F. Nunamaker, Jr., "An exploratory study into deception detection in text-based computer-mediated communication," in Proc. 36th Annu. Hawaii Int. Conf. Syst. Sci., Big Island, HI, 2003, CD-ROM.

[4] J. K. Burgoon, J. P. Blair, and E. Moyer, "Effects of communication modality on arousal, cognitive complexity, behavioral control and deception detection during deceptive episodes," in Proc. Annu. Meeting Nat. Commun. Assoc., Miami Beach, FL, 2003.

[5] J. K. Burgoon, J. P. Blair, T. Qin, and J. F. Nunamaker, "Detecting deception through linguistic analysis," in Proc. NSF/NIJ Symp. Intell. Secur. Inform., 2003, p. 958.

[6] J. George, D. P. Biros, J. K. Burgoon, and J. Nunamaker, "Training professionals to detect deception," in Proc. NSF/NIJ Symp. Intell. Secur. Inform., Tucson, AZ, 2003, p. 960.

[7] M. Adkins, D. P. Twitchell, J. K. Burgoon, and J. F. Nunamaker, Jr., "Advances in automated deception detection in text-based computer-mediated communication," in Proc. SPIE Defense Secur. Symp., Orlando, FL, 2004, pp. 122–129.

[8] A. Vrij, Detecting Lies and Deceit: The Psychology of Lying and Implications for Professional Practice. Chichester, U.K.: Wiley, 2000.

[9] Committee to Review the Scientific Evidence on the Polygraph, Board on Behavioral, Cognitive, and Sensory Sciences and Committee on National Statistics, Division of Behavioral and Social Sciences and Education, National Research Council of the National Academies, The Polygraph and Lie Detection. Washington, DC: Nat. Acad. Press, 2003.

[10] A. Vrij, K. Edward, K. P. Roberts, and R. Bull, "Detecting deceit via analysis of verbal and nonverbal behavior," J. Nonverbal Behav., vol. 24, no. 4, pp. 239–263, Winter 2000.

[11] R. G. Tippett, A Comparison Between Decision Accuracy Rates Obtained Using the Polygraph Instrument and the Computer Voice Stress Analyzer in the Absence of Jeopardy. Tallahassee, FL: Florida Dept. Law Enforcement, 1994.

[12] D. Upchurch, K. Damphousse, and L. Pointon, "Evaluating voice stress analysis software for deception detection," in Proc. Annu. Meeting Amer. Soc. Criminol., Los Angeles, CA, 2006.

[13] H. Hollien and J. D. Harnsberger, "Voice stress analyzer instrumentation evaluation," in Proc. CIFA, Gainesville, FL, 2006.

[14] G. R. Miller and J. B. Stiff, Deceptive Communication. Thousand Oaks, CA: Sage, 1993.

[15] J. K. Burgoon, D. B. Buller, L. K. Guerrero, and W. A. Afifi, "Interpersonal deception: XII. Information management dimensions underlying deceptive and truthful messages," Commun. Monogr., vol. 63, no. 1, pp. 50–69, Mar. 1996.

[16] J. K. Burgoon, "A communication model of personal space violations: Explication and an initial test," Human Commun. Res., vol. 4, no. 2, pp. 129–142, Winter 1978.

[17] D. M. Green and J. A. Swets, Signal Detection Theory and Psychophysics. New York: Wiley, 1966.

[18] H. Stanislaw and N. Todorov, "Calculation of signal detection theory measures," Behav. Res. Meth. Instrum. Comput., vol. 31, no. 1, pp. 137–149, Feb. 1999.

[19] B. M. DePaulo, B. E. Malone, J. J. Lindsay, L. Muhlenbruck, K. Charlton, and H. Cooper, "Cues to deception," Psychol. Bull., vol. 129, pp. 75–118, Jan. 2003.

[20] D. P. Twitchell and J. F. Nunamaker, Jr., "Speech act profiling: A probabilistic method for analyzing persistent conversations and their participants," in Proc. 37th Annu. Hawaii Int. Conf. Syst. Sci., Big Island, HI, 2004, CD-ROM.

[21] J. R. Searle, "A taxonomy of illocutionary acts," in Expression and Meaning: Studies in the Theory of Speech Acts. Cambridge, U.K.: Cambridge Univ. Press, 1979, pp. 1–29.

[22] A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, C. Van Ess-Dykema, R. Martin, and M. Meteer, "Dialogue act modeling for automatic tagging and recognition of conversational speech," Comput. Linguist., vol. 26, no. 3, pp. 339–373, Sep. 2000.

[23] D. P. Twitchell, K. Wiers, M. Adkins, J. K. Burgoon, and J. F. Nunamaker, Jr., "StrikeCOM: A multi-player online strategy game for researching and teaching group dynamics," in Proc. Hawaii Int. Conf. Syst. Sci., Big Island, HI, 2005, p. 45b, CD-ROM.

[24] D. M. Gavrila, "The visual analysis of human movement: A survey," Comput. Vis. Image Underst., vol. 73, no. 1, pp. 82–98, Jan. 1999.

[25] K. Imagawa, S. Lu, and S. Igi, "Color-based hands tracking system for sign language recognition," in Proc. 3rd Int. Conf. Autom. Face Gesture Recog., 1998, pp. 462–467.

[26] S. Lu, D. Metaxas, D. Samaras, and J. Oliensis, "Using multiple cues for hand tracking and model refinement," in Proc. IEEE CVPR, Madison, WI, 2003, pp. 443–450.

[27] S. Lu, G. Tsechpenakis, D. N. Metaxas, M. L. Jensen, and J. Kruse, "Blob analysis of the head and hands: A method for deception detection," in Proc. 38th Annu. Hawaii Int. Conf. Syst. Sci., Big Island, HI, 2005, p. 20c.

[28] T. O. Meservy, M. L. Jensen, J. Kruse, D. P. Twitchell, G. Tsechpenakis, J. K. Burgoon, D. N. Metaxas, and J. F. Nunamaker, "Deception detection through automatic, unobtrusive analysis of nonverbal behavior," IEEE Intell. Syst., vol. 20, no. 5, pp. 36–43, Sep./Oct. 2005.

[29] J. K. Burgoon, M. Adkins, J. Kruse, M. L. Jensen, A. Deokar, D. P. Twitchell, S. Lu, D. N. Metaxas, J. F. Nunamaker, Jr., and R. E. Younger, "An approach for intent identification by building on deception detection," in Proc. 38th Annu. Hawaii Int. Conf. Syst. Sci., Big Island, HI, 2005, p. 21a.

[30] C. Wren, A. Azarbayejani, T. Darrell, and A. Pentland, "Pfinder: Real-time tracking of the human body," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 780–785, Jul. 1997.

[31] L. Zhou, D. P. Twitchell, T. Qin, J. K. Burgoon, and J. F. Nunamaker, Jr., "Toward the automatic prediction of deception—An empirical comparison of classification methods," J. Manag. Inf. Syst., vol. 20, no. 4, pp. 139–166, Spring 2004.

[32] J. K. Burgoon and T. Qin, "The dynamic nature of deceptive verbal communication," J. Lang. Soc. Psychol., vol. 25, no. 1, pp. 76–96, 2006.

[33] M. L. Jensen, T. O. Meservy, J. K. Burgoon, and J. F. Nunamaker, "Automatic, multimodal evaluation of human interaction," Group Decis. Negot., to be published.

[34] D. P. Twitchell, J. F. Nunamaker, Jr., and J. K. Burgoon, "Using speech act profiling for deception detection," in Proc. 2nd NSF/NIJ Symp. Intell. Secur. Inform., Tucson, AZ, 2004.

[35] P. Ekman, Telling Lies. New York: Norton, 1985.

[36] M. Zuckerman, B. M. DePaulo, and R. Rosenthal, "Verbal and nonverbal communication of deception," in Advances in Experimental Social Psychology, L. Berkowitz, Ed. New York: Academic, 1981, pp. 1–59.

[37] D. Buller, J. Burgoon, C. White, and A. Ebesu, "Interpersonal deception: VII. Behavioral profiles of falsification, equivocation and concealment," J. Lang. Soc. Psychol., vol. 13, pp. 366–395, 1994.

[38] P. Ekman, "Lying and nonverbal behavior: Theoretical issues and new findings," J. Nonverbal Behav., vol. 12, no. 3, pp. 163–176, Sep. 1988.


Judee K. Burgoon received the Ph.D. degree in communication and educational psychology from West Virginia University, Morgantown.

She is currently a Professor of communication and the Site Director for the Center for Identification Technology Research, The University of Arizona, Tucson. She has authored eight books and over 250 articles, chapters, and reviews on topics related to nonverbal and interpersonal communication, deception, and computer-mediated communication.

Douglas P. Twitchell received the Ph.D. degree in management information systems from the University of Arizona, Tucson, in 2005.

He is currently an Assistant Professor with Illinois State University, Normal. His interests include text mining, conversational analysis and profiling, machine learning, and natural language processing.

Matthew L. Jensen received the Ph.D. degree in management information systems from the University of Arizona, Tucson, in 2007.

He is currently an Assistant Professor with the Division of Management Information Systems, University of Oklahoma, Norman. His interests include evaluation of credibility, human–computer interaction, and computer-mediated communication.

Thomas O. Meservy received the Ph.D. degree in management information systems from the University of Arizona, Tucson, in 2007.

He is currently an Assistant Professor with the Department of Management Information Systems, University of Memphis, Memphis, TN. His research interests include software development tools and methodologies, collaboration, and automated understanding of human nonverbal behavior.

Mark Adkins received the Ph.D. degree in communication from the University of Arizona, Tucson, in 2000.

He is currently with Accenture, Sahuarita, AZ. He is a senior professional who provides insight to leading corporations, governments, and agencies around the world. He is an authority on network-centric operations, collaboration, group decision making, human communication, information systems, and humanitarian assistance/disaster relief.

John Kruse received the Ph.D. degree in management information systems from the University of Arizona, Tucson, in 2001.

He is currently a Lead Information Systems Engineer with AK Collaborations, Sahuarita, AZ. He has worked extensively with a wide range of educational, governmental, and military groups to help develop group processes, software that supports collaborative work, and automated means for deception and intent detection.

Amit V. Deokar received the Ph.D. degree in management information systems from the University of Arizona, Tucson, in 2006.

He is currently an Assistant Professor of information systems with the College of Business and Information Systems, Dakota State University, Madison, SD. His areas of research interest include workflow processes, collaboration process management, decision support systems, and knowledge management.

Gabriel Tsechpenakis received the Ph.D. degree in computer engineering from the National Technical University of Athens, Athens, Greece.

He is currently a Research Scientist with the Center for Computational Sciences, University of Miami, Coral Gables, FL. His current research is mainly focused on computer vision and machine learning.

Shan Lu received the Ph.D. degree in electrical engineering from Kyoto University, Kyoto, Japan.

He is currently a Leading Scientist and the Software Development Manager with Topcon Medical Systems, Inc., Paramus, NJ. He has been working in the fields of digital image processing, video tracking, and biomedical image segmentation for more than 20 years.

Dimitris N. Metaxas received the Ph.D. degree in computer science from the University of Toronto, Toronto, ON, Canada.

He is a Professor of computer and information sciences. He currently directs the Center for Computational Biomedicine, Imaging and Modeling, Rutgers University, Piscataway, NJ. He conducts research in computer vision, computer graphics, and medical imaging.

Jay F. Nunamaker, Jr. received the Ph.D. degree in systems engineering and operations research from the Case Institute of Technology.

He is the Regents and Soldwedel Professor of management information systems, computer science, and communication and the Director of the Center for the Management of Information, The University of Arizona, Tucson. His research interests include credibility assessment and supporting teams with technology.

Robert E. Younger received the M.B.A. degree in business management from National University, San Diego, CA, in 1984.

He is currently a Program Manager with the United States Department of Defense. He is also with the Space and Naval Warfare Systems Center, San Diego, CA.