
Academic Track of 13th Annual NYS Cyber Security Conference

Empire State Plaza Albany, NY, USA

June 16-17, 2010

5th Annual Symposium on Information Assurance (ASIA ’10)

Symposium Chair:
Sanjay Goel, Information Technology Management, School of Business, University at Albany, State University of New York

conference proceedings


Proceedings of the 5th Annual Symposium on Information Assurance

Academic track of the 13th Annual 2010 NYS Cyber Security Conference
June 16-17, 2010, Albany, New York, USA.

This volume is published as a collective work. Rights to individual papers remain with the author or the author’s employer. Permission is granted for the noncommercial reproduction of the complete work for educational and research purposes.

Symposium Chairs

Sanjay Goel, Chair Director of Research, NYS Center for Information Forensics and Assurance (CIFA)

Associate Professor, Information Technology Management, School of Business, University at Albany, SUNY

Laura Iwan, Co-Chair State ISO, NYS Office of Cyber Security and Critical Infrastructure Coordination (CSCIC)

Program Committee

Alvaro Ortigosa, Universidad Autónoma de Madrid

Anil B. Somayaji, Carleton University, Canada

Arun Lakhotia, University of Louisiana at Lafayette

Billy Rios, Google, Inc.

Daniel O. Rice, Technology Solutions Experts, Inc.

Dipankar Dasgupta, University of Memphis

George Berg, University at Albany, SUNY

Gurpreet Dhillon, Virginia Commonwealth University

Hemantha Herath, Brock University

Hong C. Li, Intel Corporation

Ibrahim Baggili, Zayed University, Abu Dhabi, UAE

Jeffrey Carr, GreyLogic

John Crain, ICANN

Kenneth Geers, CCD-COE, Tallinn, Estonia

Martin Loeb, University of Maryland

Michael Lavine, Johns Hopkins University

Michael Sobolewski, Texas Tech University

Melissa Dark, Purdue University

M.P. Gupta, Indian Institute of Technology, Delhi

Nasir Memon, Polytechnic Institute of NYU

Pavel Pilugin, Moscow State University, Moscow

Raghu T. Santanam, Arizona State University

Rahul Singh, University of North Carolina, Greensboro

Raj Sharman, University at Buffalo, SUNY

Robert Bangert-Drowns, University at Albany, SUNY

Ronald Dodge, USMA West Point

R. Sekar, Stony Brook University, SUNY

Saeed Abu-Nimeh, Websense, Inc.

Shambhu J. Upadhyaya, University at Buffalo, SUNY

Shiu-Kai Chin, Syracuse University

Siwei Lyu, University at Albany, SUNY

S. S. Ravi, University at Albany, SUNY

Stelios Sidiroglou-Douskos, MIT

Stephen F. Bush, GE Global Research Center

External Reviewers

Manoj Jha, Center of Advanced Transportation and Infrastructure Engineering Research, Morgan State University

Carlo Kopp, Computer Science, Monash University, Melbourne, Australia

Rain Ottis, Cooperative Cyber Defence Center of Excellence, Estonia

Sankardas Roy, Game Theory in Cyber Security Group, University of Memphis

Submissions Chair Damira Pon, University at Albany, SUNY

SYMPOSIUM DINNER SPONSOR

Note of Thanks: We would like to express our appreciation to all of the sponsors that supported the symposium.

CONFERENCE KILOBYTE SPONSOR


MESSAGE FROM SYMPOSIUM CHAIRS

Welcome to the 5th Annual Symposium on Information Assurance (ASIA ’10)! This symposium complements the NYS Cyber Security Conference as its academic track, with the goal of increasing interaction among practitioners and researchers to foster the infusion of academic research into practice. For the last several years, the symposium has been a great success, with excellent papers, participation from academia, industry, and government, and well-attended sessions. This year, we again have an excellent set of papers, invited talks, and keynote addresses.

The keynote speakers this year are Dr. Larry Ponemon and Billy Rios. Dr. Ponemon is the Chairman and Founder of the Ponemon Institute, a research “think tank” dedicated to advancing privacy and data protection practices. Billy Rios is currently working as a Security Engineer at Google, Inc. and has previously held positions at Microsoft, Verisign, the Department of Defense, and E&Y. The symposium has papers in multiple areas of security, including DNS security, information warfare, risk and security management, intellectual property theft, education, and forensics. Based on strong participant interest, we have also again included in the program the roundtable discussion on forensics training and education that we held in previous years.

We would like to thank the talented program committee that supported the review process of the symposium. In most cases, the papers were assigned to at least three reviewers who were either members of the program committee or experts outside the committee. We ensured that there was no conflict of interest and that no program committee member was assigned more than two papers to review. We also personally read the papers and the reviews and concurred with the assessment of the reviewers. For this year’s symposium, we have ten refereed papers and several invited speakers. Our goal is to keep the quality of submissions high as the symposium matures. The program committee serves a critical role in the success of the symposium, and we are thankful for the participation of each member.

We were fortunate to have extremely dedicated partners in the NYS Office of Cyber Security and Critical Infrastructure Coordination (CSCIC) and the University at Albany, State University of New York (UAlbany). Our partners managed the logistics for the conference, allowing us to focus on program management. We would like to thank the University at Albany’s School of Business and Symantec for providing financial support for the symposium.

We hope that you enjoy the symposium and continue to participate in the future. We plan to adopt different security-related themes in subsequent years, and we are already planning next year’s symposium. If you would like to propose a track, please let us know. The call for papers for next year’s symposium will be distributed in the fall, and we hope to see you again in 2011.

Sanjay Goel
Director of Research, CIFA
Associate Professor, School of Business

Laura Iwan
NYS ISO, Office of Cyber Security and Critical Infrastructure Coordination (CSCIC)


SYMPOSIUM ON INFORMATION ASSURANCE AGENDA

DAY 1: Wednesday, June 16, 2010

REGISTRATION – Base of the Egg (7:00am – 4:00pm)

VISIT THE EXHIBITORS – Base of the Egg (7:00am – 9:00am)

MORNING SESSION & KEYNOTE – Mtg. Room 6 (9:00am – 11:30am)

Welcome Address: William Pelgrin, Director, CSCIC & George Philip, President, UAlbany

Plenary Talk – Trends in Cyber Crime and Cyber Warfare: Sanjay Goel

Hacking Demo: Sanjay Goel & Damira Pon, School of Business and NYS Center for Information Forensics and Assurance at the University at Albany, SUNY

Introduction: Peter Bloniarz, CCI, University at Albany, SUNY

Keynote: Dr. Larry Ponemon, Chairman and Founder, Ponemon Institute

LUNCH / VISIT EXHIBITORS – Base of the Egg (11:30am – 12:30pm)

SYMPOSIUM SESSION 1: DNS Security (10:45am – 11:45am) Chair: George Berg, University at Albany, SUNY

Invited Talk: Protecting the Internet

John Crain, ICANN

The Futility of DNSSec

Alex Cowperthwaite & Anil Somayaji, Carleton University

SYMPOSIUM SESSION 2: Information Warfare (1:40pm – 2:40pm) Chair: Stephen F. Bush, GE Global Research

Invited Talk: Big Allied and Dangerous – Terrorist Networks

Victor Asal & R. Karl Rethemeyer, University at Albany, SUNY

Flow-Based Information Warfare Model

Sabah S. Al-Fedaghi, Kuwait University

BREAK / VISIT THE EXHIBITORS – Base of the Egg (2:40pm – 3:00pm)

SYMPOSIUM SESSION 3: Security and Risk Analysis (3:00pm – 4:00pm) Chair: Gurpreet Dhillon, Virginia Commonwealth University

A Multi-Staged Approach to Risk Management in IT Software Implementation
Sunjukta Das, Raj Sharman, Manish Gupta, Varun N. Kutty, & Yingying Kang, University at Buffalo, SUNY

A Survivability Decision Model for Critical Information System Based on Bayesian Network
Suhas Lande, Yunjun Zuo, & Malvika Pimple, University of North Dakota


SYMPOSIUM ON INFORMATION ASSURANCE AGENDA, CONT’D.

DAY 2: Thursday, June 17, 2010

REGISTRATION / VISIT EXHIBITORS – Base of Egg (7:30am – 4:00 pm)

ASIA Keynote & Best Paper Award – Mtg. Rm. 6 (9:00am – 10:15am)

Introduction: Sanjay Goel, Symposium Chair

Welcome Address: Donald Siegel, Dean, School of Business, UAlbany

So You Wanna Be a Bot Master?

Billy Rios, Google, Inc.

Best Paper Award Presentation: Laura Iwan, Symposium Co-Chair

SYMPOSIUM SESSION 4: Intellectual Property Theft (10:30 – 11:30am) Chair: Anil Somayaji, Carleton University

A Week in the Life of the Most Popular BitTorrent Swarms
Mark Scanlon, Alan Hannaway, & Mohand-Tahar Kechadi, University College Dublin, Centre for Cybercrime Investigations

Information Assurance in Video Streaming: A Case Study

Sriharsha Banavara, Eric Ismand, & John A. Hamilton, Jr., Auburn University

LUNCH / VISIT THE EXHIBITORS – Base of the Egg (11:30am – 12:30pm)

SYMPOSIUM SESSION 5: Security Management (12:30 – 1:30pm) Chair: Raj Sharman, University at Buffalo, SUNY

Assessing the Impact of Security Culture and the Employee-Organization Relationship in IS Security Compliance
Gwen Greene, Towson University & John D’Arcy, University of Notre Dame

Invited Paper: Employee Emancipation and Protection of Information Yurita Abdul Talib & Gurpreet Dhillon, Virginia Commonwealth University

BREAK / VISIT THE EXHIBITORS – Base of the Egg (1:30pm – 1:50pm)

SYMPOSIUM SESSION 6: Security Education (1:50pm – 2:50pm) Chair: Siwei Lyu, University at Albany, SUNY

Enhancing Security Education with Hands-On Laboratory Exercises

Wenliang Du, Karthick Jayaraman, & Noreen B. Gaubatz, Syracuse University

A Digital Forensics Program to Retrain America’s Veterans

Eric Ismand & John A. Hamilton, Jr., Auburn University

SYMPOSIUM SESSION 7: Roundtable: Forensic Education (3:00pm – 3:50pm)

Panelists: Fabio Auffant, NY State Police; Kevin Williams, Psychology, UAlbany; John A. Hamilton, Jr., Auburn University

CLOSING REMARKS (3:50pm – 4:00pm) Sanjay Goel, Symposium Chair


TABLE OF CONTENTS

Session 1: DNS Security
Invited Talk: Protecting the Internet ................................................ 1
  John Crain, ICANN
The Futility of DNSSec .............................................................. 2
  Alex Cowperthwaite & Anil Somayaji, Carleton University

Session 2: Information Warfare
Invited Talk: Big Allied and Dangerous – Terrorist Networks ......................... 9
  Victor Asal & R. Karl Rethemeyer, University at Albany, SUNY
Flow-Based Information Warfare Model ............................................... 10
  Sabah S. Al-Fedaghi, Kuwait University

Session 3: Security and Risk Analysis
A Multi-Staged Approach to Risk Management in IT Software Implementation ........... 19
  Sunjukta Das, Raj Sharman, Manish Gupta, Varun N. Kutty, & Yingying Kang, University at Buffalo, SUNY
A Survivability Decision Model for Critical Information System Based on Bayesian Network ... 23
  Suhas Lande, Yunjun Zuo, & Malvika Pimple, University of North Dakota

Symposium Keynote Address
So You Wanna Be a Bot Master? ...................................................... 31
  Billy Rios, Google, Inc.

Session 4: Intellectual Property Theft
A Week in the Life of the Most Popular BitTorrent Swarms ........................... 32
  Mark Scanlon, Alan Hannaway, & Mohand-Tahar Kechadi, University College Dublin, Centre for Cybercrime Investigations
Information Assurance in Video Streaming: A Case Study ............................. 37
  Sriharsha Banavara, Eric Ismand, & John A. Hamilton, Jr., Auburn University

Session 5: Security Management
Assessing the Impact of Security Culture and the Employee-Organization Relationship in IS Security Compliance ... 42
  Gwen Greene, Towson University & John D’Arcy, University of Notre Dame
Invited Paper: Employee Emancipation and Protection of Information ................. 50
  Yurita Abdul Talib & Gurpreet Dhillon, Virginia Commonwealth University

Session 6: Security Education
Enhancing Security Education with Hands-On Laboratory Exercises .................... 56
  Wenliang Du, Karthick Jayaraman, & Noreen B. Gaubatz, Syracuse University
A Digital Forensics Program to Retrain America’s Veterans .......................... 62
  Eric Ismand & John A. Hamilton, Jr., Auburn University

Session 7: Roundtable: Forensic Education .......................................... 67
  Panelists: Fabio Auffant, NY State Police; Kevin Williams, UAlbany; John A. Hamilton, Jr., Auburn University

Author Biographies ................................................................. 68
Index of Authors ................................................................... 73


Invited Talk: The Changing Landscape of the DNS

John Crain, Chief Technical Officer, ICANN (Internet Corporation for Assigned Names and Numbers)

The DNS is one of the things that people rarely think about, or even know about, when they use the Internet. John will give an update on some of the things you may notice and some of the things you won’t. Some of the behind-the-scenes work in the management of the DNS that hardly ever gets noticed will be revealed in this talk. His update will also touch upon what may be two of the largest changes since the DNS was introduced: 1) DNSSec, which has enabled a new layer of security, and 2) IDN, which allows navigation of the network in many scripts. These advances will at some point affect everyone who uses the Internet.

The Futility of DNSSec

Alex Cowperthwaite and Anil Somayaji
School of Computer Science, Carleton University
Ottawa, ON, Canada K1S 5B6
{acowpert, soma}@ccsl.carleton.ca

Abstract—The lack of data authentication and integrity guarantees in the Domain Name System (DNS) facilitates a wide variety of malicious activity on the Internet today. DNSSec, a set of cryptographic extensions to DNS, has been proposed to address these threats. While DNSSec does provide certain security guarantees, here we argue that it does not provide what users really need, namely end-to-end authentication and integrity. Even worse, DNSSec makes DNS much less efficient and harder to administer, thus significantly compromising DNS’s availability—arguably its most important characteristic. In this paper we explain the structure of DNS, examine the threats against it, present the details of DNSSec, and analyze the benefits of DNSSec relative to its costs. This cost-benefit analysis clearly shows that DNSSec deployment is a futile effort, one that provides little long-term benefit yet has distinct, perhaps very significant costs.

I. INTRODUCTION

The Domain Name System (DNS) has become a vital part of the modern Internet. DNS translates human-readable host names such as www.carleton.ca to machine-routable IP addresses such as 134.117.1.32. These operations are performed automatically as part of nearly every transaction across the Internet today.

DNS was designed in the early 1980s to be fast, efficient, and highly available in the face of network disruptions [1][2]. These were all essential design goals given the limited and unreliable nature of the computational and networking resources available at the time. To achieve these goals DNS used a distributed hierarchical database, a simple query-response protocol, and aggressive caching with permissibly stale data.

With the rapid and global growth of the Internet, the DNS system has scaled very well and maintained its core attributes of efficiency and high availability; unfortunately, it has also maintained its lack of data authenticity and integrity. These omissions have become increasingly significant as malicious activity has increased on the Internet. A falsified DNS reply can redirect an unsuspecting user to an attacker-controlled host, making them vulnerable to phishing, malware distribution, or numerous other malicious activities.

DNS Security Extensions (DNSSec) addresses DNS’s lack of data authentication and integrity. DNSSec provides these guarantees by using public key cryptography to sign each DNS record. DNSSec has been carefully designed to be as efficient and scalable as possible; as we will explain, however, the bandwidth, computational, and administrative overhead of DNSSec do potentially compromise DNS’s essential availability properties. More significant, however, is that DNSSec solves the wrong problem: it secures hostname-to-IP-address mappings when what is really needed is better end-to-end security guarantees. Thus we believe that the deployment of DNSSec is a futile gesture, one that will lead to minimal long-term security benefits while resulting in significant security and economic costs.

(This work was supported by Canada’s Natural Sciences and Engineering Research Council (NSERC) through the Internet-worked Systems Security Network (ISSNet) and Discovery Grant program.)

We proceed to make this argument in the rest of this paper as follows. First, we describe the Domain Name System in more detail in Section II. Next, we discuss cache poisoning attacks, the most serious current threat to DNS, in Section III. Section IV presents DNSSec and explains how DNSSec can block cache poisoning attacks. We then explore how DNSSec reduces the availability of DNS in Section V; Section VI discusses how attackers could circumvent the protections provided by DNSSec. Section VII presents potential alternatives to DNSSec adoption, while Section VIII discusses more general issues relating to DNSSec and end-to-end security. Section IX concludes.

II. THE DOMAIN NAME SYSTEM

The Domain Name System is a distributed database for storing information on domain names, the primary namespace for hosts on the Internet. While the primary purpose of DNS is to map domain names to IP addresses, it actually supports several kinds of records. The primary record types are as follows (a short lookup sketch follows the list):

1. A records specify an IPv4 address for a host.
2. AAAA records specify an IPv6 address for a host.
3. NS records point to the authoritative name servers for a zone (domain).
4. CNAME records allow the specification of host aliases, e.g., making www.example.com refer to berners-lee.example.com.
5. MX records identify mail servers that can receive email for a domain.
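As an illustration, the following is a minimal sketch, ours rather than the paper's, of how these record types can be queried from Python using the third-party dnspython library; the domain example.com is a placeholder.

```python
import dns.resolver  # third-party: dnspython

# Query each of the primary record types for a placeholder domain.
for rdtype in ("A", "AAAA", "NS", "CNAME", "MX"):
    try:
        for rdata in dns.resolver.resolve("example.com", rdtype):
            print(rdtype, rdata)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(rdtype, "(no such records)")
```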

The DNS system is composed of two types of systems, name servers and resolvers. Name servers return information about domains or zones when queried by DNS resolvers. DNS resolvers send queries to one or more name servers in order to service DNS requests from applications.

When a name server is configured as authoritative for its zone, its resource records are always interpreted as correct. To improve availability and reduce upstream server load, resolvers may also cache resource records until they have expired; while some domains specify time-to-live values on the order of minutes, it is more common for domains to permit their records to be cached for days. Cached records are not considered to be authoritative, i.e., they may contain incorrect or stale information; most consumers of DNS information, however, do not distinguish between authoritative and non-authoritative results.

Applications normally send queries to simple stub resolvers implemented as library functions. Stub resolvers forward requests to standalone recursive resolvers that perform the multiple name server queries normally required to answer any request. Recursive resolvers work best when most responses can be answered using cached queries (i.e., they see multiple requests for the same information). ISPs normally operate DNS servers with recursive, caching resolvers for their customers; alternately, users can configure their systems to connect to other open DNS servers. (Traditionally, recursive resolvers and name servers were both implemented as part of the same application, e.g., BIND 9 and earlier [3]; newer DNS server designs, however, divide these functions into separate programs, e.g., tinydns and dnscache of djbdns [4].)

DNS name servers are structured as a hierarchical tree where the tree levels correspond to the dot-separated components of a domain name. A complete lookup follows a chain starting from the DNS root servers to a top-level domain (TLD) down to any number of domains and sub-domains below. We review the steps in a typical DNS lookup below.

1. An application (e.g., a web browser) invokes a local stub resolver to look up the IP address of www.example.com.
2. The stub resolver sends the request to a local DNS server (a recursive, caching DNS resolver).
3. The DNS server checks its cache for an entry for www.example.com. If it does not have an entry, it requests a lookup from a randomly chosen root server. The IP addresses of standard root servers are built in to DNS servers.
4. The root server replies with the NS record describing the name of the authoritative name server for the Top Level Domain (TLD) .com. To reduce traffic, it also appends “the glue”: the A record (IPv4 address) for the .com TLD name server.
5. The recursive DNS resolver requests a lookup for www.example.com from the .com authoritative name server (as specified in the returned glue A record).
6. The .com name server replies with the authoritative name server record for example.com. It also appends the A record for the example.com authoritative name server.
7. The recursive DNS resolver then asks the authoritative name server for example.com for the lookup of www.example.com.
8. The name server for example.com replies with an authoritative answer for www.example.com, likely only an A record.
9. Optionally, the recursive DNS resolver adds or updates this entry in its cache.
10. The recursive DNS server then returns a non-authoritative answer (an A record) to the requesting application’s stub resolver.
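To make the referral-following concrete, here is a minimal sketch of steps 3–8 (ours, not the paper's) using dnspython, issuing iterative queries by hand starting from one hard-coded root server (a.root-servers.net, 198.41.0.4); it assumes each referral carries IPv4 glue and ignores caching, truncation, and error handling.

```python
import dns.message, dns.query, dns.rdatatype  # third-party: dnspython

server = "198.41.0.4"   # a.root-servers.net
qname = "www.example.com."

while True:
    query = dns.message.make_query(qname, dns.rdatatype.A)
    reply = dns.query.udp(query, server, timeout=5)
    if reply.answer:     # an authoritative server answered (step 8)
        print(reply.answer[0])
        break
    # Referral: follow the glue A record for the delegated name server
    # from the additional section (steps 4 and 6).
    glue = [rr for rr in reply.additional if rr.rdtype == dns.rdatatype.A]
    server = glue[0][0].address
```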

Each DNS query and response is normally sent in a single UDP packet. The query and response use the same structure; the query is sent with the response field blank. A 16-bit query ID field is used to distinguish outstanding requests [5][6].
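For illustration, the following sketch (ours, not the paper's) packs the fixed 12-byte DNS header defined in RFC 1035 [6], showing where the 16-bit query ID sits; the flags value is the standard recursion-desired query setting.

```python
import secrets
import struct

# DNS header (RFC 1035): six 16-bit fields, network byte order.
query_id = secrets.randbelow(1 << 16)   # the 16-bit ID discussed above
flags = 0x0100                          # standard query, RD bit set
header = struct.pack("!HHHHHH", query_id, flags,
                     1,   # QDCOUNT: one question
                     0,   # ANCOUNT: answers (blank in a query)
                     0,   # NSCOUNT: authority records
                     0)   # ARCOUNT: additional records
print(f"id={query_id:#06x} header={header.hex()}")
```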

Note how multiple characteristics of DNS make it both efficient and highly available. Queries are simple and small (there are few record types and the amount of information in each is small), minimizing bandwidth and storage overhead. Full and partial results are easily cached (i.e., servers cache results from every level of the hierarchy). And DNS’s database is distributed in a simple, scalable way that is (relatively) easily debugged and tuned—i.e., if you don’t get an appropriate response, you can quickly identify which server gave you incorrect information.

III. DNS CACHE POISONING

While DNS has many positive characteristics, note that DNS has no integrity, authenticity, or confidentiality guarantees. Confidentiality can be added by encrypting queries (e.g., via IPsec); integrity and authenticity cannot be so easily added, however, because of the distributed nature of DNS. DNS responses can be modified or completely forged almost trivially by an intruder-in-the-middle attack. Indeed, because DNS normally uses UDP, attackers do not even need to hijack TCP connections. The weaknesses of DNS are actually even worse, however, because the existing DNS infrastructure can be used to deliver malicious responses.

Specifically, malicious responses can originate from legitimate, otherwise uncompromised servers through the use of cache poisoning attacks [7]. With standard DNS cache poisoning attacks, a victim can be redirected to a malicious host through the injection of a forged A record. The key idea behind such attacks is that a caching resolver has no way to authenticate received data; it will accept any query response that “looks right.” An attacker even has many chances to generate authentic-looking responses, as unexpected responses will simply be discarded.

While such attacks are problematic enough, in the summer of 2008 Dan Kaminsky described a direct extension to known DNS cache poisoning techniques that made them much more dangerous and reliable. Kaminsky’s primary observation was that instead of spoofing the final A record of the destination, an attacker could spoof any one of the query responses in the lookup process. By forging an NS record, an attacker could take over an entire domain, up to and including a TLD such as .com.

Kaminsky also improved the reliability of cache poisoning attacks. Previously, if an entry was cached, attackers would have to wait for the cached entry to expire before they could attempt to replace it; thus, it was relatively hard to poison the cache of popular domains. Kaminsky, however, noted that by querying an obscure sub-domain not already cached (e.g., reallyobscuresubdomain.example.com) an attacker could force a lookup. The lookup would go directly to the authoritative nameserver for the domain, assuming its record was cached. This creates an opportunity for an attacker to forge a reply with no answer for the obscure sub-domain but with an additional update for the A record of the authoritative nameserver. The opportunistic caching resolver will update its cache with the false record. This method allows an attacker to repeat the attack many times, giving them a greater chance of returning an answer that will be accepted, successfully poisoning the cache [8].

The easiest-to-implement defense against these cache poisoning attacks is to make responses harder to predict. Clearly the 16-bit query ID field does not have enough entropy to prevent brute-force attacks. This entropy can be supplemented by randomizing the outgoing request port. If every available port could be used, that would give us 32 bits (as UDP ports are 16-bit quantities); while we cannot reach this upper bound in practice, we can get close enough to make cache poisoning attacks much harder. Even with such improvements, however, it is still possible to mount DNS cache poisoning attacks. To prevent forged responses, we ideally need the kind of authentication guarantees that are best provided by cryptographic mechanisms. DNSSec was designed to provide precisely these guarantees.
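As a back-of-the-envelope illustration of this entropy argument (ours, not the paper's): a blind spoofer must match every unpredictable bit, so each forged reply is accepted with probability 2^-n and roughly 2^n forgeries are needed on average.

```python
# Expected number of forged replies a blind spoofer must send before one
# matches all unpredictable bits (mean of a geometric distribution).
for label, bits in (("16-bit query ID alone", 16),
                    ("query ID plus randomized source port", 32)):
    print(f"{label}: p = 2**-{bits}, ~{2**bits:,} forgeries on average")
```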

IV. DNSSEC

DNSSec is a set of extensions to DNS that are designed to authenticate and protect the integrity of DNS responses through the use of public key-based digital signatures. The design of DNSSec started in the mid-1990s, motivated by the growing recognition of the dangers of DNS cache poisoning [7]. The first version of DNSSec was officially published as RFC 2065 in January 1997 [9]; an improved version followed in RFC 2535 (March 1999) [10]; after trial deployment experience, the current version of DNSSec was finalized in RFCs 4033–4035 in March 2005 [11][12][13].

DNSSec allows domain owners to digitally sign resource records. One key design decision in DNSSec is that name servers do not implement any cryptography; signatures are generated offline, while signatures are checked by resolvers. While this design choice minimizes the computational overhead of DNSSec, it also greatly complicates the process of authenticating DNS responses because the results returned by every authoritative name server in a query chain must be individually verified. Thus, for resolving www.example.com, the responses from the root, .com, and example.com name servers must each be authenticated independently. The information necessary to authenticate responses is stored in a set of new DNS record types:

- RRSIG records contain digital signatures for other resource records. Each regular DNS resource record should have its own RRSIG record (except for the DNSKEY record).
- DNSKEY records contain public keys for verifying RRSIG records. Note that a zone may have multiple DNSKEY records.
- DS, or delegation signer, records store cryptographic hashes of the public keys of a zone’s children. DS records are what allow .com to specify what key should sign example.com’s records, thus enabling trust chains. DS records should be signed with a corresponding RRSIG record.
- NSEC and NSEC3 records provide authenticated denial of existence. NSEC provides this information by specifying two hostnames between which no other hostnames exist. NSEC allows attackers to enumerate the hosts in a domain; NSEC3 [14] addresses this problem by specifying the ranges in terms of hashes of domain names rather than the domain names themselves.

To authenticate a resource record, a resolver first checks that the RRSIG signature on the requested record was generated by the zone’s key, as specified by its DNSKEY record. To make sure the DNSKEY record has the right value, the corresponding DS record of the zone’s parent is checked to see if it holds the hash of the DNSKEY. Then, to verify the DS record is genuine, its RRSIG has to be checked against the parent zone’s DNSKEY public key. This process repeats until we get to the root zone; here, we assume the resolver already knows the authentic public keys belonging to the root name servers.
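One link of that chain can be checked with dnspython's dns.dnssec module; the sketch below is ours rather than the paper's, fetching a zone's DNSKEY RRset and verifying its self-signature. The server address and zone are placeholders, and the answer-section ordering is assumed for brevity.

```python
import dns.dnssec, dns.message, dns.name, dns.query, dns.rdatatype

zone = dns.name.from_text("example.com.")   # placeholder zone
ns_addr = "192.0.2.53"                      # placeholder name server

# Ask for the zone's DNSKEY RRset along with its RRSIG (DO bit set).
request = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.udp(request, ns_addr, timeout=5)
dnskey_rrset, rrsig_rrset = response.answer  # ordering assumed for brevity

# Verify the RRSIG over the DNSKEY RRset using the zone's own keys;
# raises dns.dnssec.ValidationFailure if the signature does not check out.
dns.dnssec.validate(dnskey_rrset, rrsig_rrset, {zone: dnskey_rrset})
print("DNSKEY self-signature verified; the parent's DS record comes next")
```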

Note that DNSSec clearly defeats cache poisoning attacks. Responses can only be forged if the underlying cryptography is broken. Because resolvers will only forward or cache authentic responses, the resolver’s cache remains uncorrupted. Unfortunately this benefit comes along with significant costs.

V. DNSSEC AVAILABILITY ISSUES

While DNSSec does provide strong integrity and authenticity guarantees for DNS responses, it also requires significantly more computer resources and a significant investment in key management. Both of these factors reduce DNS’s availability and so counterbalance the potential security gains from DNSSec.

A. Resource Usage

DNSSec significantly increases the computational and space footprint of DNS. DNSSec requires additional computation for generating and verifying records. These costs are either offline or are borne by requesting clients and/or their local resolvers; as such, cryptographic overhead shouldn’t be a problem for most DNSSec deployments.

The most obvious source of increased space usage with DNSSec is the use of additional records: every regular record requires an RRSIG signature, along with at least one nameserver-specific DNSKEY and verifying DS (from the parent) record. While the space impact of this increase isn’t too large given the relatively small size of DNS zone files, these additional records must also be communicated, increasing DNS’s bandwidth requirements significantly.

Given today’s computers and networks, this increased resource consumption should be very manageable under normal circumstances. Root and TLD DNS servers, however, have been and will continue to be targets of Distributed Denial-of-Service (DDoS) attacks [15][16].

The increased bandwidth requirements of DNSSec will require significant hardware, especially bandwidth, upgrades to maintain the same level of DDoS resistance. Any outage or latency at the root or TLD servers will be felt throughout the Internet. The increased bandwidth consumption is particularly worrisome because of its asymmetric nature: small DNS requests can result in large DNSSec replies. This characteristic makes DNSSec-enabled systems more attractive for DNS DDoS amplification attacks. Such attacks happen when a DDoS attack is passed through DNS servers to amplify the volume of data being sent to a target. Amplification attacks are feasible because it is so easy to forge the source address of UDP packets; thus, any open resolver can be used to send packets to almost any host on the Internet [17][18].

To evaluate the possibility of the amplification attack, we recorded DNS lookups using Wireshark and the DNS lookup utility dig (see Table I). We looked up the host www.dnssec.se using the L root server and the open .se DNSSec test server. We chose these servers because they offer some of the strongest DNSSec support at the time of this writing. We found that many servers do not fully implement DNSSec as RFCs 4033–4035 specify, so we report both the actual values observed on the wire and the expected values based on the RFCs. Specifically, we manually reconstructed the full chain of DNS replies that is supposed to be returned when the checking disabled (CD) flag is set. From our data it is clear that DNSSec greatly increases the opportunity for an amplification attack.

TABLE I
AMPLIFICATION RELATIVE TO ONE 70-BYTE QUERY PACKET FOR WWW.DNSSEC.SE

                                  No. of    Total Size   Amplification
                                  Packets   (bytes)      (x)
  DNS Reply                       1         208          2.9
  DNSSec Reply (Actual)           1         1215         17.3
  DNSSec Reply (Expected)         1         2257         32.2
  DNSSec Reply Chain (Actual)     5         3045         43.5
  DNSSec Reply Chain (Expected)   5         11865        169.5
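The same comparison can be approximated without Wireshark; this sketch (ours, with a placeholder resolver address) sends one plain and one DO-flagged query with dnspython and compares wire sizes, ignoring UDP truncation and the multi-packet reply chain.

```python
import dns.message, dns.query  # third-party: dnspython

qname, server = "www.dnssec.se", "192.0.2.53"   # server is a placeholder

plain = dns.message.make_query(qname, "A")
signed = dns.message.make_query(qname, "A", want_dnssec=True)

for label, query in (("DNS", plain), ("DNSSec", signed)):
    reply = dns.query.udp(query, server, timeout=5)
    q_len, r_len = len(query.to_wire()), len(reply.to_wire())
    print(f"{label}: query {q_len} B, reply {r_len} B, "
          f"amplification {r_len / q_len:.1f}x")
```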

DNS amplification attacks can be mitigated in multiple ways. DNS resolvers can be shielded from external requests through firewall rules or through IP address-based request filters on DNS servers themselves. Unicast Reverse Path Forwarding (uRPF), described in RFC 3704 [19], limits the scope of IP address spoofing. Such defenses, however, have only been partially deployed; further, even where they are deployed, limited forms of IP address spoofing are still possible. With DNSSec deployment, attackers will need far fewer DNS amplifiers to perform potent denial-of-service attacks, making DNS amplification attacks that much more feasible.

While neither the increased DDoS susceptibility nor the increased potential for DNS DDoS amplification attacks should, by itself, stand in the way of DNSSec deployment, they are both clear drawbacks to a transition to DNSSec.

B. Key Management

We think a more significant DNSSec disadvantage will be key management in practice. DNSSec has a number of provisions designed to make key management easier. It allows a zone to have as many signing keys (DNSKEY records) as it wishes, and those keys can be divided into separate functions, e.g., Zone Signing Keys (ZSK) and Key Signing Keys (KSK). The fundamental fact, however, is that DNS administrators will have to become public-key infrastructure (PKI) administrators as well. They will have to secure private signing keys, decide who has access to them, and generate signatures for new and old information as records change and as keys expire. DNS administrators are already busy people; certificate management is a new, non-trivial responsibility for which many (arguably most) DNS administrators do not have the background, training, or experience. The most obvious potential issue with such inexperienced DNSSec administrators is that they will store and use keys insecurely, making themselves vulnerable to attack. While such direct attacks may be a concern, we think such issues will be minor compared to two others: DNSSec-caused denial of service and false alarms.
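One routine chore behind those failure modes is watching signature lifetimes; the sketch below (ours, with placeholder zone and server) pulls a signed SOA answer with dnspython and reports how long each RRSIG has before it expires.

```python
import time
import dns.message, dns.query, dns.rdatatype  # third-party: dnspython

zone, server = "example.com.", "192.0.2.53"   # placeholders

request = dns.message.make_query(zone, dns.rdatatype.SOA, want_dnssec=True)
response = dns.query.udp(request, server, timeout=5)

# Report the remaining lifetime of every signature in the answer;
# a negative value means the zone is already serving expired RRSIGs.
for rrset in response.answer:
    if rrset.rdtype == dns.rdatatype.RRSIG:
        for sig in rrset:
            days_left = (sig.expiration - time.time()) / 86400
            print(f"RRSIG by key tag {sig.key_tag}: {days_left:.1f} days left")
```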

Ideally, DNSSec would be used such that any signature verification error would cause a DNS lookup to fail. If DNSSec is deployed this way, however, any key management or signature generation error would result in a denial of service. Sign using the wrong key—site goes down. Sign with an expired key—site goes down. Forget to update signatures when a key expires—site goes down as well. Anecdotally we know users occasionally experience certificate verification errors with SSL [20]; when every domain on the Internet must manage keys, and when every hostname must be appropriately signed, user-visible key management errors will inevitably go up dramatically.

We hypothesize that these errors will be so common that it will not be feasible for DNSSec to cause DNS queries to fail; instead, DNSSec authentication errors will be used to generate warnings that users or administrators can choose to follow or ignore—just as with SSL certificates. And, as with SSL certificates, most users will choose to ignore these warnings [21].

In other words, key management with DNSSec will be problematic enough that either users will have to tolerate a significant loss in system availability due to non-malicious authentication failures, or they will become accustomed to ignoring DNSSec warnings, eliminating any security benefit DNSSec provides.

VI. CIRCUMVENTING DNS SECURITY

When properly deployed, DNSSec does guarantee integrity and authenticity of DNS data. Even if we assume that DNSSec is used optimally, however, that does not mean that we get what we really want, namely secure communications.

Attackers have two basic strategies for circumventing DNSSec: they can attack from below or from above. Attackers can get “below” DNSSec by targeting other aspects of network communication. DNS, at best, tells a computer the IP address of another computer. An attacker who mounts an intruder-in-the-middle attack (using, say, ARP spoofing on a local network [22] or BGP route hijacking [23][24]) can potentially take over any IP address they desire. Thus, DNS data can be completely secure yet attackers can still make victim machines connect to malicious hosts. By targeting the actual IP communications with the host instead of the DNS lookup, an attacker easily circumvents any security gained through DNSSec.

Of course, all this assumes that the DNS hostname we are attempting to resolve is the right hostname. Phishing attacks have demonstrated that users are easily fooled into visiting malicious sites via links containing fraudulent hostnames [25]. Users do not understand the significance of hostname components and their ordering. Indeed, why shouldn’t a regular user visit a URL with www-mybank.secure.com when their bank is really located at www.mybank.com? A proper explanation requires a basic understanding of DNS—something that we cannot expect regular users to know about.

DNSSec is a (potentially) strong defense surrounded by insecurity both above and below. While DNSSec deployment might reduce the incidence of DNS-specific attacks, we cannot see how it would have a significant impact on the number or severity of security incidents on the Internet. It is simply too easy for an attacker to use other means to get systems and users to connect to malicious hosts.

VII. BEYOND DNSSEC

While there are alternatives to DNSSec such as DNSCurve [26] that are more efficient and that arguably have better security properties, to address the above concerns we need to realize that DNS, by itself, is not the security problem that we need to address. What is needed are better ways to provide end-to-end security: when a person uses a computer to interact with a remote host, we need ways to authenticate that remote host to defend against intruder-in-the-middle attacks and attacker attempts to masquerade as that remote host. DNSSec does not provide end-to-end authentication because IP-level traffic is not authenticated and because people can easily confuse malicious domain names with legitimate ones (or, they’ll just ignore malicious domain names).

Today we have two technologies in wide use for providing end-to-end authentication. One is SSL/TLS [27] as it is implemented on the web, where authentication occurs using trust chains grounded in public keys bundled with web browsers. The other is Secure Shell (SSH) [28], where the public keys of remote hosts are either cached or supplied by an administrator. With SSL, a third party (a certificate authority), generally unknown to a user, vouches that the host being accessed is the one denoted by its domain name (e.g., Verisign says that we are connecting to www.bankofamerica.com when browsers visit that domain), and with the new extended validation certificates, that third party says what organization controls that server (e.g., Verisign says that www.bankofamerica.com belongs to Bank of America Corporation). Alternately, with SSH on our first connection we cannot authenticate the host; however, on subsequent connections SSH authenticates the remote host by correlating the domain name and server-supplied public key with the key information that was saved previously. So long as you connect to the same hosts you have in the past, you are protected from attackers masquerading or intercepting traffic at the network level.
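The SSH model just described is often called trust-on-first-use; the following is a minimal sketch of that idea (ours, not the paper's, with an illustrative store file and key format) in the spirit of SSH's known_hosts handling.

```python
import hashlib, json, os

STORE = os.path.expanduser("~/.known_hosts.json")  # illustrative store

def check_host_key(host: str, public_key: bytes) -> str:
    """Trust-on-first-use: cache a host's key fingerprint, then insist on it."""
    known = json.load(open(STORE)) if os.path.exists(STORE) else {}
    fingerprint = hashlib.sha256(public_key).hexdigest()
    if host not in known:
        # First connection: nothing to verify against, so cache and proceed.
        known[host] = fingerprint
        with open(STORE, "w") as f:
            json.dump(known, f)
        return "unverified (key cached for future connections)"
    if known[host] == fingerprint:
        return "authenticated"
    return "MISMATCH: possible intruder-in-the-middle"
```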

SSH gives us easy-to-use end-to-end authentication (and more) so long as we connect to the same remote hosts; it provides no authentication for connections to new hosts. SSL gives us remote host authentication so long as we trust the certificate authority keys included with our browsers, we verify the domain name we are visiting is, indeed, the correct one, and we cancel connections to hosts that generate certificate errors. Of course, none of these three assumptions actually holds reliably: sometimes certificate authorities are organizations that many people distrust [29][30], and users regularly ignore malicious URLs and certificate errors [25]. As we discussed earlier, however, these issues also apply to DNSSec or, indeed, any scheme that relies on the DNS namespace and arbitrary third-party signers of keys.

Both SSH and SSL are built on top of DNS, and they both provide the authentication DNS lacks. Even better, they provide host authentication, not host IP address authentication, although with some issues. Thus, it is reasonable to argue that SSH and SSL together provide remote host authentication for all users who want or need such authentication, and they do so in a way that also provides end-to-end security, thus obviating the need for DNSSec. Yet neither addresses the bigger problem of bridging the gap between hostnames and the remote host (and organization) a user actually intends to access.

VIII. DISCUSSION

The Domain Name System is efficient, fault tolerant, and fundamentally insecure—much as is the case with TCP/IP. Both are foundational technologies for the Internet. Since the mid-1990s, many researchers have worked on developing secure variants of both protocols, namely DNSSec and IPsec; neither of these more secure proposals has been widely deployed. Note that while DNSSec by itself can be circumvented by IP-level attacks, IPsec and DNSSec together provide a very secure foundation. So, why haven’t they been widely deployed?

Part of the reason is simple laziness (in the most reasonable sense): both of these transitions would involve significant ongoing cost because of the burden of key management, and without widespread deployment their security benefits are minimal. The more fundamental reason, though, is that complete layer-level changes are simply infeasible today on the Internet. They may happen over time, but only if circumstances are just right. The normal case is to expect fragmentation—some will adopt the new technology while others won’t. IPv6 and software patches are in the same boat as DNSSec and IPsec: some will deploy, but not all will. In this environment, the more a solution requires universal or near-universal acceptance to provide benefits, the more likely it is that it will never provide those benefits.

While DNS cannot provide end-to-end security without a cryptographic extension such as DNSSec, we could make DNS immune to cache poisoning attacks, the main source of DNS insecurity, with a variety of relatively minor changes (e.g., by increasing the size of the query ID field from 16 to 64 bits)—that is, assuming that the existing best practice of randomizing source ports and query IDs is not sufficient protection. To get end-to-end security, we simply need to encourage more sites to adopt SSL or, as appropriate, SSH. SSL is already ubiquitous and SSH provides benefits even in limited deployments.

Some advocates of DNSSec argue that it should be adopted not because of the security guarantees it provides, per se, but because of the new, more secure applications that could be built on top of it [31][32]. But this argument assumes widespread deployment, something that has not happened and may never happen in a way that allows other applications to be built on top of it. Rather than waiting for a technology that has been in development for fifteen years to suddenly take off, why not use existing technologies to achieve the same ends?

But pushing for better end-to-end security at the DNS level (using DNSSec, SSL, or SSH) misses the larger security issue of authenticating destinations for network communications, particularly web-based communications initiated by users. The fundamental issue here is that the hostname namespace is not one that regular people understand. People understand brands and, to a lesser extent, company names. Given the numerous and still-growing number of top-level domains, there exists no clean mapping between brands, companies, and hostnames. Many Internet users have discovered a very simple strategy for dealing with this confusion: they do not access sites via domain names and URLs; instead, they use search engine queries. Such use of search engines can sometimes lead to users accessing the wrong site (this happened with Facebook recently [33]). Accessing hosts via search engines is potentially insecure because there is no guarantee that the link the search engine returns is the one the user actually wants; however, this method is fast, relatively reliable, and understandable to end users. If we wish to secure Internet communications, we must take into account how computer users actually behave, not how we would like them to behave.

What we need are ways for users to specify remote hosts precisely and securely, using a namespace that does not require a computer science degree to understand and properly use. We then need a way to handle authentication failures that people are more likely to follow, not ignore. These are hard questions, ones that need further research. The arguments we have made against DNS and DNSSec have been made separately by many others; our contribution here is to bring them together and to put them in perspective against the real problem of end-to-end security. For administrators, we hope this work will help them better evaluate security technologies such as DNSSec. Also, we hope that once researchers see the limited benefits and the significant drawbacks of DNSSec, they will move on to developing incrementally deployable solutions that address the true end-to-end security needs of users on today’s Internet.

IX. CONCLUSION

DNSSec is designed to improve Internet security by providing authentication and integrity protection for DNS. While DNSSec would potentially protect against DNS cache poisoning, it does so at the cost of increased susceptibility to DDoS attacks, increased utility for DDoS amplification attacks, and significant human administrator overhead for key management. The key management issue is particularly important because poor key management (which we expect to be the rule, not the exception) will result in denials of service or, more likely, in users being trained to ignore DNSSec-based warnings.

These significant tradeoffs take place against a backdrop of other, simpler defenses against cache poisoning attacks, such as source-port randomization, and of DNSSec’s inability to provide end-to-end authentication. While DNSSec could be extended to provide such protection, we already have solutions that provide essentially the same guarantees: SSL and SSH.

The true futility of DNSSec, however, arises from the fact that it addresses the wrong problem. Users do not understand the semantics of DNS well enough to distinguish between malicious and legitimate hosts. While extended validation SSL certificates are a small step in the right direction, interface issues [34] and user reliance on search engines instead of URLs [33] make them a partial solution at best.

Rather than adopt DNSSec, we argue that the Internet community should invest in making DNS more robust to cache poisoning attacks without relying upon a PKI, while SSL and SSH should be more widely adopted. Meanwhile, research is needed into ways to close the gap between user perception and remote host authentication.

ACKNOWLEDGMENT

We wish to thank the members of the Carleton Computer Security Laboratory and the anonymous reviewers for their suggestions.

REFERENCES

[1] P. Mockapetris, “Domain names: Concepts and facilities,” RFC 882, Internet Engineering Task Force, Nov. 1983. [Online]. Available: http://www.ietf.org/rfc/rfc882.txt
[2] P. Mockapetris, “Domain names: Implementation specification,” RFC 883, Internet Engineering Task Force, Nov. 1983. [Online]. Available: http://www.ietf.org/rfc/rfc883.txt
[3] Internet Systems Consortium, “ISC BIND.” [Online]. Available: https://www.isc.org/software/bind
[4] D. J. Bernstein, “djbdns.” [Online]. Available: http://cr.yp.to/djbdns.html
[5] P. Mockapetris, “Domain names - concepts and facilities,” RFC 1034 (Standard), Internet Engineering Task Force, Nov. 1987, updated by RFCs 1101, 1183, 1348, 1876, 1982, 2065, 2181, 2308, 2535, 4033, 4034, 4035, 4343, 4592. [Online]. Available: http://www.ietf.org/rfc/rfc1034.txt
[6] P. Mockapetris, “Domain names - implementation and specification,” RFC 1035 (Standard), Internet Engineering Task Force, Nov. 1987, updated by RFCs 1101, 1183, 1348, 1876, 1982, 1995, 1996, 2065, 2136, 2137, 2181, 2308, 2535, 2845, 3425, 3658, 4033, 4034, 4035, 4343. [Online]. Available: http://www.ietf.org/rfc/rfc1035.txt
[7] S. Bellovin, “Using the domain name system for system break-ins,” in Proceedings of the 5th USENIX UNIX Security Symposium. USENIX, June 1995.
[8] S. Friedl, “An illustrated guide to the Kaminsky DNS vulnerability,” Aug. 2008. [Online]. Available: http://unixwiz.net/techtips/iguide-kaminsky-dns-vuln.html
[9] D. Eastlake 3rd and C. Kaufman, “Domain Name System Security Extensions,” RFC 2065 (Proposed Standard), Internet Engineering Task Force, Jan. 1997. [Online]. Available: http://www.ietf.org/rfc/rfc2065.txt
[10] D. Eastlake 3rd, “Domain Name System Security Extensions,” RFC 2535 (Proposed Standard), Internet Engineering Task Force, Mar. 1999. [Online]. Available: http://www.ietf.org/rfc/rfc2535.txt
[11] R. Arends, R. Austein, M. Larson, D. Massey, and S. Rose, “DNS Security Introduction and Requirements,” RFC 4033 (Proposed Standard), Internet Engineering Task Force, Mar. 2005. [Online]. Available: http://www.ietf.org/rfc/rfc4033.txt
[12] R. Arends, R. Austein, M. Larson, D. Massey, and S. Rose, “Resource Records for the DNS Security Extensions,” RFC 4034 (Proposed Standard), Internet Engineering Task Force, Mar. 2005, updated by RFC 4470. [Online]. Available: http://www.ietf.org/rfc/rfc4034.txt
[13] R. Arends, R. Austein, M. Larson, D. Massey, and S. Rose, “Protocol Modifications for the DNS Security Extensions,” RFC 4035 (Proposed Standard), Internet Engineering Task Force, Mar. 2005, updated by RFC 4470. [Online]. Available: http://www.ietf.org/rfc/rfc4035.txt
[14] B. Laurie, G. Sisson, R. Arends, and D. Blacka, “DNS Security (DNSSEC) Hashed Authenticated Denial of Existence,” RFC 5155 (Proposed Standard), Internet Engineering Task Force, Mar. 2008. [Online]. Available: http://www.ietf.org/rfc/rfc5155.txt
[15] D. Moore, C. Shannon, D. Brown, G. Voelker, and S. Savage, “Inferring Internet denial-of-service activity,” ACM Transactions on Computer Systems (TOCS), vol. 24, no. 2, pp. 115–139, 2006.
[16] N. Brownlee, K. Claffy, and E. Nemeth, “DNS measurements at a root server,” in Proceedings of IEEE GLOBECOM, vol. 3, 2001, pp. 1672–1676.
[17] R. Vaughn and G. Evron, “DNS amplification attacks,” 2006. [Online]. Available: http://www.isotf.org/news/DNS-Amplification-Attacks.pdf
[18] V. Paxson, “An analysis of using reflectors for distributed denial-of-service attacks,” ACM SIGCOMM Computer Communication Review, vol. 31, no. 3, pp. 38–47, 2001.
[19] F. Baker and P. Savola, “Ingress Filtering for Multihomed Networks,” RFC 3704 (Best Current Practice), Internet Engineering Task Force, Mar. 2004. [Online]. Available: http://www.ietf.org/rfc/rfc3704.txt
[20] C. Herley, “So long, and no thanks for the externalities: The rational rejection of security advice by users,” in Proceedings of the New Security Paradigms Workshop (NSPW), 2009.
[21] J. Sunshine, S. Egelman, H. Almuhimedi, N. Atri, and L. Cranor, “Crying wolf: An empirical study of SSL warning effectiveness,” in Proceedings of the 18th USENIX Security Symposium, 2009.
[22] R. Wagner, “Address resolution protocol spoofing and man-in-the-middle attacks,” The SANS Institute, 2001.
[23] A. Kapela and A. Pilosov, “Stealing the Internet - a routed, wide-area, man-in-the-middle attack,” in DEFCON 16, Las Vegas, Aug. 2008.
[24] R. Kuhn, K. Sriram, and D. Montgomery, “Border gateway protocol security,” July 2007.
[25] R. Dhamija, J. Tygar, and M. Hearst, “Why phishing works,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2006, p. 590.
[26] D. J. Bernstein, “DNSCurve: Usable security for DNS.” [Online]. Available: http://dnscurve.org
[27] T. Dierks and E. Rescorla, “The Transport Layer Security (TLS) Protocol Version 1.2,” RFC 5246 (Proposed Standard), Internet Engineering Task Force, Aug. 2008. [Online]. Available: http://www.ietf.org/rfc/rfc5246.txt
[28] T. Ylonen and C. Lonvick, “The Secure Shell (SSH) Protocol Architecture,” RFC 4251 (Proposed Standard), Internet Engineering Task Force, Jan. 2006. [Online]. Available: http://www.ietf.org/rfc/rfc4251.txt
[29] E. Felten, “Mozilla debates whether to trust Chinese CA,” Feb. 2010. [Online]. Available: http://www.freedom-to-tinker.com/blog/felten/mozilla-debates-whether-trust-chinese-ca
[30] J. Corbet, “China Internet Network Information Center accepted as a Mozilla root CA,” Linux Weekly News, Feb. 2, 2010. [Online]. Available: http://lwn.net/Articles/372264/
[31] P. Vixie, “What DNS is not,” Communications of the ACM, vol. 52, no. 12, pp. 43–47, Dec. 2009.
[32] M. S. Mimoso, “Kaminsky interview: DNSSEC addresses cross-organizational trust and security,” June 2009. [Online]. Available: http://searchsecurity.techtarget.com/news/interview/0,289202,sid14_gci1360143,00.html
[33] T. Maly, “I have some opinions about the RWW facebook login hilarity,” Feb. 2010. [Online]. Available: http://quietbabylon.posterous.com/i-have-some-opinions-about-the-rww-facebook-l
[34] J. Sobey, R. Biddle, P. C. van Oorschot, and A. S. Patrick, “Exploring user reactions to new browser cues for extended validation certificates,” in Proceedings of the 13th European Symposium on Research in Computer Security. Springer, 2008, pp. 411–427.

Page 18: 5th Annual Symposium on Information Assurance (ASIA ’10 · 2010. 6. 24. · Proceedings of the 5th Annual Symposium on Information Assurance Academic track of the 13th Annual 2009

Invited Talk: Big Allied and Dangerous – Terrorist Networks

Victor Asal and R. Karl Rethemeyer University at Albany, SUNY

1400 Washington Ave. Albany, NY 12222

[email protected], [email protected]

Why are some terrorist organizations so much more deadly than others? Why do some target the United States? Why do some organizations use or pursue CBRN? This presentation examines organizational characteristics such as ideology, size, age, state sponsorship, alliance connections, and control of territory to see how they can help us understand terrorist organizational behavior.

Flow-based Information Warfare Model

Sabah Al-Fedaghi
Computer Engineering Department, Kuwait University
PO Box 5969 Safat, Kuwait 13060
[email protected]
Abstract—Information warfare refers to actions that deny,

exploit, corrupt, or destroy the enemy’s information and its

functions. A model in this context classifies strategies for

attacking the enemy’s information infrastructure. Recent

research has defined four canonical strategies using Shannon's

information theory. This paper introduces a complementary

approach that constructs a conceptual model of information flow.

The model is used to identify attack strategies.

Index Terms—Information warfare, information

infrastructure, attack strategies

I. INTRODUCTION

The Information Revolution, manifested as an information explosion aided by computers, mass media, and telecommunication networks, has contributed to the concept of warfare in relation to exploiting and denying information.

The importance of IW/IO [Information Warfare/Information Operation] today is a simple consequence of the reality that the digital infrastructure in the developed world processes, concentrates and carries the flow of knowledge which provides us with our decisive economic and military advantage against other players… The digital infrastructure is thus a lucrative target, in every sense, since it provides a single point of failure for the developed world's economic and military power base. It is also a capable weapon for penetrating our economic, military and political fabric, exploitable by foreign and domestic players with hostile agendas. [10]

Information warfare as a discipline deals with many issues,

including human factors, legal problems, rules of engagement,

damage assessment problems, and structuring forces to execute

information warfare (attributed to Winn Schwartau) [10]. An

important field of study in IW is the ways in which information

is exploited/protected.

In this paper, we analyze IW in terms of strategies of attack.

While such a contribution offers a new approach to the issue of

attack technicalities, the broader aim is to propose building a

foundation for IW upon the notion of flow of information.

Flow of information underlies all information systems, and this

is the reason we concentrate in section II on reviewing the

definition of IW and its notions of denying, exploiting,

corrupting, or destroying information. In section III, we

review a model for information in IW based on Shannon's

communication model. The purpose is outlined in section IV,

where the need for a conceptual foundation for IW analysis is

shown. Section V introduces such a conceptual foundation

through a model of information flow. Section VI applies the

concept of information flow to the problem of modeling

(information) combat theater. Specifically, this methodology is

used to analyze attack strategies.

II. DEFINITION OF INFORMATION WARFARE

According to Borden [5], the USAF model [16] and the

Network-Centric model [14] of Information Warfare define

IW as "any action to Deny, Exploit, Corrupt or Destroy the

enemy's information and its functions; protecting ourselves

against those actions and exploiting our own military

information functions" [16] (italics added).

We can investigate this focus on the operations of denying,

exploiting, corrupting, and destroying the enemy's information

as follows.

A. Deny the Enemy’s Information and its Functions

Smith's [17] description of this operation is "IO can be used to deny the enemy access to information. This may be done through techniques like electronic warfare or the destruction of command units" (italics added). So it seems that to deny

information means to prevent access to information about one

of the sides involved in a conflict. Fig. 1 illustrates information

denial/prevention. Denying information in this context seems

to denote a ―blockage‖ in information flow from our

information sphere to that of the enemy.

Flow-based Information Warfare Model

Sabah Al-Fedaghi Computer Engineering Department, Kuwait University

PO Box 5969 Safat, Kuwait 13060

[email protected]

T

A more complete view, to be introduced in our conceptual model, is "blockage" of the enemy from collecting/receiving any type of information (Fig. 1(d)). Adopting such a concept, we can concentrate on different types of blocked information: our information, information about us, relevant information, and information per se. The information warfare model ought to incorporate such a concept of "information suffocation" in the definition of IW.

Thus, the use of "deny" seems not to reflect any significant operation on information. Collecting/receiving information is an operation resulting in a basic state of information. "Blockage" in this case refers to preventing information from being collected/received. This description, in terms of states of information, will become clear when we introduce our information flow model. The point here is that to "deny information" is not a fundamental notion "about" that information.

B. Exploit the Enemy’s Information

Exploit refers to making the most of the enemy's information. The verb exploit is used in the senses "use or manipulate to one's advantage" and "make good use of." The US DoD views exploitation as an "attack measure," defined "as gathering an adversary's flow of information to facilitate friendly information generation" [5]. According to Borden [5], since exploitation "does not in itself produce an immediate causal effect in the function of the target [communication] channel, it cannot be classified as an offensive Information Warfare strategy … Rather, it is an information gathering 'technique'." The new IO (Information Operation) US DoD doctrine [12] does not consider "exploiting an enemy information asset for intelligence purposes" an aspect of IO.

In addition, "exploiting information" does not complement "denying information." "Denying information" may be regarded as a by-product of "exploiting information." For example, according to Kopp [10], "IW/IO are all operations . . . conducted to exploit information to gain an advantage over an opponent, and to deny the opponent information which could be used to an advantage." Thus, it seems that the terms deny and exploit in the definition of IW do not reflect systematic operations on information.

C. Corrupt or Destroy the Enemy’s Information

Corrupting information refers to changing information, which creates new information (e.g., from "Tanks approach from south" to "Tanks approach from north"). As we will see, this new information is created from existing information. A more complete description of IW thus includes "the enemy's ability to create information (from scratch or from old information)." We will view information flow in terms of five stages: receiving, processing, creating, releasing, and transferring of information. Creating information includes the notion of corrupting information.

Destroying the enemy's information is one possible way to disturb the enemy's information system. Other methods include disturbing the internal flow of information, such as "not processing received information," "not drawing a conclusion (creating) when processing information," etc. Even "copying (stealing) the enemy's information" is an important operation that might be more valuable than destroying that information.

We can see that Deny, Exploit, Corrupt, or Destroy the

enemy's information seems to be an incomplete description of

operations involved in IW. In section V, we describe an

information flow model that can be used to develop an

alternative specification of IW. In the next two sections, we

review and scrutinize IW modeling.

III. REVIEW: IW MODELS

Recently, the field of information warfare modeling has

been an active research area, e.g., [5][6][9][18]. Related works

have applied these models in several directions, including

information assurance and the semantic web. It seems that

none of these models addresses a high-level conceptualization

of the notion of information warfare. Every model typically

shows certain properties representing different viewpoints and

processes that occur during warfare-related processes. For

example, CycSecure Architecture introduced in [18] includes

such notions as "Dynamic HTML User Interfaces," CERT,

web-browser, etc. While these levels of detail are suitable for

the purposes of the authors, a conceptual description of

information warfare is different. It aims at identifying the

phases of a warfare life cycle via abstract and generic

concepts.

Fig. 1. Difference between Deny and Prevent receiving information: (a) deny access to our information; (b) prevent receiving information about us; (c) prevent receiving relevant information; (d) prevent receiving information.

A conceptual model is a high-level abstraction that

encapsulates main knowledge and behavior in relation to real-

world entities or phenomena. The model provides some

critical insights into the real world to be used in understanding

it and communicating about it.

Borden [5] and Kopp [9][10] proposed a model for

"information" in IW based on Shannon's model [15] for a

communication channel:

Consider for instance the radar detection model, where a radar illuminates a target to establish its position, kinematic parameters and possibly even identity. In this model the 'message' is encoded in the modulation and timing of the power backscattered from the target, while the 'transmitter' is the reflecting body of the target vehicle, which is supplied with power by the microwave source in the radar producing the illumination. In this example the attacker may degrade the S/N of the radar by applying a microwave absorbent coating to his vehicle, and in doing so hide the vehicle's radar return in the background noise and noise floor of the radar's receiver. [5] (Italics added.)

According to Borden [5],

Information Warfare in the most fundamental sense

amounts to manipulation of a channel carrying

information in order to achieve a specific aim in

relation to an opponent engaged in a survival conflict.

[5, 10] (Italics added).

Notice that "misinformation" in information theory is related to "noise" (unintentionally) produced by the channel. In this context, misinformation is a type of information. Of course this is not applicable to "semantic information," where the notion of a motive is applied to misinformation.

In view of Shannon's channel capacity theorem,

C = W log2(1 + P/N)

an attacker manipulates the flow of information by controlling the capacity of the channel, using bandwidth, W, and signal power vs. noise power, P/N [8].
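To make these capacity manipulations concrete, the following minimal sketch (in Python, with bandwidth and power figures chosen arbitrarily for illustration rather than taken from the paper) computes how injected noise or suppressed signal power drives down C:

import math

def capacity(w_hz, p_signal, p_noise):
    # Shannon channel capacity C = W log2(1 + P/N), in bits per second.
    return w_hz * math.log2(1 + p_signal / p_noise)

W = 1.0e6           # 1 MHz channel (illustrative value)
P, N = 1.0, 0.01    # signal and noise power (illustrative units)

print(capacity(W, P, N))          # baseline capacity
print(capacity(W, P, N + 0.99))   # active degradation: attacker injects noise
print(capacity(W, 0.01 * P, N))   # passive degradation: signal power suppressed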

Accordingly, four canonical strategies are available to attackers (see Fig. 2) [8]:
- Degradation, or Denial, achieved by injecting noise and thus degrading the channel. This is the so-called (covert) active form of degradation, exemplified by injecting noise into the channel to degrade capacity. In its passive form, the power in the channel is driven to zero, burying the message carrying the information in background noise [8].
- Corruption, or Mimicry, involves the attacker using a (covert) signal that mimics the victim's signal so well it cannot be distinguished from the real signal and is used instead.
- Denial, or Destruction, involves the expedient of damaging or destroying the enemy's receiver or the transmission link.
- Denial via Subversion, or Subversion, is an attack that penetrates the internal functions of the victim to compel it to do something self-destructive, using its own resources to damage itself (e.g., a virus or worm).

These strategies can be broken down into a collection

of more basic attacks, often mutually dependent but

each comprising only one of the four canonical

strategies. From a science perspective these strategies

can be said to be atomic in the sense that they are

simplest forms, from which all other more complex

forms are constructed. It is one of the realities of nature

that something of extreme complexity, such as strategic

deceptions, can be broken down into an interconnected

web of mutually dependent but essentially simple forms

of attack. What the four canonical strategies

demonstrate is that the vast footprint of various IW

techniques can all be broken down and modelled

mathematically, or in simulations. [11]

IV. SCRUTINIZING IW MODELS

Modeling of communication is an evolutionary process in

which new concepts enhance and complement earlier

communication models. Shannon's model is fundamental for

all subsequent communication models.

Consider the contentious point related to the interiority of the channel. Conceptually, a disturbing element in the Borden-Kopp model is the focus on channel capacity to classify "basic attacks." Implicitly, the model indicates that information passes, or is transferred, through a channel present in the model. Such a view is analogous to conceptualizing the notion of travel as transfer from one point

the process of (air) travel includes the notions of being

received at the travel station (airport), processed (e.g., luggage

and passports), released to boarding (waiting for boarding),

and actual transfer onto the plane. On the other end, after

being transferred, passengers arrive, are processed, released,

and then transfer (leave for hotels). Similarly, information in

the communication stream is not just sent and received, but

also has repeated lifecycle states: it is received, processed

(changes its form), created (generates information from

information), released (e.g., waits for channel to be available),

and transferred.

We contend that the life cycle of information in any communication system consists of iterations of stages according to a state diagram. The five stages are receiving, processing, creation, release, and transfer. Life cycle refers to the "birth" of a communicative act through initiation of a flow of information directed to a certain destination, and the "decay" of such an act through seizure of the flow of information. Seizure or stoppage of the flow of information can occur at any

point in the stream of information flow, regardless of whether a destination is reached. The flow stream comprises successive iterations of the stages described previously across different participants' boundaries.

Fig. 2. The four canonical strategies (from [11]): (a) Degradation Strategy, Passive Form; (b) Degradation Strategy, Active Form; (c) Corruption Strategy; (d) Subversion/Denial Strategy.

In addition, Shannon's model does not conceptually reflect the ontological nature of communication to be taken as a foundation for "strategies that can be said to be atomic in the sense that they are simplest forms, from which all other more complex forms are constructed."

For example, it is well known that a communication act involves information, a message (symbolic representation), and signals (e.g., physical or electronic signals). The flow of these three types of things is represented in Shannon's model by a single arrow between the sender and receiver. Such a conceptualization is analogous to representing gas, water, and electricity lines in the design of a building with only one type of arrow in the blueprint. Information is usually created by the sender, while (signal) noise is created in the channel. The noise is physical signals. Conceptually, this noise ought not to be mixed with the randomness (entropy) created at the source in some applications. Entropy is a "type of information," whereas noise generated in the channel comprises physical signals. Of course, noise interweaves with messages while they are being converted to signals for transmission purposes.

The Borden-Kopp model is limited by restricting the operations of Corruption and Degradation to the channel instead of along the entire line of flow of information. The channel and its capacity are the centerpiece of operations, yet the model is inconsistent because it shifts direction in "Denial via Subversion" attacks to a non-channel operation on the internal functions of the target.

From an evolutionary perspective, idealized communication models evolved from Shannon's model to a network model (see Fig. 3) that included many details such as authentication, routing identification, governing, data compression and decompression, and detection of errors in transmission and arranging for their correction. Hence, focusing modeling of attacks on channel

capacity would seem to hinder development of a more

comprehensive approach to IW techniques.

Before such an approach is introduced, the next section

reviews the basic model used to explore an alternative

methodology for identifying critical security vulnerabilities in

information infrastructure.

V. THE FLOW MODEL

This section reviews the basic aspects of the Flow Model

(FM) to make the paper self-contained [1][2][3][4]. To

simplify this review of FM, we introduce flow in terms of

information flow.

Information passes through a sequence of states as it moves

through stages of its life cycle, as follows:

1. Information is received (i.e., it arrives at a new sphere,

similar to passengers arriving at an airport).

2. Information is processed (i.e., it is subjected to some type of

process, e.g., compressed, translated, mined).

3. Information is disclosed/released (i.e., it is designated as

released information, ready to move outside the current sphere,

such as passengers ready to depart from an airport).

4. Information is transferred to another sphere (e.g., from a customer's sphere to a retailer's sphere).
5. Information is created (i.e., it is generated as a new piece of information using different methods such as data mining).
6. Information is used (i.e., it is utilized in some action, analogous to police rushing to a criminal's hideout after receiving an informant's tip). Using information indicates directing or diverting the information flow to another type of flow such as actions. We call this point a gateway in the flow.
7. Information is stored. Thus, it remains in a stable state without change until it is brought back to the stream of flow.
8. Information is destroyed.

The first five states of information form the main stages of the stream of flow, as illustrated in Fig. 4. When information is stored, it is in a substate because it exists in different stages: information that is created (stored created information), processed (stored processed information), and received (stored received/raw information).

The term "information" is utilized in its common meaning. It is a discrete unit (called a flowthing [4]) that is received, processed, created, released, and transferred, as described in Fig. 4. The model can be redefined as a "signal flow model," and likewise for other flowthings, such as money, materials, and even actions and concepts. A concept (whatever its nature) is a "thing" that can be received, processed, created, released, and transferred. Accordingly, use of the term "information" (a signal, or the content of a message) does not introduce difficulty at a fundamental level.
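As one concrete reading of this scheme, the following sketch (our own minimal rendering in Python, not an implementation from the FM literature; the transition table is an assumption read off Fig. 4 and the discussion above) encodes the five stages and their allowed transitions as a small state machine:

from enum import Enum

class Stage(Enum):
    RECEIVED = "received"
    PROCESSED = "processed"
    CREATED = "created"
    RELEASED = "released"
    TRANSFERRED = "transferred"

# Allowed transitions, read off Fig. 4; Released is bidirectional with
# Received, Processed, and Created (e.g., when a channel is broken).
TRANSITIONS = {
    Stage.RECEIVED: {Stage.PROCESSED, Stage.RELEASED},
    Stage.PROCESSED: {Stage.CREATED, Stage.RELEASED},
    Stage.CREATED: {Stage.PROCESSED, Stage.RELEASED},
    Stage.RELEASED: {Stage.TRANSFERRED, Stage.RECEIVED, Stage.PROCESSED, Stage.CREATED},
    Stage.TRANSFERRED: {Stage.RECEIVED},  # arrival at the next sphere
}

def advance(current, nxt):
    # Move a flowthing to its next stage, rejecting transitions FM does not allow.
    if nxt not in TRANSITIONS[current]:
        raise ValueError("FM forbids %s -> %s" % (current.value, nxt.value))
    return nxt

stage = Stage.CREATED
for nxt in (Stage.RELEASED, Stage.TRANSFERRED, Stage.RECEIVED, Stage.PROCESSED):
    stage = advance(stage, nxt)  # create -> release -> transfer -> receive -> process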

Fig. 3. Model of communication network (modified from [13]): senders and recipients exchange messages over the network.

Fig. 4. Information states in FM: created, received, processed, released, transferred.

We use the term "states of information" in the sense of properties; for example, the states/phases of water in physics: liquid, ice, and gas.

The five-stage scheme can be applied to humans and to

organizations. It is reusable because a copy is assigned to each

agent or entity. Consider an information sphere that includes a

small organization with two departments; it comprises three

information schemes: one for the organization at large, and

one for each department. Fig. 5 shows the internal information

flow in such a sphere.

The five information states are the only possible "existence"

patterns in the stream of information. To follow information as

it moves along paths, we can start at any point in the stream.

Suppose that information enters the processing stage, where it

is subjected to some process. The following are ultimate

possibilities:

1. It is stored.

2. It is destroyed.

3. It is disclosed and transferred to another sphere.

4. It is processed in such a way that it generates new

information (e.g., statistical analysis generates the information

that Smith is a risk).

5. It triggers another type of flow. For example, upon receiving

patient health information, a physician takes some action such

as performing medical treatment. Actions can also be received,

processed, created, released, and communicated.

In Fig. 4, note that the arrows between Released on one hand and the stages of Received, Processed, and Created on the other are bidirectional. This flow in opposite directions accounts for cases when it is not possible to communicate information, as in the case of a broken channel. In this case, at the Release stage, the information can be:
- destroyed after a certain period
- stored indefinitely
- returned from the release stage to the receiving, processing, or creation stages

VI. MODELING COMBAT THEATER

FM assumes that the involved parties are represented by the

five components or stages of FM: receiving, processing,

creation, release, and transfer. Next, we examine these stages

in view of Shannon's communication model.

According to Shannon, the elements involved in

communication of information are source, transmitter, channel,

receiver, and destination.

A. Source/Transmitter

The source produces messages to be communicated to the

receiving terminal. FM extends this side of communication to

highlight the "origin" of the message, whether received from

outside the source, or created within the source; thus, the

source can be described as creator or recipient, in addition to

being the sender of the message.

Fig. 5. Information flow within a company and its two departments; each of the three spheres has the stages created, received, processed, disclosed, and transferred.

Such a qualification may be significant in certain circumstances (e.g., networking where communication involves a chain of two-party exchanges).
The transmitter converts the message to a signal suitable for transmission over the channel. In FM, the source has two spheres, the message sphere and the signal sphere; thus, this element of Shannon's model reflects the source as a processor triggering the creation of signals that are released and transmitted. This conceptualization distinguishes between two explicit flowthings, information and signals, thus separating the flow of information from the flow of signals.

Shannon's information theory makes a clear distinction between signals and information. In many communication systems, signal transmission is involved only in transferring data, without the direct intent of conveying information along with the data. In conventional terminology, the notion of data is introduced as a form of information more suitable for transmission. Considering data from the FM point of view, data are processed (a stage in FM) as digitally encoded information. We thus have two ways to conceptualize the relationship between information and data. If data are viewed as a different flowthing from information, then another sphere (besides information and signal) for data is distinguished in FM; however, in a communication context, without loss of generality, we view data as a form of processed information.

B. Receiver/Destination

Shannon's model abstracts transmission as a component of the source and receiving as a component of the destination. This is a reasonable way to look at the communication process, because "transmission" requires a deliberate act of message releasing, and "receiving" requires another deliberate act of accepting the message. Even if "transmission" is conceptualized as being in the channel proper, the notions of "transmission" and "receiving" are still decisions to be made at the source and the destination, respectively.
For example, it is possible that a message (e.g., an e-mail) arrives at its destination; however, the communication process is not completed if the receiver deletes it without reading it.
One objection to Shannon's model is that "the receiver is constantly being fed pieces of information with no choice in the matter—willingly or unwillingly" [7]. FM is a more suitable conceptual representation since it divides communication into two types: one under the control of the sender or receiver and one in the channel.

C. Channel

If the ―transmission channel‖ carries the signal from its

―transmitter‖ to a ―receiver‖ (e.g., device), this physical

activity is different from the abstract pre/post transfer stages of

―releasing‖ and ―receiving‖ the message at the source and

destination spheres. Furthermore, the nature of the channel is

different from the ―nature‖ of the source and the destination.

Clearly, the source and the destination deal first with

information, whereas the channel is a ―physical sphere‖ that

deals (in this case) with physical signals. Therefore, the basic

thing that is flowing (―transmitted‖ and ―received‖ in the

source and destination, respectively) is information, while the

thing that flows in the channel is only a physical signal. The

message has informational form when released by the source

and prior to channel transmission, and it returns to such a form

after channel transmission, when it is received at its

destination.

We propose to completely separate these spheres, as shown

in Fig. 6. The sender, channel, and receiver each have five

stages in the FM. The dotted arrows in the figure are

recognition that information/data flow is distinguished from

signal flow.

Adding different stages to the channel invites different

possibilities to be explored. For example, the channel is not an

implicit participant in the communication act; rather it is fully

represented as a communication sphere of signals flow. It

receives, processes, creates, releases, and communicates

signals.

Fig. 6. FM version of communication: sender/source, channel, and receiver/destination spheres, each with the stages creation, processing, releasing, receiving, and transfer; signal flow is shown separately from information flow.

Receiving and communicating are the standard functions of channels in current communication models. Creation in the channel is manifested by noise (a type of "signal") generated in the channel. Noise is explicitly recognized in the channel. The channel may "delay" delivering the signal (e.g., traffic congestion), thus putting it in the released state.

VII. MODEL FOR ATTACK STRATEGY

At this point, it is not difficult to develop a map of

information flow and to identify points vulnerable to attack. To

simplify the map, we follow the Borden-Kopp model and limit

the representation to a theater combat model involving three

elements: attacker, channel of communication, and victim, as

shown in Fig. 7. In this simplified conceptualization, if the

information area involves a network with multiple nodes

separating the attacker from the victim, the entire network is

represented as a channel.

A theater here refers to the informational area (sphere) of action of the conflict. This area generally consists of information spheres and subspheres (e.g., the victim's sphere, which includes the victim's subdivisions; see Fig. 5) represented as FM schemata including flows and triggering relationships. The theater appears as a map of information spheres and their information-relevant flows.

Fig. 7 shows various information spheres for the attacker, channel, and victim, and sample attacks. Attack 1 (information noise) represents what the Borden-Kopp model calls "Degradation Strategy − Passive Form" in Fig. 2(a). The attacker creates "information noise," releases it, and transfers it to the channel.
Semantically, information noise is different from signal noise generated inside the channel. Spamming is an example of such noise, where the receiver is overwhelmed with too many messages. This type of attack is different from jamming the content of a message with signals/symbols that distort the original content.

In the channel, noise can be triggered (attack 2 – signal noise) by different methods, including damaging the channel physically, directly injecting a signal, etc. Jamming, for example, involves (deliberate) transmission of signals that disrupt communications by decreasing the signal-to-noise ratio. However, in this paper we are not concerned with multi-dimensional consequences, such as the possibility that jamming may alert the victim that he is under attack.

The information spheres of the victim can be targeted by many types of attacks:
- The victim's receiving system can be damaged.
- After receiving a message, the victim's receiving system can malfunction, be starved for storage space, or be overwhelmed by a shortage of power, parts, speed, etc.
- The value of received data can be reduced by denying the ability to process it.
- Processed data can be devalued by preventing the analysis that draws conclusions (new knowledge), through lack of processing power, speed, necessary memory storage, etc.
- Created data (conclusions) can be prevented from being released/transferred to decision makers.

In comparison with the forms of the Borden-Kopp model,

these strategies can be said to be atomic in the sense that they

are the simplest forms and those from which all more complex

forms are constructed. Attacks can follow the enemy's

information, with one attack after another along the

information flow, like bullets chasing a target from one place

to another.
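Read this way, the list above pairs each stage in the victim's sphere with the attacks that can target it; the following sketch (our own summary in Python, with stage labels and wording that are ours rather than a taxonomy from the paper) enumerates those attack points:

# Attack points along the victim's information flow, summarizing the list
# above; the labels are illustrative, not the paper's terminology.
ATTACK_POINTS = {
    "transfer": "damage the victim's receiving system",
    "receive": "starve received messages of storage space, power, parts, or speed",
    "process": "deny the processing capacity needed to use received data",
    "create": "prevent analysis from drawing conclusions (new knowledge)",
    "release": "block created conclusions from reaching decision makers",
}

for stage, attack in ATTACK_POINTS.items():
    print("%-8s: %s" % (stage, attack))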

VIII. CONCLUSION

This paper introduces a new approach to IW that constructs

a conceptual model of information flow. The model is used to

identify attacking strategies. IW can be defined in terms of the

resultant model, such as IW undertaken to undermine the

information sphere of an enemy. Undermining an information

sphere refers to causing malfunctioning states and flows of

information.

Fig. 7. FM version of sample attacks: attack 1 (information noise) originates in the sender/source sphere; attack 2 (signal noise) arises in the channel.

The flow-based model does not deal with the content of information. The type of information can be studied at the next level. Our model is a general conceptualization of the flow of information (receive information, transfer information, etc.). It is at the same level as conceptual models for understanding knowledge that include such phases as knowledge scanning, knowledge evaluation, knowledge transfer, and knowledge application, without specifying the knowledge itself. It is also similar to the famous observe-orient-decide-act (OODA) model of attack, where the types of actions are not specified.

REFERENCES

[1] S. Al-Fedaghi and A. Alrashed, "Threat risk modeling," presented at the International Conference on Communication Software and Networks (ICCSN 2010), Singapore, February 26-28, 2010.
[2] S. Al-Fedaghi, "How the Pride Attacks," presented at the 9th European Conference on Information Warfare and Security, Thessaloniki, Greece, July 1-2, 2010.
[3] S. Al-Fedaghi, "Modeling communication: One more piece falling into place," presented at the 26th ACM International Conference on Design of Communication (SIGDOC 2008), Lisboa, Portugal, September 22-24, 2008.
[4] S. Al-Fedaghi, "Systems of things that flow," presented at the 52nd Annual Meeting of the International Society for the Systems Sciences (ISSS 2008), University of Wisconsin, Madison, USA, July 13-18, 2008. http://journals.isss.org/index.php/proceedings52nd/article/viewFile/1001/354
[5] A. Borden, "What is information warfare?" Aerospace Power Chronicles, United States Air Force, Air University, Maxwell AFB, USAF; IWS: Information Warfare Site, November 2, 1999. http://www.iwar.org.uk/iwar/resources/airchronicles/borden.htm
[6] L. S. Johnson, "Toward a functional model of information warfare," Central Intelligence Agency, May 8, 2007. https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/docs/v40i5a07p.htm
[7] M. McLuhan, Understanding Media: The Extensions of Man. New York: The New American Library, 1964. Excerpt: "Interpretations with limitations: A critical essay on the mathematical theory of communication," http://www.uweb.ucsb.edu/~andreabautista/
[8] C. Kopp, "The four canonical strategies of information conflict," Lecture 04, 2006. http://www.csse.monash.edu.au/courseware/cse468/2006/Lectures/CSE-468-04.pdf
[9] C. Kopp, "Shannon, hypergames and information warfare," Journal of Information Warfare, vol. 2, no. 2, 2003, pp. 108-118. http://www.csse.monash.edu.au/courseware/cse468/2006/Lectures/_JIW-2002-1-CK.pdf
[10] C. Kopp, "Information warfare. Part 1. A fundamental paradigm of infowar," February 2000. http://www.ausairpower.net/OSR-0200.html
[11] C. Kopp, "Fundamentals of information warfare," NCW 101 Part 14, DefenceToday. http://www.ausairpower.net/NCW-101-14.pdf
[12] R. L. Mackey, "Information operations: Reassessing doctrine and organization," 2003. http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA413656&Location=U2&doc=GetTRDoc.pdf
[13] A. Pfitzmann and M. Hansen, "Anonymity, unlinkability, unobservability, pseudonymity, and identity management - A consolidated proposal for terminology," August 25, 2005. http://dud.inf.tu-dresden.de/Literatur_V1.shtml
[14] D. Silverberg (Ed.), "Network centric warrior, Q&A," interview with Vice Admiral Arthur Cebrowski, Director, Space, Information Warfare, Command and Control, Chief of Naval Operations, Military Information Technology, vol. 2, no. 2, April-May 1998.
[15] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, July and October 1948, pp. 379-423, 623-656.
[16] S. E. Widnall and R. R. Fogelman, "Cornerstones of information warfare," SAF/CSAF publication, 1997. http://knxas1.hsdl.org/homesec/docs/dod/nps08-021105-01.pdf
[17] R. Smith, "Simulating information warfare using the HLA Management Object Model," SIW Discussions, Spring 2000. http://www.modelbenders.com/papers/00F-SIW-021.pdf
[18] B. Shepard, C. Matuszek, C. B. Fraser, W. Wechtenhiser, D. Crabbe, Z. Güngördü, J. Jantos, T. Hughes, L. Lefkowitz, M. J. Witbrock, and D. Lenat, "A knowledge-based approach to network security: Applying Cyc in the domain of network risk assessment," in Proceedings of the Seventeenth Innovative Applications of Artificial Intelligence Conference, Pittsburgh, Pennsylvania, July 2005, pp. 1563-1568.

A Multi-Staged Approach to Risk

Management in IT Software Implementation

Sanjukta Das, Raj Sharman, Manish Gupta, Varun N. Kutty, Yingying Kang

State University of New York, Buffalo, NY

Amherst, New York, USA, 14260

{sdsmith4, rsharman, mgupta3, vn27, ykang4}@buffalo.edu

Abstract: Software implementations in organizations are

highly complex undertakings with non-trivial risk implications

pertaining to data, applications and other resources. This

research-in-progress paper presents a two-tiered approach to

managing risk over a multi-staged implementation. As an

example, we use the application area of Single Sign-on (SSO)

whereby a user can gain access to multiple, interdependent

software systems using a single username/password

combination. We develop an optimization strategy that enables

addition of appropriate software systems to SSO

implementation in multiple phases. We formulate an integer

programming (IP) problem to determine the optimal cluster of

applications to be moved to SSO in each phase based on the

inherent risks involved therein. We then use Real Options

Theory to help managers determine the timing of the different

stages of their investment decision.

I. INTRODUCTION

This research-in-progress paper addresses issues with IT software implementations in organizations, which tend to increase in complexity with greater degrees of pervasiveness, data complexity, and other such factors.

Such implementations can be fraught with a variety of risk

implications pertaining to a range of resources. We offer a

framework for managing risks in a multi-staged

implementation environment pertaining to an exemplar

application area of Single Sign On (SSO). Due to

proliferation, SSO is an authentication mechanism by which

a user can gain access to multiple, software systems and

organizational resources using a single username/password

combination. Without SSO, users have to manage a set of

authentication credentials for every system they need to

access. A single sign on system should provide for secure

storage of user passwords, support for more than one user

password, as well as support for multiple target logon

methods [3]. Research in the area of SSO is scant at best. Further, the investigation of implementation issues

from a research perspective and the application of real

options in SSO investments are virtually non-existent. This

paper develops an optimization strategy that provides

managerial insights on implementation options available to

managers using an IP formulation and real options theory.

In this paper we have assumed the software systems to be

independent of each other. The risk mitigation strategies in

SSO are of particular importance to every organization

because of the security concerns when adopting SSO.

The contributions of this paper are twofold: a risk mitigation strategy for adopting SSO, and timely implementation of the investment strategy through effective managerial decision making using real options. The IP formulation solves the problem by providing a solution that informs a risk-mitigating strategy on which applications to implement as part of the single sign-on in each phase, based on a number of constraints. Gordon et al. [5] have also argued that a real-options-based "wait and see" approach to security investments makes sense for most information security capital expense decisions. A real-options analytic framework has been used to show that information security investments are time-based and are sometimes deferred [7]. The decision to invest is made by

considering real options. Herath & Herath [6] developed an

integrated real options model for information security

investments using Bayesian statistics that incorporates

learning effects. The advantages of moving to SSO are a

reduction in time spent in entering passwords, improved

security by the reduced requirement of handling multiple

passwords, reduced helpdesk costs by lowering the number

of helpdesk calls related to password reset and centralized

reporting for compliance adherence. SSO also makes satisfying audit requirements (such as compliance with Sarbanes-Oxley (SOX)) easier, more robust, and less expensive, as the process can become more centrally managed. The disadvantages of moving to SSO are a single point of failure and a single concentrated target for attacks or hacking.

II. MODEL PARAMETERS

In this paper, we assume that application software systems are first classified into several risk categories, such as high, medium, or low risk, or more specific classes such as customer facing, employee facing, federal government facing, etc., symbolizing the risk they pose. Applications are considered for inclusion in an implementation phase by taking into consideration the costs and threshold data risks involved per period, based on the security policies in place. Threshold data risk (TDR) is a function of other organizational factors and requires analysis by IT experts to set the level of data risk they wish to assume per phase of implementation (details on the computation of such risks are beyond the scope of this paper; we assume these values are computed or provided).

Also, three matrices have been defined:
a. Application Confidentiality Matrix AC, where ACj = 1 if application j contains confidential data, and ACj = 0 otherwise.
b. Application Data Size Matrix AD, where ADj is the data, in megabytes, handled by application j.

c. We define a matrix, to be used in the formulation, that specifies which data belonging to application j are confidential.
Consider the matrix for Total Variable Cost, TVCt,j, computed as the number of users using software system j at time t multiplied by the variable cost per user for application j at time t.

III. COST MINIMIZATION PROBLEM

The Integer Programming problem here is to identify

the optimal cluster of software systems per phase to be adopted

for Single Sign On at the lowest cost. The problem takes

into account the data exposure risk and business criticality

risk involved and gives an optimal solution. At the outset

we define the notation used in the formulation in Table I.

TABLE I. NOTATIONS

FC0: Initial fixed cost
FCt: Fixed cost (of implementation) at time interval t, assumed to remain constant per time period
TVCt,j: Total variable cost of using software system j moved to SSO at time interval t
r(q)j: Risk stemming from all factors other than data. For instance, a bank could have risk levels related to customer/employee-facing applications, federal filings, etc.; another firm might simply rank risks as high, medium-high, medium, medium-low, low, etc. In general, r(q)j has a value of 1 if software system j is of risk type r(q) and 0 otherwise
AppDataRiskj: Risk stemming from confidential data that the application supports
TDRt: Threshold data risk at time interval t
n: Number of software systems
T: Total time horizon
R: Total types of software system risks
LowerTper: Implement one high-risk software system every Tperiodic number of years
UpperTper: Implement (Tperiodic)^-1 high-risk software systems every year
Bt: Maximum budget at time interval t
xjt: Binary variable indicating whether software system j was moved to SSO at time interval t

Minimize: total fixed and variable costs across all phases
Subject to: constraints (1a), (1b), (2a), (2b), (3), (4), (5), (6)

The formulation is presented in equations 1a-6. The

objective function consists of fixed costs and variable costs

(licensing fees, training costs). Constraint (1a) takes care of

data threshold risk (TDR) for every implementation phase.

As the implementation team gains confidence in

implementing SSO, the learning effect is captured by the

incremental values of the threshold risk that are assigned

to each phase. For example, TDR may be modeled as a

linear function of time or a non-linear function such as an

exponential function as is relevant to that organization.

Constraint (1b) ensures that the sums over individual risk categories of the software systems are less than the data threshold risk per risk type. Either (1a) or (1b) could be used based on management preferences. This is done for

security purposes as we wish to limit data exposure to a

minimum at every phase. Either constraint (2a) or (2b),

which is meant to cluster specific software systems per

phase, is used based on the following decision:

Let us calculate Tperiodic(q) as the total time horizon T divided by the number of software systems of risk type r(q). There are two scenarios to be considered when clustering the r(q)-type risk software systems; only one of them must be implemented, based on the value of Tperiodic(q):

Constraint 3(a): Tperiodic(q) > 1: This means that we do not have to implement r(q) risk systems every year. We can implement one system every LowerTper(q) years, where LowerTper(q) is the lower integer bound of Tperiodic(q). For example, suppose the total number of periods is five years and the number of high-risk systems is four. Then LowerTper(q) = lower integer bound (1.25) = 1, which implies one high-risk software system is implemented every year.
Constraint 3(b): Tperiodic(q) <= 1: We take the upper integer value of 1/Tperiodic(q), or UpperTper(q). For example, suppose the total number of periods is 5 years and the number of customer-facing applications (which are ranked fairly high on the risk scale by the firm) is seventy. Then UpperTper(q) = upper integer bound (1/0.0714285) = 14, which implies fourteen of these software systems are implemented every year.
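Both worked examples follow from computing Tperiodic(q) and applying the appropriate integer bound; the following snippet (a minimal Python check, with floor and ceil standing in for the paper's lower and upper integer bounds) reproduces the two numbers:

import math

def systems_per_year(T_years, n_systems):
    # Tperiodic(q) = T / (number of r(q)-type systems).
    t_periodic = T_years / n_systems
    if t_periodic > 1:
        # Constraint 3(a): one system every LowerTper(q) = floor(Tperiodic) years.
        return 1, math.floor(t_periodic)
    # Constraint 3(b): UpperTper(q) = ceil(1/Tperiodic) systems every year.
    return math.ceil(n_systems / T_years), 1

print(systems_per_year(5, 4))   # (1, 1): one high-risk system per year
print(systems_per_year(5, 70))  # (14, 1): fourteen systems per year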

Constraint (4) assigns the lowest-risk software systems in every phase on the basis of costs per phase and assumable risk per phase, after taking into account all higher-level risks in Constraints (3) and (1). Once a software system has been implemented under SSO, it is no longer available for reimplementation in another phase, and this is also ensured via Constraint (4). Constraint (5) is used to restrain costs from exceeding a budgetary threshold limit in every phase.
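As a concrete illustration, one plausible instance of such a formulation, written in our own notation and not necessarily the authors' exact equations (we sketch only the objective and constraints (1a), (4), (5), and (6) here), is:

\begin{align*}
\min \quad & FC_0 + \sum_{t=1}^{T} FC_t + \sum_{t=1}^{T}\sum_{j=1}^{n} TVC_{t,j}\, x_{jt} \\
\text{s.t.} \quad & \sum_{j=1}^{n} AppDataRisk_j \, x_{jt} \le TDR_t \quad \forall t \qquad \text{(data risk per phase, cf. (1a))} \\
& \sum_{t=1}^{T} x_{jt} \le 1 \quad \forall j \qquad \text{(no reimplementation, cf. (4))} \\
& FC_t + \sum_{j=1}^{n} TVC_{t,j}\, x_{jt} \le B_t \quad \forall t \qquad \text{(budget, cf. (5))} \\
& x_{jt} \in \{0,1\} \quad \forall j, t \qquad \text{(cf. (6))}
\end{align*}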

IV. REAL OPTIONS

Real Options Analysis is a method that allows consideration of deferment, among other possible actions, in the valuation of investments, which traditional budgeting techniques such as Net Present Value (NPV) analysis cannot accommodate. Real options analysis computes the value of embedded options (managerial flexibility) separately and adds it to the traditional NPV, also called the Passive NPV (NPVP, obtained by subtracting cost from utility gained, both present-valued over a single period at a risk-free rate, which is the assumption made here). What we then get is the Active NPV (NPVA) [1][2].

We have defined the decision making process as follows:
i. Invest if NPVP > 0 and Option Value > 0.
ii. Abandon the two-tiered investment in the first phase if NPVA < 0, NPVP < 0, and Option Value > 0.
iii. Defer investment by one period if NPVA > NPVP.
iv. After the first deferral, in the second phase, we implement the investment if NPVP > 0 and Option Value > 0, or abandon it if NPVP < 0 and Option Value > 0.
If the very first investment is abandoned, we abandon subsequent movements to SSO, because of the sequential consideration of implementation.
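Read as code, rules i-iv amount to a small decision function; the sketch below (our own Python transcription, with the NPV and option figures as plain inputs) applies them to a single phase:

def decide(npv_passive, npv_active, option_value):
    # Apply decision rules i-iv for one phase (our transcription of the rules above).
    if option_value <= 0:
        return "no option value: fall back to plain NPV analysis"
    if npv_active > npv_passive and npv_passive < 0:
        return "defer one period, then re-evaluate under rule iv"  # rule iii
    if npv_passive > 0:
        return "invest"   # rules i and iv
    return "abandon"      # rules ii and iv

print(decide(npv_passive=-10.0, npv_active=5.0, option_value=2.0))  # defer
print(decide(npv_passive=8.0, npv_active=9.0, option_value=2.0))    # invest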

It is assumed that as more and more software systems are added to SSO, employees become more familiar with the system as time progresses; therefore, variable costs and training costs decrease over time.

Fig. 1. Help desk cost per number of phases (n).

Help desk costs monotonically decrease as more software systems are included under SSO, because of a reduction in the number of password-related calls. Helpdesk cost is assumed to be an exponentially decreasing function, Cost(n) = constant * e^(-k(n-1)), as shown in Figure 1. When implemented for organizations, help desk costs will be a function of the number of users, the help desk cost per user, and the number of calls per user per year. Upon deferment, helpdesk costs in the next year are considered to be the

same as in the previous year. An increase in user utility (or productivity) is observed from the adoption of SSO, as users spend a reduced amount of time making helpdesk-related calls; the increase is therefore related to a reduction in help desk calls. The utility function is assumed to be an increasing function of the k-th root of the number of phases implemented, Utility(n) = constant * n^(1/k), as in Figure 2. The idea is that utility increases as more and more software systems are added to SSO.
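To see the two assumed shapes side by side, the following snippet (a minimal Python sketch; the scale constants and k values are hypothetical placeholders, not calibrated figures from the paper) evaluates Cost(n) and Utility(n) over five phases:

import math

COST_SCALE, K_COST = 100000.0, 0.5  # hypothetical constants
UTIL_SCALE, K_UTIL = 50000.0, 3.0   # hypothetical constants

def helpdesk_cost(n):
    # Cost(n) = c * e^(-k(n-1)): monotonically decreasing in the number of phases.
    return COST_SCALE * math.exp(-K_COST * (n - 1))

def utility(n):
    # Utility(n) = c * n^(1/k): increasing in the number of phases.
    return UTIL_SCALE * n ** (1.0 / K_UTIL)

for n in range(1, 6):
    print(n, round(helpdesk_cost(n)), round(utility(n)))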

Fig. 2. Utility over number of phases (n)

Satisfaction surveys such as SERVQUAL are adopted to

reach a consensus on the utility function to be

modeled. The observed increase in utility is a function of

percentage reduction in helpdesk costs, average wage per

user per annum, number of help desk calls per year

(assuming 40 hours work time per week for 52 weeks) and

time per help desk call.

Fig. 3. Technology cost over time

In Figure 3, it is seen that technology costs have also

been considered to be a monotonically decreasing

exponential function. We consider investment decisions two

phases at a time as in Figure 4.

Fig. 4. Investment decision tree

Figure 5 explains via pseudocode the method used to apply real-options managerial decision making.

Fig. 5. Real Options Algorithm:
1. While number of phases is less than or equal to two
   1.1. If NPVP > 0 and Option Price > 0
      1.1.1. Implement Phase I
   1.2. Endif
   1.3. If NPVA > NPVP and NPVP < 0 and Option Price > 0
      1.3.1. Defer Phase I
         1.3.1.1. If NPVP < 0 and Option Price > 0
            1.3.1.1.1. Abandon Phase I
            1.3.1.1.2. Go to 2
         1.3.1.2. Endif
         1.3.1.3. If NPVP > 0 and Option Price > 0
            1.3.1.3.1. Implement Phase I
         1.3.1.4. Endif
   1.4. Endif
   1.5. If NPVA < 0 and NPVP < 0 and Option Price > 0
      1.5.1. Abandon Phase I
      1.5.2. Go to 2
   1.6. Endif
   1.7. Increment phase number counter
2. Stop while loop

V. PROOF OF CONCEPT

Related work has been drawn from [4], which considered the case of a major Northeast US financial institution that contemplated moving to single sign-on. The

parameters chosen to cluster software systems were user

base, criticality based on business classification,

authentication mechanisms, regulations applied to content

and process around application and application architecture.

Similar application characteristics and business aspects of

decision making are borrowed from that published work.

Ideas for research of this conference paper were kindled by

some tangential feedback we received from practitioners

working in the area during development of [4].

VI. CONCLUSION

In this paper, we develop a multi-staged SSO

implementation strategy that optimizes cost based on

risk thresholds using an Integer Programming

formulation and Real Options analysis. The IP

formulation tries to find the best clusters of software

systems along the time dimension by minimizing total

cost involved across all phases. The real options

analysis imparts managerial flexibility by determining

the ideal timing issues. This mechanism has been

implemented using CPLEX and MATLAB. A full-

fledged data collection and trace driven simulation will

be conducted next. This implementation does not take

into account typical organizational roles embodied in

Role Based Access Control (RBAC) and is an area for

future extension.

REFERENCES

[1] M. Benaroch, "Option-Based Management of Technology Investment Risk," IEEE Transactions on Engineering Management, vol. 48, no. 4, pp. 428-444, November 2001.
[2] M. Benaroch, "Managing Information Technology Investment Risk: A Real Options Perspective," Journal of MIS, vol. 19, no. 2, pp. 43-84, Fall 2002.
[3] R. Cohen, R. A. Forsberg, P. A. Kallfelz, J. R. Meckstroth, C. J. Pascoe, and A. L. Snow-Weaver, "Coordinating user target logons in a single sign-on (SSO) environment," U.S. Patent 6178511, filed April 30, 1998, issued January 23, 2001; assignee: International Business Machines Corporation.
[4] M. Gupta, R. Sharman, and D. Bryan, "Enabling Business and Security Through Technology Implementation: A Financial Services Case Study," Journal of Applied Security Research, vol. 4, no. 3, pp. 322-340, 2009, Taylor and Francis Group.
[5] L. Gordon, M. Loeb, and W. Lucyshyn, "Information security expenditures and real options: A wait and see approach," Computer Security Journal, vol. XIX, no. 2, 2003, p. 17.
[6] H. Herath and T. Herath, "Investments in Information Security: A Real Options Perspective with Bayesian Postaudit," Journal of Management Information Systems, Winter 2008-9, pp. 337-375.
[7] K. Tatsumi and M. Goto, "Optimal Timing of Information Security Investment: A Real Options Approach," Workshop on the Economics of Information Security (WEIS), 2009.



A Survivability Decision Model for Critical Information Systems Based on Bayesian Network

Suhas Lande, Yanjun Zuo, and Malvika Pimple
University of North Dakota, Grand Forks, ND 58203
{suhas.lande, yanjun.zuo, malvika.pimple}@und.edu

This work was supported in part by the U.S. AFOSR under grant FA 9550-09-1-0215.

Abstract—Critical information systems (CISs) cover a vast number of applications and are now an essential part of our day-to-day life. Any damage to such a system or loss of information due to malicious attacks or system failures can cause serious consequences to society and individuals. Therefore, it is important to maintain the survivability of these systems and make timely decisions on system repair, if necessary, in order for the systems to support critical services. In this paper, we propose a Bayesian network based decision model to help system administrators better understand the hidden states of a CIS in order to determine its survivability status based on prior knowledge and currently available evidence. We perceive that the survivability of a system depends on several factors in such a way that probabilistic relationships exist between these factors and the system's survivability status. We represent such probabilistic relationships using a Bayesian network. Our model can be used to determine whether it is adequate for the system to continue supporting critical services or whether it needs to be repaired to avoid further losses. The model thereby helps system administrators reduce the magnitude of possible service interruptions due to malicious attacks or system failures.

Index Terms—Bayesian networks, survivability, decision-making

I. INTRODUCTION

Our society is increasingly dependent upon large-scale, highly distributed information systems. A number of critical information systems (CISs) have been applied to both civilian and military critical infrastructure applications [1, 2], such as banking and finance, telecommunication, healthcare, and national defense, all of which are essential for the well-being of our society. In many cases, the provision of service by these infrastructure applications depends on the correct operation of the underlying critical information systems, and damage or loss of service in any of these application domains would cause serious consequences. Nowadays, much attention is paid to the dependence on these critical information systems because of their ultimate importance. This dependency results in a great deal of concern about those systems in the case of hardware or software failures, operator errors, power outages, environmental disasters, and attacks by adversaries [3].

Despite the best efforts of system developers and security practitioners, it is still infeasible to assure that a CIS will be invulnerable to well-organized attacks and unpredictable system failures. The unique characteristics of a CIS make it infeasible to build a completely secure system. First, a CIS is typically very large in size, both geographically and in terms of the number of elements. Second, the services supplied to end users are often attained by composing different functionalities; thus, the complete services can only be obtained when several subsystems cooperate and operate in some predefined order [2]. The cost of disruptions in CISs grows more rapidly than ever before, yet increasing reliance on CISs increases vulnerability to disruption. Given the dependence upon these CISs, an important property that these systems must achieve is survivability. In general, survivability refers to the ability of a system to continue to provide service in the face of failures, attacks, or accidents [4].

Given the importance of a CIS in various mission-critical applications, it is crucial to constantly monitor the system, detect any anomaly, and conduct timely diagnosis and repair, if necessary, to make sure that the system can continue to function and provide essential services to users. However, given the increasing sophistication of malicious attacks and the increasing complexity of the systems, it is often difficult to determine whether a CIS has been compromised or is in a faulty state, because the symptoms may not be observable at an early stage. It is also challenging to determine whether a CIS should be allowed to continue functioning given its uncertain state. On one hand, taking the system offline for diagnosis and repair will unavoidably affect system productivity, so such system checks should be done at a minimum frequency; in some situations, any disruption of the system, even for a short period of time, may not be acceptable. On the other hand, it could result in disastrous consequences if a CIS has already been compromised and still carries out mission-critical functions. In this paper we propose a probability-based decision model to help system administrators make the best judgment on the current state of a CIS given various influential factors and some known evidence about the system properties. Our model helps users determine whether it is necessary to conduct a thorough system diagnosis and whether it is acceptable for the system to continue supporting mission-critical services. The model applies cause-effect relationships among the different factors and available evidence to reach the best actions for the system administrators to take in order to maintain system survivability.

Bayesian networks are powerful tools for modeling cause-effect relationships in various domains. They have proven useful in representing complex probability relationships, such as determining a decision given evidence from a set of dependent factors. In this paper, we apply a Bayesian network to our decision model. Most security related decisions are not momentary: they take time, depend on other factors, and can be divided into sub-stages [5]. In our work, the final decision of whether to allow a system to continue supporting essential services is derived from a number of survivability influential factors and the relationships among them.

The rest of the paper is organized as follows: Section 2 describes the related work; the survivability decision model is presented in Section 3; we conclude the paper in Section 4.

II. RELATED WORK

In this section, we discuss two streams of related work: system survivability and applications of Bayesian networks in security and risk management.

A. System Survivability

Knight et al. [6] propose a system architecture to enhance the survivability of critical systems. They discuss a reconfigurable architecture for handling system errors and propose a mechanism to achieve survivability. Such an architecture allows the detection of, and response to, various types of faults; by introducing fault tolerance, progress can be made towards meeting the survivability goals of a critical infrastructure application. Ellison et al. [7] highlight the importance of survivability in critical systems. They present survivability strategies to make a system robust and resilient to attacks, proposing the use of traditional computer security solutions, effective risk mitigation strategies based on "what-if" scenario analyses, and contingency planning to improve the survivability of a system. Incorporating these approaches into new and existing systems can improve their survivability. Zhu et al. [8] present intrusion-tolerance and multi-version subsystem techniques for the survivability of critical information systems; the survivability goal is achieved using redundancy and diversity in the presence of intrusions or failures. Sullivan et al. [9] propose hierarchical, adaptive distributed control systems that monitor and respond to malicious attacks by changing the operating modes and configurations of a system to minimize the loss of system utility. The authors evaluate their approach using distributed dynamic models of infrastructure information systems.

In this paper we develop a model to determine the hidden states of a system and decide whether the system is still survivable enough to provide critical services in the face of malicious attacks and system failures, based on available evidence and the important factors that affect the survivability of the system. The major goal of our work is to facilitate survivability decisions using a probabilistic reasoning model.

B. Bayesian Network and its Applications

Bayesian networks are powerful tools for modeling cause-effect relationships. They are compact networks of probabilities that capture the relationships among events. Formally, a Bayesian network is a joint probabilistic model for n random variables (x1, . . . , xn), represented as a directed acyclic graph G = (V, E), where V is a set of nodes corresponding to the variables V = (x1, . . . , xn) and E ⊆ V × V contains directed edges connecting the nodes in an acyclic manner. Instead of weights, the graph edges are described by the conditional probabilities of nodes given their parents, which are used to construct a joint distribution P(V) = P(x1, . . . , xn).
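Equivalently, the network encodes the standard chain-rule factorization of the joint distribution over the directed acyclic graph, with pa(x_i) denoting the parent set of x_i:

    P(x_1, \ldots, x_n) = \prod_{i=1}^{n} P(x_i \mid \mathrm{pa}(x_i))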

Bayesian networks have been applied to various applications in security and risk management, where evidence and cause-effect relationships can be used to predict the probabilistic unknown values of variables of interest. Modelo-Howard et al. [10] propose an algorithm that systematically performs Bayesian inference to determine the effect of intrusion detector configuration on the accuracy and precision of security goals in a system. Their framework represents the causality relationships between different attack steps, and between attack steps and the detectors, using conditional probabilities. Cemerlic et al. [11] propose an adaptive network intrusion detection model using a Bayesian network trained with a mixed dataset containing real-world and DARPA dataset traffic. Since individual events in an attack can be represented as nodes and the causal relations among events can be modeled as edges in a Bayesian network, their paper uses a Bayesian network as the inference model. Frigault et al. [12] propose a Dynamic Bayesian network based model to incorporate temporal factors, such as the availability of exploit code or patches. Starting from the model, the authors study two concrete cases to demonstrate potential applications. The work provides a theoretical foundation and a practical framework for continuously measuring network security in a dynamic environment. Bringas et al. [13] present a unified misuse and anomaly prevention system (ESIDE-Depian) based on Bayesian networks for analyzing network packets. The model is evaluated against well-known and new attacks, showing how it outperforms a well-established industrial network intrusion detection system. Abouzakhar et al. [14] develop a probabilistic approach using Bayesian networks to detect distributed network attacks as early as possible; their paper shows how a Bayesian network probabilistically detects communication network attacks. Mukhopadhyay et al. [15] propose a framework based on a copula-aided Bayesian Belief Network (BBN) model to quantify the risk, arising out of a security breach, associated with online business transactions, and thereby help in designing e-insurance products. A representative causal diagram elucidating the probable reasons for a security breach is developed and populated with expert opinions. Herath et al. [16] develop a model for information security investments using Bayesian inference for valuation and post-auditing. Their work uses Bayesian statistics to model learning options and the post-auditing of sequential real options, an idea that can be applied to a variety of other types of sequential investment problems.

Our work has different objectives from the above frameworks and complements the existing models applying Bayesian networks in different areas. We use a Bayesian network to represent the cause-effect relationships between the various factors that affect the survivability of a system. Based on those relationships and user-input evidence, system administrators can determine the survivability status of a CIS. Those decisions are essential to ensure the system can provide mission-critical services to users.

III. THE SURVIVABILITY DECISION MODEL

In this section, we discuss our Bayesian network based survivability decision model. Bayesian networks used strictly to model reality are called belief nets, while those that also mix in elements of values and decision making are called decision nets; the latter also provide optimal decisions based on a set of evidence. In this paper, we develop a Bayesian decision network model to be applied to survivability related decision making. A Bayesian network allows a user to enter expert knowledge and available evidence about a system's status. The network can then be recalibrated every time additional information is entered: the effect of new information propagates automatically to the child and parent nodes, enabling the network to update the joint probability distributions. Bayesian networks are easily extended to computing utility, given the degree of knowledge we have of a given situation. They can be used wherever an uncertain reality must be modeled, and, in the case of decision networks, they help make intelligent, justifiable, quantifiable decisions that maximize the chances of a desirable outcome. Our Bayesian network based survivability decision model helps the system administrator choose the best decision by computing the utility associated with each decision choice. In our model, the utility is computed from the cost of interruptions of the service provided by a CIS. Since a CIS must survive malicious attacks and system failures in order to provide essential services to users whenever those services are required, the time and effort required to diagnose or repair a CIS should be kept at a minimum. More specifically, our survivability decision model addresses the following issues:

- Identifying the major factors that affect a CIS's survivability and specifying their quantitative cause-effect relationships;
- Defining the survivability decision objectives for a CIS, a series of decisions for the system administrators to make, the utilities of the choices for each decision, and the conditional probability relationships (CPRs) between the influencing nodes (i.e., the parent nodes) and the influenced node (i.e., the child node);
- Answering, based on the specified CPRs and user-input evidence, various "what-if" questions related to system survivability, which the system administrators can use under different attack and system-failure scenarios.

Our model assists administrators in better understanding the actual states of a CIS related to its survivability status. This knowledge can help administrators decide whether to conduct an offline diagnosis of the system to fully check the system's survivability status. The diagnosis results (with a certain confidence level) can further help the system administrators determine whether it is acceptable to allow the system to continue supporting mission critical services or whether they should stop the system and schedule a repair because the system has been compromised. These decisions are critical to maintaining a healthy, survivable system in the following ways. First, the decisions help achieve a tradeoff between correct system operation and the minimization of service interruptions. Second, since the impact of damage to a CIS tends to increase over time, the sooner the administrators understand the survivability state of the system, the sooner they can decide to repair it, if necessary, to avoid potential further losses. Hence, the dependability of the system can be maintained.

Our Bayesian network based survivability decision network is shown in Figure 1. The network components are discussed in detail in the next sections. We have implemented the decision model using the Netica [17] tool, where each node represents a state and a cause-effect relationship between two nodes is denoted by an arrow, called an edge. The test cases we conducted indicate that the model is useful in various survivability related decisions.

A. Nodes

A node in a Bayesian network represents a variable in the situation being modeled. Each node is represented graphically by a labeled rectangle. There are three different kinds of nodes in a Bayesian network: nature nodes, decision nodes and utility nodes. A node can take different values, each of which is referred to as a state of the node. In our survivability decision model, there are nine nature nodes, two decision nodes, and two utility nodes, as shown in Figure 1. A nature node models the nature or reality of one aspect of the system. The semantic meanings of the nine nature nodes in our model are described as follows.

1. The "Security Vulnerability" node represents a security property of the CIS under consideration and indicates how vulnerable the system is to malicious attacks. From the system engineering perspective, every system has a certain level of vulnerability to known or unknown attacks. This node takes the value High, Medium or Low to represent the corresponding degree of vulnerability of a CIS.


Fig. 1. The Bayesian Network Based Survivability Decision Model

2. The "Adversary Capability" node represents an adversary's ability to compromise a system. This node takes the value High, Medium or Low to represent how capable an adversary is of attacking and compromising a CIS.

3. The combined effect of system security vulnerability and adversary capability determines the likelihood of a system being attacked. We use the "Hackable" node to represent how likely it is that a CIS could be compromised given the system's vulnerability and the adversary's ability to attack.

4. System failures due to deficiencies of design and implementation can endanger the survivability of the system. The "System Fault" node is used to represent possible system failures. This node can take a High, Medium or Low value, indicating the possibility of system failures that could affect system survivability.

5. The effects of system failures are typically visible to users in the form of degradation of the services that the system provides. The "System Performance" node therefore represents the service quality provided by a CIS, as indicated by system performance metrics such as network throughput, system response time, and service quality. System performance is evaluated as Normal or Poor.

6. A CIS's ability to conduct self-healing when under attack or in a partial failure state is represented by the "On-the-fly Repair" node. This ability is crucial for automatically recovering the system in a timely manner before more severe damage results. This node can take two values, "Capable" or "Not Capable", indicating whether the system has sufficient self-healing ability from the system survivability perspective.

7. Similarly, fault tolerance allows a CIS to continue operating properly in the event of failures. We represent this capability of the CIS using the "Fault Tolerance" node in our model. The node has two possible values, "Capable" or "Not Capable", indicating whether the system has sufficient fault tolerance ability for the purpose of damage masking.

8. Given the values of the above seven nodes, the condition of a CIS is represented using the "System Compromised or Faulty" node. This node has two possible values, "True" or "False", indicating whether the system has been compromised by an adversary or is in a faulty state due to system failures.

Our survivability decision model has a provision for system administrators to make a series of decisions regarding system survivability. Each decision is represented by a decision node (denoted as a rectangle with a different background color from a nature node). Based on the system condition represented by the "System Compromised or Faulty" node, system administrators can decide whether it is necessary to perform a thorough analysis of the system in order to verify and confirm its survivability status. This decision is represented by the "Do Offline Survivability Diagnosis" decision node shown in Figure 1. The diagnosis result is represented by the "Survivability Diagnosis" nature node. Given the likelihood of the system's survivability status, the system administrators can further decide whether to allow the CIS to continue supporting the mission critical services or whether it is necessary to take the system down for recovery. This decision is represented by the "Continue to Support Mission Critical Services" decision node. For each decision node, the number next to each choice indicates the expected utility of making that choice. This expected utility is computed as the product of the utility of an event and the probability of that event occurring, i.e., Expected Utility(A) = Utility(A) x P(A). The highest expected utility indicates the best choice for that decision.
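As a toy illustration of this computation (the probability and the payoff for the healthy "continue" and "stop while compromised" cases below are assumed purely for illustration; -200 and -50 mirror the utility values discussed in the next paragraph):

    # Expected utility of each decision choice, summed over the two
    # possible states of the world (compromised/faulty or not).
    p_bad = 0.10                      # assumed belief, illustration only
    utilities = {
        ("continue", True): -200,     # run critical services while compromised
        ("continue", False): 30,      # assumed payoff of normal operation
        ("stop", True): 0,            # assumed: stopping a bad system is neutral
        ("stop", False): -50,         # needless interruption (false negative)
    }

    def expected_utility(decision):
        return (utilities[(decision, True)] * p_bad
                + utilities[(decision, False)] * (1 - p_bad))

    for d in ("continue", "stop"):
        print(d, expected_utility(d))   # continue: 7.0, stop: -45.0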

A utility node in a Bayesian network represents the utility value of each choice associated with a decision. In our model two utility nodes are specified: "Diagnosis Cost" and "Administration Action". The former represents the cost (a non-positive utility value) associated with conducting an offline survivability diagnosis. If the system administrators decide that an offline survivability diagnosis is necessary, then the cost is -10, which represents the time and effort needed to check the system as well as the loss of productivity due to interruptions of service to users; otherwise, the cost is 0, since no diagnosis is performed. The latter node represents the utilities corresponding to the various decision choices regarding whether the system should be allowed to continue supporting mission-critical services given its current status. For instance, if the system is actually compromised or in a faulty state but is still allowed to provide critical services, the consequences would be unpredictable; therefore, a negative utility value of -200 is assigned. In another case, if the system is not compromised but the system administrators disallow it from continuing to function (i.e., a false negative), a utility value of -50 is assigned. This smaller negative utility represents the cost of mistakenly interrupting the system when it is not compromised or in a faulty state.

B. Edges

An edge of a Bayesian network represents a causality relationship between two nodes. It is represented graphically by an arrow between the two nodes, and the direction of the arrow indicates the logical causality (see Figure 1). The intuitive meaning of an edge drawn from node A to node B is that A has a direct influence on B. The conditional probability table (CPT) associated with a node defines quantitatively how that node (the child node) is influenced by the others (the parent nodes); in such a relationship, the child node is conditionally dependent upon its parent nodes. For instance, in our model, the level of system security vulnerability and the capability of the adversary (the parent nodes) have a great impact on the possibility of the system being hackable (the child node), so there are causality relationships among those nodes. Furthermore, we identify the following factors that can affect a system's security state (i.e., compromised or faulty): the system's on-the-fly repair ability, its fault tolerance capacity, its hackability, and possible internal failures (evidenced by system performance). In addition, the diagnosis results are determined by (1) the system administrator's decision to undertake the offline system diagnosis, and (2) the hidden state of the system (compromised or faulty).
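The edge structure just described can be written down directly; the following sketch lists the parent sets exactly as stated above, with node names taken from Figure 1:

    parents = {
        "Hackable": ["Security Vulnerability", "Adversary Capability"],
        "System Performance": ["System Fault"],
        "System Compromised or Faulty": [
            "Hackable", "System Performance",
            "On-the-fly Repair", "Fault Tolerance",
        ],
        "Survivability Diagnosis": [
            "System Compromised or Faulty",
            "Do Offline Survivability Diagnosis",
        ],
    }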

C. Conditional Probability Tables

In a Bayesian network, the relationship between two nodes A and B is defined as a conditional probability, P(A | B), which represents the probability of A conditional on a given outcome of B. The conditional probability is calculated using Bayes' Theorem [18]:

    P(A | B) = P(B | A) * P(A) / P(B)    (1)

where P(B | A) is the conditional probability of B given A, and P(B) and P(A) are the unconditional probabilities of B and A, respectively. The probabilistic dependency is maintained by the CPT, which is attached to the corresponding node. A CPT shows all possible outcomes of a given node and the conditional probability corresponding to each outcome given all of its parent nodes.

To represent the conditional probabilities of parent nodes on a child node, every intermediate and leaf node has a CPT associated with it. For every possible combination of the parent states, there is a row in the child node's CPT that describes the possible states the child node may take. Nodes with no parents also have CPTs, but these consist only of the probabilities of each state of that node. As an example for an intermediate node, the CPT in Figure 2 provides an instance of the causality relationship between the four parent nodes "System Performance", "Hackable", "On-the-Fly Repair" and "Fault Tolerance" and the child node "System Compromised or Faulty". The table lists all the possible outcomes (and their probabilities) of the system's security status given the combination of the four influencing factors represented by the four parent nodes.

Fig. 2. CPT Table for the Node "System Compromised or Faulty"


Fig. 3. CPT Table Associated with the Node "Survivability Diagnosis"

Fig. 4. Bayesian Network Model Given the First Decision Choice to Conduct Offline Survivability Diagnosis

The values in the CPT are set based on expert knowledge and can be modified as more information becomes available. As another example, Figure 3 represents the possible diagnosis results if the system administrators decide to conduct a thorough system diagnosis for survivability verification. The values for the node "Survivability Diagnosis" are influenced by its two parent nodes (see Figure 1): the nature node "System Compromised or Faulty" and the decision node "Do Offline Survivability Diagnosis". From the table, we can see that if the system is actually compromised or in a faulty state and the system administrators have decided to do a thorough system diagnosis, the test has a 90% chance of being positive (i.e., reporting that the system is compromised or faulty) and a 10% chance of being negative (i.e., reporting that the system is fault free). On the other hand, if the actual state of the system is fault free, the diagnosis result still has a 40% chance of mistakenly reporting that the system is compromised or faulty. These values are entered by users based on expert knowledge, or they can be generated through a Bayesian learning process; due to page limitations, we do not discuss those issues here.
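Plugging these CPT values into equation (1) shows how a positive diagnosis shifts the belief about the hidden state; a minimal sketch, with the prior taken as an assumed illustrative value:

    # P(faulty | positive) via Bayes' Theorem, using the CPT values
    # quoted above: P(pos | faulty) = 0.9, P(pos | fault-free) = 0.4.
    prior_faulty = 0.755   # assumed prior belief, for illustration only
    p_pos = 0.90 * prior_faulty + 0.40 * (1 - prior_faulty)
    posterior = 0.90 * prior_faulty / p_pos
    print(round(posterior, 3))   # 0.874: the positive result raises the belief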

D. Decision Scenarios

In a Bayesian network, a belief represents a user's judgment about the probability that a variable is in a certain state based on the available evidence. A piece of evidence is information about a given fact about the system's properties. Bayesian decision networks support vague, conflicting, and incomplete evidence by allowing a user to enter a probability for the evidence of a variable being in each state. As shown in Figure 1, given the evidence and the relationships among the corresponding nodes, the expected utility of each decision choice for "Do Offline Survivability Diagnosis" is calculated by following Bayes' Theorem [18]. As shown in Figure 1, the choice of not conducting the offline survivability diagnosis results in a higher utility, so it is the best decision given the current setting. Since the second decision, i.e., whether to allow the system to continue to function in support of mission-critical services, is dependent on the first decision, i.e., whether to take the system offline for a full scale diagnosis, no expected utilities are available for the second decision until the first decision is made. If we assume that the administrators decide not to do the offline survivability diagnosis (since this is the most rational decision with the current evidence), then the expected utilities are automatically calculated for the node "Continue to Support Mission Critical Services", as shown in Figure 4. From the values of this second decision node, the best choice is to allow the CIS to continue supporting mission critical services, since this decision results in a higher utility.

In addition to the above test case using the default values, we have conducted various other tests with different user input as additional evidence for system states. These tests help answer various "what-if" questions regarding different decision scenarios given different known system states. The test cases we conducted and the corresponding best decision choices are summarized in Table I. The highlighted columns, DOSD ("Do Offline Survivability Diagnosis") and CTSMS ("Continue to Support Mission Critical Services"), represent the two sequential decisions that a system administrator needs to make.

Each entry in Table I represents a global system state with a set of probabilistic values for each node. An entry is composed of multiple cells: while a single-value cell represents user-provided evidence (a known fact about a system property), a multi-value cell represents a set of possible states of the node with the calculated probability of occurrence. We use an example to explain a test case. Consider the fourth entry, where the following information is given as user-input evidence and the corresponding best decisions are provided:

1. System vulnerability is low (as indicated by the value "Low" for SV);
2. The adversary's ability to compromise the system is low (as indicated by the value "Low" for AC);
3. The system is capable of performing on-the-fly repair (as indicated by the value "Capable" for OFR);
4. The system is capable of providing fault tolerance in the event of failures (as indicated by the value "Capable" for FT);
5. System fault is low (as indicated by the value "Low" for SF);
6. System performance is normal (as indicated by the value "Normal" = 100% for SP);
7. The likelihood of the system being hackable is low (as indicated by the value "Low" = 100% for HA).

With the above information, the possibility of "System Compromised or Faulty" is updated by the Bayesian network to "True" = 10% and "False" = 90%; that is, the system has a 90% chance of not being compromised or in a faulty state. Based on this information, the system administrators should choose not to do the offline survivability diagnosis, because this choice results in the highest utility (i.e., 8.0, as shown for DOSD). Consequently, the administrators should decide to allow the system to continue supporting mission critical services, since it is unlikely that the system has been compromised or is in a faulty state. Other test cases (e.g., entry 3) may suggest that the system administrators conduct a thorough system diagnosis; if the diagnosis results are negative, the system should be suspended to schedule a repair.

TABLE I
TESTING CASES AND THE CORRESPONDING BEST DECISION CHOICES

Entry 1: SV = High; AC = High; OFR = Not Capable; FT = Not Capable; SF = High; SP (%) = Normal 20, Poor 80; HA (%) = High 65, Medium 30, Low 5; SCF (%) = True 81.5, False 18.5; DOSD = No (-41.68); SDR (%) = None 100, Positive 0, Negative 0; CTSMS = No (-41.675)

Entry 2: SV = High; AC = High; OFR = Not Capable; FT = Not Capable; SF = Low; SP (%) = Normal 80, Poor 20; HA (%) = High 65, Medium 30, Low 5; SCF (%) = True 75.5, False 24.5; DOSD = No (-38.975); SDR (%) = None 100, Positive 0, Negative 0; CTSMS = No (-38.975)

Entry 3: SV = High; AC = High; OFR = Not Capable; FT = Not Capable; SF = Low; SP (%) = Normal 80, Poor 20; HA (%) = High 65, Medium 30, Low 5; SCF (%) = True 75.5, False 24.5; DOSD = Yes (-70.60); SDR (%) = None 100, Positive 0, Negative 0; CTSMS = Yes (-70.60)

Entry 4: SV = Low; AC = Low; OFR = Capable; FT = Capable; SF = Low; SP (%) = Normal 100; HA (%) = High 0, Medium 0, Low 100; SCF (%) = True 10, False 90; DOSD = No (8.0); SDR (%) = None 100, Positive 0, Negative 0; CTSMS = Yes (8.0)

Entry 5: SV = Low; AC = High; OFR = Not Capable; FT = Not Capable; SF = Low; SP (%) = Normal 100; HA (%) = High 30, Medium 20, Low 50; SCF (%) = True 57, False 43; DOSD = No (-30.65); SDR (%) = None 100, Positive 0, Negative 0; CTSMS = No (-30.65)

IV. CONCLUSION

Survivability related decisions depend on various factors and can be divided into a series of stages. The model proposed in this paper is based on a Bayesian network, which is a powerful tool for representing cause-effect and probabilistic relationships. We identify a set of dependent factors that affect the survivability of a system and then define the probabilistic relationships among those factors and the system states. The model helps system administrators make a good judgment on the current state of the system given various evidence. It also helps users determine (1) whether it is necessary to conduct a thorough system diagnosis for potential survivability problems, and (2) based on the diagnosis results (if system testing is conducted), whether the system should be allowed to continue functioning or should be suspended for repair to avoid further damage and future losses. Bayesian networks are adaptable and can perform well with limited information. While our model considers only the most influential factors affecting the survivability of a system, it can be enhanced as more information becomes available when applied to specific applications.

ACKNOWLEDGMENT

The authors are thankful to Dr. Robert Herklotz for his support, which made this work possible.

REFERENCES

[1] Knight, J., M. Elder, J. Flinn, and P. Marx, "Summaries of Four Critical Infrastructure Systems," Technical Report CS-97-27, Department of Computer Science, University of Virginia, November 1997.

[2] Elder, M. C., "Fault Tolerance in Critical Information Systems," Doctoral Thesis, UMI Order Number: AAI3012667, University of Virginia, 2001.

[3] "Critical Foundations: Protecting America's Infrastructures," The Report of the President's Commission on Critical Infrastructure Protection, United States Government Printing Office (GPO), No. 040-000-00699-1, October 1997.

[4] Ellison, R., Fisher, D., Linger, R., Lipson, H., Longstaff, T. and Mead, N., "Survivable Network Systems: An Emerging Discipline," Technical Report CMU/SEI-97-TR-013, Software Engineering Institute, Carnegie Mellon University, November 1997.

[5] Hansson, Sven Ove, "Decision Theory: A Brief Introduction," retrieved from http://www.infra.kth.se/~soh/decisiontheory.pdf on January 12, 2010.

[6] Knight, J., Sullivan, K., Elder, M. and Wang, C., "Survivability architectures: issues and approaches," DARPA Information Survivability Conference and Exposition, pp. 157-171, 2000.

[7] Robert J. Ellison, David A. Fisher, Richard C. Linger, Howard F. Lipson, Thomas A. Longstaff, and Nancy R. Mead, "Survivability: Protecting Your Critical Systems," IEEE Internet Computing, vol. 3, no. 6, pp. 55-63, Nov./Dec. 1999, doi:10.1109/4236.807008.

[8] Zhu, J. and Wang, C., "A Survivable Scheme for Critical Information System," 2008 International Conference on Security Technology, pp. 230-235, 2008.

[9] Sullivan, K., Knight, J. C., Du, X., and Geist, S., "Information survivability control systems," The 21st International Conference on Software Engineering, pp. 184-192, Los Angeles, California, United States, May 16-22, 1999.

[10] Modelo-Howard, G., Bagchi, S., and Lebanon, G., "Determining Placement of Intrusion Detectors for a Distributed Application through Bayesian Network Modeling," The 11th International Symposium on Recent Advances in Intrusion Detection, Cambridge, MA, USA, September 15-17, 2008.

[11] Cemerlic, A., Yang, L. and Kizza, J., "Network Intrusion Detection Based on Bayesian Networks," Proceedings of Software Engineering and Knowledge Engineering, July 2008.

[12] Frigault, M., Wang, L., Singhal, A. and Jajodia, S., "Measuring network security using Dynamic Bayesian network," Proceedings of the 4th ACM Workshop on Quality of Protection (QoP), October 27, 2008, pp. 23-30.

[13] Bringas, P., Penya, Y., Paraboschi, S. and Salvaneschi, P., "Bayesian-Networks-Based Misuse and Anomaly Prevention System," Proceedings of the Tenth International Conference on Enterprise Information Systems, pp. 62-69, Barcelona, Spain, June 12-16, 2008.

[14] Abouzakhar, N., Gani, A., Manson, G., Abuitbel, M. and King, D., "Bayesian Learning Networks Approach to Cybercrime Detection," Proceedings of the 2003 PostGraduate Networking Conference, Liverpool, United Kingdom, 2003.

[15] Mukhopadhyay, A., Chatterjee, S., Saha, D., Mahanti, A. and Sadhukhan, S., "e-Risk Management with Insurance: A framework using Copula aided Bayesian Belief Networks," Proceedings of the 39th Hawaii International Conference on System Sciences, Hawaii, USA, January 2006.

[16] Herath, H. and Herath, T., "Investments in Information Security: A Real Options Perspective with Bayesian Postaudit," Journal of Management Information Systems, Vol. 25, No. 3, pp. 337-375, September 2008.

[17] Home Page of Norsys Software Corporation, http://www.norsys.com.

[18] Jensen, F., "Bayesian Networks and Decision Graphs," Springer-Verlag New York, 2001.


Keynote: So You Wanna Be a Bot Master?

Billy Rios Security Researcher, Google, Inc.

Follow me as we explore the command and control software used by Bot Masters, analyze the tools used by the shadiest characters in the underground, and uncover data being stolen by bots from their unsuspecting victims. We'll dive into the source code and dissect the individual components of a real botnet command and control server. We'll also take a detailed look at what types of data the bots are after and how the bot gets the stolen data back to its master. All the software is real and all the data is genuine. You'll get to see the actual control consoles used by Bot Masters as they control an army of bots and learn to find these servers on the Internet.

A Week in the Life of the Most Popular BitTorrent Swarms

Mark Scanlon, Alan Hannaway and Mohand-Tahar Kechadi
UCD Centre for Cybercrime Investigation, School of Computer Science & Informatics,
University College Dublin, Belfield, Dublin 4, Ireland
{mark.scanlon, alan.hannaway, tahar.kechadi}@ucd.ie

Co-funded by the Irish Research Council for Science, Engineering and Technology and Intel Ireland Ltd. through the Enterprise Partnership Scheme.

Abstract—The popularity of peer-to-peer (P2P) file distribution has been increasing consistently since the late 1990s. In 2008, P2P traffic accounted for over half of the world's Internet traffic. P2P networks lend themselves well to the unauthorised distribution of copyrighted material due to their ease of use, the abundance of material available and the apparent anonymity afforded to downloaders. This paper presents the results of an investigation conducted on the top 100 most popular BitTorrent swarms over the course of one week. The purpose of this investigation is to quantify the scale of unauthorised distribution of copyrighted material through the use of the BitTorrent protocol. Each IP address discovered over the weeklong investigation is mapped through the use of a geolocation database, making it possible to determine where participation in these swarms is most prominent worldwide.

I. INTRODUCTION

In 2008, Cisco estimated that P2P file sharing accounted for 3,345 petabytes of global Internet traffic, or 55.6% of the total usage, and forecast that P2P traffic would account for 9,629 petabytes globally in 2013 (approximately 30% of the total usage) [3]. While the volume of P2P traffic is set to almost triple from 2008 to 2013, its proportion of total Internet traffic is set to decrease due to the rising popularity of content streaming sites and one-click file hosting sites such as Rapidshare, Megaupload, etc. BitTorrent is the most popular P2P protocol used worldwide and accounts for the biggest proportion of Internet traffic among P2P protocols. Depending on the region, Ipoque GmbH measured BitTorrent traffic to account for anything from 20% to 57% of a region's total Internet usage in 2009 [11].

The BitTorrent protocol is designed to easily facilitate the distribution of files to a very large number of downloaders with minimal load on the original file source [2]. This is achieved through the downloaders uploading their completed parts of the entire file to other downloaders. A BitTorrent swarm is made up of seeders, i.e., peers with complete copies of the content shared in the swarm, and leechers, i.e., peers who are downloading the content. Due to BitTorrent's ease of use and minimal bandwidth requirements, it is an ideal platform for the unauthorized distribution of copyrighted material, which typically commences with a single source sharing large files with many downloaders.

To commence the download of the content in a particular BitTorrent swarm, a metadata ".torrent" file must be downloaded from an indexing website. This file is then opened using a BitTorrent client, which proceeds to connect to several members of the swarm and download the content. Each BitTorrent swarm is built around a particular piece of content, which is determined through a unique identifier based on a SHA-1 hash of the file information contained in this UTF-8 encoded metadata file, e.g., name, piece length, piece hash values, length and path.
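As an aside, this identifier (the "infohash") can be computed by hashing the raw bencoded "info" dictionary inside the metadata file. A minimal sketch follows; the hand-rolled bencode walker is for illustration only, and a real tool would use a vetted bencode library:

    import hashlib

    def bdecode_span(data, i=0):
        """Decode one bencoded value starting at index i.
        Returns (value, end_index); strings come back as bytes."""
        c = data[i:i+1]
        if c == b"i":                       # integer: i<digits>e
            end = data.index(b"e", i)
            return int(data[i+1:end]), end + 1
        if c == b"l":                       # list: l<items>e
            i += 1
            items = []
            while data[i:i+1] != b"e":
                v, i = bdecode_span(data, i)
                items.append(v)
            return items, i + 1
        if c == b"d":                       # dict: d<key value ...>e
            i += 1
            d = {}
            while data[i:i+1] != b"e":
                k, i = bdecode_span(data, i)
                start = i                   # remember raw span of the value
                v, i = bdecode_span(data, i)
                d[k] = (v, start, i)
            return d, i + 1
        colon = data.index(b":", i)         # string: <length>:<bytes>
        n = int(data[i:colon])
        return data[colon+1:colon+1+n], colon + 1 + n

    def infohash(torrent_path):
        raw = open(torrent_path, "rb").read()
        top, _ = bdecode_span(raw)
        _, start, end = top[b"info"]        # raw byte span of the info dict
        return hashlib.sha1(raw[start:end]).hexdigest()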

Each BitTorrent client must be able to identify a list of active peers in the same swarm that have at least one piece of the content and are willing to share it, i.e., that have an available open connection and the bandwidth available to upload. By the nature of the protocol, any peer that wishes to partake in a swarm must be able to communicate and share files with the other active peers. There are a number of methods by which a client can attempt to discover new peers in the swarm:

1. Tracker Communication – BitTorrent trackers maintain a list of seeders and leechers for each BitTorrent swarm they are currently tracking. Each BitTorrent client contacts the tracker intermittently throughout the download of a particular piece of content to report that it is still alive on the network and to download a short list of new peers on the network (a sketch of a tracker announce follows this list).

2. Peer Exchange (PEX) – Peer Exchange is a BitTorrent Enhancement Proposal (BEP) whereby, when two peers are communicating, a subset of their respective peer lists is shared during the communication.

3. Distributed Hash Tables (DHT) – Within the confines of the standard BitTorrent specification, there is no intercommunication between peers of different BitTorrent swarms. Azureus/Vuze and μTorrent contain mutually exclusive implementations of distributed hash tables as part of their standard client features. These DHTs maintain a list of each active peer using the corresponding client and enable cross-swarm communication between peers. Each peer in the DHT is associated with the swarm(s) in which it is currently an active participant.
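To illustrate the first method, a minimal HTTP tracker announce might look as follows; the tracker URL, info-hash and peer ID are placeholders, and the parameter names follow the protocol specification [2]:

    import urllib.parse, urllib.request

    # Placeholder values: a real announce uses the tracker URL and the
    # 20-byte info-hash taken from the ".torrent" metadata file.
    tracker = "http://tracker.example.org:6969/announce"
    info_hash = bytes.fromhex("aa" * 20)      # hypothetical 20-byte SHA-1
    peer_id = b"-XX0100-" + b"0123456789ab"   # 20-byte client identifier

    params = urllib.parse.urlencode({
        "info_hash": info_hash,
        "peer_id": peer_id,
        "port": 6881,
        "uploaded": 0,
        "downloaded": 0,
        "left": 0,
        "compact": 1,   # ask for the compact 6-bytes-per-peer reply
    })
    with urllib.request.urlopen(tracker + "?" + params) as resp:
        reply = resp.read()   # bencoded dict: announce interval + peer list
    print(reply[:80])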

The most popular BitTorrent indexing website, according to Alexa, is The Pirate Bay [9]. In January 2010, The Pirate Bay held an Alexa global traffic rank of 99 and was the 91st most popular website visited by Internet users in the United States [1]. For the purpose of the investigation outlined in this paper, the top 100 torrents listed on The Pirate Bay were chosen for investigation due to the popularity of the website.

II. SPECIFICATIONS OF THE INVESTIGATION

The steps involved in the execution of this investigation are:

1. Connect to The Pirate Bay and download the ".torrent" metadata files for the top 100 torrents.
2. Connect to each swarm sequentially and identify each of the IP addresses currently active in the swarm, until no new IPs are found.
3. Once the active IPs are found for all 100 torrent swarms, the process is repeated for the next 24 hours.
4. After 24 hours, the process begins again at step 1.

The investigation was conducted using a single dedicated server, which sequentially monitored each torrent swarm until all the IPs in the swarm were found. Over the course of the seven day investigation, a total of 163 different torrents were investigated. None of the content appearing in these torrents was found to be distributed legally; each torrent swarm was distributing copyrighted material without any documented authorization.

A. Investigation Methodology

For a regular BitTorrent user, the starting point for acquiring a given piece of content is to visit a BitTorrent indexing website, e.g., The Pirate Bay [9]. The indexing site serves as a directory of all the content available to the user. The user finds the specific content of interest by browsing the website or through keyword search. He must then download the ".torrent" metadata file, which contains information specific to the desired content, such as name, size, filenames, tracker information, the hash value for the completed download, the chunk size, the hash values for each chunk, etc. This ".torrent" file is then opened with a BitTorrent client, e.g., Azureus/Vuze, μTorrent, etc., which in turn contacts the tracker or the DHT for a bootstrapping list of IP addresses to get started in the swarm.

The main goal of the methodology used for this investigation is to gather information in an intelligent and novel manner through the amplification of regular client usage, in order to collect the complete list of IPs involved in any given swarm as efficiently as possible. For example, in a large swarm of more than 90,000 IPs, the software tool developed for this experiment is capable of collecting the active peer information in as little as 8 seconds. The precise design and specifications of the software used during the investigation are, however, beyond the scope of this paper.

B. Specifics of the Content Investigated

From the analysis of the daily top 100 swarms, video content was found to be the most popular type of content distributed over BitTorrent. Movie and television content accounted for over 94.5% of the total, as can be seen in Fig. 1, while music, games and software accounted for 1.8%, 2.5% and 1.2% respectively. One highly probable explanation for the popularity of television content is the lag between US television shows airing in the US and in the rest of the world.

Fig. 1. BitTorrent swarms investigated by category

III. RESULTS AND ANALYSIS

Over the weeklong investigation, the total number of unique IP addresses discovered was 8,489,287. On average, each IP address detected was active in 1.75 swarms; almost three out of every four IP addresses were active in at least two of the top 100 swarms during the week. The largest swarm detected peaked at 93,963 active peers, and the smallest swarm shrank to just 1,102 peers. The time taken to capture a snapshot of the 100 swarms investigated varies with the increase and decrease in the overall size of the swarms; the average time to collect all the peer information for a swarm is 3.4 seconds.

A. File Information

Notably, 50.6% of the files contained in the top 100 torrents are split into smaller chunks for distribution, as can be seen in Figure 2, and 38.9% of the files are also compressed into RAR files to reduce the overall size of the content. The partitioning of large files into numerous smaller files is consistent with file distribution through one-click file hosting websites and newsgroups, where the distribution of small files greatly improves the overall throughput of the system.

TABLE I
BREAKDOWN OF THE INFORMATION CONTAINED IN THE ".TORRENT" METADATA FILES

                        Maximum     Minimum     Average
Content Size            38.37GB     109.69MB    1.62GB
Chunk Size              4MB         128KB       1.3MB
Number of Chunks        19,645      346         1,251
Number of Files         322         2           24
Number of Trackers      57          4           20

Fig. 2. Breakdown of file types discovered during the investigation

Video files are distributed as AVI, MP4 or MKV files and are typically grouped with screenshots, subtitles and sample videos. Music files are generally distributed as MP3 files and are grouped with associated album playlists and artwork.

B. Worldwide Distribution

Fig. 3. Top 10 countries detected.

A number of assumptions had to be made for the purposes of geolocating the IP addresses and counting the total number of BitTorrent users involved in the swarms investigated:

1. The country, city and ISP level geolocation databases used are as accurate as possible. In January 2010, MaxMind stated that their geolocation databases are 99.5% accurate at the country level, 83% accurate at the US city level within a 25 mile radius, and 100% accurate at the ISP level [7].

2. For a weeklong investigation, each IP address found to be participating in the swarms investigated is assumed to have been allocated to only one end user. Because a typical DHCP lease from an Internet service provider lasts somewhere in the order of 2-7 days, dynamic IP address allocation may result in the reallocation of the same IP address to two or more end users during the investigation. Should this have occurred, it is ignored in the interpretation of the results outlined below; it is deemed technically infeasible to identify precisely when this may occur at a global level within the scope of this paper.

3. No anonymous proxy servers or Internet anonymity services, e.g., I2P [5], Tor [12], etc., are used by the IP addresses discovered.

4. It is infeasible for users on dial-up Internet connections to download the very large files that typically justify distribution via the BitTorrent protocol. The average content size for the swarms investigated was 1.62GB, which would take a typical 56kbps dial-up user over 69.5 hours to download, assuming no other Internet traffic and the maximum theoretical dial-up connection speed. For the purposes of the analysis presented in this section, it was assumed that a negligible number of dial-up users participate in the BitTorrent swarms.
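As a quick check of that download-time figure (assuming 1 GB = 2^30 bytes, and rounding):

    (1.62 x 2^30 bytes x 8 bits/byte) / 56,000 bits/s ≈ 248,500 s ≈ 69 hours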

For each IP address detected during the investigation, the geolocation is obtained using MaxMind's GeoIP databases [7], which resolve information such as city, country, latitude, longitude, ISP, etc. This information is then gathered and plotted as a heatmap to display the distribution of the peers involved in copyright infringement on a world map, as seen in Figure 4. The most popular content tends to be produced for the English speaking population, which is reflected in the heatmap: countries with a high proportion of English speakers are highlighted in the results.
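The per-address lookup is straightforward; a present-day equivalent using MaxMind's geoip2 Python library (the database path and the documentation-range address below are assumptions for illustration):

    import geoip2.database

    # Path to a MaxMind city database; GeoLite2 is the freely available tier.
    reader = geoip2.database.Reader("GeoLite2-City.mmdb")

    def locate(ip):
        r = reader.city(ip)
        return (r.country.name, r.city.name,
                r.location.latitude, r.location.longitude)

    print(locate("198.51.100.7"))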

TABLE II
NUMBER OF BROADBAND SUBSCRIBERS DISCOVERED DURING THE INVESTIGATION (ASSUMING A NEGLIGIBLE AMOUNT OF DIAL-UP USERS)

Country          IP Addresses   Broadband           Percentage of Broadband
                 Discovered     Subscriptions [6]   Subscriptions Discovered
United States    1,116,111      73,123,400          1.53%
United Kingdom   790,162        17,276,000          4.57%
India            549,514        5,280,000           8.70%
Canada           443,577        9,842,300           4.51%
Spain            397,892        8,995,400           4.42%
Italy            378,892        8,995,400           3.36%
Brazil           320,829        10,098,000          3.18%
Australia        261,433        5,140,000           5.09%
Poland           248,731        4,792,600           5.19%
Romania          215,403        2,506,000           15.4%


A significant percentage of the worldwide broadband subscribers was detected during the investigation: 2.43% of the 349,980,000 worldwide broadband subscriptions [6]. The percentages of broadband subscribers detected in the top 10 countries are outlined in Table II; the top ten countries detected account for over 53.6% of the total number of IPs found.

C. United States

The United States is the most prevalent country detected, with over 1.1 million unique IP addresses, accounting for 13.15% of all the IP addresses found. While this is the largest portion of the results obtained in this investigation, the relatively low percentage suggests that BitTorrent has a much more globally dispersed user base than other large P2P networks. For example, in a 10 day investigation conducted on the Gnutella network in 2009, it was found that "56.19% of all [worldwide] respondents to queries for content that is copyright protected came from the United States" [4]. When the IP addresses detected during this investigation are geolocated and graphed onto a map, the population centers can be easily identified, as can be seen in Figure 5. The state of California accounted for 13.7% of the US IPs found, with the states of Florida and New York accounting for 7.2% and 6.8% respectively.

D. Extrapolated Results

If the total amount of worldwide public BitTorrent activity is

considered to be the summation of the number of active IP

addresses in all BitTorrent swarms at any given moment

served by the world’s largest trackers, the percentage of

overall BitTorrent activity analyzed as part of this

investigation can be estimated. From analyzing the scrape

information available from two of the largest trackers,

OpenBitTorrent [8] and PublicBitTorrent [10], it is estimated that the most popular 100 torrents at any given moment account for approximately 3.62% of the total activity.
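A sketch of this estimate is below. The on-disk format of the aggregated scrape dumps [8][10] is not described in the paper, so the one-record-per-line layout assumed here is illustrative only.

    # Sketch of the "share of total activity" estimate in this section.
    # Assumes the aggregated scrape has been decompressed to a text file
    # with one "info_hash seeders leechers" record per line -- the real
    # format of the all.txt.bz2 dumps may differ.

    def activity_share(scrape_path, top_n=100):
        swarms = []
        with open(scrape_path) as f:
            for line in f:
                _, seeders, leechers = line.split()
                swarms.append(int(seeders) + int(leechers))  # active peers
        swarms.sort(reverse=True)
        return sum(swarms[:top_n]) / sum(swarms)

    # e.g. activity_share("all.txt") -> ~0.036 for the trackers studied,
    # i.e. the top 100 torrents account for roughly 3.62% of total activity.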

IV. CONCLUSION AND FUTURE WORK

The objective of this investigation was to attempt to identify

the scale of the unauthorized distribution of copyrighted

material worldwide using the BitTorrent protocol. 2.43% of

the broadband subscriber base was detected over the course of

one week in the 163 torrents monitored. This number is far greater than the number of subscribers that could possibly be prosecuted for their actions. The number of end users involved in illegal downloading is undoubtedly much higher than this due to the relatively small scale of this investigation. Some network factors will also have a negative effect on the results achieved, such as two or more end users appearing as a single Internet IP address through Internet connection sharing, proxy services, etc.

Fig. 2. Heatmap Showing the Worldwide Distribution of Peers Discovered.

To further improve the results outlined above, a number of

additional steps would be considered:

1. Overcome dynamic IP address reallocation – While

a number of the IPs discovered during the

investigation may have been allocated to more than

one end user, this multiple allocation is not possible

to identify using the standard BitTorrent protocol.

However, the identification of individual users

through metadata examination and heuristic

approaches is possible. This would increase the

overall accuracy of the results obtained and give an

accurate measure of the total BitTorrent population.

2. Conduct a larger scale analysis – In each 24-hour period of this investigation, only 100 swarms were monitored. Increasing the scale of the investigation would yield better results: a scaled-up analysis of all BitTorrent swarms would reveal some interesting statistics, e.g., legal vs. illegal distribution, total worldwide BitTorrent population, etc.

3. Identification of peers using anonymous Internet

services – By comparing the IP addresses

discovered during the investigation with a list of

known Internet traffic proxy or pass-through

services, such as that maintained by MaxMind [7],

the quality of the results collected can be greatly

improved.

4. Identification of Internet connection sharing –

Through the analysis of the peer metadata, such as

client information, downloaded content types and

categories, etc., multiple users sharing a single

Internet connection can be identified.

REFERENCES

[1] Alexa Information on The Pirate Bay, Downloaded January 2010, http://www.alexa.com/siteinfo/thepiratebay.org
[2] The BitTorrent Protocol Specification, Downloaded January 2010, http://www.bittorrent.org/beps/bep_0003.html
[3] Cisco Systems, Inc., Cisco Visual Networking Index: Forecast and Methodology, 2008-2013, 2009. Retrieved from: http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-481360.pdf
[4] A. Hannaway and M-T. Kechadi, An Analysis of the Scale and Distribution of Copyrighted Material on the Gnutella Network, International Conference on Information Security and Privacy, Orlando, USA, 2009.
[5] I2P - The Anonymous Network, http://www.i2p2.de
[6] International Telecommunication Union, United Nations, Report on Internet, Report Generated January 2010, http://www.itu.int
[7] MaxMind Inc., GeoLite Country Database, Downloaded January 2010, http://www.maxmind.com
[8] OpenBitTorrent, Aggregated Scrape File, Downloaded January 2010, http://openbittorrent.com/all.txt.bz2
[9] The Pirate Bay, World's Largest BitTorrent Tracker, Total Top 100, Downloaded January 2010, http://thepiratebay.org/top/all
[10] PublicBitTorrent, Aggregated Scrape File, Downloaded January 2010, http://www.publicbt.com/all.txt.bz2
[11] H. Schulze and K. Mochalski, Internet Study 2008/2009, Ipoque GmbH, 2009. http://www.ipoque.com/resources/internet-studies/
[12] The Tor Project, http://www.torproject.org/

Fig. 5. Distribution of IP addresses in the mainland United States. The largest circles represent locations with greater than 1,500 IPs, while the smallest circles may represent only a single IP at a given location.

Information Assurance in Video Streaming: A Case Study

Sriharsha Banavara, Eric Imsand, and John A. Hamilton, Jr.
Department of Computer Science & Software Engineering
Auburn University, Auburn, AL 36849
{sbs006, eimsand, hamilton}@auburn.edu

Abstract—Universities consider lectures and classes to be

intellectual property which must be protected. In this case

study we discuss the processes by which outreach courses

delivered via the Internet are protected. Specifically, we

describe two procedures used to protect outreach courses at

our University. In this setting, the School of Pharmacy uses a streaming media system provided by Starbak, while the College of Engineering and College of Business utilize a streaming system developed by Sony. Additionally, we

present a comparison of these two methods from an

Information Assurance standpoint. We also present the

different domains of the CISSP (Certified Information Systems Security Professional) certification as applicable to video streaming

and discuss how the security triad concepts of confidentiality,

integrity, and availability are addressed by the differing

platforms.

Index Terms—Access Control, Role-based Access Control,

Physical Security

I. INTRODUCTION

PRIOR to 1995, integrating audio or video into websites was a major problem. Before playing a video or an audio file, it had to be downloaded to the local hard drive and, depending on the file size and network connection (which was usually dial-up), it would take anywhere from five minutes to two hours to download.

With the advent of new technology, however, this changed

and now with the help of streaming technology one can

listen to or watch media files as they are downloaded.

Because of the difficulties with distributing classes over the Internet, most of the universities that engaged in distance education at that time chose to distribute recordings, either on VHS or DVD, to the students.

The rise of streaming technology enabled universities to

provide outreach and distance education courses far more

efficiently than ever before. The use of streaming

technology also introduced several new difficulties for

administrators interested in protecting the University’s

intellectual property. Whereas someone was previously

required to have physical access of some kind, either by

accessing the recording equipment or by intercepting the

physical media that was mailed to the students, the advent

of streaming media meant that thieves could electronically

steal intellectual property at several different places in the

distribution process.

In this case study, we discuss the methods by which

classes are streamed, stored and transmitted to students

located at different geographical locations. Moreover, we

will examine how the different domains of the CISSP

certification are satisfied in order to protect the

intellectual property of the University. We will look into

aspects such as Access Control, Business Continuity/

Disaster Recovery, Legal and Regulatory Compliance and

Investigation, Operations Security and Physical Security

(though not necessarily in that order). We will consider

two different platforms used for streaming classes at our

University: the Starbak system, used by the School of

Pharmacy [1] and a system developed by Sony, used by

the College of Engineering and the College of Business.

The rest of this paper is organized as follows. Section

two presents a brief overview of how classes are recorded

and distributed using each system. Section three examines

the security measures undertaken on both the Sony and

Starbak systems. Section four describes our future work

and in section five we offer some conclusions.

II. OVERVIEW OF CLASSROOM RECORDING AND

STREAMING

A. Distributing Courses with Sony Equipment

Most departments at our University record and store the

classes that are conducted for on-campus students. These

courses are also made available to students that have

enrolled in the distance education program. On occasion

the courses are streamed live for distance education

students, though this is not the norm. The College of

Engineering and College of Business use identical

equipment and procedures to stream classes to distance

education students, so we shall consider them together.



The classrooms in which the lectures are recorded are

fitted with three or four robotic Sony HD cameras. A high

quality microphone is used to record the lecturer’s voice.

The equipment operator starts recording through a GUI application specially designed and built by the University Engineering department. The GUI provides facilities to start, stop, and pause the video and is integrated with both the streaming server and the backup server. The basic resolution of the video is 1024 by 768. This system uses the

Windows Media Encoder. Once the operator stops the recording, the recorded video is converted into three

different formats: MP3, MP4, and Windows Media. After

conversion the files are temporarily stored on the local

computer. This is done with the help of automated Visual

Basic scripts that are scheduled to run as soon as the

operator closes the GUI window. They are pushed onto

the server as soon as the conversion is complete. In case

this fails for some reason, there is another script that

attempts to push any local files onto the main server once

every half an hour. The students can then log in to the web portal and access the videos in any format. They even have the option of downloading them and viewing them at a later point in time.

The total number of classes streamed in 2009 by the College of Engineering and the College of Business put together is approximately 5,000. The duration of each class period varies between 45 minutes and an hour and fifteen minutes. The amount of space required for an hour of video recording is around 300MB. This adds up to approximately 2 terabytes of needed storage per year.
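A rough reproduction of this storage estimate follows; the assumption that the 300MB-per-hour figure covers all three formats combined is ours.

    # Rough check of the storage estimate: ~5,000 classes in 2009, class
    # periods of 45-75 minutes, ~300 MB per hour of video (assumed to
    # cover all formats combined).
    classes_per_year = 5000
    avg_hours = (0.75 + 1.25) / 2     # midpoint of 45 min to 1 h 15 min
    mb_per_hour = 300

    tb = classes_per_year * avg_hours * mb_per_hour / 1_000_000
    print(f"~{tb:.1f} TB/year")       # ~1.5 TB, on the order of the ~2 TB quoted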

B. Distributing Courses with the Starbak System

There are several differences in the procedure followed

by the School of Pharmacy and the one used by the

Colleges of Business and Engineering. The primary

reason for these differences is the intended audience. The

Colleges of Business and Engineering support a

traditional distance education program: individual

students located at geographically dispersed locations.

The School of Pharmacy, on the other hand, provides

lecture footage to large groups of students all physically

located in the same place taking the course at a satellite

campus.

The other key difference between the School of

Pharmacy and the Colleges of Business and Engineering

is the need for two-way communication. Students at the

satellite campus need to be able to engage in

videoconferencing so that they can ask the instructor

questions just as any other student would.

A high quality microphone is used to record the

lecturer’s voice. Each classroom where videos are

recorded is equipped with Polycom RSS 2000, which has

a built-in camera for the purpose of recording video in

real time. Polycom RSS 2000 is an on-demand recording,

streaming, and archiving solution for multimedia

conferences. The RSS 2000 recording and streaming server provides high-capacity internal storage to archive recorded conferences for legal compliance. It is easy to record using the RSS 2000 from any type of videoconferencing endpoint, as it uses just the basic commands Start, Pause, and Stop [2]. Around fifteen to twenty classes are streamed weekly, accounting for approximately fifty hours per week on average, or nearly two hundred hours per month. The RSS 2000 system is capable of handling up to nine hundred hours of streaming per month. In the year 2009, more than 1,100 classes were streamed, with the duration of each class varying between 1 and 4 hours. More recently, the RSS 4000 has been launched; it has more features and is said to be more robust than its predecessor.

The Starbak system consists of a conference engine, encoder, delivery node, and a Manager portal, as shown in Figure 1. Video is captured using the Polycom RSS 2000, and the two streams of data are sent to the Conference Engine, which interacts continuously with the Encoder. The Encoder produces .wmv and .asf files that are part of the Windows Media framework. The Starbak system is capable of handling ten simultaneous dual-screen recordings since it has ten encoders. Once encoding is complete, the Delivery Node forwards the video to the Manager portal, where students and faculty members can log in with a username and password to access the videos. The username and password are the same for all students of the School of Pharmacy (SoP). The native resolution supported by the system is 352 by 288 pixels. The videos are played through the Windows Media Player embedded in the VSM portal. One can also change the quality of the video depending on the Internet speed: VSM automatically adjusts the resolution to the speed of the Internet connection. For example, with an actual download speed of 375kbps it adjusts to a 640 by 480 resolution.

Fig. 1. Conceptual layout of the Starbak System
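The adjustment logic might look like the sketch below. Apart from the single data point given in the text (375 kbps mapping to 640 by 480), the thresholds and resolutions are invented for illustration; VSM's actual ladder is not documented in the paper.

    # Illustrative sketch of bandwidth-based resolution selection.
    RESOLUTION_LADDER = [        # (minimum download speed in kbps, resolution)
        (350, (640, 480)),       # matches the 375 kbps example in the text
        (200, (352, 288)),       # the system's native resolution
        (0,   (176, 144)),       # invented low-bandwidth fallback
    ]

    def pick_resolution(measured_kbps: int) -> tuple[int, int]:
        for min_kbps, res in RESOLUTION_LADDER:
            if measured_kbps >= min_kbps:
                return res
        return RESOLUTION_LADDER[-1][1]

    print(pick_resolution(375))  # (640, 480)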

III. APPLICATION OF CISSP DOMAINS AND PRINCIPLES TO

STREAMING COURSES

The advent of the Internet has given rise to many bad things: piracy, data theft, intellectual property theft, etc. From top-secret defense documents to the holdings of educational institutions, no content on the Internet can be claimed to be 100% safe. It can be argued that educational institutions face attack from an unusually large population demonstrating an extremely wide range of skill and ability. Videotaped lectures are at risk of falling into the wrong hands and being hosted on the Internet for free or distributed to another university. Thus there is tremendous concern regarding plagiarism and IP theft.

Ensuring that adequate attention is given to the way data

is processed, stored and distributed is critical. As a result,

Information Assurance is at the top of most people’s list

of concerns. Therefore we shall examine confidentiality,

integrity and availability with regard to the distribution of

streamed lectures.

Like any good science, information assurance must be

undertaken with a high degree of discipline. For this

reason we will frame our discussion against the domains

of the common body of knowledge (CBK) of the CISSP

examination. The CISSP exam is held in very high regard

amongst information assurance professionals. People

holding the CISSP certification are among the highest

paid in the field. There are 10 domains in the CISSP

CBK. We will focus on five of those ten domains, each

listed under separate headings.

A. Access Control

Access controls can be defined as mechanisms that

protect the assets of an enterprise. They reduce exposure

to unauthorized activities and allow access to information

systems only to authorized personnel. In other words, they

specify which users can access the system, what resources

they can access, the operations they can perform and also

provide individual accountability [4].

In this case, the asset to be protected is the intellectual

property – the class lecture and video tape. Access

control is the process of ensuring only authorized users

are able to access the class lectures. Furthermore, the

system must ensure that legitimate users only have access

to the features and content to which they are entitled.

There are two kinds of access control: role-based access control and rule-based access control. Of the two, role-based access control is used to regulate access to the protected material. Students log into the system using their University-generated username and password.

Videos can be viewed by both distance education and on-campus students. Students and

faculty members login to the same portal to view the

videos, but encounter different views. In the student view,

users can view videos for the classes in which they are

enrolled. Faculty also have this capability as well as the

ability to request that the administrator lock certain videos

to certain students. An example of this occurs when a

faculty member wants to discuss the results of an exam

that a student or students have not yet taken (possibly due

to illness or other approved absence). This approach is the

same for all three colleges at the University.
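A minimal sketch of this access policy is given below; the names and data structures are illustrative and do not reflect the portal's actual implementation.

    # Minimal sketch of the role-based access check described above:
    # students see videos only for courses they are enrolled in, and
    # specific videos can be locked for specific students at a faculty
    # member's request. All names here are hypothetical.
    enrollment = {"alice": {"CS101"}, "bob": {"CS101", "MGMT200"}}
    roles = {"alice": "student", "bob": "student", "carol": "faculty"}
    teaches = {"carol": {"CS101"}}
    locks = {("CS101-exam-review", "alice")}   # (video_id, student) pairs

    def can_view(user: str, video_id: str, course: str) -> bool:
        if roles.get(user) == "faculty":
            return course in teaches.get(user, set())
        return (course in enrollment.get(user, set())
                and (video_id, user) not in locks)

    print(can_view("bob", "CS101-exam-review", "CS101"))    # True
    print(can_view("alice", "CS101-exam-review", "CS101"))  # False: locked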

B. Physical Security

Physical security is the portion of the Common Body of

Knowledge that addresses the common physical and

procedural risks that exist in the environment in which

physical systems are protected [4].

The College of Business, College of Engineering, and

School of Pharmacy server rooms all have considerable

physical security. The rooms are equipped with locks and security cameras fixed on the outside walls. In order to enter a room, one is required to swipe an ID card and type in a unique PIN; thus, multi-factor authentication is in place. The cases in which the servers are housed have a special combination lock so that they cannot be detached easily.

At the SoP, the maintenance for these servers is done

by the Starbak Company as part of the Service Level

Agreement. As a result, only two University personnel

are issued keys for the room: the IT Specialist who

controls the streaming system and his/her supervisor. A

copy is also present in the Dean’s Office. If an intruder

manages to gain access to the server room and tamper

with the server in any way, tamper sensors will notify the

company which is monitoring the server. They in turn will contact the SoP to verify the alarm. If the activity was not authorized, the call serves to let the SoP know that there has been a breach in security.

C. Business Continuity and Disaster Recovery

Business Continuity and Disaster Recovery efforts are

designed to ensure that normal operations will continue in

the event of a major event, such as a natural disaster [4].

With regard to streaming media for coursework, the

primary business activity is the serving of video. A

secondary concern is the preservation of data. To this


end, the colleges store backup copies of all videos in three

physically separate locations: one backup on the local system, one copy on the server, and another on DVD.

Furthermore, in the case of the School of Pharmacy,

backup copies are also stored at the satellite campus,

providing a truly offsite backup solution. The Colleges of

Business and Engineering choose to store their backups in

different buildings on-campus, thereby mitigating the

threats posed by fire, flood, and wind-storm. These

preparations increase the likelihood that rapid recovery

from a disaster will be possible. Finally, once per year,

copies of all videos are delivered to the professors giving

each class as an additional form of safe archival.

D. Telecommunications and Network Security

The domain known as Telecommunications and

Network Security encompasses transmission methods,

formats and security measures used to provide protection

of the material while it is in transit over private and public

communications network and media [4].

The University uses firewalls and intrusion detection

systems to thwart head-on assaults by attackers. The

University has deployed modern, up-to-date anti-virus

software on all computer systems joined to the network in

order to slow and/or halt the spread of network based

worms as well as to detect malware which could offer

attackers back-door access. The University uses

encrypted protocols like Secure HTTP (https) to prevent

the sniffing of credentials by attackers.

E. Legal Compliance

According to the Common Body of Knowledge, this

domain addresses computer crime legislation and

regulations, and the investigative measures and techniques that are used to determine whether an incident has occurred [4]. At the

orientation for distance education students, as well as for

on-campus students, the “do’s and don’ts” are listed. The

policy is also available on the Office of Information

Technology webpage for reference. This is to make sure

that all students are aware of their roles and

responsibilities. One clause clearly states that the video

cannot be copied and distributed for personal purposes or sold for money. A student found guilty of doing so can be expelled from the University. Annual audits are also

performed to ensure that all of the backup copies are

accounted for.

IV. CONCLUSION AND FUTURE WORK

In this case study we have discussed the different

approaches used by colleges within our University to

protect streaming classes. Though the audience and

distribution methods vary between the Colleges of

Business, Engineering, and the School of Pharmacy the

techniques used to protect the streamed content are largely

the same.

As described earlier in this paper, protection of the

university’s intellectual property is a primary concern.

Unfortunately no delivery system can be considered truly

secure unless one controls both the transmitting and receiving computers. Therefore the decision to allow

students to download the course materials becomes a

philosophical one for the university: do the advantages of

allowing students to download materials outweigh the

penalties if a malicious student is able to bypass the

intellectual property controls implemented by the

vendors?

In both cases presented here, the university has decided

that the gains outweigh the costs. Obviously the

protections offered by the software vendors play a role in

this decision. The StarBak system does not natively allow

viewers to download the streaming .asf video files. Hence

the expertise required for a student to upload the course

materials to an alternate site such as YouTube can be

considered high enough to eliminate this as a practical

concern. On the other hand, the Sony system provides what is known as progressive streaming: the computer has to buffer the video stream in a temporary file, and users can potentially capture that file. Also, other capture systems allow files to be downloaded to the computer, though the manager can turn this feature off. In these cases,

the colleges of business and engineering have concluded

that the virtues of distance education outweigh the costs of

having course materials illegally distributed. It is also

worth remembering that the university still enjoys all of

the legal protections afforded by U.S. intellectual property

laws. The videos are copyrighted, and any misuse constitutes an infringement of intellectual property rights. Students are fully aware of the repercussions should they upload videos to YouTube or share them with other universities.

Future work will focus primarily on the upcoming

changes planned for the School of Pharmacy system. The

School of Pharmacy has recently announced plans to

invest more than $25,000 in order to upgrade from version

2.0 to version 3.0 of the Starbak system. Version 3.0

offers capabilities and features not found in version 2.0 [3]. How these changes impact the overall security posture of the University network will be studied in detail.


REFERENCES

[1] “Harrison School of Pharmacy” Available:

http://pharmacy.auburn.edu/mobile/index.htm

[2] “Streaming Video: Products: Polycom RSS 2000” Available:

http://www.ivci.com/streaming-polycom-rss-2000.html

[3] “Manage the V3 System” Available:

http://www.starbak.com/solutions/Manage_the_V3_System/manage-

v3.aspx

[4] H. F. Tipton and K. Henry, Official (ISC)² Guide to the CISSP CBK, Auerbach Publications.

Assessing the Impact of Security Culture and the Employee-Organization Relationship on IS Security Compliance

Gwen Greene, Applied Information Technology Department

Towson University, Towson, MD 21252, USA

[email protected]

John D'Arcy, Mendoza College of Business, University of Notre Dame,

Notre Dame, IN 46556, USA

[email protected]

Abstract—IS security advocates recommend strategies that shape

user behavior as part of an overall information security

management program. A major challenge for organizations is

encouraging employees to comply with IS security policies. This

paper examines the influence of security-related and employee-

organization relationship factors on users’ IS security compliance

decisions. Specifically, we predict that security culture, job

satisfaction, and perceived organizational support have a positive

effect on users’ IS security

compliance intentions. Our results support the notion that

security culture and job satisfaction lead to increased compliant

security behavior. However, we did not find evidence that

perceived organizational support increases compliant behavior

intentions. Our results do suggest that security culture is a

multidimensional construct consisting of top management

commitment to security, security communication, and computer

monitoring. The findings have implications for information

security research and practice.

Index Terms—Information security, user behavior, security

culture, job satisfaction, perceived organizational support

I. INTRODUCTION

IT is recognized among information systems (IS) security

scholars and practitioners that end users are a “weak link” in

security and therefore organizations must develop strategies to

shape user behavior as part of their overall information

security management program. A major challenge for

organizations is encouraging employees to comply with IS

security policies. Many users routinely make a conscious

decision to violate such policies because they want to expedite

their work or increase their own productivity [22]. Others may

violate them with more harmful intentions, such as stealing sensitive

corporate data or sabotaging company networks. Regardless of

user intent, research indicates that over half of all information

security breaches stem from users' poor (or lack of) IS security

compliance [26][32].

Due to the significance of this issue, a body of literature that

investigates a variety of individual and contextual influences

on compliant user behavior has emerged. One stream is rooted

in deterrence theory, which predicts that the greater perceived

certainty and severity of sanctions for an illicit act, the more

individuals are deterred from that act [11]. The premise of

these studies is that fear of punishment motivates users to

comply with IS security policies. For example, Herath and Rao

[14] found that perceived certainty of detection was positively

associated with users' security policy compliance. A separate

stream investigates various non-punishment factors, such as

influence of co-workers, attitudes and beliefs, moral reasoning,

and coping mechanisms, as antecedents to compliant security

behavior. This work incorporates elements from moral

development research models, the theory of reasoned

action/planned behavior, as well as criminological perspectives

including social bond theory, differential association, and

neutralization. Additional research has assessed the influence of work environment factors, such as security culture, as predictors of compliant behavior [6][10]. D'Arcy and Greene

[10] found that security culture – operationalized as top

management commitment to security and security

communication – had a positive influence on security

compliance intention among a sample of computer-using

professionals.

Although this prior work is highly informative and

contributes substantial theoretical knowledge to the IS security

domain, we believe that additional research is needed to

provide a more complete understanding of what motivates

users to engage in compliant security behavior. Interestingly,

the vast stream of organizational behavior (OB) literature has

received little attention from IS security scholars. The OB

literature provides a rich source of knowledge on the factors

that contribute to a variety of workplace behaviors, including

behaviors that adhere to corporate policies and regulations. In

the current paper, we draw upon the OB literature and identify

two factors – job satisfaction and perceived organizational

support – that influence security compliance intention. We also

augment D‟Arcy and Greene‟s [10] second-order security

culture construct we an additional factor, computer

monitoring, and posit that security culture, job satisfaction, and

perceived organizational support each have a positive impact

on security compliance intention. The integrated model, shown

in Figure 1, facilitates a comparison of security-related factors

(i.e., security culture) to those of the employee-organization

relationship (i.e., job satisfaction and perceived organizational

support). The results shed light on the factors that motivate

individuals toward compliant IS security behavior in the

workplace.



Fig. 1. The Research Model

II. SECURITY CULTURE

Organizational culture has been defined as a pattern of

beliefs and expectations shared by organization members [7].

These beliefs and expectations produce norms that shape the

behavior of individuals, groups, or the organization. In a

similar vein, security culture reflects the values and beliefs of

information security shared by all members at all levels of the

organization. Researchers have pointed out that security

culture is an important factor in maintaining an adequate level

of information security in organizations and have even asserted

that only a significant change in security culture can reduce the

number of security breaches experienced [24]. As such,

managers should regard security culture as an important factor

for supporting and guiding their security management

programs [7]. IS security advocates have suggested that

organizations can influence user behavior by cultivating a

security culture that promotes security-conscious decision

making and adherence to security policy [28][29].

Ruighaver et al. [24] argued that security culture is a

multidimensional concept that has often been investigated in a

simplistic manner. In response, D'Arcy and Greene [10] operationalized security culture as a higher (second) order

construct consisting of top management commitment to

security and security communication. In terms of management

support, research has found that beliefs that the decision

makers within organizations have about security, and about the

quality of different processes used to manage security, are

distinguishing characteristics of a security culture [24]. In

short, if information security is clearly important to the

organization, it should be apparent in the actions and

communications of top management.

Education of employees on their roles and responsibilities

related to security is another crucial aspect of security culture

[28]. A well-defined process of security communication

involving education, reminders, and refresher courses increases employee feelings of responsibility and ownership in decisions about security, and ultimately leads to a more positive attitude

about security throughout the whole organization [29]. Such

communication efforts manifest themselves in security culture,

which in turn assists in ensuring acceptable actions and

behavioral patterns of end users.

Based on a review of the IS security literature, we identified

a third dimension of security culture: computer monitoring.

Computer monitoring is often used by organizations to gain

compliance with rules and regulations [11]. Within the IS

context, examples include tracking employees' Internet use,

recording network activities, and performing security audits.

Monitoring and auditing activities are a means of policy

enforcement and convey the message that the organization is

serious about security and any deviations from mandated

policies and procedures will not be taken lightly [27]. Hence,

computer monitoring is a signal of the organization's security

culture.

Although previous studies have not examined security

culture in a holistic manner, extant literature does suggest that

each of our three security culture dimensions impacts user

behavior. Hu et al. [15] found that individual behavior toward

information security is influenced by the actions of top

management. IS scholars have also asserted that continuous

education and communication efforts that reiterate the

importance of security help to ensure compliant behavior [28].

Finally, Herath and Rao [14] provide indirect support for the

notion that monitoring increases compliance, as they found a

significant positive relationship between perceived certainty of

detection and security policy compliance intention. Based on

the above, we posit that the stronger the security culture within

the organization, the more likely users will exhibit compliant

behavior. Hence, the following hypothesis:

H1: Security culture is positively associated with security

compliance intention.

III. JOB SATISFACTION AND PERCEIVED

ORGANIZATIONAL SUPPORT

As previously mentioned, we identified job satisfaction and

perceived organizational support as two variables from the OB

literature that have a bearing on decisions to comply with IS

security policy¹. Both of these variables have been shown to

influence employees‟ efforts toward achieving organizational

goals, and we therefore reason that this motivational basis will

percolate to the narrower domain of IS security compliance.

Job satisfaction refers to an employee's overall sense of

well-being at work [2]. The construct is rather general and

encompasses employee feelings about a variety of intrinsic and

extrinsic job elements. It is believed that employees who

report positive feelings about their employer are likely to

comply with company policies and procedures because they

are highly engaged and see the “big picture” in terms of their

organizational responsibilities. Thus, to the extent that positive

¹ The sheer size of the organizational behavior (OB) literature and the number of variables studied by OB scholars make it impractical to include all possible constructs that might have a relationship with security compliance intention. Our examination of the literature led us to pick two of the more interesting (in our minds) variables to examine. We leave it to future studies to include additional OB variables.


affect is reflected in one's job satisfaction, it is likely that more

satisfied employees engage in computing behavior that is in

line with corporate policies and guidelines. Social exchange

theory provides theoretical justification for this assertion. The

theory predicts that people seek to reciprocate those who

benefit them [4]. In an organizational context, if an employee

perceives their work contribution as a positive exchange

relationship in which they feel properly treated by their

organization, they reciprocate through behavior that is positive

and beneficial to the organization [25].

A number of studies provide empirical support for a

relationship between job satisfaction and compliant behavior.

For example, a recent meta-analysis found a positive

correlation between job satisfaction and measures of

performance that included task or in-role performance [16].

Research has also consistently shown that job satisfaction is

related to a variety of organizational citizenship behaviors,

including organizational compliance [19][31]. Based on the

foregoing evidence, we postulate that higher job satisfaction

will manifest itself in an increased tendency toward compliant

security behavior. Hence, the following hypothesis:

H2: Job satisfaction is positively associated with security

compliance intention.

Perceived organizational support (POS) refers to

employees' perceptions about the degree to which the

organization cares for their well-being and values their

contributions [12]. POS strengthens employees' beliefs that

the organization recognizes and rewards increased

performance or expected behaviors. In return, the employee is

expected to provide dedication and loyalty to the organization

and help the organization reach its objectives [23]. Similar to

the logic used above, social exchange theory provides a

theoretical basis for a relationship between POS and compliant

behavior. When POS is high, a social exchange develops

where the employee may feel compelled to reciprocate the

high level of perceived affective commitment he or she

receives from the organization by engaging in beneficial

behavior. In the IS security context, it is reasonable to assume

that the employee's higher level of affective commitment will

lead to a greater adherence to security policies.

The OB literature provides empirical support for this

purported relationship. Settoon et al. [25] cite a number of

studies that found a positive relationship between perceived

organizational support and employees‟ willingness to fulfill

conventional job responsibilities, such as following corporate

rules and guidelines specified in their job descriptions.

Rhoades and Eisenberger [23] found a relationship between

POS and employee job performance in their meta-analysis of

over 70 POS studies. Research has also demonstrated that POS

is associated with increased organizational commitment and

job involvement (i.e., identification with and interest in the

specific work one performs), which in turn are both linked to

increased productivity and prosocial behavior [12][25].

Finally, POS has been shown to be related to lessened

withdrawal behavior and an increased desire to remain with

the organization [23]. Given these empirical findings, the

theoretical predictions of social exchange theory, and the

notion that POS is associated with increased dedication and

loyalty to the organization, we predict a relationship between

POS and IS security compliance intention. Hence, the

following hypothesis:

H3: Perceived organizational support is positively associated

with security compliance intention.

IV. METHODOLOGY

Data were collected using two online surveys that were

administered at separate points in time. The first survey

contained measures for security compliance intention (COMP)

and security communication (COM), as well as demographic

items. The second survey was administered two weeks after

the first and contained measures for top management

commitment to security (TMCS), computer monitoring

(MON), job satisfaction (JS), and perceived organizational

support (POS). The two-stage approach allowed us to separate collection of the independent variables (with the exception of COM)

from the dependent variable (COMP), thus reducing the

likelihood of common method effects in our results [21].

The survey used several previously validated scales as well

as some original items. TMCS was measured with three items

from Knapp et al. [17]. COM was measured with a six-item

scale developed by the authors, using various items from

Knapp et al. [18]. MON was measured with three items from

D'Arcy et al.'s [11] awareness of computer monitoring scale. JS was measured with five items from Brayfield and Rothe's

[5] index of job satisfaction. This scale has been used to

measure overall job satisfaction in numerous studies and has

shown strong psychometric properties. POS was measured

using Eisenberger et al.'s [12] seven-item scale. The scale

assesses employee valuation of the organization and actions it

might take in situations that affect employee well-being. This

scale has also exhibited strong validity and reliability in prior

studies (e.g., [25]). All survey items were measured on five

point scales ranging from 'strongly disagree' to 'strongly agree,' except for the control variables. The items are

presented in the Appendix.

We reviewed our multi-item measures using criteria for

formative and reflective constructs [20] and determined that

each was reflective. However, we modeled security culture as

a Type II second-order construct – a second order construct

that has formative relationships with its reflective

subconstructs. Recall that security culture is a higher (second)

order construct consisting of three lower (first) order

constructs: TMCS, COM, and MON. At an abstract level, each

of these variables contributes to the overall security culture

construct. However, our literature review (and the inter-

construct correlation results in Table 3) indicates that TMCS,

COM, and MON are distinct dimensions of security culture.


A list of 223 potential survey participants was developed

based on the first author's professional contact list. This list

consisted of computer-using professionals located in various

organizations throughout the mid-Atlantic region of the U.S.

An email invitation was sent to the list, asking individuals to

complete the first survey. The email also stated that a second

survey would be forthcoming in two weeks. One hundred

forty-six individuals provided usable responses to the first

survey, and 134 provided usable responses to the second.

Participants were asked to provide the last four digits of their

telephone number in each survey so that the researchers could

match their responses. The final sample consisted of 127

usable, matched responses. Table 1 summarizes the respondent

demographic characteristics.
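A sketch of this matching step is shown below; the file and column names are invented for illustration, as the authors do not describe their tooling.

    # Illustrative sketch of the response-matching step: the two surveys
    # are joined on the self-reported last four digits of the respondent's
    # phone number. File and column names are hypothetical.
    import pandas as pd

    survey1 = pd.read_csv("survey1.csv")   # COMP, COM, demographics, last4
    survey2 = pd.read_csv("survey2.csv")   # TMCS, MON, JS, POS, last4

    matched = survey1.merge(survey2, on="last4", how="inner")
    print(len(matched))                    # 127 usable, matched responses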

TABLE I

RESPONDENT DEMOGRAPHIC CHARACTERISTICS

V. ANALYSIS AND RESULTS

We used the partial least squares (PLS) structural equation

modeling technique to analyze the data via the SmartPLS 2.0

software package. PLS was chosen because it supports the

testing of formative relationships between higher-order

constructs and their subconstructs. The second-order construct

security culture is estimated using the hierarchical component

approach. In this approach, the higher-order factor is created

using the indicators of its lower-order factors (i.e., TMCS,

COM, and MON). For further explanation, see [8][30].

Following standard procedures, we assessed the

measurement model and then the structural relationships. The

measurement model was assessed through tests of convergent

validity, discriminant validity, and reliability. For convergent

validity, all factor loadings should exceed 0.70 and average

variance extracted (AVE) for each construct should exceed

0.50 [13]. As shown in Table 2, both criteria were met for all

constructs, with the exceptions of TMCS3 (loading of .69) and

POS4 (loading of .63).

TABLE 2
LOADINGS, CROSS-LOADINGS, AND AVEs

TABLE 3
RELIABILITY AND INTER-CONSTRUCT CORRELATIONS

For discriminant validity, the square root of the AVE for

each construct should be greater than the inter-construct

correlations, and items should load more strongly on their

corresponding construct than on other constructs (i.e., loadings

should be at least .10 greater than cross-loadings) [13]. As shown in

Tables 2 and 3, these conditions were met for all constructs. It

is worth noting, however, that the JS and POS constructs were

highly correlated at .74. Multicollinearity typically is

considered a serious problem if the correlation between two

variables is greater than 0.8 [3]. We conducted formal tests for

multicollinearity and both constructs had variance inflation

factor (VIF) values less than 3.0, which is below the

conservative cutoff of 3.3 [20]. Further, our discriminant

validity tests revealed that JS and POS were empirically

distinct. Hence, our results corroborate prior research that has

found POS to be related, yet distinct from, JS [23]. We also

note that the correlations between TMCS, COM, and MON are

below the .80 cutoff value commonly used to assess

discriminant validity. This demonstrates the distinctiveness of

content captured by the individual first-order factors,

providing support for the formative nature of security culture.
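The checks described above can be expressed compactly; the sketch below uses placeholder AVE and correlation values rather than the paper's data.

    # Sketch of the discriminant validity checks described above
    # (Fornell-Larcker criterion plus the .80 correlation cutoff).
    # AVE and correlation values are placeholders, not the paper's data.
    import numpy as np

    ave = np.array([0.72, 0.68, 0.75])        # AVE per construct (placeholder)
    corr = np.array([[1.00, 0.55, 0.48],
                     [0.55, 1.00, 0.61],
                     [0.48, 0.61, 1.00]])     # inter-construct correlations

    sqrt_ave = np.sqrt(ave)
    off_diag = corr - np.eye(len(ave))        # zero the diagonal
    print("sqrt(AVE) exceeds correlations:",
          all(sqrt_ave[i] > off_diag[i].max() for i in range(len(ave))))
    print("all correlations below .80 cutoff:", off_diag.max() < 0.80)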


Finally, the reliabilities of all constructs were above the

recommended .70 threshold [13] as shown in the second

column of Table 3.

The test of the structural model includes estimating the path

coefficients, which indicate the strength of the relationships

between the independent and dependent variables, and the R² value (the variance explained by the independent variables). A

bootstrapping resampling procedure (600 samples) was used to

determine the significance of the paths within the structural

model. We followed a multistage approach and analyzed the

structural model in a manner analogous to hierarchical

regression. This approach facilitates an assessment of the

substantive impact of the two OB variables (JS and POS) on

COMP relative to that of security culture.

The first model consisted of only the control variables and explained 9% of COMP variance. Next, we added security culture to the model; the addition of this second-order factor increased the variance explained to 40%. We then calculated the effect size by applying the following formula: f² = (R²_Full Model − R²_Partial Model) / (1 − R²_Full Model), which yielded an effect size of .51. We determined the significance of this change in R² through a pseudo F-test in which f² was multiplied by (n − k − 1), where n equals the sample size and k equals the number of independent variables. The result of the pseudo F-test was 62.62 (p < .001). An effect size of 0.02 is small, 0.15 is moderate, and 0.35 is large [9]. Thus, the addition of security culture to our control-variable-only model had a large and significant effect size.

Next, the JS and POS variables were added, increasing the R² of COMP from 40% to 45%. This is a small effect size (.10), but it is significant (12.02, p < .001). Results of the full PLS model are shown in Figure 2.
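The reported effect sizes can be re-derived from the formula above. The values of k assumed below (4 for the controls-plus-culture model, 6 after adding JS and POS) are our reading of the model, not figures stated in the paper.

    # Re-deriving the effect sizes reported above, with n = 127.
    def f_squared(r2_full: float, r2_partial: float) -> float:
        return (r2_full - r2_partial) / (1 - r2_full)

    n = 127

    f2_culture = f_squared(0.40, 0.09)             # ~0.52, reported as .51
    print(f2_culture, f2_culture * (n - 4 - 1))    # pseudo F ~63 vs reported 62.62

    f2_ob = f_squared(0.45, 0.40)                  # ~0.09, reported as .10
    print(f2_ob, round(f2_ob, 2) * (n - 6 - 1))    # 0.10 * 120 = 12.0 vs reported 12.02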

The results show that the three dimensions forming security

culture (TMCS, COM, and MON) each have significant paths.

This indicates that all three first-order constructs make a

unique contribution to the second-order construct, and thus

provide justification for the acceptance of security culture as a

second-order factor. COM had the highest of the three path

coefficients, suggesting a greater importance of this dimension

over TMCS and MON in forming security culture. In terms of

the hypotheses, the results are supportive of H1 as security

culture had a significant positive relationship with COMP (β =

.636, p<0.01). The results also support H2 as JS had a

significant positive relationship with COMP (β = .214,

p<0.05). The relationship between POS and COMP was

significant (β = -.363, p<0.01), but not in the predicted

direction and therefore H3 is not supported. Turning to the

control variables, age (β = .169, p<0.01) was significantly

associated with COMP while gender and industry were not.

Thus, it seems that older employees are more likely to comply

with IS security policies.

Fig. 2. Structural Model Results

As an additional analysis we tested for the moderating

influences of employee type (IT vs. non-IT employee),

industry (IT company vs. non-IT company), and employee

tenure (low tenure vs. high tenure) on the hypothesized

relationships in our model. While we have not committed to a

particular theoretical basis for these moderating effects, it

seems logical that the influences of security culture and our

OB variables on COMP may be contingent on the

aforementioned factors. To explore the moderating influence

of employee type, we split the sample into IT (n=50) and non-

IT (n=77) employee subgroups and ran separate models for

each group. Statistical tests using the approach suggested by

[8] assessed the differences in path coefficients between the

subgroups. The results revealed significant differences in the

two groups for the influences of both OB variables on COMP.

For the non-technical subgroup, the relationship between POS

and COMP was nonsignificant while for the technical

subgroup this relationship was negative and significant (β = −.278, p < 0.01). In terms of JS, the relationship between JS and

COMP was positive and moderately significant (β = .123,

p<0.10) for the non-technical group while nonsignificant for

the technical group. These results suggest that job satisfaction

is a somewhat important determinant of security compliance

intention among non-technical employees but has little or no

influence for employees in technical positions. Furthermore,

the unexpected negative relationship between POS and COMP

that was found in the full sample results (Figure 2) seems to be

largely a result of respondents with technical positions.

We followed the same procedure as above and created

subgroups for the respondents from IT companies (n=60) and

non-IT companies (n=67). The only significant difference between the groups was the relationship from JS to COMP,

with this coefficient being positive and significant (β = .198,

p<0.05) in the non-IT company sample and nonsignificant in

the IT company sample. Hence, it appears that JS is more

influential in determining employee security compliance

intention in non-IT companies compared to IT companies. We speculate that, due to their industry affiliation, employees in IT companies are more cognizant of security issues, have a stronger overall sense of the importance of IT security, and are therefore more apt to engage in compliant security behavior. Employees in non-technical industries, where there may be less emphasis on IT security, may require more positive affect toward their employer before engaging in compliant security behavior.

Finally, we performed a median split (median = 4 years) on

the tenure variable and created subgroups for the low tenure

(n=65) and high tenure (n=62) employees. As with the industry

subgroups, the only significant difference between the groups was the relationship from JS to COMP, with this coefficient

being positive and marginally significant (β = .126, p<0.10)

for the low tenure sample and non-significant for the high

tenure sample. This suggests that employees with less time in

the organization are more influenced by their satisfaction

toward their job in terms of engaging in security related

behavior. It may be that higher tenure employees comply with

security policies and procedures out of fear of punishment or

because they have more to lose if they do not comply, whereas

lower tenured employees have less to lose if punished and

therefore make their compliance decisions based on their

feelings of affect toward their job and the organization.

It should also be noted that the influence of security culture

was relatively consistent across the subgroup analyses. Hence,

security culture appears to have a strong influence on security

compliance intention across different employee positions and

industries, and for employees with various years of work

within their respective organizations.

VI. DISCUSSION

This paper introduced and empirically tested a model of the effects of security culture, job satisfaction, and perceived organizational support on IS security compliance intention. In

doing so, we (1) augmented the second-order security culture

construct with an additional dimension – computer monitoring,

and (2) evaluated the contribution of two employee-

organization relationship variables on IS security compliance

intention. The results suggest that both security-related and

general work environment factors contribute to IS security

compliance. The results also provide some evidence that these

factors are moderated by employee position, industry, and

tenure with the organization.

The observed relationship between security culture and

security compliance intention supports the notion that security

culture is an important factor, and perhaps necessary, for

supporting and guiding information security management

programs. The results also support prior empirical work that

found a relationship between security culture and both in-role

(i.e., formally specified in job descriptions) and extra-role (i.e.,

discretionary) end user security behavior. However, as noted,

extant research has conceptualized security culture as top

management commitment to security and security

communication. We contributed to the content validity of the

security culture construct by identifying computer monitoring

as a third dimension. This is based on the idea that

enforcement activities are a key facet of any information

security culture. Our results point to the unique contribution of

computer monitoring in forming a security culture, while

suggesting that this broader conceptualization of security

culture has a strong influence on users' IS security compliance

decisions.

The examination of the OB variables advances our

understanding of the factors that motivate compliant user

behavior in the workplace. We found that higher job

satisfaction is associated with an increased tendency toward

compliant security behavior. This supports the predictions of

social exchange theory in that users will be more apt to engage

in positive and beneficial organizational behaviors if they are

satisfied and perceive their employment relationship as a

positive exchange. The results also extend prior work that

found a link between job satisfaction and desirable employee

behavior to the narrower context of security-related behavior.

In short, satisfied employees appear more likely to comply

with information security mandates.

The results regarding the influence of perceived

organizational support were not as expected and therefore

warrant further discussion. Based on the voluminous OB

literature in this area, we expected perceived organizational

support to have a positive effect on individuals‟ intentions to

engage in compliant security behavior. Instead, we found the

opposite effect; that is, perceived organizational support had a

significant negative association with security compliance

intention. A possible explanation is that employees who

perceive strong organizational support in general may feel that

information security problems will be handled by the IT

department or other functional areas and therefore their own

compliance practices are not critical. This is related to the

concept of risk homeostasis [1]. Risk homeostasis predicts that

people engage in risky behavior when they perceive increased

safety in a particular area (i.e., a false sense of security). Thus,

users may be less vigilant in their security behavior when they

perceive greater organizational support because they equate

this to an abundance of security controls. It may also be that

employees who perceive stronger organizational support feel

that the organization will continue to “have their backs” even

if they fail to comply with security policies and guidelines.

Finally, it is possible that the strong correlation between our

job satisfaction and perceived organizational support construct

measures contributed to the unexpected result. On the other

hand, formal tests indicated that the two constructs were

empirically distinct and their correlation was within the

threshold for establishing discriminant validity. At any rate,

future studies should assess the influences of job satisfaction

and perceived organizational support on security compliance

intention vis-à-vis other employment relationship variables to

add credibility to our findings.

The results of the subgroup analyses suggest that the

relationships between the OB variables and IS security

compliance may be contingent on other factors. Specifically,

the influence of job satisfaction on security compliance

intention is stronger for employees with non-technical


positions and for employees in non-IT industries, and for

employees with less tenure in their organizations. Future

research should explore the underlying causes of these

contingent effects and test the purported moderating effects

using stronger empirical tests.

This study has implications for both the research and

practice of information security. From a research perspective,

this study offers an expanded conceptualization of security

culture that accounts for enforcement activities in the form of

computer monitoring. Thus, we offer a more thorough

measurement and analysis of security culture that can be used

in future studies. The current study is also one of the few in the

behavioral information security literature that incorporates OB

variables. As such, we extend the rich body of work on the

employment relationship factors that influence workplace

behavior to the information security realm. From a

methodological perspective, the study improves upon the rigor

of many prior studies through a two-part survey approach that

included a time lag between collection of the independent and

dependent variables (with the exception of COM). This

significantly reduces the likelihood of common method effects

in our data. For practitioners, the results point to the

importance of top management commitment to security,

security communication efforts, and monitoring activities as

drivers of security culture, which in turn contributes to

adherence to security policy. Further, efforts to create a work

environment where employees are satisfied and happy with

their jobs will not only improve the overall quality of life

within the organization, but should also enhance the

organization's security posture by increasing security

compliance among the user population.

APPENDIX

REFERENCES

[1] E. Albrechtsen, "A qualitative study of users' view on information

security," Computers & Security, vol. 26, no. 4, pp. 276-289, 2007.

[2] S. Ang, L. van Dyne and T.M. Begley, "The employment relationships

of foreign workers versus local employees: A field study of

organizational justice, job satisfaction, performance, and OCB," Journal

of Organizational Behavior, vol. 24, no. 5, pp. 561-583, 2003.

[3] R.P. Bagozzi, Y. Yi and L.W. Phillips, "Assessing construct validity in

organizational research," Administrative Science Quarterly, vol. 36, no.

3, pp. 421-458, 1991.

[4] P.M. Blau, Exchange and power in social life, John Wiley & Sons, New

York, 1964.

[5] A.H. Brayfield and H.F. Rothe, "An index of job satisfaction," Journal

of Applied Psychology, vol. 35, no. 5, pp. 307-311, 1951.

[6] M. Chan, I. Woon and A. Kankanhalli, "Perceptions of information

security at the workplace: linking information security climate to

compliant behavior," Journal of Information Privacy & Security, vol. 1,

no. 3, pp. 18-41, 2005.

[7] S.E. Chang and C-S. Lin, "Exploring organizational culture for

information security management," Industrial Management & Data

Systems, vol. 107, no. 3, pp. 438-458, 2007.

[8] W. Chin, "Frequently asked questions-partial least squares & pls graph,"

http://disc-nt.cba.uh.edu/chin/plsfaq.htm, 2000.

[9] J. Cohen , Statistical power analysis for the behavioral sciences,

second edition, L. Erlbaum Associates, Hillsdale, N.J., 1988.

[10] J. D'Arcy and G. Greene, "The multifaceted nature of security culture

and its influence on end user behavior," in Proceedings of IFIP TC8

International Workshop on Information Systems Security Research,

Cape Town, South Africa, pp. 145-157, May 2009.

[11] J. D'Arcy, A. Hovav and D. Galletta, "User awareness of security

countermeasures and its impact on information systems misuse: A

deterrence approach," Information Systems Research, vol. 20, no. 1, pp.

79-98, 2009.

[12] R. Eisenberger, R. Huntington, S. Hutchison and D. Sowa, "Perceived

organizational support," Journal of Applied Psychology, vol. 71, no. 3,

pp. 500-507, 1986.

[13] D. Gefen and D. Straub, "A practical guide to factorial validity using

pls-graph: Tutorial and annotated example," Communications of AIS,

vol. 16, pp. 91-109, 2005.

[14] T. Herath and H. Rao, "Protection motivation and deterrence: a

framework for security policy compliance in organisations," European

Journal of Information Systems, vol. 18, no. 2, pp. 106-125, 2009.

[15] Q. Hu, T. Dinev, P. Hart and D. Cooke, "Top management

championship and individual behaviour towards information security:

an integrative model," in Proceedings of the Sixteenth European

Conference on Information Systems, Galway, Ireland, June 2008.

[16] S. Kaplan, J.C. Bradley, J.N. Luchman and D. Haynes, "On the role of

positive and negative affectivity in job performance: A meta-analytic

investigation," Journal of Applied Psychology, vol. 94, no. 1, pp. 162-

176, 2009.

[17] K.J. Knapp, "Information security: management's effect on culture and

policy," Information Management & Computer Security, vol. 14, no. 1,

pp. 24-36, 2006.

[18] K.J. Knapp, T.E. Marshall, R.K. Rainer and F.N. Ford, "Managerial

dimensions in information security: A theoretical model of

organizational effectiveness," ISC2 Inc and Auburn University Research

Report, 2005.

[19] J.A. LePine, A. Erez and D.E. Johnson, "The nature and dimensionality

of organizational citizenship behavior: A critical review and meta-

analysis," Journal of Applied Psychology, vol. 87, no. 1, pp. 52-65,

2002.

[20] S. Petter, D. Straub and A. Rai, "Specifying formative constructs in

information systems research," MIS Quarterly, vol. 31, no. 4, pp. 623-

656, 2007.

[21] P.M. Podsakoff, S.B. MacKenzie, J.Y. Lee and N.P. Podsakoff,

"Common method biases in behavioral research: A critical review of the

literature and recommended remedies," Journal of Applied Psychology,

vol. 88, no. 5, pp. 879-903, 2003.

[22] J. Predd, S.L. Pfleeger, J. Hunker and C. Bulford, "Insiders behaving

badly," IEEE Security & Privacy, vol. 6, no. 4, pp. 66-70, 2008.

[23] L. Rhoades and R. Eisenberger, "Perceived organizational support: A

review of the literature," Journal of Applied Psychology, vol. 87, no. 4,

pp. 698-714, 2002.

[24] A.B. Ruighaver, S.B. Maynard and S. Chang, "Organisational security

culture: Extending the end-user perspective," Computers & Security,

vol. 26, no. 1, pp. 56-62, 2007.


[25] R.P. Settoon, N. Bennett and R.C. Liden, "Social exchange in

organizations: Perceived organizational support, leader-member

exchange, and employee reciprocity," Journal of Applied Psychology,

vol. 81, no. 3, pp. 219-227, 1996.

[26] J.M. Stanton, K.R. Stam, P. Mastrangelo and J. Jolton, "Analysis of end

user security behaviors," Computers & Security, vol. 24, no. 2, pp. 124-

133, 2005.

[27] D.W. Straub, "Effective IS Security," Information Systems Research,

vol. 1, no. 3, pp. 255-273, 1990.

[28] R. von Solms and B. von Solms, "From policies to culture," Computers

& Security, vol. 23, no. 4, pp. 275-279, 2004.

[29] C. Vroom and R. von Solms, "Towards information security behavioural

compliance," Computers & Security, vol. 23, no. 3, pp. 191-198, 2004.

[30] M. Wetzels, G. Odekerken-Schroder and C. van Oppen, "Using PLS path modeling for assessing hierarchical construct models: Guidelines and empirical illustration," MIS Quarterly, vol. 33, no. 1, pp. 177-195,

2009.

[31] L.J. Williams and S.E. Anderson, "Job satisfaction and organizational

commitment as predictors of organizational citizenship and in-role

behaviors," Journal of Management, vol. 17, no. 3, pp. 601-617, 1991.

[32] T. Wilson, "Understanding the danger within," InformationWeek

Research Report, 2009.

Invited Paper: Employee Emancipation and Protection of Information

Yurita Abdul Talib and Gurpreet Dhillon
School of Business, Virginia Commonwealth University
301 W Main Street, Richmond, VA
{abdultalibyy, gdhillon}@vcu.edu

Abstract—Incidents of computer abuse, loss of proprietary

information and security lapses have been on the increase. More

often than not, such security lapses have been attributed to

internal employees in organizations subverting established

organizational controls. What causes employees to engage in such

illicit activities is the focus of this paper. In particular we argue

that increased employee emancipation leads to better protection

of information in organizations. In this paper we present our

argument by reviewing the appropriate theoretical literature and

by developing conceptual clarity of the related constructs.

I. INTRODUCTION

Internal threats from employees have always been

acknowledged as a major source of security breaches in

organizations [37]. A recent report by privacyrights.org on

data breaches indicates a substantial increase of internal

security breaches from 2005 to 2009 that involve various cases

of stealing personal information of internal publics, co-workers

and clients in different organizations such as banking, health,

university and government offices. The breaches cause substantial damage to a company's computer systems, financial data, and business operations and, ultimately, its reputation. These

events are created by the people who are given privileges to

use IS in organizations and subsequently misuse that access

privilege. Stanton et al. [32] note that, regardless of users' intent - from naive mistakes to intentional misuse - security breaches stemming from a lack of compliance can be curtailed. The importance of the issue thus underscores the value of research into insider threats, which has consistently focused on employees' compliance with IS policies. IS literature

has a long tradition of studying the management roles in

cultivating a security culture and its impact on intentions to

comply [11][22][31]. The premise of these studies is that

management commitment, communication and enforcement of

power motivates employees to comply with security policies.

For example, a study conducted by Arboleda et al. [4] on the antecedents of employees' safety compliance behavior discovers that top management

commitment to safety was perceived to be an integral

determinant of safety culture. Additionally, Chan et al. [11]

suggest that upper management practices are positively related to employees' compliance behavior through the mediation of perceived information security climate. Along

the same lines, Spitzmueller and Stanton [31] reveal

that organizational commitment, organizational identification

and attitudes towards technology predict employees' intentions

to comply with security policies. An associated stream of research is one that investigates

motivation of management through reward and sanction in

influencing employees‟ compliance behavior. An organization

that is truly dedicated to security will recognize the need for

motivation beyond mere security awareness and will develop

an effective security motivation program along with or as a

part of a continuing awareness effort. It should employ the

most controllable and powerful motivators, rewards and

penalties [27]. A study conducted by Siponen et al. [30] finds

that sanctions have a significant impact on actual compliance

with information security policies. According to Parker [27]

penalties or sanctions often involve loss of favor, perks,

position, or remuneration. One group manager publicly posted

the name of anybody who revealed his or her password and

required everyone in the group to immediately change their

passwords. This produced peer pressure to keep passwords

secret because nobody liked having to learn a new one.

However Pahnila et al. [26], commenting on factors that

explain employees' IS security policy compliance among

Finnish companies, suggest that rewards do not have a

significant effect on actual compliance with IS security policy.

Most of the prior research in internal control towards security

problems deals with protection motivation and cultivating a

security culture. While this stream of research is useful, it falls

short of suggesting why employees subvert security policies in

the first place. Moreover, from an ontological perspective,

previous research orientation is dominated by functionalist and

interpretivist paradigms [13]. Although these paradigms are

important, information systems security research can also be

studied from alternative viewpoints, viz., the critical paradigm. Based upon

the notion that critical humanist study of information systems

security is underdeveloped, we attempt to explain the issue of

employees' intention to comply with security policy by

arguing that lack of emancipation of an individual is a

significant cause for security lapses.



II. WHO IS AN EMANCIPATED EMPLOYEE?

The concept of emancipation was established by Critical

Social Theorists (CST) as an alternative to traditional research

approaches. The fundamental goal of CST is improvement of

human conditions, which takes into account the human

construction of social forms of life and the possibility of their

recreation [25]. The critical theorists seek to emancipate

people; they are concerned with finding alternatives to existing

social conditions. More specifically, with emancipation, the

organizational actors have the capability to transform

organizational conditions.

Habermas's Theory of Communicative Action is an

extension of CST with a broader notion of rationality.

Habermas developed the concept of "communicative action",

defined as "the type of interaction in which all participants

harmonize their individual plans of action with one another

and thus pursue their illocutionary aims without reservation"

[16]. According to this perspective, in order to overcome

social crises, it is necessary to counterbalance purposive

rationality by bringing communicative rationality back into

play. Hence the framework highlights the importance of

developing a society based on free, undistorted communication

in order to prevent colonization and technization of the lifeworld by power and money [16].

The study of human emancipation has been long established

in the field of social studies. Among the well-known studies are those on 'women emancipation' [1][2][8][12][18][21]. The term 'women emancipation' is used

to denote equalization of opportunity structure in which

women are to gain equal status as men as a result of balancing

of gender roles [21]. Women now are no longer stuck in the

kitchen with baby diapers, but are involved actively in social

structures. The prominent benefit of equality of gender is in

education, in which for a long time in history, women were not

given opportunity to obtain an education. Education opens up

many opportunities for women, while eliminating ignorance

and silent suffering. Women are now emancipated enough to

join the labor force in which they can be found amongst the

most powerful politicians, scholars, CEOs of companies and

holding many more jobs that were earlier monopolized by

men. This movement of equality in gender roles or

emancipation allows women to discover their suppressed

dignity and potential. Though some see women's emancipation as a stimulus for crime committed by women and for delinquent behaviors, most support the idea as a manifestation of democratic ideals.

Hirschheim and Klein [17] purport that “Information

systems as social communication systems have potential of

freeing employees from repressive social and ideological

conditions and thereby contributing to the realization of human

need” (p.87), i.e., information systems facilitate emancipation.

This notion finds support where information systems remedy poor information access, particularly access constrained by traditional methods of information dissemination and sharing in large, distributed organizations [9]. However, due to the fact that

'information is power' [35], organizations impose security mechanisms that limit employees' access to information systems in order to preserve management power.

The inequality of power in information access creates

oppressed individuals, which significantly contradicts the

earlier view that information systems should free employees

from oppressive conditions [17].

Given the aforementioned, in this paper we aim to stimulate the idea of employees' emancipation in information access in organizational information systems. Ultimately, the central objective is to relate employees' emancipation to their compliant behavior. In cultivating the idea, we base our conception upon the concept of women emancipation, in which managers and employees are given equal rights to access an organization's information. The transformation in information

sharing and decentralization of power would then dismiss the

power structure that alienates the employees and makes it

difficult for them to undertake their work and get involved in

decision-making processes. Emancipation therefore provides

equality between managers and workers, hence allowing for

sharing of responsibilities and freedom to take decisions.

Responsibility for successful decisions results in employees

becoming more motivated and hence increases their morale

and develops a feeling of ownership. In turn, the ownership

feeling makes employees feel responsible towards protecting

the assets of an organization.

III. HOW EMANCIPATION FACILITATES

INFORMATION PROTECTION

In an information systems environment, there are only a few

individuals (usually the management), who have a privileged

access to information and hence organizational power. This is

usually at the expense of other people in the organization (i.e.

employees). Since information is power [35], and only a

privileged few have access, it leaves a vast majority as being

powerless. Hence the notion that power 'creates' oppressed individuals, by stopping them from carrying out what they would otherwise freely choose to do, is realized [7]. Limiting

employees' access to information not only creates inequality of

power but also restricts the involvement of the oppressed in

the decision making process. Ultimately, these employees get

alienated, disgruntled and generally do not feel that they are an

active participant in organizational affairs.

Alienation of employees as the result of the power structures

can be represented by four dimensions, namely powerlessness,

meaninglessness, isolation and self-estrangement [33].

Correspondingly, the feeling of powerlessness that results from being controlled by others, together with isolation and its attendant lack of belonging, diminishes the worker's commitment towards the organization [33]. As a result, employees who feel

alienated from their employer are less motivated to commit

and more likely to take decisions that are unethical. This in

turn leads to non-compliance with security policies. A

combination of these circumstances makes an organization

susceptible to security breaches.

In this paper, the focus of employees' emancipation is

towards information access in an organization's information

systems. We define emancipation as freeing the employees

from the power structure by increasing the scope and depth of

their information access. This, we argue, leads to

decentralization of power between employees and managers,


which subsequently provides employees with the authority to

be involved in the decision making processes. Information

sharing is an instrument for eliciting employees' involvement and building trust [19]. Through decision-making involvement and the culture of trust it creates, an organization can foster greater emotional buy-in from employees.

As an example, imagine that employees are allowed to share

information about financial status during a crisis, at which time

the management often hoards and restricts access to such

information. However the employees might be able to help

with suggestions. Being at the operational level, employees

know better as to what specific actions affect the overall

business. They are also more likely to offer valuable ideas on

how the operations can be improved. Since meaningful

suggestions from employees are more likely to be adopted, it

builds a situation where the employees feel valued by the

organization. That in turn can help build morale, giving

employees a greater sense of worth and emotional ownership

in the company.

This sense of ownership or buy-in triggers a sense of

responsibility for the entity [14]. When employees feel

responsible for the entity, they will protect what belongs to the

entity - in our case the information systems and information

that they handle. This is logically supported by the assertion

that 'rational' people will not destroy or take unnecessary action against something that belongs to them. As far as possible, such people will protect what they are responsible

for. This assertion is fundamental to our argument that

emancipation of employees, by allowing them access to

information and participation in the decision making process,

leads to a sense of ownership and responsibility. This will

form the basis for ensuring complete support and adherence of

all employees to organizational security policies.

IV. ON THE NATURE AND SCOPE OF “RESPONSIBLE

BEHAVIOR” FOR INFORMATION PROTECTION

Employees' sense of ownership towards the information,

information systems and hence the organization itself may

have a positive impact on employees‟ behavior. It has been

argued that ownership creates a sense of responsibility for an

object [14] which initiates behaviors of protecting the owned

object [35].

Bear, Manning and Izard [15] purport that responsible

behavior in obedience and compliance to a rule entails self-

motivation and self-guidance, and is not merely a response to

external supervision, rewards, and punishment. Hence, the external factors emphasized in prior research on employees' compliance with security policy are not the sole contributors to building responsible behavior in employees. While

embedding these factors, which are merely enforcement of

power, is empirically effective, over time their significance erodes. Such external factors change continually, and employees find it impossible to keep conforming to their constantly shifting direction [5].

Therefore we believe that the good behavior of the employees

on policy compliance is only temporary, if it is present at all.

On the other hand, the responsible behavior of employees, which derives from cognitive and emotional factors [15], is likely to be more effective in shaping their

behavior towards compliance. With responsible behavior,

people will act accordingly because they perceive that they are

the cause of their own behavior, in which case, if they do not

comply with the policies, they themselves are responsible for that as well, not other people or external factors. People

with responsible behavior will act the way they should

whether anyone is watching or not and also remain aware of

the consequences of their behavior for the welfare of others.

In summary, as stated previously, the emancipation of

employees allows them to acquire a feeling of ownership

towards the organization (object). This is achieved by

providing an equality of power in terms of information access

and consequent involvement in the decision making process.

This ownership signifies that both employees and managers

are entitled to use and engage with the object, and hence be

held responsible by each other in protecting the owned object.

Based upon this assumption, we postulate that employees will

protect the organization from any harmful acts by performing a

'right' responsible behavior in accordance with organizational

security policies.

V. A CONCEPTUAL MODEL OF EMANCIPATION AND

INFORMATION PROTECTION

Emancipation refers to freeing employees from oppressive

conditions, hence enabling them to realize their full potential

[3]. According to Alvesson and Willmott [3], the concept of

emancipation in organizations is described as “freeing

employees from unnecessarily alienating forms of work in

organization” (p.433). Ultimately, the central focus of emancipation lies in providing a communicative discourse in which all levels of employees are able to access the same amount of information, are involved in decision-making processes, and share responsibility, hence promoting democracy

in organizations. In the same vein Hirschheim and Klein [17]

purport that emancipation can be practiced in organizations through involvement in the decision-making process, which in turn reduces the power of management and increases employees'

responsibility. Similarly, Sashkin [28] identifies the need for

employers to fulfill employees' basic human needs at the

workplace for autonomy, meaningfulness of work and

decreased isolation. Failure to provide those needs, according

to Sashkin [28], is unethical as it creates physiological and

physical harm to the employees. In relation to this previous literature, we propose that while employees' involvement in the decision-making process is vital, it is not achievable if they are not emancipated in accessing information. Our argument is

based on the current practices of organizations in the digital age, wherein most information, including that used for decision-making purposes, is input, processed, and delivered via tightly controlled computer-based information systems. Without

substantial and suitably structured access privileges for the

information, it is rather difficult for the employees to be

involved in and contribute to the process. A summary of our

postulated relationships between emancipation and information protection is presented in Figure 1.


Emancipation of individuals has become an increasingly important method of increasing employees' creativity and productivity, and researchers have directed increased attention towards its effectiveness. Research results consistently support the notion that giving the employees involved in a task the authority to take decisions makes them more creative and motivated to aim for successful

production decisions. In a security context, the aim is towards

protection of the information, particularly in compliance with

security policies. Although previous studies in IS have

examined the positive consequences of emancipating

employees in information system development, none has made

an attempt to study emancipation of employees for access to

information. Based on the above, we posit that by

emancipating employees with respect to information access,

employees are more likely to comply with an organization's

security policies directly or indirectly.

Fig. 1. Conceptualizing employee emancipation and protection of

information

As previously mentioned we postulate that emancipation can

also have an indirect relationship with protection behavior.

One of the indirect routes is its influence on employees' commitment to the organization. Employees' commitment is

defined as the feeling of desire, need or obligation to remain in

an organization [23]. As employees are given more power over information access and deliberate decision-making authority, their commitment towards the organization increases. There is support in the literature for this

contention, which claims that participation in decision making

increases employees' organizational commitment [10][30]

whereas alienation of employees from an organization will

diminish their commitment [33].

In information systems, the concept of emancipation has

largely been investigated within the realm of information

systems development (ISD), particularly through the

application of both Habermas's Theory of Communicative

Action and Critical Social Theory. Cecez-Kecmanovic et al. [9] address the effectiveness of emancipation in their 15-year longitudinal case study of the development of the Information System for Information Dissemination (ISID) at the Colruyt Company in Belgium. They discover that with increased power given to the employees, they are less alienated and oppressed, leading to increased individual commitment.

Similarly, Kanungo [20], examining the emancipatory role of ICT development in rural areas, finds that involving villagers in the development process makes them more committed to using the systems. Regardless of the

context of the organization, employees' commitment will direct an individual's effort toward achieving organizational

goals [24].

Another stream of research, as introduced previously,

studies the achievement of an organization's security goals

through a sense of ownership and responsible behavior.

Previous studies provide substantial evidence that employees' involvement in the decision-making process creates a stronger sense of ownership or identity [6][29]. With the empowerment of being involved in the decision-making process, employees will

be able to contribute more valuable ideas to improve

operations since they are attached directly at the operational

level. Subsequently, their meaningful suggestions are more

likely to be accepted and adopted. This in turn strengthens

motivation by providing employees with the opportunity to

attain intrinsic rewards from their work, such as a greater

feeling of being valued. This helps in inculcating and

increasing a sense of ownership towards the organization.

Our definition of sense of ownership is borrowed from van

Dyne and Pierce [34], who define it as “psychologically

experienced phenomenon in which an employee develops

possessive feelings for the target” (p. 439). The feeling of

ownership makes the employees feel they own the

organization, in other words- they have a feeling that the

organization belongs to them, which is fundamentally different

from the feeling of their need to remain in an organization or

organizational commitment [34]. If employees feel that they

are part of the business processes, they become much more

attached to their work. As a result employees become more

conscientious of their work and how it affects the organization.

Thus, the feeling of ownership initiates a sense of

responsibility for the entity [34].

Being responsible, according to Bear et al. [15], means the

ability to take decisions that are morally and socially right.

This conveys the notion that employees will act accordingly

because they realize that they are accountable for their own

behavior, regardless of whether they are being monitored or

not. Bear et al. [15] contend that responsible behavior that is

derived from social cognition and individual emotion is more

important than responsible behavior constituted by external



factors. In their study, they gather evidence on how

responsible behavior of a student not only benefits the

individual but also the members of a school community. The

behavior promotes positive effects such as academic

achievement and self-worth. Motivated from this notion, we

argue that employee's feelings and thoughts (sense of

ownership / feeling of being valued / feeling of importance) are

the primary determinants that explain how individuals become

responsible employees. Embedded responsible behavior influences them to adopt positive attitudes such as protection of information. A summary of our propositions, as discussed in this section, is presented in Table I.
TABLE I
PROPOSITIONS POSTULATING RELATIONSHIPS BETWEEN EMANCIPATION AND INFORMATION PROTECTION

Proposition 1: Emancipation will positively influence information protection, both directly and indirectly through affective commitment, sense of ownership, and responsible behavior.
Proposition 2: Emancipation is positively associated with employee commitment to an organization.
Proposition 3: Employee commitment to an organization is positively associated with employee protection of information.
Proposition 4: Emancipation is positively associated with an employee's sense of ownership / belonging, leading to information protection.
Proposition 5: An employee's sense of ownership / belonging is positively associated with responsible behavior, leading to information protection.
Proposition 6: Employee responsible behavior is positively associated with protection of information.

VI. CONCLUSION

In this paper we have introduced the concept of employees'

emancipation and its relationship with organizational

commitment, sense of ownership, responsible behavior and

protection of information within an organization. While many

researchers have commented on employees' behavioral

compliance with security policies and laws and regulations,

our research introduces fresh insight by establishing a

relationship between emancipation and information protection.

Rather than being enshrouded in positivist or interpretivist

conceptions, in this research we introduce a critical theorist

orientation for the study of information protection. We have

argued that protection of information in an organization is

influenced by the human cognitive and emotional feelings of

individuals, which are derived from the emancipation of an

employee from organizational power structures that govern

information access. Hence, this paper lays a foundation for

further theoretical and empirical research on the protection of

information.

REFERENCES

[1] F. Adler, Sisters in Crime: The Rise of the New Female Criminal. New

York: McGraw-Hill. 1975

[2] F. Adler, “The interaction between women‟s emancipation and female

criminality: a cross-cultural perspective”, International Journal of Criminology and Penology, 5, 101-112, 1977.

[3] M. Alvesson and H. Willmott, “On the Idea of Emancipation In

Management And Organization Studies”, Academy of Management

Review, 17(3), 432-464, 1992.

[4] A. Arboleda, P.C. Morrow, M. R. Crum, and S.C. Mack, “Management

Practices as Antecedents of Safety Culture within The Trucking

Industry: Similarities And Differences By Hierarchical Level”, Journal

of Safety Research 34, no. 2: 189-197, 2003.

[5] A. Bandura, “Social cognitive theory: an agentic perspective”, Annual

Review of Psychology, 52, 1-26, 2001.

[6] B. Benkhoff, “Ignoring Commitment Is Costly: New Approaches

Establish the Missing Link Between Commitment and Performance”,

Human Relations, 50(6), 701-726, 1997.

[7] K. Booth, “Security and emancipation”, Review of International Studies,

17(4), 313-26, 1991.

[8] S. Box & C. Hale, “ Liberation/emancipation, economic marginalization,

or less chivalry: The relevance of three theoretical arguments to female

crime patterns in England and Wales, 1951–1980”, Criminology: An

Interdisciplinary Journal, 22(4), 473-497, 1984.

[9] D. Cecez-Kecmanovic, M. Janson & A. Brown, “The rationality

framework for a critical study of information systems”, Journal of

Information Technology (Routledge, Ltd.), 17(4), 215-227, 2002.

[10] A.L. Pearson and C. Duffy, “The Importance of Job Content and Social Information on Organizational Commitment and Job Satisfaction: A Study in Australian and Malaysian Nursing Contexts”, Asia Pacific Journal of Human Resources, 36, 17, 1999.

[11] M. Chan, I. Woon and A. Kankanhalli “Perceptions of Information

Security in the Workplace: Linking Information Security Climate to

Compliant Behavior”, Journal of Information Privacy & Security 1, no.

3: 18-41, 2005.

[12] R. Deming, Women: the new criminals. Nashville: Thomas Nelson,

1997.

[13] G. Dhillon and J. Backhouse, “Current directions in IS security

research: towards socio-organizational perspectives”, Information Systems Journal, 11(2), 127, 2001.

[14] L. Furby, “Possession in humans: an exploratory study of its meaning

and motivation”, Social Behavior and Personality, 6, 49–65, 1978.

[15] G.G. Bear, M. A. Manning & C. E. Izard, “Responsible Behavior: The

Importance of Social Cognition and Emotion”, School Psychology

Quarterly, 18(2), 140-157, 2003.

[16] J. Habermas, The theory of communicative action, Volume 2. Boston,

MA. Beacon Press, 1987.

[17] R.A. Hirschheim and H.K. Klein, “Realizing emancipatory principles in

information systems development: The Case for ETHICS”, MIS

Quarterly, 18(1), 83-109, 1994.

[18] J. James and W. Thornton, “Women's Liberation and The Female

Delinquent”, Journal of Research in Crime & Delinquency, 17(2), 230,

1980.

[19] G. M. Kandathil and R. Varman, “Contradictions of Employee

Involvement, Information Sharing and Expectations: A Case Study of an

Indian Worker Cooperative”, Economic and Industrial Democracy

28(1): 140-174, 2007.

[20] S. Kanungo, “On the emancipatory role of rural information systems”,

Information Technology & People, 17(4), 407-422, 2004.

[21] S. Karstedt, “Emancipation, Crime and Problem Behavior of Women: A

Perspective from Germany”, Gender Issues,18(3), 21, 2000.

[22] K.J. Knapp, T.E. Marshall, R.K. Rainer and F.N. Ford, “Information

security: management's effect on culture and policy”, Information

Management & Computer Security 14, no. 1: 24-36, 2006.

[23] J. Meyer and N. Allen, Commitment in the workplace: Theory,

research, and application. Thousand Oaks, CA US: Sage Publications,

Inc. 1997.

[24] J. Meyer and C. Smith, “HRM Practices and Organizational

Commitment: Test of a Mediation Model”, Canadian Journal of

Administrative Sciences (Canadian Journal of Administrative

Sciences), 17(4), 319-331, 2000.

[25] O.K. Ngwenyama, “The Critical Social Theory Approach to Information

Systems: Problems and Challenges,” in Information Systems Research:

Contemporary Approaches and Emergent Traditions, H.E. Nissen, H.K. Klein, and R. Hirschheim (eds.), North-Holland, Amsterdam, pp. 267-

278, 1991.

[26] S. Pahnila, M. Siponen and A. Mahmood, “Employees' Behavior

towards IS Security Policy Compliance”, Proceedings of the 40th

Hawaii International Conference on System Sciences 2007.

[27] D.B. Parker, “Motivating the Workforce to Support Security”, Risk

Management (00355593) 50, no. 7: 16-19, 2003.

[28] M. Sashkin, “Participative management remains an ethical imperative”,

Organizational Dynamics 14, no. 4: 62-75, 1986.

[29] B. Scott-Ladd, A. Travaglione and V. Marshall, “Causal inferences

between participation in decision making, task attributes, work effort,

rewards, job satisfaction and commitment”, Leadership & Organization

Development Journal, 27(5), 399-414, 2006.

[30] M. Siponen, S. Pahnila and A. Mahmood, “New Approaches for

Security, Privacy and Trust in Complex Environments”, IFIP

International Federation for Information Processing, Volume 232,

133–144, 2007.

[31] C. Spitzmüller and J.M. Stanton, “Examining employee compliance

with organizational surveillance and monitoring”, Journal of

Occupational & Organizational Psychology 79, no. 2: 245-272, 2006.

[32] J.M. Stanton, K.R. Stam, P. Mastrangelo, and J. Jolton, “Analysis of

end user security behaviors”, Computers & Security 24, no. 2: 124-133,

2005.

[33] G. Tonks and L. Nelson, “HRM: A Contributor To Employee

Alienation?” Research & Practice in Human Resource

Management, 16(1), 1-17, 2008.


[34] L. van Dyne and J. Pierce, “Psychological ownership and feelings of

possession: three field studies predicting employee attitudes and

organizational citizenship behavior”, Journal of Organizational

Behavior, 25(4), 439-459, 2004.

[35] S. Zuboff, In the Age of the Smart Machine. New York, Basic Books,

1984.

[36] A. Melek and M. MacKinnon. (2006). Deloitte global security survey.

[Online]. Available:

http://www.deloitte.com/dtt/cda/doc/content/us_fsi_150606globalsecuritysurvey(1).pdf

Enhancing Security Education with Hands-On Laboratory Exercises

Wenliang Du, Karthick Jayaraman, and Noreen B. Gaubatz
Department of EECS / Office of International Research and Assessment
Syracuse University, Syracuse, NY 13244
{wedu, kjayaram, nbgaubat}@syr.edu

Abstract—We present an instructional suite of labs that can

help in teaching security courses. Our labs are publicly available

for use. We have used these labs in teaching security courses and

also helped several other instructors adopt these labs to suit their

style of teaching. Based on this experience, we present some

guidelines for using these labs in security courses. Furthermore,

we also describe the results of using these labs among students.

Index Terms—Security Education, Laboratory Exercises,

Hands-On Learning

I. INTRODUCTION

It is widely known that learning-by-doing can significantly enhance students' learning in many disciplines [1], [5], including computer security education [3]. Eight years ago, when we started looking for well-designed hands-on projects for

our security courses, we could not find many. Although we did

find a small number of good individual laboratories [2], [4],

[6], there were two problems in using them. First, the coverage

of security principles was quite narrow. Second, the lab

environments varied and, as we intended to use these labs in

a single course, students would have to spend a significant

amount of time to learn to use the underlying environment.

Motivated by the need for better, coherently designed, and well-supported lab exercises for security education, we started our journey in 2002. Our objective was to develop a suite of

labs that cover a wide spectrum of principles, ideas, and

technologies that are essential for teaching computer security.

We call these labs the SEED labs (SEED stands for Security

EDucation). Our design particularly focused on two aspects:

the lab environment and the lab contents.

A. Low-cost and Consistent Environment

First, we wanted to have a unified lab environment that is

consistent for different lab exercises, so students do not need

to learn a new lab environment for each lab, because learning a

new environment is not easy. Second, the environment used

for these labs must be affordable to enable wider adoption.

With these constraints in mind, we created a lab environment comprising the open-source Linux operating system and a number of open-source software tools. The lab environment can be created on students' personal computers or on departmental machines using virtual machine (VM) software. Free VM software is available, such as VirtualBox, and even the commercial product VMware is free for educational use through VMware's academic program. (This work was partially sponsored by Award No. 0618680 from the United States National Science Foundation.)

B. Lab Contents with Wide Coverage

Unlike some traditional courses, such as operating systems

and compilers, there is no common consensus on how security should be taught or what content should be covered.

Therefore, we decided not to base our labs on a particular

course. Instead, we decided to develop labs that can cover a

wide spectrum of security principles. Although the style and

focus of security courses could vary with instructors, the

fundamental security principles that all instructors would like

to teach do not significantly vary. Because the labs cover a

range of security principles, instructors could choose the labs

from our list to fit their course based on the security principles

they want to cover. Adoption of our lab exercises does not require any changes in the course structure.

After eight years of working on the project (funded by two NSF grants), we have developed over 25 labs, most of which have been used and tested in courses at our own university; several of them were also used by instructors from other universities. The labs are available from our project web site: http://www.cis.syr.edu/~wedu/seed/. Since 2007, the labs have been downloaded over 40,000 times from computers outside our university network. The labs are broadly

categorized into three classes:

1) Vulnerability and attack exercises

The objective of these exercises is to illustrate and help the

student learn a vulnerability in detail. Each of these exercises typically has three parts. First, students will understand the vulnerability based on simple examples and identify it in either a real program or a synthetic example. Second, students

will construct real attacks to take advantage of the

vulnerability. Third, students will fix the vulnerabilities and

also comment on prevailing mitigation methods and their

effectiveness.



2) Design and implementation exercises

The objective of these exercises is to help the student

understand the concepts of secure system development.

Students are expected to design a system that provides a

specific security functionality. The exercise outlines clear

objectives and students would explore various choices for

fulfilling the objectives, analyze the impact of the choices on

security and performance, make appropriate design decisions,

and finally implement and demonstrate their system.

3) Exploration exercises

The objective of these exercises is to encourage the students to

explore or play with an existing security functionality.

Exploration labs are like a “guided tour” of a system, in which,

students can “touch” and “interact with” the key components

of a security system to either learn or question the underlying

security principles that the system is based on.

C. Experiments and Evaluation

Over the last seven years, we have used the SEED labs in two graduate security courses and one undergraduate security course. We have been revising those labs based on student feedback. Now, most of the labs are quite stable and are ready for use by other

instructors. So far, fifteen other universities have informed us

that they have used some of our SEED labs in their courses.

To evaluate the effectiveness of our labs, we collected survey

data from students for each of the labs they used. The results

are quite encouraging. We will present our evaluation results

in this paper.

II. DESCRIPTION OF LABS

In this section, we provide a brief description of our SEED

labs. Since the actual description for each lab ranges from 2 to 10 pages, we cannot include them in this paper; we will only describe a selected few. Readers can find the detailed lab descriptions on our web page, http://www.cis.syr.edu/~wedu/seed/. Table I contains the list of

labs.

A. Vulnerability and Attack Labs

All the vulnerability and attack labs are structured to let the

student learn the vulnerability in detail by creating an exploit

for a program with known vulnerabilities. Furthermore, the

students are asked to comment on the prevailing mitigation methods and their effectiveness. Table I contains all 12 vulnerability and attack labs. Broadly, we have three categories of vulnerabilities: general software vulnerabilities, network protocol vulnerabilities, and web application

vulnerabilities. We describe the nature of the labs in each of

the categories. The detailed lab descriptions can be obtained from our web site. In our experience, the vulnerability labs are heavily used by several instructors.

General Software Vulnerabilities

We have lab exercises focusing on general software

vulnerabilities such as buffer overflows, format-string

vulnerabilities, race conditions, etc. The latest versions of Linux-based systems have several features for countering these vulnerabilities. Therefore, where necessary, the lab exercises provide information on how to disable these countermeasures in order to perform the exercises.
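To give a concrete flavor of these exercises, the short C program below is a minimal sketch of the kind of vulnerable code a buffer-overflow lab examines; the function name, buffer size, and compile flags are illustrative and are not taken from the actual lab handout:

/* stack.c - an illustrative stack-based buffer overflow (hypothetical
 * names and sizes). To observe the overflow, compile with the usual
 * protections disabled, e.g.: gcc -fno-stack-protector -o stack stack.c */
#include <stdio.h>
#include <string.h>

void vulnerable(const char *input)
{
    char buf[24];
    /* BUG: strcpy() performs no bounds check, so input longer than 23
     * bytes overwrites adjacent stack memory, including the saved
     * return address. */
    strcpy(buf, input);
    printf("copied: %s\n", buf);
}

int main(int argc, char *argv[])
{
    if (argc > 1)
        vulnerable(argv[1]);
    return 0;
}

Observing the crash, crafting input that redirects control flow, and then re-enabling each countermeasure mirrors the understand-attack-fix structure of these exercises.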

Network Protocol Vulnerabilities

In this category, we have two labs, namely TCP/IP attacks and

DNS pharming. In the TCP/IP attack lab, students discover

and exploit vulnerabilities of the TCP/IP implementations. In

the DNS pharming lab, students understand how DNS

pharming attacks could be used to manipulate the name

resolution process of trusted web sites to compromise user privacy.

Web Application Vulnerabilities

We have labs focusing on web-based vulnerabilities such as

cross-site scripting, cross-site request forgery, clickjacking,

etc. The lab exercises for these labs involve exploiting a web

application with known vulnerabilities. We have set up some

web applications inside our pre-configured virtual machine

that students will use for these labs. Therefore, instructors need

not maintain any web servers or install or configure

applications for using these labs.

B. Exploration Labs

In the exploration labs, students both learn and critique the

design and implementation of an existing security component.

We have four labs in this category:

Packet Sniffing and Spoofing Lab

In this lab, students learn how packet sniffing and spoofing

tools are implemented. Students will start with some simple

tools, understand them by reading their source code and using

them, and modify them to implement some interesting variants.
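As a rough illustration of what such a starting tool looks like, the sketch below uses the libpcap library to capture a few packets and print their sizes; the interface name "eth0" and the packet count are assumptions, and this is not the lab's actual code:

/* sniff.c - a minimal libpcap-based sniffer (illustrative sketch).
 * Build: gcc -o sniff sniff.c -lpcap    (run with root privileges) */
#include <pcap.h>
#include <stdio.h>

static void got_packet(u_char *user, const struct pcap_pkthdr *h,
                       const u_char *bytes)
{
    /* Print the captured length; a real tool would parse the
     * Ethernet/IP headers here. */
    printf("captured %u bytes\n", h->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    /* "eth0" is an assumption; use the interface in your lab VM. */
    pcap_t *handle = pcap_open_live("eth0", BUFSIZ, 1, 1000, errbuf);
    if (!handle) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_loop(handle, 10, got_packet, NULL); /* capture 10 packets */
    pcap_close(handle);
    return 0;
}

A natural student modification is to decode the IP and TCP headers in the callback, or to pair the sniffer with a raw-socket spoofing tool.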

Linux Capability Lab

The objective of this lab is to train students to use the

capability system implemented in Linux and also gain

insights on how the capability system is implemented in the

operating system.
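For orientation, the following sketch exercises the user-space libcap API by printing the process's capabilities and clearing one of them; the choice of CAP_NET_RAW is an arbitrary example, and the lab itself goes further into the kernel side of the mechanism:

/* caps.c - inspecting and dropping a Linux capability (sketch).
 * Build: gcc -o caps caps.c -lcap */
#include <stdio.h>
#include <sys/capability.h>

int main(void)
{
    cap_t caps = cap_get_proc();           /* this process's capabilities */
    if (!caps) { perror("cap_get_proc"); return 1; }

    char *text = cap_to_text(caps, NULL);  /* human-readable form */
    printf("current capabilities: %s\n", text);
    cap_free(text);

    /* Clear CAP_NET_RAW from the effective set and re-apply. */
    cap_value_t v = CAP_NET_RAW;
    cap_set_flag(caps, CAP_EFFECTIVE, 1, &v, CAP_CLEAR);
    if (cap_set_proc(caps) != 0)
        perror("cap_set_proc");

    cap_free(caps);
    return 0;
}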

Web Browser Access Control Lab

The objective of this lab is to help the student understand

and critique the same origin policy commonly implemented in

all web browsers for access control.

Pluggable Authentication Module

The objective of this lab is to illustrate the use of the

flexible authentication technique called pluggable

authentication module (PAM) implemented in Linux. PAM

provides a way for developing programs that are independent

of the authentication scheme.
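As a minimal sketch of what a PAM-aware application looks like, the program below authenticates a user against an existing PAM service; the service name "login" and the fallback user name are assumptions made for illustration:

/* pam_check.c - authenticating a user through PAM (sketch).
 * Build: gcc -o pam_check pam_check.c -lpam -lpam_misc */
#include <stdio.h>
#include <security/pam_appl.h>
#include <security/pam_misc.h>

static struct pam_conv conv = { misc_conv, NULL }; /* text-mode prompts */

int main(int argc, char *argv[])
{
    pam_handle_t *pamh = NULL;
    const char *user = (argc > 1) ? argv[1] : "alice"; /* hypothetical user */
    int ret = pam_start("login", user, &conv, &pamh);

    if (ret == PAM_SUCCESS)
        ret = pam_authenticate(pamh, 0); /* prompts for a password */
    if (ret == PAM_SUCCESS)
        ret = pam_acct_mgmt(pamh, 0);    /* is the account valid? */

    printf("%s\n", ret == PAM_SUCCESS ? "authenticated" : "denied");
    if (pamh)
        pam_end(pamh, ret);
    return ret == PAM_SUCCESS ? 0 : 1;
}

Because the authentication policy lives in the PAM configuration rather than in the program, the same code works unchanged whether the service uses passwords, one-time tokens, or another scheme.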


C. Design Labs

Each of the design labs provides clear functional

requirements for implementing a system. Furthermore,

students are also provided with supporting materials and documents. To help the students get started, some design

choices are outlined. The students are first asked to provide a detailed design of the system, describing why they made particular design choices and how these impact the security and

performance of the system. The faculty or TA could provide

feedback on their design before the students implement the system. Currently, we have nine design labs. We provide a

brief description of these labs as follows:

Linux Firewall Lab and Minix Firewall Lab

In this lab exercise, students are expected to implement a

simple packet filtering firewall. The firewall should provide an

interface to the user to enable configuration. Each firewall rule

describes the types of packets, in terms of their properties such

as source IP address, protocol, etc., that will be allowed or

blocked. The design labs are available in both Linux and

Minix operating systems. For Minix, students have to

modify the Minix kernel to implement the firewall. For the Linux operating system, students will use the loadable kernel module mechanism and netfilter hooks to implement the firewall.
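The skeleton below sketches what such a module looks like on a 2.6-era kernel of the kind these labs were developed on (newer kernels use nf_register_net_hook() and a different hook signature); the hard-coded rule that drops inbound telnet traffic stands in for the user-configurable rules the lab actually requires:

/* drop_telnet.c - a skeleton netfilter hook (sketch for a 2.6-era kernel). */
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/in.h>

static unsigned int hook_fn(unsigned int hooknum, struct sk_buff *skb,
                            const struct net_device *in,
                            const struct net_device *out,
                            int (*okfn)(struct sk_buff *))
{
    struct iphdr *iph = ip_hdr(skb);
    if (iph->protocol == IPPROTO_TCP) {
        struct tcphdr *tcph =
            (struct tcphdr *)((unsigned char *)iph + iph->ihl * 4);
        if (ntohs(tcph->dest) == 23)   /* drop inbound telnet */
            return NF_DROP;
    }
    return NF_ACCEPT;                  /* let everything else pass */
}

static struct nf_hook_ops ops = {
    .hook     = hook_fn,
    .pf       = PF_INET,
    .hooknum  = NF_INET_PRE_ROUTING,   /* inspect packets as they arrive */
    .priority = NF_IP_PRI_FIRST,
};

static int __init fw_init(void)  { return nf_register_hook(&ops); }
static void __exit fw_exit(void) { nf_unregister_hook(&ops); }

module_init(fw_init);
module_exit(fw_exit);
MODULE_LICENSE("GPL");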

IPSec Lab

In this lab, students will implement a simplified version of

the IPSec protocol for the Minix operating system. The lab

tasks have been simplified to enable students to complete the lab exercise within four to six weeks. Students need to use

their IPSec implementation to construct a Virtual Private

Network (VPN).
[TABLE I - THE COVERAGE OF SEED LABS. Legend: ST - Systems, NW - Networking, PG - Programming, SWE - Software engineering; G - for graduate students only, UG - for both undergraduates and graduates; AU - Authentication, AC - Access control, CG - Cryptography, SC - Secure coding, SD - Secure design.]

VPN Lab

VPN is a widely used security technology. VPN can be built

upon IPSec or Secure Socket Layer (SSL). These are two

fundamentally different approaches for building VPNs. In this

lab, we focus on SSL-based VPNs, often referred to simply as SSL VPNs. The learning objective of this

lab is for students to master the network and security

technologies underlying SSL VPNs. The design and

implementation of SSL VPNs exemplify a number of security



principles and technologies, including cryptography, integrity,

authentication, key management, key exchange, and Public-

Key Infrastructure (PKI). To achieve this goal, students will

implement a simple SSL VPN for Ubuntu.
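One building block commonly used for such a VPN is a TUN virtual interface, from which a user-space program reads and writes raw IP packets before relaying them over the protected channel. The sketch below shows only this piece, under the assumption that an SSL/TLS-protected socket would carry the tunneled packets; the device name "tun0" is illustrative:

/* tun_open.c - opening a TUN interface and reading one IP packet (sketch).
 * Build: gcc -o tun_open tun_open.c    (requires root privileges) */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

int main(void)
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) { perror("open /dev/net/tun"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;     /* raw IP packets, no header */
    strncpy(ifr.ifr_name, "tun0", IFNAMSIZ); /* requested device name */
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); return 1; }

    /* Once tun0 is configured and up, each read() returns one IP packet
     * routed to it; a VPN client would encrypt the packet and send it
     * over the SSL channel (this call blocks until a packet arrives). */
    char buf[2048];
    ssize_t n = read(fd, buf, sizeof(buf));
    printf("read %zd bytes from %s\n", n, ifr.ifr_name);
    close(fd);
    return 0;
}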

Role-Based Access Control (RBAC) Lab

In this lab, students will implement an integrated access-control system that uses both capability-based and role-based access control mechanisms. The lab tasks require students to modify the Minix kernel to implement the proposed system.

Capability Lab

In this lab, students will design and implement a capability-based access control system in Minix. Similar to the RBAC lab, the lab tasks involve modifying the Minix kernel. The capability-based access control system coexists with the file-based permissions used in Minix.
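To make the capability idea concrete, the sketch below shows the sort of check such a kernel could perform before allowing an operation. Every name in it is a hypothetical illustration, not a Minix API or the lab's required interface.

```c
/* Hypothetical illustration of a kernel capability check; none of
 * these names are Minix APIs or part of the lab's specification. */
#include <stdint.h>

#define CAP_READ  0x1u
#define CAP_WRITE 0x2u

struct capability {
    uint32_t object_id;  /* object the capability refers to (e.g., an inode) */
    uint32_t rights;     /* bitmask of operations the holder may perform */
};

/* Permit an operation only if the process holds a capability naming
 * the object that includes the requested right. */
static int cap_check(const struct capability *caps, int ncaps,
                     uint32_t object_id, uint32_t right)
{
    int i;
    for (i = 0; i < ncaps; i++)
        if (caps[i].object_id == object_id && (caps[i].rights & right))
            return 1;   /* permitted */
    return 0;           /* denied: no matching capability */
}
```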

Encrypted File System Lab

In this lab, students will design and implement an encrypted

file system for Minix. A key requirement of the design is to

ensure that the EFS is transparent to applications. From the user's perspective, no work beyond mounting the new file system should be required to use it.
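The core of such a design is a pair of transformations interposed on the block read and write paths, so that applications see only plaintext. The toy helper below shows the shape of that interposition with an XOR placeholder; a real design would use a proper cipher such as AES, and nothing here reflects the lab's actual specification.

```c
/* Toy encrypt/decrypt helper of the kind an encrypted file system
 * interposes on its block read/write paths. XOR is a placeholder that
 * keeps the toy symmetric; a real design would use a real cipher. */
#include <stddef.h>
#include <stdint.h>

/* Transform one block in place on its way to or from the disk;
 * applying the function twice with the same key restores the data. */
static void xor_block(uint8_t *block, size_t len,
                      const uint8_t *key, size_t keylen)
{
    size_t i;
    for (i = 0; i < len; i++)
        block[i] ^= key[i % keylen];
}
```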

Set-random UID Lab

In this lab, students will implement a simple sandbox for

executing untrusted programs. In the simple sandbox,

programs are assigned a randomly generated UID that does not exist in the system. Because the random UID owns no files, the access permissions of the executed program are limited.
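A user-space analogue may help clarify the mechanism. The launcher below is our own illustration, not the lab's in-kernel design: it picks an unused random UID and drops to it before executing the untrusted program, and must be started as root for setuid() to succeed.

```c
/*
 * User-space analogue of the set-random-UID sandbox (the lab itself
 * implements the idea inside the OS). Must be started as root so
 * setuid() can drop to the freshly chosen identity. Illustrative only.
 */
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    uid_t uid;

    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return 1;
    }

    /* Keep drawing random IDs until one matches no existing account,
     * so the sandboxed process owns no files on the system. */
    srand((unsigned)time(NULL));
    do {
        uid = 20000 + (rand() % 40000);
    } while (getpwuid(uid) != NULL);

    /* Reuse the same number as the group ID for simplicity, then drop
     * privileges; order matters (group first, then user). */
    if (setgid((gid_t)uid) != 0 || setuid(uid) != 0) {
        perror("dropping privileges");
        return 1;
    }

    execvp(argv[1], &argv[1]);   /* run the untrusted program sandboxed */
    perror("execvp");
    return 1;
}
```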

Address Space Randomization Lab

In this lab, students will implement methods for

randomizing the heap and stack of the Minix operating

system.
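As a toy user-space rendering of the stack half of this idea (the lab itself performs the randomization inside the Minix kernel at program load time), the program below shifts its stack pointer by a random amount so that stack addresses differ across runs:

```c
/*
 * Toy demonstration of stack randomization: pad the stack by a random
 * amount before doing any work, so stack addresses differ from run to
 * run. On systems with ASLR already enabled, addresses vary even
 * without this padding; the toy just makes the mechanism visible.
 */
#include <alloca.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void work(void)
{
    int local;
    printf("a stack variable lives at %p\n", (void *)&local);
}

int main(void)
{
    srand((unsigned)time(NULL));
    size_t pad = (size_t)(rand() % 4096) * 16;  /* up to ~64 KiB of offset */
    volatile char *p = alloca(pad);             /* shift the stack pointer */
    if (pad > 0)
        p[0] = 0;          /* touch the padding so it is not optimized away */
    work();
    return 0;
}
```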

III. COVERAGE OF SEED LABS

A. Principle Coverage

Regardless of how instructors teach computer security and in what contexts (e.g., networking, operating systems, etc.), one thing is certain: they should cover the principles of computer security. In civil engineering, when building bridges, there are well-established principles that must be followed. Security engineering is no different: to build a software system that is intended to be secure, we also need to follow principles. Regardless of how computer security is taught, the fundamental principles that most instructors cover are largely the same, even though they might be covered in different contexts.

The term "security principles" is interpreted differently by different people: some interpret it as software engineering principles, such as the principle of least privilege; some interpret it as access control, authentication, etc. To avoid confusion, we use the following definition:

A computer security principle is an accepted or professed

rule of action or conduct in building a software or hardware

system that is intended to be secure.

We broadly categorized the fundamental computer security principles into the following classes: Authentication (AU), Access Control (AC), Cryptography (CG), Secure Coding (SC), and Secure Design (SD). Table I also shows the security principles covered by each of the SEED labs.

TABLE I
THE COVERAGE OF SEED LABS
Legend: ST - Systems, NW - Networking, PG - Programming, SWE - Software engineering. G - For Graduate students only, UG - For both Undergraduates and Graduates. AU - Authentication, AC - Access control, CG - Cryptography, SC - Secure coding, SD - Secure design.

B. Course Coverage

After studying a number of security courses taught at different universities and colleges, we identified several representative types of courses and here provide suggestions regarding which SEED labs are appropriate for each (Table I).

System-focused Courses

This type of course focuses on security principles and techniques for building a software system. A network, also considered a system, might be part of the course, but it is not the main focus. The focus is mainly on software systems in general; operating systems, programs, and web applications are usually used as examples in these courses.

If an instructor wants to ask students to design and

implement a real system related to system security, there are

several choices. (1) If the instructor wants to let students learn how to use cryptography in a real system, the Encrypted File System Lab is a good choice. (2) If the instructor wants to let students gain more insight into access control mechanisms, the Role-Based Access Control Lab and Capability Lab are good candidates. (3) If the instructor wants students to learn some of the interesting ideas for improving system security, the Address Space Randomization Lab and the Set-Random UID Lab are good candidates. Because all these labs require modifying the underlying operating system kernel, they are meant to be carried out in the Minix operating system. These labs can be used as final projects.

Networking-focused Courses

This type of course focuses mainly on security principles and techniques in networking.

Programming-focused Courses

The goal of this type of course is to teach students secure programming principles for implementing a software system. Most instructors cover a variety of software vulnerabilities in such a course.

Software-Engineering-focused Courses

This type of course focuses on the software engineering principles for building secure software systems. For this type of course, all the vulnerability labs can be used to demonstrate how flaws in design and implementation can lead to security breaches. Moreover, to give students an opportunity to apply the software engineering principles they have learned in class, it is better to ask students to build a reasonably sophisticated system, from design and implementation through testing. Our design/implementation labs can be used for this purpose.

C. Textbook Coverage

Most instructors choose a particular textbook for their courses, and several textbooks are popular among computer security instructors. We show how SEED labs can be used along with those textbooks; in particular, for each textbook we have identified the chapters that can use our SEED labs as lab exercises to enhance students' learning of the subjects in those specific chapters. Our results are summarized in Table II. We have examined the following four textbooks:

Introduction to Computer Security, by Matt Bishop (Addison-Wesley Professional, October 2004). We refer to this book as Bishop I.

Computer Security: Art and Science, by Matt Bishop (Addison-Wesley Professional, December 2002). We refer to this book as Bishop II.

Security in Computing (3rd Edition), by Charles P. Pfleeger and Shari Lawrence Pfleeger (Prentice Hall PTR, 2003). We refer to this book as Pfleeger.

Network Security: Private Communication in a Public World (2nd Edition), by Charlie Kaufman, Radia Perlman, and Mike Speciner (Prentice Hall PTR, 2002). We refer to this book as KPS.

IV. EVALUATION

With the goal of this project being the enhancement of student learning in security education via laboratory exercises, the focus of the current assessment effort is on the efficiency and effectiveness of the SEED labs. The efficiency of the lab exercises is quantified by the following metrics: level of difficulty, whether the time spent on the lab was worthwhile, and clarity of instructions. The effectiveness of the lab exercises assesses student learning and is quantified by the following metrics: student level of interest, attainment of the learning objective, and value of the lab as part of the curriculum. Students were asked to submit a short questionnaire based on these metrics after completing each lab. Student responses were confidential, and completion of the surveys was voluntary. Among all the SEED labs we have developed (about 25), 11 have a statistically sufficient number of survey responses (735 survey responses in total). Therefore, only the survey results from these 11 laboratory exercises were analyzed.

TABLE II
TEXTBOOK MAPPINGS
Numbers indicate chapter numbers.

Eight of the eleven exercises are categorized as Vulnerability and Attack Labs, with their overall learning objective focused on students learning the principles of secure design, programming, and testing. Three of the 11 exercises are identified as Design/Implementation Labs, with their overall learning objective focused on students applying security principles in designing and implementing systems.

Survey data from 5 of the 11 labs were gathered over three years of implementation (2007-2009), with total N for these labs ranging from 39 to 77 student respondents. Four of the 11 labs were included in the curriculum for two of the three years (three of these labs were not included in 2009); total N for these labs ranged from 37 to 60 student respondents. During 2009, two new labs were introduced, with total N ranging from 11 to 15 student respondents. The student survey data analysis across all 11 laboratory exercises is summarized below.

Lab instructions were clear: Approximately three-quarters or more of respondents (73% to 93%) agreed or strongly agreed for 10 of the 11 labs; 60% of respondents indicated similarly for 1 lab.

The time I spent on the lab was worthwhile: Approximately three-quarters or more of respondents (74% to 94%) agreed or strongly agreed for all 11 labs.

The lab was a valuable part of this course: Over 90% of respondents (91% to 99%) agreed or strongly agreed for 10 of the 11 labs; 82% of respondents indicated similarly for 1 lab.

I have attained the learning objective of the lab: Over three-quarters of respondents (78% to 95%) agreed or strongly agreed for all 11 labs.

Student level of interest in the lab: Approximately three-quarters or more of respondents (74% to 92%) reported a high or very high interest level for 10 of the 11 labs; 66% of respondents indicated similarly for 1 lab.

Level of difficulty of the lab: Over one-half of respondents (54% to 100%) reported a somewhat difficult or very difficult level for 8 of the 11 labs; 32% to 46% of respondents indicated similarly for 3 labs.

To further explore these results, survey responses for these items were analyzed by student gender, year of lab administration, level of student preparation, student familiarity with Unix, and level of lab difficulty. Due to page limitations, we are unable to include all the evaluation results in this paper; the detailed evaluation and diagrams can be found on our project web site.

V. CONCLUSION

To help enhance students' learning in computer security education, we have developed a suite of hands-on lab exercises. These labs can be conducted in an environment that is very easy to build using the computing facilities already available to students at most universities. Based on this environment, instructors can select among 25 well-developed laboratory exercises for their courses. These labs cover a wide spectrum of security principles and can accommodate a variety of security courses. We have experimented with these labs in our own courses for the last seven years, and the results are quite promising. More than 15 other universities have also used our labs, and their feedback has also been quite positive.

We would like to disseminate the SEED labs to a larger audience of instructors. All our labs are publicly available for use under an open-source license. Instructors are welcome to modify the lab descriptions to fit their courses. The entire lab environment is packaged as a pre-built virtual machine image that can be downloaded from our web page. We have a dedicated set of assistants, funded by our NSF grant, to help instructors who run into problems using the lab environment.

ACKNOWLEDGMENT

We thank the anonymous reviewers for their comments and

kind suggestions.


A Digital Forensics Program to Retrain America's Veterans

Eric S. Imsand and John A. Hamilton, Jr.
Department of Computer Science & Software Engineering
Auburn University, Auburn, AL 36849
{eimsand, hamilton}@auburn.edu

Abstract—Recent advancements in medicine have enabled doctors to achieve never-before-seen levels of efficacy in treating traumatic battlefield injuries. While these achievements have had the wonderful effect of reducing overall casualties, they have also caused a dramatic increase in the number of soldiers who have been rendered unfit for military service because of physical injury. The result is that many veterans find themselves making an unexpected career switch, despite the fact that many of these individuals are otherwise fully capable of being contributing members of society. For this reason, Auburn University, in partnership with Mississippi State University and Tuskegee University, has created an outreach program to provide vocational training in digital forensics to America's newly disabled veterans. It is believed that this training will positively impact the career opportunities for injured soldiers. Early results support this belief and indicate that the soldiers believe they have benefited from it.

Index Terms—Digital forensics, veterans, computer security, law enforcement

I. INTRODUCTION

ADVANCES in modern medicine have dramatically

improved the odds of surviving a traumatic injury. These

advancements (along with evolved tactics and technology)

have resulted in a sharp decrease in the number of US fatalities

during the recent conflicts in Iraq and Afghanistan in

comparison to previous conflicts of similar size and duration.

These advancements, effective as they have been, are still

incapable of completely treating many different types of

traumatic injury encountered on the battlefield. The result of

this is that wounds that would have been fatal 30 years ago are

now debilitating. In many cases of debilitating injury, the

injured soldiers have been declared medically unfit for

continued service. Many of these soldiers find themselves

with no definitive plan for the future; frequently the soldier

had planned to remain in the armed forces for several more

years.

At the same time that the conflicts overseas are producing a

pool of people in need of a new career, there continues to be a

documented need in the field of criminal justice and

investigation. In this field, the number of job openings is

expected to grow by 20+% between now and 2018 [3][4][5].

Given the longstanding pattern of retired service members

entering the law enforcement and investigative fields following

their discharge, Mississippi State University, Auburn

University, and Tuskegee University decided to create a

program that would attempt to solve two problems

simultaneously by providing recently discharged soldiers with

the skills necessary to tackle the shortage of digital

investigators.

The rest of this paper is organized as follows: in section 2 we describe the curriculum being taught to the soldiers; in section 3 we present the results of an initial survey designed to provide a first glimpse into the efficacy of the training; finally, in section 4 we provide some concluding thoughts and describe the program's future.

II. OVERVIEW OF THE TRAINING

A. Training Overview

This training program is offered via face-to-face instruction

on-site at varying Army posts throughout the continental U.S.

The program is sponsored by the National Science Foundation,

which enables all costs, including instructors and equipment,

to be provided free-of-charge to the Army. Approximately 44

hours of total instruction are provided to the soldiers. Soldiers

are given instruction for four hours per day over a ten- to eleven-day period. Limiting the daily instruction to four hours enables soldiers to continue receiving rehabilitation for their injuries, as well as allowing them time to review the material

before the next day’s class. The class is limited to twenty

students in order to maintain a favorable student-to-teacher

ratio. No restrictions have been put in place to prevent

soldiers with certain injuries, such as traumatic brain injury

(TBI), from participating, provided they can operate a

computer.

B. Required Skills

The skills required for participation in the training are

relatively modest. Soldiers entering the training are expected

to be reasonably proficient with the Windows operating

system. In other words, soldiers are expected to be familiar

with basic concepts such as the keyboard, mouse, monitor, and

computer/tower. In terms of operating the computer, it is

expected that participants will be familiar with basic operating

skills such as left/right clicking of the mouse and typing. It is



also expected that participants will be familiar with the

Microsoft Windows operating system: using the Start button

and task bar; opening and closing programs; minimizing

programs; etc. Beyond these basic skills participants are not

required to have any additional qualifications for participation

in the course. The lack of prior access to the participants made

it impossible to use anything other than the soldiers’ self-

reported computer proficiency when deciding who was

suitable for the course.

C. Training Methodology

Students in the course are taught using a combined

lecture/laboratory format. For any given topic, the instructor

will lecture for approximately two to four hours. During the

lecture, the instructor will provide a basic overview, applicable

theory, as well as a walk-through of any tools that might be

used by an examiner. Following the lecture, students engage in

a laboratory exercise that allows them to gain experience using

the tools and techniques that have just been described by the

instructor.

D. Curriculum

The course is designed with the expectation that the

students have little or no prior experience with information

technology or law enforcement. The goal of the course is to

provide participants with a working introduction to the field of

digital forensics. For students who are already considering the

field of digital forensics, this course is also designed to

provide them with a refresher on many of the topics that they

will be tested on if they should decide to pursue a professional

certification in the field. The course covers the basics in a

variety of disciplines before moving into more advanced

topics. Hands-on exercises are interspersed throughout the

lectures in order to give the students experience using the tools

and techniques they are learning.

The following is a list of basic topics that are covered by the

training program:

Basic Computer Usage

Soldiers are trained on the basics of operating Microsoft

Windows. Ostensibly, these are skills that participants should already have before enrolling in the course. They are covered again to be sure that all of the participants have some exposure to the operating system that will be used before starting the hands-on activities.

Overview of Computer Hardware

Participants are given a brief overview of the different types

of hardware present within a computer, with particular interest

given to the overall functionality provided by each component.

For example, students are taught what the role of the CPU is,

and students are taught about the different types (volatile vs.

permanent) and forms (Cache, RAM, disk) that memory may

take.

Writing A Business Plan & Small Business Advertising

The belief, when creating the curriculum for this program, was that the majority of graduates who seek employment as digital forensic experts will work for some form of law enforcement. It was recognized at the time, however, that there may be individuals in the program who have no desire to become part of law enforcement but might still wish to work as digital forensics investigators. For this reason, a module was added to the program to teach transitioning soldiers how to form their own business, how to write a business plan, some basic accounting, and possible venues in which they should consider advertising.

Introduction to Cybercrime

When creating the curriculum, it was believed that a

relatively low percentage of the participants would have prior

direct experience with digital forensics. Digital forensic

science differs from the forensic science prominently featured

in many TV shows and movies. It was believed that students

would not have been exposed to the practices and procedures

of digital forensics. For this reason, an overview of

cybercrime was added. This module details the types of crimes that are encountered, presents a basic overview of the types of evidence gathered, and gives a basic overview of the procedures and techniques used to analyze the evidence.

An introduction to writing warrants is also provided. Much of

the material featured in this module is repeated elsewhere in

the course; it was believed that this repetition would benefit

the student.

Introduction to Forensics Tools

This module provides an introduction to the tools used

during forensic analysis: write blockers, imaging devices, etc.

Overview of Digital Evidence Search and Seizure Procedures

This module provides a more in-depth examination of the

procedures that should be used when searching for and seizing

digital evidence. The students are provided with a concrete

timeline that tells them what actions should be taken, and the

order in which they should be performed.

Introduction to Imaging and Hashing

This module provides hands-on experience with the creation

of images, as well as experience with the creation of the image

hashes that can be used to verify the integrity of the evidence.
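For readers who want to see the underlying idea, the sketch below streams a file through SHA-1 and prints the digest, the same fingerprinting step the course's tools perform when verifying an image against the original media. It uses OpenSSL's SHA1_* routines (link with -lcrypto) and is our own illustration, not course material.

```c
/*
 * Sketch of evidence-integrity hashing: stream an image file through
 * SHA-1 and print the digest. A mismatch on re-hashing would indicate
 * that the evidence image has changed since acquisition.
 */
#include <stdio.h>
#include <openssl/sha.h>

int main(int argc, char *argv[])
{
    unsigned char buf[8192], digest[SHA_DIGEST_LENGTH];
    SHA_CTX ctx;
    size_t n;
    FILE *f;
    int i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s image-file\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    SHA1_Init(&ctx);
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        SHA1_Update(&ctx, buf, n);        /* hash the image a chunk at a time */
    SHA1_Final(digest, &ctx);
    fclose(f);

    for (i = 0; i < SHA_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);        /* hex fingerprint of the image */
    printf("\n");
    return 0;
}
```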

Overview of Digital Storage Techniques

A basic overview of the different types of storage that are

typically seen: magnetic, optical, and flash. This module also

provides a brief discussion of the logical organization of disks

as well as a discussion of slack space.

Introduction to Digital Evidence

A basic overview of digital evidence, describing when

digital evidence is permanently destroyed, when and where it

can be recovered, and other related topics. The subject of file


signatures and file carving is also introduced.
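As a hedged illustration of what a file-signature scan does, the toy program below slides a three-byte window over a raw image and reports candidate JPEG headers; real carvers go further, locating end-of-file markers and extracting the bytes in between.

```c
/*
 * Toy file-signature scan, the first step of file carving: report
 * offsets where the JPEG start-of-image marker (FF D8 FF) appears
 * in a raw image.
 */
#include <stdio.h>

int main(int argc, char *argv[])
{
    int c1, c2, c3;
    long off = 0;       /* offset of the first byte of the window */
    FILE *f;

    if (argc != 2) {
        fprintf(stderr, "usage: %s raw-image\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    c1 = fgetc(f);
    c2 = fgetc(f);
    while ((c3 = fgetc(f)) != EOF) {
        if (c1 == 0xFF && c2 == 0xD8 && c3 == 0xFF)
            printf("possible JPEG header at offset %ld\n", off);
        c1 = c2;        /* slide the window one byte forward */
        c2 = c3;
        off++;
    }
    fclose(f);
    return 0;
}
```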

Introduction to File Systems

This module provides a beginner’s look at file systems:

what a file system is, how they are organized, as well as some

of the more important differences between the commonly

observed file systems.

E. Hardware and Software Tools

The students in this course are taught to perform their

imaging, hashing, and analysis using the Forensic Toolkit

software package by AccessData [1]. Students are trained

using hardware write-blockers manufactured by WiebeTech

[6].

During the course of training, students create images and hashes of USB drives instead of traditional hard disks. The reason for this is simple: time. The time savings of imaging a 1 GB USB "thumb drive" as opposed to even the smallest modern hard disk are immense. Furthermore, the only difference from a procedural standpoint is the use of a USB write blocker instead of a SCSI/IDE/SATA write blocker.

F. Course Scheduling

As mentioned earlier in this paper, this program was

purposefully structured to be taught over a two-week period.

Because the targeted group of soldiers is typically undergoing

rehabilitation, it was decided to only hold class for four hours

per day. This leaves half the day for the soldiers to continue

their rehabilitation and maintain their other prescribed

appointments.

The other item considered when designing the program was

the current medical and mental status of the trainees. While all

of the soldiers that enter into the program possess their full

cognitive faculties, some of the wounded veterans have

experienced traumatic brain injury (TBI). Soldiers with TBI

frequently find it difficult to maintain concentration for long

periods of time [2]. By only holding class for four hours per

day, soldiers with TBI are better equipped to fully process the

day’s instruction, ensuring that they stay with the rest of the

group and do not fall behind on the material.

Because the daily lectures are designed to be given in four

hour blocks, we typically elect to conduct two different

sessions of the program at a time: a morning session and an

afternoon session. The two sessions are identical. In cases

where a soldier experiences a time conflict with his/her

normally scheduled class time, he/she can sit in on the other

session, space permitting. In some cases soldiers have

attended both of the day’s sessions in order to increase

repetition over the material. We encourage this whenever the

soldier is able and space is available.

G. Cost Description

As previously mentioned in this paper, this program is

funded by the National Science Foundation. This allows us to

provide the instruction at no cost to the Army or the soldiers

taking the course. This fact has greatly increased cooperation

from the Army in scheduling courses and has greatly enhanced

our overall ability to reach the target audience.

H. Initial Results and Performance

Our motivation behind this training is twofold: first, we are

driven by a desire to provide valuable training to America’s

injured soldiers; second, we want to improve our teaching

methodologies and techniques.

During the entire project, teaching efficacy is measured by

providing students with a post-course questionnaire that seeks

to determine how much the students have learned from the

course. The following is an overview of what the soldiers were asked to rate or indicate:

Previous IT/Computer skill

Previous law enforcement experience

Previous IT professional experience

How much they learned in the course

How likely they were to consider a career in digital

forensics, both before and after the course

The likelihood of recommending the course to another

soldier

The results that have been observed to date have been

positive. The following results are derived from an admittedly

small sample of 25 participants. It is believed that surveys

from future participants will confirm these findings:

On a scale of 1-5, with 1 being very little and 5 being

considerable, soldiers were asked to rate the amount

they had learned from the course. The average

response was 4.36.

On a scale of 1-5, with 1 being highly unlikely and 5

being highly likely, soldiers were asked to rate how

strongly they were considering a career in digital

forensics prior to taking the course. The average

response was 2.08.

On a scale of 1-5, with 1 being highly unlikely and 5

being highly likely, soldiers were asked to rate how

strongly they would consider a career in digital

forensics after taking the course. The average response

was 3.8.

On a scale of 1-5, with 1 being highly unlikely and 5

being highly likely, soldiers were asked whether they

would recommend the course to others. The average

response was 4.68.

Additional information about the participants:

The average age reported was 31.08 years

The average self-reported computer skill on a scale of

1-5, with 1 being very unskilled and 5 being very

skilled was 3.2.

Roughly 16% of respondents have reported working as

an IT worker at some point.

Roughly 16% of respondents have reported previous

law enforcement experience.

As shown in the data, we have seen a roughly 35% increase

in the number of soldiers giving some level of consideration to

a career in digital forensics. When asked to rate how much

they had learned and whether they would recommend the

course to a friend, soldiers rated 4.36 and 4.68 respectively.


This indicates that they find the training valuable and

interesting.

A detailed discussion of the demographics represented in

the course is limited by the data that was collected from the

participants. For example, while it is possible to state the

average age of the participants (31 years), it is not possible to

state what percentage of the participants was male or female.

These data, and other items like them such as race and educational achievement, were not collected, because they were not deemed to have a material impact on the overall assessment of the quality of the training.

To date, the partnership between our universities has resulted in approximately eighty students having completed the program. Our partner university has trained a similar number. We anticipate having trained between 300 and 400 servicemen and women across all the partner institutions by the completion of the project.

At the conclusion of the project, we anticipate being able to offer additional details on the efficacy of this training program. The number of graduates of this program who enter the field of digital forensics is of particular interest. Unfortunately, this data is simply not available yet, as many of the participants trained to date have yet to be completely discharged from their respective branches of service.

III. LESSONS LEARNED

Though this study is only partially complete, our efforts to date have yielded several important lessons learned. First and most important is confirmation of our belief that a training program of this kind was sorely needed: we have been warmly received at every post where we have offered the training program.

Not every discovery made during the offering of this course

has been a positive one. For example, scheduling courses has

been more difficult than originally anticipated. Our original

plan was to team with the Veterans Administration (VA) and offer the course through their hospital system. We devised this plan based on the belief that the VA would be able to quickly and easily link us with the soldiers we were trying to target. Unfortunately, this

was not the case. The VA vocational rehabilitation program is

focused on veterans who are transitioning from active duty.

For this reason we have increasingly teamed with the “Warrior

Transition Units” based at most major posts. These groups,

which are organizationally part of the Army, have been very

supportive.

Anecdotal evidence indicates that our program is very popular and in high demand. Every time a course is held, we receive multiple requests to enroll soldiers and people outside the target group in the training program. Space permitting, we allow this, which probably helps to ingratiate us with the host facility.

IV. CONCLUSION AND FUTURE WORK

As evidenced by the completion data, we are still in the

midst of executing this project. Thus far we have found the

reception from the Army and the soldiers to be quite positive;

many institutions have even asked us to consider making

return trips to their posts to hold additional classes in order to

satisfy demand. The soldiers seem interested in the material

and report that they have learned a considerable amount from

this program.

The next logical step following completion of this work is the creation of an online program, which will allow us to expand our reach. At present, the single greatest limitation on the training we can provide is physical capacity: the number of laboratory workstations, instructors, etc. Transitioning to an online format eliminates many of these obstacles. There are countless difficulties in attempting to provide forensics training via the Internet, though, and these difficulties will have to be overcome if meaningful online training is to be made available to transitioning soldiers.

An alternate direction for our future efforts might be to

redesign the course with a focus on preparing participants for

one of the industry accepted certification exams. Obviously

this would be of great help to the students who believe they

wish to pursue a career in digital forensics, as it would

undoubtedly increase their likelihood of success on the

certification exam. We feel, though, that the basic

introductory course that we have developed is in better

keeping with our University’s outreach and service objectives.

Recall that the soldiers, sailors, and airmen targeted for this

training are individuals who are being medically discharged

from the armed forces against their will. These are people

who were not ready to leave the service and find themselves

making an unexpected career switch. It is believed that

offering these persons an introductory course at no cost gives them a risk-free opportunity to learn about a possible new field with ample employment opportunities.

ACKNOWLEDGMENT

Our Lab Manager, Mr. James R. Thompson, has played a

major role in this program. Mr. Thompson is an NSF

Scholarship for Service scholar and will join the US Civil

Service upon completion of his graduate studies at Auburn

University.

REFERENCES

[1] AccessData Inc. AccessData: A Pioneer in Digital Investigations Since

1987. February 15, 2010. http://www.accessdata.com/ (accessed

February 15, 2010).

[2] American Speech-Language-Hearing Association. "Traumatic Brain

Injury (TBI)." American Speech-Language-Hearing Association.

February 12, 2010.

http://www.asha.org/public/speech/disorders/TBI.htm (accessed

February 12, 2010).

[3] Federal Bureau of Labor Statistics. "Private Detectives and

Investigators." Occupational Outlook Handbook, 2010-2011 Edition.

December 17, 2009. http://www.bls.gov/oco/ocos157.htm#outlook

(accessed February 13, 2010).

[4] Federal Bureau of Labor Statistics. "Police and Detectives."

Occupational Outlook Handbook, 2010-2011 Edition. December 17,

2009. http://www.bls.gov/oco/ocos160.htm (accessed February 13,

2010).


[5] Federal Bureau of Labor Statistics. "Science Technicians." Occupational

Outlook Handbook, 2010-2011 Edition. December 17, 2009.

http://www.bls.gov/oco/ocos115.htm (accessed February 13, 2010).

[6] WiebeTech Inc. "USB Write Blocker." WiebeTech. February 15, 2010.

http://www.wiebetech.com/products/USB-WriteBlocker.php (accessed

February 15, 2010).


Roundtable Discussion: Forensic Education

Fabio R. Auffant II Technical Lieutenant,

Computer Crime Unit, NYS Police

Kevin Williams Chair, Psychology Department

University at Albany, SUNY

John A. Hamilton, Jr. Professor, CSSE

Auburn University

THE field of computer and digital forensics is changing rapidly, and our dependence on computers and networks is increasing. Techniques from computer and digital forensics are used not only for investigating crime, but also for auditing systems and recovering lost data. Computer and digital forensics involves data not only from computers, but also from servers, networks, and mobile devices. The needs of the public-sector workforce are growing as demand for such expertise increases within existing IT departments and new forensics divisions are created in agencies. However, these employers are competing with the private sector, which often lures prospective employees with better salaries.

Knowledge of computer and digital forensics has become a necessary component of any IT specialist's skill set, but due to the changing environment, it is also important to adapt by continually learning new tools and techniques. Back by popular demand from last year, this round-table features a panel of experts who will discuss the challenges faced by educators/trainers, law enforcement, and prosecution in training, retraining, and retaining a workforce capable in computer and digital forensics. It will also cover novel ways to ensure continuous training for security and forensics professionals. The panelists at the round-table come from law enforcement, prosecution, and academia, and each brings a unique perspective to the discussion.



AUTHOR BIOGRAPHIES

Sabah Al-Fedaghi holds an MS and a PhD in computer science from Northwestern University,

Evanston, Illinois, and a BS in computer science from Arizona State University, Tempe. He has

published papers in journals and contributed to conferences on topics in database systems, natural

language processing, information systems, information privacy, information security, and information

ethics. He is an associate professor in the Computer Engineering Department, Kuwait University. He

previously worked as a programmer at the Kuwait Oil Company and headed the Electrical and

Computer Engineering Department (1991–1994) and the Computer Engineering Department (2000–

2007).

Victor Asal is an Associate Professor of Political Science at Rockefeller College at the University at Albany, SUNY. Dr. Asal is a Research Associate, with a focus on ethnic terrorism and mass-casualty terrorism, for the National Center for the Study of Terrorism and Responses to Terrorism (START). He is also the Director of the Public Security Studies Certificate program at the University at Albany, SUNY, and a co-director of the Project on Violent Conflict (PVC). Dr. Asal received his Ph.D. in Government and Politics (Comparative Politics, Conflict and Conflict Resolution) from the University of Maryland at College Park. He holds M.A.s in Government and Politics from the University of Maryland at College Park and in International Relations (International Relations Theory, International Law) from the Hebrew University of Jerusalem. Asal's research has focused on terrorist behavior, including terrorist targeting of the United States, structural and social antecedents to terror, non-state actor mobilization online, and international crisis behavior. Much of Asal's research has focused on identifying key factors from mostly qualitative literature, operationalizing these concepts numerically, and then collecting the necessary data to quantitatively test the arguments in the literature in a falsifiable fashion.

Fabio R. Auffant II has been employed by the New York State Police for the past 23 years. He is a Technical Lieutenant in the Computer Crime Unit and Manager of the Computer Forensic Laboratory located in the Forensic Investigation Center in Albany, NY. He has received extensive computer forensics training and holds several certifications in computer and digital forensics; he has conducted forensic examinations and data analysis in hundreds of investigations, as well as testified extensively throughout the state. T/Lt. Auffant is a member of several professional organizations, such as the High Technology Crime Investigation Association (HTCIA), the International Association of Computer Investigative Specialists (IACIS), the Institute of Computer Forensic Professionals (ICFP), and InfraGard. He has provided training and lectured to government and law enforcement personnel in the field of digital and computer forensics, as well as at academic institutions such as the John Jay College of Criminal Justice, the University at Albany Business School, and the NY Prosecutors Training Institute Summer College. T/Lt. Auffant is an adjunct professor at Columbia-Greene Community College in the Computer Security/Forensics degree program and is the chairperson of the NY State Technical Working Group on Digital & Multimedia Evidence.

Sriharsha Banavara is a graduate student at Auburn University in Auburn, AL pursuing a Master’s in

Software Engineering. He is currently seeking the degree with an Information Assurance option.

Alex Cowperthwaite is a master's student at Carleton University, Ottawa, Canada, graduating in summer 2010. He received a Bachelor of Computer Science from the University of New Brunswick in Fredericton, NB, Canada, in 2008. Alex's research interests include network security, traffic clustering, DNS security, and anomaly and intrusion detection.


John Crain is the Chief Technical Officer of ICANN. ICANN is responsible for coordinating domain

names, addresses, and other unique Internet identifiers. John has been contributing his technical

expertise and leadership to ICANN for over three years. In this role, he has taken on the task of

enhancing and improving the professional technical operations of ICANN and will lead the

organization's root management, website development and information services functions. John enjoys

an excellent and respected reputation with the global technical community. He conducts much of

ICANN's liaison work with the global technical community in order to facilitate and listen to discussion

on the many technical-based issues facing ICANN. John, a native of the United Kingdom and fluent in

Dutch and English, spent a significant portion of his professional career working in the Netherlands for

the RIPE NCC as part of the senior management team and was responsible for infrastructure, software

development and operations. Prior to RIPE NCC, John worked as a design engineer in research, design

and development of advanced materials.

John D’Arcy is an Assistant Professor in the Department of Management at the University of Notre

Dame. His research areas include information assurance and security, and computer ethics. In recent

research, Dr. D’Arcy has examined the effectiveness of procedural and technical security controls in

deterring computer abuse. His studies also investigate individual and organizational factors that

contribute to end-user security behavior in the workplace. Dr. D’Arcy received a B.S. in Finance from

The Pennsylvania State University, an MBA from LaSalle University, and a PhD in Business

Administration with a concentration in Management Information Systems from Temple University. His

research has been published in journals such as Information Systems Research, Communications of the

ACM, Decision Support Systems, Computers & Security, and Human Resource Management.

Sanjukta Das is an Assistant Professor of Management Science and Systems at the State University of

New York, Buffalo. Her past research has been published in INFORMS Journal on Computing and

Journal of Organizational Computing and Electronic Commerce. She has presented her work at several

internationally recognized information systems conferences and workshops such as WITS, CIST and

WISE.

Wenliang Du is an associate professor of computer science in the department of EECS, Syracuse

University. He obtained his PhD in computer science from Purdue University. Dr. Du’s research

interests include security education, web security, privacy-preserving data mining, and security in

wireless sensor networks.

Noreen Gaubatz is the Assistant Director of the Office of Institutional Research and Assessment at

Syracuse University. She obtained her Ph.D. in Higher Education Administration from Syracuse

University. Dr. Gaubatz’s work supports a variety of university-wide assessment initiatives, with her

research interest in student ratings of teaching effectiveness.

Gwen Greene is a doctoral candidate in Applied Information Technology at Towson University. Her

research interests include risk management and information security. She is also an Information Systems

Security Engineer at Raytheon Company. Gwen has an undergraduate degree in Computer Science from

Bowie State University and a Masters in Information Systems from University of Maryland Baltimore

County. Contact her at [email protected].

Manish Gupta is a PhD candidate at the State University of New York at Buffalo. He also works full-time as an information security professional at a bank based in the northeastern US. He received an MBA from SUNY-Buffalo (USA) and a bachelor's degree in mechanical engineering from I.E.T., Lucknow (India). He has

more than a decade of industry experience in information systems, policies, and technologies. He has published three books in the area of information security and assurance and more than 50 research articles in leading journals, conference proceedings, and books, including DSS, ACM Transactions, IEEE, and JOEUC. He serves on the editorial boards of seven international journals and has served on the program committees of several international conferences. He holds several professional designations, including CISSP, CISA, CISM, ISSPCS, and PMP.

John A. “Drew” Hamilton is a professor of Computer Science and Software Engineering at Auburn

University and director of Auburn's Information Assurance Center. He has a B.A. in Journalism from Texas Tech University, an M.S. in Systems Management from the University of Southern California, an M.S. in Computer Science from Vanderbilt University, and a Ph.D. in Computer Science from Texas A&M University.

Eric S. Imsand is an Assistant Research Professor of Computer Science and Software Engineering at

Auburn University. He has previously served as a Visiting Assistant Professor in the Department of

Computer Science at the University of Memphis and as a Principal Engineer of Software Engineering

and Security Technologies at MaxVision LLC. His research interests include biometric authentication

and identification, computer and network security, end-user security training, and simulation security.

He holds a bachelor's degree, master's degree, and Ph.D. degree, all from Auburn University.

Karthick Jayaraman is a PhD candidate in the Department of EECS, Syracuse University. His advisors are Dr. Wenliang Du and Dr. Steve J. Chapin. His research interests are security education, system security, and web security.

Yingying Kang is a PhD candidate in Operations Research in the Industrial & Systems Engineering Department at the State University of New York at Buffalo. Her research interests include mathematical programming, analytical simulation modeling, transport of hazardous materials, and network optimization. Yingying has also served as a technical support engineer at Parametric Technology Corporation.

Varun Narayanan Kutty is a Master of Science student in the Industrial Engineering Department, majoring in Operations Research, at the University at Buffalo, SUNY. He completed his Bachelor of Technology in Electronics and Communication Engineering at the National Institute of Technology, Trichy, India. His research interests are in risk management practices using derivatives, portfolio optimization, revenue management for online advertisements, IT strategy, and business analytics.

Malvika Pimple is currently pursuing her M.S. in Computer Science at the University of North Dakota.

Mark Scanlon is currently a research Ph.D. candidate in Computer Science at University College Dublin (UCD), under the guidance of Prof. Mohand-Tahar Kechadi, as part of the UCD Centre for Cybercrime Investigation. He received his B.A. degree in Computer Science and Linguistics in 2006 and his research M.Sc. in Computer Forensics and Cybercrime Investigation in 2009, both from UCD. His research interests include computer forensics and cybercrime investigation, forensic verification, data mining, networking, and peer-to-peer protocols.


Raj Sharman is an Associate Professor in the Management Science and Systems Department at SUNY Buffalo, NY. He received his B.Tech and M.Tech degrees from IIT Bombay, India, and his M.S. degree in Industrial Engineering and PhD in Computer Science from Louisiana State University. His research streams include Information Assurance, Disaster Response Management, Business Value of IT, Decision Support Systems, Conceptual Modeling, and Distributed Computing. His papers have been published in a number of national and international journals. He is also the recipient of several grants from the university as well as from external agencies.

Anil Somayaji is an Associate Professor in the School of Computer Science at Carleton University in

Ottawa, Canada. He received a B.S. (1994) degree in mathematics from the Massachusetts Institute of

Technology and the Ph.D. (2002) degree in computer science from the University of New Mexico. He

has served as the program committee chair of the New Security Paradigms Workshop (2008 and 2009)

and has served on the program committees of ACM CCS, USENIX Security, and RAID, among others.

His research interests include computer security, operating systems, complex adaptive systems, and

artificial life.

R. Karl Rethemeyer is Associate Professor of Public Administration and Policy and Chair of the

Department of Public Administration and Policy at the Rockefeller College of Public Affairs and Policy,

University at Albany - SUNY. Rethemeyer’s primary research interest is in social networks, their impact

on social, political, and policy processes, and the methods used to study such networks. Dr.

Rethemeyer’s work spans two programs of research. The first focuses on the structure and operation of

collaborative and policy networks in the public sector. This work examines the challenges inherent in

the management of collaborative provision of public goods and services and the political ramifications

of engaging nonprofit and for-profit organizations in that effort. Most recently, Dr. Rethemeyer received

the Accenture Advances in Public Management Award for his research in this area. Dr. Rethemeyer’s

other program of research focuses on terrorism, terrorist organizations, terrorist networks, and counter-

insurgency/stabilization operations. Dr. Rethemeyer is co-director of the Project on Violent Conflict

(PVC), a research center focused on these topics. His Department of Homeland Security-funded work

focuses on how networks affect the use of various forms of terrorism (including suicide and CBRN

attacks), the lethality of terrorist organizations, the propensity of terrorist organizations to attack civilian

targets, and the propensity to choose or eschew lethal violence. Dr. Rethemeyer is also a co-investigator

on two other projects funded by the Office of Naval Research focused on the development of IEDs by

the Irish Republican Army and computational modeling of stabilization and counter-insurgency

operations.

Billy Rios is currently a security researcher for Google where he studies emerging security threats and

technologies. Before Google, Billy was a Program Manager at Microsoft where he helped secure

several high profile software projects including Internet Explorer. Prior to his roles at Google and

Microsoft, Billy was a penetration tester, making his living by outsmarting security teams, bypassing

security measures, and demonstrating the business risk of security exposures to executives and

organizational decision makers. Before his life as a penetration tester, Billy worked as an Information

Assurance Analyst for the Defense Information Systems Agency (DISA). While at DISA, Billy helped

protect Department of Defense (DoD) information systems by performing network intrusion detection,

vulnerability analysis, incident handling, and formal incident reporting on security related events

involving DoD information systems. Before attacking and defending information systems, Billy was an

active duty Officer in the United States Marine Corps. Billy has spoken at numerous security

conferences, including Black Hat Briefings, BlueHat, RSA, and DEFCON. Billy holds a Bachelor's degree in Business Administration, a Master of Science degree in Information Systems, and a Master of Business

Administration.

Kevin Williams is Professor of Psychology and Dean of Graduate Studies at the University at Albany,

State University of New York. He has served as Chair of Psychology and Director of the PhD program

in Industrial/Organizational Psychology. His major areas of research are: (1) human motivation and

performance, where he studies the self-regulatory processes that guide goal strivings and goal revision

over time, (2) the psychology of blame, where his work explores the social-cognitive processes that

underlie the allocation of blame for accidents, and (3) the psychology of security, where he investigates

the psychological factors that support and compromise information security. His research appears in

such journals as Journal of Applied Psychology, Human Performance, Organizational Behavior and

Human Decision Processes, and Academy of Management Journal. He received his Ph.D. in

psychology from the University of South Carolina in 1984.

Yanjun Zuo is an assistant professor at the University of North Dakota, Grand Forks, USA. He received a Ph.D. in Computer Science from the University of Arkansas, Fayetteville, USA, in 2005. He also holds a master's degree in Computer Science from the University of Arkansas and a master's degree in Business Administration from the University of North Dakota, USA. His research interests include survivable and self-healing systems, security in pervasive computing, database security, and information privacy protection. He has published numerous articles in those areas.


INDEX OF AUTHORS

Al-Fedaghi, Sabah S. p. 10-18

Asal, Victor p. 9

Auffant II, Fabio R. p. 67

Banavara, Sriharsha p. 37-41

Cowperthwaite, Alex p. 2-8

Crain, John p. 1

D’Arcy, John p. 42-49

Das, Sanjukta p. 19-22

Dhillon, Gurpreet p. 50-55

Du, Wenliang p. 56-61

Gaubatz, Noreen B. p. 56-61

Greene, Gwen p. 42-49

Gupta, Manish p. 19-22

Hamilton, Jr., John A. p. 37-41

Hannaway, Alan p. 31-36

Imsand, Eric p. 37-41, 62-66

Jayaraman, Karthick p. 56-61

Kang, Yingying p. 19-22

Kechadi, Mohand-Tahar p. 32-36

Kutty, Varun N. p. 19-22

Lande, Suhas p. 23-30

Pimple, Malvika p. 23-30

Rethemeyer, R. Karl p. 9

Rios, Billy p. 31

Scanlon, Mark p. 32-36

Sharman, Raj p. 19-22

Somayaji, Anil p. 2-8

Talib, Yurita Abdul p. 50-55

Williams, Kevin p. 67

Zuo, Yanjun p. 23-30