
Transcript of IO Journal 1st QTR 2010


IO Journal

Association of Old Crows

A publication of the Information Operations Institute

Vol. 1, Issue 4 • February 2010

The Battle for the Information Domain



Contents

5   The Battle for the Information Domain
    By Major Rob Sentse and Major Arno Storm, Royal Netherlands Army

12  The Emerging Battlespace of Cyberwar: The Legal Framework and Policy Issues
    By James P. Farwell

21  Applying Deterrence in Cyberspace
    By Kevin R. Beeker, Robert F. Mills and Michael R. Grimaila

28  The Four Strategies of Information Warfare and their Applications
    By Carlo Kopp

34  Information Security Within DOD Supply Chains
    By Lt Col Brian R. Salmans, USAF

40  Field of Dreams
    By Tom "TCL" Curby-Lucier

Vol. 1, Issue 4 • February 2010

EDITORIAL ADVISORY BOARD
Mr. Robert Giesler
Mr. Austin Branch, SES
Mr. Mark Johnson, SES
Dr. Dan Kuehl
RADM Andy Singer, USN (Ret)
Mr. Kirk Hunigan
BG John Davis, USA
RDML Bill Leigher, USN
BrigGen Mark O. Schissler, USAF
Col David Wilkinson, USMC
CAPT Michael Hewitt, USN
Col Al Bynum, USAF (Ret)
LTC Kevin Doyle, USA (Ret)


©2010 Association of Old Crows/Naylor, LLC. All rights reserved. The contents of this publication may not be reproduced by any means, in whole or in part, without the prior written authorization of the publisher.

Editorial: The articles and editorials appearing in this magazine do not represent an official AOC position, unless specifically identified as an AOC position.

EDITORIAL & PRODUCTION STAFF
Editors: Carson Checketts, Joel Harding, Dr. Dan Kuehl, Jon Pasierb, Catherine Theohary
Publisher: Elaine Richardson
Advertising: Jason Dolder, Shaun Greyling, Erik Henson, Chris Zabel, Melissa Zawada
Marketing: Allie Hansen
Design & Layout: Barry Senyk
Advertising Art: Effie Monson

Submissions: The IO Journal welcomes article submissions for consideration. Manuscripts should be of interest to the information operations community and should include proper sourcing with endnotes. All articles are peer reviewed. Direct all submissions to Joel Harding, [email protected].

US Marine Corps Capt. Brent Molaski, an information operations officer assigned to Marine Expeditionary Brigade-Afghanistan, provides security during a patrol in the vicinity of Kaneshin, Afghanistan, December 2009. U.S. Marine Corps photo by Sgt. Evan Barragan/Released

On the cover: A US Air Force MQ-1B Predator unmanned aerial vehicle takes off during Operation Iraqi Freedom in 2008. US Air Force photo by Tech. Sgt. Sabrina Johnson/Released

From the Editor: Inside the IO Journal

There’s an old saying that there are two things you don’t want to watch being made: law and sausage! I’ll add a third: a journal. The behind-the-scenes process that starts with a call for submissions and ends with the addition of graphics and artwork and sending the entire package to the publisher is difficult and labor-intensive, but you cannot produce a quality publication without it. An indispensable part of that production process is the selection of the articles that make up each issue. I believe each issue of the IO Journal has been a noticeable improvement over the previous issue, and this one is simply the latest example, with a steady upward trend in the quality and depth of the submissions. One mark of a quality publication is its utility to a broad audience, and this issue really hits the mark. If you’re from the “influence” side of IO or the “cyber” side, this issue has something for you. If your focus is on specific problems, from the security of our cyber-dependent supply chain to cyber deterrence – one of the hottest IO topics in the DOD – this issue has something for you.

It’s not too early for you to look ahead and mark your calendars for the 2010 edition of INFOWARCON! The second week of May (the 12th, 13th and 14th) will see what many feel is the best and most diverse and inclusive meeting of the IO community across government, industry, academia and the military. The agenda extends from influence to technology, and again has an outstanding international dimension. You don’t want to miss it … see you there!

Dan Kuehl

Dr. Dan Kuehl is a professor of IO at the iCollege of the National Defense University, an editor of the IO Journal and a member of the IO Institute.


For more information about InfowarCon 2010 – www.InfowarCon.com or www.Crows.Org


IOI Call for Events
The IO Institute is soliciting input for an IO Community Event Calendar. Please send notifications for IO, SC, PD and related events to: [email protected]. At a minimum, please include name of event, location, date and POC information.


The Battle for the Information Domain
By Major Rob Sentse,[1] Bachelor, Infantry, Royal Netherlands Army, and Major Arno Storm, Bachelor, Infantry, Royal Netherlands Army

“I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.”
–Maya Angelou (American Poet, b. 1928)

(This publication has been written in our personal capacity and does not reflect the opinion of the Royal Netherlands Army.)

Influencing Behaviour: The basic principle for education, training exercises and operations

In this paper we cover the need for exchanging and broadening international insights and expertise on Influence Operations as a whole and Information Operations in particular. We describe how military forces could operate in the current, expanding communication and technology era, and we conclude with a description of how to organize exercises starting from complementary factors of influence.[2]

We would like this paper to fuel discussions regarding the viability of line-staff organizations, the way we should organize our armed forces and the way we should arrange our training and exercise programs. While we do not have a solution to the many challenges in this field, we hope that this paper will contribute to the creative minds working on these issues.

The communication and technology era influences the way we relate to politics, populations, society and the media. To be specific, this technology era influences:

a) the positions the population and government take in the area of operations;

b) the positions the population and government take in the countries which relate to the conflict;[3]

c) the positions the population and the government take in countries contributing troops to international missions.

Future conflicts will be complex and non-transparent, which will require a nation’s military to respond with flexibility, creativity and speed. Our traditional military way of thinking has evolved into an interagency[4] way of acting in which the armed forces are to shape the conditions for development, security and diplomacy.

In our present and future areas of operations it will be hard to find a clear distinction between permissive, semi-permissive and non-permissive elements, as these three concepts tend to emerge at the same moment. The primary behaviour we would like to influence is in urban areas. Future conflicts arise in part from the need for political freedom, power, water, food, energy and living space. The interagency environment is not well supported by the current line-staff organization, an organization type that often leads to internal conflicts, competition and containment of networks in favour of personal ambition instead of organizational goals.

The limits of the present line-staff organization require changes. One solution may be a process organization consisting of modular units. One of the positive effects of this model would be a decrease in restraints, while solutions and creativity would be magnified, producing desired effects immediately.[5] A modular process organization could be the answer to current line-staff organization difficulties.

Surviving and Living at the Speed of Information

It is conceivable that future conflicts will occur in that part of the world where 70 percent of the world population lives on 30 percent of the earth’s surface. Asia (1.2 billion in China and 1.1 billion in India) will suffer the consequences of the ever-growing world population.[6] Due to the many failed and failing states in Africa, that continent should be closely monitored as well.[7]

Population growth is a major concern to be reckoned with, perhaps particularly in Africa, where the population has grown to 1 billion. Specialists estimate a world population of 9 billion by 2050, resulting in a massive increase in urbanized areas and a huge demand for food, water and energy.[8]

Influencing the behaviour of people and of the other parts of the information domain is fundamental, and the armed forces have a role in this interagency approach. Developing awareness of the broad area of activities necessary to influence behaviour requires creative and pro-active minds. It is not simply about “every soldier a rifleman,” nor is it about “every soldier a sensor.” It could best be summarized as: “every soldier is a tool of influence.”



Get to know your opponent and change him into your companion.[9] To achieve this, we have to develop an empathic mind concerning the ethics, values, norms and culture in the area of operations (AO) and in the areas that influence the AO. This requires a well-thought-out, coordinated and synchronized approach to all behavioural aspects to control the information domain in its broadest sense.[10]

For instance, the “developed” countries’ perspective on the problems in the Middle East is far different from the perspective of the governments and people living in those nations.

The human tendency to perceive worldwide problems through one’s own values and norms is one of the very few characteristics in which “we” recognize ourselves. The freedom and democracy “we” like to bring to “them” is something “they” experience quite differently. The “killing” and “battle” that happen in the moral, cultural, and psychological spheres are far stronger than any physical or kinetic harm inflicted.

Powerful nations fight a different kind of war than their opponents do. This problem leads to a question we should ask ourselves: in whose perception is it the opponent and, above all, WHY is it the opponent?[11] The following terms relate to our unique perception: terrorists, resistance, guerrillas, criminal gangs, freedom fighters; the labelling refers to the “ally” or to the “enemy.” Still, they all have one thing in common: they are all (in different ways) supported by a part of the local population and/or (a foreign) government or governments.

The opponent is not recognizable as such, and holding the initiative is one of his most typical characteristics.[12] Our opponents use continuous technological developments to their advantage. An instrument often employed by insurgents is to play to the perceptions of opponent policymakers and audiences, through the media, to convince their enemy that their goals are unachievable or too costly.

“…any sound revolutionary war operator (the French underground, the Norwegian underground, or any other European anti-Nazi underground) used small-war tactics – not to destroy the German Army, of which they were thoroughly incapable, but to establish a competitive system of control over the population. To do this…they had to kill some of the occupying forces and attack some of the military targets. But above all they had to kill their own people who collaborated with the enemy.”[13]

–Bernard Fall

In such an environment many variables influence the outcome of our actions. One thing is clear: the way in which democratic countries choose to deal with insurgents and terrorists is related to the increase or decrease of support for this issue from their policymakers and their troops. One needs to take this into account and try to positively influence the perceptions of the local population, potential supporters of the insurgents abroad, allies and neighbouring countries, thus increasing resilience against insurgents to ensure support for our own efforts.[14] Dealing with different variables in a concerted manner requires a balanced approach to coordinate how efforts and information flows are to be organised.[15]

If this is not done correctly it will lead to a situation that is prone to producing efforts that are counterproductive to the political and strategic goals. Furthermore, efforts can be counterproductive for other parts of the organisation or its partners. The challenge is to seek methods to reduce undesirable outcomes. Establishing a common starting point for planning and action is fundamental in a battle in which perception is more important than facts. The “opponent” seems to achieve their goal with 15% violence and 85% control of the information domain, and “we” as the military respond to that with 85% violence and 15% control of the information domain, leading us to the question: “Who is effective here?”[16-17]

In such an environment there is no space for stovepiped visions like “kinetic elements” or “non-kinetic elements,” as the interconnection and the mutual influence of these concepts are fundamental in a coexisting permissive, semi-permissive and non-permissive environment.[18]

“It is obvious that the media war in this century is one of the strongest methods; in fact, its ratio may reach 90% of the total preparation for the battles.”[19]

–Osama Bin Laden

Influencing Behaviour and the Information Domain: Two mutually reinforcing concepts

In the aftermath of the elections in Iran in June 2009, Twitter turned out to be a powerful medium to activate, inform and influence.[20-21] Obtaining and maintaining influence is no longer solely a military capacity and has evolved into a symbiosis of the information domain with the military instruments of influence.[22] This international challenge needs more attention from other armies.

In December 2008, a conference was held at the Netherlands Institute of International Relations, Clingendael, in The Hague. This conference, named “Challenging uncertainties: the future of the Netherlands Armed Forces,” explored the effects of the information and communication era.[23] Professor Alex Schmid, Director of the Centre for the Study of Terrorism and Political Violence at the University of St. Andrews in Scotland, addressed the fact that the success of terrorist actions depends on their access to technological means of communication. This principle is important to our consideration and practice of Influence Operations and Information Operations, which can complement each other in the information domain. There is a great deal of room for improvement in this area.

To describe the RNLA’s perspective on Influence Operations, we would first like to consider the views of some NATO partners. Influence Operations as terminology is non-existent in the RNLA, whilst Information Operations as part of Influence Operations is still under construction, not only in the Netherlands but also in the USA and the UK. The American Armed Forces and the Armed Forces of the United Kingdom have made a great deal of progress on Influence Operations. Within the UK Army’s doctrine, Information Operations is a component of Military Influence, which, in its turn, contributes to Influence Operations.

The following illustration is part of the “UK Influence Doctrine 2009”; among the cadre of Influence activities, the term Information Operations can be seen, with “coord.” in brackets: the Information Operations officer is responsible for synchronization and coordination.

According to the UK definition, Information Operations are: “A military function to provide advice and coordination of military information activities in order to create desired effects on the will, understanding and capability of audiences, consistent with a UK Information Strategy.”[24]

The UK Army’s doctrine is advanced, in both vision and policy, when it comes to influencing behaviour. UK “Influence Campaigns” are conducted by several ministries. At the level of the Chief of Defence Staff, the Targeting and Information Operations (TIO) office coordinates and synchronises MoD actions within an interagency environment. TIO consists of a Targeting desk, a Policy & Capability desk and an Info Ops desk. The Targeting desk also includes an Intelligence Support Team.[25]

The US Army also has a well-developed vision of Influence Operations, although it seems that its execution does not entirely match the well-considered documents on this important subject. According to the American non-profit think tank RAND, Influence Operations are: “the coordinated, integrated, and synchronized application of national diplomatic, informational, military, economic, and other capabilities in peacetime, crisis, conflict, and post-conflict to foster attitudes, behaviours, or decisions by foreign target audiences that further interests and objectives.”[26]

Another relevant document is the Allied Joint Doctrine for Information Operations, Ratification Draft 1 (AJD-3.10 RD1). This document uses the same definition as NATO document MC422/3 (NATO Military Policy on Information Operations): “Info Ops is a military function to provide advice and coordination of military information activities in order to create desired effects on the will, understanding and capability of adversaries, potential adversaries and other NAC approved parties in support of Alliance mission objectives. Information activities are actions designed to affect information and/or information systems. They can be performed by any actor and include protective measures.”

The AJD-3.10 RD1 also mentions the Information Operations Coordination Board (IOCB). This is the forum for the implementation of Information Operations (Info Ops), collective coordination and advice. This board, chaired by the Chief of Information Operations, meets as a subset of the Joint Coordination Board (JCB). It will convene as necessary in the headquarters decision cycle and during non-operational activities. Ideally the IOCB is part of the decision cycle at every level and at every moment. This board should contribute to the military’s mindset (and to that of other governments) in planning and executing operations.

Besides the AJD-3.10 RD1, the American Joint Publication 3-13, “Information Operations (IO),” describes IO as: “Information operations (IO) are described as the integrated employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision making while protecting our own.”

This document refers to the “Notional Information Operations Cell” (page IV-5); ideally this should be the design used at headquarters. Although advisable to implement, this proposed layout has to be validated by experience. One of the advantages is that all core “tools” of influence will be together permanently, with the opportunity to consult their subject matter experts.

Within the RNLA, Influence Operations is not staffed as such and Information Operations is still under construction. The authors of this article recommend the conceptualization of a common understanding designed toward influencing behaviour as a whole. In the RNLA the function “Staff Officer Information Operations” is foreseen at the brigade level. The RNLA consists of two manoeuvre brigades (13th and 43rd), one air mobile brigade (11th) and one support brigade. A brigade headquarters will, amongst others, include three such staff officers: two for Information Operations and one for Psychological Operations.

The RNLA has chosen a policy in which officers and NCOs are multi-purpose, which creates another problem: there are some specialized units, but the manning changes every 3 to 5 years. The desired balance between costs and effects leads to a situation in which there are no specialized units supporting either the Influence Operations domain or the Information Operations domain.

[Illustration: this figure summarizes a part of the AJD-3.10 RD1. Within the information environment, the Info Ops function translates mission-specific strategic political guidance on information activities into analysis, advice, coordination and synchronization input to estimate, planning, execution and assessment. Coordinated information activities (CIMIC, PSYOPS, PPP, OPSEC, INFOSEC, deception, EW, kinetics, CNO, KLE and others, with PA as a related activity) create effects on the will, understanding and capability of adversaries, potential adversaries and other approved targets/audiences at local, regional and global levels.]


For instance, PsyOps is not regarded as a specialism in the RNLA; it is an additional function for personnel of the Air Defence unit. In the Netherlands, Influence Operations as a whole and Information Operations in particular remain an evolving trade. Several documents have been written regarding the way commanders should deal with InfoOps as a means of coordination rather than a specific capacity.

Within the RNLA the following definition is used: “Coordinated activities aimed at influencing the opponents’ decision cycle and supporting the political/military targets of an operation by striking the opponents’ information systems, decision-making processes and supporting processes whilst defending our own.”

It should be noted that much attention is required for the defensive side of InfoOps. The message seems to be that InfoOps is to be seen as a coordination mechanism to create a complementary environment for the military and non-military areas of attention.

One thing is clear: there are many perspectives on Influence Operations and Information Operations, which makes it all a bit diffuse, to say the least.[27] The RNLA has chosen a limited approach to InfoOps. The Netherlands Defence Doctrine 2005, which is largely derived from the British Defence Doctrine, correctly argues that attention to the necessity of harmonization and integration of activities is needed. Reports by former commanders of Task Force Uruzgan have emphasized the necessity of InfoOps.

Computer Network Operations (CNO)

In the RNLA, CNO is one of the elements of InfoOps that could get more attention. CNO plays a major part in the battle for the information domain and is one of the fundamental elements for influencing behaviour; it stems from the increasing use of networked computers and supporting ICT infrastructure by military and civilian organizations.

CNO is divided into Computer Network Attack (CNA), Computer Network Defence (CND), and related computer network exploitation (CNE) enabling operations.[28] CNA consists of actions taken through the use of computer networks to disrupt, deny, degrade, or destroy information resident in computers and computer networks, or the computers and networks themselves. CND involves actions taken through the use of computer networks to protect, monitor, analyze, detect, and respond to unauthorized activity within information systems and computer networks.[29] CND actions not only protect systems from an external adversary but also from exploitation from within, and are now a necessary function in all military operations. CNE comprises enabling operations and intelligence collection capabilities conducted through the use of computer networks to gather data from target or adversary automated information systems or networks.
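To make the “monitor, analyze, detect” portion of CND concrete, the minimal Python sketch below flags source addresses that generate repeated failed logins in an authentication log. It is an illustration only: the log format, the “FAILED_LOGIN” marker and the threshold are assumptions invented for this sketch, not elements of the doctrine described above or of any fielded system.

    # Hypothetical CND-style detector: count failed logins per source
    # address and flag bursts (log format and threshold are assumed,
    # for illustration only).
    from collections import Counter

    FAILED_LOGIN_THRESHOLD = 5  # assumed threshold

    def detect_bruteforce(log_lines, threshold=FAILED_LOGIN_THRESHOLD):
        """Return source addresses with at least `threshold` failed logins."""
        failures = Counter()
        for line in log_lines:
            # Assumed log format: "<timestamp> FAILED_LOGIN src=<address>"
            if "FAILED_LOGIN" in line:
                failures[line.split("src=")[-1].strip()] += 1
        return [src for src, count in failures.items() if count >= threshold]

    if __name__ == "__main__":
        sample = ["2010-02-01T07:34:52 FAILED_LOGIN src=203.0.113.7"] * 6
        print(detect_bruteforce(sample))  # prints ['203.0.113.7']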

The increasing reliance of “unsophisticated” adversary and terrorist groups on computers and computer networks to pass information to C2 forces reinforces the importance of CNO in InfoOps planning and activities. As the capability of computers and the range of their employment broaden, new vulnerabilities and opportunities will continue to develop. CNO should be an essential part of our operations.[30]

Adversaries also know how to play their part in the information battle.[31] The Taliban, for instance, obtain part of their information via Twitter and Facebook.[32] All Facebook and Hyves profiles are partially accessible to others. Some prominent intelligence chiefs in the Netherlands[33] and in the UK[34] have noted that fact to their great dissatisfaction.

Investigating networks and other means of ICT can yield a massive amount of information, which can be used to study feelings, emotions, similarities and differences within the indigenous population and governmental institutions.

Nationalism and political failures can be exploited as part of the information domain. Content generated by extremist organizations, and their use of online tools, especially online forums, provide snapshots of their activities, communications, ideologies, relationships, and ongoing developments.[35] These snapshots provide invaluable data sources for researchers and experts, with which they can better study extremist movements.

[Illustration (AJP 3.13): the Notional Information Operations Cell. Resident members under the IO cell chief include EW, PSYOP, MILDEC and OPSEC planners, a special technical operations (STO) cell, a physical security staff section, USSTRATCOM liaison(s) (CNO, space control and global strike), a joint fires officer and a J2T representative. Non-resident representatives come from J-2 through J-7, the PAO, judge advocate, chaplain, CI staff section (J2X), CMO staff/civil affairs, component liaisons, the State Department and other agencies, with links to the joint fires element, targeting cell (J2T), EW coordination cell and the JPOTF, JCMOTF and JSOTF.]


However, several problems, such as information overload and the covert nature of the “Dark Web,” prevent effective and efficient mining of “Dark Web” intelligence. Due to these problems, no systematic methodologies have been developed for “Dark Web” collection and analysis. A collection has been created of 110 U.S. domestic extremist forums containing more than 640,000 documents. This extremist forum collection could serve as an invaluable data source to enable a better understanding of extremist movements.[36]
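To illustrate the information-overload problem in the simplest possible terms, the short Python sketch below shows one naive way to trim a large forum-document collection to a tractable subset by keyword matching. It is a hypothetical illustration only; the keyword list and document handling are invented here and are not the methodology behind the 110-forum collection cited above.

    # Hypothetical illustration: naive keyword filtering to shrink a large
    # forum-document collection before analysis (keywords are invented).
    KEYWORDS = {"recruit", "attack", "ideology"}

    def relevant_documents(documents, keywords=KEYWORDS):
        """Yield only the documents that mention at least one keyword."""
        for doc in documents:
            text = doc.lower()
            if any(keyword in text for keyword in keywords):
                yield doc

    if __name__ == "__main__":
        posts = ["New members will be recruited here.", "Nice weather today."]
        print(list(relevant_documents(posts)))  # keeps only the first post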

Adversaries manage their conflicts using the digital battlespace, and this knowledge should create more awareness than it does right now.[37] This part of the information domain is a major strategic element to be reckoned with. The information and communication era is absolutely boundless.[38] Coordination and synchronization of operations that influence behaviour is of great importance to create clarity and to produce order in the chaos of information flows.

Obvious and Applicable: Integration of Influence Operations in education, training and exercises

For that reason we need to coordinate and synchronise the influencing of behaviour in the planning and implementation of operations at the strategic, operational and tactical levels. The endless possibilities of the information and communication era oblige far more attention to the influencing of emotions, perceptions, feelings, convictions, attitudes and behaviour.[39] With this in mind, influencing behaviour dominates the complete operational spectrum and is coordinated and synchronised in exercises and training in which the desired end state and the intent of the commander are normative. In the long run it is conceivable that an interagency approach for training, exercises and operations will be the standard; this will require commitment, eagerness and dedication from other relevant departments.

As we speak, army units try their utmost to conduct exercises in complex environments (including in matters of time and category). Force enablers like Provincial Reconstruction Teams (PRT), Civil Military Cooperation (CIMIC) and Psychological Support Elements (PSE) are deployed in a modular mode together with manoeuvre elements to give exercises a higher level of realism. Such exercises have been executed for the past two years by the 13th Mechanized Brigade in order to train and prepare units for their mission.

To make this point more tangible we will give an example of an integral (interagency) exercise with an initial entry as its starting point. The point of departure for the scenario will be an initial entry in the port of Vlissingen in Zeeland, a Netherlands province. From this town the operation will be executed via the southern parts of Brabant (a province of The Netherlands) to Oirschot, where the 13th Mechanized Brigade is located.

In 2010 the 13th Mechanized Brigade will provide a brigade staff for the European Battle Group (EUBG). The following main objective has been formulated: “The EUBG is to be trained to operate in several scenarios in which units have to respond flexibly in a complex and dynamic environment in many areas.”[40] In preparation for the exercise the brigade staffs are to execute an integral country study, including a Computer Network Exploitation. In advance we have to consider how the local population should be informed regarding the operation (exercise). In a targeting meeting we can determine what kinds of resources we have to influence the target audience.

It has been decided to send a Reconnaissance Squadron in advance to conduct deep reconnaissance, supported by a Psychological Support Element and a PRT mission team, in order to create a positive mindset within the local population for the forthcoming deployment of NLD Forces. To achieve a positive mindset, some meetings (Key Leader Engagement) with the mayor, police chiefs and district administration will be executed.[41] Several media outlets will be used, like newspapers, whilst regional radio and TV messages will explain the reason for executing exercises or operations like these. In addition, the population will be asked to cooperate with this exercise (for example by participating in a roadblock).

This will be followed by manoeuvre elements, which will enter the area while engaging the opponent physically (which will be executed at a training area) and simultaneously performing an open approach toward the population to influence their hearts and minds in a positive manner (for example, by organizing static shows in the vicinity of a school). At the same time a lift operation will be prepared. A reconnaissance unit will be inserted to carry out close target recce, for example on a house in front of a pub occasionally visited by a Medium Value Individual (role players). Besides that, other assets will be deployed, such as Human Intelligence and Unmanned Aerial Vehicles. After the observation stage, followed by a positive identification, Special Forces will conduct a lift operation to capture the MVI. In conjunction with this, an intensive media campaign will be launched.

It is interesting to note that the Belgian army performed a similar exercise to certify EUBG units in 2009. In 2010 the RNLA will train to be lead nation for the EUBG, ready to be deployed in 2011, and is now planning an exercise in which all modular elements will operate in urbanized areas in combination with training areas.[42]

More options can be established to train units in which each soldier has to be aware of the results and consequences of their behaviour. In the preparation phase of an exercise we can plan and execute social patrols at markets and in streets. We can identify Quick Impact Projects and carry out those projects (for example, repairing a neglected playground). Former inhabitants of the potential area of operations, currently living in The Netherlands, could participate as advisers and as role players. Role play can be performed by military personnel (National Reserves) acting as adversaries or military opponents. Civilian drama actors from an academy of dramatic art could perform certain civilian roles.

Public Relations and a consistent marketing strategy are an important element of such modular organised exercises using combined urbanised/exercise terrain. Our own population will see and experience how and why their army trains and exercises, which creates commitment and understanding.

Moving Ahead

This paper covered the need for the exchange and broadening of insights and expertise on Influence Operations as a whole and Information Operations in particular. We described the way to operate in an expanding technological and communication era. We also questioned the viability of line-staff organizations. Should we not be organized as we operate? Discussions of this may result in a modular process organization.

As we are all tools of influence, we have to set the conditions to train in urbanized terrain mixed with exercises at training areas. The basic principle is to train in a complex environment with modular organised units to prepare for our operational role in an interagency structure.[43] Over the past few years we have executed our operations with modular organized units. Currently we bring together several elements of manoeuvre units, CIMIC battalions, ISTAR battalions, etc., into a module tailored for the operations. When we are “back home” we then fall back into the “known pattern” of line-staff organized units. We should consider organizing our armies into a permanent modular organization.

Instead of deriving units from a battalion or a company, we would then derive strike power from a module, although a modular organised army may seem to be one step too far.[44] This would mean that every brigade-sized unit would consist of all units taking part in a module, working, practicing and training together, and that will have consequences for the current line-staff army organisation and its employees.

According to us (the authors of this article), a modular organised army will be able to embed influence operations more fluently, thus being better prepared for the growing battle for the information domain.[45] The centre of gravity in present and future operations is influencing the capabilities, will and understanding of all elements and actors, and we have to adapt our mindset and organization towards that purpose. Influence Operations, and Information Operations as a component of it, apply a systematic and targeted approach to ensure that an opponent has the information we want him to have, which will lead him to make decisions that act in our favour and to his disadvantage.

This illustration embodies the complex environment of our operations and shows the importance of well thought-out Lines of Influence to interconnect an interagency approach.

Furthermore, we realize that increasing population growth, combined with ever-expanding urbanisation, will have a decreasing effect on manoeuvre space for traditional warfare.[46] Control of the information domain will be one of the important targets.[47] Influencing perception might be more important than facts in the future fight for energy, water, food and living space. In such a perspective there is no space for stovepiped visions like “kinetic elements” or “non-kinetic elements,” as the interconnection and the mutual influence of these concepts are fundamental in a coexisting permissive, semi-permissive and non-permissive environment.

In current and future conflicts we will most likely find ourselves in a fight against a hardly determinable opponent who uses a tuned combination of political/economic activities, criminality, conventional activity and terror to accomplish desired objectives.

This is an environment in which alliances exist, by the day, between legitimate and illegitimate organisations – alliances which apparently are not linked but at the same time seem to find each other in corresponding areas to achieve common goals.[48]

One of the common goals is control of the population, a control that is achieved differently by legitimate and illegitimate organisations. In such an environment Computer Network Operations plays a major part. The increasing reliance of “unsophisticated” opponent and terrorist groups on computers and computer networks to pass information to C2 forces reinforces the importance of CNO in planning and executing operations. As the capability of computers and the range of their employment broaden, new vulnerabilities and opportunities will continue to develop. One thing is sure: not only are our opponents managing their conflicts using the digital battlespace; every actor in the conflict does. There should be a bigger role for CNO in our operations.

The broad spectrum of Influence Operations, and Information Operations as a component of it, directs our individual perception of society in this boundless information and communication era.[49] Coordination and synchronization of operations is therefore of great importance to create clarity and to produce order in the chaos of information flows. We have to acknowledge that the critical success factor of operations lies in analyzing and steering opinions and convictions. The question here is how to obtain and sustain uniformity about Influence Operations, Information Operations and Strategic Communication within NATO’s (military) terminology and explanations, and how to bring this into practice in operations and exercises. It would be strategically valuable for us to have a common understanding of these important subjects in this new era.

Perhaps the first step ahead could be exploring the possibility of organizing our armies into a permanent modular organisation instead of composing modules from battalions and companies.

“You [and “they”] are the embodiment of the information you [and “they”] choose to accept and act upon. To change your [and “their”] circumstances you need to change your [and “their”] thinking and subsequent actions.”[50]

–Adlin Sinclair

Major Rob Sentse, B-SW, is a Royal Netherlands Army Infantry Officer. Having served in several branches, he has worked, amongst other roles, as an intelligence analyst, as a trainer in leadership and educational skills, and as a project officer implementing Field HUMINT in the RNLA. Currently he works as an Information Operations officer for the 13th Mechanized Brigade.

In 2006 he was deployed to the Canadian-led RC-S HQ in Kandahar as J2 PLANS, where he implemented the Fusion Cell (http://www.jmss.org/2008/spring/articlesbody2.htm). www.linkedin.com


Major Arno Storm is a Royal Netherlands Army Infantry Officer who is currently Branch Chief G5/Plans within the 13th (NLD) Mechanized Brigade in Oirschot. He was commissioned a second lieutenant within the Infantry upon graduation from the Royal Netherlands Military Academy in August 1998. He started his career at 17th Mechanized Infantry Battalion, in Oirschot, as a mechanized infantry platoon leader. www.linkedin.com

Endnotes

1. Rob Sentse is staff officer Information Operations and Arno Storm is G5 PLANS; both work for the 13th Mechanized Brigade, Royal Netherlands Army (RNLA), at the RvS Barracks, Oirschot, The Netherlands.

2. http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA478337&Location=U2&doc=GetTRDoc.pdf

3. http://www.time.com/time/world/article/0,8599,1871487,00.html

4. http://www.sfcg.org/Documents/CPRF/CPRF-Summary-090414.pdf

5. http://www.nytimes.com/2009/06/23/world/americas/23military.html

6. http://www.prb.org/pdf04/04WorldDataSheet_Eng.pdf

7. http://www.africom.mil

8. http://www.un.org/apps/news/story.asp?NewsID=13451&Cr=population&Cr1

9. The “Strength through Peace program,” as adopted by the Karzai government in 2005, gives the possibility to reconcile opponents and to include them in society, making them important for progress, thus splitting the few hardliners from the many followers (who follow by belief, hate or intimidation). Whether this is in the Popalzai interest seems questionable. Until now the “program” has not been implemented.

10. http://www.army.mil/aps/09/information_papers/cyber_operations.html

11. http://smallwarsjournal.com/blog/journal/docs-temp/216-guvendiren.pdf

12. http://www.captainsjournal.com/category/information-warfare/

13. Bernard Fall, “The Theory and Practice of Insurgency and Counterinsurgency,” Naval War College Review, April 1965.

14. Balancing the emotive (“hearts”) component and the cognitive (“minds”) component.

15. http://fas.org/irp/doddir/army/fm3-24-2.pdf

16. http://www.coldtype.net/Assets.04/Essays.04/Miller.pdf

17. http://www.guardian.co.uk/world/2009/mar/29/china-computing

18. In a Conventional Maneuver the Objective is defined in terms of terrain & enemy. In a Counterinsurgency Maneuver the Objective is defined in terms of population & perception.

19. http://www.jeffersoncomm.com/documents/strategiccommunications-bewarethejabberwockv2.pdf

20. http://www.mapchannels.com/twittermap/iranelection.htm

21. http://www.washingtonpost.com/wp-dyn/content/discussion/2009/06/17/DI2009061702232.html

22. http://www.cfc.forces.gc.ca/papers/csc/csc27/buck.pdf

23. http://www.clingendael.nl/cscp/events/20081216/

24. http://ics.leeds.ac.uk/papers/pmt/exhibits/2270/jwp3_80.pdf

25. http://www.fas.org/irp/eprint/index.html

26. http://www.rand.org/pubs/monographs/2009/RAND_MG654.sum.pdf

27. http://www.au.af.mil/info-ops/influence.htm#definitions

28. http://www.dtic.mil/doctrine/jel/new_pubs/jp3_13.pdf

29. http://www.securecomputing.net.au/News/114202,uk-ministry-of-defence-to-bolster-internet-intelligence.aspx

30. http://www.crisisgroup.org/home/index.cfm?id=5589

31. http://www.fas.org/irp/eprint/mobile.pdf

32. http://www.washingtonpost.com/wp-dyn/content/article/2009/04/08/AR2009040804378.html

33. http://www.expatica.com/nl/news/local_news/Dutch-news-in-brief_-Wednesday-8-April-2009_51439.html?ppager=1

34. http://laurelpapworth.com/facebook-mi6-wifes-photos/

35. http://www.sipri.org/blogs/Afghanistan/taliban-communication-skills-increase-mullah-omar-speaks-with-confidence-and-awareness

36. http://www2.computer.org/plugins/dl/pdf/proceedings/hicss/2007/2755/00/27550070c.pdf?template=1&loginState=1&userData=anonymous-IP%253A%253A127.0.0.1

37. http://www.govcom.org/about_us.html

38. http://www.potomacinstitute.org/media/mediaclips/2009/YAwatimes_a-kinder-gentler073009.pdf

39. http://www.d-n-i.net/fcs/lawrence_27_articles.htm

40. According to the authors of this article there is no space for stovepiped visions like “kinetic elements” or “non-kinetic elements,” as the interconnection and the mutual influence of these concepts are fundamental in a coexisting permissive, semi-permissive and non-permissive environment.

41. It is essential to build a robust network with representatives of organisations participating in the conflict, also to identify their possible role in reconstruction and development.

42. Example: A patrol walking through a market sees four men turning around and walking away from the approaching patrol. The patrol members then must have the awareness to create an observation report with relevant personal descriptions; once that is done, they can contact the nearby police station (part of the scenario) to identify the individuals.

43. http://www.wiltonpark.org.uk/documents/conferences/WP919/pdfs/WP919.pdf

44. http://www.blueskybroadcast.com/Client/Army_Stratcom/docs/printable.slides.pdf

45. http://www.upiasia.com/Security/2009/07/17/chinese_and_us_lead_information_warfare/8629/

46. http://www.StrategicStudiesInstitute.army.mil/; Dr. Max G. Manwaring, State and Nonstate Associated Gangs: Credible “Midwives of New Social Orders.”

47. http://www.forsvaret.dk/fak/documents/fak/publikationer/the_talibans_information_warfare.pdf

48. http://www.potomacinstitute.org/publications/Potomac_HybridWar_0108.pdf (December 2007).

49. http://influenceops.wordpress.com/2007/03/08/changing-perspectives-enhanced-%E2%80%98influence-operations%E2%80%99-in-conflict/

50. Text in brackets by authors.


The Emerging Battlespace of Cyberwar: The Legal Framework and Policy Issues
By James P. Farwell

Part I. Introduction [1]

Cyberspace is “its own medium with its own rules,” and the establishment by the Department of Defense of the 24th Air Force and U.S. Cyber Command “marks the ascent of cyberspace as a military domain.”[2] Cyber war raises a host of issues with political, military, and legal ramifications.

Cyberspace is not owned by any single nation. All nations have equal access to it, limited only by financial or technical obstacles.[3] Laws, writes Major David Willson, “are written for people within physically defined borders belonging to a particular nation. A nation’s sovereignty and national territory are defined by its borders, and the laws of that nation apply only to those within its borders or, in some cases, its citizens when outside its borders.”[4]

The National Research Council has described the dramatic changes that have taken place over the last fifteen years in these terms:[5]

1. The increasing interconnection of the world’s computers provides avenues for cyber attackers to exploit. These will only proliferate.

2. Increasing standardization and homogeneity of communications protocols, programming interfaces, operating systems, computing hardware, and routers enable a single developed attack against many systems.

3. Distinctions between data and program have been eroded. “Active content” is quite common in programming paradigms; pictures, word processing files, and spreadsheets may contain programs embedded within them that increase their functionality. As a result, the computing environment is no longer under the complete control of the user of these files.

4. Systems are growing more complex. As a system’s operation grows more complex, it becomes more difficult to defend against or detect penetration.

5. User demands for backward compatibility often mean that older and less secure components cannot be replaced with newer components that reduce or mitigate the old vulnerabilities.

6. Use of Web-based services offers new opportunities for adversaries to attack service providers. Web services may depend on other Web services, so the ability to predict, or even comprehend, the impact of attacks may be very low.

7. Difficulties in identifying attackers, coupled with an uncertain legal and policy framework, make it more difficult to punish adversaries. That may prompt others to take hostile action.

A Brief History of Cyberspace as a Military Domain

Both software and hardware represent unique vulnerabilities. In 2000, President William Clinton charged Lt. General Edward Anderson with developing a cyber-war strategy. He noted that China, Russia, and Israel were building up their information-warfare capabilities and that the U.S. may wind up facing a new type of weaponry for launching massive distributed denial-of-service (DDoS) attacks and computer viruses. He noted that “The Chinese recently indicated they are already moving ahead with this,” prompting a need within Department of Defense (DoD) thinking to develop U.S. capacities.[6]

In the late 1990s, the U.S. found itself targeted for cyber intrusions from a site many believed was located in Russia. A swarm attack by China nearly took down the power grid in Southern California, and other raids were launched on sensitive military data. Department of Defense (DoD) planners noted attacks launched on South Korea, the 2007 cyber attacks on Estonia and attacks in 2008 on Georgia, apparently staged by proxies of the Russian government.[7]

Two Israeli Air Force F-16 Fighting Falcons from Ramon Air Base, Israel, head to the Nevada Test and Training Range in July 2009, during a training exercise that pits US and allied nation air forces against simulated enemy forces in an air, ground, cyberspace and electronic threat environment. US Air Force photo by Master Sgt. Kevin J. Gruenwald/Released


China has tried to penetrate DoD computer networks, while hardening its own computer system servers against infiltration with its Kylin secure operating system. China has developed a secure microprocessor that, unlike US chips, is hardened against external access by a hacker or automated malicious software.

China’s actions highlight the need for clear rules of deterrence and engagement. U.S. cyberwar capabilities employ less secure operating systems, such as Microsoft’s.[8] In the meantime, China is recruiting hackers and stands accused of mounting a program, GhostNet, to take over targeted computers and download documents and information. It targeted 103 countries as part of a vast Chinese cyber-espionage network, infecting at least a dozen new computers every week. The operation taps emails and turns computers into giant listening devices. Hackers can turn on a computer’s web camera and microphones and record any conversations within range. A third of the targets infected are considered high value.[9]

Part II. U.S. Law’s Applicability to the Cyber Domain: Understanding the Legal Framework

U.S. laws and directives issued by the National Command Authority come first in understanding America’s legal framework and policy issues. For international law – whose interpretation is driven by strategic political and military policy considerations – an assessment in 1999 by the Office of General Counsel for the Department of Defense as to the manner in which those rules apply to peacetime computer intrusions remains relevant. Much of the Internet revolution has taken place since then, and that affects policy and legal considerations for cyber war. Still, understanding the broad precepts of international law is vital. These enable the government to (i) avoid activities that a target nation or the world community views as violations of international law, (ii) weigh risks of actions that may constitute a violation, and (iii) ensure that a government that is the victim of an information attack can identify remedies afforded by international law.[10]

This paper describes key aspects of the overall framework of legal authorities that govern strategies and options, and key policy issues for this new battlespace. The key authorities that dictate what DoD may do are framed by Congressional authorization and the National Command Authority, a term that refers to the ultimate lawful source of military orders. In the United States, it refers collectively to the President and the Secretary of Defense.

1. Cyberwar took on new urgency under President Bill Clinton. An order from the National Command Authority backed by him and Secretary of Defense William Cohen instructed the military to gear up for cyberwar.[11] Lt. General Edward Anderson, deputy commander at the U.S. Space Command, was assigned the task of creating a cyber-attack strategy, which he defined “as attacks against digital ones and zeros,” and detailed in a defense plan called “OPLAN 3600.”

2. President George W. Bush issued National Security Directive 16 ordering the development of guidelines to regulate the use of “cyber weapons in war.”[12] The Directive instituted strict rules of engagement requiring “top-level” approval for any attack.[13]

3. On May 29, 2009, President Barack Obama announced that the U.S. was forging a new national security strategy for cyberwar. His approach treats digital infrastructure as “a strategic national asset,” with U.S. policy being to “deter, prevent, detect and defend against attacks and recover quickly from any disruptions or damage.”[14] He created a White House office to be led by a Cybersecurity Coordinator. He defined five key areas for action:

• A coordinated whole-of-government approach.
• Ensuring an organized and unified approach to future cyber incidents.
• Strengthening public/private partnerships to find technology solutions that ensure our security and promote prosperity.
• Investing in cutting-edge research to meet digital challenges.
• Promoting digital literacy.

The White House office will be run by a “cyberczar.” Following the recent selection of a cyberczar by the White House, questions remain as to how much access this individual will have to the President. One of the primary legal and policy questions will be whether the Pentagon or the National Security Agency has the lead for cyberwar. One proposal would integrate parts of the N.S.A. into the military command so they could operate jointly.[15]

Defense officials stress that the new command will be focused on defending military networks’ “.mil” domain and that its establishment does not represent any attempt by the Pentagon to carve out a larger role for itself in defending the nation’s civilian computer systems.[16]

4. Secretary of Defense Robert Gates has directed the establishment of a unified cyber command to defend the military’s computer networks and attack those of U.S. enemies.[17] The command is to be fully functioning by October 2010. Gates has ordered the Under Secretary for Policy, Michele A. Flournoy, to lead a “review of policy and strategy to develop a comprehensive approach to Department of Defense cyberspace operations.”[18] The Secretary’s memorandum called for an “implementation plan” to set up a command that would “delineate its mission, roles, and responsibilities” and its “command and control, reporting and support relationships with combatant commands, [military] services and U.S. government departments and agencies.”[19]

This last point opens up broad legal issues because of the complicated jigsaw of authorities and responsibilities that different U.S. agencies have in relation to military, government and private-sector computer networks.[20] As Larry McKee, a computer-security specialist, observes: “There are so many stakeholder organizations and individuals in the cyber domain it is difficult to know exactly where to start the collaboration, information sharing, and integration needed.”[21]

5. The Geneva Convention requires combatants to be readily identifiable. Marcus Sachs, who set up the U.S. military’s first cyber war unit, asks: “What does that mean in cyberspace? Should we put a special header on packets” – the tiny digital messages that make up Internet traffic – “saying, ‘This is a U.S. Air Force attack packet’”?[22]

6. The United Nations Charter is one source of international law for determining when nations may legitimately go to war. But, says Harvard Law School professor Jack L. Goldsmith, “cyberwar is problematic from the point of view of the laws of war. The U.N. Charter basically says that a nation cannot use force against the territorial integrity or political independence of any other nation. But what kinds of cyberattacks count as force is a hard question, because force is not clearly defined.”[23] Article 51 of the U.N. Charter authorizes states to defend themselves against an attack.[24] Resolutions of the United Nations Security Council (UNSC) may also authorize the use of armed force.[25]

7. The United States is a party to numerous treaties and agreements.[26] These help define the framework for assessing U.S. policy options for cyber-war. For example, DoD’s General Counsel has explicitly cited various U.N. actions such as General Assembly Resolution 2625 (1970),[27] which declares that a war of aggression is a crime and that States have a duty to refrain from terrorist acts. George W. Bush and Barack Obama have taken a more pro-active view of U.S. prerogatives. The action taken against Al Qaeda in Afghanistan after 9/11 illustrates that view. History, as the 1999 DoD assessment notes, has also seen the emergence of the doctrines of “anticipatory self-defense” and “self-defense in neutral territory,” which permit a nation to strike the first blow if it has good reason to believe it is about to be attacked. Supporting that view, the General Assembly’s definition of acts of aggression authorizes use of force for self-defense before an actual attack is launched.[28]

8. Treaties offer one way of dealing with cyber conflicts. A key challenge is that while some nations have enacted laws dealing with cyberspace, there is a lack of consistency. The 2001 Council of Europe Convention on Cybercrime and NATO’s Cyber Defence Centre implementing the Cyber Defence Management Authority (CDMA) and Cooperative Cyber Defence (CCD) Centre of Excellence represent current efforts to address that issue.[29]

Part III. Policy Issues In Cyber War

Cyber war raises legal issues rooted in a confluence of political, military and legal policy considerations. What Congress and the National Command Authority direct are legal mandates. But strategic political and military considerations drive many of the legal issues – especially in interpreting international law – that affect cyberwar. Key issues include:

1. Is a cyber-attack itself an act of war? Is it illegal? Although in 1999 the Office of General Counsel for DoD argued that this question “invokes an obsolete concept not mentioned in the UN Charter and seldom heard in modern discourse,”[30] evolving trends require posing the question again, as the U.S. and other nations are targeted for infiltration to manipulate computer systems, engage in espionage, and damage important infrastructure such as power grids.

Does characterizing an action as a cyber-attack render it illegal? What responses pass legal muster? The rules seem unclear. U.S. STRATCOM Commander Kevin Chilton has declared that the “law of armed conflict will apply to this domain.”[31] Pointing out that the Defense Department’s networks are “probed thousands of times a day,” General Chilton stated that any U.S. response would be decided by the President and Defense Secretary, while the military’s role was “to present them options, just as every other combatant commander would do.” He explicitly declined to rule out a kinetic response.[32] “You don’t take any response options off the table from an attack on the United States of America. Why would we constrain ourselves on how we would respond?”[33]

The “law of armed conflict” that General Chilton cites is synonymous with the “law of war,” although it applies more clearly to international armed conflicts, whether or not they are formally declared wars. The law of war is expressed in treaties. The United States is a party to a number of them and recognizes a sizable body of customary law of war.[34]

Part IV: Key Principles of the Law of War

The key principles of the law of war include distinction between combatants and non-combatants; military necessity; proportionality in tactics; avoiding superfluous injury; banning indiscriminate weapons, notably biological weapons and poison gas; the ban on perfidy, including misuse of visual and electronic symbols, like the Red Cross, that identify persons or property protected from attack; and neutrality. As Davis Brown comments, each presents a new challenge in the evolving cyber environment. These principles are “not situation specific; they govern the use of force everywhere. Therein lies the problem inherent in the emergence of cyberspace as a medium of warfare: Cyberspace is nowhere.” Computer technology can inflict damage, death or injury, but not all damage is physical.

Thus an emerging issue is whether there should be “an unambiguous standard of conduct for information warfare that will be universally recognized – a cyber-jus in bello.”[35]

Cyber-Attack as an Act of War

1. When does a cyber-attack become an act of war? No one accused China of an act of war in 2007 when hackers from somewhere in China infiltrated a U.S. Defense Department network.[36] In September 2007, an experimental cyber attack at the Department of Energy’s Idaho National Laboratory caused a generator to self-destruct.[37] How much is too much?


Treaties like the Geneva Conventions were designed to avoid or regulate war. But many feel they apply uncertainly to cyber intrusions. The U.N. Charter bans the use of force except in self-defense. When Georgia was attacked, Russia denied involvement, although suspicions spread that it used a criminal proxy to create plausible deniability and insulate itself from legal liability. Does “use of force” apply only when a target is physically harmed? Is death or destruction required? Must the target be critical to the victim nation’s security? The rules are not clear.[38] Almost certainly, the U.S. will resolve them on a case-by-case basis.

2. What freedom of movement will be accorded the Department of Defense for cyber warfare? Pentagon spokesman Bryan Whitman has stated: “We need to be able to operate within that domain just like on any battlefield, which includes protecting our freedom of movement and preserving our capability to perform in that environment.”[39] Deputy Secretary of Defense William J. Lynn puts it this way: “How can we deter and prevent attacks in cyberspace?” Deterrence presumes we can identify an adversary, but in cyberspace, it’s easy for an attacker to hide.[40]

3. When does a cyber attack become a Declaration of War? Hackers from China have infiltrated the White House and Pentagon, and “key U.S. national security secrets risk being lost.”[41] These are grave challenges. Cyberwar expert John Arquilla observes that:

“in the virtual world of debilitating logic bombs, fast-spreading viruses and remotely controlled ‘botnets’ of thousands of slave computers, a grave and growing capacity for crippling our tech-dependent society has risen unchecked. And all the warning signs have been evident for years.”[42]

Arquilla argues for a behavior-based form of cyber arms control, perhaps like the Chemical Weapons Convention. In this approach, nations would refrain from intruding into or attacking others’ information systems except in response to acts or imminent threats of virtual or physical aggression. Ironically, it was the Russians who first proposed this idea 13 years ago, and the U.S. government that rejected it.[43] Russia still endorses this approach.[44]

4. Are cyber networks “weapons”? Opinions divide. Some say: Yes. The United States Air Force defines “weapons” as “devices designed to kill, injure, or disable people or to damage or destroy property.”[45] As one legal scholar has noted, “[w]hen an information packet containing malicious code travels through computer systems under the jurisdiction of a neutral state, a strict construction of the law of neutrality would result in that state’s neutrality being violated.”[46] For its part, the Hague Convention forbids the movement of weapons, even those the size of an electron, across the territory of a neutral state.[47]

And, if one agrees that information superiority is critical in warfare, it follows that computer networking generates combat power. That’s especially true where we empower commanders by enabling them to link geographically dispersed networks to exploit a battlespace.

Gregory F. Intoccia and Joe W. Moore[48] offer a competing view. They challenge the notion that a network is a weapon. Their view is a little like the 2nd Amendment argument: guns don’t kill, people do. They contend that while a communications network is integral to some weapons systems, it is merely an element of one, not the system itself. They note that the Air Force’s Concept of Operations (CONOPS) describes the Air Operations Center (AOC) as a “weapon system.” The AOC includes personnel, capabilities, and equipment – not just equipment (e.g., not just computers).

Characterizing a computer network system as a weapon also raises the issue of who is a combatant. In theory, military personnel, not civilians, man weapons systems. Civilians who take part in hostilities may lose their protected status as civilians. Exactly when do civilians providing combat support services become “active participants” in a conflict and thus forfeit their protected status? The line is not clear. It is an issue with implications for defense contractors,[49] to whom many functions and services previously performed by uniformed personnel have been outsourced.

5. Is malicious software or malware an “arm of war”? The international community is divided on this issue,[50] but it’s hard to see how an attack using it fails to constitute an “armed attack” under the United Nations Charter.[51] The International Telecommunication Union (ITU) posits that cyber attacks “could in theory be treated as acts of war and be brought within the scope of arms control or the laws of armed conflict.”[52] In January 2007, the United States Patent and Trademark Office issued a patent for a “public network weapon system,” effectively recognizing the Internet protocol (IP) as a weapon system component.[53]

On the other hand, the 2001 Council of Europe Convention on Cybercrime (COE Convention), to which the United States is a party, does not use the terms “cyber attack” or “cyber weapons.” Stephen W. Korns and Joshua E. Kastenberg have raised a corollary issue: are cyber attacks issues for the military or law enforcement?[54]

6. Do the Geneva and Hague treaties apply to Cyber Warfare? The neutrality doctrine comes into play here. The Hague Conventions offer rules that govern neutrality.[55] The Conventions dictate that the territory of a neutral state is inviolable.[56] Article 8 of Hague Convention V provides an exception for telecommunications, permitting a neutral country to allow belligerents to use communications equipment (telegraph or cable wires or wireless telegraphy apparatus) on an impartial basis. The U.S. has taken the position that “the plain language of this agreement would appear to apply to communication satellites as well as to ground-based facilities.”[57]

Cyber attacks that route themselves through a neutral State clearly violate the neutrality doctrine. This has broad implications in the current global environment, where cyber attacks may be routed through a number of non-belligerent States.

7. Does the so-called “international humanitarian law” framework apply to cyber-warfare? Some argue that it applies by analogy.[58] This raises the question of what “distinction” between military and non-combatants means under the Geneva Convention. The 1977 Additional Protocol requires parties to an armed conflict to distinguish between the two.[59] The treaty bars warring parties from destroying objects needed for a population’s survival. That would include crops, drinking water, power stations, etc. In concept, States “must never use weapons that are incapable of distinguishing between civilian and military targets.”[60]

Dual-use targets further complicate the issue. Can an air traffic control system used for military purposes be taken down, thus causing a civilian airliner to crash? Is it appropriate to take out a power grid used for an adversary’s air defenses if doing so may also shut down a hospital? The Department of Defense would presumably deal with these challenges on a case-by-case basis. Allegations that the Chinese recently took down a power grid in Florida illustrate the relevance.

8. What rules govern a country’s right to self-defense? Temple University associate professor Duncan Hollis says: “If [a country] considers [cyber attacks] acts of war, they have a right under international law to respond with self-defense, and it doesn’t just have to be via computer.”[61] NATO is now looking more closely at the laws of cyber warfare and has opened the Cooperative Cyber Defence Centre of Excellence in Tallinn, the capital of Estonia, to examine how to close gaps in legal systems that cyber crime has revealed. The 1999 DoD General Counsel Assessment argues that the U.S. is entitled to an “active defense” by using force to defend against computer network attacks when “significant damage is being done to the attacked system or the data stored in it, when the system is critical to national security or to essential national infrastructures, or when the intruder’s conduct in the context of the activity clearly manifests a malicious intent.”[62]

9. What rules define who the opponent is? Botnet attacks may mask the identity of the attacker. Kenneth Geers notes: “Law enforcement in general in the U.S. are not sure exactly what they can or cannot do, as laws are changing on the fly.”[63] What happens when hackers route their attacks through countries with no laws, or poorly drawn laws, that govern cyberattack? Attackers may range from bored teenagers to criminals to nation-states.[64] It may be difficult to identify the intruder, who may mask identity by using intermediate relay points, spoofing, or other tactics.

Confirming an attacker’s identity or intention does not solve the attribution problem: an unauthorized person may use a computer. That challenge has raised issues for over a hundred nations that believe China sponsors cyber intrusions. A key issue is whether an attack that cannot be shown to be state-sponsored justifies an act of self-defense by taking action that has an effect in another nation’s territory. Thus, how should the U.S. respond to Chinese incursions?

And what if incursions are perpetrated by allied nations or parties within them?[65]

10. What laws need to be updated to provide defenses to cyber-attack? Although the Estonia attack in 2007, which occurred when violent protests broke out among the Russian ethnic minority and triggered an Internet backlash, did not specifically concern DoD, its nature illustrates a challenge DoD planners must think about. The Estonia incident brought down banks, ministries, and newspapers. Identifying the attacker was one issue. But, reports Eneken Tikk, an Estonian legal expert, even then, “the punishment was so low that it was impossible to procedurally initiate covert investigations to investigate it.”[66]

11. What rules govern cooperation with other nations? China has denied the existence of, or participation in, the “GhostNet” attack apparently launched from China to hack into U.S. systems. Russia refused to cooperate with Estonia in investigating the cyber-attack against Estonia. The rules on this issue remain unresolved.

12. What tactics constitute cyber-war? In the Iraq war, the U.S. had impressive cyberwar capabilities, but the rules for employing them were unclear. Cyber attack may disrupt or destroy. It can go after DNS servers. It can spoof official government sites as a propaganda tactic. Some argue that microwave bombs or malicious polymorphic viruses or bombs designed to destroy networks are a part of cyberwar.[67] What rules govern the use of these tactics?

13. Is cyber-espionage an act of war, and what is the legally appropriate response? Much of the hacking that China, Russia and other parties are accused of conducting appears to be aimed at data mining. At what point does such action become an act of war? What is the legally appropriate response? The general view seems to be that a doctrine of proportionality applies, but that is as much a political as a legal decision. U.S. STRATCOM Commander Kevin Chilton has stated that the thousands of daily intrusions against DoD systems are geared towards espionage, “gathering information rather than slowing or manipulating the department’s records… Information stolen includes personnel and medical records.”[68]

US Army Staff Sgt. Adam Vinglas, attached to the 305th Psychological Operations Company, 17th Fires Brigade, speaks with a local Iraqi businessman while out on a joint patrol in Al Quarnah, Iraq, October 2009. US Army photo by Spc. Samantha R. Ciaramitaro/Released

14. What are the legal consequences of damage inflicted on noncombatants? Attacks on systems that control everything from air traffic control systems to nuclear power plants raise legal issues. Tactical considerations apparently played the key role in decisions made in Iraq, but there appears to have been some concern about what precedents cyber tactics against key installations would produce. Mark Rasch has remarked:

“The U.S. and Western nations are much more dependent on electronic infrastructures than lesser developed countries, so they are rightly reluctant to establish a legal precedent that permits cyberwarfare. Cyberwar is asymmetric, which means it benefits lesser military powers as much as, or even more than, military goliaths. Nobody expects Iraqi B-52s to fly over Washington, D.C., but a handful of Iraqi computer scientists (or scientists bought with Iraqi oil money) could launch a cyber attack – at least to some degree – against U.S. targets.”[69]

In 2003, the Pentagon and American intelligence agencies made plans for a cyber attack to freeze billions of dollars in the bank accounts of Saddam Hussein and to cripple his government before the invasion of Iraq. A report published in The New York Times recounts that it would have left Saddam with no money for war supplies or to pay troops. “We knew we could pull it off – we had the tools,” it quotes one Pentagon official. But the attack was not launched:

“Bush administration officials worried that the effects would not be limited to Iraq but would instead create worldwide financial havoc, spreading across the Middle East to Europe and perhaps to the United States. Fears of such collateral damage are at the heart of the debate as the Obama administration and its Pentagon leadership struggle to develop rules and tactics for carrying out attacks in cyberspace.”[70]

Commenting on the plan, Newsmax reported that a key factor was that the Iraqi banking network was linked to a financial communications network located in France. A cyber attack might have brought down banks and ATMs in Europe.[71]

In fact, the U.S. offensive against Saddam involved electronic jamming and digital attacks against Iraqi phone companies. Officials informed international communications companies of the jamming and asked for assistance in turning off certain channels.[72]

Iraq was hardly the first example of the use of cyber war by the U.S. In the late 1990s, the American military attacked a Serbian telecommunications network and accidentally affected the Intelsat satellite communications system. The system’s service was interrupted for several days.[73]

Concern about risks of unintended harm to civilians and damage to civilian assets through computer network attacks is sensitive enough that the White House declines to comment, although DoD officials involved in planning the new cyber command “acknowledge that the risk of collateral damage is one of their chief concerns.”[74]

15. What should be the lines of jurisdiction between the FBI and the Department of Defense in dealing with cyber-intrusions? The FBI’s Michael Vatis stated, as the Space Command embarked upon developing a strategy, that “this requires a close relationship between military and law enforcement.” He noted that the FBI will have to help determine whether any cyber attack suffered by U.S. military or business entities calls for a military or law enforcement response. The Internet is ubiquitous. It allows attacks from anywhere in the world. Attackers can loop in from many different Internet providers, said Vatis, who noted that a cyberattack can include espionage using computer networks.[75] He added: “It could start across the street but appear to be coming from China. And something that might look like a hacker attack could be the beginning of cyber warfare.”

16. Who will formulate the cyberwarfare regulations that govern DoD strategy and tactics? Cyber warfare is still at an embryonic stage, and the U.S. has not forged, in the view of some, “a coherent policy for engaging in warfare involving attacks on a country’s electrical power grids and other important infrastructure.”[76] Is this a function for the Department of Defense or the Department of Justice, and what rules will Congress promulgate to regulate a permissible scope of action?

17. Should cyberwar be attributable to cyberwarriors? Some have argued that as laws and conventions governing war require uniforms and markings to distinguish civilians from military individuals or assets, a similar rule should apply to cyberwar.[77] It’s doubtful, however, that such a rule could be applied in practice, and there’s no evidence DoD has seriously considered it.

18. Is new legal authority needed to meet the challenge of botnet attacks or the use of a botnet strategy by DoD? Colonel Charles W. Williamson III has argued for re-conceiving cyber war towards a more offensive capability by using “botnet” attacks against an adversary to deter and retaliate.

A botnet is a collection of widely distributed computers controlled from one or more points. Attackers build botnets by using automated processes to break through the defenses of computers and implant their programs or code. Often, the computer user is tricked through a crafty e-mail into cooperating with the installation of the code. The infected machines are called zombies and can be remotely controlled by masters. Hackers can build multiple levels of masters and zombies with millions of computers.[78]
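The multi-level control structure described above can be pictured as a tree of masters and zombies. The following is a minimal, illustrative data model only – a toy sketch with invented names, not functional botnet code:

```python
# Toy data model (Python 3.9+) of the multi-level master/zombie
# hierarchy described above. Illustration only, not attack code.
from dataclasses import dataclass, field

@dataclass
class Node:
    """A controlled machine; a master relays commands to its children."""
    name: str
    children: list["Node"] = field(default_factory=list)

    def count_controlled(self) -> int:
        """Total machines reachable below this control point."""
        return sum(1 + child.count_controlled() for child in self.children)

# Two-level hierarchy: one top master, two sub-masters, four zombies.
top = Node("top-master", [
    Node("sub-master-1", [Node("zombie-a"), Node("zombie-b")]),
    Node("sub-master-2", [Node("zombie-c"), Node("zombie-d")]),
])
print(top.count_controlled())  # 6 machines under the top master's control
```

Because commands fan out through intermediate masters, identifying or disabling machines at the bottom tier leaves the control structure above them intact – one reason attribution and takedown are so difficult.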

Chief of Naval Operations Adm. Gary Roughead delivers remarks during the Center for Strategic & International Studies’ “Information Dominance: the Navy’s Initiative to Maintain the Competitive Advantage in the Information Age” event, October 2009, in Washington, D.C. US Navy photo by Mass Communication Specialist 1st Class Tiffini Jones Vanderwyst/Released

It is not clear to what extent DoD possesses the authority to mount such an attack. Williamson discusses the legal issues that obtaining such authority may entail:

• The international law doctrine of “defense in neutral territory” may sanction it where an af.mil botnet is used defensively.

• Reciprocity may represent the “bigger challenge for the U.S.” for such a strategy:


“A U.S. defensive DDOS attack on a neutral country, or on multiple neutral countries, will certainly require the U.S. to explain itself. Commanders need to be ready to disclose some facts indicating why the U.S. took action and what they did to tailor their response. Finally, the U.S. needs to be ready to consider legitimate claims for compensation, if warranted.”[79]

• A particular challenge is presented in defending against attacks from devices that adversaries have captured from the U.S. or from civilians of allied partner nations.

Conclusion

This paper has addressed the primary legal and policy challenges presented by the emergence of cyberspace as a military domain. While answers to these questions will require additional interagency cooperation, political leadership and additional research, the U.S. is moving closer to a set of legal and policy positions that will shape its cyber structure in the 21st Century.

Mr. James P. Farwell has served as a consultant to the Department of Defense and to the U.S. Strategic Command and the U.S. Special Operations Command.

Endnotes

1. The opinions expressed are purely those of the author and do not necessarily represent those of the U.S. Department of Defense or any of its agencies.

2. Martin C. Libicki, Cyberdeterrence and Cyberwar, prepared for the United States Air Force (Santa Monica: RAND Corporation, 2009): Preface.

3. Major David Willson, “A Global Problem: Cyberspace Threats Demand an International Approach,” ISSA Journal, August 2009, p. 14. Major Willson is an active duty lawyer with the U.S. Army.

4. Ibid.

5. Seymour E. Goodman and Herbert S. Lin (eds.), “Toward a Safer and More Secure Cyberspace,” National Research Council and National Academy of Engineering (Washington: National Academies Press, 2007), pp. 38-39.

6. Ellen Messmer, “U.S. Army kick-starts cyberwar machine,” CNN.com, November 22, 2000.

7. Kim Hart, “A New Breed of Hackers Tracks Online Acts of War,” Washington Post, August 27, 2008.

8. Bill Gertz, “China blocks U.S. from cyber warfare,” Washington Times, May 12, 2009.

9. Malcolm Moore, “China’s global cyber-espionage network GhostNet penetrates 103 countries,” Telegraph.co.uk, March 29, 2009. China denies participating in such an attack.

10. Dept. of Defense, Office of General Counsel, “An Assessment of International Legal Issues in Information Operations,” (1999), p. 12: www.au.af.mil/au/awc/awcgate/dod-io-legal/dod-io-legal.pdf.

11. Ellen Messmer, “U.S. Army kick-starts cyberwar machine,” CNN.com, November 22, 2000.

12. Bradley Graham, “Bush Orders Guidelines for Cyber-Warfare,” Washington Post, February 7, 2003. The guideline itself is classified.

13. Ibid.

14. “Text: Obama’s Remarks on Cyber-Security,” The New York Times, May 29, 2009. See also: David E. Sanger and John Markoff, “Obama Outlines Coordinated Cyber-Security Plan,” The New York Times, May 29, 2009.

15. David E. Sanger and Thom Shanker, “Pentagon Plans New Arm to Wage Cyberspace Wars,” The New York Times, May 28, 2009.

16. Ibid.

17. Shaun Waterman, “U.S. takes aim at cyberwarfare,” The Washington Times, July 2, 2009.

18. Ibid.

19. Ibid.

20. Ibid.

21. Ibid.

22. Ibid.

23. John Markoff and Thom Shanker, “Halted ’03 Iraq Plan Illustrates U.S. Fear of Cyberwar Risk,” The New York Times, August 2, 2009.

24. Article 51 states: “Nothing in the present Charter shall impair the inherent right of individual or collective self-defence if an armed attack occurs against a Member of the United Nations, until the Security Council has taken measures necessary to maintain international peace and security. Measures taken by Members in the exercise of this right of self-defence shall be immediately reported to the Security Council and shall not in any way affect the authority and responsibility of the Security Council under the present Charter to take at any time such action as it deems necessary in order to maintain or restore international peace and security.” Source: http://www.un.org/en/documents/charter/chapter7.shtml

25. Dept. of Defense Office of General Counsel, “An Assessment of Inter-national Legal Issues in Information Operations,” (1999), p. 12.

26. See: Treaties in Force 2009: http://www.state.gov/s/l/treaty/treaties/2009/index.htm.

27. “Declaration on Principles of Inter-national Law Concerning Friendly Relations and Cooperation among States in Accordance with the Char-ter of the United Nations,” General Assembly Resolution 2625 (1970).

28. Dept. of Defense Office of General Counsel, “An Assessment of International Legal Issues in Information Operations,” (1999), pp. 18-19. Although not cyber war, the 1986 bombing of Libyan command and leadership targets to persuade Libya to stop sponsoring terrorist attacks against U.S. interests, the 1993 attack on the Iraqi military intelligence headquarters to persuade Iraq to desist from assassination plots against George H.W. Bush, and the invasion of Iraq in 2003 to pre-empt WMDs offer classic illustrations of the U.S. posture on this precept.

29. See: “Cyber Law Update,” visited April 6, 2009, at http://cyberlawupdate.blogspot.com/2008/08/cyber-law-update-august-2008-issue-no-5.html. See also: Rex B. Hughes, “NATO and Cyber Defence, Mission Accomplished?” (April 2009), at http://www.atlcom.nl/site/english/nieuws/wp-content/Hughes.pdf; and “Convention on Cybercrime,” Council of Europe, Budapest (November 23, 2001), entered into force July 1, 2004, at http://conventions.coe.int/Treaty/EN/Treaties/Html/185.htm.


30. Dept. of Defense Office of General Counsel, “An Assessment of International Legal Issues in Information Operations,” (1999), p. 12.

31. Jeff Schogol, “Official: No options ‘off the table’ for U.S. response to cyber-attacks,” Stars and Stripes, May 8, 2009.

32. Davis Brown, “A Proposal for an International Convention To Regulate the Use of Information Systems in Armed Conflict,” 47 Harv. Int. L.J. 179 (2006), at 180.

33. Jeff Schogol, “Official: No options ‘off the table’ for U.S. response to cyber-attacks,” Stars and Stripes, May 8, 2009.

34. Dept. of Defense Office of General Counsel, “An Assessment of International Legal Issues in Information Operations,” (1999), p. 6: www.au.af.mil/au/awc/awcgate/dod-io-legal/dod-io-legal.pdf.

35. Davis Brown, “A Proposal for an International Convention To Regulate the Use of Information Systems in Armed Conflict,” 47 Harv. Int. L.J. 179 (2006), at 181. Brown notes that for a discussion of the ramifications of information warfare on jus ad bellum, see generally: Walter Gary Sharp, Sr., “Cyberspace and the Use of Force” (1999); Yoram Dinstein, “Computer Network Attacks and Self-Defense,” 76 Int’l L. Stud. 99 (2002); Daniel B. Silver, “Computer Network Attack as a Use of Force under Article 2(4) of the United Nations Charter,” 76 Int’l L. Stud. 73 (2002); Eric Talbot Jensen, “Computer Attacks on Critical National Infrastructure: A Use of Force Invoking the Right of Self-Defense,” 38 Stan. J. Int’l L. 207 (2002); Horace B. Robertson, Jr., “Self-Defense against Computer Network Attack under International Law,” 76 Int’l L. Stud. 121 (2002); Michael N. Schmitt, “Computer Network Attack and the Use of Force in International Law: Thoughts on a Normative Framework,” 37 Colum. J. Transnat’l L. 885 (1999); James P. Terry, “The Lawfulness of Attacking Computer Networks in Armed Conflict and in Self-Defense in Periods Short of Armed Conflict: What Are the Targeting Constraints?,” 169 Mil. L. Rev. 70 (2001).

36. China denied involvement with this action.

37. Duncan Hollis, “E-War rules of engagement,” Los Angeles Times, October 8, 2007.

38. See: Duncan B. Hollis, “E-war rules of engagement,” Los Angeles Times, October 8, 2007.

39. David E. Sanger and Thom Shanker, “Pentagon Plans New Arm to Wage Cyberspace Wars,” The New York Times, May 28, 2009.

40. Shaun Waterman, “U.S. takes aim at cyberwarfare,” The Washington Times, July 2, 2009.

41. Alastair Gee, “The Dark Art of Cyberwar,” Foreign Policy, November 2008.

42. John Arquilla, “Click, click… Counting down to Cyber 9/11,” SFGate.com, July 26, 2009.

43. John Arquilla, “Click, click… Counting down to Cyber 9/11,” SFGate.com, July 26, 2009. Arquilla was a member of the U.S. team and urged acceptance of the Russian offer. It was rejected, he says, because Washington felt that it was ahead and should not give up its advantage. Actually, many feel that America is not ahead today in cyber technology.

44. John Markoff and Andrew E. Kramer, “U.S. and Russia Differ on a Treaty for Cyberspace,” The New York Times, June 27, 2009.

45. Department of the Air Force, Policy Directive 51-4, “Compliance with the Law of Armed Conflict,” para. 6.5 (1993).

46. Davis Brown, “A Proposal for an International Convention To Regulate the Use of Information Systems in Armed Conflict,” 47 Harv. Int. L.J. 179 (2006), at 210; and Jeffrey T. G. Kelsey, “Hacking into International Humanitarian Law: the principles of distinction and neutrality in the age of cyber warfare,” Michigan Law Review, May 2008: http://goliath.ecnext.com/free-scripts/document_view_v3.pl?item_id=0199-7854035&format_id=XML.

47. 1907 Hague Convention V, Article 2.

48. Gregory F. Intoccia and Joe W. Moore, “Communications Technology, Warfare, and the Law: Is the Network a Weapon System?” Houston Journal of International Law, 28 (Winter 2006): http://www.entrepreneur.com/tradejournals/article/146272030.html.

49. On this point, see also: Davis Brown, “A Proposal for an International Convention To Regulate the Use of Information Systems in Armed Conflict,” 47 Harv. Int. L.J. 179 (2006), p. 182; and Michael E. Guillory, “Civilianizing the Force: Is the United States Crossing the Rubicon,” 15 A.F. L. Rev. 111, 123-130 (2001).

50. Stephen W. Korns and Joshua E. Kastenberg, “Georgia’s Cyber Left Hook,” Army.mil, April 2007; and noting Thomas C. Wingfield, “When Is a Cyber Attack an ‘Armed Attack?’” Cyber Conflict Studies Association, 1 February 2006, http://www.cyberconflict.org/pdf/WingfieldCCSAArticle1Feb06.pdf, 8; see also Kelsey, 1443; Gregory F. Intoccia and Joe W. Moore, “Communications Technology, Warfare, and the Law: Is the Network a Weapon System?” Houston Journal of International Law, 28 (Winter 2006), 469; Lawrence T. Greenberg, Seymour E. Goodman, and Kevin J. Soo Hoo, Information Warfare and International Law (Washington: National Defense Univ. Press, 1998), http://www.dodccrp.org/files/Greenberg_Law.pdf, 30-33; Hanseman, 183.

51. “Charter of the United Nations” (1945) in The United Nations and Human Rights 1945-1995 (New York: United Nations, 1995). See also: Stephen W. Korns and Joshua E. Kastenberg, “Georgia’s Cyber Left Hook,” Army.mil, April 2007.

52. Stephen W. Korns and Joshua E. Kastenberg, “Georgia’s Cyber Left Hook,” Army.mil, April 2007, citing International Telecommunication Union, “A Comparative Analysis of Cybersecurity Initiatives Worldwide” (Geneva: 10 June 2005), http://www.itu.int/osg/spu/cybersecurity/docs/Background_Paper_Comparative_Analysis_Cybersecurity_Initiatives_Worldwide.pdf, 23.


53. John Goree and Brian Feldman, “Public Network Weapon System and Method,” United States Patent no. 7,159,500 (Washington: US Patent and Trademark Office, 9 January 2007), http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=7159500.PN.&OS=PN/7159500&RS=PN/7159500, noted in Stephen W. Korns and Joshua E. Kastenberg, “Georgia’s Cyber Left Hook,” Army.mil, April 2007.

54. Stephen W. Korns and Joshua E. Kastenberg, “Georgia’s Cyber Left Hook,” Army.mil, April 2007, citing Gregory F. Intoccia and Joe Wesley Moore, “Communications Technology, Warfare, and the Law: Is the Network a Weapon System?” Houston Journal of International Law, 28 (Winter 2006): http://www.entrepreneur.com/tradejournals/article/146272030.html.

55. See: 1907 Hague Convention XIII; and J.T.G. Kelsey, “Hacking into International Humanitarian Law: the principles of distinction and neutrality in the age of cyber warfare,” Michigan Law Review, May 2008: http://goliath.ecnext.com/free-scripts/document_view_v3.pl?item_id=0199-7854035&format_id=XML, Part II.

56. 1907 Hague Convention V, Article 1; and J.T.G. Kelsey, “Hacking into International Humanitarian Law: the principles of distinction and neutrality in the age of cyber warfare,” Michigan Law Review, May 2008: http://goliath.ecnext.com/free-scripts/document_view_v3.pl?item_id=0199-7854035&format_id=XML, Part II (page number not reflected on Internet).

57. Dept. of Defense Office of General Counsel, “An Assessment of International Legal Issues in Information Operations,” (1999), p. 10: www.au.af.mil/au/awc/awcgate/dod-io-legal/dod-io-legal.pdf.

58. Dept. of Defense Office of General Counsel, “An Assessment of International Legal Issues in Information Operations,” (1999), (“There are novel features of information operations that will require expansion and interpretation of the established principles of the law of war. Nevertheless, the outcome of this process of extrapolation appears to be reasonably predictable”); Roger D. Scott, “Legal Aspects of Information Warfare: Military Disruption of Telecommunications,” 45 Naval L. Rev. 57, 59 (1998) (“Anyone with an understanding of the fundamental principles of the law of war will not need specific, ‘no-brainer’ precedents to assess the legality of proposals for information attack.”), cited in J.T.G. Kelsey, “Hacking into International Humanitarian Law: the principles of distinction and neutrality in the age of cyber warfare,” Michigan Law Review, May 2008: http://goliath.ecnext.com/free-scripts/document_view_v3.pl?item_id=0199-7854035&format_id=XML.

59. See generally Convention Respecting the Rights and Duties of Neutral Powers and Persons In Case of War on Land, Oct. 18, 1907, 36 Stat. 2310, T.S. 540 (1907 Hague Convention V); Convention Concerning the Rights and Duties of Neutral Powers in Naval War, Oct. 18, 1907, 36 Stat. 2415, T.S. 545 (1907 Hague Convention XIII).

60. Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 1996 I.C.J., 226, 257 (July 8): http://www.fas.org/nuke/control/icj/text/iunan_ijudgment_19960708_Advisory_Opinion.htm.

61. Ibid.

62. Dept. of Defense Office of General Counsel, “An Assessment of International Legal Issues in Information Operations,” (1999), p. 20.

63. Alastair Gee, “The Dark Art of Cyberwar,” Foreign Policy, November 2008.

64. Jeff Schogol, “Official: No options ‘off the table’ for U.S. response to cyber-attacks,” Stars and Stripes, May 8, 2009.

65. See, e.g., James Bamford, The Shadow Factory: The Ultra-Secret NSA from 9/11 to the Eavesdropping of America (New York: Doubleday, 2008). Bamford discusses cutting edge technology developed by companies apparently started with Israeli support that market Verint, Narus and other programs that provide a startling window into how technology enables cyber intrusions for espionage. The companies that created it market it to nations globally, including China, which has demonstrated innovative skills in taking cyber technology to new levels of sophistication.

66. Alastair Gee, “The Dark Art of Cyberwar,” Foreign Policy, November 2008.

67. Mark Rasch, “Why the Dogs of Cyberwar Stay Leashed,” Security Focus, March 24, 2003.

68. Jeff Schogol, “Official: No options ‘off the table’ for U.S. response to cyber-attacks,” Stars and Stripes, May 8, 2009.

69. Mark Rasch, “Why the Dogs of Cyberwar Stay Leashed,” Security Focus, March 24, 2003.

70. John Markoff and Thom Shanker, “Halted ’03 Iraq Plan Illustrates U.S. Fear of Cyberwar Risk,” The New York Times, August 2, 2009.

71. Charles R. Smith, “Cyber War against Iraq,” Newsmax.com, March 13, 2003.

72. John Markoff and Thom Shanker, “Halted ’03 Iraq Plan Illustrates U.S. Fear of Cyberwar Risk,” The New York Times, August 2, 2009.

73. Ibid.

74. Ibid.

75. “US military lacks strong cyberwar regulations,” E-Commerce Journal, April 30, 2009: http://www.ecommerce-journal.com/news/15051_us_military_lacks_strong_cyberwar_regulations?drgn=1.

76. John Markoff and Andrew E. Kramer, “U.S. and Russia Differ on a Treaty for Cyberspace,” The New York Times, June 27, 2009.

77. “Principles for defining cyberwar – a modest proposal,” Radical Instrument (2009): http://radicalinstrument.wordpress.com/2009/07/29/principles-for-defining-cyberwar-a-modest-proposal/.

78. Col. Charles W. Williamson III, “Carpet bombing in cyberspace: Why America needs a military botnet,” Armed Forces Journal, May 2008.

79. Ibid.


Applying Deterrence in Cyberspace

By Kevin R. Beeker, Robert F. Mills and Michael R. Grimaila

Moving “the theory of deterrence” into the world of practice is extremely challenging, and nowhere are these complex challenges more evident than in attempting to deter attacks in and through cyberspace. Quoting from a speech by Vice Admiral Carl V. Mauney, Deputy Commander, US Strategic Command:

“We face emerging forms of 21st Century warfare—transnational terrorism, cyber warfare, and counter-space warfare—which we have little experience in deterring. We need to think carefully about how deterrence will or will not apply to these threats and we need to tailor our deterrent strategy and associated capabilities accordingly. I believe deterrence does have a critical role to play in these threats.” (Mauney, 2009)

For cyberspace deterrence strategy, the list of limitations, difficulties and challenges is indeed long. Libicki (2009) raises several hard questions: How can we differentiate between spying and an attack? Is spying cause for retaliation? Can we determine who conducted the attack? Can we actually retaliate over a given offense or impose costs for cyberspace attacks? How do we avoid escalation?

This paper explores some of these concepts further. First, we introduce a common framework used for deterrence strategy in all domains and discuss how this framework can be applied to cyberspace. We then discuss how the very nature of cyberspace confounds traditional approaches to deterrence based on a “detect and preempt” strategy and advocate for a strategy based on “fight through.” Finally, we discuss how attribution, identity management, and moderating trust relationships may have significant roles in building an effective deterrence strategy.

The views expressed in this paper are those of the authors and do not reflect the official policy or position of the United States Air Force, the Department of Defense, or the U.S. Government.

Deterrence Operations Joint Operating Concept

The Deterrence Operations Joint Operating Concept (DO JOC) provides the military’s doctrinal foundation for deterrence operations (Department of Defense, 2006). The central idea of the DO JOC is to “decisively influence the adversary’s decision-making calculus in order to prevent hostile actions against US vital interests.” An adversary’s decision calculus focuses on their perception of three primary elements:

• The benefits of a course of action
• The costs of a course of action
• The consequences of restraint (i.e., costs and benefits of not taking the course of action we seek to deter)

Joint military operations and activities contribute to the “end” of deterrence by affecting the adversary’s decision calculus elements in three “ways,” namely by denying benefits, imposing costs, and encouraging restraint (Figure 1). Deterrence is successful when the perceived costs incurred by an adversary outweigh the perceived benefits in regard to the consequences of restraint (fulcrum). Deterrence fails when an adversary perceives that the benefit of taking an action outweighs any associated costs, and then commits that action. In the following sections, we will observe how the DO JOC model can be applied to develop a cyberspace deterrence strategy.
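The DO JOC calculus can be read as a simple comparison: deterrence holds while the perceived costs of acting, weighed against the value of restraint, exceed the perceived benefits. The sketch below is only a schematic of that fulcrum idea, with invented scalar values; it is not part of the DO JOC itself:

```python
# Toy illustration of the DO JOC decision calculus: an adversary acts
# when the perceived benefit of action exceeds perceived costs plus the
# net value of restraint. All scalar values are invented for illustration.
from dataclasses import dataclass

@dataclass
class AdversaryPerception:
    benefits_of_action: float   # perceived gain from attacking
    costs_of_action: float      # perceived losses (retaliation, exposure)
    value_of_restraint: float   # net benefit of NOT attacking

def is_deterred(p: AdversaryPerception) -> bool:
    """Deterrence holds when action looks no better than restraint."""
    return p.benefits_of_action - p.costs_of_action <= p.value_of_restraint

# Denying benefits (lower gain) or imposing costs tips the fulcrum.
before = AdversaryPerception(benefits_of_action=8, costs_of_action=2,
                             value_of_restraint=1)
after = AdversaryPerception(benefits_of_action=3, costs_of_action=5,
                            value_of_restraint=1)
print(is_deterred(before), is_deterred(after))  # False True
```

Denying benefits lowers the first term, imposing costs raises the second, and encouraging restraint raises the third – the three “ways” of the DO JOC.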


Figure 1: DO JOC Model of Deterrence (Okey, n.d.)

Objective: Deny Benefits

One practical way to deter an adversary who seeks to conduct behaviors unacceptable to the United States is to deny them the benefit of their actions. A prime example of this is the way that the US military deters adversaries from using chemical or biological weapons. The military equips its people with mission oriented protective posture (MOPP) gear (Figure 2) and trains them to carry out their missions despite the use of chemical weapons on the battlefield. These actions tell an adversary that it may be inconvenient to conduct our mission, but we will get it done nonetheless. The adversary may be deterred from using chemical/biological weapons because the reward they seek is denied. If the adversary is not deterred from using those weapons, our training ensures we can operate safely and continue the mission anyway.

Figure 2: Chemical/Biological Warfare Deterrence (442 Fighter Wing, 2009)

Our deterrence strategies should seek to deny the adversary benefit from their actions against us in and through cyberspace as well. Exercising and proving our ability to fight through cyber attacks will contribute to cyberspace deterrence, but how do we do this? It turns out the space community has already addressed some of these same issues. There are many parallels between space and cyberspace; both are operational environments in their own right, but they are also utility domains that enable and support operations in the air, land and sea domains. To help ensure unfettered access to space, US Strategic Command (USSTRATCOM) created an Operationally Responsive Space (ORS) capability to ensure Joint Force Commander (JFC) needs are met despite adversary attempts to disrupt our space capabilities (Magnuson, 2007). ORS consists of three tiers, as shown in Figure 3. Tier 1 involves leveraging existing capabilities to meet JFC needs, such as adjusting a satellite orbit to provide better coverage in a warfighting region. Tier 2 involves replacing a damaged satellite or providing capability within weeks. Tier 3 focuses on deploying new capabilities within one year.

Figure 3: ORS Concept (Wegner, 2009). [The figure diagrams three ORS approaches and their warfighting effects, with gaps/needs identified and prioritized by USSTRATCOM: Tier 1, “Employ it” – on-demand with existing assets (minutes to hours) to augment/surge existing capabilities; Tier 2, “Launch/deploy it” – on-call with ready-to-field assets (days to weeks) to reconstitute lost capabilities; Tier 3, “Develop it” – rapid transition from development to delivery of new or modified capabilities (months, not years) to fill unanticipated gaps, exploit new technical/operational innovations, respond to unforeseen or episodic events, and enhance survivability and deterrence.]

One might ask how we are posturing ourselves to continue operating given our total dependence upon cyberspace—a domain that is perhaps more contested than space. In the same way that the ORS office is designed to provide quick reconstitution upon the destruction of our space assets, is there anything that can be done in cyberspace to quickly reconstitute after the destruction or compromise of key cyberspace infrastructure? Creating an operationally responsive cyberspace (ORC) capability would be a step in the right direction (Figure 4).

Figure 4: ORC Contributes to Benefit Denial

Although there are similarities between space and cyberspace, there are some distinct differences. Cyberspace is a created domain—created by governments, businesses, organizations, and individuals—and the cost of developing a cyberspace infrastructure is significantly less than developing a space infrastructure. Within the Department of Defense (DoD) alone, there are many entities responsible for creating, sustaining and defending the cyberspace domain, including the services, combatant commands and combat support agencies. Defending and sustaining cyberspace is a shared responsibility, and focusing the capability to reconstitute our cyber infrastructure in a single ORC office, even in a limited sense, could be problematic. Nonetheless, we need to be thinking about how ORC principles could be achieved.

ORC would be based upon principles similar to ORS, organized loosely around three tiers as shown in Figure 5. ORC could still have the same three-tier focus as ORS, but instead of being directly involved in recovery and reconstitution efforts themselves, ORC would focus more on coordinating and encouraging activities throughout government and our national critical infrastructure elements.

Figure 5: Operationally Responsive Cyberspace

Tier 1 – Operationally Responsive Space: retasking of a remote sensing satellite to provide reconnaissance photos. Operationally Responsive Cyberspace: reallocating satellite or “backbone” network bandwidth towards urgent communications need; requesting additional bandwidth from civilian communications satellites; assigning extra computing processors across organizational boundaries.

Tier 2 – Operationally Responsive Space: build a new satellite with off-the-shelf components in weeks or days. Operationally Responsive Cyberspace: development of robust cyber infrastructure continuity of operations plans to recover and reconstitute quickly; use of virtual servers and desktop thin clients; data storage back-up sites; Net Force maneuver concepts.

Tier 3 – Operationally Responsive Space: transition of new ISR/communications capability to operational use in less than one year. Operationally Responsive Cyberspace: continual transition of new cyber reconstitution/recovery technologies to operational use.

Re-tasking cyber infrastructure assets can be extremely complicated due to highly shared resources and the intertwined nature of cyberspace. Cyberspace assets such as satellite communications and Internet “backbone” pipes are used by a variety of customers spanning organizational boundaries. There is rarely a single authority or process that can reallocate resources quickly to support emergency requests. An ORC-type office could have the responsibility to establish procedures and define authorities for making decisions to reallocate/re-task cyber infrastructure assets across organizational boundaries at the national level (e.g., private sector, government, military, intelligence, etc.). At lower levels, single authorities may be easier to identify, such as in the DoD, where USSTRATCOM has been given the responsibility for securing and defending the Global Information Grid (GIG). Even then, however, the authority to secure and defend the GIG may not provide the authority to reallocate assets within the GIG, and similar reallocation functions are likely needed at lower organizational levels.

Tier 2 is focused on reconstituting cyberspace capabilities to prove and demonstrate our ability to fight through an attack with little or no degradation in capability. ORC must focus on all elements of the cyberspace infrastructure. This includes not only network links, computers and software, but also the information itself.
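Read operationally, the tiers in Figure 5 differ mainly in how quickly a lost capability must be restored. The following is a toy sketch of that mapping, with thresholds invented here rather than drawn from ORS/ORC doctrine:

```python
# Toy mapping from required restoration timeline to ORC tier, following
# the three-tier concept above. Thresholds are invented for illustration.
def orc_tier(hours_until_needed: float) -> str:
    if hours_until_needed <= 24:
        return "Tier 1: employ existing assets (reallocate bandwidth, CPUs)"
    if hours_until_needed <= 24 * 21:  # roughly a few weeks
        return "Tier 2: deploy ready-to-field assets (COOP, backups, thin clients)"
    return "Tier 3: develop and transition new capabilities (months, not years)"

print(orc_tier(4))        # Tier 1 ...
print(orc_tier(24 * 7))   # Tier 2 ...
print(orc_tier(24 * 90))  # Tier 3 ...
```

The point of the sketch is simply that each tier trades speed for depth of reconstitution, which is why all three are needed.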

“IT guys” at all levels are primarily concerned with keeping the network operational. This activity is necessary but not sufficient, and an often-overlooked part of the mission assurance puzzle is business continuity planning (BCP), or continuity of operations planning (COOP). BCP—the equivalent of training in MOPP gear—mitigates operational risks when confronted with disaster, data loss, cyber attacks, or other serious events. A classic example of inadequate continuity planning was evident after Hurricane Katrina, during which emergency response efforts were crippled by a lack of communications: companies were unable to contact their employees to coordinate a response, and municipal web sites normally used to disseminate disaster recovery and other information were unavailable for weeks (Landry and Koger, 2006).

Mission assurance planning is not new and should be included in any organization’s continuity plans, especially those dealing with critical processes and functions. This planning requires significant introspection by the end users and should not be left to the “IT guys” to sort out. After all, the whole point of building cyberspace is to enable information sharing among diverse and distributed users, and end users can best understand and appreciate the context of the information traversing the network.

Unfortunately, BCP is often lacking for cyber infrastructure (Vijayan, 2005; U.S. Government Accounting Office, 2006). The 2009 National Infrastructure Protection Plan and its underlying Sector-Specific Plans attempted to remedy this situation (Department of Homeland Security, 2009), but the primary emphasis in these documents continues to be on protection of cyber assets rather than recovery. As a result, these documents contain little detail or guidance for generating robust continuity plans. A strategically focused ORC office would be an ideal location to develop BCP guidance for the DoD and national critical infrastructure. The office could also provide experts to work with governmental and private organizations in developing such plans.

Lastly, a Tier 3 capability in cyberspace is critical due to the rapidly changing technology used to create the cyberspace domain. Governmental reviews continue to highlight the need for research and innovation, including President Obama’s recent 60-day cyberspace policy review, which has as part of its near-term action plan the following:

Develop a framework for research and development strategies that focus on game-changing technologies that have the potential to enhance the security, reliability, resilience, and trustworthiness of digital infrastructure; provide the research community access to event data to facilitate developing tools, testing theories, and identifying workable solutions. (White House, 2009)

It is important to note that “resiliency” is included in the research efforts, to include “developing options for additional services the Federal government could acquire or direct investments the government could make to enhance the survivability of communications during a time of natural disaster, crisis or conflict” (White House, 2009). The report also includes a need to “coordinate with international partners and standards bodies to support next-generation national security/emergency preparedness communications capabilities in a globally distributed next-generation environment.” An ORC office could take the lead in increasing the emphasis on research into cyber infrastructure recovery and reconstitution, and transitioning new technologies, capabilities, and processes to both the governmental and private sectors. This would contribute to deterrence by proving to our adversaries our commitment to recovery and reconstitution capabilities and mission accomplishment.



Objective: Impose Costs

The second part of the DO JOC strategy for deterrence is to impose costs—i.e., make the adversary expend a lot of resources in order to achieve the desired goal. If an adversary perceives that their preparations for attack are likely to be detected and preempted by the United States, they may be deterred from initiating the attack (Department of Defense, 2006). The benefit of conducting the attack is therefore denied by preemption.

This is one area where the realities of the physical world and the characteristics of cyberspace differ enough to make this “detect and preempt” strategy less effective for cyberspace. An attack conducted through cyberspace does not have the same frictions with regard to time and space. Ones and zeros traverse the globe at “network speed,” thereby decreasing the amount of time available to observe the indications and warnings of a cyber attack. When a packet shows up at the firewall, it is extremely difficult to determine if it is a “good” packet or an “evil” packet, and if it is evil, there is no warning… the attack is already underway.
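The “good packet versus evil packet” problem can be shown directly: a connection request sent by a hostile scanner can be byte-for-byte identical to one sent by a legitimate user. A minimal sketch, with invented field values:

```python
# Why a firewall cannot tell a "good" packet from an "evil" one by
# inspection alone: a reconnaissance SYN and a browser's SYN can be
# byte-for-byte identical. Field values are invented for illustration.
import struct

def tcp_syn_header(src_port: int, dst_port: int, seq: int) -> bytes:
    """Pack a minimal 20-byte TCP header with only the SYN flag set."""
    offset_flags = (5 << 12) | 0x002          # data offset 5 words, SYN
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,    # source / destination ports
                       seq, 0,                # sequence / ack numbers
                       offset_flags, 65535,   # flags / window size
                       0, 0)                  # checksum / urgent pointer

browser_syn = tcp_syn_header(49152, 80, seq=12345)  # a user opening a page
scanner_syn = tcp_syn_header(49152, 80, seq=12345)  # a hostile port scan
print(browser_syn == scanner_syn)  # True: intent is invisible on the wire
```

Nothing in the bytes distinguishes intent; discrimination has to come from context, such as patterns across many packets over time.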

Aside from the timeliness issue, there are other issues to consider. Key identifiers to predict, detect, track, and describe an incoming cyber attack are minimal compared to the physical world. There are no missile plumes to detect in cyberspace. Named areas of interest are difficult to identify. The sheer volume of network traffic on the Internet makes discrimination between legitimate traffic and an attack difficult.

The things done to conduct espionage in the cyberspace domain are the same things done in attacking a system: scanning, gaining access, gaining control of processes, and leaving a backdoor for future access. Today’s espionage could easily provide the access for tomorrow’s attack. Discriminating between preparation for an attack, preparation for exploitation, and legitimate use of cyberspace is extremely challenging.

General Kevin P. Chilton, Commander, USSTRATCOM, offered a solution to some of these problems:

“But you know, at the end of the day I believe we ultimately have to be even faster than network speed if we’re going to defend this network appropriately. How do you do that? I’m not defying the laws of physics here. You do it by focused high-tech intelligence, focused all-source intelligence, that [sic] tries to get you out and anticipate threats before they arrive. You have to be able to anticipate them and when you can preempt those threats and preempt those attacks before they arrive at your base, post, camp or station, or at your laptop on your desk.” (Chilton, 2009)

In this view we get around the inherent characteristics of cyberspace through our intelligence—our ability to predict or anticipate threats. The implication is that our intelligence agents are pre-positioned in adversary systems to such a degree that they can provide intelligence that allows preemption. This puts a “detect and preempt” strategy back into play as a viable means to deny benefit to an adversary attempting to conduct an attack against the critical infrastructure of the United States via cyberspace.

There are some distinct challenges to this approach. First, our capabilities must be credible and demonstrable for an adversary to be deterred by “detect and preempt.” Recall the example of chemical warfare deterrence: when a military exercise takes place where MOPP gear is worn, Public Affairs and the news media are there to capture the event and disseminate the pictures and story. These messages are not just for internal consumption—they also communicate to external audiences. Of course, any public demonstration must always be weighed against intelligence and security considerations.

Another issue is the question of cyber egalitarianism. The very nature of cyberspace seems to necessitate attacking and exploiting adversary systems to gain intelligence in order to pursue a “detect and preempt” strategy of deterrence. Further, preemption in the form of a cyber response requires long scanning and planning times. It requires maintaining a covert presence on the adversary’s systems. Is it reasonable to maintain an obtrusive intelligence presence—one that might be construed as an attack in and of itself—in another’s systems and not expect the adversary to attempt the same amount of surveillance in our own? Having an expectation that an adversary will refrain from obtaining the same intelligence and presence on our systems is philosophically hypocritical and not practical to implement. Working to increase adversary situational awareness in order to prevent miscalculation is always a challenging issue when formulating deterrence strategy. The United States needs to engage the most capable cyber-nations to establish out-of-bounds areas, or “red lines”—like critical infrastructure such as electric plants and water treatment facilities. This would allow the United States and cyber adversaries to pursue “detect and preempt” in other cyber venues such as government, military and intelligence circles.

Overall, “detect and preempt” as a cyberspace deterrence strategy has serious limitations. To be successful, this strategy relies upon intrusive “high-tech” intelligence gathering for detection, and it relies upon the ability to impose cost, or hold at risk, those systems and individuals which might perpetrate an attack. In addition, this strategy has the drawback of actually condoning some types of cyber attacks/exploitation in order to detect and preempt others. Other strategies which enable cost imposition and benefit denial are necessary for cyberspace deterrence.


Objective: Encourage Restraint

The third portion of the DO JOC framework is encouraging restraint. The consequences of restraint (COR) fulcrum can be moved to enhance the effectiveness of strategies to impose costs and deny benefits to an adversary, as shown in Figure 6. Tailoring cyberspace deterrence policies which capitalize on this advantage creates unique opportunities. Three potential ways to influence the fulcrum are attribution, identity management and moderating trust relationships.

Figure 6: Influencing the Cyberspace COR Fulcrum (adapted from Okey)

Attribution

The most significant deterrence challenge posed by the threat of cyberspace attack is the perceived difficulty of attributing such attacks to a specific attacker, be it a state or nonstate actor. (Chilton and Weaver, 2009)

Attributing cyber attacks and intrusions to their source is critical to any deterrence strategy—how can we respond effectively if we do not know who did what? Attributing activities in cyberspace is difficult (but not insurmountable) because (1) it cannot be accomplished by purely technological means, and (2) cyber attacks cross jurisdictional boundaries and require trust and assistance to achieve attribution (Hunker, Hutchinson and Margulies, 2008; Wozniak and Liles, 2009). Discerning the difference between a system failure and a cyber attack can be problematic, and of course there are also concerns regarding open and anonymous communications via the Internet.

Attributing cyber attacks will require modifications to laws and the development of procedures and policies that are not currently in place. Tracing an attack can involve multiple jurisdictions and nations. Attributing attacks will necessitate establishing cyberspace norms and treaties for mutual cooperation. Procedures to provide attribution must be developed, enhanced and normalized. No single technique will work in every possible situation; a combination of techniques will likely be needed to determine attribution. In the end, the ability to provide quick and accurate attribution will shift the “consequences of restraint” fulcrum point, making cost imposition easier and more effective.

Identity Management

Trust is fundamental to the successful operation of a decentralized system, such as that composed by the Internet in the cyberspace domain. Exploiting the characteristics of a decentralized system will be pivotal in developing concepts that will help defend cyberspace. The concepts of attribution and trust are bridged by a third concept, identity management (IM), which holds significant potential for cyberspace deterrence.

As a side note, the government and military are largely hierarchically organized and centralized entities. As such, they struggle to bring order to the seeming chaos of the decentralized wilds of the cyberspace domain. In a decentralized system, no single entity owns the cyberspace defense problem. If we are going to organize and fight in a network-centric environment and develop network-centered deterrence strategies, we must blend aspects of a centralized organization with elements of the decentralized—a hybrid. This will likely prove difficult for those favoring centralized command and control structures.

A 2008 presidential study noted that intrusions into DoD networks fell by more than fifty percent after a Common Access Card (CAC) infrastructure was implemented (Center for Strategic and International Studies, 2008). While the CAC is not a silver-bullet solution, it is a critical step in knowing who is in our network at any given time. The Bush administration mandated CAC implementation in Homeland Security Presidential Directive 12 (HSPD-12). The DoD began using the CAC in 1999, and by 2007 over 13 million cards had been issued (Aitoro, 2008). Unfortunately, the rest of the government is having difficulty keeping up with HSPD-12 mandates. Background checks for federal government employees with less than 15 years of service, and new identity cards with fingerprint data, were to be completed by 27 October 2007, and no agency met the deadline (Aitoro, 2008). The Office of Management and Budget started to require quarterly progress reports to track compliance, but as of October 2008, only 29% of employees and contractors (1,593,191) had received their new identification cards (Office of Management and Budget, 2008).

Identity management will become even more important in the future. In a way, identity management allows for some degree of understanding of an individual’s documented attributes. An identity credential is normally issued by a government agency to document certain attributes of an individual—e.g., their age, eye color, and status. The government agency is responsible for ensuring you are you—normally through a birth certificate and social security number (which is also based upon a birth certificate). Thus the government agency is affirming that the attributes listed can be trusted. At its heart, identity management is fundamentally about linking attribution and trust.

Whom do we trust with access to our critical information? Whom do we trust to join our network? Are our systems using the correct patches and security updates? Can we even tell? Decentralized systems (like those found in cyberspace) operate best when those who join are trusted. But cyber-adversaries seek to exploit these trust relationships to gain access, steal information, or otherwise wreak havoc. How do we deter such activities? Identity management will be part of the solution because it reduces some of the anonymity currently found in cyberspace. IM attempts to confirm the identity of those we allow into our systems by linking their virtual presence with their physical identity and attributes. For example, potential adversaries will have to prove they have a required attribute to gain access.
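As a minimal sketch of that access decision (assuming a hypothetical credential format and issuer list; nothing here is drawn from the CAC’s actual implementation), the gate reduces to a chain of trust, revocation and attribute tests:

    # Illustrative sketch only: identity management as a gate on network
    # access. A credential issued by a trusted authority binds a physical
    # identity to documented attributes. All names are hypothetical.
    TRUSTED_ISSUERS = {"dod-ca"}   # authorities whose attribute assertions we accept

    def grant_access(credential: dict, required_attribute: str) -> bool:
        # Trust test: was the credential issued by an authority we trust?
        if credential.get("issuer") not in TRUSTED_ISSUERS:
            return False
        # Revocation test: a compromised card's privileges must be revocable.
        if credential.get("revoked", False):
            return False
        # Attribution test: does the holder prove the required attribute?
        return required_attribute in credential.get("attributes", [])

    card = {"issuer": "dod-ca", "revoked": False, "attributes": ["cleared"]}
    print(grant_access(card, "cleared"))   # True: trusted, valid, attribute proven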


Moderating Trust Relationships

Another potential concept which will contribute to deterrence in cyberspace involves moderating trust relationships. A break in trust can result in shutting out or disallowing access. Individuals who misuse their privileges and permissions in a DoD computer system are subject to having those privileges revoked (in addition to other administrative or disciplinary actions). Likewise, if an individual’s CAC were somehow compromised by an adversary, that card’s privileges should be revoked to prevent its misuse.

But moderating trust relationships holds promise for application of cyberspace deterrence strategy in potentially greater ways. In November 2008, a US-based web hosting firm, McColo, was disconnected because it was responsible for more than 75% of the junk email sent globally (Krebs, 2008). McColo (whose servers were located in California) was effectively blacklisted out of service.

“Peering” refers to how Internet Service Providers (ISPs) connect with each other to exchange Internet traffic. Several computer security researchers were able to detail how McColo contributed to cybercrime and spam, and convinced those with whom McColo connected to “de-peer” the company. Overnight, 40 websites hosting child pornography were essentially disconnected from the rest of the Internet, botnet masters no longer had control of their legions of computers, and spam decreased 50-80% (Kirk, 2008). This is a powerful idea that can contribute greatly to deterrence. Admittedly, spam levels soon returned to their old levels, and the perpetrators were not deterred from finding other ways to conduct their activities (they probably have continuity plans as well). Perhaps, if the de-peering of ISPs like McColo were the norm instead of the exception, this approach could prove a more robust deterrence strategy.

What is interesting about this scenario is that it wasn’t the government that forced McColo out of business. It was a conglomeration of people sharing a common interest in protecting the Internet community at large. Engendering support for cybersecurity from all participants in cyberspace will require a shift in the way we educate and train our people. Adapting the approach of the Internet security researchers responsible for bringing down McColo may have even greater success in developing cyberspace deterrence strategy. When undesirable activity cannot be deterred in cyberspace, the world community can work together to address the problem, perhaps with more success than if we operated independently.

This powerful idea can be instrumental in shifting the “consequences of restraint” fulcrum. Nation-states who participate in upholding the standards and laws of right conduct in cyberspace will be granted access—they will be trusted. Those who assist with identifying and attributing criminal and warlike behavior will be rewarded with continued connectivity, perhaps even some level of “favored status.” Those who do not observe the norms and rules of the cyber-community will be disconnected. Everyone will have a responsibility to monitor the community and enforce the rules. It is incumbent upon the United States to participate in the decisions that are made across the globe as norms and practices are established. We must provide leadership by example in this area to gain success.

Conclusion

Without a doubt, practical implementation of cyberspace deterrence strategy is difficult and complex. Determining appropriate ways and means to modify or change adversary behavior is not easy, and turning deterrence theory into operational reality requires a shift in both mindset and culture. In the world of military and nuclear deterrence, practical implementation has meant willingness to exercise, and when called upon, execute capabilities. It means advertising or communicating these capabilities to the adversary to affect their decision-making calculus.

Some capabilities such as “detect and preempt” may be necessary to defend the nation and its critical infrastructure, but they have limited effect in cyberspace. Other concepts that hold promise for contributing to cyberspace deterrence include a robust “fight through” capability, attribution, identity management, and moderating trust relationships. Much work remains to build these capabilities.

Major Kevin R. Beeker is a senior A/OA-10 combat pilot, who also completed an exchange tour flying F/A-18s with the United States Navy. He graduated from the United States Air Force Academy with a Bachelor of Science in computer science in June 1996. Major Beeker is a June 2009 graduate of the Air Force Institute of Technology’s Cyber Warfare Intermediate Developmental Education Program, and is now the J2 Targeting Chief for the Joint Functional Component Command for Network Warfare (JFCC-NW) at Ft Meade, Maryland.

Dr. Robert F. Mills is an Associate Professor of Electrical Engineering at the Air Force Institute of Technology (AFIT), Wright-Patterson AFB, OH. He teaches graduate courses and leads sponsored research in support of AFIT’s cyber operations and warfare program. His research interests include network management and security, communications systems, cyber warfare, and systems engineering. He retired from active duty in the US Air Force after serving 21 years as a communications officer.

Dr. Michael R. Grimaila is an Associate Professor of Information Resource Management at the Air Force Institute of Technology (AFIT), Wright-Patterson AFB, OH. He teaches graduate courses and leads sponsored research in support of AFIT’s cyber operations and warfare program. His research interests include risk management, network security, and cyber warfare.

References

442 Fighter Wing (2009), http://www.442fw.afrc.af.mil.

Aitoro, J. (2008) “HSPD-12”, nextgov.com, February 10, 2008, http://www.nextgov.com/the_basics/tb_20080610_8037.php.

Chilton, K. (2009) Speech, “2009 Cyberspace Symposium”, April 7, 2009, http://www.stratcom.mil/speeches/23/.

Chilton, K., and Weaver, G. (2009) “Waging Deterrence in the Twenty-First Century”, Strategic Studies Quarterly, Spring 2009, http://www.au.af.mil/au/ssq/2009/Spring/chilton.pdf.


Center for Strategic and International Studies (2008) Securing Cyberspace for the 44th Presidency, December 2008.

Department of Defense (2006) Deterrence Operations Joint Operating Concept, Version 2.0, December 2006, http://www.dtic.mil/futurejointwarfare/concepts/do_joc_v20.doc.

Department of Homeland Security (2009) National Infrastructure Protection Plan: Partnering to Enhance Protection and Resiliency, 2009.

Hunker, J., Hutchinson, B., and Margulies, J. (2008) “Role and Challenges for Sufficient Cyber-Attack Attribution”, Institute for Information Infrastructure Protection, January 2008, http://www.thei3p.org/docs/publications/whitepaper-attribution.pdf.

Kirk, J. (2008) “ISP Cut off From Internet After Security Concerns”, PC World, November 12, 2008, http://www.pcworld.com/businesscenter/article/153734/isp_cut_off_from_internet_after_security_concerns.html.

Krebs, B. (2008) “Major Source of Online Scams and Spams Knocked Offline”, Washington Post, November 11, 2008, http://voices.washingtonpost.com/securityfix/2008/11/major_source_of_online_scams_a.html.

Landry, B., and Koger, S. (2006) “Dispelling 10 Common Disaster Recovery Myths: Lessons Learned from Hurricane Katrina and Other Disasters”, ACM Journal on Educational Resources in Computing 6, no. 4, December 2006.

Libicki, M. (2009) “Deterrence in Cyberspace”, High Frontier, May 2009, http://www.afspc.af.mil/shared/media/document/AFD-090519-102.pdf.

Magnuson, S. (2007) “‘Responsive Space’ Office Must Quickly Prove Itself, Proponents Say”, National Defense, December 2007.

Mauney, C. (2009) Speech, “Space Weapons in the 21st Century”, http://www.stratcom.mil/speeches/19/.

Office of Management and Budget (2008) “OMB Reports Significant HSPD-12 Implementation Progress but Areas for Improvement Identified”, Office of Management and Budget, October 31, 2008, http://www.whitehouse.gov/omb/pubpress/2008/103108_hspd12.html.

Okey, H. (n.d.) “Strategic Deterrence (SD) Joint Operating Concept (JOC) Version 2.0”, http://www.dtic.mil/futurejointwarfare/strategic/sd_joc.ppt.

U.S. Government Accountability Office (2006) Internet Infrastructure—Challenges in Developing a Public/Private Recovery Plan, GAO-06-1100T, September 13, 2006.

Vijayan, J. (2005) “Data Security Risks Missing from Disaster Recovery Plans”, Computer World, Vol. 39, no. 41, October 10, 2005.

Wegner, D. (2009) Briefing: “Operationally Responsive Space: Meeting the Joint Force Commanders’ Needs”, http://www.responsivespace.com/ors/reference/ORS%20Office%20Overview_PA_Cleared%20notes.pdf.

White House (2009) Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure, 2009.

Wozniak, J., and Liles, S. (2009) “Political and Technical Roadblocks to Cyber Attack Attribution”, IO Journal, April 2009.



The Four Strategies of Information Warfare and their Applications
By Carlo Kopp

Abstract

The four canonical strategies of Information Warfare provide a rigorous and mathematically supportable theoretical basis for the analysis of a wide range of problems involving deceptions and other forms of manipulating information in survival contests. These strategies have been shown to provide a robust model for dealing with problems spanning biological, social, computing and military systems. This paper discusses the definitions of the four canonical strategies, defines test criteria for the resolution of classification problems which might appear ambiguous, and explores a number of practical examples.

Introduction

The term “Information Warfare” was coined by Thomas Rona during the 1980s to describe a range of offensive and defensive actions involving the use of information. As such the term encompassed the full gamut of techniques whereby information was employed to gain a competitive advantage in a conflict or dispute.

This rather soft definition of “Information Warfare” was followed in 1997 by a more formal definition by the US Air Force, focused primarily on military applications in social systems (17). The newer definition has provided a basis for the subsequent mathematical formalisms by Borden and Kopp (1, 7).

The problems which are encompassed within the scope of “Information Warfare” are fundamental to nature, and despite being commonly treated as artifacts of social systems, are applicable to any competitive survival situation where information is exploited by multiple players. It is important to observe that in a contest between two machines, examples being an item of radar equipment and a hostile jammer, both players obey the very same constraints as biological organisms exploiting information while competing for survival.

Mathematical formalisms for “Information Warfare” did not emerge until the turn of the millennium, when Borden and Kopp independently identified the “four canonical strategies of Information Warfare”, relating these to Shannon’s information theory.

Since then, further research has aimed to establish the range of environments in which the canonical strategies apply, and to relate them to established research in areas such as game theory, the Observation Orientation Decision Action (OODA) loop, and the theory of deception, propaganda, and marketing (3, 4, 5, 6, 7, 8, 9, 10).

Perhaps the most remarkable finding of research following the initial mathematical formalism for the four canonical strategies is their pervasive impact in evolutionary biology. Very little effort was required to establish that the four strategies are indeed a biological survival mechanism, evolved specifically for the purpose of gaining an advantage in a survival game (8). In the evolutionary arms race pursued by all organisms, the use of information becomes a powerful offensive and defensive weapon against competitors and predators.

The four canonical strategies of Information Warfare provide a common mathematical model which can be applied to problems of this kind regardless of whether they arise in biological, military/social or machine systems. This is important for at least two reasons.

The first reason is that the increasing penetration of computing and networking equipment into a range of military systems increasingly constrains the functions and behaviours of these systems, making it in turn increasingly difficult to understand how and where Information Operations conducted by an opponent may impact the function of such military systems. A mathematical model which applies equally well to the machine and human components of the system is thus essential to solving such problems.

The second reason is the increasing level of integration and automation, and the increasing diversity of functions appearing in numerous items of military equipment, requiring a common model for analyzing and understanding problems which arise from an opponent’s offensive and defensive uses of information.

Modern multimode radar equipment presents a good case study, as such a device may be employed in its “traditional” role as an active sensor, yet may also be employed as a long range high data rate datalink, a precision direction finding receiver, a high power narrowband jammer, or, if its power-aperture performance is high enough, even as a directed energy weapon. Each of these distinct operating regimes has unique implications in terms of both offensive and defensive use of information in combat, and a coherent logic must be applied in designing the rules for invoking and operating these functional regimes. A common and mathematically robust model is therefore essential.

The four canonical strategies of information warfare provide the necessary structure for modeling and understanding such systems.

The Four Canonical Strategies of Information Warfare

The four canonical strategies of Information Warfare can be defined thus (7, 8, 9):



Degradation or Destruction (also Denial of Information), i.e. concealment and camouflage, or stealth. Degradation or Destruction amounts to making the signal sufficiently noise-like that a receiver cannot discern its presence from that of the noise in the channel. Degradation attacks can be further divided into ‘active’ and ‘passive’ forms, depending on whether the attacker generates an interfering signal, or hides the real signal.

Corruption (also Deception and Mimicry), i.e. the insertion of intentionally misleading information. Corruption amounts to mimicking a known signal so well that a receiver cannot distinguish the deceptive signal from the real signal.

Denial (also Disruption and Destruction), i.e. the insertion of information which produces a dysfunction inside the opponent’s system, or alternately the outright destruction of the victim receiver subsystem. Denial via disruption or destruction amounts to injecting so much noise into the channel that the receiver cannot demodulate the signal, or rendering the receiver permanently inoperative.

Denial (also Subversion), i.e. insertion of information which triggers a self-destructive process in the opponent’s target system. Denial via subversion at the simplest level amounts to the diversion of the thread of execution within a Turing machine which maps onto the functional behaviour of the victim system, i.e. surreptitiously flipping specific bits on the tape to alter the behaviour of the victim Turing machine.

Figure 1. A communication channel (C. E. Shannon): an information source passes a message to a transmitter; the transmitted signal, perturbed by a noise source, arrives as the received signal at the receiver, which delivers the message to its destination.

These four strategies are atomic, in the sense that they cannot be further subdivided or broken down. An information warfare attack can be a simple attack, using a single strategy, or more often in practice a compound attack, where a number of strategies are applied concurrently or sequentially, to achieve the intended effect upon the victim (10).

A problem observed repeatedly in the application of the four canonical strategies to practical problems, especially forensic analyses of past attacks, is the classification of attack types into specific canonical strategies.

The boundary conditions between these strategies are best understood by exploring Mills’ paradox.

Mills’ paradox was uncovered in 2002, during the research effort then performed by Kopp and Mills, which established the evolutionary and biological basis of Information Warfare. The problem which repeatedly arose was that of deciding which of the canonical strategies applied to a specific evolved feature in an organism. Survey of a large number of biological examples repeatedly displayed what appeared to be ambiguity as to whether a feature constituted denial via subversion or corruption, or degradation or corruption. Only careful study of the examples yielded a clear answer.

The result of this work was the definition of Mills’ paradox, so named because Mills was the first to observe the problem. It has since been observed in a wide range of other classification problems dealt with by this author.

To best appreciate the importance of Mills’ Paradox, it is necessary to closely explore the relationship between Shannon’s model and the four strategies.

The Shannon model has some caveats. One is that what constitutes information in a message depends on the ability of an entity or receiver to understand that information. For the purpose of this discussion, it is assumed that the receiver does understand the information.

If a message contains information, an entity receiving it and understanding it will experience a state change which alters its level of uncertainty. The less likely the message, the greater its information content H(X), which is articulated in Shannon’s entropy theorem:

H(X) = -\sum_{i} p_i \log_2 p_i    (1)
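For example, a source emitting one of four equally likely symbols (each p_i = 1/4) has H(X) = -4 \cdot \tfrac{1}{4} \log_2 \tfrac{1}{4} = 2 bits per symbol; a rarer message carries proportionately more information.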

The relationship of most interest in the context of Information Warfare is Shannon’s channel capacity theorem (refer to Figure 1) (12). It states that the capacity of a channel to carry information depends on the magnitude of interfering noise in the channel, and the bandwidth of the channel:

C = W \log_2 \left( 1 + \frac{P}{N} \right)    (2)

where C is the channel capacity, W the bandwidth, P the signal power and N the noise power.

If an attacker intends to manipulate the flow of information to an advantage, the game will always revolve around controlling the usable capacity of the channel, C.

To achieve this, the attacker must manipulate the remaining variables in the equation: bandwidth, W, and signal power versus noise power, P/N.

Three of the four canonical strategies involve direct manipulation of bandwidth, signal power and noise.

The degradation strategy thus involves manipulation of the P/N term in Shannon’s equation. The flow of information between the source and destination is impaired or even stopped by burying the signal in noise and driving C → 0.

There are two forms of this strategy, the first being the ‘camouflage/stealth’ or ‘passive’ form, the second being the ‘jamming’ or ‘active’ form.

The first form involves forcing P → 0 to force C → 0. In effect the signal is made so faint it cannot be distinguished from the noise floor of the receiver. The second form involves the injection of an interfering signal into the channel, to make N >> P and thus force C → 0. In effect the interfering signal drowns out the real signal flowing across the channel.
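A minimal numerical sketch of the two forms, using equation (2) and assuming an arbitrary 1 MHz channel at 20 dB signal-to-noise ratio (the code and figures are illustrative only):

    import math

    def capacity(w_hz, p_signal, n_noise):
        # Shannon channel capacity, equation (2): C = W * log2(1 + P/N)
        return w_hz * math.log2(1.0 + p_signal / n_noise)

    W, P, N = 1.0e6, 1.0, 0.01            # 1 MHz channel, P/N = 100 (20 dB)
    print(capacity(W, P, N))              # ~6.7e6 bit/s: unimpaired channel
    print(capacity(W, P * 1.0e-4, N))     # passive form: P -> 0 drives C -> 0
    print(capacity(W, P, N + 100.0))      # active form: N >> P drives C -> 0

Both degraded cases collapse the usable capacity by well over two orders of magnitude, though only the active form announces itself to the victim.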

There is an important distinction between the active and passive forms of the degradation strategy. In the passive form of this attack, the victim will most likely be unaware of the attack, since the signal is submerged in noise and cannot be detected. This form is therefore ‘covert’ in the sense that no information is conveyed to the victim. In the active form of this attack, the signal which jams or interferes with the messages carried by the channel will be detected by the victim. Therefore this form is ‘overt’ in the sense that information is conveyed to the victim, telling the victim that an attack on the channel is taking place. Both forms are widely used in biological survival contests and in social conflicts (8, 10, 11, 12).

The corruption strategy involves the substitution of a valid message in the channel with a deceptive message, created to mimic the appearance of a real message. In terms of the Shannon equation, P_actual is replaced with P_mimic, while the W and N terms remain unimpaired. The victim receiver cannot then distinguish the deception from a real message, and accepts corrupted information as the intended information. Success requires that the deceptive message emulates the real message well enough to deceive the victim. Corruption is inherently ‘covert’ since it fails in the event of detection by the victim receiver. Corruption is used almost as frequently as degradation in biological, military and social conflicts.

The degradation and corruption strategies both focus on the P and N terms in the Shannon equation.

The denial via destruction strategy manipulates the W term, by effecting an attack on the transmission link or receiver to deny the reception of any messages, by removing the means of providing bandwidth W. This means that W → 0 or W = 0 if the attack is effective.

The denial via destruction strategy is inherently ‘overt’ in that the victim will know of the attack very quickly, as the channel or receiver is being attacked. A denial attack may be temporary or persistent in effect, depending on how the channel or receiver is attacked. Numerous biological, military and social examples exist.

Denial via subversion differs from the first three strategies in that it does not involve an attack on the message, its contents or the channel/receiver. Subversive attacks involve the insertion of information which triggers a self-destructive process in the victim system or organism. Good examples exist in the biological, military and social domains. It is typically supported by a corruption attack to gain access (10).

Mills’ Paradox becomes apparent when we pose the following four questions:

1. How do we distinguish a Denial via subversion attack from a Corruption attack?
2. How do we distinguish a destructive Denial via subversion attack from a Denial via destruction attack?
3. How do we distinguish a Degradation attack from a mimicking Corruption attack?
4. How do we distinguish an intensive active Degradation attack from a soft kill Denial via destruction attack?

Degradation attacks can always be easily distinguished from Denial via subversion attacks, and Corruption attacks can easily be distinguished from Denial via destruction attacks. The same is not true for the remaining four boundary conditions. Herein lies the paradox: of the six possible boundary conditions, two are so distinct that no ambiguity can exist, yet four require very careful analysis to establish exactly where one strategy begins and the other ends.

Distinguishing a Denial via subversion attack from a mimicking Corruption attack requires an understanding of how the victim system processes the deceptive input message. In both instances a message is being used to deceive the victim system to the advantage of the attacker. The distinction always lies in whether the victim is responding to the attack in a voluntary or involuntary fashion. A corruption attack alters the victim’s perception of external reality and the victim then responds to the environmental change with a voluntary action of some kind. This is quite distinct from the denial via subversion regime of attack, in which some involuntary internal mechanism is triggered to cause detriment to the victim. Biological examples can often prove difficult to separate.

Figure 2. Mills’ Paradox (Author). [The figure depicts the four canonical strategies – Degradation (bury signal in noise), Corruption (mimic real signal), Denial via Destruction (blind/saturate receiver) and Denial via Subversion (subvert system function) – joined by the four ambiguous boundaries: “Saturation or Swamping with Noise?”, “Mimicking Background Noise?”, “Mimicry or Subversion?” and “Destruction or Subversion?”.]

Distinguishing a destructive Denial via subversion attack from a Denial via destruction attack can also be problematic, where the end state is a destructive hard kill of the victim system. Superficially the effect is the same – the victim receiver or system is no longer operational and this was the result of an attack. The distinction between these two attacks lies in whether the destruction of the victim was a result of the expenditure of energy by the attacker, or the victim itself. A destructive denial attack invariably sees the attacker delivering the destructive effect via external means, whereas subversion sees the victim deliver the destructive effect against itself, once triggered. The victim’s role in both attacks is involuntary.

Distinguishing a passive Degradation attack from a mimicking Corruption attack can frequently present difficulties, especially in biological systems. Is not a passive attack by degradation an instance of the attacker mimicking noise within the channel? The boundary condition lies in whether the victim recognizes the presence of the attacker as something it is not, or does not perceive the attacker at all. Mimicry which is designed to camouflage an attacker against the background noise is a passive degradation attack, since the victim cannot perceive the attacker.

Distinguishing an active Degradation attack from a soft kill Denial via destruction attack may also be superficially difficult. In both instances the channel has been rendered unusable by a perceived attack on the receiver. The boundary condition can be established by determining whether the receiver remains functional or not. An overloading of the receiver to deny its use is quite distinct from a channel which is unusable because it is saturated with a jamming signal.

To date a formal definition of the boundary conditions has not been formulated. This paper does so, in the following manner:

                       | Degradation                    | Corruption                       | Denial via Destruction              | Denial via Subversion
Degradation            | X                              | Is effect perceived?             | Effect on channel or receiver?      | X
Corruption             | Is effect perceived?           | X                                | X                                   | Voluntary or involuntary effect?
Denial via Destruction | Effect on channel or receiver? | X                                | X                                   | Attacker or victim supplied effect?
Denial via Subversion  | X                              | Voluntary or involuntary effect? | Attacker or victim supplied effect? | X


Application of these boundary condition test criteria in a rigorous manner resolves any perceived ambiguity in the identification of the attack, and does so decisively.

In considering why no ambiguities in identification exist between the Degradation and Denial via subversion attacks, and the Corruption and Denial via destruction attacks, respectively, it is clear that the respective effects of these disparate strategies are mutually exclusive.

Degradation requires a functional victim system to achieve its effect, but Denial via subversion results in the destruction or serious functional impairment of the victim system. The same dichotomy exists between Corruption and Denial via destruction, as a deception cannot be effected if the victim system loses its channel or receiver.

Another interesting dichotomy arises if we opt to sort the strategies by the measure of how covert they are. Passive forms of Degradation and all forms of Corruption are inherently covert. Conversely, both forms of Denial are ultimately overt, insofar as the victim system is damaged or impaired in function. In a sense, both Denial strategies are centred in damaging or impairing the victim’s apparatus for information gathering and processing, whereas the Degradation and Corruption strategies are centred in compromising the information itself.

A small number of case studies will be explored to demonstrate the application of the four boundary condition tests. For generality, one example is biological, the other technological.

In the biological domain, two groups of species present excellent examples. The first group are the mantids, or Mantodea; two species of interest are the Brazilian Acanthops falcataria, which hides as a dead leaf, and the Indian Humbertiella ceylonica, which hides against tree bark. The second group of interest are the stick and leaf insects, or Phasmatodea. These slow moving herbivores have evolved camouflage in their shape, colour, texture and movement, to hide from predators by resembling dead or live foliage (8).

Both the Mantodea and Phasmatodea employ surface camouflage which unambiguously qualifies as passive Degradation. What is less clear initially is the complex shaping which is designed to mimic that of leaves, bark, branches or other forms of foliage. Often this mimicry extends to emulating the motion characteristic of plants. Superficially, an ambiguity arises between Degradation and Corruption.

Application of the test as to whether the victim perceives the attacker as another insect which it is not, or dismisses the attacker as an artifact of background clutter, resolves the initial ambiguity.

In the domain of military technology, the area of missile proximity fuse jamming presents interesting issues. Proximity fuses can be jammed in a variety of ways, for instance to produce incorrect range readings to effect early or late warhead initiation, or to drown the target in background noise.

In assessing what canonical strategies are applicable, it is first necessary to separate noise jamming, which is clearly active Degradation, from range deception jamming. The latter then presents some superficial ambiguity as to whether it represents Corruption or Denial via subversion. Strict application of the test criteria indicates that it is Corruption, as the premature or late initiation arises as a result of a voluntary action on the part of the victim’s internal decision logic.

Other interesting examples arise with forms of military camouflage. The direct analogue to the biological phasmid problem is the sniper dressed in a “Ghillie suit”, which is designed to emulate the shape, texture and colour of foliage. Conventional patterned camouflage suits, typically printed with patterns which emulate the colouring, contrast and other properties of foliage, present a clear case of a passive degradation attack. The Ghillie suit presents the sniper as a bush or portion thereof. As with the phasmid example, the Ghillie suit also presents as a passive degradation attack, as the victim cannot differentiate the attacker, i.e. the sniper, from the background clutter in the observed scene.

Applications of the Canonical Strategies

To date the canonical strategies have been applied to the analysis of a number of military, intelligence and computer security problems.

In the latter category, Islam et al employed the four canonical strategies to classify forms of information attack on self-forming ad hoc networks (5).

Borden has applied the strategies to modeling electronic warfare engagements and the provision of supporting decision logic (3).

Brumley et al applied the canonical strategies as a tool for analyzing the manner in which the Observation and Orientation phases of Boyd’s OODA loop, widely used for modeling military engagements and systems, were impaired by information attacks. The strategies were also employed in the analysis of self-deception, a recurring problem in military operations and intelligence analysis (2, 3, 4).

Kopp has applied the strategies as a tool for the forensic analysis and classification of a range of intelligence deceptions, and propaganda operations (10, 11).

An area where the four canonical strategies have yet to be applied extensively is in the analysis of problems arising in complex multimode equipment performing a wide range of functions, and in the definition of the control logic in such designs. The earlier stated example of a multimode radar presents a good case study for such an application of the strategies.

The latest generation of X-band multimode radars for fighter aircraft is the specific instance of interest, due to the diversity of functions embedded in the design, but also due to the need for high levels of automation in such designs. The latter is because such designs are intended to be operated by a single pilot who is unlikely to be extensively trained either in electronic warfare techniques or information operations. As a result, the decision logic embedded in the radar may in many situations wholly determine the behaviour of the system when under electronic warfare or information attack (14).

As stated, such radars may have up to five distinct regimes of operation or function, which may be dynamically interleaved in time:

1. Active Sensor
2. Long Range High Data Rate Datalink
3. Precision Direction Finding Receiver
4. High Power Narrowband Jammer
5. Directed Energy Weapon.

As such this class of radar can be used offensively to execute a range of information attacks, but will also be a potential victim of such attacks.

Used offensively, the radar regimes can be classified thus:

Strategy      | Degradation   | Corruption              | Denial (1)                            | Denial (2)
Active Sensor | -             | Mimicry of Enemy Radars | -                                     | -
Datalink      | -             | -                       | Saturation of Enemy Network Receivers | Penetration of Enemy Networks; Eliciting Datalink Sync Messages
DF Receiver   | -             | -                       | -                                     | -
Jammer        | Noise Jamming | Deception Jamming       | Saturation of Enemy Radar Receivers   | -
DEW           | -             | -                       | Lethal Electronic Attack              | -

Concurrently, the radar may be subjected to the following forms of attack:

Strategy      | Degradation   | Corruption        | Denial (1)                                      | Denial (2)
Active Sensor | Noise Jamming | Deception Jamming | Saturation of Enemy Radar Receivers; DEW Attack | -
Datalink      | Noise Jamming | Deception Jamming | Saturation of Enemy Network Receivers           | Penetration of Enemy Networks
DF Receiver   | -             | Deception Jamming | -                                               | -
Jammer        | -             | -                 | Anti-Radiation Missile Attack                   | -
DEW           | -             | -                 | Anti-Radiation Missile Attack                   | -

Some interesting problems arise in the definition of the decision logic for selection of operating regimes, and optimal responses to specific modes of attack.

A good example is the scenario where the radar is being employed to jam an opposing fighter’s X-band radar. Conventional reasoning is that the high power-aperture of the radar would make it a highly effective means of effecting both noise, and range/angle deception jamming against the opposing radar. What is apparent from the matrices of possible attacks is that, operating as a jammer, the radar is emitting a waveform which is constrained by the parameters of the emissions of the radar which is being jammed. This opens up a major vulnerability to an anti-radiation missile attack by the fighter being jammed, as the missile can be easily programmed to pursue an emitter with specific emission parameters.

Another mode of attack is where the offensive player uses the radar’s datalink mode to tease an opposing radar into transmitting a datalink synchronization message, with the aim of performing a precision angle and/or range measurement against the victim radar. Such a mode of attack would be particularly useful against a low observable victim aircraft employing its radar in a Low Probability of Intercept search mode.
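To make the flavour of such control logic concrete, a minimal sketch of a rule set covering the two scenarios just described follows; the function, parameters and rules are hypothetical illustrations, not an actual radar implementation:

    def select_regime(arm_threat: bool, opponent_radar_active: bool,
                      lpi_search: bool, datalink_sync_request: bool) -> str:
        # Jamming ties our waveform to the victim radar's parameters, so a
        # detected anti-radiation missile shot forces us off the air.
        if arm_threat:
            return "cease jamming; revert to passive DF receiver"
        # Answering a datalink sync request during covert LPI search would
        # hand the opponent a precision angle/range fix on us.
        if lpi_search and datalink_sync_request:
            return "suppress sync reply; continue LPI search"
        # Otherwise trade jamming benefit against sensor duties over time.
        if opponent_radar_active:
            return "interleave deception jamming with active sensor dwells"
        return "active sensor"

    print(select_regime(arm_threat=False, opponent_radar_active=True,
                        lpi_search=False, datalink_sync_request=False))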

This paper does not aim to perform an exhaustive analysis of the possible ways in which information attacks may be performed by or against such radars. The aim is to illustrate that, with increasing levels of functionality and automation, consideration of the full gamut of possible attacks or exploitation techniques is essential to the production of robust control logic for automated regimes of operation, and that the four canonical strategies provide a powerful model for understanding and defining such control logic.

A more detailed model might also incorporate the operator of the radar as a component in the total system, and consider the various forms of attack against the operator.

Conclusions

This paper discusses the four canonical strategies of information warfare. It has employed Mills’ paradox as a means of providing a better insight into the strategies, their respective forms, and the defining boundary conditions for these. Specific tests are defined to facilitate rapid classification of the strategies.

Prior research covering applications for these strategies is briefly surveyed, and the example of a multimode X-band radar is employed to demonstrate the utility of the four strategies as a model for understanding and defining forms of information attack and exploitation which might be employed against such a radar.

Carlo Kopp is a Computer Scientist at Monash University, Australia, and leads capability research at the independent Air Power Australia think tank which he co-founded in 2004. He holds a B.E. (Honours) degree in Electrical Engineering, and MSc and PhD degrees in Computer Science, the latter from Monash University. He is best known in the IO community for his 1990s series of papers on the E-bomb. His most significant IO research has been a series of papers published between 2000 and 2010, exploring the information theory and game theory underpinning the Borden-Kopp model of the four canonical strategies of IW.

Endnotes
1. Borden A. (1999) What is Information Warfare? Aerospace Power Chronicles, United States Air Force, Air University, Maxwell AFB, Contributor’s Corner, URL: http://www.airpower.maxwell.af.mil/airchronicles/cc/borden.html (Date accessed: 01/09/04).
2. Borden A., The Dialectics of Information - A Framework, Information and Security, Vol. 4, 2000.
3. Brumley L, Kopp C, Korb K B, Causes and effects of perception errors, Journal of Information Warfare, vol 5, ed 3, School of Computer and Information Science, Edith Cowan University, Perth WA Australia, 2006, pp. 41-53.
4. Brumley L, Kopp C, Korb K B, The orientation step of the OODA loop and information warfare, Proceedings of the 7th Australian Information Warfare and Security Conference, 4-5 December 2006, School of Computer and Information Science, Edith Cowan University, Perth WA Australia, pp. 18-25.
5. Brumley L, Kopp C, Korb K B, Misperception, Self-Deception and Information Warfare, in G Pye and M Warren (eds), Conference Proceedings of the 6th Australian Information Warfare & Security Conference (IWAR 2005), Geelong, VIC, Australia, School of Information Systems, Deakin University, Geelong, VIC, Australia, ISBN: 1 74156 028 4, pp 125-130.
6. Islam M.M., Pose R.D., Kopp C., Suburban Ad-Hoc Networks in Information Warfare, in G Pye and M Warren (eds), Conference Proceedings of the 6th Australian Information Warfare & Security Conference (IWAR 2005), Geelong, VIC, Australia, School of Information Systems, Deakin University, Geelong, VIC, Australia, ISBN: 1 74156 028 4, pp 71-80.
7. Kopp C. (2000), A fundamental paradigm of infowar, Systems, Auscom Publishing Pty Ltd, Sydney, NSW, February 2000, pp 47-55, URL: http://www.ausairpower.net/OSR-0200.html (Date accessed: 08/08/2008).
8. Kopp C. and Mills B.I. (2002) Information Warfare and Evolution, Proceedings of the 3rd Australian Information Warfare & Security Conference, ECU, Perth, November 2002, pp. 352-360.
9. Kopp C. (2003) Shannon, Hypergames and Information Warfare, Journal of Information Warfare, 2, 2: 108-118.
10. Kopp C. (2005) The Analysis of Compound Information Warfare Strategies, Proceedings of the 6th Australian Information Warfare Conference, Deakin University, Geelong, November 2005.
11. Kopp C., Classical Deception Techniques and Perception Management vs. the Four Strategies of Information Warfare, in G Pye and M Warren (eds), Conference Proceedings of the 6th Australian Information Warfare & Security Conference (IWAR 2005), Geelong, VIC, Australia, School of Information Systems, Deakin University, Geelong, VIC, Australia, ISBN: 1 74156 028 4, pp 81-89.
12. Kopp C., Considerations on deception techniques used in political and product marketing, Proceedings of the 7th Australian Information Warfare and Security Conference, 4-5 December 2006, School of Computer and Information Science, Edith Cowan University, Perth WA Australia, pp. 62-71.
13. Kopp C. (2006) CSE 468 Information Conflict, Lecture Notes, Clayton School of Information Technology, Monash University, URL: http://www.csse.monash.edu.au/courseware/cse468/subject-info.html (Date accessed: 03/05/2006).
14. Lynch D and Kopp C, Multifunctional radar systems for fighter aircraft, in Radar Handbook, ed Merrill I Skolnik, McGraw Hill Companies, Columbus OH USA, pp. 1-46.
15. Shannon C. E., A mathematical theory of communication, Bell System Technical Journal, vol. 27, pp. 379-423 and 623-656, July and October 1948, URL: http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html (Date accessed: 03/05/2006).
16. Schwartau W, Information Warfare: Chaos on the Electronic Superhighway, ISBN 1-56025-088-7, First Edition, Thunders Mouth Press, 1994.
17. Widnall S. E., Fogelman R. R. (1997), Cornerstones of Information Warfare, Doctrine/Policy Document, United States Air Force.


Information Security within DOD Supply Chains
By Lt Col Brian R. Salmans, USAF

The integration of logistics and information and communication systems (ICS) creates many opportunities and efficiencies for supply chain networks and provides opportunities for organizations to achieve competitive advantages. In a globally competitive and interconnected world, the information flows brought about from the merger of the supply chain and ICS are essential for successful logistical operations. Indeed, the synergies resulting from effective use of ICS within DOD and private sector supply chains have led to military logistics being a force multiplier and a strategic competitive advantage in military operations worldwide. However, this integration has also brought with it a new set of information security challenges. These security risks present themselves in the form of information security vulnerabilities which individuals, organizations, and nations anywhere in the world can exploit. This situation is complicated by multiple tiers of third party logistics suppliers (3PL) over which the DOD has little or no control regarding information security policies and practices. The connectivity of these systems on the Internet substantially increases the risk to the operations of any organization connected within the DOD supply chain. Similarly, the integrity and accuracy of the data within supply chain networks are increasingly at risk as connections to the Internet and to additional supply partners multiply. The DOD must take the lead in aggressively defending its information flows and myriad supply chain systems against this threat. Conducting a risk analysis, and then following this with an effective security policy and implementation of information security processes and technological solutions, is an essential strategy to ensure supply chains continue to create competitive advantages for organizations.

The threats faced by the DOD and its partners within their mutual supply chains are many, varied, and continually evolving. They include the accidental loss of data, malicious users, hackers, terrorists, foreign intelligence collection, targeted information warfare attacks, malicious code such as viruses or worms, and denial of service attacks. These threats can come from a variety of actors including terrorists, countries, competing organizations, hackers, and disgruntled employees. In order to counter these threats and mitigate the risks from interconnectivity with ICS, organizations must conduct effective risk management processes and implement appropriate information security practices and safeguards. Failure to do so can lead to supply chain disruptions and possibly failure of the relationships constituting those supply chains. The reliance of the DOD logistics system on private industry supply chains makes this a national issue requiring a public and private relationship to protect these critical supply chains.
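A minimal sketch of one such risk management calculation, using the common expected-loss form (all names and figures below are hypothetical, not drawn from any DOD process):

    # Illustrative only: rank supply chain partners by expected annual loss
    # so scarce security investment goes where the risk is highest.
    def risk_score(threat_likelihood: float, vulnerability: float,
                   impact_cost: float) -> float:
        # Expected loss = likelihood of attempt x chance it succeeds x cost.
        return threat_likelihood * vulnerability * impact_cost

    suppliers = {
        "3PL-alpha": risk_score(0.30, 0.8, 2_000_000),  # small 3PL, weak controls
        "3PL-bravo": risk_score(0.30, 0.2, 2_000_000),  # same threat, hardened
    }
    for name, score in sorted(suppliers.items(), key=lambda kv: -kv[1]):
        print(name, round(score))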

This combination of supply chain systems and a high threat, high risk environment on the Internet has garnered national level attention in the United States. The Joint Economic Committee of the United States Congress designated two specific logistic areas as critical infrastructures: 1) oil and gas storage and distribution; and 2) transportation (2002). They defined critical infrastructure as “those industries, institutions, and distribution networks and systems that provide a continual flow of the goods and services essential to the nation’s defense and economic security and the health, welfare, and safety of its citizens” (Joint Economic Committee, 2002, 12). This report outlined a critical infrastructure assurance plan to create a national strategy to guard against and react to cyber attacks on these areas (Joint Economic Committee, 2002).

Integration of Information Systems and the Supply Chain

Risks

The gains in efficiency, competitiveness, profitability, and customer service are an irresistible pull for organizations in integrating ICS with logistics processes. However, with this increased interconnectivity of systems through the Internet comes increased risk: successful information security breaches cause disruptions whose supply chain impacts are magnified by organizations’ reliance on the technology. Additionally, computer or cyber attacks can target the information systems supporting the logistics and supply chain directly, or can use those information systems to increase the magnitude of an attack conducted in parallel with a physical attack. Moreover, the risks associated with information security can impact the adoption and implementation of new technologies like radio frequency identification (RFID) technology (Fish & Forrest, 2007).

Challenges to Implementing Information Security in Supply Chains

A critical challenge for the DOD in maintaining a secure supply chain is the role and performance of third party logistics providers (3PLs). As logistics components such as delivery systems or returns management become as important as, and indistinguishable from, the product itself, the DOD has begun to use third party logistics providers to deliver these and other services. Reasons for this trend include an emphasis on keeping the DOD focused on core warfighter competencies, cost savings, entry into a certain geographic area (by using the third party logistics provider's connections or cultural expertise within that area), or access to needed expertise outside the DOD.

When the DOD attempts to establish a certain level of information security across the supply chain network, it may realistically have little or no influence on third party logistics providers, even when contractually obligating a vendor to maintain a level of security. The challenge is even more difficult when partnering with smaller third party logistics providers, for whom information systems and security expenditures are a larger portion of their budgets. Research has indicated that smaller provider firms are slower to adopt and use advanced information and communication systems (Evangelista & Sweeney, 2006).

The Threat Environment

Leadership, managers, and information systems and security staff must understand the threats facing the supply chain in order to implement the appropriate levels of information security. Information security cannot be treated as just a technical issue. As the commander of United States Strategic Command, General Chilton, emphasized recently, problems within the cyber domain are within the warfighter's domain, and they require a commander's attention (Chilton, 2009). Before discussing the methods of intrusion into information systems, the threat environment itself must be understood. A critical component of risk is the threat. When trying to assess risk, it is exceedingly difficult, if not impossible, to determine the type of person or entity that may attack a supply chain. The possible motivations and goals are similarly endless.

One important consideration in understanding the threats to the information security of the DOD's supply chain is the context of geopolitical upheaval. A recent RAND report identified an anti-access strategy in which China would target United States logistics and information systems in order to disrupt United States logistic support and thereby undermine United States combat capabilities (Cliff, Burles, Chase, Eaton, & Pollpeter, 2007). The report goes further, describing Chinese researchers' belief that a wartime goal is denying information to enemies and that supply systems could become targeting priorities. Finally, the RAND report clearly identifies Internet attacks on logistics systems as elements of the overall Chinese strategy, in which "the effectiveness of such efforts will depend largely on exploiting poor information security practices" (Cliff et al., 2007, p. 101). An anti-access strategy by the Chinese government involving disrupting US military operations through cyber attacks on the DOD supply chain and information flows is consistent with the influence of Sun Tzu on the Chinese military (Davenport, 2009). This matters given the large number of logistics organizations either conducting business directly with the DOD or involved in closely associated industries such as transportation.

Another aspect of the threat environment for logistics and supply chain information system security is terrorism. With the extensive use of information systems in the global transportation system, terrorists may attempt to exploit them for their political cause. Examples include tampering with electronic seals or radio frequency identification (RFID) devices to make a particular intermodal container appear to contain innocuous goods from an innocuous source, improving its chances of clearing United States Customs checks. The threat from terrorism is identified in a National Research Council report on the cybersecurity of freight information systems, which stated: "The freight transportation industry appears to offer unusual potential for both economic and physical damage from terrorist cyberattacks" (National Research Council, 2002, pp. 1-3).

Many additional actors pose threats to the information security of logistics information systems, each with their own motives. They include reporters looking for leads or inside stories, disgruntled employees, competitors vying for bids or contracts, foreign countries conducting industrial espionage, stock analysts attempting to gain insider information, and hacktivists who compromise information systems to embarrass or discredit the DOD.

In addition to external threats, the DOD must be aware of internal threats to the supply chain as well. Whether intentionally or not, insiders are a significant source of information security breaches. As reported by the CSI/FBI Computer Crime and Security Survey, computer security attacks from insiders occur about as often as external attacks (Gordon, Loeb, Lucyshyn, & Richardson, 2006). These internal threats can be unintentional mistakes by users, such as opening an e-mail attachment and unleashing a virus, leaking data by releasing proprietary information to third parties (by an e-mail attachment or on portable media such as a USB flash or 'thumb' drive), posting proprietary information on a publicly accessible Web site, or allowing a malicious outsider access to the internal organizational network by downloading a music file or program with hidden malicious content. Conversely, an insider can have malicious intent and breach information security by these same or additional methods. The important consideration for protecting the information security of logistics information systems is that the threat can come from all levels and from everywhere, inside and outside the network. This perspective shapes the risk management process and the preventive actions taken. Risks to information security also include natural and man-made disasters such as hurricanes, flooding, or chemical spills, but this paper is primarily concerned with external and internal malicious activities by persons or other organizations.

US Air Force Staff Sgt. Melissa Alcantara, a computer systems operator from the 163rd Communications Flight, 163rd Reconnaissance Wing, uses network servers to load security patches and monitor network usage of computers within the Wing and geographically separated units at March Air Reserve Base, Calif., in 2008. US Air Force photo by Val Gempis/Released

Information Security Intrusion Methods

The methods intruders can use to compromise the information security of a supply chain are numerous and evolve constantly along with technology in order to overcome the security measures put in place to stop them. These methods include obtaining (sniffing) passwords used over the supply chain network and cracking them, spoofing techniques that attempt to pose as a legitimate supply chain partner system or web page, and denial of service attacks in which attackers inundate the networks and systems of a supply chain with traffic in order to slow, degrade, or stop the information flows (Warren & Hutchinson, 2000). A malicious hacker can also attempt to identify and exploit vulnerabilities within the hardware and software of a supply chain in order to gain access to the network. These methods of exploiting risks have been identified by others as important concerns within supply chains (Chou et al., 2004; Cliff et al., 2007; National Research Council, 2002).
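As a concrete illustration of the last of these methods, the following minimal sketch flags a possible denial of service condition by counting requests from a single source over a sliding time window. The threshold and window length are hypothetical values chosen for illustration, not parameters of any DOD system.

```python
from collections import deque
from time import time

# Hypothetical threshold: more than 500 requests from one source
# inside a 10-second window is treated as a possible flood.
WINDOW_SECONDS = 10
MAX_REQUESTS = 500

class FloodDetector:
    """Naive sliding-window rate check for a single traffic source."""

    def __init__(self):
        self.timestamps = deque()

    def record_request(self, now=None):
        """Record one request; return True if the rate looks like a flood."""
        now = time() if now is None else now
        self.timestamps.append(now)
        # Drop requests that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > WINDOW_SECONDS:
            self.timestamps.popleft()
        return len(self.timestamps) > MAX_REQUESTS

detector = FloodDetector()
alerts = [detector.record_request(now=t * 0.01) for t in range(600)]
print(any(alerts))  # True: 600 requests in about 6 seconds trips the alarm
```

A real intrusion detection system correlates many more signals (source diversity, protocol anomalies, payload signatures), but this windowed-rate idea is the core of most flood alarms.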

Impacts to the Supply Chain

These attack methods can cause damage in numerous ways. An outsider gaining access to systems within a DOD supply chain could destroy or alter data, or obtain proprietary information that is valuable to competitors or whose disclosure would adversely impact trust within the supply chain. If it were known that an intruder had penetrated a supply chain network, but unknown whether any data (and which data) had been manipulated, the entire data contents of the supply chain would have to be considered suspect. In a similar fashion, there is heightened concern about intruders gaining access to manifest data or RFID tags (such as those describing the contents of a container on a ship) and altering the data to get past United States Customs officials. If the intruder operates stealthily, the intrusion may not be discovered for years, if ever. Denial of service attacks could so degrade a supply chain by slowing or stopping information flows that the DOD and its partners in the supply chain would be forced to use alternative methods of communication or look for other partners. These events would lead to detrimental consequences within supply chains: increased costs, interruptions to delivery schedules and other flows, problems with customer service, and increased uncertainty about all aspects of, and information within, the supply chain.

The response of organizations within the DOD to a computer intrusion can oftentimes be worse than the actual attack, amounting to a self-imposed denial of service. A successful (and perhaps relatively benign) defacing of a web site within a DOD supply chain could lead to an immediate shuttering of the entire network to allow a complete assessment, purge, and correction of the incident. This type of heavy-handed overreaction has been evident at numerous DOD installations, where various computer incidents and security violations have led to "disconnecting" the installation from the Internet and imposing strict rules of operation that reduce the efficiency and effectiveness of information systems and their users.

Countering the Threat

Risk Analysis and Risk Management

An efficient and effective supply chain with consistent financial, product, and information flows requires information security safeguards at all levels. This is an especially difficult challenge in supply chains, since the entire network is typically not under the control of one organization. The DOD must decide how to motivate partners to reduce and avoid risk. When considering information security, all players, including supply chain partners, suppliers, and service providers, must be evaluated. To undertake this daunting task, the DOD should start with a risk analysis. A risk analysis helps identify and prioritize the sources and nature of the risks relevant to the supply chain and then sets the groundwork for the implementation of information security processes and technology. The risk analysis must go beyond the first tier to all tiers of a supply chain (Kiser & Cantrell, 2006). It is invaluable to the organization in managing risk, since realistically eliminating risk is impossible. The risk referred to here consists of the threat (or uncertainty) combined with the vulnerability. Security in any system should be proportional to the risks associated with it.
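To make the arithmetic of such an analysis concrete, the sketch below scores hypothetical supply chain nodes by combining threat and vulnerability estimates, following the article's framing, and sorts them so that mitigation effort can be allocated in proportion to risk. The node names, tiers, and scores are invented for illustration; they are not drawn from any actual DOD assessment.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str             # supply chain participant (hypothetical)
    tier: int             # 1 = direct supplier, 2+ = sub-tier providers
    threat: float         # estimated threat level, 0 (none) to 1 (severe)
    vulnerability: float  # estimated vulnerability, 0 (none) to 1 (severe)

def risk(node: Node) -> float:
    # Following the text's framing, risk combines the threat (or
    # uncertainty) with the vulnerability; a multiplicative form is
    # also common in practice.
    return node.threat + node.vulnerability

nodes = [
    Node("prime contractor", tier=1, threat=0.6, vulnerability=0.2),
    Node("regional 3PL", tier=2, threat=0.4, vulnerability=0.7),
    Node("parts distributor", tier=3, threat=0.3, vulnerability=0.9),
]

# Rank all tiers together so mitigation effort is proportional to risk.
for node in sorted(nodes, key=risk, reverse=True):
    print(f"tier {node.tier}: {node.name} (risk {risk(node):.1f})")
```

In this toy example the sub-tier nodes outrank the prime contractor once vulnerability is counted, which is one reason the analysis must reach beyond the first tier (Kiser & Cantrell, 2006).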

Similarly, the DOD must recognize the tradeoff between functionality and security. The implementation of information security measures throughout a supply chain must be considered in the context of the tradeoffs made within logistics to achieve a given objective. The DOD must improve security throughout the supply chain, but not at the price of exceedingly high operating costs and barriers, or of unacceptably reducing the integration of the supply chain. The increased access and transparency of many DOD supply chains has led to increased efficiencies throughout the DOD logistics system. Unlike inventory carrying costs, where there is a give and take over pushing them off to the supplier, security cannot and should not be pushed off to other suppliers or customers. Appropriate information security must be established evenly along the entire length of the supply chain and with all partners (Sarathy, 2006). The DOD must also recognize that some of the firms within its supply chain, especially the smaller ones, may not have the expertise or resources to provide an appropriate level of security.

A risk analysis will provide a realistic assessment of the true heterogeneity of the real-world supply chains organizations implement. In a supply chain, especially a large, global one, there can be a vast disparity among the many relationships between the DOD and its various suppliers and customers. Moreover, the criticality of the data within the information flows is equally diverse. For instance, the information seen by a customer within a DOD supply chain, such as an F-22 mechanic tracking a part, is different from the information a supplier within that supply chain, such as Lockheed Martin, would see. Further, the nature of their access to DOD supply chain systems is completely different: Lockheed Martin, being a critical supplier of F-22 parts, potentially has more access, and thus more avenues for information security violations via this communication network, than the mechanic on the tarmac. This heterogeneity within supply chains therefore requires different levels of security and trust. A risk analysis can identify these characteristics within the supply chain and help organizational leadership and security personnel create an appropriate security architecture that reflects them (Kolluru & Meredith, 2001). A one-size-fits-all approach will not work.

Risk Mitigation

Security Policy

After the initial risk assessment, risk mitigation steps must be taken as part of the overall risk management process. Creating a security policy is the first and most important step once a risk analysis has been completed (Baskerville & Siponen, 2002). A security policy serves as a guide and roadmap for an organization's complete security program (Dubin, 2005). Research has shown that an emphasis on the creation and implementation of a security policy is critical to begin protecting networks (Whitman, 2004). The security policy lays out the vision for information security along with responsibilities, information system usage rules, penalties for violations, employee information security requirements, business continuity planning, and an incident response action plan. Security measures such as stronger passwords, keeping computer systems up to date with the latest security patches, establishing computer usage policies, compartmentalized access control based on need-to-know (the principle of least privilege), and security education of employees are documented in the security policy (Warren & Hutchinson, 2000; Dubin, 2005). Maintaining an effective back-up system is essential as well: in case of an information security breach, the data on the back-up systems may be the only reliable data available for recovery operations.
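As a small illustration of the least-privilege principle just mentioned, the sketch below grants each role only the data categories its job requires and denies everything else by default. The roles and categories are hypothetical placeholders, not an actual DOD access model.

```python
# Hypothetical need-to-know table: each role sees only the data
# categories its job requires; everything else is denied by default.
ACCESS = {
    "maintainer":      {"part_status"},
    "supplier":        {"part_status", "order_history"},
    "logistics_staff": {"part_status", "order_history", "shipment_manifest"},
}

def can_read(role: str, category: str) -> bool:
    """Deny by default: unknown roles and unlisted categories get nothing."""
    return category in ACCESS.get(role, set())

assert can_read("supplier", "order_history")
assert not can_read("maintainer", "shipment_manifest")  # least privilege
assert not can_read("visitor", "part_status")           # unknown role denied
```

The design choice worth noting is the default: access is granted only by explicit listing, so an omission fails safe rather than open.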

Thoroughly documenting and practicing a response plan for an information security breach is also an essential part of a security policy. The thoroughness of the response plan contributes to the durability of a supply chain and minimizes its downtime after an information security incident. Moreover, the plan can help preserve data important to law enforcement in pursuing and prosecuting the intruders. As discussed earlier, a self-imposed denial of service stemming from an overreaction to a security incident can cause more disruption and adverse effects than the incident itself.

Another key component of successful information security plans and policies, whether in supply chains or not, is to design the supply chain with information security in mind from the beginning. Applying information security after a system has already been designed and implemented is difficult and expensive, and can lead to less than optimal implementations. Additionally, the security design should take a layered approach, the so-called defense-in-depth strategy. It is often said that a network is only as secure as its weakest link. A layered approach to security incorporates redundancy into the supply chain and, in case of an information security breach, can compartmentalize the intrusion and prevent its spread to other parts of the supply chain network.

In a similar manner, to implement information security plans and architecture successfully across the supply chain, the DOD should involve supply chain partners in the process from the beginning and continually throughout the relationship. Maintaining a level of information security across the supply chain requires extensive cooperation. The DOD must practice information sharing across the supply chain to guard against security degradation over time and to keep safeguards current as the threat picture changes (Sarathy, 2006).

Finally, the last and probably most important component of successful implementation of information security plans and policies is executive leadership involvement. Leadership sets the tone for security and the level of strictness or laxity within the organization and supply chain. Moreover, as the commander of United States Strategic Command stated, commanders "need to review the maintenance statistics and readiness of our cyber networks" (Chilton, 2009, p. 8). The amount of resources allocated to security is governed by the importance leadership places on it. Commanders play a pivotal role in attaining an appropriate level of information security with all partners across the supply chain. Applying the appropriate mix of rewards and punishments, and backing that push and pull with resources and guidance for supply chain partners, will also fall to executive leadership.

US Navy Information Systems Technician Seaman Dwight Ogle inspects the black audio switch in the combat systems message center aboard the aircraft carrier USS Nimitz (CVN 68), under way in the Indian Ocean, October 2009. Operations in the US 5th Fleet area of operations are focused on reassuring regional partners of the United States' commitment to security. US Navy photo by Mass Communication Specialist 3rd Class John Phillip Wagner Jr./Released

Additional Mitigation Steps

In addition to implementing a security plan, there are other important risk mitigation steps organizations should take to improve the information security of their supply chains. Each of the following mitigation techniques must be considered within the context of the risk management process and the nature of the information systems infrastructure and supply chain partner interconnectivity. First, the supply chain network should be segmented based on sensitivity, community of interest, and need to know. It is not necessary for everyone in an organization, or for every partner organization in a supply chain, to have complete access to all information and systems within the supply chain. Compartmentalizing the information reduces the exposure and damage should an external or internal entity gain unauthorized access to an information system. Other technological solutions that should be considered include common measures such as encryption, intrusion detection systems to detect malicious attempts at breaking into the network, firewalls to block unauthorized inbound and outbound network traffic, and updated antivirus software on all servers and personal computers.
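A minimal sketch of the segmentation idea described above follows: traffic between supply chain network segments is denied unless the segment pair appears on an explicit allowlist. The segment names are hypothetical, and a real implementation would live in firewall or router policy rather than application code.

```python
# Hypothetical network segments with an explicit allowlist of permitted
# (source, destination) flows. Any flow not listed is denied, which
# compartmentalizes the damage if one segment is compromised.
ALLOWED_FLOWS = {
    ("supplier_portal", "order_db"),
    ("order_db", "warehouse_systems"),
}

def flow_permitted(src: str, dst: str) -> bool:
    """Default-deny check between network segments."""
    return (src, dst) in ALLOWED_FLOWS

# A compromised supplier portal cannot reach warehouse systems directly:
assert flow_permitted("supplier_portal", "order_db")
assert not flow_permitted("supplier_portal", "warehouse_systems")
```

The same default-deny pattern underlies the layered, defense-in-depth design discussed earlier: an intruder who breaches one segment still faces an explicit barrier at every boundary.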

Implementing various technological information security solutions is essential, but there are other important steps organizations should take to enhance their information security posture. An important one is to encourage supply chain partners to participate in an appropriate Information Sharing and Analysis Center (ISAC). The United States government, under Presidential Decision Directive 63, encourages private organizations to form ISACs by industry to share security information in an attempt to improve security and minimize breaches. For example, the Surface Transportation Information Sharing and Analysis Center (ST-ISAC) "provides a secure cyber and physical security capability for owners, operators, and users of critical infrastructure. Security and threat information is collected from worldwide resources, then analyzed and distributed to members to help protect their vital systems from attack" (ST-ISAC Web site). ISACs formed to date include the Public Transit, Surface Transportation, and Supply Chain ISACs. Potential benefits include a better chance of preventing security intrusions through improved knowledge of security vulnerabilities, and a deterrent effect on hackers (Gal-Or & Ghose, 2005).

Other important non-technical steps concern the management of human resources in the context of information security. Their importance is apparent, since a major obstacle to information security is information system users and their lack of awareness (Warren, 2002). The first step is ensuring a system is in place for employee screening during the hiring process. Second, a core competency in preventing the insider threat is required (Rice & Caniato, 2003). Regular security training of all employees is essential for a secure logistics and supply chain network. The 2006 CSI/FBI Computer Crime and Security Survey highlighted this fact, with results indicating that security training is very important to an organization's security plan (Gordon et al., 2006).

The Role of Enterprise Architecture in Supply Chain Security

One possible approach to addressing the complexity of computer security within DOD supply chains is enterprise architecture (EA). EA may provide the discipline to bridge the gap between effective supply chain security requirements, organizational objectives, and the actual implementation of information security within the supply chain. EA facilitates alignment between the business and IS domains, a possible answer to achieving conceptual integrity, getting the requirements right, and dealing with the challenges of security within supply chains. EA provides a formalized way to capture and document an organization's present state and desired future state, and thus contributes to managing the change toward that desired state. EA, which was mandated in the Federal Government by the Clinger-Cohen Act of 1996, is an effective discipline around which the DOD can organize and coordinate computer security initiatives. EA can help treat the various DOD supply chain partners and 3PLs within a supply chain as a single entity, so that security efforts are optimized in a coordinated fashion rather than through locally optimized solutions within each node and supply chain. EA can also contribute to aligning security efforts with the overall security goals derived from the security risk analysis. Finally, enterprise architecture can address the increasing complexity of information systems, an essential difficulty in the DOD as elsewhere, by facilitating the abstraction of system complexity.

There are a myriad of EA definitions in existence, with none universally accepted (Janssen & Hjort-Madsen, 2007; Rohloff, 2005). However, according to Schekkerman (2004), various businesses, the Institute of Electrical and Electronics Engineers, and the US Department of Defense agree "that architecture is about the structure of important things (systems or enterprises), their components, and how the components fit and work together to fulfill some purpose" (p. 21). In another definition, enterprise architecture is "the organizing logic for applications, data, and infrastructure technologies, as captured in a set of policies and technical choices, intended to enable the firm's business strategy" (Ross, 2003, p. 32). In other words, EA helps transform IS into a strategic asset, rather than stovepiping IS planning into a series of tactical planning exercises centered on specific and separate IT applications or solutions.

A useful case study illustrating this symbiosis of information systems and enterprise architecture is offered by Venkatesh, Bala, Venkatraman, and Bates (2007). The authors discuss the use of enterprise architecture at the Veterans Health Administration (VHA) to organize its IT infrastructure and business process capabilities and achieve alignment and integration. Over time, the VHA moved through several EA maturity stages, making its information systems standardized and integrated with business processes and thus improving overall organizational performance. Enterprise architecture was determined to be an important catalyst of that improvement (Venkatesh et al., 2007).

Conclusion

Logistics and supply chains are essential parts not only of the DOD but of the global economy as well, accounting for 9.9 percent of Gross Domestic Product in the United States alone (Wilson, 2007). The effectiveness and efficiency of logistics systems and supply chains have led to competitive advantages for many organizations (such as Wal-Mart and Dell) and nations (such as the advantage the United States' military logistics systems create). Much of this success is due to the integration of information and communication systems into logistical supply chains. However, the increased benefits of this integration have also brought substantial risk in the form of information security vulnerabilities which individuals, organizations, and nations anywhere in the world can exploit. The DOD must take the lead in aggressively defending its information flows and systems against this threat. Conducting a risk analysis, and then following it with an effective security policy and information security processes and technological solutions throughout DOD supply chains, is an essential strategy to ensure supply chains continue to support competitive advantages for organizations instead of becoming the cause of their demise. Enterprise architecture, which in many cases is already being implemented in DOD organizations, may provide an organizing discipline within which to address this complex problem.

Lt Col Brian R. Salmans is a member of the faculty at the USAF's Air Command and Staff College, Maxwell AFB, AL. He is a computer-communications officer whose assignments have included computer security positions at DISA's DOD CERT and at USTRANSCOM. He earned his Ph.D. in Information Systems at the University of North Texas in 2009. The views expressed in this article are the author's own, and do not necessarily reflect those of the US government or the Department of Defense.

References

Baskerville, R., & Siponen, M. (2002). An information security meta-policy for emergent organizations. Logistics Information Management, 15(5/6), 337-346.

Chilton, K. (2009). Cyberspace leadership. Air and Space Power Journal, 23(3), 5-10.

Cliff, R., Burles, M., Chase, M., Eaton, D., & Pollpeter, K. (2007). Entering the dragon's lair. Santa Monica, CA: RAND.

Davenport, R. (2009). Know thy enemy. Armed Forces Journal, September, 21-22, 33-34.

Evangelista, P., & Sweeney, E. (2006). Technology usage in the supply chain: The case of small 3PLs. International Journal of Logistics Management, 17(1), 55-74.

Flavián, C., & Guinalíu, M. (2006). Consumer trust, perceived security and privacy policy. Industrial Management & Data Systems, 106(5), 601-616.

Fish, L., & Forrest, W. (2007). A worldwide look at RFID. Supply Chain Management Review, 11(3), 48-59.

Gal-Or, E., & Ghose, A. (2005). The economic incentives for sharing security information. Information Systems Research, 16(2), 186-208.

Gordon, L., Loeb, M., Lucyshyn, W., & Richardson, R. (2006). CSI/FBI Computer Crime and Security Survey. Computer Security Institute.

Joint Economic Committee, United States Congress. (2002, May). Security in the Information Age: New Challenges, New Strategies. Washington, DC.

Kearns, G., & Lederer, A. (2003). A resource-based view of strategic IT alignment: How knowledge sharing creates competitive advantage. Decision Sciences, 34(1), 1-28.

Kolluru, R., & Meredith, P. (2001). Secu-rity and trust management in supply chains. Information Management & Computer Security, 9(5), 233-236.

Lee, H., & Whang, S. (2005). Higher supply chain security with lower cost: Lessons from total quality management. International Journal of Production Economics, 96(3), 289-300.

National Research Council. (2002). Cybersecurity of freight information systems. Washington, DC: Transportation Research Board.

Rice, J., & Caniato, F. (2003). Building a secure and resilient supply network. Supply Chain Management Review, 7(5), 22-30.

Ross, J. (2003). Creating a strategic IT architecture competency: Learning in stages. Cambridge, MA: Center for Information Systems Research, Sloan School of Management, Massachusetts Institute of Technology.

Sanders, N. (2005). IT alignment in supply chain relationships: A study of supplier benefits. Journal of Supply Chain Management, 41(2), 4-13.

Santhanam, R., & Hartono, E. (2003). Issues in linking information technology capability to firm performance. MIS Quarterly, 27(1), 125-153.

Sarathy, R. (2006). Security and the global supply chain. Transportation Journal, 45(4), 28-51.

Schekkerman, J. (2004). How to survive in the jungle of enterprise architecture frameworks: Creating or choosing an enterprise architecture framework. Trafford Publishing.

Surface Transportation Information Sharing and Analysis Center official Web site. Retrieved July 27, 2007, from http://www.surfacetransportationisac.org/.

Venkatesh, V., Bala, H., Venkatraman, S., & Bates, J. (2007). Enterprise architecture maturity: The story of the Veterans Health Administration. MIS Quarterly Executive, 6(2), 79-90.

Warren, M., & Hutchinson, W. (2000). Cyber attacks against supply chain management systems: A short note. International Journal of Physical Distribution & Logistics Management, 30(7/8), 710-714.

Warren, M. (2002). Security practice: Survey evidence from three countries. Logistics Information Management, 15(5/6), 347-351.

Whitman, M. (2004). In defense of the realm: Understanding the threats to information security. International Journal of Information Management, 24(1), 43-57.

Wilson, R. (2007). 18th Annual State of Logistics Report.

Field of Dreams
USJFCOM IO Range
By Tom "TCL" Curby-Lucier, Lt Col, USAF (Ret)

Most of us have seen the movie "Field of Dreams," in which the leading character, Ray, played by Kevin Costner, has a tremendous passion and vision to build a baseball field on his farm. He is driven by an inner voice telling him that if he builds it, they will come. And, sure enough, he builds a baseball field in his cornfield, and players from the past, representing multiple teams and various skill levels, come to play baseball.

In many ways, the USJFCOM Information Operations (IO) Range is analogous to this Hollywood "field of dreams." Secretary of Defense guidance in the Information Operations Roadmap led to the creation of the IO Range by the Office of the Under Secretary of Defense for Intelligence (OUSD(I)). On 18 November 2005, USJFCOM became the lead agent for the IO Range.

“DOD requires an integrated test range to increase confidence and better assure predictable outcomes. The test range should support exercises, testing, and development of CNA, EW, and other IO capabilities.” – Department of Defense, Information Operations Roadmap[1]

Needs and Requests

The Combatant Commanders (COCOMs), Components, and Services have identified a need to test, train, and exercise IO in a realistic environment. IO, per Joint Publication 3-13[2], includes Computer Network Operations (CNO), Electronic Warfare (EW), Psychological Operations (PSYOP), Military Deception (MILDEC), and Operations Security (OPSEC). Since its inception, the IO Range has matured the ability to integrate CNO, or Cyber, into COCOM and Service exercises and events. In addition, it has contributed directly to CNO integration with PSYOP and the other "soft" pillars of IO. It is also a prime "playground" for integrating Cyber and EW capabilities, demonstrating the synergistic effects that can be obtained by controlling Information Technology (IT) infrastructure as well as electromagnetic spectrum operations.

What is the IO Range?

The mission of the IO Range is to create a flexible, seamless, and persistent environment that enables combatant and component commanders to achieve the same level of confidence and expertise in employing IO weapons that they have in kinetic weapons. The IO Range meets its mission by integrating COCOM, Service, and DoD Agency IO capabilities and environments into the IO Range architecture. The current architecture provides a closed-loop network that connects geographically separated sites to support a distributed testing, training, and exercise environment for IO capabilities. The architecture also supports multiple simultaneous events and, within each event, networks at different classification levels.

Sites that are part of the IO Range require either a fixed (i.e., permanent) or deployable service delivery point to access the IO Range. Traffic is encrypted, and data movement on the IO Range is cleared to the highest levels of classification, depending on the particular event and the individual sites participating in that event. This architecture is constantly reviewed for evolutionary changes in order to fully integrate other IO capabilities, including EW. One of the key goals of the IO Range is to create a representative operational environment with realistic target sets. Currently, the IO Range supports numerous mission areas, ranging from IO Tool Research and Development (R&D) to IO Tool Developmental and Operational Tests and Evaluations to Operational Mission Rehearsal and Tactics, Techniques, and Procedures (TTP) development.

The IO Range delivers value to its customers as a standing-infrastructure, closed-loop network with a staff that provides a streamlined event approval process as well as centralized management, security, and coordination. The IO Range is, essentially, the "playing field."

US Army Spcs. Jeffrey Crossman and Joe Garcia interact with two Iraqi boys during a patrol to clear the influence of al Qaeda in Iraq in 2008. US Navy photo by Mass Communication Specialist 2nd Class Paul Seeber/Released


The Field as a Solution

So how does this playing field translate into a viable and realistic environment for Electronic Warfare? It does so by integrating existing ranges and capabilities to provide a more robust environment. One of the first steps in this process is to identify some of the players and determine their positions and responsibilities on the playing field. This is best explained with a specific example that illustrates what is within the realm of the possible.

Let's begin with a command and control (C2) target set such as Global System for Mobile (GSM) communications or Long Range Cordless Telephones, representative target sets that the USSTRATCOM Joint Information Operations Warfare Command's Joint Electronic Warfare Center (JIOWC/JEWC) provides when it brings its Red Team capabilities to open-air ranges in support of various exercises. Most recently, the JIOWC/JEWC has been providing this target set and representing the Opposing Forces, or OPFOR, in support of Mission Rehearsal Exercises at the National Training Center, Fort Irwin, California, and of Air Force Weapons School Mission Employment syllabus training at Nellis Air Force Base, Nevada. It has also brought this equipment capability to other sites, such as Idaho National Labs. The equipment and target set provided by JIOWC/JEWC identifies a key player who has come to the "field of dreams."

Once this target set is identified, a COCOM or Service works with the IO Range staff to define the playing field for that particular event. In this example, let's say a COCOM wants to schedule and synchronize the JIOWC/JEWC-provided target sets in a manner that represents various regions in its area of responsibility. The IO Range would then begin to coordinate the event and apply its operational and technical expertise to bring it to reality in support of the COCOM request.

Other players may join the field of play. As with any C2 target structure, there will likely be objectives designed to deny, disrupt, or delay communications. Other players that can contribute to these particular objectives will then be brought on board to support the event.

Once all of the players have arrived and schedules have been set, the IO Range will complete the analogous work of painting the lines on the field and preparing the scoreboard to capture any data (through visualization or instrumentation) that may be requested. Each individual "site" can be thought of as a particular player in the game. As players are added, a combination of physical ranges (IT and/or open-air ranges), simulators with a man in the loop, and modeling and simulation tools are brought together. This integration of live, virtual, and constructive team members can mean only one thing: it is time to "play ball."

Play Ball

After the IO Range "field" is built to the right specifications and the players have been identified and scheduled, execution of the "game" can begin. Unlike the baseball field in the movie, this field is a secured network of geographically dispersed sites. Each site contributes a unique capability to the event and provides an orchestrated "game" that helps develop TTP for EW and IO applications. In this example, the "game" represents a virtual country whose command and control is depicted by various geographically dispersed sites. There are, of course, several other potential uses for the field. A Service or COCOM may request an operational assessment of EW capabilities in a contested or congested electromagnetic spectrum environment.

The "players," in this case, could be a combination of labs that house applicable jamming equipment in anechoic chambers, centers of excellence for modeling and simulation, and, potentially, a warfighting operator in the loop. In either case, the IO Range mission is to provide the playing field and associated support because, as the saying goes, "If you build it, they will come."

Tom "TCL" Curby-Lucier, Lt Col, USAF (Ret), is an Associate with Booz Allen Hamilton, working for the US Joint Forces Command IO Range Program as a Liaison Officer to the Joint Electronic Warfare Center, San Antonio, Texas. He acts as an Event Coordinator and Planner for the IO Range Operations Team. He served 20 years in the Air Force and, previous to his current role, was Chief, Air Force Network Warfare Operations for the 8 AF/CC (HQ 8 AF). Prior to that, he was Commander of the B-52 Training Detachment in support of Air Component Command's Training Support Squadron. He was a fully qualified Joint Specialty Officer (JSO) who led US Navy Tomahawk Cruise Missile strike planning in support of OPERATION ENDURING FREEDOM and has over 2,500 hours as a B-52 Electronic Warfare Officer. He has a BS from the University of California and an MBA from City University. Readers can contact him at [email protected].

Notes
1. Information Operations Roadmap, 30 October 2003, http://cryptome.org/io-roadmap.htm.
2. Information Operations, Joint Publication 3-13, 2006, http://information-retrieval.info/docs/DoD-IO.html.

A US Air Force B-52 "Stratofortress" bomber does a flyover at the 101st Airborne Division (Air Assault) "Screaming Eagles" Air Show, on Campbell Army Airfield at Fort Campbell, Ky., Aug. 15, 2009. (U.S. Army photo by Sam Shore/Released)