Cyber Security - Mapping the Ethical Terrain


    C Y B E R S E C U R I T Y

    M A P P I N G T H E E T H I C A L T E R R A I N

    National Security College Occasional Paper | No 6 June 2014

    Nicholas G Evans, S Brandt Ford, Adam C Gastineau, Adam Henschke,

    Michael Keelty, Levi J West

A joint initiative of the Commonwealth Government and The Australian National University

    National Security

    College

    Crawford School of

    Public Policy

College of Asia & the Pacific


    National Library of Australia Cataloguing-in-Publication entry

    Authors: Evans, Nicholas G.

    Ford, S. Brandt

    Gastineau, Adam C.

    Henschke, Adam

    Keelty, Michael

    West, Levi J.

Title: Cybersecurity: Mapping the ethical terrain [electronic resource] / Nicholas G. Evans, S. Brandt Ford, Adam C. Gastineau, Adam Henschke, Michael Keelty, Levi J. West.

    ISSN: 2200-5269 (National Security College occasional papers)

    ISBN: 978-1-925084-14-6 (ebook : pdf)

    Series: National Security College occasional paper: 4 (Online).

    Notes: Includes bibliographical references.

    Subjects: National security--Australia.

    National security--Australia--21st century.

    Military planning--Australia.

    Political leadership--Australia--21st century.

    Australia--Politics and government--21st century.

    Other Authors/Contributors:

    Australian National University. National Security College.

Dewey Number: 355.0330994

Published and distributed by:

    National Security College

    Building 132a, 1 Lennox Crossing

    Australian National University, Acton

    ACT 0200

    T: (02) 6125 6480

    E: [email protected]

    W: http://nsc.anu.edu.au/


    C O N T E N T S

Contributing authors

Foreword
Michael Keelty

A decision-making procedure for responding to cyber-attacks
Adam Henschke

Cybersecurity and open, distributed cultures: Designing in ethics
Nicholas G Evans

Warfare, cyberweapons and morality
Shannon B Ford

Policing cyber-threats: Regulating exceptional police actions
Adam C Gastineau

Virtual terrorism: Data as a target
Levi J West


    C O N T R I B U T I N G A U T H O R S

Nicholas G Evans

Dr Nicholas G Evans is a philosopher and bioethicist living in Philadelphia and an Adjunct Research Associate at the Centre for Applied Philosophy and Public Ethics, Charles Sturt University, Canberra. Dr Evans' primary research focus is the dual-use dilemma in the biological sciences, and ethical issues in regulating scientific research that poses a security threat to human health and wellbeing. His first volume on military ethics, The Routledge Handbook of Ethics and War, was published in 2013. He has also written on the ethics of emerging military technologies, professional ethics, public health ethics, and friendship. From January to March of 2014, Dr Evans was a visiting fellow at the Monash Centre for Human Bioethics in Melbourne; in July he will commence as a Postdoctoral Fellow in Advanced Biomedical Ethics at the Department of Medical Ethics and Health Policy, University of Pennsylvania.

    Shannon B Ford

Shannon Brandt Ford is a Lecturer in Intelligence and Security Studies with the Australian Graduate School of Policing and Security, Charles Sturt University. Before starting his academic career, Mr Ford spent ten years as a Defence Strategist and Intelligence Analyst (1999-2009). He has taught at the Australian National University (ANU), the Australian Defence Force Academy (University of New South Wales) and the Australian Defence College. He has a Master of Arts (International Relations) from the ANU, and a Bachelor of Arts (Psychology) and Master of Social Science (Criminology/Sociology) from the University of Queensland. He was a CAPPE Research Fellow on the ARC project 'Police Leadership in the 21st Century' and the Chief Investigator for the National Security College funded research project on the ethics of cybersecurity. Mr Ford's research interests include the theory and ethics of: cybersecurity; national security institutions (including military, police and intelligence agencies); leadership and strategy; intelligence studies; and analytical and critical reasoning in applied contexts.

    Adam C Gastineau

Adam Gastineau is a PhD candidate focusing on police and military ethics with the Centre for Moral, Social, and Political Theory at the Australian National University. He has presented papers on the moral boundaries of police discretion and has also written on ethical issues pertaining to duty of care, targeted killing, the permissible use of lethal force in bello, and police corruption. He is currently working on a project examining the asymmetry between the norms governing the permissible use of lethal force by military personnel and those permitting the use of lethal force by police.


Adam Henschke

Dr Adam Henschke is a Research Fellow at the Australian National University's National Security College, an adjunct Research Fellow at the Centre of Applied Philosophy and Public Ethics, Charles Sturt University in Canberra, and was recently a visiting Assistant Professor in the Department of Politics and Public Administration, University of Hong Kong. He started his academic career in the applied sciences and moved from biology to bioethics, completing a Master of Bioethics (2005-2006) at Monash University and a Master of Applied Ethics (2006-2007) at the Norwegian University of Technology and Linköping University (Sweden). He was a researcher with the Centre for Human Bioethics at the World Health Organisation (2006), a visiting researcher at the 3TU Centre of Ethics and Technology at Delft Institute of Technology (2009), and a co-recipient of a Brocher Fellowship, researching ethical issues of Open Health technologies in Geneva (2012). He has published in areas that include information theory, ethics of technology, and military ethics. He recently co-edited the International Journal of Applied Philosophy's symposium on war in the 21st century (Fall 2012) and The Routledge Handbook of Ethics and War (July 2013), and is currently co-editing a book on the ethics of cyberwarfare for Oxford University Press.

    Michael Keelty

Mick Keelty, AO, APM, is an Associate Professor at the Australian Graduate School of Policing and Security, Charles Sturt University in Canberra, and was Commissioner of the Australian Federal Police (AFP) from 2001 to 2009. He served two terms as AFP Commissioner and has thirty-five years of policing experience at local, national, and international levels. During this time, the AFP grew significantly, gaining major new responsibilities in counter-terrorism, high-tech crime, international peacekeeping and law enforcement capacity building. He also played a key role in the establishment of the Australian Crime Commission, a statutory body with Royal Commission-style powers established to tackle organised crime.

    Levi J West

Levi West is Lecturer in Terrorism and National Security at the Australian Graduate School of Policing and Security, Charles Sturt University in Canberra, where he coordinates the Masters program in Terrorism and Security Studies. Levi commenced his doctoral thesis at the Australian National University's School of International, Political and Strategic Studies in 2013. He is a graduate of the National Security College, holding a Graduate Certificate in National Security Policy, and of Macquarie University's Centre for Policing, Intelligence, and Counter Terrorism, from which he holds Masters degrees in International Security Studies, and in Policing, Intelligence and Counter Terrorism. Levi's research is focused on the impact of technology on non-state actors, the evolution of terrorism and transnational crime, and cyber security. Levi has held various research and teaching positions in Australia and in the United States, the Middle East, and South Asia.


F O R E W O R D

Michael Keelty

Governments and society are increasingly reliant on cyber systems. Yet the more reliant we are upon cyber systems, the more vulnerable we are to serious harm should these systems be attacked or used in an attack. This problem of reliance and vulnerability is driving a concern with securing cyberspace. For example, a cybersecurity team now forms part of the US Secret Service. Its job is to respond to cyber-attacks in specific environments, such as elevators in a building that hosts politically vulnerable individuals, for example, state representatives. Cybersecurity aims to protect cyber-infrastructure from cyber-attacks; the concerning aspect of the threat from cyber-attack is the potential for serious harm that damage to cyber-infrastructure presents to resources and people.

These types of threats to cybersecurity might simply target information and communication systems: a distributed denial of service (DDoS) attack on a government website does not harm the website in any direct way, but prevents its normal use by stifling the ability of users to connect to the site. Alternatively, cyber-attacks might disrupt physical devices or resources, as did the Stuxnet virus, which caused the malfunction and destruction of Iranian nuclear centrifuges. Cyber-attacks might also enhance activities that are enabled through cyberspace, such as the use of online media by extremists to recruit members and promote radicalisation. Cyber-attacks are diverse: as a result, cybersecurity requires a comparable diversity of approaches.

Cyber-attacks can have powerful impacts on people's lives, and so, in liberal democratic societies at least, governments have a duty to ensure cybersecurity in order to protect the inhabitants within their own jurisdiction and, arguably, the people of other nations. But, as recent events following the revelations of Edward Snowden have demonstrated, there is a risk that the governmental pursuit of cybersecurity might overstep the mark and subvert fundamental privacy rights. Popular comment on these episodes advocates transparency of government processes, yet given that cybersecurity risks represent major challenges to national security, it is unlikely that simple transparency will suffice.

Managing the risks of cybersecurity involves trade-offs: between security and privacy; individual rights and the good of a society; and the types of burdens placed on particular groups in order to protect others. These trade-offs are often ethical trade-offs, involving questions of how we act, what values we should aim to promote, and what means of anticipating and responding to the risks are reasonably, and publicly, justifiable. This Occasional Paper (prepared for the National Security College) provides a brief conceptual analysis of cybersecurity, demonstrates the relevance of ethics to cybersecurity, and outlines various ways in which to approach ethical decision-making when responding to cyber-attacks.

First, Adam Henschke argues that we need to make decisions regarding cyber-attacks in a reasonable manner. Cybersecurity challenges involve a series of complex relations between old and new actors and old and new technologies. State security agencies have some duties to protect themselves and civilians against cyber-attacks. But to ensure that these duties are met in a way that is publicly justifiable, Dr Henschke argues that the decision procedures need to make explicit the moral values involved and how the state agencies make decisions between competing values.


Next, Nicholas Evans examines the argument, advanced by proponents of the open culture movement, that with sufficient openness we enable others to solve modern challenges, including those posed to security in the information age. Here, Dr Evans presents a series of challenges to this argument; he offers only the most tentative of solutions, but hopes to move the discourse around openness and security on the internet from a series of technical hurdles to a set of ethical concerns.

Then Shannon Brandt Ford argues that we should be concerned about the use of cyberweapons because they have the potential to cause serious harm by attacking vulnerabilities in information systems. He demonstrates the way in which the distinction between war and non-war contexts is important for our understanding of the basic moral principles for justifying the use of cyberweapons. Importantly, he argues, proportionate and discriminate responses to cyber-attacks require the ability to distinguish an act of cyberwar from acts of cybercrime and/or cyber espionage.

Fourth, Adam Gastineau argues that cyber-surveillance greatly enhances a state's power to restrict the privacy of individuals. He argues, however, that the state is only permitted to restrict an individual's privacy through surveillance in cases where he or she can be shown to be liable to such restrictions. An individual's privacy is defined by the amount of access that others have to that individual. Restricting an individual's privacy without justification is wrongful, he argues, because it can unjustly restrict an individual's freedom. This restriction is only justified if one can be said to be a threat, and therefore liable to have one's freedom restricted. Individuals who simply pose a risk are not sufficiently liable to have their freedom restricted in this way.

Finally, Levi West argues that cyberterrorism remains a contested and controversial concept, lacking definitional clarity. Cyberterror attacks on critical infrastructure offer a poor return on investment for terrorist organisations, whose primary use of cyberspace remains focused on recruitment, radicalisation, financing and other network-sustaining functions. But it is possible, according to Mr West, to envisage future manifestations of cyberterrorism where data rather than people constitute the primary target, enabling, through the absence of conventional violence, a more effective realisation of terrorists' political or ideological goals.


A decision-making procedure for responding to cyber-attacks


    Adam Henschke


    Introduction

Cybersecurity challenges involve a series of complex relations between old and new actors and old and new technologies. Furthermore, the decisions involved in responding or not responding to a cyber-attack will typically involve tensions between different morally important values. This presents decision makers with a challenge: how are we to respond to cyber-attacks in a way that is both practicable and ethically justifiable? This paper presents a general outline for an ethically based decision procedure for responding to cyber-attacks.[1]

Cyber-attacks can have powerful impacts on people's lives, and so in liberal democratic societies, governments are given a special duty to protect their citizens and, arguably, the people of other nations. In short, we generally expect governments to act to protect the security of people. But as recent examples involving the US National Security Agency (NSA) and the UK Government Communications Headquarters (GCHQ) have shown, there is public concern about what state security agencies do in pursuit of national security. Many potential cyber-attacks and responses pertain to ethical trade-offs: security vs. privacy, individual rights vs. an overall public good, the placing of burdens on particular groups over others. The critical reasoning tools found in practical ethics play an essential role in identifying where value conflicts arise, and offer ways of justifying responses that help to resolve such conflicts.

1. This is intended to be a general outline only: any decision procedures for a specific cyber-attack would necessarily involve fine-grained detail and would require extensive input, including but not limited to experts from computer science, security studies and applied ethics.

For example, a particular concern for securing cyberspace is the rise of dual-purpose infrastructure targeted by military attack: '[t]hat is, systems and structures for both civilian and military uses, or even civilian targets with the goal of demoralizing, weakening, or confusing an enemy's military and civilian leadership.'[2] The laws of war typically forbid the deliberate targeting of civilian infrastructure. But with a considerable amount of informational infrastructure being privately run for military use, or used by military, industry and private actors alike, such discrimination between strictly military targets and strictly civilian targets becomes increasingly difficult.

    Values and responses

The first aspect of a decision procedure is to develop a list of morally salient values that could potentially be in conflict. Box One (see below) contains a list of key moral factors that might be included in an anticipatory ethical decision-making procedure for responding to a cyber-attack.[3] Any list of such values is necessarily incomplete, and gives rise to a highly complex set of interactions between the different elements. As with the potential cyber-attacks themselves, the results of the interactions between the elements are hard to predict in advance, and it may be difficult to understand how they will play out through time.

2. Randall R. Dipert, 'The Ethics of Cyberwarfare', Journal of Military Ethics 9, no. 4 (2010): 390.

3. Adapted from work by the following authors: Thomas Rid, Cyber War Will Not Take Place (London: Hurst Publishers, 2013); Philip Brey, 'Anticipatory Ethics for Emerging Technologies', NanoEthics 6, no. 1 (2013); Martin Libicki, Cyberdeterrence and Cyberwar (RAND Corporation, 2009).


The purpose of such a list is to present a set of morally salient values that may arise when formulating a decision procedure to respond to cyber-attacks. The first step in either designing an anticipatory decision procedure or determining how to respond to a given cyber-attack is to check the decision procedure against the list to see which values are in play, and which could potentially conflict.

Box One: Key Moral Values

Just war principles
> response was proportional, discriminate, necessary, fair and not wrong in itself

Risk to life
> Number of people killed and/or severity of injuries

Critical infrastructure
> Targets of attacks
> Non-life-threatening (in the short term at least) but highly significant impacts

Individual rights
> Cyber incidents and responses to them that violate individuals' claims to:
> privacy
> property
> equality

Trust
> Loss of trust in a given service provider and/or system
> Loss of trust in security agencies/government/oversight, e.g., blowback from Edward Snowden's actions

Economic costs
> Direct economic theft
> Theft of intellectual property
> Redesign/rebuild of critical infrastructure

Efficiency
> May need to sacrifice efficiency for security, i.e., shift from just-in-time service delivery to systems that offer a buffer, i.e., designing in system resilience and high availability
> Raised consumer costs
> Basic annoyance, e.g., constant password resets

International relations
> Strain on diplomatic relations between nation states: Saudi Arabian/Israeli relations following 0x Omar data thefts, 2012

National security
> Trading individual rights for increased national security
> Requires clarity on the meaning of national security, i.e., is it maximised social good or private economic interest?
> Need to specify which rights are traded against national security, whose, and for how long
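The 'first step' described above, checking a proposed response against the list to see which values are in play, can be illustrated in code. The sketch below is purely illustrative and not part of the paper's procedure: the trigger tags attached to each Box One value are hypothetical examples, not anything the authors specify.

```python
# Illustrative sketch only: the Box One checklist as data, so that a
# proposed response (described by a set of tags) can be checked for
# which morally salient values it puts in play. Trigger tags are
# hypothetical examples, not drawn from the paper.

BOX_ONE = {
    "just war principles": {"use of force", "military response"},
    "risk to life": {"casualties", "injuries"},
    "critical infrastructure": {"power grid", "water", "finance"},
    "individual rights": {"privacy", "property", "equality"},
    "trust": {"surveillance", "oversight"},
    "economic costs": {"theft", "intellectual property", "rebuild"},
    "efficiency": {"service delivery", "consumer costs"},
    "international relations": {"diplomacy", "attribution"},
    "national security": {"rights trade-off"},
}

def values_in_play(response_tags: set[str]) -> list[str]:
    """Return the Box One values touched by a described response."""
    return [value for value, triggers in BOX_ONE.items()
            if triggers & response_tags]

# A military response to an attack on the power grid that also
# restricts privacy touches three values at once:
print(values_in_play({"military response", "power grid", "privacy"}))
# -> ['just war principles', 'critical infrastructure', 'individual rights']
```

Even this toy version makes the paper's point concrete: a single response typically engages several values simultaneously, and the conflicts between them are what the rest of the decision procedure must adjudicate.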


Note that these are models only: they are deliberately simplified representations of methods of responding. They are not intended to replace the fine-grained and dynamic reasoning discussed throughout this paper. This point needs to be heavily emphasised, as there is a danger in simply applying this model, especially in conditions where there is uncertainty or limited time. Such models are suggested simplifications intended to provide a general indication of how ethical reasoning in relation to cybersecurity can operate in practice.

Second, consider the prescribed courses of action: the first matrix involves a cyber-attack on non-critical infrastructure, while the second involves a cyber-attack of high or severe impact, or a circumstance that could impact critical infrastructure. As described earlier, one of the key elements of any effective cybersecurity program is the capacity to identify reliably the target and impact of a given cyber-attack. Despite fears of a 'cyber-Armageddon',[5] such an event has not occurred so far.[6] The point here is that we have typically encountered cyber-attacks that would be located in the first matrix. This is not to say that disasters cannot occur, but rather to point out that it is only the events in the second matrix that will typically require a major trade-off between key ethical values. Furthermore, if critical infrastructures can be designed such that cyber-attacks are likely to fall within the first matrix, then many of the challenging ethical decisions regarding cybersecurity can hopefully be avoided.

5. Richard A. Clarke and Robert K. Knake, Cyber War: The Next Threat to National Security and What to Do About It (HarperCollins e-Books, 2010).

6. Rid, Cyber War Will Not Take Place.

    Figure 2: General response matrix to critical infrastructure/high impact attacks


In respect to the colour-coding, the models have been deliberately kept simple, with moral hazards given three potential ratings: green = low, orange = moderate and red = severe. If the set of potential outcomes includes all red and one green, then, absent significant extrinsic factors, the green option is the most favourable. Although any decision-making procedure must involve far more detail than that contained in these models, the point here is to illustrate what role ethical analysis can play in a given scenario. There is practical value in involving ethics in cybersecurity in a way that is central and foundational to decision making, rather than a patch or supplement at the end. As such, developing any decision procedures would need to involve consistent and high-quality ethical analysis such that the models do not replace effective decision procedures.

In this example, if X cyber-risk is likely to cause considerable harm to critical infrastructure, or have severe impacts on individuals, the second matrix is engaged. In the second matrix, two courses of action have been identified: a military response and a private response. To spell out what this means, the comparison is between a military response, i.e., use of cyber-weapons and/or physical violence such as putting 'a missile down one of your smokestacks',[7] and a private response, for example, a 'stand-your-ground' approach.[8] Again, this comparison is a deliberate simplification that highlights how comparisons can be made between different options. The two options, military and private, have been rated by reference to (a) immediate impacts, (b) moral hazards of acting/not acting, and (c) severity/impact, orange being moderate impact, red being severe. In situations of comparison, one can see that the impact of military action is severe, while that of private response is moderate. On this, it seems that a private response is preferable to a military one, but allowing private citizens to act might lead to escalation.

7. This is a reference to a Pentagon official who made the now-famous statement in a Wall Street Journal article: 'If you shut down our power grid, maybe we will put a missile down one of your smokestacks.' Siobhan Gorman and Julian E. Barnes, 'Cyber Combat: Act of War', The Wall Street Journal, 31 May 2011.

8. Patrick Lin, 'Stand Your Cyberground Law: A Novel Proposal for Digital Security', The Atlantic, 30 April 2013.
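The colour-coded comparison described here, rating each candidate response as green, orange or red on the dimensions the paper names and preferring the least-bad option absent extrinsic factors, can be expressed as a few lines of code. This is an illustrative simplification only; the specific ratings below are hypothetical examples, not taken from the paper's matrices.

```python
# Illustrative sketch of the colour-coded response matrix. Each option is
# rated green (low), orange (moderate) or red (severe) on immediate impact,
# moral hazard of acting/not acting, and severity; absent significant
# extrinsic factors, the option with the lowest worst-case rating wins.

RATING = {"green": 0, "orange": 1, "red": 2}

def preferred(options: dict[str, dict[str, str]]) -> str:
    """Pick the option whose worst rating is lowest."""
    return min(options,
               key=lambda name: max(RATING[r] for r in options[name].values()))

# Hypothetical ratings echoing the military vs private comparison in the text:
options = {
    "military response": {"immediate impact": "red",
                          "moral hazard": "orange",
                          "severity": "red"},
    "private response": {"immediate impact": "orange",
                         "moral hazard": "orange",  # e.g. risk of escalation
                         "severity": "orange"},
}

print(preferred(options))  # -> private response
```

The worst-case ('maximin') rule used here is one plausible reading of the text's comparison; the paper itself stresses that such models are simplifications and that factors like escalation risk sit outside the matrix and must still be weighed by decision makers.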

Such responses need integration. That is, what has been listed here is a simplified set of government agencies: military, police, security/intelligence, other government agencies, and private actor(s). The 'if X, then Y' decision framework and resulting matrices imply coordination between government agencies and simplify these agencies into clusters such as 'military'. The reality of government is far more complex. The military, for example, is a highly complex institution comprising many different groups, sub-groups and actors. Consequently, what is necessary for effective cybersecurity is a coordinated level of integration and harmonisation between these agencies, perhaps overseen by a specific department such as the Australian Cyber Security Centre (ACSC). It might be that the chief organising department shifts over time, as it has done in the past,[9] but my point is that coordinated, integrated and harmonised responses are necessary for effective cybersecurity.

Developing long-term strategies for cybersecurity

In order to ensure a robust capacity to respond to cyber-attacks, I suggest that an integrated cyber-operations centre have the capacity to:

> identify relevant agencies with the capacity and mandate to respond
> identify and recognise a given threat
> work within a clear chain of command
> identify moral hazards of acting/not acting in the short and long term.

9. Peter Jennings and Tobias Feakin, Special Report: The Emerging Agenda for Cybersecurity (Canberra: Australian Strategic Policy Institute, 2013).


Cybersecurity and open, distributed cultures: Designing in ethics


    Nicholas G Evans


    Introduction

Two features[1] of modern cyberspace present a serious challenge for cybersecurity:

Distribution: information and communications technologies (ICTs) are most productive when ubiquitous.[2]

Deskilling: the development of cyberspace has seen a proliferation of skill and access, enabling a larger number of people to command relatively large amounts of power in cyberspace.[3]

Distribution and deskilling create a security challenge by lowering the costs (material, experiential, and opportunity) of the malevolent use of technology. Committing acts ranging from minor vandalism to large-scale economic or physical harm is becoming ever more possible.[4] We are approaching a crisis point at which the cost of committing cybercrime,[5] in terms of technical and economic barriers, is very low; however, the consequences of individual acts can be quite large.

1. These features are not exhaustive of issues in cybersecurity; they are, however, significant challenges.

2. Weber, Steven, The Success of Open Source, Harvard University Press, 2005; Hope, Janet, Biobazaar: The Open Source Revolution and Biotechnology, Cambridge, MA: Harvard University Press, 2008; Carlson, Robert, 'The pace and proliferation of biological technologies', Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, 1 (3), 2003, 203-214. Though this productivity is not always a good thing; see e.g., Goodman, M., 'A vision of crimes in the future', Technology, Entertainment, and Design (TED) Global, June 2012, http://www.ted.com/talks/marc_goodman_a_vision_of_crimes_in_the_future.html (Accessed 6 July 2013). The subject of when innovation or productivity is truly good is for another paper currently under development, but it seems clear (and indeed that paper is motivated from the intuition) that not everything productive or innovative is indeed good.

3. De Joode, Ruben van Wendel, Understanding Open Source Communities: An Organizational Perspective, Doctor of Philosophy, TU Delft, Delft, NE, 2005.

4. Arquilla, John, 'Twenty Years of Cyberwar', Journal of Military Ethics 12:1, 2013, 80-87.

5. Or cyberterrorism, though perhaps not (yet) cyberwar. I won't offer a comprehensive analysis of these terms and the distinctions between them; the token I use for malevolent acts in cyberspace is 'cybercrime'.

    Dual-use and open access

This problem is neither new nor unique to ICTs, but finds expression in a number of contemporary problems of dual-use, whereby one and the same technology or set of technologies can be used for good or evil.[6] As the cost of genomic sequencing, and increasingly synthesis, falls, we encounter the widespread proliferation of accessible, powerful biotechnology.[7] Likewise, the ability to produce, modify, and even wrest control of drones is already in the public domain.[8] Convergent technologies such as 3D printing could further this convergence of ease of access, use, and modification.[9]

Each field has its risks, and ICTs present these risks in the most obvious fashion by virtue of the field's maturity. In point of fact, biotechnology is still very much considered science, while drones find both considerable research promise and an emerging market presence. ICTs are certainly the subject of research, but are also a mature, ubiquitous, and relatively well-functioning collection of technologies.

6. Evans, Nicholas G., 'Great expectations: ethics, avian flu and the value of progress', Journal of Medical Ethics 39:4, 2013, 209-13.

7. Hope, Biobazaar.

8. Carlson, Robert, 'Are These The Drones We're Looking For? (Part IV)', 11 October 2012, http://www.synthesis.cc/2012/10/are-these-the-drones-were-looking-for-part-iv.html (Accessed 28 November 2013).

9. The Drone It Yourself project is an open-source, 3D-printable drone platform that can use any regular object as a drone body. See http://jaspervanloenen.com/diy/ (Accessed 28 November 2013).


    Assessing the risks

A comprehensive account of the current risks is beyond the scope of this paper, but a brief survey of recent events will demonstrate the type and pervasiveness of existing threats. Stuxnet, the worm that infected Siemens systems worldwide, but mysteriously only attacked and damaged Iranian nuclear centrifuges in 2010, is of course the paradigm of cyberwar.[10] In July of 2013, an English court ruled that Flavio Garcia, a researcher at the University of Birmingham, be banned from publishing his research on cracking the security systems of luxury cars; Volkswagen applied for an injunction against Garcia based on the security implications of his work.[11] While talking about cars, it should be noted that in 2010, a collaboration between the University of California San Diego and the University of Washington showed that car software could be hacked, and malicious code embedded through most any system in a car, including the CD player.[12] Similarly, the Food and Drug Administration has warned that medical devices can be hacked.[13] GPS satellites are also at risk: Tyler Nighswander's team from Carnegie Mellon University showed that 20-30 per cent of the global Continuously Operating Reference Stations could be rendered inoperable via malicious GPS broadcasts using about $2,500 worth of equipment.[14]

    10 Lucas, George, 'Jus in Silico: Moral Restrictions on the Use of Cyberwarfare', in Allhoff, Evans, and Henschke (eds), The Routledge Handbook of Ethics and War, Routledge, 2013.

    11 O'Carroll, Lisa, 'Scientist banned from revealing codes used to start luxury cars', The Guardian, 27 July 2013, http://www.theguardian.com/technology/2013/jul/26/scientist-banned-revealing-codes-cars (accessed 27 November 2013).

    12 Koscher, Karl et al., 'Experimental Security Analysis of a Modern Automobile', IEEE Symposium on Security and Privacy, May 2010.

    13 US Food and Drug Administration, 'FDA Safety Communication: Cybersecurity for Medical Devices and Hospital Networks', 13 June 2013, http://www.fda.gov/MedicalDevices/Safety/AlertsandNotices/ucm356423.htm (accessed 27 November 2013).

    14 Nighswander, Tyler, Ledvina, Brent, Diamond, Jonathan, Brumley, Robert, Brumley, David, 'GPS Software Attacks', CCS '12, 16-18 October 2012, Raleigh, NC, http://users.ece.cmu.edu/~dbrumley/courses/18487-f12/readings/Nov28_GPS.pdf (accessed 28 November 2013).

    Everything and everyone is vulnerable, and the technical solutions to guard against such vulnerability can often be complex.15 The more secure the system, the higher the barrier to penetration, but few systems are overly secure. Researchers recently discovered a critical vulnerability in the Android phone operating system that allowed attackers to inject malicious code into legitimate programs, and they identified instances in the wild of attackers doing just that.16

    These represent some emerging security issues in civilian devices; however, the majority of cybercrime, which costs hundreds of billions of dollars per year,17 is performed through endemic but uncontroversial exploitation of computer systems around the world. In terms of individual losses, compromised critical infrastructure may be catastrophic. However, the current state of cybercrime is one that represents a massive loss distributed over a panoply of relatively minor losses. Importantly, a great deal of this crime is centralised through organised crime gangs, who have moved into the online world for its lack of accountability and enforcement.18

    15 Goodin, Dan, 'No easy way to stop BREACH from plucking secrets from HTTPS pages, feds say', 3 August 2013, http://arstechnica.com/security/2013/08/no-easy-way-to-stop-breach-from-plucking-secrets-from-https-pages-feds-say/ (accessed 28 November 2013).

    16 Goodin, Dan, 'First malicious apps to exploit critical Android bug found in the wild', 24 July 2013, http://arstechnica.com/security/2013/07/first-malicious-apps-to-exploit-critical-android-bug-found-in-the-wild/ (accessed 28 November 2013).

    17 For a good summary of the range of difficulties in estimating the costs of cybercrime, and some preliminary estimates, see Anderson et al., 'Measuring the Costs of Cybercrime', 11th Workshop on the Economics of Information Security, Berlin, 26 July 2012.

    18 McCusker, Rob, 'Transnational organised cyber crime: distinguishing threat from reality', Crime, Law and Social Change, 46:4-5, 2006, pp. 257-273. See also Williams, Philip, 'Organized Crime and Cybercrime: Synergies, Trends, and Responses', Global Issues, 6:2, 2001, 22-26.


    If crime is the threat, the true challenge arises in what one does about such widespread, accessible, distributed skill in a vulnerable environment. By virtue of their structure, distributed and deskilled endeavours are resistant to preventative strategies: regulation, export controls, licensing and prohibition.19 Distributed, deskilled communities have the ability to circumvent security strategies with ease, something we see in everything from the theft of 100 million Sony PlayStation accounts, to LulzSec, to the Arab Spring.

    Prevention strategies are problematic, it is claimed, because they drive away talent; restrict access to materials, opportunities, and skills; and limit the freedom to innovate. They also routinely fail to protect individuals from harm: hampering beneficial innovation while failing to prevent misuse is the hallmark of failed regulation.20

    Instead, proponents of distributed and deskilled cultures call for preparation strategies: incentivising rapid, beneficial innovation combined, in some cases, with comprehensive technology monitoring. In very simplistic terms, preparation strategies create conditions in which the progress of those securing our good outpaces those who mean us harm.

    The problem of regulation

    Preparation is claimed to surpass prevention. Prevention is too sluggish for the environment on which it acts: it doesn't always fail, but it fails too frequently, or is too costly, to be a satisfactory solution. It is claimed that preparation is the superior strategy because it incentivises beneficial development, a desired end regardless of approach, without spending on costly and ineffective regulation.

    19 Carlson, Robert, Biology is Technology: The Promise, Peril, and New Business of Engineering Life, Cambridge, MA: Harvard University Press, 2010.

    20 Carlson, Biology is Technology.

    Yet preparation rests on the assumption that the 'good guys' can be meaningfully faster than the 'bad guys', and thus be a strategy that succeeds against other contenders. This is a highly problematic claim, and I offer a series of concerns with holding such an assumption.

    First: numbers. How do we ensure that there are enough good guys relative to bad guys to achieve sufficient security? The good guys need to cover all or most of their bases; bad guys only need to hit one target. This is a manpower issue: does an open system like the one described in the preparation strategy have the right incentives to attract people to solving security issues?

    The answer, I think, is no. A typical response to the problem of malfunction or malfeasance in open systems is that 'many eyes make all bugs shallow'. Distribution and deskilling increase the number of good guys by virtue of leaving problems open for the largest possible number of people to work on.

    This problem is larger still because bugs are not vulnerabilities.21 Bugs are typically parts of a system that, when used, cause disruption in normal functioning. Vulnerabilities, however, do not present as disruptions in normal functioning up until the point that they are exploited. We need, then, enough good guys who are knowledgeable and relevantly incentivised to find vulnerabilities, and enough individuals who can shore those vulnerabilities up once they are found.

    21 Laurie, B., 'Open Source and Security', in C. DiBona, D. Cooper & M. Stone (eds), Open Sources 2.0: The Continuing Evolution, Sebastopol, CA: O'Reilly, 2005, pp. 57-71.
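    The bug/vulnerability distinction can be made concrete with a toy sketch (hypothetical code, invented for illustration; none of the names below come from the sources cited). The buggy function disrupts normal use immediately and so announces itself; the vulnerable one behaves perfectly on every expected input and misbehaves only when fed a deliberately crafted string.

    ```python
    # Toy illustration of the distinction between a bug and a vulnerability.
    # (Hypothetical example: all names and data here are invented.)

    records = {"alice": "admin", "bob": "user"}

    def role_buggy(name):
        # BUG: upper-casing the key breaks every lookup, so the fault
        # disrupts normal functioning and is noticed quickly.
        return records.get(name.upper(), "unknown")

    def role_vulnerable(name):
        # Behaves correctly for all expected inputs, so nothing looks wrong.
        query = "records.get('%s', 'unknown')" % name
        # VULNERABILITY: eval() on unsanitised input is invisible in normal
        # use, but a crafted name can evaluate arbitrary expressions.
        return eval(query)

    print(role_buggy("alice"))       # -> "unknown": broken for everyone, at once
    print(role_vulnerable("alice"))  # -> "admin": looks fine in normal use

    # Only a deliberately crafted input exposes the latent flaw:
    payload = "x') or __import__('os').getcwd() #"
    print(role_vulnerable(payload))  # evaluates attacker-chosen code
    ```

    Many eyes surface the first fault quickly, because it breaks everyone's ordinary use; the second is found only by someone actively hunting for exploitable behaviour, which is the asymmetry described above.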


    Second, relying on preparation makes certain assumptions about the uptake of information in an open system, namely, that it is comprehensive enough to penetrate society and secure the number of people (as above) with the amount of skill we need. But this is not assured as a matter of what it means for a system to be open in the way that distributed and deskilled communities are open. The whole point is to provide information, but that is not the same as education. Huge numbers of individuals use open-source software: 56 per cent of the web server market is held by the open-source organisation the Apache Software Foundation.22 Yet the population of individuals with the skill to fix bugs, much less to repair security vulnerabilities, is surely very small in comparison.23 Developers may be recalcitrant or reluctant to adopt security measures. Software may not be designed with security in mind. And the security risks keep on evolving, faster than people can keep up.24

    Third and finally is the problem of what I will call parasitic malevolence. Innovation in open communities, it seems, runs into the problem that even though secrecy is an unstable strategy,25 it can limit information uptake to accelerate a security program sufficiently. In distributed, deskilled communities, we assume that for each innovation we make, malevolent individuals cannot easily piggyback off our efforts. The degree to which this holds is in part the degree to which preparation strategies can succeed.

    22 Hope, Biobazaar.

    23 Laurie, 'Open Source and Security'.

    24 Bright, Peter, 'Crypto experts issue a call to arms to avert the cryptopocalypse', 2 August 2013, http://arstechnica.com/security/2013/08/crytpo-experts-issue-a-call-to-arms-to-avert-the-cryptopocalypse/ (accessed 28 November 2013).

    25 Schneier, Bruce, 'Securing Medical Research: A Cybersecurity Point of View', Science, 336:6088, 2012, 1527-1529.

    If part of our preparation strategy is to innovate along the lines of securing cyberspace, then perhaps this will work. But if the preponderance of innovation is directed towards new, unsecured endeavours in cyberspace, particularly when connected back to other important services, it will not. Consider the case of Mat Honan of Wired magazine, whose Amazon account was hacked and then daisy-chained into his Apple ID, Gmail and Twitter accounts; the hackers eventually locked Honan out of his computer and then wiped it clean. It is worth noting that the hackers used a customer service line to gain access to his iCloud account.26

    Conclusion: how to proceed

    Can the assumptions on which the preparation strategy is based be borne out, and if so, to what degree? If the answer is yes, so much the better, particularly if preparation is low cost. But if not, or if successful preparation comes at a very high cost, this is a serious concern. It of course does not mean that the preparation strategy is a write-off: it may still be our best option. But it should give us pause.

    This is not merely a technical problem: it is a combination of technical and ethical conflicts. Preparation can lead to rapid innovation, but the rate of uptake of that innovation can vary greatly.27 Early adopters and the highly educated benefit immensely, while the benefits to others decrease in proportion with their ability to use the technology, and to be integrated into the dynamics of the resultant culture.

    26 Honan, Mat, 'How Apple and Amazon Security Flaws Led to My Epic Hacking', Wired, 8 June 2013, http://www.wired.com/gadgetlab/2012/08/apple-amazon-mat-honan-hacking/ (accessed 28 November 2013).

    27 Evans, 'Great expectations'.


    If we rely on prevention only, we appear to be doomed to watch as open culture on the Internet routes around damage. We drive away otherwise benevolent innovation while doing nothing to protect individuals from harm. We may even exacerbate the damage by causing more unrest: the harm caused by revelations of PRISM is yet to be seen, but those revelations already constitute a public relations disaster.

    If we follow through with preparation, the concerns I raise need to be addressed. Rapid innovation requires empowering a great number of individuals to identify weaknesses: this is a coordination issue that has its own trade-offs. Letting the status quo remain is likely to result in a dearth of experts, but there are risks involved in creating security experts in terms of oversight and corruption. A society of skilled experts in ICT vulnerabilities must be accountable for their role in larger society, and when engaged in the highest levels of security, problems emerge. One form this corruption could take is the misuse of such an organisation by the state; the other form is that of individual corruption of members of that organisation, tempted to use their skills for more profitable ends.

    Educating a populace is no doubt a good thing, but it is a politically volatile subject in a number of jurisdictions. Moreover, allowing security needs to influence education would be, I suspect, unpalatable to many. It is also potentially time-consuming, costly, and high-risk. Ultimately, if individuals can limit their own vulnerability and be inured to the social aspects of cybercrime (e.g., phishing), then there is less to worry about. But how much this would cost, and how to do this while preserving education in a range of other areas, is a hugely complex question.

    We will also need to educate law enforcement on a more distributed level, so that state, federal and international agencies can better react against cybercrime. As cybercrime becomes more distributed and affects a greater number of devices (e.g., medical devices, vehicles, etc.), we require greater know-how across the board. In societies in which policing is underfunded or neglected by the political process, or in which policing forces do not have the trust of their communities, this is incredibly problematic.

    These are all, however, fixes that leverage the background institutions on which our society, and our use of technology, rest. The degree to which we are willing to allow preparation to proceed should be a function of our trust in these institutions, and of the degree to which we are willing to trade off the benefits preparation brings against other valuable goods. If our institutions fail us and we reach too far, the costs could be very high indeed.

    Finally, parasitic malevolence provides a serious challenge to preparation. The seemingly obvious solution is community engagement, but this has had mixed results for reasons I've mentioned regarding trust in law enforcement. Surveillance can be problematic without community participation or consent, even if only in terms of backlash when surveillance is revealed.28 However, the cultures about which we are talking are often divided about how to relate to problems of security and the function of the state in promoting security. This is a larger ethical issue that I cannot develop here, but which must also be tackled.

    At the end of the day, preparation may be better than prevention. But in order to show that is the case with any degree of confidence, we need to know what we are promoting, what that trades off against, and whether that comprises an acceptable set of risks to take among an increasingly broad sector of society (i.e., internet users). When we begin comparing and trading off values we enter the domain of ethics; this suggests to me that mere technological innovation cannot be preparation enough: preparation must be designed with ethics in mind.

    28 http://www.theguardian.com/world/2013/jul/31/nsa-top-secret-program-online-data


    Warfare, cyberweapons and morality


    Shannon B Ford


    Introduction

    Cybersecurity is an increasingly prominent feature of the national security agenda. National governments are alarmed at the potential for serious attacks on cybersecurity and are responding accordingly. The Australian Government, for instance, nominated 'malicious cyber activity' as one of seven key national security risks in the 2013 Australian national security strategy.1 Given such an emphasis on cybersecurity, and the significant investment governments are making in this area, clearly it is an important task to establish a firm conceptual grasp on what we mean by the term. After all, it appears that cybersecurity is being used in a very broad sense to capture everything that is happening in the cyber domain. In this sense, cybersecurity is 'an ill-defined catch-all for the nuanced problems of a tech-rich, hyper-networked world'.2 Certainly, cybersecurity encompasses a range of conceptual axes: private and public infrastructure; threats against information or harms to or through physical devices. But cybersecurity can also be used in a more specific sense as protecting national cyber-infrastructure from threats to its reliability. For the purposes of this paper, cybersecurity is the protection of cyber-infrastructure from cyber-attacks.

    1 ACG, Strong and Secure: A Strategy for Australia's National Security, Canberra: Australian Commonwealth Government, 2013.

    2 Kopstein, Joshua, 'Cybersecurity: How Do You Protect Something You Can't Define?', The Verge, http://www.theverge.com/policy/2012/8/22/3258061/defining-cybersecurity-fud-cispa-privacy.

    Cyberweapons

    In this first section, I discuss the threat to national cybersecurity in terms of the use of cyberweapons. Cyberweapons are software programs designed to attack and damage other software (or data within computer systems) with the intention of doing harm. In this, I follow Neil Rowe's (2010) definition of a cyberweapon, in which he argues that cyberweapons are software used to attack other software or data within computer systems.3 Implicit in Rowe's definition is the idea that cyberweapons, by virtue of being an attack on software, are attempting to damage the targeted software in some way.

    Cyberweapons might damage software in a variety of ways. A cyberweapon might infiltrate or 'inject' unwanted data into an information processing system in a way that alters the database. It might also interfere with intended ways of processing that data, as is the case with malware. And there are also cases in which the information flow within an information system is blocked, or in which that flow is degraded or otherwise modified.4

    But cyberweapons are designed to do more than simply damage software. Attacks using cyberweapons are intentional attacks that are instigated or controlled by individuals or political organisations (or their military services) with the intention of harming other persons, political organisations or military services.5 So the purpose of a cyberweapon is to attack an information system in order to perpetrate harm. Stuxnet is frequently cited as a paradigmatic example of a cyberweapon. Discovered in June 2010, Stuxnet is a computer worm specifically designed to attack, and physically damage, Iran's nuclear facilities.

    3 Neil C. Rowe, 'The ethics of cyberweapons in warfare', International Journal of Technoethics 1, no. 1 (2010): 21.

    4 Randall R. Dipert, 'Other-Than-Internet (OTI) Cyberwarfare: Challenges for Ethics, Law, and Policy', Journal of Military Ethics 12, no. 1 (2013): 41.

    5 Randall R. Dipert, 'The Ethics of Cyberwarfare', Journal of Military Ethics 9, no. 4 (2010): 398.



    Cyberweapons pose a number of unique challenges for thinking about the ethics of conflict.

    First of all, it is difficult to determine the source of an attack using a cyberweapon. The 'attribution problem', as it is described, is the difficulty of identifying the source of a cyber-attack with sufficient certainty to respond. To be morally justified in using a forceful response, especially one that causes significant harm to another party, a necessary condition is that the defender knows the alleged attacker intended to cause harm in some way. Yet the necessary components of cyberweapons, i.e., a laptop and an internet connection, are readily available to most people. Expert actors operating from cyberspace do not need significant institutional support to inflict potentially significant amounts of damage on national information networks or infrastructure. So attributing an attack from a cyberweapon to a specific party is notoriously difficult.

    A second challenge of cyberweapons is the possibility that they will lower the threshold for conflict between states. The perception that cyberweapons are not seriously harmful could lead to their increased use and potentially instigate more serious forms of conflict. It might prove that many of these threats are not that serious, but the ever-increasing reliance on cyber systems means that cyber-attacks on software can damage critical infrastructure and threaten the lives of people.

    Third, cyberweapons increase the likelihood of civilians being targeted and/or becoming victims of disproportionate attacks on joint-use infrastructure. The problem here is that the distinction between joint-use and civilian information systems is much less meaningful when the military use of civilian cyber-infrastructure is ubiquitous.

    The war context

    Next I demonstrate the way in which the permissibility of using cyberweapons, or responding to an attack from a cyberweapon, is determined by context. My starting point is the presumption that the use of a cyberweapon should be morally justified. In this, I agree with the view that:

    cyber conflict, just as any form of armed conflict or use of force, should only be prosecuted for a compelling and justifiable reason, and only after all reasonable attempts short of these normally-prohibited measures have been attempted without success. We would, moreover, reasonably expect ourselves and others engaging in this form of conflict to do so only if the ends sought bore some reasonable relationship to the harm and destruction we might otherwise inflict in an effort to resolve the given conflict in our favour.6

    The basis for this justification, according to Lucas, is meeting the Just War principles of proportionality, discrimination and last resort. In other words, one must prove, first of all, that the use of the cyberweapon is necessary in some way. Second, the use of a cyberweapon should be discriminating, in that it is correctly targeted against its intended object. Third, the use of a cyberweapon should be proportionate, in that it does the least amount of harm to achieve its purpose.

    6 George R. Lucas Jr, 'Jus in Silico: Moral Restrictions on the Use of Cyberwarfare', in Routledge Handbook of Ethics and War: Just War Theory in the 21st Century, ed. Fritz Allhoff, Nicholas G. Evans and Adam Henschke (Taylor & Francis, 2013), 20.


    So what does this mean in practice? Let me attempt to demonstrate using a hypothetical example. Let us imagine that the attacking country launches an unjustified serious cyber-attack on a defending country. This attack attempts to damage the defending country's critical infrastructure (e.g., electricity grid, air traffic control systems) in a way that will seriously harm the inhabitants of the defending country. In this case, a plausible approach to the permissibility of using cyberweapons might run along the following lines:

    Necessity: The defending country's use of cyberweapons against the attacker is justified as necessary because it is acting in self-defence, either to halt the attack or to deter the attacker from further hostilities.

    Discrimination: The defender might not have conclusive evidence of the attacking country's complicity in the attack, but the seriousness of the threat gives the defender a high level of leeway in what it targets.

    Proportionality: Likewise, the context of the situation allows the defender to inflict a significant amount of harm in response to the attack.

    This is what is conventionally described as a 'war context'. The conventional war context presupposes that a damaging act inflicted with the intention of harm is part of a larger struggle between two or more political communities engaged in armed conflict. That is, the incident is classifiable as an armed conflict and part of a war-fighting effort. This means that the decision to use, or refrain from using, force occurs within, and must be judged in terms of, an environment vastly different to what we would expect within a non-war context.

    Non-war contexts

    Now let us turn to discuss non-war contexts. In most cases, the threats found in cyberspace are more mundane than ruining critical infrastructure or hijacking drones. Most cyber-attacks are best described as cybercrime. These include: spam rings, extortion, money laundering and other organised criminal activity. For these threats, law enforcement is the more appropriate context for conceptualising cybersecurity; and this involves different actors with different reach, jurisdictional boundaries and purposes.

    The conventional law enforcement context describes an environment where a sovereign state, or similar political community, is reasonably effective at managing violent conflict within its own jurisdiction using a common body of law. It presupposes at least a basic form of government with a functioning law-making body, criminal justice system and policing institutions. It also requires the absence of serious armed conflict, especially recurring violent incidents between large politically motivated groups. Within the law enforcement context, belligerents who are party to a conflict are treated as suspected criminals and not as combatants.

    The conventional understanding of violent conflict within the law enforcement context assumes three basic environmental, or contextual, conditions. First, the conflict is a domestic rather than an international issue. This means that any given state is responsible for resolving a violent conflict that occurs within its own jurisdiction. Second, it is generally assumed that the parties to a violent conflict are non-state actors and the role of the state is to adjudicate fairly between them. Third, the incident is not classifiable as an armed conflict or part of a war-fighting effort.


    Having briefly highlighted the war context and the law enforcement context, there is also the sometimes-discussed murky world of cyber espionage to consider. According to Dipert,7 attacks in this realm include 'traditional counterespionage and disinformation campaigns, old-fashioned destruction of telephone lines, jamming of radio signals, killing of carrier pigeons, and so on'. He goes on to suggest that espionage is not usually an activity that has been considered part of the moral considerations in regard to conventional conceptions of Just War Theory, and that the ethical considerations in espionage and other intelligence-gathering operations are but one of the several traditionally neglected aspects within the morality of conflict.8

    Let us consider a second example to illustrate the point that our interpretation of the basic moral principles for using cyberweapons should look considerably different in non-war contexts. Imagine a second example in which a group working from the attacking country steals sensitive information from the defending country, but with little or no damage to the defender's cyber-infrastructure and no threat to the inhabitants of the defending country. In contrast to the first example:

    Necessity: Perhaps it is possible that the defender is justified in using a cyberweapon in this case, but it would require some other key factor. For example, the attacker might be stealing sensitive information as part of its preparation for a military attack on the defender.

    Discrimination: Even if the use of a cyberweapon is somehow justified, we would require more stringent proof as to the identity of the perpetrators.

    Proportionality: We would also want the harm to be strictly localised to the target in question.

    7 Randall R. Dipert, 'The Ethics of Cyberwarfare', Journal of Military Ethics 9, no. 4 (2010): 386.

    8 Randall R. Dipert, 'The Ethics of Cyberwarfare', Journal of Military Ethics 9, no. 4 (2010): 389.

    Conclusion: the war-fighting distinction

    In conclusion, the point of this exercise is to demonstrate the importance of what I refer to as the 'war-fighting distinction' for determining the application of moral principles in conflict. The war-fighting distinction states that designating a context as 'war' alters the way in which we should interpret the basic moral principles for justifying the use of cyberweapons, or responding to being attacked by a cyberweapon. In this case, there is an important moral distinction to be made between attacks using a cyberweapon and those designed for the purposes of cyber exploitation. And the reason accurately describing cyber-attacks matters is that there is always a risk of escalating a case of espionage or crime to international armed conflict.9

    In other words, we do not want states overreacting to cyber-attacks. The key mistake here is to conflate the threat from a cyberweapon with one that involves cyber exploitation. There are non-damaging cyber-attacks that aim to exploit an information system without causing harm. Examples include: 1) theft of information, both state-sponsored and criminal; 2) creating unauthorised access to a system; or 3) attempts to take control of an information system. The important point here, so the reasoning goes, is that we should acknowledge an important distinction between attacks using cyberweapons, which, as I have argued, aim to harm infrastructure and persons, and cyber exploitation, which involves a non-damaging attack on cyber-infrastructure, and then respond accordingly. Consequently, the permissibility of using cyberweapons in response to a cyber-attack is contingent upon the context of the threat.

    9 Panayotis A. Yannakogeorgos, 'Keep Cyberwar Narrow', The National Interest, http://nationalinterest.org/commentary/keep-cyberwar-narrow-8459.



    Adam C Gastineau

    Policing cyber-threats: Regulating exceptional police actions



    Introduction

    As with any type of surveillance, the challenge for security services is to balance the need to detect and interdict threats to the community against the rights of the community that they are meant to protect. Cyber-surveillance grants security services great power in both collecting and collating information, allowing state agents to form highly detailed profiles of individuals or groups by using a wide range of sources of digital information. However, as the recent controversy with the US National Security Agency has revealed, states cannot expect to have carte blanche in the use of such technology. The central claim of this section is that policing activities in the cyber sphere can prove to be as great an issue of moral concern to a population, particularly the populations of liberal democracies, as the activities of those seeking to undermine the state or actively attempting to harm the populace. As technology increases our ability to access each other's lives, we become more capable of affecting, directly or indirectly, the actions of others. Insofar as we hold certain political goods (e.g., freedom) and personal goods (e.g., autonomy or individual liberty) to be valuable, we have reason to ensure that this increased power granted to the state to conduct general surveillance of its populace is carefully regulated. Failing to do so would allow for large-scale privacy violations, and with these violations large-scale ethical violations against those living within such societies.1

    This paper will proceed as follows. First, I provide a brief overview of what exactly I mean when discussing 'cyber' (or 'cyberspace') and 'cyber-surveillance'. Second, I outline the ostensibly neutral definition of privacy given by Ruth Gavison. This definition treats privacy as an issue of accessibility rather than one of control, and I offer a brief summary of the reasons behind this. Next, I briefly examine why we might think that restrictions of privacy are ethically problematic. Finally, I apply this analysis to the issue of cyber-surveillance to argue that societies that claim to value freedom and individual rights should place strong restrictions on the use of such technology by the state. I then briefly outline what these restrictions might look like, and explain when the state might be ethically justified in using such technology.

    1 References to 'just' or 'unjust' refer to ethical, not necessarily legal, justification. Such claims may or may not also apply to legal justifications. As such, something may be permitted by law but not be ethically just by this analysis.

    Cyber-surveillance and privacy

    Before discussing privacy, some account of cyber-surveillance is necessary. Some current cases include: surveillance of populations via network-enabled devices, such as travel information gleaned from smart cards by Victorian Police2 and the NSA's use of large amounts of data from major internet providers to identify and track particular targets; and the possible use of drones by law enforcement agencies to collect evidence for use in later prosecutions. The term 'cyber' or 'cyberspace' has not yet been formally defined, though there is some general agreement in the literature. One might traditionally define cyberspace as the existing set of interdependent, networked communications and information technology systems. Such systems include the internet and all that is associated with it, incorporating online social networking sites, e-mail, phones, SMS/text, and GPS. Also included are systems that provide services facilitated by the internet and other similar networks. These public cyber systems include transport key cards or e-tags that track a user's activity on various public transit systems or roadways, and charge the user's account accordingly. Also included in this set would be commonly used security tools such as CCTV cameras and licence plate/traffic cameras. Such systems collect a very narrow set of data, which is then indexed and stored.

    2 Thomas Hunter, 'Myki is Watching You: Smartcard Operator Gives Police Access to Customer Info', The Sydney Morning Herald (2010); Neil Rees et al., 'Surveillance in Public Places' (Victorian Law Reform Commission, 2010), 180.


    When these narrow data sets are combined and used in conjunction with other cyber systems, however, they can be used effectively to monitor individuals. By collating these various narrow data streams one can determine movement patterns, identify vehicles or other forms of transport commonly used, and other habits. Cyber technology allows one to create a comprehensive picture of an individual and their habits in a relatively short time.
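The collation step just described can be sketched in a few lines of code. Everything below — the three narrow data streams, their field layout, and the `collate` helper — is a hypothetical illustration of the principle, not any agency's actual tooling.

```python
from collections import defaultdict

# Hypothetical narrow data streams: each system records only one kind of event.
smartcard_taps = [("alice", "08:02", "tap on: Flinders St Station")]
toll_records   = [("alice", "17:40", "toll gantry 3 passed")]
call_metadata  = [("alice", "12:15", "call placed near CBD cell tower")]

def collate(*streams):
    """Merge per-system records into one time-ordered profile per person.
    Individually, each stream is narrow; collated, they yield movement
    patterns and habits -- the comprehensive picture described above."""
    profiles = defaultdict(list)
    for stream in streams:
        for person, time, observation in stream:
            profiles[person].append((time, observation))
    for events in profiles.values():
        events.sort()  # chronological order (times are HH:MM strings)
    return dict(profiles)

profiles = collate(smartcard_taps, toll_records, call_metadata)
# profiles["alice"] is now a chronological sketch of alice's day,
# reconstructed from three unrelated systems.
```

Each record on its own is close to innocuous; it is the merge-and-sort step that turns three narrow data sets into a movement profile, which is why the ethical question arises at the point of collation rather than collection.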

    It is not my intention here to go into any great detail on the various debates that have sprung up in the privacy literature. Nonetheless, a concept of privacy in some form is necessary for the discussion. Certain assumptions will be made, but these should not have a great effect on the claims I wish to make regarding the limits to be placed on police powers in liberal democracies. Though the divergent views inherent in the varying conceptions of privacy may be at odds with one another, given a neutral approach to privacy, the conclusions drawn should remain the same.

    Ruth Gavison has offered what she terms a neutral definition of privacy.3 This definition seeks to outline the characteristics of the term without exploring why privacy might be valuable; hence the neutrality of the terminology. As such, it is primarily a functional description of what we are talking about when we consider privacy, and in particular violations of privacy. Gavison points out that when we are discussing issues of privacy we are referring to our accessibility to others, and she breaks this accessibility into three main categories: information/secrecy, attention/anonymity, and access/physical access.

    First, information is precisely that: data about ourselves. The greater access one has to data about an individual, the greater the reduction in privacy. If I have access to information about an individual I may use that data to forecast behaviour, track the individual, or gain access to additional data, such as bank records or e-mail accounts. In gaining information about an individual, I gain access to them as an individual even if I never actually interfere with them at all.

    3 Ruth Gavison, 'Privacy and the Limits of Law', The Yale Law Journal 89, no. 3 (1980).

    Second, Gavison splits attention into two sub-categories. One may think about an aspect of a particular individual. For example, I might think about what Tony Abbott is likely to do on his days off; what kind of beer he most likely drinks; where he goes; who he may see, and so on. Gavison argues that my paying this kind of attention to Tony Abbott does not violate his privacy, since this sort of speculation, on its own, gives me no real access to him. But there is another type of attention that one might pay to an individual, which would constitute an invasion of their privacy. This is what I will call 'actualised attention'. This kind of attention is not merely focusing one's thoughts on an individual and speculating about them, but actually watching, tracking, or otherwise monitoring an individual as they go about their daily lives.

    Finally, Gavison points out that one considers one's privacy to be violated when an agent is able to gain physical access to us without our consent. By 'physical access' Gavison means that one is able to place themselves in close enough physical proximity to an individual that they are able to observe that individual by means of their regular senses. For example, a peeping tom who stations themselves outside an individual's window to watch them undress violates the individual's privacy in this way.

    These three characteristics are interrelated, but conceptually independent of one another. I might gather information on an individual without ever focusing my attention on the individual themselves or being in close physical proximity to them. Likewise, I might observe an individual closely, staring at someone at the next table in a café, without acquiring any further information about them. Finally, peeping toms may restrict another's privacy by positioning themselves outside the window, as in doing so they give themselves physical access to that individual without the individual's permission, even if the individual is not yet in the room. In many cases, however, these three characteristics of accessibility are interrelated and feed one another.




    The value of privacy4

    Privacy is valuable for at least three reasons. First, privacy acts as a restraint against the state dominating its citizens, thus protecting citizens' freedom from state power. Second, having a sphere of privacy, wherein no one can gain access to me without some justification, reinforces my status as an autonomous individual. Third, insofar as this sphere of privacy is important to me as an autonomous individual, one has a strong claim that it not be violated without sufficient justification. All three of these claims about privacy are extensively discussed in the ethical literature on the subject. In the interests of space I will only address the first of these reasons here.

    4 For accounts discussing the importance of privacy to the autonomous individual, see Jeffrey H. Reiman, 'Privacy, Intimacy, and Personhood', Philosophy & Public Affairs 6, no. 1 (1976); Joseph Kupfer, 'Privacy, Autonomy, and Self-Concept', American Philosophical Quarterly 24, no. 1 (1987). For accounts discussing privacy as a right, see Judith Jarvis Thomson, 'The Right to Privacy', Philosophy & Public Affairs 4, no. 4 (1975); Thomas Scanlon, 'Thomson on Privacy', ibid.; Thomas Nagel, 'Personal Rights and Public Space', ibid. 24, no. 2 (1995).

    Philip Pettit formulates a concept of freedom5 based on the difference in power between agents. Pettit suggests that freedom is a sort of 'anti-power'. In order for an individual to be free, others must not have the power to dominate that individual. One is dominant over another if, and only if, one has an actual capacity to interfere with the dominated individual, can do so without penalty, loss, or chance of the subjugated individual asserting themselves in response, and can do so within a certain sphere of choice.6

    Pettit's account maintains that, provided these three conditions are met, it is irrelevant whether the dominating party chooses to interfere with the subject or not. The mere fact that the dominator could do so is sufficient to support the claim that the subjugated agent is not free. Pettit has also pointed out that this anti-power, or lack of dominance, must be robust if it is to constitute freedom. If one might be dominated at one point in time, but not another, based on the whim of the potential dominator, then one is not actually free. In short, if there is a door-keeper on which one's freedom is contingent, then one is not in fact free. Liberal democracies that stress the importance of freedom must therefore ensure that the policies and legislation they pursue preserve this freedom by preventing the state dominating the individual in this way.7
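Pettit's three jointly necessary conditions can be rendered as a compact predicate. The function and field names below are my own illustrative sketch of the definition, not Pettit's formulation.

```python
def dominates(relation):
    """Pettit: A dominates B iff A (1) has the capacity to interfere with B,
    (2) can do so with impunity and at will, and (3) can do so within a
    certain sphere of B's choices. Whether A ever *actually* interferes
    does not appear here: the capacity alone negates B's freedom."""
    return (relation["capacity_to_interfere"]
            and relation["with_impunity"]
            and relation["within_sphere_of_choice"])

# A state that merely could surveil anyone, at will and without penalty,
# already satisfies all three conditions -- no interference required.
latent_surveillance_state = {
    "capacity_to_interfere": True,
    "with_impunity": True,
    "within_sphere_of_choice": True,
}
```

The point of encoding it this way is that the conjunction makes the 'robustness' requirement visible: removing any one condition, such as impunity through judicial oversight, defeats the predicate.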

    5 There are many accounts in ethical literature and political theory of what exactly constitutes freedom. This is only one of these accounts. For other accounts of freedom as non-interference see Thomas Hobbes, Leviathan (1651); J.S. Mill, On Liberty (Oxford: Oxford University Press, 1991); and, more recently, Robert E. Goodin and Frank Jackson, 'Freedom from Fear', Philosophy & Public Affairs 35, no. 3 (2007).

    6 '(1) has the capacity to interfere (2) with impunity and at will (3) in certain choices that the other is in a position to make'. Pettit includes manipulation as a form of interference. Philip Pettit, 'Freedom as Antipower', Ethics 106, no. 3 (1996).

    7 Philip Pettit, 'The Instability of Freedom as Noninterference: The Case of Isaiah Berlin', Ethics 121, no. 4 (2011).

    Privacy as accessibility

    For example, take a standard case of a stalker. The stalker notices an individual in a café and watches the individual closely, following them as they leave and walk to a particular address. Using this information the stalker obtains information about the person who lives or works at this address, accessing their Facebook account or tracking them through social networks and monitoring their movements online. Finally, the stalker adopts a position from which to observe their subject without their knowledge, placing themselves outside their target's window when they know the target to be elsewhere, in order to watch them when they return. Here one can see how actualised attention garners information, which allows for further actualised attention, more information, and finally physical access to the individual targeted.



    Implications for police surveillance

    Cyber-systems increase the capability for an agent or agency to gain access to individuals. In some cases, as with drones or security cameras, such technologies may make it easier to focus our attention on an individual or group we wish to monitor. However, these technologies help us in other ways as well. Nissenbaum8 has pointed out that data gleaned from focusing our attention is sporadic and disjointed, and therefore difficult to analyse in many ways. This is no longer the case when drones and other cyber technologies are used, particularly when used in tandem with information obtained from monitoring other cyber systems such as meta-data, smart cards, and e-tags. These technologies allow us to connect small pieces of information to one another and collate them into a comprehensive picture of an individual, allowing us greater ability to interfere with that individual as they go about their daily lives, and thereby reduce their freedom.

    Think back to our stalker case and substitute a police officer, or other state agent, for the stalker. You are sitting peaceably in a café when you notice a uniformed police officer watching you, staring at you. This might be mildly disturbing, but not catastrophic. Now imagine that as you leave you notice the officer following you to your car. As you drive off you notice a police car behind you in traffic, which stays there until you arrive at your destination. At various times during the next week you notice police driving by, or officers watching and perhaps following you when in public. So far we have only addressed one aspect in which privacy might be violated: attention. Now imagine you discover that the government has gained access to various pieces of data such as those mentioned above, and is using that data to further focus its attention on, and gain physical access to, you. This data might be meta-data logging the location and time of phone calls you have made or e-mails that you have sent, who received them, and when. Such data might also include your browsing history on the internet or membership of certain online forums. Perhaps you discover that the police have gained access to smart-card records kept by the public transport systems that record the stops at which cards are used, or have tracked the times and dates you have passed certain toll stations. All of these fall into the second aspect of privacy above: access to information. Finally, let us incorporate the third aspect of privacy: physical access. One day, after several weeks of being watched, you arrive home. You've not been followed, but sitting across the street from your house is an official-looking car with a suited individual in the front seat. Later that evening you notice the same individual standing outside your house, watching you as you move around inside your home. Such a situation would likely make one particularly uncomfortable and would certainly affect the decisions one might make. One might avoid certain neighbourhoods or groups that one believes to be considered dangerous by the state, whether one believed them to be dangerous or not. One might not engage in certain activities that are legally or socially unacceptable, even if only trivially so. For example, one might pay much more attention to one's speedometer to observe the speed limit, and might not visit bars in a red-light district or gathering places for those with ideas that run against the social norm.

    8 Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford University Press, 2009).

    This scenario demonstrates how state surveillance, without the restrictions suggested here, could reasonably result in changes to an individual's behaviour that amount to domination by the state, and so a violation of their liberty. This is not to say that surveillance is never permissible. Such restrictions seem perfectly reasonable if one has chosen to act in a way that designates them as a threat to others, by choosing to speed while driving, for example, or choosing to behave recklessly in public. However, insofar as such surveillance targets individuals who have not chosen to engage in such behaviour, it violates their privacy, and insofar as such violations affect the choices they might permissibly make, it restricts their freedom without justification.



    To highlight a case of particular concern, members of ethnic or religious groups that are considered to be 'risky' in some sense by the state may be less likely to engage in activities considered permissible for other groups: choosing not to attend political, religious, or social events they would otherwise wish to attend because they fear that doing so will expose them to greater invasions of privacy or open them to other forms of state interference. Presuming that one has no choice in membership of such groups, these individuals are not liable to state surveillance, or accompanying restrictions in freedom, because they themselves have yet to demonstrate that they are a threat.

    In cases where there is some choice, as in association with specific religious institutions or social organisations or clubs, some surveillance of these institutions, organisations or clubs may be permitted. However, further surveillance of individual members may or may not be permissible. Those who have chosen to behave as a threat, as defined above, are liable and therefore targetable because of this choice; but according to the analysis presented here, those who cannot be shown to have made such a choice are not subject to such surveillance, as it violates their privacy and thereby unjustly restricts their freedom.

    Police surveillance is greatly enhanced by advances in technology. However, such advances are only valuable as long as they are used to preserve those values, like liberty, that are held to be most important in society. When they are used in a way that damages such values, regulation is necessary to maximise the benefit of these advances for all.

    Insofar as one gains access to another, one restricts their privacy. Insofar as we restrict their privacy, we restrict their freedom. But surely this is not always a bad thing. There are plenty of cases in which such reductions of freedom might be permissible. How then do we determine which privacy restrictions are permissible and which are not? In cases where the target being monitored is acting, or perhaps has recently acted, in a way that makes them liable to be so monitored, the individual loses their claim, to some extent, to privacy and must bear the requisite loss of freedom that results.

    How then do we determine who is liable and who is not? Here I turn to a recent RAND Europe report on cybersecurity, which seems to offer a solution.9 In this report RAND analysts offer a distinction between risks and threats. Risks are a function of possible threats, multiplied by vulnerability to such threats, multiplied by the severity of the possible outcome should the threat eventuate. Threats, on the other hand, consist in a motivated adversary who exercises choice. Risks, then, are general in scope, whereas threats, in contrast, require a specific adversary that behaves in a certain way. Because threats constitute wilfully acting in an adversarial manner, they pose a greater risk of harm, and so are liable to have their privacy violated, and freedom restricted, by security surveillance in proportion to the threat they pose.

    To clarify this last point somewhat, consider the following example: all else being equal, a person carrying a firearm constitutes some risk to others. However, when the individual wilfully acts in an adversarial manner, by drawing the weapon and pointing it at some other agent, or indicates some motivation to harm by leaving the weapon holstered while verbally threatening others with harm, or perhaps even just shouting aggressively at others while gripping the weapon or displaying the weapon in its holster, it seems reasonable to say that the individual poses a greater risk of harm: a threat.10 The individual becomes liable to have greater restrictions placed on their freedom, such as being subject to surveillance by the state, for at least as long as such behaviour is manifest, because they choose to behave in this fashion. Risks, on the other hand, are not liable to be so restricted, precisely because they lack the active motivation and choice necessary to become liable. In short, while they might become a threat sometime in the future, they are not yet a threat, and so are not permissible targets for state surveillance. This then makes any such surveillance a violation of their privacy, thereby unjustly restricting their freedom.
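The RAND distinction can be made concrete with a small sketch. The multiplicative risk function and the two-part threat test below are my own illustrative reading of the report's definitions, and the scores are hypothetical.

```python
def risk_score(threat_likelihood, vulnerability, severity):
    """RAND-style risk: possible threats x vulnerability to them
    x severity of the outcome should the threat eventuate."""
    return threat_likelihood * vulnerability * severity

def is_threat(adversary):
    """A threat, unlike a mere risk, requires a specific adversary
    who is both motivated and exercising choice."""
    return adversary["motivated"] and adversary["exercises_choice"]

# Risk is general in scope: no particular adversary is named.
general_risk = risk_score(threat_likelihood=0.5, vulnerability=0.5, severity=8)  # 2.0

# The firearm example: carrying is a risk; drawing and pointing is a threat.
carrier   = {"motivated": False, "exercises_choice": True}
assailant = {"motivated": True,  "exercises_choice": True}
# Only the assailant satisfies is_threat, and so only the assailant is
# liable to proportionate surveillance on this analysis.
```

Framed this way, a high `risk_score` alone never licenses surveillance of a particular person; liability turns entirely on the `is_threat` conditions, which track the adversary's own choices.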

    9 Neil Robinson et al., 'Cyber-Security Threat Characterisation: A Rapid Comparative Analysis' (Cambridge, UK: RAND Europe, 2013).

    10 Of course there may be some disagreement about when the risk posed by an individual agent is sufficiently elevated for that individual to be considered a threat. However, it is sufficient for the discussion here to give some idea about how one should go about drawing the line, even if space requirements do not allow for argument about precisely where that line should be drawn.



    Conclusion: restraining police surveillance

    This analysis rules out giving police and other state security agencies broad access to data pertaining to those living within the state. It also requires that some sort of third-party evaluation is necessary in cases where the police believe a threat to exist, to ensure that the target of that surveillance is not merely an individual or group posing a risk, but rather one that constitutes a threat to the community. This evaluation currently takes the form of judicial review of such cases, with a warrant or similar legal mandate required before surveillance is undertaken. It seems to me that these precedents should be applied to police use of cyber-surveillance as well. As cyber technologies advance, however, such restrictions and oversight must take into account the increased capability these technologies offer to invade the privacy of individuals, as states that claim to value freedom cannot morally restrict the freedom of their citizens without good cause.


    Virtual terrorism: Data as a target


    Levi J West


    Introduction

    The conceptualisation of cyberterrorism is by its inherent nature contested, opaque and, at times, wilfully inaccurate. Engaging in a structured analytical process that seeks to establish some degree of conceptual clarity in regards to this concept is essential so as to ensure that public policy responses and broader security discourse in relation to cyberterrorism remain proportionate and necessary and, crucially, effective. This paper identifies the dominant approaches to the concept of cyberterrorism and highlights some of their key shortcomings. Further to this, the paper hypothesises actions and behaviours that could accurately be described as cyberterrorism, and, through use of alternatively proposed language, suitably depicts malicious behaviour occurring in or through cyberspace. By drawing on central works in relation to the definition of both 'cyber' and 'terrorism', this paper seeks to identify potential new sites for ideologically motivated activity that targets and/or leverages cyberspace. In doing so, it is hoped that emergent activities and possible future incidents may be conceptualised in a more accurate, moderate and informed manner, ensuring appropriate responses.

    First of all, it is necessary to address the definitionalproblems