Limited Autonomy

Keynote Address at DASC-09 / PICom-09

Clark Thomborson

12 December 2009

Summary

Analyse the security and functionality of a system with limited autonomy:
using a very high-level model,
discovering three fundamental design patterns, and
discussing the implications of, and prospects for, a fully autonomous system.


Autonomic Computing

IBM (Horn, Chess, et al.): a system is autonomic to the extent that it is

1. Self-configuring,
2. Self-optimizing,
3. Self-healing, and
4. Self-protecting.

A fully autonomic system would be self-managing.


A Structural Analysis

An autonomic computing system could replace a workgroup (= a set of workers plus a first-level manager).

IBM’s AC is a strict hierarchy: the manual manager is superior to the autonomic managers, and the autonomic managers are superior to the managed resources.


My System Diagrams

An actor is an oval. Arrowheads indicate hierarchical control.

The superior sets policy (= what the inferiors must do, and what they must not do.)


The superior defines the organisation’s structure.
The superior punishes and rewards inferiors.
The superior observes inferiors. Inferiors cannot observe the superior.
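A minimal sketch of these control relations (in Python; the class and method names are invented for illustration): the superior sets each inferior’s policy, observes its inferiors, and sanctions them retrospectively, while no channel runs the other way.

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """An actor (an oval in the diagrams)."""
    name: str
    policy: set = field(default_factory=set)       # what this actor may do, fixed by its superior
    inferiors: list = field(default_factory=list)  # actors under this actor's hierarchical control

    def set_policy(self, inferior, allowed_actions):
        # The superior sets policy: what the inferior must do, and must not do.
        inferior.policy = set(allowed_actions)

    def observe(self, inferior):
        # The superior observes inferiors; there is no channel in the other direction.
        return inferior.policy

    def sanction(self, inferior, reward):
        # Retrospective control: punish or reward after the (alleged) behaviour.
        verb = "rewards" if reward else "punishes"
        print(f"{self.name} {verb} {inferior.name}")

# A strict hierarchy, as in IBM's AC: a manual manager over an autonomic manager.
manager = Actor("manual manager")
autonomic = Actor("autonomic manager")
manager.inferiors.append(autonomic)
manager.set_policy(autonomic, {"configure", "optimize", "heal", "protect"})
manager.sanction(autonomic, reward=True)
```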

Security Analysis

This system has no external threats. The inferiors in this system are not self-controlling: their policies are fixed by the superior.


Internal threat: the superior might make a mistake in their system governance (= specification, implementation, assurance).

Micro to Macro Security

“Static security”: system properties (confidentiality, integrity, availability).

“Dynamic security”: system processes (Authentication, Authorisation, Audit). Beware the “gold-plated” system design!

“Security Governance”: human oversight.
Specification, or Policy (answering the question of what the system is supposed to do);
Implementation (answering the question of how to make the system do what it is supposed to do); and
Assurance (answering the question of whether the system is meeting its specifications).


Self-Governing Computers?

IBM’s Autonomic Computing will handle some governance decisions.
A self-configuring actor will adjust minor details of its specification, when it is “hired”.
A self-optimizing actor will adjust minor parameters in its specification, in an effort to reach optimality criteria.
A self-healing or self-defending actor will adjust an implementation, if it receives a negative report from an assurance monitor.
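A minimal sketch of this limited self-governance (in Python; the parameter names, thresholds, and fallback are invented for illustration): the actor tunes a minor parameter toward an optimality criterion set by its superior, and swaps its implementation when an assurance monitor reports a fault.

```python
class SelfManagingActor:
    """Illustrative only: adjusts minor details of its own specification and
    implementation, but its goals are still fixed by a superior."""

    def __init__(self, target_latency_ms=100):
        self.target_latency_ms = target_latency_ms   # fixed by the superior (the specification)
        self.batch_size = 32                         # minor parameter the actor may tune itself
        self.implementation = "primary"

    def self_optimize(self, measured_latency_ms):
        # Self-optimizing: nudge a minor parameter toward the optimality criterion.
        if measured_latency_ms > self.target_latency_ms:
            self.batch_size = max(1, self.batch_size // 2)
        else:
            self.batch_size += 1

    def self_heal(self, assurance_report_ok):
        # Self-healing: change the implementation on a negative assurance report.
        if not assurance_report_ok:
            self.implementation = "fallback"

actor = SelfManagingActor()
actor.self_optimize(measured_latency_ms=150)   # parameter shrinks
actor.self_heal(assurance_report_ok=False)     # switches to its fallback implementation
```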

Could a system govern itself?

Fully Autonomous Systems

A fully autonomous system has no superior. It must specify itself. It must implement itself. It must assure itself.

Is this possible?


Human Autonomy

Are humans self-governing? Do we control our own specification, implementation, and assurance? ???

I’m a security theorist, not a theologian, philosopher, psychologist, sociologist, or biologist.
I am developing a general framework for security analyses. My framework is based on Luhmann’s Trust and Power, and on Lessig’s theory of control.

Lessig’s Taxonomy of Control

Computers make things easy or difficult.
Governments make things legal or illegal.
An economy makes things inexpensive or expensive.
A society makes things moral or immoral.


Hierarchical Control

The vertical axis of Lessig’s taxonomy is hierarchical control.

The superior actor defines what is legal, and what is illegal, for its inferiors.


The superior actor defines the structure (Lessig’s “architecture”) of the system, thereby making some things easy and some things difficult.

Lessig’s Vertical Axis

Easy / Difficult: systems make it easy (or difficult) for inferiors to act in certain ways. This control is exerted prospectively, before the behaviour occurs, by the superior’s prior governance of the system.

Legal / Illegal: a superior is the judge of what is legal and illegal for inferiors to do. This control is exerted retrospectively. After the (alleged) behaviour has occurred, the superior can modify the system (i.e. punish or reward).

An External Threat

We extend our framework to include external threats.


The dashed line is an alias relation.
The blue actor is a sysop. Bob is the human who is expected, by Alice, to be her sysop.
Bob sometimes has other priorities...

(Diagram actors: Alice, Sysop, Bob, PC1, PC2.)

Threat Mitigations

Alice can observe her Sysop. If she detects misbehaviour, she can fire Bob. This is a legal control, mitigating the threat of her Sysop not meeting its specification (due to contrary “control signals” from Bob).

Alice can install computer systems that are hard for Bob to subvert. This is an architectural control on Sysop misbehaviour.


All Aliases are Threats

Alice has logged into PC1. Her login is an actor in cyberspace.
Her login may cause damage to Alice: it’s a security threat!

Architectural control: limited-authority login.


(Diagram actors: Alice, her login on PC1, Sysop, Bob, PC1, PC2.)

Legal (= retrospective, hierarchical) control: adjust system after damage has occurred.


A Peerage

The peers define the goals of their peerage.

If a peer misbehaves, their peers may punish them (e.g. by expelling them).

Peers can trade goods and services.

The trusted servants of a peerage do not exert control over peers.

The trusted servants may be aliases of humans, or they may be automata.

(Trusted servants include a Facilitator, Moderator, Democratic Leader, Auctioneer, …; peers include group members, citizens of an ideal democracy, consumers and producers, …)
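A minimal sketch of this peerage pattern (in Python; the names and the simple majority rule are invented for illustration): the peers collectively own the peerage and may vote to expel a misbehaving peer, while a trusted servant merely tallies votes and exerts no control of its own.

```python
class Peerage:
    """Illustrative only: the peers, collectively, are the owner."""

    def __init__(self, peers):
        self.peers = set(peers)

    def vote_to_expel(self, accused, votes_for):
        # Retrospective, normative control: peers judge a peer's behaviour.
        if votes_for > len(self.peers) / 2:
            self.peers.discard(accused)
            return True
        return False

class TrustedServant:
    """A facilitator / moderator / auctioneer: no authority over the peers."""
    def tally(self, ballots):
        return sum(1 for b in ballots if b)

peerage = Peerage({"P1", "P2", "P3"})
servant = TrustedServant()
votes = servant.tally([True, True, False])
peerage.vote_to_expel("P3", votes)   # P3 is expelled by majority vote
```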

Lessig’s Horizontal Axis

Inexpensive / Expensive: a society’s prior economic activity sets the current prices of goods, and also the current reputations of peers. This is a prospective control, affecting future behaviour of peers.

Moral / Immoral: a society decides, retrospectively, whether a peer behaved inappropriately (or admirably). The feedback is generally through reputation, but in extreme cases a peer may be expelled.


Back to Autonomous Systems...

IBM’s AC is purely hierarchical. Could a pure peerage be a design pattern for an autonomous system? Yes. Such systems are being explored by many AI researchers: “swarm intelligence” (Beni & Wang, 1989).

Ants in a swarm have architectural and legal control over their trusted servants, and social and economic control over their peers.


Security Analysis: Two Cases

1. The Trustee may be a peer.

2. The Trustee may be an outsider.


(Diagrams: a Trustee serving peers P1, P2, P3; in one case an outsider, Bob, is the Trustee’s human alias.)

The Trustee is not a threat agent.

The alias of the Trustee is the primary threat to the peerage.

Peer Identification

Peers can exchange information. They can identify each other by “what you know.”
Cryptographic signatures are very important in peerages.
No central PKI: a peer must self-sign a digital certificate, and develop a reputation.
Peers can exchange goods and services. Identification by “what you have” is possible, if peers can create distinctive signed objects, e.g. works of art.
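A minimal sketch of identification without a central PKI, using Ed25519 signatures from the pyca/cryptography package (the peer names and message are invented): each peer generates its own key pair, publishes the public key itself, and accrues reputation on that key.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Each peer generates its own key pair; there is no certificate authority.
p1_private = Ed25519PrivateKey.generate()
p1_public = p1_private.public_key()        # published by P1 itself, alongside its messages

# P1 signs a message; any peer holding the self-published key can verify it.
message = b"P1 offers one unit of service to P2"
signature = p1_private.sign(message)

try:
    p1_public.verify(signature, message)   # raises InvalidSignature on failure
    print("Signature checks out: attribute this message to the holder of P1's key")
except InvalidSignature:
    print("Reject: not signed by the key P1 has been using")
```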


Biometrics are hierarchical

Peers cannot observe each other, so biometric identification of peers (“what you are”) is infeasible in a pure peerage.

Peers can observe their Trustee. Analysing from P3’s perspective:


(Diagram: peers P1, P2, P3 and their Trustee.)

Functionality and Security

Security and functionality requirements are set by the owner of the system.

The owner of a hierarchy is the human alias of the superior. Owners are always humans: their hopes and fears define their security requirements.

The human aliases of the peers are, collectively, the owner of their peerage. Their Trustee holds the constitution of the peerage, and handles their voting procedures.

Functional Analysis

A tightly-controlled (highly secured) hierarchy is less functional than a loosely controlled one, because the superior is a bottleneck. All details of all policies, and all oversight, must be done by the superior themselves. Delegation is dangerous!

A peerage has many threat agents. Peers (especially ones who serve as Trustees) will fear their peerage, unless it has relatively little functionality.


Hybrid Vigour?

Real-world systems are not pure hierarchies or peerages.

What benefits can we gain from impurity? One downside: less analysability.

The pure hierarchy is the ideal case for security.

In a non-hierarchical access control system, it can be difficult (even undecidable!) for a guard to determine if an access request should be allowed.


IBM’s AC is Actually Impure

IBM’s blueprint for autonomic computing (4th Edition, 2006) defines two types of orchestrating managers:

1. Orchestrating within a self-X discipline
2. Orchestrating across disciplines


(Diagram: orchestrating managers M1 and M2 over managed resources S1, S2 and their aliases S1', S2'.)

Aliases are Always Threats

(S1, S1’) may be unable to meet the requirements of both M1 and M2.

S1’ is a threat to M1, and S1 is a threat to M2.

Evaluating this threat may be very difficult.



An Intriguing Hybrid

Hybrids, such as this, are very difficult to analyse but may be highly autonomous.

(Diagram of the hybrid: a Trustee/Auditor; peers P1, P2, P3; Bob; a Judge; aliases P1', P2', P3'.)

Review

We have identified two “pure” structural forms for autonomous systems: hierarchical and peering.
Hierarchical systems seem an appropriate place to start, if you’re trying to design a system for a commercial or governmental market.
We have identified some strengths and weaknesses in the pure forms.
Hybrids are less analysable, but may offer advantages.

Simplicity vs Complexity

“What would be the shape of an organisational theory applied to security?” [Anderson, 2nd edition, 2008]

Ross was unimpressed with an early version of my analytic framework. (To appear in Handbook of Computer and Information Security, Springer, 2010.) He thought I was oversimplifying.

But... I agree with him! Simple analyses of simplified systems (as done in this seminar) are not predictive of real-world security.


Why Use Simple Models?

A high-level model can identify areas of concern, and it can highlight points of difference.

My conclusion: autonomous systems must rely on complexity, rather than simplicity, for their security.

The security research community is generally distrustful of “security by obscurity”. Complexity is an enemy of security.


Can we live simply?

Chess, Palmer, and White in IBM Sys J (2003) take the complexity of modern computer systems as a given.
Their thesis: autonomic computer systems will, if well-engineered, actually increase security, because we can’t manage the complexity of secure systems administration in any other way.

Luhmann’s thesis: trust must increase with complexity, and complexity increases with modernisation.


Security Assessments

A system is considered “secure”, in a technical sense, only after it has been assessed.

A complex system cannot be analysed accurately. Its security can only be assessed after it is in operation.
Any operational assessment is limited by the false-negative rate of the fault detector.
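A small worked example of that limit (the numbers are invented): if the fault detector misses faults independently with false-negative rate f, then an operational assessment that counts d detected faults suggests roughly d / (1 - f) faults actually occurred.

```python
def estimated_true_faults(detected, false_negative_rate):
    """Correct an operational fault count for the detector's misses.
    Assumes each fault is detected independently with probability (1 - f)."""
    return detected / (1.0 - false_negative_rate)

# Illustrative numbers only: 12 faults detected, detector misses 25% of faults.
print(estimated_true_faults(detected=12, false_negative_rate=0.25))  # ~16 faults likely occurred
```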


The Future?

Predicting the future is dangerous...
But... I see no prospect of anyone ever developing a fully-autonomous (i.e. completely self-governing) system.
I’m not convinced that humans are completely self-governing!
I think “limited autonomy” is a much more appropriate goal than total autonomy.
I’d be very afraid of a fully-autonomous system!

Additional Slides


Specifying Requirements

Prohibition: forbid something from happening. Permission: allow something to happen.
Prohibitions and permissions are specifications for hierarchies.
Laws are generally prohibitive. Some are permissive.
Contracts are non-hierarchical: agreed between peers.
Obligations are promises to do something in the future. Exemptions are exceptions to an obligation.


A Taxonomic Theory of Req’ts

There are four types of static security requirements:
Obligations are forbidden inactions, e.g. “I.O.U. $1000.” (= you are forbidden from not paying me back).
Exemptions are allowed inactions, e.g. “You need not repay me if you have a tragic accident.”
Prohibitions are forbidden actions.
Permissions are allowed actions.

Two classification criteria:
1. Strictness = {forbidden, allowed},
2. Activity = {action, inaction}.
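A minimal sketch of this two-criterion taxonomy as code (in Python; the class names are invented): strictness crossed with activity yields the four static requirement types named above.

```python
from enum import Enum
from dataclasses import dataclass

class Strictness(Enum):
    FORBIDDEN = "forbidden"
    ALLOWED = "allowed"

class Activity(Enum):
    ACTION = "action"
    INACTION = "inaction"

@dataclass(frozen=True)
class Requirement:
    description: str
    strictness: Strictness
    activity: Activity

    @property
    def kind(self):
        # The four static requirement types from the two criteria.
        return {
            (Strictness.FORBIDDEN, Activity.INACTION): "obligation",
            (Strictness.ALLOWED,   Activity.INACTION): "exemption",
            (Strictness.FORBIDDEN, Activity.ACTION):   "prohibition",
            (Strictness.ALLOWED,   Activity.ACTION):   "permission",
        }[(self.strictness, self.activity)]

iou = Requirement("I.O.U. $1000", Strictness.FORBIDDEN, Activity.INACTION)
print(iou.kind)   # "obligation": a forbidden inaction
```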


A Taxonomic Theory of Controls

Prospective controls:
Architectural security (easy/hard),
Economic security (inexpensive/expensive).

Retrospective controls:
Legal security (legal/illegal),
Normative security (moral/immoral).

Two more classification criteria:
3. Temporality = {prospective, retrospective}.
4. Organisation = {hierarchy, peerage}.


Niklas Luhmann, on Trust

A prominent, and controversial, sociologist.
Thesis: modern systems are so complex that we must use them, or avoid using them, without analysing risks, benefits, and alternatives.
Trust is a reliance without an assessment. We cannot control any risk we haven’t assessed.
We trust any system which might harm us. (This is the operating definition of “trust” in security research.)
Distrust is an avoidance without an assessment.


Trust, Distrust, Security, ...

Our fifth dimension is cognitive: an assessment (security/functionality), or a non-assessment (trust/distrust).

Our sixth dimension is emotional: desires of the owner (functionality/trust), and fears of the owner (insecurity/distrust).

Layers = {static, dynamic, governance} (but I have taxonomised only the static layer).

A 7-dimensional space... no wonder there’s so little agreement on terminology!
