Stephen Schwab September 28, 2005


Transcript of Stephen Schwab September 28, 2005

Page 1: Stephen Schwab September 28, 2005

DETER/EMIST DDoS Defense Experimental Methodology

Impact of Traffic Generation Selection on Precision of Detection and Response Characterization

Stephen Schwab
September 28, 2005

Page 2: Stephen Schwab September 28, 2005

SPARTA Team Participants

• DETER
  – Steve Schwab, Ron Ostrenga, Brad Harris, David Balenson, Dan Sterne
• EMIST DDoS
  – Steve Schwab, Brett Wilson, Ron Ostrenga, Roshan Thomas, Alefiya Hussain, Brad Harris, Dan Sterne

Page 3: Stephen Schwab September 28, 2005

Overview

• EMIST DDoS Experimental Methodology

• FloodWatch and CloudShield Experiment Design

• Testbed Methodology Automation

• Process for Incorporating Hardware and Appliances

• Traffic Generation

• Visualization and Metrics

• Future Plans

Page 4: Stephen Schwab September 28, 2005

Objectives for EMIST

• Create a reusable library of test technology for conducting realistic, rigorous, reproducible, impartial tests
  – For assessing attack impact and defense effectiveness
  – Test data, test configurations, analysis software, and experiment automation tools
• Provide usage examples and methodological guidance
  – Recommendations for selecting (or developing) tests and interpreting results
  – Test cases and results, possibly including benchmarks
• Facilitate testing of prototypes during development and commercial products during evaluation

Page 5: Stephen Schwab September 28, 2005

Canonical DDoS Experiment

• DDoS experimentation involves a vast multidimensional space of experimental variables.

• Canonical experiment form intended to organize the experimental space and facilitate navigation through it.

• Canonical experiment consists of:
  1. Attack Mechanisms
  2. Background Traffic
  3. Network Topology
  4. Defense Mechanisms
  5. Measurements and Metrics
  6. Network Services Infrastructure
  7. Risk

Page 6: Stephen Schwab September 28, 2005

Typical DDoS Toolkit Architecture

[Diagram: Hacker Console → Master → Zombie → Victim]

Page 7: Stephen Schwab September 28, 2005

Overview

• EMIST DDoS Experimental Methodology

• FloodWatch and CloudShield Experiment Design

• Testbed Methodology Automation

• Process for Incorporating Hardware and Appliances

• Traffic Generation

• Visualization and Metrics

• Future Plans

Page 8: Stephen Schwab September 28, 2005

SPARTA DDoS Experiment
September 2005

Background Traffic: REPLAY | NTCG | HARPOON
HIGH FIDELITY TRAFFIC

Topology: BUILDING-BLOCKS | JUNIPER ROUTER CORE
REALISTIC CONNECTIVITY AND SCALE-DOWN

Attack Traffic: DETER-INTEGRATED ATTACK SCRIPTING
AUTOMATION OF VARIETY OF SCENARIOS UNDER STUDY

Instrumentation: PACKET AND HOST STATISTICS CAPTURE | SPECTRAL ANALYSIS | METRICS CALCULATION | INTEGRATED VISUALIZATION
TOOLBOX FOR RIGOROUS INVESTIGATION OF RESULTS

[Diagram: experiment topology with a core (AS-11357) carrying attack traffic and background traffic]

Page 9: Stephen Schwab September 28, 2005

SPARTA DDoS Experiment
September 2005

Defenses:

FloodWatch: STATISTICAL DDOS DETECTION
McAfee IntruShield: COMMERCIAL IDP APPLIANCE
CloudShield: NETWORK PROCESSOR APPLIANCE PLATFORM
Juniper IDP-200: COMMERCIAL IDP APPLIANCE

STUDY AND QUANTIFY HOW PLACEMENT WITHIN THE NETWORK IMPACTS EFFECTIVENESS OF DEFENSE

[Diagram label: DEFENSE DEPLOYMENT POINT]

Page 10: Stephen Schwab September 28, 2005

FloodWatch Experiment – Example Topology

• Each end network (leaf and root) is both a source and sink for background traffic
• Root and leaf networks transmit/receive replayed packets
• Packet source and destination addresses are randomly remapped to the experiment topology

[Diagram: core network with edge networks and a FloodWatch node; an example traffic flow from one edge network is highlighted]

Page 11: Stephen Schwab September 28, 2005

Schematic of Network Connectivity

[Diagram: hosts attach through 2nd-tier routers to core routers (Juniper M7i), with an appliance on the core link; components include the FloodWatch DDoS statistical detector, the CloudShield 2200 DDoS entropy detector, the target/attack victim, and hosts acting as attack source, replay traffic source, and TCP traffic source]

Page 12: Stephen Schwab September 28, 2005

Experiment Goal

• Validate the fidelity of attack and background traffic in reproducing characteristics of real DDoS experimental scenarios.
• Case #1: Attack Traffic
  – Recreate an attack captured in a real-world network
  – Compare spectral analysis of the real-world and testbed networks to assess fidelity of phenomena reproduction
• Case #2: Background Traffic (see the sketch below)
  – Use a TCP analytical model of throughput as a reference
  – Compare theoretical throughput with observed throughput for 100 Mb/s and 1 Gb/s networks
  – Compare the model-vs-testbed discrepancy between 100 Mb/s and 1 Gb/s to gauge the ability to preserve phenomena while scaling up traffic
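The reference for Case #2 is the simplified analytical TCP throughput formula introduced as eq (1) on a later slide. The following is a minimal sketch of that comparison, assuming per-flow measurements of observed packet rate, RTT, and loss rate; all numbers here are hypothetical.

```python
import math

def model_throughput(rtt_s: float, loss_rate: float) -> float:
    """Analytical TCP throughput reference, eq (1): sqrt(1.5) / (r * sqrt(p)),
    in packets per second for RTT r (seconds) and loss rate p."""
    return math.sqrt(1.5) / (rtt_s * math.sqrt(loss_rate))

def relative_discrepancy(observed_pps: float, rtt_s: float, loss_rate: float) -> float:
    """Gap between observed and model throughput, as a fraction of the model value."""
    predicted = model_throughput(rtt_s, loss_rate)
    return (observed_pps - predicted) / predicted

# Hypothetical per-flow measurements: (observed packets/s, RTT in s, loss rate).
runs = {
    "100Mb/s": [(560.0, 0.020, 0.01), (590.0, 0.020, 0.01)],
    "1Gb/s":   [(540.0, 0.020, 0.01), (610.0, 0.020, 0.01)],
}
for name, flows in runs.items():
    gaps = [relative_discrepancy(*f) for f in flows]
    print(name, "mean model-vs-testbed discrepancy:", round(sum(gaps) / len(gaps), 3))
```

Comparing the mean discrepancy at 100 Mb/s against the mean discrepancy at 1 Gb/s is the scale-up check described above.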

Page 13: Stephen Schwab September 28, 2005

Overview

• EMIST DDoS Experimental Methodology

• FloodWatch and CloudShield Experiment Design

• Testbed Methodology Automation

• Process for Incorporating Hardware and Appliances

• Traffic Generation

• Visualization and Metrics

• Future Plans

Page 14: Stephen Schwab September 28, 2005

Automation

• DDoS requirements generalize to broader security experiment requirements:
• Automate whenever possible for:
  – Repeatability, efficiency, ease-of-use
• Experiments described in Emulab ns2 format may include primitive events
  – Under the base Emulab system, this provides control for only a very limited number of operations

Page 15: Stephen Schwab September 28, 2005

Event System Integration

• Testbed Automation Agents are controlled by events (see the sketch after this list)
• Events can be created in several ways:
  – Specified in the NS file and played at runtime
  – Played from an external list of events generated by hand or script
  – Dynamically generated by a GUI or script
• Current Agents:
  – Injector: traffic replay agent
  – Flooder: attack agent
  – Collector: packet traces (TCPDUMP) control and per-node filter counters
  – Harpoon: traffic generation agent for the Harpoon TG
  – FloodWatch: controls the FloodWatch DDoS defense
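As a mental model of the agent/event relationship described above (not the actual DETER event system API), here is a minimal sketch of an agent dispatcher replaying a time-ordered event list; the agent names, event fields, and values are illustrative.

```python
import time

# Illustrative events, in the spirit of an external event list generated by
# hand or by script: (seconds after experiment start, agent name, action, args).
EVENTS = [
    (0.0,   "Injector",  "start", {"trace": "background.pcap"}),
    (30.0,  "Flooder",   "start", {"target": "10.1.1.2", "rate_pps": 60000}),
    (90.0,  "Flooder",   "stop",  {}),
    (120.0, "Collector", "flush", {}),
]

class Agent:
    """Toy agent: in the real system each agent (Injector, Flooder, Collector,
    Harpoon, FloodWatch) runs on a testbed node and reacts to events."""
    def __init__(self, name):
        self.name = name
    def handle(self, action, args):
        print(f"[{self.name}] {action} {args}")

def play(events, agents, time_scale=0.0):
    """Dispatch events to agents in time order; time_scale=0 replays instantly."""
    start = time.monotonic()
    for when, agent, action, args in sorted(events, key=lambda e: e[0]):
        if time_scale:
            time.sleep(max(0.0, start + when * time_scale - time.monotonic()))
        agents[agent].handle(action, args)

agents = {n: Agent(n) for n in ("Injector", "Flooder", "Collector")}
play(EVENTS, agents)
```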

Page 16: Stephen Schwab September 28, 2005

Automation Dashboard

Page 17: Stephen Schwab September 28, 2005

Overview

• EMIST DDoS Experimental Methodology

• FloodWatch and CloudShield Experiment Design

• Testbed Methodology Automation

• Process for Incorporating Hardware and Appliances

• Traffic Generation

• Visualization and Metrics

• Future Plans

Page 18: Stephen Schwab September 28, 2005

Appliances

• DETER requirement: support the experimental test and evaluation of appliances
  – Commercial products are often packaged as appliances, a critical future user segment
• EMIST requirement: high-speed appliances stress the testbed and the tools supporting our methodology. Requirements:
  – Provide the ability to seamlessly integrate appliances as nodes in testbed experiments
  – Stress all aspects of our methodology at line rate
    » Topology – Gigabit forwarding routers (Juniper)
    » Traffic – Greater aggregation
    » Data Capture – vanilla TCPDUMP inadequate

Page 19: Stephen Schwab September 28, 2005

DDoS Network Appliance Defense Evaluation Scenarios

• Introduce scenarios in which technology developers evaluate network appliances
  – CloudShield 2200
    » IXP2850 network processor + high-level application development environment
    » Evaluate prototype statistical DDoS defenses
  – McAfee IntruShield 2600
    » Commercial product, focus on DDoS capabilities
  – Juniper IDP-200
    » Commercial product with 8 Gigabit ports enabling study of placement and connectivity
• Push envelope of DDoS defense evaluation methodology to test Gigabit Ethernet rates

Page 20: Stephen Schwab September 28, 2005

DETER Hardware and Appliances Process

• Develop a systematic process for integrating hardware devices and appliances in the DETER testbed and within EMIST experiments:
  – Hardware connection
  – Control plane ns topology
  – Control plane manual configuration
  – Data plane manual configuration
  – Control and data plane semi-automatic configuration (scripting)
  – Control and data plane automation
    » Integrate generalized scripts behind the scenes into DETER and EMIST tools

Page 21: Stephen Schwab September 28, 2005

Juniper Routers

• DETER has 5 Juniper M7i routers, with 4 Gigabit ports per router
• The Juniper routers in DETER are almost first-class DETER experimental devices:
  – Can be allocated into an experiment by Assign
  – Can be assigned IP addresses within the ns topology
• Assign cannot YET configure the router to use the IP addresses that were allocated
  – Must manually map the MAC and IP addresses into a router configuration
• Plan to use the Juniper-supported XML API to automatically configure the router (an illustrative sketch follows below)
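One plausible stepping stone toward that automation is generating the interface stanzas of a Junos-style XML configuration directly from the Assign-allocated addresses. This is only an illustrative sketch, not part of the current tooling: the element names approximate Junos configuration XML, and the interface names and addresses below are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping produced from the experiment's ns topology:
# physical router interface -> IP address allocated by Assign.
ASSIGNED = {
    "ge-0/0/0": "10.1.1.1/24",
    "ge-0/0/1": "10.1.2.1/24",
}

def interface_config(assigned: dict) -> str:
    """Build the <interfaces> portion of a Junos-style XML configuration."""
    cfg = ET.Element("configuration")
    interfaces = ET.SubElement(cfg, "interfaces")
    for ifname, addr in assigned.items():
        iface = ET.SubElement(interfaces, "interface")
        ET.SubElement(iface, "name").text = ifname
        unit = ET.SubElement(iface, "unit")
        ET.SubElement(unit, "name").text = "0"
        inet = ET.SubElement(ET.SubElement(unit, "family"), "inet")
        ET.SubElement(ET.SubElement(inet, "address"), "name").text = addr
    return ET.tostring(cfg, encoding="unicode")

print(interface_config(ASSIGNED))
```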

Page 22: Stephen Schwab September 28, 2005

CloudShield Appliance

• A CloudShield appliance with 4 Gigabit interfaces has been added to DETER as an experimental device.
• Can be allocated into an experiment by Assign
• Must be configured manually
• Mapping of interfaces into an experiment is difficult since there are no exposed MAC or IP addresses
• Usage is complicated by the transparent bridging function that causes the DETER switches to go into layer 2 loops
• Spanning Tree Protocol (STP) is disabled on DETER

Page 23: Stephen Schwab September 28, 2005

CloudShield CA2200

• 2 Intel IXP2800 network processors running CPOS
  – 16 microengines each
  – 1 StrongARM processor each

• Dual Pentium-III management processor running Red Hat Linux

• 4 GigE or 4 Copper 10/100/1000 network interfaces

Page 24: Stephen Schwab September 28, 2005

Entropy Based Detection

• Runs on the CloudShield CA2200 network appliance
• Data plane
  – Detectors: source IP, destination IP, packet size, source port, destination port
  – Create a histogram for each detector over the packets in a fixed window size
  – Calculate entropy for each detector
  – Optionally filter packets
• Control plane
  – Every N seconds, read entropy values
  – Compare with high/low thresholds for normal operation
  – When a threshold is crossed, create a packet filter using the max item from each detector's histogram (see the sketch below)
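A minimal host-side sketch of the windowed histogram/entropy idea described above; the field names, thresholds, and example traffic are illustrative, and the real implementation splits this work between the CloudShield data plane and control plane.

```python
import math
from collections import Counter

def entropy(counts: Counter) -> float:
    """Shannon entropy (bits) of a histogram of field values within one window."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def analyze_window(packets, low=1.0, high=6.0):
    """For each detector field, build a histogram over the window, compute its
    entropy, and when a threshold is crossed propose a filter on the most
    frequent value (the 'max item') for that field."""
    fields = ("src_ip", "dst_ip", "size", "src_port", "dst_port")
    filters = {}
    for f in fields:
        hist = Counter(p[f] for p in packets)
        h = entropy(hist)
        if h < low or h > high:                      # thresholds are illustrative
            filters[f] = hist.most_common(1)[0][0]   # max item from this histogram
    return filters

# Hypothetical window: a flood toward one victim collapses dst_ip entropy.
window = [{"src_ip": f"10.0.0.{i % 50}", "dst_ip": "10.1.1.2",
           "size": 40, "src_port": 1024 + i, "dst_port": 80} for i in range(1000)]
print(analyze_window(window))
```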

Page 25: Stephen Schwab September 28, 2005

Entropy Detection Issues

• Unable to run completely in the data plane
  – Packet processing runs in parallel
  – The entropy algorithm requires synchronization to avoid race conditions
  – CPOS (CloudShield Packet OS) provides no support for "manager threads" or mutexes

Page 26: Stephen Schwab September 28, 2005

Chi-Square Algorithm

• The top bin contains the item with the highest count
• The next bin contains the four next-highest counts, and so on
• Items are swapped to different bins as their relative counts change
• The chi-square statistic captures the "shape" of the heights of the buckets (see the sketch below)

[Figure: exponentially sized bins of 1, 4, 8, and 16 items, plus an "other" bin]
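A rough offline sketch of the bin "shape" statistic; the baseline proportions and example traffic are hypothetical. On the appliance, the bins must additionally be rebalanced incrementally as each packet arrives, which leads to the synchronization problem described on the next slide.

```python
from collections import Counter

BIN_SIZES = (1, 4, 8, 16)  # top item, next 4, next 8, next 16; the rest go to "other"

def bin_totals(counts: Counter) -> list[int]:
    """Group per-item counts into rank-ordered bins of size 1, 4, 8, 16, and
    'other', returning the total count observed in each bin."""
    ranked = [c for _, c in counts.most_common()]
    totals, start = [], 0
    for size in BIN_SIZES:
        totals.append(sum(ranked[start:start + size]))
        start += size
    totals.append(sum(ranked[start:]))  # "other" bin
    return totals

def chi_square(observed: list[int], expected: list[float]) -> float:
    """Chi-square distance between the observed bin 'shape' and a baseline shape."""
    n = sum(observed)
    return sum((o - n * e) ** 2 / (n * e) for o, e in zip(observed, expected) if e > 0)

# Hypothetical baseline bin proportions learned from attack-free traffic.
baseline = [0.05, 0.15, 0.20, 0.25, 0.35]
window = Counter({"10.1.1.2": 800, **{f"10.0.0.{i}": 5 for i in range(40)}})
print(bin_totals(window), chi_square(bin_totals(window), baseline))
```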

Page 27: Stephen Schwab September 28, 2005

Problems with Chi-Square

• When a packet is received on the data plane, the item counter is incremented

• If the new count causes the item to be moved to another bin (because it has a higher count than one of the other items), the bins must be rebalanced

• Since packets are received in parallel, it is necessary to lock the data structure so that only one thread can modify it during rebalancing

• Without rebalancing, the algorithm can't do detection
• Without synchronization primitives, the data plane can only collect data, and detection must be done at the control plane

Page 28: Stephen Schwab September 28, 2005

Experiment process using Juniper routers and CloudShield appliance

1. Allocated via Assign into an experiment that put the CloudShield sensor on a GB link between two Juniper routers
2. Router interfaces had to be configured manually
3. Routes had to be configured manually
4. Configuration scripts were created to help speed up the experiment setup
5. Routes needed to be verified after setup completed
6. There were issues in determining which physical CloudShield interfaces were actually being used

Page 29: Stephen Schwab September 28, 2005

IntruShield Sensors

• Two McAfee IntruShield sensors, each with 2 unidirectional Gigabit interfaces and 6 unidirectional 100 Mb/s IPS interfaces, have been added to DETER as experimental devices
• Can be allocated into an experiment by Assign
• Usage is complicated by the transparent bridging function that causes the DETER switches to go into layer 2 loops (STP issue revisited)
• May require physical attachment to a permanent anchor PC node
• Requires a dedicated Windows 2000 Manager node

Page 30: Stephen Schwab September 28, 2005

Overview

• EMIST DDoS Experimental Methodology

• FloodWatch and CloudShield Experiment Design

• Testbed Methodology Automation

• Process for Incorporating Hardware and Appliances

• Traffic Generation

• Visualization and Metrics

• Future Plans

Page 31: Stephen Schwab September 28, 2005

Background Traffic Generation

• DDoS-specific requirements:
  – Detection and mitigation of attacks need to be evaluated against a statistically valid mix of background traffic
  – Interactive TCP traffic exhibits very different characteristics due to congestion control and recovery
• Two traffic generation systems under active investigation and development:
  – NTCG (UC Davis)
  – Harpoon (U. Wisconsin)

Page 32: Stephen Schwab September 28, 2005

NTCG: Network Traffic Compiler Generator

• Interactive traffic: the paradigm is to create a controlled mix of traffic produced by an array of traffic generators (TG, etc.)

[Diagram: a packet-trace analyzer produces traffic specifications; a spec compiler combines the traffic specifications with a topology description to produce commands for TG and other traffic generators]

Designed and developed by Allen Ting and colleagues/U.C. Davis

Page 33: Stephen Schwab September 28, 2005

Harpoon Traffic Generator

• Provides extreme configurability
  – Specify precisely ordered temporal distributions of connection duration, or use NetFlow traces
  – Accepts multiple subnets as destinations, allowing a single Harpoon client to generate traffic across all networks
  – Client and server accept either 0.0.0.0 (default routes) or the IP of a specific interface
  – Can connect to and listen on all TCP and UDP ports
• Multithreaded implementation
  – Allows one agent to handle fifty or more connections (governed by the OS threads-per-process limit and CPU)
  – Each thread can connect to any destination, rather than using a single-source / single-sink paradigm
• Plug-in architecture allows for extensibility
• Well documented, clear and precise

Developed by Joel Sommers and Paul Barford, U. Wisconsin

Page 34: Stephen Schwab September 28, 2005

IP Address Diversity

• Background traffic should involve a wide variety of IP addresses
  – There are usually not enough nodes to obtain enough IP diversity
• Solution: Network Address Translation (NAT)
  – Using Linux iptables, one machine can serve large blocks of IP addresses
• Real machine 10.1.1.2 has a single IP address
  – Use iptables to add a PREROUTING entry to the NAT table
  – iptables -t nat -A PREROUTING -d 3.0.0.0/8 -j DNAT --to 10.1.1.2
• Packets destined for any address in 3.0.0.0/8 are translated into packets destined for 10.1.1.2
• The NAT is invisible to 10.1.1.2, allowing a full TCP stack to reply, including proper TCP backoff, etc.

Page 35: Stephen Schwab September 28, 2005

Demo Video Clip

• An Attack run against the FloodWatch DDoS defense, as performed and recorded by Brett Wilson on DETER.

Page 36: Stephen Schwab September 28, 2005

Requirements for NAT (Routed)

• Traditional network with routes in and out
  – Start NATs on each of the nodes
  – iptables -t nat -A PREROUTING -d 2.0.0.0/8 -j DNAT --to 10.1.2.3
• Configure routers so that there are valid routes for the NAT networks
• At the final hop, include a route to the NAT network with the NATting node (10.1.2.3) as the next hop
  – This prevents the router from ARPing for a NAT address, as it believes 10.1.2.3 is another router
  – On 10.1.2.3, iptables catches the packet first (PREROUTING), translates it, and the connection completes

Page 37: Stephen Schwab September 28, 2005

Requirements for NAT (LAN)

• Isolated LAN configuration
  – More complex configuration
• This scenario has no routers; all nodes are on a single broadcast domain
• Configure each node so that the physical interface has a /32 netmask
  – ifconfig eth0 netmask 255.255.255.255
  – eth0 will no longer answer ARP requests itself
• Add an explicit route for the experiment network via eth0
  – Use the broadest net possible
  – route add -net 10.0.0.0 netmask 255.0.0.0 dev eth0
• Add routes for all the NAT subnets on the other machines
  – route add -net 3.0.0.0 netmask 255.0.0.0 dev eth0
• Proxy ARP for all subnets this node is NATting
  – ./tarpd eth0 2.0.0.0 255.0.0.0
• Add the NAT via iptables
  – iptables -t nat -A PREROUTING -d 2.0.0.0/8 -j DNAT --to 10.1.2.3

Page 38: Stephen Schwab September 28, 2005

Overview

• EMIST DDoS Experimental Methodology

• FloodWatch and CloudShield Experiment Design

• Testbed Methodology Automation

• Process for Incorporating Hardware and Appliances

• Traffic Generation

• Visualization and Metrics

• Future Plans

Page 39: Stephen Schwab September 28, 2005

Data Capture and Analysis

• DDoS-specific requirements:
• Automate collection of both packet traces and host-based statistics
  – to calculate metrics
  – to analyze traffic behavior
• Support visualization and interaction between the experimenter and the potentially large, unwieldy data harvested from the testbed

Page 40: Stephen Schwab September 28, 2005

DDoS: Instrumentation and Visualization

• Trace files produced, collected, and extracted from DETER for post-mortem analysis
• ESVT analysis and experimental-data graphical browser:
  – Navigate within the experiment topology
  – Drill down on link or node traffic/metrics: visualize and compare one or more statistics between different links or time intervals

• Key Milestone: Integrated support for visualization and analysis of DDoS experiments across multiple links, topologies, attack patterns, and defense configurations.

• ESVT integration in collaboration with Jason Hart / Penn State

Page 41: Stephen Schwab September 28, 2005

ESVT Experiment Topology

Page 42: Stephen Schwab September 28, 2005

ESVT Visualization

Page 43: Stephen Schwab September 28, 2005

ESVT Visualization: Connectivity

Page 44: Stephen Schwab September 28, 2005

Spectral Analysis Rationale

• Can we create high-fidelity testbed experiments?
  – Gross parameters such as rate, packet type, etc.
  – Small-time-scale behavior captured by spectral analysis

• High fidelity experiments help capture accurate interaction with cross traffic.

Page 45: Stephen Schwab September 28, 2005

Spectral Analysis

• Periodic behaviors are encoded into the attack stream
  – Host, network, aggregation effects, ...
• Leverage experience from signal processing
  – FFTs, wavelets, detection theory, etc.

Example packet trace (used to characterize attack behavior):
08:14:33.495046 2005host1.starwoodbroadband.com.domain > 2005host74.starwoodbroadband.com.32894: 40672 2/1/1 CNAME[|domain]
08:14:33.495924 2005host74.starwoodbroadband.com.33350 > camembear.fogcreek.net.http: S 1688588990:1688588990(0) win 5840 <mss 1460,sackOK,timestamp 27125396 0,nop,wscale 0> (DF)
08:14:33.496502 2005host74.starwoodbroadband.com.32894 > 2005host1.starwoodbroadband.com.domain: 19420+ PTR? 219.120.243.64.in-addr.arpa. (45) (DF)
08:14:33.496956 london-bar3.ja.net > 12.160.3.255: icmp: echo request
08:14:33.497347 2005host74.starwoodbroadband.com > london-bar3.ja.net: icmp: echo reply

Page 46: Stephen Schwab September 28, 2005

Fourier Analysis

[Diagram: packet trace → time series → FFT → frequency domain (same trace excerpt as the previous slide)]

• FFT summarizes the frequency content (see the sketch below)
• Used to characterize spatial effects
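A minimal numpy sketch of the trace-to-spectrum step: bin packet arrivals into a fixed-interval time series and take its power spectrum. The bin width, attack rate, and jitter below are illustrative.

```python
import numpy as np

def packet_rate_spectrum(arrival_times_s, bin_s=0.0002):
    """Bin packet arrival timestamps into a packets-per-bin time series,
    then compute its power spectrum with an FFT."""
    t = np.asarray(arrival_times_s)
    t = t - t.min()
    edges = np.arange(0.0, t.max() + bin_s, bin_s)
    counts, _ = np.histogram(t, bins=edges)
    counts = counts - counts.mean()                  # drop the DC component
    power = np.abs(np.fft.rfft(counts)) ** 2
    freqs = np.fft.rfftfreq(len(counts), d=bin_s)
    return freqs, power

# Hypothetical single-source attack at ~1100 packets/s (as in Experiment 1).
rng = np.random.default_rng(0)
arrivals = np.cumsum(rng.normal(1.0 / 1100, 5e-5, size=20000))
freqs, power = packet_rate_spectrum(arrivals)
print("dominant frequency (Hz):", round(float(freqs[np.argmax(power)]), 1))
```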

Page 47: Stephen Schwab September 28, 2005

Wavelet Analysis

[Diagram: long packet trace → time series → wavelet transform → frequency-and-time domain (same trace excerpt as above)]

• Wavelets summarize both time and frequency information (see the sketch below)
• Used to characterize temporal effects
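In the same spirit, a sketch of a wavelet-based summary using the PyWavelets package (assumed available); the series and burst period below are illustrative.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

def wavelet_scale_energy(counts, wavelet="db4", level=6):
    """Energy of the detail coefficients at each wavelet scale of a
    packets-per-bin time series; each scale corresponds to a time scale."""
    coeffs = pywt.wavedec(np.asarray(counts, dtype=float), wavelet, level=level)
    return [float(np.sum(c ** 2)) for c in coeffs[1:]]   # skip the coarse approximation

# Hypothetical series: Poisson background with a burst every 500 bins.
rng = np.random.default_rng(1)
series = rng.poisson(3, size=20000).astype(float)
series[::500] += 50
print([round(e, 1) for e in wavelet_scale_energy(series)])
```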

Page 48: Stephen Schwab September 28, 2005

Case #1: Analysis technique

• Validate fidelity of experiments by comparing spectral behavior of real-world traces to testbed experiments.

Page 49: Stephen Schwab September 28, 2005

Experiment 1: Lander Attack

TCP NULL attack from a single source sending 40B packets at 1100 packets/s

[Figures: packet rate, bit rate, and number of flows over time; spectrum of the real-world trace vs. spectrum of the testbed run]

Page 50: Stephen Schwab September 28, 2005

Experiment 1: Lander Attack (continued)

TCP NULL attack from a single source sending 40B packets at 1100 packets/s

[Figure: testbed spectrum]

Page 51: Stephen Schwab September 28, 2005

Experiment 2: Lander Attack

Invalid IP-proto 255 attack from three sources sending 40B packets at 60 Kpackets/s

[Figures: packet rate, bit rate, and number of flows over time; spectrum of the real-world trace vs. spectrum of the testbed run]

Page 52: Stephen Schwab September 28, 2005

Experiment 2: Lander Attack (continued)

Invalid IP-proto 255 attack from three sources sending 40B packets at 60 Kpackets/s

[Figure: trace spectrum]

Page 53: Stephen Schwab September 28, 2005

Case #2: Background Traffic Analysis

• CloudShield entropy detection behavior
• CloudShield requires a filter to be installed from the control plane with minimum and maximum thresholds
  – Packets that push the entropy above [below] the maximum [minimum] threshold are dropped
• What if the entropy standard deviation is 23%?
  – A non-stationary statistic makes filtering impossible

Page 54: Stephen Schwab September 28, 2005

Entropy During Attack Run

[Chart: five data series of entropy values plotted over successive measurement intervals during the attack run]

Page 55: Stephen Schwab September 28, 2005

Overview

• EMIST DDoS Experimental Methodology

• FloodWatch and CloudShield Experiment Design

• Testbed Methodology Automation

• Process for Incorporating Hardware and Appliances

• Traffic Generation

• Visualization and Metrics

• Future Plans

Page 56: Stephen Schwab September 28, 2005

Introduction to our DDoS metric framework

• Distinguish between extrinsic and intrinsic metrics
  – Extrinsic: measures that can be computed and observed by external parties
  – Intrinsic: measures that can be computed only by the object being measured, by analyzing its internal algorithms and data structures
• Analyze metrics at different levels of abstraction
  – packet, flow, aggregate, service/application layers
• Metrics from different vantage points
  – client-side, server-side (end point), link-level, end-to-end
• Focus on metrics for
  – Traffic and service characterization
  – Measuring DDoS impact
  – Measuring DDoS defense effectiveness

Page 57: Stephen Schwab September 28, 2005

Examples of Metrics for Characterizing Traffic (base traffic metrics)

(Metrics are organized by level of abstraction and by vantage point: client-observed, intermediary-observed / link-level, and server-observed. E = extrinsic, I = intrinsic.)

• Application-level
  – streaming video: mean opinion score (MOS) (E)
  – VoIP: round-trip delay (E)
  – VoIP: percentage of packets discarded by the jitter buffer (I)
  – VoIP: mean length of bursts (I)
  – number of flows per application
  – client-request-rate, client-service-rate
• Aggregate-level
  – per-aggregate-packet-rate (E)
  – per-aggregate-arrival-rate (E)
• Flow-level
  – server-connection-completion-time (E), server-connection-completion-rate (E)
  – rate of failed connections (E)
  – per-flow loss rate, per-flow-packet-rate (E)
  – per-client-connection-request-rate (E), per-client goodput (E)
• Packet-level
  – server-response-rate (E), server-response-time (E)
  – goodput (E), ratio of attack traffic to goodput
  – per-client-packet-rate (E), packet-drop-rate (I), per-packet processing overhead (I)

Page 58: Stephen Schwab September 28, 2005

The Server Service and Link Service Indices: Higher-Level Posture Indicators

• Server Service Index (SSI)
  – a function of common service attributes such as:
    » average server response time
    » rate of failed connections
    » max number of concurrent clients serviced
    » server throughput
• Link Service Index (LSI)
  – a function of common link attributes such as:
    » goodput
    » link utilization
    » loss rate
• The index value is a composite weighted measure of deviations from desired thresholds for the constituent attributes
• Normalized to lie within [0, 1]
  – An SSI/LSI value of 0.5 means the server/link is meeting prespecified objectives for its attributes; deviations from 0.5 indicate a deficiency (if < 0.5) or a surplus (if > 0.5)
• Changes in the SSI/LSI provide an early warning that the server or link is degrading or improving

[Figure: gauge from 0 to 1]

Page 59: Stephen Schwab September 28, 2005

Formulating the Server/Link Index

• For each metric m with weight w_m, define a deviation function delta(x, t_m) that measures how the value x of metric m deviates from its threshold t_m.

• The deviation function should have the following properties:
  – it should have no units
  – it should be zero when x = t_m
  – it should be in the interval [-1, 0) if the value of the metric has not achieved the threshold and in the interval (0, 1] if it has exceeded the threshold
  – it should reflect the effort needed to take the metric m from the value x to t_m

• Sample linear deviation function:

    delta(x, t_m) = sign(x - t_m) * |x - t_m| / max(x, t_m)

  where sign(y) is defined as:

    Value of y         Threshold is   sign(y)
    Positive or zero   Upper bound    -1
    Negative           Upper bound     1
    Positive or zero   Lower bound     1
    Negative           Lower bound    -1

• The SSI or LSI may then be defined as the weighted combination of the per-metric deviations, mapped into [0, 1] so that zero deviation gives 0.5 (a numerical sketch follows below):

    I = (1 + SUM_m w_m * delta(x_m, t_m)) / 2
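A small sketch of the index computation as reconstructed above; the metric names, weights, thresholds, and values are hypothetical.

```python
def deviation(x, threshold, upper_bound=True):
    """Sample linear deviation function from the slide: unitless, zero at the
    threshold, negative when the metric misses its objective and positive
    when it does better than the objective."""
    y = x - threshold
    if upper_bound:          # e.g. loss rate: being above the threshold is bad
        s = -1 if y >= 0 else 1
    else:                    # e.g. goodput: being below the threshold is bad
        s = 1 if y >= 0 else -1
    return s * abs(y) / max(x, threshold)

def service_index(metrics):
    """Composite SSI/LSI-style index in [0, 1]; 0.5 means every metric is
    exactly at its prespecified objective.  `metrics` maps a name to
    (value, weight, threshold, is_upper_bound); weights should sum to 1."""
    total = sum(w * deviation(x, t, ub) for x, w, t, ub in metrics.values())
    return (1 + total) / 2

# Hypothetical LSI built from three link attributes.
link_metrics = {
    "goodput_mbps": (80.0, 0.5, 90.0, False),  # below objective -> deficiency
    "utilization":  (0.70, 0.2, 0.80, True),
    "loss_rate":    (0.02, 0.3, 0.01, True),   # above objective -> deficiency
}
print("LSI:", round(service_index(link_metrics), 3))
```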

Page 60: Stephen Schwab September 28, 2005

Metrics for Characterizing Traffic in Our Topology

[Diagram: the experiment topology annotated with metrics at each vantage point]
• Link (client to server): goodput; ratio of attack to background traffic; average link utilization; aggregate attack rate; Link Service Index (LSI) gauge (link posture)
• End-to-end (TCP) flow: average connection completion time; rate of failed connections; average throughput per flow; average loss per flow
• Victim/Server: average server response time; average server-side application throughput; Server Service Index (SSI) gauge (overall service posture)

Page 61: Stephen Schwab September 28, 2005

Metrics for Measuring DDoS Impact

[Diagram: the same topology, annotated with how each metric shifts under attack]
• Link (client to server): decrease in goodput; increased aggregate attack rate; decrease in the Link Service Index (LSI)
• End-to-end (TCP) flow: increased connection completion time; increased rate of failed connections; increased loss per flow
• Victim/Server: degraded server response time; decreased server-side application throughput; decrease in the Server Service Index (SSI) (overall service posture)

Page 62: Stephen Schwab September 28, 2005

Metrics for Measuring DDoS Defense Effectiveness

• % improvement in base and composite metrics
• Time taken to achieve a % improvement
• Metrics to characterize the breaking points for a defense
  – e.g., a filter can filter bad traffic up to 200 Mbps
• A high-level effectiveness score (index) for a defense mechanism

Page 63: Stephen Schwab September 28, 2005

Metrics for Measuring Defense Effectiveness

[Diagram: the same topology, annotated with how each metric should shift when a defense is effective]
• Link (client to server): increase in goodput; decreased aggregate attack rate; increase in the Link Service Index (LSI)
• End-to-end (TCP) flow: decreased connection completion time; decreased rate of failed connections; decreased loss per flow
• Victim/Server: improved server response time; increased server-side application throughput; increase in the Server Service Index (SSI) (overall service posture)

Page 64: Stephen Schwab September 28, 2005

Measuring Defense Effectiveness Against TCP Flows Under Attack

• Goal:
  – Measure the effectiveness of FloodWatch defense mechanisms in isolating TCP flows from the effects of flooding attacks
  – Formulate a metric tied to the spread of the throughput of TCP flows when under attack and protected by FloodWatch, relative to an ideal (theoretical) notion of what the throughput should be if the flows were unaffected by the attack
    » Ideal throughput:
        f(r, p) = sqrt(1.5) / (r * sqrt(p))          eq (1)
      where r is the round-trip time and p is the loss rate

Page 65: Stephen Schwab September 28, 2005

Towards a Defense Effectiveness Measure

• Induce loss rates of 1, 2, 3, and 4% and subject a sample of n flows to each loss rate p_k, k = 1, 2, 3, 4
• Calculate the standard deviation sd_k of the observed throughputs of the flows in each sample with respect to the theoretical throughput f_k given by eq (1)
• Derive an effectiveness score for each loss rate as a function of sd_k (see the sketch below):
    e_k = function(sd_k)
• Derive an effectiveness score for the defense as the mean of all e_k for k = 1, 2, 3, 4:
    E = (1/4) * SUM_k e_k
• The lower the E value, the more effective the defense
• E can also be normalized as an index to lie within [0, 1]
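A compact sketch of this procedure. The per-loss-rate score e_k is taken here to be the spread relative to the ideal value, which is just one simple choice of "function(sd_k)"; the RTT and the observed throughputs are hypothetical.

```python
import math
import statistics

def ideal_throughput(rtt_s: float, loss_rate: float) -> float:
    """Eq (1): ideal per-flow TCP throughput (packets/s) at RTT r and loss rate p."""
    return math.sqrt(1.5) / (rtt_s * math.sqrt(loss_rate))

def effectiveness_score(samples, rtt_s=0.02):
    """`samples` maps each induced loss rate p_k to the list of observed
    per-flow throughputs measured under attack with the defense enabled.
    sd_k is the spread of the observations around the ideal value f_k;
    e_k = sd_k / f_k, and the defense score E is the mean of the e_k
    (lower is better)."""
    e = []
    for p_k, observed in samples.items():
        f_k = ideal_throughput(rtt_s, p_k)
        sd_k = math.sqrt(statistics.fmean((x - f_k) ** 2 for x in observed))
        e.append(sd_k / f_k)
    return statistics.fmean(e)

# Hypothetical measurements (packets/s) for induced loss rates of 1-4%.
samples = {0.01: [580, 610, 595], 0.02: [400, 380, 415],
           0.03: [310, 290, 330], 0.04: [240, 260, 255]}
print("E =", round(effectiveness_score(samples), 3))
```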

Page 66: Stephen Schwab September 28, 2005

Potential Future Directions in DDoS Metrics

• Mapping of attacks to extrinsic metrics
  – Develop a classification of attacks and analyze the externally perceived effect of each attack category (in terms of the extrinsic metrics that will be perturbed)
• Mapping of extrinsic to intrinsic metrics
  – The goal is to understand why a specific type of defense technology is limited, or in some cases completely fails, in mitigating certain attack classes
• Towards the development of a DDoS Outage Index
  – Metrics to measure DDoS at Internet scale
  – The long-term goal is the equivalent of the outage index used by telecommunications and cable TV providers to report outages to the FCC
  – The index would be a function of:
    » Extent of the attack (how global, how many tier-X ISPs affected, etc.)
    » Spread rate
    » Duration of the attack
    » Impact: how many critical services were affected?
    » Recovery time, etc.

Page 67: Stephen Schwab September 28, 2005

Future Experiments and Experimental Methodology

• Communicating DDoS defenses
  – Stress the experimental methodology by requiring network communication during a DDoS attack
• Juniper routers
  – Support for routing protocols (BGP, OSPF) within the experiment automation framework
  – Leverage Chris Alfeld's WAIL router building blocks
• CloudShield appliance
  – As a programmable instrument
  – As an emulation of a router line card, e.g. RED queue variants
  – As a development prototyping environment for various defenses