

Content-Aware Device Benchmarking Methodology

(draft-hamilton-bmwg-ca-bench-meth-04)

BMWG Meeting, Maastricht, July 2010

Mike Hamilton <mhamilton@breakingpoint.com>

BreakingPoint Systems


Agenda

• Why draft-hamilton?
• Charter objections/responses
• Goals reset
• Explicit goals of this draft
• Explicit non-goals of this draft


Why draft-hamilton?

• RFC 2544 doesn’t specifically apply to some modern devices

• Test vendors are already doing this in a one-off fashion: BreakingPoint, Spirent, Ixia, Agilent, etc.


Charter Objections

• “the scope of the BMWG is limited to technology characterization using simulated stimuli in a laboratory environment.”

• “Said differently, the BMWG does not attempt to produce benchmarks for live, operational networks.”

• This does not restrict BMWG from creating benchmark tests that are representative of VERY SPECIFIC live, operational networks


Goals Reset

• Create a series of benchmark tests that MOST accurately predict device performance under realistic conditions FOR A SPECIFIC SIMULATED NETWORK

• RFC 2544 Quotes: Page 11, Section 18, “Multiple Frame Sizes” (sampling sketch below)
  • “The distribution MAY approximate the conditions on the network in which the DUT would be used.”

• “The authors do not have any idea how the results of such a test would be interpreted other than to directly compare multiple DUTs in some very specific simulated network”
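
The “MAY approximate” language is the hook for content-aware benchmarking: generate test traffic whose frame-size distribution matches the specific simulated network being modeled. A minimal Python sketch of that idea, assuming a hypothetical frame-size mix (the sizes, weights, and function name are illustrative, not from the draft):

```python
import random

# Hypothetical frame-size mix (size in bytes -> share of frames) meant to
# approximate one specific simulated network; the values are illustrative.
FRAME_MIX = {64: 0.55, 594: 0.30, 1518: 0.15}

def sample_frame_sizes(count, seed=42):
    """Draw a repeatable sequence of frame sizes from the mix.

    A fixed PRNG seed keeps the "random" input identical across runs,
    matching the draft's explicit goals: repeatable results and direct
    comparison of multiple DUTs against the same simulated network.
    """
    rng = random.Random(seed)
    sizes = list(FRAME_MIX)
    weights = [FRAME_MIX[s] for s in sizes]
    return rng.choices(sizes, weights=weights, k=count)

print(sample_frame_sizes(10))  # same sequence on every run
```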


Explicit Goals

• Repeatable Results

• Compare Multiple DUTs


Explicit Non-Goals

• Not a replacement for RFC 2544

• Total Input Repeatability (discussion to follow)


Test Run Setup

• Methodologies Run (line-rate comparison sketched below)
  • RFC 2544 Throughput (64B + 1518B)
  • RFC 3511 Throughput (1 kB + 512 kB)
  • IMIX Throughput
    • CAIDA
    • Spirent
    • Wikipedia
    • Agilent-simple
  • draft-hamilton-03 (random)
  • draft-hamilton-04 (shell)
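
For context, the IMIX entries above are fixed frame-size mixes, and the mix determines the theoretical frame rate a link can carry. A minimal sketch, assuming the commonly cited “simple” 7:4:1 IMIX over 64/594/1518-byte frames and a 1 Gb/s line rate (the ratios and helper names are illustrative, not taken from the draft):

```python
# Ethernet per-frame wire overhead: 8-byte preamble + 12-byte inter-frame
# gap; the FCS is already counted inside the quoted frame sizes.
ETH_OVERHEAD_BYTES = 20

def max_frames_per_sec(frame_size, line_rate_bps=1_000_000_000):
    """Theoretical maximum frame rate for one frame size on one link."""
    return line_rate_bps / ((frame_size + ETH_OVERHEAD_BYTES) * 8)

def imix_avg_frame_size(mix):
    """Weighted average frame size for a {size_bytes: ratio} mix."""
    total = sum(mix.values())
    return sum(size * ratio for size, ratio in mix.items()) / total

SIMPLE_IMIX = {64: 7, 594: 4, 1518: 1}  # illustrative "simple" IMIX ratios

avg = imix_avg_frame_size(SIMPLE_IMIX)
print(f"average frame size: {avg:.1f} B")          # ~361.8 B
print(f"64B frames:   {max_frames_per_sec(64):,.0f} fps")
print(f"1518B frames: {max_frames_per_sec(1518):,.0f} fps")
print(f"simple IMIX:  {max_frames_per_sec(avg):,.0f} fps")
```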


Test Results


Fuzzing Results


Draft-04 Highlights and Reasons

• “Shell” Methodology
  • More reproducible

• Backoff on ‘realistic’
  • Compromise

• Dropped ‘security’
  • Difficult to scope and maintain currency

• Maintain ‘fuzzing’ aspect
  • Random but repeatable (seeded-PRNG sketch below)
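
“Random but repeatable” falls out of driving every mutation from a seeded PRNG: the corruption is arbitrary, but the same seed reproduces it exactly. A minimal Python sketch; the function name, parameters, and bit-flip strategy are illustrative assumptions, not the draft’s mechanism:

```python
import random

def fuzz_payload(payload: bytes, seed: int, flip_ratio: float = 0.01) -> bytes:
    """Corrupt a payload randomly but repeatably (illustrative helper).

    Seeding the PRNG means every run, and every lab using the same seed,
    mutates exactly the same bytes, so a fuzzing benchmark stays
    comparable across DUTs even though its input is random.
    """
    rng = random.Random(seed)
    data = bytearray(payload)
    n_flips = max(1, int(len(data) * flip_ratio))
    for _ in range(n_flips):
        i = rng.randrange(len(data))
        data[i] ^= 1 << rng.randrange(8)  # flip one random bit
    return bytes(data)

# Same seed -> identical "random" mutations on every run.
sample = b"GET / HTTP/1.1\r\nHost: dut.example\r\n\r\n" * 4
assert fuzz_payload(sample, seed=7) == fuzz_payload(sample, seed=7)
```

Publishing the seed alongside the results is what turns an otherwise random input into something other labs can reproduce.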