
Content-Aware Device Benchmarking Methodology

(draft-hamilton-bmwg-ca-bench-meth-04)

BMWG Meeting, Maastricht, July 2010

Mike Hamilton, mhamilton@breakingpoint.com

BreakingPoint Systems


Agenda

• Why draft-hamilton?
• Charter objections/responses
• Goals reset
• Explicit goals of this draft
• Explicit non-goals of this draft


Why draft-hamilton?

RFC 2544 doesn’t specifically apply to some modern devices

Test vendors are already doing this in a one-off fashion (BreakingPoint, Spirent, Ixia, Agilent, etc.)


Charter Objections

• “the scope of the BMWG is limited to technology characterization using simulated stimuli in a laboratory environment.”

• “Said differently, the BMWG does not attempt to produce benchmarks for live, operational networks.”

• This does not restrict BMWG from creating benchmark tests that are representative of VERY SPECIFIC live, operational networks


Goals Reset

• Create a series of benchmark tests to MOST accurately predict device performance under realistic conditions FOR A SPECIFIC SIMULATED NETWORK

• RFC 2544 quotes (page 11, Section 18, “Multiple Frame Sizes”):
  • “The distribution MAY approximate the conditions on the network in which the DUT would be used.”
  • “The authors do not have any idea how the results of such a test would be interpreted other than to directly compare multiple DUTs in some very specific simulated network”
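
To make the quoted MAY concrete, here is a minimal Python sketch of how a frame-size distribution for a specific simulated network turns into an expected frame rate. The mix weights and the 1 Gb/s line rate are illustrative assumptions, not values from the draft or from RFC 2544:

    # Hypothetical frame-size mix for a specific simulated network
    # (weights are assumptions for illustration, not from the draft).
    mix = {64: 0.55, 594: 0.30, 1518: 0.15}  # frame size in bytes -> fraction of frames

    LINE_RATE_BPS = 1_000_000_000  # assumed 1 Gb/s Ethernet link
    PER_FRAME_OVERHEAD = 20        # preamble + SFD (8 B) plus inter-frame gap (12 B)

    avg_frame = sum(size * frac for size, frac in mix.items())
    frame_rate = LINE_RATE_BPS / ((avg_frame + PER_FRAME_OVERHEAD) * 8)
    print(f"average frame {avg_frame:.1f} B -> {frame_rate:,.0f} frames/s at line rate")

Two DUTs measured against the same declared mix can then be compared directly, which is the narrow interpretation the second quote allows.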


Explicit Goals

• Repeatable Results

• Compare Multiple DUTs


Explicit Non-Goals

• Not a replacement for RFC 2544

• Total Input Repeatability (discussion to follow)


Test Run Setup

• Methodologies run:
  • RFC 2544 Throughput (64B + 1518B)
  • RFC 3511 Throughput (1 kB + 512 kB)
  • IMIX Throughput (sketch below):
    • CAIDA
    • Spirent
    • Wikipedia
    • Agilent-simple
  • draft-hamilton-03 (random)
  • draft-hamilton-04 (shell)
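
To illustrate how one of the listed mixes might be driven, the sketch below uses the commonly published “simple” IMIX proportions (7 parts 64 B, 4 parts 594 B, 1 part 1518 B). Both the weights and the generator are assumptions for illustration, not the draft’s prescribed method:

    import random

    # Commonly published "simple" IMIX proportions, assumed here for
    # illustration: 7 parts 64 B, 4 parts 594 B, 1 part 1518 B.
    IMIX_SIMPLE = [(64, 7), (594, 4), (1518, 1)]

    def frame_sizes(mix, count, seed=0):
        """Yield a repeatable stream of frame sizes drawn from the weighted mix.

        A fixed PRNG seed keeps the stimulus identical across runs, which is
        what lets results from multiple DUTs be compared.
        """
        rng = random.Random(seed)
        sizes = [s for s, _ in mix]
        weights = [w for _, w in mix]
        for _ in range(count):
            yield rng.choices(sizes, weights=weights)[0]

    print(list(frame_sizes(IMIX_SIMPLE, 10)))  # same sequence on every run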


Test Results


Fuzzing Results


Draft-04 Highlights and Reasons

• “Shell” methodology
  • More reproducible

• Backoff on ‘realistic’
  • Compromise

• Dropped ‘security’
  • Difficult to scope and maintain currency

• Maintain ‘fuzzing’ aspect
  • Random but repeatable (sketch below)
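
On “random but repeatable”: a minimal sketch of the idea, assuming seeded pseudo-random mutation of an otherwise valid payload. The function name and parameters are hypothetical, not from the draft:

    import random

    def fuzz_payload(payload: bytes, seed: int, n_mutations: int = 4) -> bytes:
        """XOR a few pseudo-randomly chosen bytes with pseudo-random values.

        The same seed always yields the same mutations, so a case that
        trips the DUT can be replayed exactly on another run or device.
        """
        rng = random.Random(seed)
        data = bytearray(payload)
        for _ in range(n_mutations):
            pos = rng.randrange(len(data))
            data[pos] ^= rng.randrange(1, 256)  # 1..255, so each flip changes the byte
        return bytes(data)

    base = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
    assert fuzz_payload(base, seed=42) == fuzz_payload(base, seed=42)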