QoS in an Ethernet world

www.procket.com

Bill Lynch, Founder & CTO


QoS

• Why is it needed? (Or is it?)
• What does it do? (Or not do?)
• Gotchas…
• Why is it hard to deploy?


Triple play data networks

[Diagram: broadband homes connect through a high-speed Ethernet edge (assured QoS, DoS prevention) and PE routers to a distribution layer and an IP, MPLS, or λ core; centralized and regional headends deliver VOD, conferencing, and data services; VPN A and VPN B CE sites attach via PEs; a computational particle physicist site attaches to the core.]

• Video, voice, and data over Ethernet
• QoS across thousands of subscribers
• SLAs and differential pricing
• Interface content mirroring for security requirements


Triple play data characteristics

• Voice
  • Many connections
  • Low BW/connection
  • Latency/jitter requirements
• Video
  • Few sources
  • Higher BW
  • Latency requirements
• Data
  • Many connections
  • Unpredictable BW
  • BE generally okay
• Computational particle physicist
  • Very high peak BW & duration
  • Very few connections


Router QoS

[Diagram: a router with eight physical ports.]


Router QoS

[Diagram: the same router and its physical ports.]

QoS == which packet goes first.
It only matters under congestion (see the sketch below).
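Not from the deck: a minimal sketch of what "which packet goes first" means in practice, using a two-class strict-priority scheduler as a stand-in discipline (the slides do not say which scheduler any given router uses). With an idle link the service order equals the arrival order, so the class marking changes nothing; only a backlog makes QoS matter.

```python
from collections import deque

# Hypothetical two-class strict-priority scheduler: HI always drains before LO.
# This only illustrates "QoS == which packet goes first"; the deck does not say
# which scheduling discipline a given router implements.
queues = {"HI": deque(), "LO": deque()}

def enqueue(pkt, klass):
    queues[klass].append(pkt)

def dequeue():
    """Pick the next packet to transmit: HI before LO, FIFO within a class."""
    for klass in ("HI", "LO"):
        if queues[klass]:
            return queues[klass].popleft()
    return None  # link idle: QoS made no difference

# Uncongested: packets are sent as they arrive, so the marking changes nothing.
# Congested (a backlog exists): the HI packet overtakes earlier LO packets.
enqueue("lo-1", "LO"); enqueue("lo-2", "LO"); enqueue("hi-1", "HI")
print([dequeue() for _ in range(3)])   # ['hi-1', 'lo-1', 'lo-2']
```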


Router QoS

[Diagram: the same router; simultaneous arrivals for one output port illustrate inherent packet jitter.]

Inherent packet jitter:
• Bad: it accrues per hop!
• Worse: N simultaneous arrivals
• Worse: bigger MTU


Inherent jitter (per hop!): 1500 B serialization latency (µs), arithmetic sketched below

FE       120.0
OC-12     19.2
GE        12.0
OC-48      4.8
OC-192     1.2

Fundamental conclusion: QoS is more important at the edge.
The edge is also more likely to congest.
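The numbers above are simply the serialization delay of one 1500-byte packet at each line rate; a quick sketch of the arithmetic, with the usual nominal rates assumed for each interface name (not read off the slide):

```python
# Serialization delay of one MTU-sized packet per hop, and the worst case when
# N packets arrive simultaneously and must drain one after another.
MTU_BITS = 1500 * 8

links_bps = {          # nominal line rates (assumed, not taken from the slide)
    "FE":     100e6,
    "OC-12":  622e6,
    "GE":     1e9,
    "OC-48":  2.5e9,
    "OC-192": 10e9,
}

for name, bps in links_bps.items():
    per_hop_us = MTU_BITS / bps * 1e6
    worst8_us = 8 * per_hop_us   # e.g. 8 simultaneous arrivals: the last packet
                                 # waits behind 7 full MTUs at this hop alone
    print(f"{name:7s} {per_hop_us:6.1f} us/hop   worst-of-8 {worst8_us:7.1f} us")
```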


Gotchas…

• There are already no guarantees against simultaneous arrivals… but hope the total worst case is < 10 ms?
• And what if your router isn't perfect?


What is Queue Sharing?

Queue Sharing is when multiple physical or switch fabric connections must share queues.

Example: Each input linecard has two queues for each output linecard.

All packets in a shared queue are treated equally.

[Diagram: each input linecard holds a HI queue and a LO queue toward the output linecard; all physical ports on the input linecard feed these two shared queues.]


What is Head of Line Blocking?

When an output linecard becomes congested, traffic backs up on the input linecard.

[Diagram: the shared HI and LO queues on the input linecard fill because the congested output linecard cannot drain them.]

Traffic control (W/RED) must therefore be performed at the input VOQ.


What is Head of Line Blocking?

The output linecard cannot process all of the output traffic.

[Diagram: both a congested port and an uncongested port on the output linecard are fed from the same shared queue.]

Because all traffic in a shared queue (VOQ) is treated equally, we have affected traffic on the uncongested port.
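A toy simulation of this coupling, not a model of any particular linecard: two output ports, one congested, fed either from a single shared queue or from per-port virtual output queues. The arrival pattern, service rates, and port names are invented for illustration.

```python
from collections import deque

# Toy model, not any particular linecard: packets from one input linecard must
# cross the fabric to output port "X" (congested: willing to accept a packet
# only every 3rd tick) or port "Y" (uncongested: always willing).  One packet
# can cross the fabric per tick.
def run(use_voqs: bool, ticks: int = 30) -> dict:
    queues = {"X": deque(), "Y": deque()} if use_voqs else {"ALL": deque()}
    pick = (lambda dst: dst) if use_voqs else (lambda dst: "ALL")
    delivered = {"X": 0, "Y": 0}
    for t in range(ticks):
        dst = "X" if t % 2 == 0 else "Y"            # alternating arrivals
        queues[pick(dst)].append(dst)
        ready = lambda d: d == "Y" or t % 3 == 0    # is the output willing?
        for q in queues.values():                   # one send opportunity per tick
            if q and ready(q[0]):                   # only the HEAD of a queue can go
                delivered[q.popleft()] += 1
                break
    return delivered

print("shared queue:", run(use_voqs=False))  # Y loses throughput behind X's backlog
print("per-port VOQ:", run(use_voqs=True))   # Y is unaffected by X's congestion
```

In the shared case the uncongested port gets only about two thirds of its offered load because its packets wait behind packets for the congested port; with per-port VOQs it is unaffected.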


Queue Sharing Test Results

Congested port (Flows C, D, E) remained at 100% throughput

Uncongested ports (Flows A and B) were penalized because of Queue Sharing


The effects of Queue Sharing

With the presence of Queue Sharing, congestion can severely affect the performance of non-congested ports

Congestion is caused by:
• Topology changes
• Routing instability
• Denial of service attacks
• High service demand
• Misconfiguration of systems or devices


Output Queued Architectures - PRO/8000

• Only one queuing location exists in the entire system
• 36,000 unique hardware queues
• Protected bandwidth on a per-queue basis

[Diagram: physical ports on all linecards attach directly to the centralized shared memory switch fabric.]

Incoming packets are immediately placed into a unique output queue in the Centralized Shared Memory Switch Fabric.


Output Queued Architectures - PRO/8000

• Only one queuing location exists in the entire system
• Over 36,000 unique hardware queues
• Bandwidth is protected on a per-queue basis

[Diagram: as above, all physical ports attach to the centralized shared memory switch fabric.]

Incoming packets are immediately placed into a unique output queue in the Centralized Shared Memory Switch Fabric.


Output Queued Architectures - PRO/8000

Traffic control (W/RED) is performed on each output queue individually

Protected bandwidth for every single queue

[Diagram: physical ports and the centralized shared memory switch fabric, with W/RED applied per output queue.]
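A hedged sketch of the per-queue W/RED idea described above: random early drop driven by the average depth of one output queue, applied independently per queue so congestion on one port never causes drops on another. The thresholds, weight, and drop probability below are illustrative values, not Procket defaults.

```python
import random

# Weighted RED on one output queue: drop probability ramps from 0 to max_p as
# the *average* queue depth moves between min_th and max_th.  All values below
# are illustrative, not Procket defaults.
class WredQueue:
    def __init__(self, min_th=20, max_th=60, max_p=0.1, weight=0.2):
        self.pkts, self.avg = [], 0.0
        self.min_th, self.max_th, self.max_p, self.weight = min_th, max_th, max_p, weight

    def enqueue(self, pkt) -> bool:
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.pkts)
        if self.avg < self.min_th:
            p_drop = 0.0
        elif self.avg > self.max_th:
            p_drop = 1.0
        else:
            p_drop = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < p_drop:
            return False                 # early drop: signals congestion to TCP
        self.pkts.append(pkt)
        return True

# One independent instance per output queue: because queues are not shared,
# running W/RED here never drops traffic destined to some other port.
q = WredQueue()
accepted = sum(q.enqueue(i) for i in range(200))
print(f"accepted {accepted}/200 during sustained overload (no dequeues)")
```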


Pro/8812 Test Results

Congested port (Flows C, D, E) remained at 100% throughput

Uncongested ports (Flows A and B) remained at 100% throughput


Triple play data characteristics

• Voice
  • Many connections
  • Low BW/connection
  • Latency/jitter requirements
• Video
  • Few sources
  • Higher BW
  • Latency requirements
• Data
  • Many connections
  • Unpredictable BW
  • BE generally okay
• Computational particle physicist
  • Very high peak BW & duration
  • Very few connections


Network QoS architectures


Network      Predictability               QoS mechanism
PSTN         50 years, fixed BW           TDM
Cable MSO    50 years, transmit only      Provision and broadcast
Data         Evolving                     Over-provision


QoS Deployment Issues

• Political
  • Peers
• Equipment
  • QoS is end to end
  • Many queues/port
  • Many shapers/port
  • Fast diffserv/remarking
  • Computational expense
• Operational
  • Must deploy everywhere
  • Must police at the edge
• Commercial
  • Easier short-term solutions to problems
  • Cheaper alternatives
• Applications
  • Not tuned or aware
  • QoS not ‘required’ for the application
• Geographical
  • Last mile technologies
  • Single provider network
  • Green field deployments


Summary

• Triple play requires QoS
• Services drive quality
• Most routers aren’t perfect
• Shared queues mean you can’t provision a port independently
• Political and deployment problems remain
• Some geographic areas are better suited than others



Never underestimate the power of Moore’s Law

Chip      Die size                          Transistors   Contacts   On-chip memory
NPU       429 sq mm (20.17 mm x 21.29 mm)   214M          400M       2.6 MB
LCU       425 sq mm (20.17 mm x 21.07 mm)   137M          188M       950 KB
SC        297 sq mm (17.26 mm x 17.26 mm)   30.5M         47M        50 KB
Striper   429 sq mm (20.17 mm x 21.29 mm)   156M          265M       1.2 MB
MCU       389 sq mm (19.05 mm x 20.4 mm)    106M          188M       1.2 MB
GA        225 sq mm (15.02 mm x 15.02 mm)   83M           136M       900 KB


NPU – 40G QoS lookups

[Block diagram: FT SRAM, LxU, PxU, IPA, PBU (pacman), and QxU stages.]

VLIW systolic array:
• Packet advances every cycle
• Named bypassing
• > 200 processors
• 4 ops/cycle/processor
• 12 loads every cycle (1 Tb memory BW)
• 36 loads/packet


NPU

[Block diagram: the same FT SRAM, LxU, PxU, IPA, PBU (pacman), and QxU stages.]

VLIW systolic array:
• Normal instruction set: arithmetic, logical, branch, load
• Simple programming model
• Deterministic performance


Memory Controller – Service Level Queueing

• High BW: 16 DRAM chips, independent memory banks, BW distributed across banks (sketched below)
• 36K queues
• Memory management
• Write-once multicast
• Preserves ordering
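A conceptual sketch of the bank-distribution idea in the bullets above, under my own assumptions: buffer writes are striped across many independent DRAM banks so no single bank limits bandwidth, while a per-queue list of small handles preserves arrival order. The bank count matches the 16 DRAM chips on the slide; everything else is invented and is not the actual MCU design.

```python
from collections import deque
from itertools import count

NUM_BANKS = 16                                  # one per DRAM chip, per the slide
banks = [dict() for _ in range(NUM_BANKS)]      # bank index -> {handle: packet}
_handles = count()

def write_packet(pkt):
    handle = next(_handles)
    banks[handle % NUM_BANKS][handle] = pkt     # round-robin bank selection
    return handle

def read_packet(handle):
    return banks[handle % NUM_BANKS].pop(handle)

# The queue stores only handles, in arrival order, so ordering is preserved
# even though consecutive payloads live in different banks.
queue = deque(write_packet(f"pkt-{i}") for i in range(8))
print([read_packet(queue.popleft()) for _ in range(8)])
```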


Basic Router Architecture Elements

[Diagram: linecard, switch fabric, linecard.]

Three classes of switch fabric architecture:
• Input Queued (IQ)
• Output Queued (OQ)
• Combined Input/Output Queued (CIOQ)


Input Queued (IQ) Fabrics

[Diagram: input linecard, switch fabric, output linecard.]

Input Queued switch fabrics:
• Inefficient use of memory
• Require complex scheduling


Combined Input/Output Queued (CIOQ) Fabrics

[Diagram: input linecard, switch fabric, output linecard.]

CIOQ switch fabrics:
• Generally built around a point-to-point fabric in the middle (crossbar, multi-stage Clos, torus)
• Require complex scheduling
• Queues are shared to reduce complexity


Output Queued Fabrics

[Diagram: input linecard, switch fabric, output linecard.]

OQ switch fabrics:
• Require extremely high-speed memory access
• Do not share queues
• Efficient multicast replication
• Protected bandwidth per queue

www.procket.com

Terabit Centralized Shared Memory Routers

Bill Lynch, CTO

April 20, 2004

www.procket.com

Whither QoS?

Bill Lynch, CTO

April 20, 2004


Concurrent Services

[Diagram: broadband homes connect through a high-speed Ethernet edge (assured QoS, DoS prevention) and PE routers to a distribution layer and an IP, MPLS, or λ core; centralized and regional headends deliver VOD, conferencing, and data services; VPN A and VPN B CE sites attach via PEs; research, education, grid, and supercomputing sites attach to the core as well.]

• Video, voice, and data over Ethernet
• QoS across thousands of subscribers
• SLAs and differential pricing
• Interface content mirroring for security requirements



(More Bill’s slides here)
• (As much detail on the switch fabric and chips as you are comfortable saying in a multi-vendor environment!)
• No scheduling
• 36K service-level queues
• NPU for fast lookup, policing, and shaping (a generic policer is sketched below)
• SW abstraction based on the service performed, not on exposed knobs
• Many, many, many DRAM banks; however, half as many as CIOQ architectures
• 40G NPU for line rate: policing, remarking, DA, AS, and other lookups
• SW interface focuses on the service, not on knobs
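The policing and remarking mentioned above are commonly built from token buckets; a generic single-rate policer sketch follows. The rate, burst size, and conform/exceed actions are illustrative, and this is not claimed to be the NPU's actual algorithm.

```python
# Generic single-rate token-bucket policer: conforming packets pass unchanged,
# excess packets get remarked (or could be dropped).  Values are illustrative.
class Policer:
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8.0          # token fill rate in bytes/second
        self.burst = float(burst_bytes)     # bucket depth
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def police(self, now: float, size_bytes: int) -> str:
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return "conform"                # forward with its original marking
        return "exceed"                     # remark to a lower class (or drop)

p = Policer(rate_bps=1_000_000, burst_bytes=3000)       # 1 Mb/s with a 3 kB burst
for i in range(10):
    print(i, p.police(now=i * 0.001, size_bytes=1500))  # 1500 B every 1 ms = 12 Mb/s offered
```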


(Insert Bill’s slides here)
• Self-introduction
• Problem statement (Bill)
  • "Layer 3 QoS at the right scale and price is elusive." Throwing more bandwidth at lower layers only makes networking researchers commodity bandwidth brokers. That is fine for R&E, but commercially it is too expensive, so there appears to be a growing disconnect between R&E and commercial networks. It will be important not to slam the current L2/L1 vogue lest we upset the locals :)
  • Numerous commercial implementations starting now: single-network countries, high BW to the home, triple play
• Assertion (Bill)
  • "System architecture greatly contributes to the proper operation of network-wide QoS." Current system architectures are completely unfocused on network-wide QoS and focused instead on per-hop behaviors. This forces networkers to tweak 100 knobs to get the desired behavior. Why not architect the system to protect a flow through the router, so that behaviors are predictable in every circumstance?
  • QoS is end to end; any problem is exacerbated by TCP.


Abilene Network Map

Source: http://abilene.internet2.edu/new/upgrade.html


Internet Growth Predictions


“117% YEARLY GROWTH THROUGH 2006”

“VIDEO WILL DRIVE TRAFFIC GROWTH OVER THE NEXT 10 YEARS”

Source: Yankee Group April 2004


• Concurrent Services Edge: intradomain QoS
• Single Element Core (Cluster): interdomain QoS
• Network Reference Design: peers


PRO/8000™ Concurrent Services Routers

• Highest performance and density: 960 Gbps, 2 per rack
• Ultra-compact: 80 Gbps, 8 per rack


PRO/8000 Series Logical Architecture

• No single point of failure
• Strictly non-blocking
• Fully redundant Switch Cards (2+1) and Route Processors (1+1)
• All components hot-swappable in service

[Diagram: Line Cards with Media Adapters connect through Procket VLSI (the forwarding plane) to the Switch Cards; redundant Route Processors (CP) form the control plane.]


Basic Router Architecture Elements

[Diagram: linecard, switch fabric, linecard.]

Three classes of switch fabric architecture:
• Input Queued (IQ)
• Output Queued (OQ)
• Combined Input/Output Queued (CIOQ)


Input Queued (IQ) Fabrics

[Diagram: input linecard, switch fabric, output linecard.]

Input Queued switch fabrics:
• Inefficient use of memory
• Require complex scheduling


Combined Input/Output Queued (CIOQ) Fabrics

[Diagram: input linecard, switch fabric, output linecard.]

CIOQ switch fabrics:
• Generally built around a point-to-point fabric in the middle (crossbar, multi-stage Clos, torus)
• Require complex scheduling
• Queues are shared to reduce complexity


Output Queued Fabrics

[Diagram: input linecard, switch fabric, output linecard.]

OQ switch fabrics:
• Require extremely high-speed memory access
• Do not share queues
• Efficient multicast replication
• Protected bandwidth per queue


What is Queue Sharing?

Queue Sharing is when multiple physical or switch fabric connections must share queues.

Example: Each input linecard has two queues for each output linecard.

All packets in a shared queue are treated equally.

[Diagram: each input linecard holds a HI queue and a LO queue toward the output linecard; all physical ports on the input linecard feed these two shared queues.]


What is Head of Line Blocking?

When an output linecard becomes congested, traffic backs up on the input linecard.

[Diagram: the shared HI and LO queues on the input linecard fill because the congested output linecard cannot drain them.]

Traffic control (W/RED) must therefore be performed at the input VOQ.


What is Head of Line Blocking?

The output linecard cannot process all of the output traffic.

[Diagram: both a congested port and an uncongested port on the output linecard are fed from the same shared queue.]

Because all traffic in a shared queue (VOQ) is treated equally, we have affected traffic on the uncongested port.


Queue Sharing Test Results

Congested port (Flows C, D, E) remained at 100% throughput

Uncongested ports (Flows A and B) were penalized because of Queue Sharing: traffic on adjacent ports was dropped!


Output Queued Architectures - PRO/8000

• Only one queuing location exists in the entire system
• Over 36,000 unique hardware queues
• Protected bandwidth down to DS3 granularity

[Diagram: as above, all physical ports attach to the centralized shared memory switch fabric.]

Incoming packets are immediately placed into a unique output queue in the Centralized Shared Memory Switch Fabric.


Output Queued Architectures - PRO/8000

• Only one queuing location exists in the entire system
• Over 36,000 unique hardware queues
• Protected bandwidth down to DS3 granularity

[Diagram: as above, all physical ports attach to the centralized shared memory switch fabric.]

Incoming packets are immediately placed into a unique output queue in the Centralized Shared Memory Switch Fabric.


Output Queued Architectures - PRO/8000

Traffic control (W/RED) can be performed on each output queue individually!

Protected bandwidth for every single queue

[Diagram: as above, the per-output queues live in the centralized shared memory switch fabric.]


Multicast Scaling and Performance by Design

[Diagram: content enters through the incoming line card's media adapters, crosses the centralized shared memory switch fabric, and fans out to the outgoing line cards.]

1. One copy of the packet is written into memory (see the sketch below)
2. Output line cards read a copy of the packet out of memory
3. The packet is copied to each outgoing interface
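A small sketch of the write-once replication described in the steps above: the payload is stored once, each outgoing line card holds only a handle, and a reference count frees the buffer after the last read. Names and data structures are illustrative, not the hardware design.

```python
from itertools import count

# Write-once multicast: one copy of the payload in shared memory, a small
# handle on each output queue, and a reference count to free the buffer after
# the last output line card has read it.
shared_memory = {}           # handle -> [payload, remaining_readers]
_handles = count()

def multicast_write(payload, output_queues):
    handle = next(_handles)
    shared_memory[handle] = [payload, len(output_queues)]
    for q in output_queues:
        q.append(handle)     # only the handle is replicated, not the payload

def output_read(handle):
    entry = shared_memory[handle]
    entry[1] -= 1
    if entry[1] == 0:
        del shared_memory[handle]    # last reader: release the buffer
    return entry[0]

ports = [[], [], []]                 # three outgoing line cards
multicast_write("video frame", ports)
print([output_read(q.pop(0)) for q in ports], "buffers held:", len(shared_memory))
```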


State-of-the-art Networking Software

• Lightweight kernel
• Fully modular
• Memory protection
• Inherent fault isolation
• Restartable processes
• Rapid recovery from failures
• Fast messaging between processes
• Embedded self-diagnostics
• In-service upgrades
• Automatic image rollback
• Simple to extend
• Modular forwarding code
• Built-in portability


Portable, Lightweight Kernel

• Portability of the PRO/1 MSE is built in: system software features can easily be moved to new platforms
• Stripped down to essential functions to maximize stability: no networking functions or services can crash the system
• Designed for mission-critical applications
• The lightweight kernel handles only scheduling and memory allocation

Portability ensures longevity and consistency of the system software; a lightweight operating system maximizes system stability.


Modular versus Monolithic Alternatives

[Diagram comparing three software architectures:
• Monolithic: CLI, interfaces, BGP, IS-IS, OSPF, PIM, MSDP, SSM, and IGMP all live inside one kernel image.
• Semi-monolithic: CLI, interface manager, BGP, IS-IS, OSPF, PIM, MSDP, SSM, and IGMP sit together on top of the kernel.
• Fully modular: BGP, OSPF, IGMP, PIM, SNMP, CLI, and other processes run separately above a lightweight kernel, coordinated by a System Manager and an Intelligent Service Agent.]


Improve Network Availability, Simplify Operations

In-service software upgrades increase network availability and simplify network operations.

1. The Procket Package Manager checks compatibility of the base release and the SNMP package
2. While the SNMP package is installed, all protocols continue to operate
3. Once installed, SNMP can be restarted using the new package

[Diagram: the running base release (BGP, OSPF, IGMP, PIM, SNMP, CLI, and other processes under the Intelligent Service Agent) keeps operating while the new SNMP package is installed and only the SNMP process is restarted.]


Software Architecture

• Each protocol runs as a separate process
• Uses multiple POSIX threads for scheduling tasks
• Uses private memory for local data structures
• Uses well-documented APIs to service other processes
• Uses shared memory when offering a read-only API service to other processes
• Run-to-completion thread scheduling
• Table managers run as separate processes


IPC Example

• OSPF: learns a route, adds it to the URIB, uses message-queue (mq) IPC
• URIB: writes the route into the shared-memory routing table via the URIB API
• IP: a packet arrives, the route lookup is a direct read of the shared-memory table through the URIB API (see the sketch below)

[Diagram: OSPF sends an IPC message to URIB; URIB has read/write access to the shared-memory routing table; IP forwarding performs a direct read through the URIB API.]
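A minimal in-process analogue of the two IPC styles on this slide, assuming a URIB that is the only writer of a published routing table: OSPF hands it updates over a message queue, while the forwarding path reads the table directly. The function names and the exact-match lookup are stand-ins, not the real URIB API.

```python
import queue

# OSPF -> URIB: route updates travel over a message queue (mq IPC).
# IP forwarding -> URIB: lookups read the published table directly.
routing_table = {}           # stands in for the shared-memory region
updates = queue.Queue()      # stands in for the mq IPC channel

def ospf_learns(prefix, next_hop):
    updates.put((prefix, next_hop))          # asynchronous: OSPF never blocks on URIB

def urib_process_updates():
    while not updates.empty():
        prefix, next_hop = updates.get()
        routing_table[prefix] = next_hop     # URIB is the single writer

def ip_lookup(prefix):
    # Exact-match stand-in for a longest-prefix lookup; direct read, no IPC round-trip.
    return routing_table.get(prefix)

ospf_learns("10.0.0.0/8", "192.0.2.1")
urib_process_updates()
print(ip_lookup("10.0.0.0/8"))               # -> 192.0.2.1
```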


Programmable VLSI Forwarding Engine

• Facilitates line-rate forwarding of IPv4, IPv6, and MPLS traffic
• New services with software downloads rather than hardware upgrades
• Support for IPv6
• TTL checking in hardware (for ACLs, GTSM, …)
• Multiple priority queues for RP-destined traffic
  • Capable of modifying queue priority for various types of control traffic
• Multicast (PIM-SM) support does not need special media adapters


CLI Highlights

• Familiar ‘look and feel’ reduces OpEx
• Operations mode examples:
    show ip bgp summary
    show ip interface brief
    show ip ospf neighbors
    show isis database
    show ip mroute
• Configuration mode example:
    router bgp 100
     log-neighbor-changes
     neighbor 10.1.1.1 remote-as 200
      dont-capability-negotiate
      address-family ipv4 unicast
       policy remove-martians in
• Support for deferred configuration “commits”


Other salient software features

• Powerful policy specification framework
  • Intuitive syntax, similar to structured programming languages
  • Support for “chaining” actions
• Service-oriented, modular QoS configuration
• Dynamic RP queue prioritization for known BGP peers
• Ability to constrain debug output using “debug-filters”
• Conservative defaults: unnecessary services are disabled (only ‘ssh’ is on)
• Digitally signed software packages for verifying the source and integrity of contents
• Intelligent service agent for proactive health monitoring


Procket PRO/ Silicon Technology

• Highest performance and density: 960 Gbps, 2 per rack
• Ultra-compact: 80 Gbps, 8 per rack


Procket PRO/ Silicon Technology


Procket PRO/ Silicon Technology

• World’s fastest packet processors
  • First 40 Gbps network processor (2002)
  • Record bandwidth density
  • 6-chip family
• Most flexible platform
  • Unmatched programmability enables new services
  • Long lifetime
• Enhanced reliability
  • Highest level of silicon integration

NPU: 214 million transistors


Never underestimate the power of Moore’s Law

Chip      Die size                          Transistors   Contacts   On-chip memory
NPU       429 sq mm (20.17 mm x 21.29 mm)   214M          400M       2.6 MB
LCU       425 sq mm (20.17 mm x 21.07 mm)   137M          188M       950 KB
SC        297 sq mm (17.26 mm x 17.26 mm)   30.5M         47M        50 KB
Striper   429 sq mm (20.17 mm x 21.29 mm)   156M          265M       1.2 MB
MCU       389 sq mm (19.05 mm x 20.4 mm)    106M          188M       1.2 MB
GA        225 sq mm (15.02 mm x 15.02 mm)   83M           136M       900 KB


40 Gbps NPU

• VLIW systolic array
• 375 MHz
• 125 Mpps (see the arithmetic sketch below)
• 2856 min ops/packet
• 37 min loads/packet
• 255 meters
• 256K GPCID
• Programmable features: parsing, lookup, PCL, QoS, accounting, IPv6
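A back-of-envelope check of how the headline figures above relate; the 40-byte minimum packet size and the absence of L2 overhead are my assumptions, not the slide's.

```python
# Back-of-envelope check of the 40 Gbps / 125 Mpps / 375 MHz figures.
line_rate_bps = 40e9
min_pkt_bits = 40 * 8            # assume 40 B minimum packets, no L2 overhead
pps = line_rate_bps / min_pkt_bits
print(f"worst-case packet rate ~ {pps / 1e6:.0f} Mpps")          # ~125 Mpps

clock_hz = 375e6
print(f"cycle budget per packet ~ {clock_hz / pps:.0f} cycles")  # ~3 cycles

# A systolic array covers that tiny budget by pipelining: with > 200 processors
# issuing 4 ops/cycle, ~3 cycles of occupancy per packet gives on the order of
# 200 * 4 * 3 = 2400+ operations available per packet.
print("ops/packet on the order of", 200 * 4 * 3)
```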


40 Gbps NPU

[Block diagram: FT SRAM, LxU, PxU, IPA, PBU (pacman), and QxU stages.]

VLIW systolic array:
• Packet advances every cycle
• Named bypassing
• > 200 processors
• 4 ops/cycle/processor
• 12 loads every cycle (1 Tb memory BW)
• 36 loads/packet


40 Gbps NPU

[Block diagram: the same FT SRAM, LxU, PxU, IPA, PBU (pacman), and QxU stages.]

VLIW systolic array:
• Normal instruction set: arithmetic, logical, branch, load
• Simple programming model
• Deterministic performance


Memory Controller – Service Level Queueing

• High bandwidth: 16 DRAM chips, independent banks, BW distributed across banks
• 36K queues
• Memory management
• Write-once multicast
• Preserves ordering

[Block diagram: MLT, SCIB, MQCC, QCC, TQHQ, and COHB blocks.]


PRO/Silicon Technology: Advanced Silicon Development

[Diagram: the chip-development flow (architecture, gates/logic, layout, package, fab, custom work) spans roles traditionally split between the ASIC designer and the ASIC vendor, targeting max density, max speed, max reliability, min power, and min cost.]

Procket facilities provide complete control over chip design.


Procket PRO/ Silicon Technology

Architecture:
• Highest density: space efficient
• Highest power efficiency: OPEX savings
• Full programmability: future proof
• High integration: reliable


PRO/8000™ Concurrent Services Routers

• Highest performance and density: 960 Gbps, 2 per rack
• Ultra-compact: 80 Gbps, 8 per rack