Ning Weng ANCS 2005
Design Considerations for Network Processor Operating Systems
Tilman Wolf (1), Ning Weng (2), and Chia-Hui Tai (1)
(1) University of Massachusetts Amherst, (2) Southern Illinois University Carbondale
Network Processor Systems
• System outline:
• Network Processor Operating System (NPOS)
  ─ Manages multicore embedded system
  ─ Considers workload requirements and network traffic
[Figure: a router whose ports feed packets through network processors into a switching fabric; each network processor is a parallel architecture of processing elements, managed by the Network Processor Operating System based on network traffic and the application workload (Applications 1-3)]
NPOS Characteristics
• Network processing is a very dynamic process
  ─ Many different network services and protocols
  ─ Processing requirements depend on network traffic
  ─ New algorithms for existing applications, e.g., flow classification
• Managing network processors is difficult
  ─ Multiple embedded processor cores
  ─ Limited memory and processing resources
  ─ Tight interaction between components
• Processing elements cannot implement a complex OS
• NPOS requirements:
  ─ Lightweight
  ─ Considers multiprocessor nature
  ─ Adaptive to changes in workload
Comparison
• Major differences to workstation/server OS
  ─ Separation between control and data path
  ─ Limited/no user interaction
  ─ Highly regular and “simple” applications
  ─ Processing dominates resource management
  ─ No separation of user space and kernel space
• Differences to other NP runtime environments
  ─ Others: NEPAL [1], Teja [2], Shangri-La
  ─ Multiple packet processing applications
  ─ Run-time remapping
  ─ Considers parallelism within an application
  ─ Not limited to particular hardware
Outline
• Introduction
• NPOS architecture
  ─ Our approach
  ─ Design parameters
• Application workload
  ─ Partitioning and mapping
• Traffic characterization
  ─ Variation in processing demand
• Results and tradeoffs
  ─ NPOS parameters
  ─ Quantitative tradeoffs
• Example NPOS scenarios
Architecture of NPOS
• Applications
  ─ Multiprocessor requires application partitioning
  ─ Mapping during runtime
• Network traffic
  ─ Determines workload
  ─ Analysis of traffic required during runtime
• Dynamic aspects
  ─ Traffic determines application mix
  ─ Complete or partial adaptation necessary
Design Questions
• How finely should applications be partitioned?
• How good does the mapping approximation need to be?
• Should we spend more time on better mapping or should we remap more frequently?
• How often should the NPOS remap?
• How badly does the system perform if we predict the workload incorrectly?
• Should we remap completely or should we remap partially?
NPOS Parameters
• Application partitioning
  ─ Partitioning granularity
• Traffic characterization
  ─ Sample size
  ─ Batch size
  ─ Single parameter: traffic variation
• Application mapping
  ─ Mapping effort
  ─ Mapping quality
• Workload adaptation
  ─ Frequency
  ─ Complete or partial reallocation
Application Partitioning
• Grouping of instruction blocks
  ─ Dependencies between blocks
• Represented by a directed acyclic graph
  ─ Annotation gives information on processing and dependencies
  ─ Annotated Directed Acyclic Graph (ADAG)
• ADAG generation
  ─ Automatic derivation from a runtime trace
• Balance of node size is important
  ─ NP-complete problem
  ─ Heuristic approximation
  ─ Presented at NP-3 [4]
• Choice of granularity in NPOS
  ─ Monolithic
  ─ Very fine-grained ADAG
  ─ Balanced ADAG
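The balancing heuristic itself is not shown on the slide, so the following is only a minimal sketch of one plausible approach: walk the instruction blocks in topological order (so inter-group edges always point forward, keeping the grouped graph acyclic) and greedily close a group once it reaches a target instruction count. The graph representation and `target_weight` parameter are assumptions, not the authors' NP-3 algorithm.

```python
from collections import defaultdict

def balanced_adag(blocks, deps, target_weight):
    """Greedily group instruction blocks into ADAG nodes of roughly
    target_weight instructions each.

    blocks: dict block_id -> instruction count
    deps:   list of (producer, consumer) block-id pairs
    """
    # Topological order via Kahn's algorithm.
    succ = defaultdict(list)
    indeg = {b: 0 for b in blocks}
    for u, v in deps:
        succ[u].append(v)
        indeg[v] += 1
    ready = [b for b, d in indeg.items() if d == 0]
    order = []
    while ready:
        b = ready.pop()
        order.append(b)
        for v in succ[b]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)

    # Greedy grouping: close the current group once it is heavy enough.
    # Since blocks are taken in topological order, every dependency
    # points from an earlier (or the same) group to a later one.
    groups, current, weight = [], [], 0
    for b in order:
        current.append(b)
        weight += blocks[b]
        if weight >= target_weight:
            groups.append(current)
            current, weight = [], 0
    if current:
        groups.append(current)
    return groups
```

Lowering `target_weight` moves the result toward the very fine-grained end of the granularity spectrum; setting it above the total instruction count yields the monolithic case.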
Workload Mapping
• Process of placing ADAGs on the network processor
• Baseline system: [figure omitted]
• Analytic performance model: not discussed here
Mapping Algorithm
• Mapping problem is NP-complete
  ─ Need heuristic approximation
• Key assumption:
  ─ Quality of mapping depends on mapping effort
• Randomized mapping
  ─ Randomly place ADAG
  ─ Evaluate performance
  ─ Keep best solution and retry
• Increasing mapping effort yields incrementally better results
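The randomized loop above (place randomly, evaluate, keep the best, retry) can be sketched as follows. The `cost` callback is a hypothetical stand-in for the paper's analytic performance model, which is not described on the slides; the load-imbalance example below is an illustrative placeholder only.

```python
import random

def randomized_mapping(adag_nodes, num_procs, trials, cost, seed=0):
    """Randomly place ADAG nodes on processors, evaluate each placement,
    and keep the best seen. More trials (i.e. more mapping effort) can
    only improve, never worsen, the best solution found.

    adag_nodes: list of node ids
    num_procs:  number of processing elements
    cost:       maps {node: proc} -> number, lower is better
    """
    rng = random.Random(seed)
    best_map, best_cost = None, float('inf')
    for _ in range(trials):
        candidate = {n: rng.randrange(num_procs) for n in adag_nodes}
        c = cost(candidate)
        if c < best_cost:
            best_map, best_cost = candidate, c
    return best_map, best_cost

# Illustrative stand-in cost: load imbalance across processors.
def imbalance(mapping, weights, num_procs):
    load = [0] * num_procs
    for node, proc in mapping.items():
        load[proc] += weights[node]
    return max(load) - min(load)
```

With a fixed random seed, the first N candidates of a longer run are identical to those of a shorter run, so the best cost after 200 trials is never worse than after 10, matching the slide's claim that more mapping effort yields incrementally better results.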
Application Partitioning Granularity
• What level of granularity is best?
• Monolithic (one single node): does not exploit parallelism
• Very fine-grained: requires excessive mapping effort
Traffic Characterization
• We can find a configuration for one particular workload
  ─ Workload depends on traffic, which changes dynamically
• Need to adapt to traffic
• Cannot adapt for every packet
  ─ Need to sample traffic and find a configuration for a longer time
• Traffic models for NPOS:
  ─ Static: cannot adapt, generally not suitable
  ─ Batch: batch of packets buffered, perfect prediction, long delay
  ─ Predictive batch: sampling of traffic, prediction for entire batch
    • Takes advantage of temporal locality of network traffic
• Key NPOS parameters:
  ─ Batch size: number of packets processed using one workload allocation
  ─ Sample size: number of packets used to predict batch workload
• Impact metric: traffic variation
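The predictive-batch model can be read as a simple per-batch loop: count applications in the first `sample_size` packets, scale that count up to the full batch, and use it as the predicted workload. This is a sketch of the model's data flow only; the function name and interface are illustrative, and the slides do not specify this exact computation.

```python
from collections import Counter

def predictive_batch(packet_apps, sample_size, batch_size):
    """Predictive-batch traffic model: estimate each batch's application
    mix from its first sample_size packets, then scale the estimate to
    the whole batch (exploiting temporal locality of network traffic).

    packet_apps: iterable of application ids, one per packet, in order
    Yields (estimated_counts, actual_counts) for each batch.
    """
    packets = list(packet_apps)
    for start in range(0, len(packets), batch_size):
        batch = packets[start:start + batch_size]
        sample = batch[:sample_size]
        scale = len(batch) / max(len(sample), 1)
        estimated = {a: n * scale for a, n in Counter(sample).items()}
        actual = Counter(batch)
        yield estimated, actual
```

When traffic within a batch resembles its opening sample, the estimate is accurate with only `sample_size` packets of buffering, instead of the full-batch buffering (and delay) of the pure batch model.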
Traffic Variation
• Measure of traffic variation v
  ─ Metric for how different traffic is from what we expected
  ─ e_{i,j}(a): estimated number of packets for application a
  ─ p_{i,j}(a): actual number of packets for application a
  ─ Workload allocated according to a sample of size l
  ─ What fraction of packets in a batch of size b cannot be processed?
  ─ Ideal: v = 0, all packets match the workload allocation
• Figure:
  ─ 4,235,403 packets, 175 categories of applications
  ─ Sample size l = 100, batch size b = 10,000
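The slide states the metric's meaning but not its formula, so the following is an assumed formalization consistent with the stated properties: packets that arrive in excess of their application's estimated count have no allocation, and v is their share of the batch, with v = 0 when estimate and batch agree exactly.

```python
def traffic_variation(estimated, actual, batch_size):
    """Fraction of packets in a batch that the sample-based workload
    allocation did not account for (assumed formalization, not taken
    from the slides).

    estimated:  app -> e(a), estimated packet count scaled to the batch
    actual:     app -> p(a), packet count actually observed in the batch
    batch_size: b, total packets in the batch
    """
    # Packets beyond an application's estimate cannot be processed as
    # allocated; applications that undershoot contribute nothing.
    unmatched = sum(max(0.0, p - estimated.get(a, 0.0))
                    for a, p in actual.items())
    return unmatched / batch_size
```

For example, an estimate of 100 packets of application A against an actual batch of 50 A and 50 B gives v = 0.5: all of B's packets were unprovisioned.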
Sample and Batch Size
• Bigger sample reduces v
  ─ Better prediction
• Bigger batch reduces v
  ─ Only if sample also increases
  ─ Smoothes over variation
• NPOS considerations
  ─ Limitations on sample size
    • Need to buffer packets
    • Need time to compute mapping
  ─ Limitations on batch size
    • Larger batches predict further ahead
    • More variation with larger batches
  ─ Need to remap during runtime
• Figure: l = 100
Optimal Mapping Frequency
• How often should we run the mapping process?
• Need to find the “sweet spot”
  ─ Too frequently: low mapping quality
  ─ Too infrequently: traffic changes during the batch
  ─ Traffic variation reduces performance
• Depends on batch size
• For our setup:
  ─ Optimal mapping frequency around every 20-100 packets
  ─ Depends on relative speed of the processor that performs mapping
Partial Mapping
• Traffic changes workload incrementally
• Can we adapt by partial mapping?
  ─ Remove unnecessary ADAGs
  ─ Map new ADAGs onto the existing mapping
• NPOS considerations:
  ─ What is the long-term performance impact?
  ─ How much can we change?
• Repeated partial mapping degrades performance
  ─ Stabilizes at some suboptimal state
• Mapping granularity makes a minor difference
• Complete mapping is occasionally necessary for peak performance
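Partial mapping can be sketched as a restricted version of the randomized search: only the new application's nodes are placed, while every existing placement stays fixed. As before, the `cost` callback is a hypothetical stand-in for the analytic performance model, and this sketch is not the authors' implementation.

```python
import random

def partial_remap(mapping, new_nodes, num_procs, trials, cost, seed=0):
    """Place only new_nodes onto the processors; nodes already in
    `mapping` keep their processors. The randomized search runs over a
    much smaller space than a full remap, so it is cheap, but it cannot
    undo earlier placement decisions.
    """
    rng = random.Random(seed)
    best, best_cost = None, float('inf')
    for _ in range(trials):
        trial = dict(mapping)  # existing ADAGs stay where they are
        trial.update({n: rng.randrange(num_procs) for n in new_nodes})
        c = cost(trial)
        if c < best_cost:
            best, best_cost = trial, c
    return best, best_cost
```

Because the existing placements are frozen, repeated partial remaps can accumulate poor decisions and settle into the suboptimal steady state the slide describes; an occasional complete remap over all nodes restores peak mapping quality.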
Design Scenarios
• Tradeoffs between different NPOS scenarios
  ─ Scenario I: static configuration
    • Simple system
    • No flexibility at runtime
    • Performance degradation under traffic variations
  ─ Scenario II: predetermined configuration
    • Offline mapping of multiple static workloads
    • Limited adaptability during runtime
    • High-quality mapping results
  ─ Scenario III: fully dynamic configuration
    • Complete adaptability to any workload during runtime
    • Limited mapping quality
    • Lower overprovisioning overhead
• Results of our work provide quantitative tradeoffs
Conclusion
• Network Processor Operating System
  ─ Application workload
  ─ Traffic characterization
  ─ Design parameters
  ─ Quantitative tradeoffs
• Next steps
  ─ Integrate memory management
  ─ Consider different traffic prediction algorithms
  ─ Develop prototype system on IXP platform
References
[1] Memik, G., and Mangione-Smith, W. H. NEPAL: A framework for efficiently structuring applications for network processors. In Proc. of Second Network Processor Workshop (NP-2), in conjunction with Ninth International Symposium on High-Performance Computer Architecture (HPCA-9), Feb. 2003.
[2] Teja Technologies. TejaNP Datasheet, 2003. http://www.teja.com.
[3] Kokku, R., Riché, T., Kunze, A., Mudigonda, J., Jason, J., and Vin, H. A case for run-time adaptation in packet processing systems. In Proc. of the 2nd Workshop on Hot Topics in Networks (HotNets-II), Nov. 2003.
[4] Ramaswamy, R., Weng, N., and Wolf, T. Application analysis and resource mapping for heterogeneous network processor architectures. In Proc. of Third Network Processor Workshop (NP-3), Feb. 2004.
[5] Weng, N., and Wolf, T. Pipelining vs. multiprocessors - choosing the right network processor system topology. In Proc. of Advanced Networking and Communications Hardware Workshop, June 2004.
[6] Weng, N., and Wolf, T. Profiling and mapping of parallel workloads on network processors. In Proc. of the 20th Annual ACM Symposium on Applied Computing (SAC), March 2005.
Questions?