ASON GMPLS Optical Control Plane Tutorial

MUPBED Workshop at TNC2007, Copenhagen
Acknowledgement: The author thanks all colleagues from the OIF for their work, which has been the basis for this tutorial. The responsibility for the content of this tutorial is with the author.
www.oiforum.com
Hans-Martin Foisel, T-Systems / Deutsche Telekom, OIF Carrier WG Chair, OIF Vice President


Transcript of ASON GMPLS Optical Control Plane Tutorial


  • *ASON/GMPLS Tutorial Outline

    Introduction
    Requirements & Architecture
    Signaling
    Routing
    Control Plane Management
    OIF Interoperability Demonstrations
    Control Plane Applications / Use Cases
    Concluding remarks

  • *Optical Control Plane Goals

    Offer real multi-vendor and multi-carrier inter-working
    Enhance service offering with Ethernet and IP / Optical
    Provide end-to-end service activation
    Integrated cross-domain provisioning of switched connection services
    Provide accurate inventory management

    [Diagram: access, edge, metro, core, and long-haul core segments under a common optical control plane and management plane (TMF-814), with client interfaces including Ethernet 10/100bT, ATM, OC-3/12/48/192, STM-1o/4/16/64, FICON, FC, FR, E-1, DS-1, E-3, DS-3, and IP]

  • *Realizing Optical Control Plane Goals: Framework Elements

    Robust and scalable transport infrastructure that facilitates carriage of desired services
    Management plane that complements the control plane in facilitating deployment and management of services
    Control plane architecture spanning user and provider networks that supports multiple provider business models and user service requests
    Control plane protocols based upon existing and emerging protocols of the data world
    Robust Data Communications Network architecture and mechanisms that enable interaction of the protocols running at each node

  • *Intelligent Transport Networks introduce ...

    A distributed Control Plane
    Signaling protocols for dynamic setup and teardown of connections
    Routing protocols for automatic routing
    Building on concepts/protocols from the data world

  • *Key Concepts Derived from the Data World

    Distributed processing/knowledge/storage
      Directory services, e.g., DNS, X.500
      Open Distributed Processing
    Standardized route determination and topology dissemination protocols
      Routing information exchange mechanisms, e.g., RIP, OSPF, BGP, IS-IS/ES-IS
    Flexibility in binding time decisions
      Difference between provisioning and auto-discovery
    Security based upon logical versus physical barriers
      E.g., authentication, integrity, encryption
    Differentiate between provisioning and more dynamic connection management
    Survivability
      Distributed restoration using signaling

  • *Leveraging Existing Protocol Solutions: Caveats

    Classical Internet architecture: the Internet serves a community of users with common goals and mutual trust. Protocol solutions developed for the classical Internet bring along the associated underlying principles and architectural aspects.

    Transport business & operational requirements: the control plane architecture must enable boundaries for policy and information sharing.

    Commercialization of the Internet: more business-critical infrastructure and availability requirements.

  • *Optical Control Plane Capabilities

    Control Plane functions: Signalling, Routing, Discovery
    Handles bandwidth requests or releases from clients, and network failures
    Benefits: improved bandwidth usage/efficiency, scheduled/unscheduled bandwidth on demand (BoD), OSS simplification, autodiscovery

    [Diagram: distributed-intelligence optical control plane interacting with a management system, clients, and the transport network]

  • *Related Standards Development Organizations

    OIF: ASON/GMPLS E-NNI, UNI; signalling for Ethernet services; interop results; use cases (Implementation Agreements)
    IETF: GMPLS protocols (RFCs)
    ITU-T: ASON architecture & requirements (Recommendations)
    TMF: control plane management (Solution Sets)
    MEF: Ethernet services (Technical Specifications)

  • *Protocols and Architectures

    Control Plane capabilities are implemented in protocols, whose elements can be combined to support different architectures/implementations
    Different SDOs (IETF, ITU-T, OIF) contribute various protocol elements and architectural components to Control Plane solutions

  • *Control Plane Specifications - Example

    ITU-T: G.8080 (requirements & architecture); G.7714, G.7714.1 (auto-discovery); G.7713, G.7713.2 (signaling); G.7715, G.7715.1, G.7715.2 (routing); G.7712 (DCN/SCN); G.7718, G.7718.1 (management)
    IETF: RFC 3473, RFC 3474, RFC 3495, RFC 3946, RFC 4208 (signaling); RFC 4202 (routing); RFC 4204, RFC 4207 (discovery); GMPLS MIB RFCs (management)
    OIF: UNI 1.0, UNI 2.0, E-NNI 1.0, E-NNI 2.0 (signaling); E-NNI OSPF 1.0 (routing)
    TMF: TMF 814, TMF 509 (management)

  • *Optical Internetworking Forum (OIF)

    Mission: To foster the development and deployment of interoperable products and services for data switching and routing using optical networking technologies
    The OIF is the only industry group that brings together professionals from the data and optical worlds
    Its 100+ member companies represent the entire industry ecosystem:
      Carriers and network users
      Component and systems vendors
      Testing and software companies

  • *OIF Technical Committee Working Groups

  • *Optical Control Plane Implementation Agreement Status

    Status categories: Approved IA, Letter Ballot, Straw Ballot, Draft
    http://www.oiforum.com/public/impagreements.html

    OIF Control Plane IA Dashboard
    Signaling: OIF-UNI-01.0-R2, OIF-UNI-01.0-R2-RSVP, OIF-UNI-02.0, OIF-UNI-02.0-RSVP, OIF-ENNI-01.0, OIF-ENNI-02.0
    Routing: OIF-ENNI-01.0-OSPF
    Security: OIF-SEP-01.0, OIF-SEP-02.1, OIF-CDR-01.0
    Management: OIF-SMI-01.0, OIF-SMI-02.1, Control Plane Logging and Auditing with Syslog

  • *ASON/GMPLS Tutorial Outline

    Introduction
    Requirements & Architecture
    Signaling
    Routing
    Control Plane Management
    OIF Interoperability Demonstrations
    Control Plane Applications / Use Cases
    Concluding remarks

  • *Business deployment considerations

  • *Optical Control Plane Business Deployment Considerations

    Optical control plane viability depends upon supporting business as well as technical requirements
      Service Provider business models
      Commercial and operational practices
      Services and network infrastructure heterogeneity
      Control and management plane heterogeneity
      Network and equipment interoperability
    Forms the foundation of the fundamental optical control plane architecture principles

  • *Service Provider Business Models

    Internet Service Provider (ISP): delivers IP-based services; owns all of its infrastructure (i.e., including fiber and duct to the customer premises), or leases some of its fiber or transport capability from a third party
    Classical Service Provider: offers L1/L2/L3 services; owns its transport network infrastructure; sells services to customers who may resell to others
    Carrier's Carrier (Service Broker): provides optical networking services; may not own the transport infrastructure(s) supporting those services (connections carried over third-party networks)
    Research networks (NRENs, GEANT2, Internet2)

  • *Commercial & Operational Practices (1)

    Enable protection of commercial business operating practices and resources from external scrutiny or control
      A network operator is likely to support a number of user services networks; a trust relationship cannot be assumed between the network and these users (or among the various users)
      A network operator will not relinquish control of its resources outside of its administrative boundaries, as the network is a prime asset
    Support a pay-for-service commercial model
      Network operators differentiate their services by defining their own branded bundles of functionality, service quality, support, and pricing plans
      Provided value-added services must be verifiable and billable in a value-preserving way

  • *Commercial & Operational Practices (2)

    Protect the security and reliability of the optical transport network
      Optical transport network connection persistence must not be affected by failures of its control plane, including those of the control communications network (Signaling Communications Network, SCN)
      The network must be safeguarded against attacks that may compromise its control plane, or seek unauthorized use of its resources (control plane security)

  • *Services Heterogeneity

    A wide range of services may be offered; e.g.,
      Classical data (e.g., best effort Internet, Frame Relay)
      Ethernet (e.g., EPL, EVPL, EPLAN, EVPLAN)
      L1/L2/L3 Virtual Private Network (VPN)
      SONET/SDH switched connection (e.g., STS-n, VC-n)
      OTH switched connection (e.g., ODU, OCh)
    Many different service deployment scenarios; e.g.,
      All services interface at the IP level
      Various services interface at L1, L2, and L3
      Various options for L1 and L2 topologies and re-configurability in access, metro, and core networks

  • *Network Infrastructure Heterogeneity

    Extremely diverse network of networks, with widely varying topologies, deployed technologies, and services/applications supported
    Support operator-specific criteria including cost, performance, and survivability characteristics
      Breadth of existing and emerging data plane technologies
      Choice of infrastructure granularity options
      Flexible capacity adjustment schemes
      Range of single- and multi-layer survivability strategies
      Differing infrastructure evolution strategies

  • *Control & Management Heterogeneity

    Control plane-based subnetworks
    Management plane-based subnetworks (with various operations support system environments)
    Hybrid control plane / management plane scenarios; e.g.,
      Use of signaling protocols in combination with centralized route calculation
      Mix of control plane and management plane based subnetworks

    [Example diagram]

  • *Optical Control Plane Network Operator Deployment Observations

    Optimal network layering, convergence choices, and equipment selection depend upon multiple factors
      Network size, geography, projected growth
      Service offerings portfolio, QoS committed in SLAs
      Cost, performance, resiliency trade-offs
      Operations support system environment
      Whether services traverse multiple operator domains
    Differing network operator transport infrastructure, control & management deployments and evolution strategies
    Optical control plane architecture must support multi-dimensional heterogeneity

  • *Heterogeneity & Research Projects: NOBEL

  • *Fundamental optical control plane architecture principles

  • *Optical Control Plane Fundamental Architecture Principles (1)

    Decouple services from service delivery mechanisms
      Wide range of network infrastructure options
      Network operator specific optimizations
    Decouple QoS from realization mechanisms
      Wide range of survivability options
      Network operator specific approaches
    Introduce the call construct, which reflects a service association that is distinct from infrastructure/realization mechanisms

  • *Optical Control Plane Fundamental Architecture Principles (2)

    Provide boundaries of policy and information sharing
      Range of network operator business models
      Varying trust relationships between users and providers, among users, among providers
      Targeted solutions, scalability considerations (scope of information dissemination), etc.
    Establish a modular architecture with interfaces at policy decision points

  • *Optical Control Plane Fundamental Architecture Principles (3)

    Provide for various distributions of control functionality among physical platforms
      Different distributions of routing and signaling control
      Fully centralized to fully distributed system designs
    Decouple the topology of the controlled network from that of the network supporting control plane communications (SCN)
      The transmission medium may be different for control plane messages and transport plane data
      Identifiers to distinguish transport resources from, and among, signaling and routing control entities, and SCN addresses

  • *ASON architecture and standards status

  • *Optical Control Plane ITU-T ASON Recommendation Framework

    Architecture: G.8080, Automatically Switched Optical Network (ASON)
    Protocol-neutral Recs.: G.7714 (auto-discovery), G.7715 (routing), G.7713 (signaling), G.7712 (DCN/SCN), G.7718 (management framework), G.7716 (control plane initialization)
    Protocol-specific Recs.: G.7713.1 (PNNI-based), G.7713.2 (GMPLS RSVP-TE), G.7713.3 (GMPLS CR-LDP), G.7714.1 (discovery for SDH/OTN), G.7715.1 (link state routing), G.7715.2 (remote path query), G.7718.1 (management information model)

  • *Optical Control Plane ITU-T ASON Architecture

    ITU-T G.8080/Y.1304, Architecture of the Automatically Switched Optical Network
      First version Approved Nov. 01; several subsequent Amendments; first major revision Approved June 2006
      Subsumes and deprecates ITU-T Rec. G.807, Requirements for Automatically Switched Transport Networks, Approved July 01
    Architecture considers business and operational aspects of real-world deployments
      Call and connection separation, connection persistence, customer/network address space isolation, domain constructs, reference points and interfaces
    Leverages transport layer network constructs utilized in all transport network architecture and equipment Recommendations
    Applicable to all connection-oriented transport networks (whether circuit or packet)

  • *ITU-T ASON Architecture: Calls and Connections

    Objective: support the ability to offer enhanced/new types of transport services facilitated by:
      Automatic provisioning of transport network connections
      Spanning one or more managerial/administrative domains
    Involves both a Service and a Connection perspective
      Call: supports the provisioning of end-to-end services while preserving the independent nature of the various businesses involved
      Connection: automatically provisions network connections (in support of a service) that span one or more managerial/administrative domains
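The call/connection separation above can be sketched as a small data model. This is purely illustrative (the class and field names are the editor's, not from G.8080 or any API): a Call is the service association, and the Connections supporting it can be replaced without tearing the Call down.

```python
from dataclasses import dataclass, field

@dataclass
class Connection:
    route: list            # ordered node IDs within one domain (illustrative)
    active: bool = True

@dataclass
class Call:
    call_id: str
    a_end: str             # user-side endpoints, named in user terms
    z_end: str
    connections: list = field(default_factory=list)

    def add_connection(self, route):
        self.connections.append(Connection(route))

    def reroute(self, old_idx, new_route):
        # Connection persistence: the Call (the service association)
        # survives while an underlying Connection is replaced.
        self.connections[old_idx].active = False
        self.connections.append(Connection(new_route))

call = Call("c1", "userA", "userZ")
call.add_connection(["N1", "N2", "N3"])
call.reroute(0, ["N1", "N4", "N3"])
assert len([c for c in call.connections if c.active]) == 1
```

The point of the separation is visible in `reroute`: the connection changes, the call identity and endpoints do not.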

  • *ITU-T ASON Architecture: Domains

    ASON domains represent a generalization of existing traditional concepts
      Transport definitions of administrative/management domains
      Internet administrative regions
    Domains may express differing:
      Administrative and/or managerial responsibilities
      Trust relationships, addressing schemes
      Distributions of control functionality
      Infrastructure capabilities, survivability techniques, etc.
    Domains are established by network operator policies

  • *ITU-T ASON Architecture: Interfaces (1)

    Service demarcation points are where call control is provided
    Inter-domain interfaces are service demarcation points
    Design modularized around open interfaces at domain boundaries: UNI, E-NNI, I-NNI

  • *ITU-T ASON Architecture: Interfaces (2)

    UNI separates the concerns of the user and provider:
      "3.6 Modularity is good. If you can keep things separate, do so." - RFC 1958
      Objects referenced are User objects, and are named in User terms
    UNI enables:
      Client-driven end-to-end service activation
      Multi-vendor inter-working
      Multi-client: IP, Ethernet, TDM, etc.
      Multi-service: SONET/SDH, Ethernet, etc.
      Service monitoring interface for SLA management

    [Diagram: NEs in Provider A and Provider B connected via E-NNI, with client UNIs at the edges]

  • *ITU-T ASON Architecture: Interfaces (3)

    E-NNI enables:
      End-to-end service activation
      Multi-vendor inter-working
      Multi-carrier inter-working
      Independence of survivability schemes for each domain
    I-NNI supports:
      Intra-domain connection establishment
      Explicit connection operations on individual switches

  • *ITU-T ASON Architecture: Call Control & Interfaces

    Call state is maintained at network access points, and at key network transit points where it is necessary or desirable to apply policy
    Calls that span multiple domains are comprised of call segments, with call control provided at service demarcation points (UNI/E-NNI)
    One or more connections are established in support of individual call segments, with the scope of connection control typically limited to a single call segment

    [Diagram: a CALL across Domains A and B, composed of UNI, Domain A, E-NNI, Domain B, and UNI call segments, each supported by CONNECTIONS]

  • *Components of Control Plane enabled Network Domains

    [Diagram: management plane, CONTROL PLANE (with CP management), DCN, and data plane]

  • *Optical Control Plane Service: Permanent Connection

    All intra-/inter-domain calls and connections are provisioned by Management Plane actions
    [Diagram: client domains C1, C2 and transport network provider domains TN1, TN2; each segment of the permanent connection is provisioned]

  • *Optical Control Plane Service: Soft Permanent Connection (SPC)

    The management plane of a transport network provider domain initiates the call/connection
    [Diagram: permanent connections in the client domains C1, C2; a switched connection across TN1 and TN2 via E-NNI, initiated from the SPC initiating domain]

  • *Optical Control Plane Service: Switched Connection (SC)

    The management plane of a client domain initiates the call/connection
    [Diagram: switched connection from C1 to C2 across TN1 and TN2, via UNI, E-NNI, and UNI, initiated from the SC initiating domain]

  • *G.805 Transport Foundation

  • *G.805 Foundation Elements: Transport Resources

    Introduction of automated control doesn't remove/change the attributes of transport resources
    The Control Plane needs to be able to configure the same attributes
    Introduction of automated control doesn't modify the functional components that exist within the transport plane

  • *Transport Network/Equipment Architecture: Informal Specification Approaches

    Described in terms of network elements, facilities, and cross-connections
    Facilities identified in terms of the physical layer characteristics
    Cross-connections between constituents of facilities or embedded facilities

    [Diagram: DS1 service example through 3:3 and 3:1 DCSs, a regenerator, and SONET facilities carrying DS1/DS3]

  • *Transport Network/Equipment Architecture: Informal Specification Approaches - Issues

    Model specific to the technologies used in the NEs
    Difficult to understand network topology without understanding details of the NEs
    Subject to differing interpretations of equipment specifications/behaviors arising from natural language description
    Usage of different terminology; e.g., in doing a functional decomposition, different specifiers may group functionality in different ways but use the same term to denote the functional block
    Development of more formalized specification techniques initiated during the 1988 time frame

  • *Transport Network Constructs: Formal Specification Techniques

    Recognize new challenges of the emergent multi-carrier, multi-vendor telecommunications environment
      Increasingly complex networks and behaviors, arising from deployment of multi-technology networks & equipment
      No single network architecture, or single set of network elements, that suits all operators
    Better support for multi-carrier/multi-vendor interoperability
      Unambiguous specifications that don't impose unnecessary architectural constraints
      Network operator transport infrastructure technology deployment choices and evolution strategies
      Network equipment provider innovation re equipment types

  • *Transport Network Constructs: Formal Specification Techniques - G.805

    Describes the generic characteristics of networks using a common language
      Transcends technology and physical architecture choices
      Provides a view of functions or entities that may be distributed among a number of equipments
    Defines elements that support the modeling of topological and functional concepts
      Topology refers to how elements of the network are interconnected
      Functions refer to how signals are transformed during their passage through the network
    Defines a small number of architectural components that may be interconnected to represent various network/equipment configurations

  • *Transport Network Constructs: Topological - G.805 Layers

    A layer is defined in terms of its set of signal properties - its characteristic information
    Networks can be represented in terms of a stack of client/server relationships
      Helps manage the complexity created by the presence of different types of characteristic information in networks
      Allows the management of each layer to be similar

  • *Transport Network Constructs: Topological - G.805 Example (vertical)

    DS3 client carried over an STM-N signal
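The client/server layer stack can be sketched as a simple ordered list. The layer names below are an illustrative simplification of the DS3-over-STM-N example (the exact set of intermediate layers depends on the deployment), not a normative G.805 enumeration:

```python
# Each layer is defined by its characteristic information (CI); the
# network is modelled as a stack of client/server relationships.
layers = [
    {"layer": "DS3 path", "ci": "DS3"},
    {"layer": "VC-3 path", "ci": "VC-3"},
    {"layer": "STM-1 multiplex section", "ci": "STM-1"},
    {"layer": "Optical section", "ci": "optical"},
]

def server_of(client_name):
    """Return the layer directly serving the given client layer."""
    names = [l["layer"] for l in layers]
    i = names.index(client_name)
    return layers[i + 1]["layer"] if i + 1 < len(layers) else None

assert server_of("DS3 path") == "VC-3 path"
```

Because every layer has the same shape (a CI plus a server relationship), "the management of each layer can be similar", as the slide puts it.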

  • *Transport Network Constructs: DS-1 Service Architecture & Equipment

    [Diagram: DS-1 service through 3:3 DCS, 3:1 DCS, regenerator, and muxes, decomposed into DS-1 path trails/connections, DS-3 path trails/connections, STS-1 trails/connections, SONET line trails/connections, section trails/connections, and optical trails]

  • *Transport Functional Modeling: Topological - G.805 Partitioning

    Even for a single layer, complexity arises from the many different network nodes and connections between them
    Partitioning is defined as the division of layer networks into separate subnetworks that are interconnected by links representing the available transport capacity between them
    Helps manage complexity by using the principle of recursion to tailor the amount of detail to be understood at a particular time according to the need of the viewer
    Allows the management of each partition to be similar

  • *Transport Functional Modeling: Topological - G.805 Partitioning Example (horizontal)

    [Diagram: a layer network partitioned into subnetworks interconnected by links]
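The recursion principle behind partitioning can be shown with a toy structure (the node and subnetwork names are invented for illustration): a subnetwork is either an atomic node or a container of smaller subnetworks, and the same operation applies at every level of detail.

```python
def count_nodes(subnetwork):
    """Recursively count atomic nodes; the same function works at any
    level of the partitioning hierarchy (the G.805 recursion idea)."""
    if isinstance(subnetwork, str):          # atomic node: limit of subdivision
        return 1
    return sum(count_nodes(child) for child in subnetwork["members"])

core = {"name": "core",
        "members": ["N1", "N2",
                    {"name": "metro", "members": ["M1", "M2", "M3"]}]}
assert count_nodes(core) == 5
```

A viewer who only needs the core view sees three members; expanding "metro" reveals the detail, which is exactly the "tailor the amount of detail to the viewer" point.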

  • *G.805 Transport Network Constructs: Architectural Component Definitions

    Functional entities
      Adaptation: adapts the client signal into a form suitable for the server layer
      Termination: where information concerning the integrity and supervision of adapted information may be generated and added, extracted and analyzed
    Topological entities
      Trail: provides an end-to-end connection offering the means to check transport quality
      Network Connection: same scope as a trail, but without ensuring integrity
      Link: represents available transport capacity between subnetworks (static)
      Link Connection: transfers information transparently across a link
      Subnetwork: describes flexible connectivity
      Subnetwork Connection: transfers information across a subnetwork
    Points
      Termination Connection Point (TCP): any binding involving a termination function source or sink
      Connection Point (CP): any binding involving an adaptation source or sink
      Access Point (AP): delimits a layer network

  • *Transport Functional Entities: Trail Termination

    Trail Termination Source: adds overhead (OH) to the input information (payload) to allow the integrity of the transfer to be monitored
    Trail Termination Sink: removes the overhead and outputs the remaining payload information; determines the integrity of the transfer
    The Characteristic Information for a trail is the payload plus the overhead

  • *Transport Functional Entities: Adaptation

    Adaptation Source: converts client layer characteristic information to a form suitable for transport over a trail in the server layer network; this is termed Adapted Information
    Adaptation Sink: converts the adapted information from the server layer network back to the client layer characteristic information
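The trail termination source/sink pair from the previous two slides can be sketched as a pair of functions. The use of a CRC as the overhead is purely an illustrative stand-in (real SDH/OTN overhead carries trace, BIP, and status bytes, not a CRC32): the point is that the source prepends overhead to form the characteristic information, and the sink strips it and checks transfer integrity.

```python
import zlib

def terminate_source(payload: bytes) -> bytes:
    """Trail termination source: add overhead so integrity can be monitored."""
    oh = zlib.crc32(payload).to_bytes(4, "big")   # stand-in for real OH
    return oh + payload                            # CI = overhead + payload

def terminate_sink(ci: bytes) -> bytes:
    """Trail termination sink: strip overhead, verify transfer integrity."""
    oh, payload = ci[:4], ci[4:]
    if int.from_bytes(oh, "big") != zlib.crc32(payload):
        raise ValueError("trail integrity check failed")
    return payload

assert terminate_sink(terminate_source(b"client data")) == b"client data"
```

Adaptation would sit around this pair, mapping client characteristic information into the payload that the server-layer trail carries.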

  • *G.805 Transport Network Constructs: Multi-layer Architecture - DS3 over STM-N

    [Diagram: DS3 client signal adapted into a VC-3 trail (VC-3 trail terminations at the APs); the VC-3 network connection is composed of VC-3 subnetwork connections and link connections across VC-3 subnetworks; VC-3 link connections are supported by STM-1 multiplex section trails; TCPs and CPs mark the bindings]

  • *Key Observations

    Each layer network has its own topology
      NEs may have different neighbors in different layer networks
      NEs do not necessarily appear in all layer networks
      NEs may perform different functions within a layer network, or in different layer networks
    Link connections in a client layer are created by configuring trails and adaptation functions in a server layer
    Differences in server layer networks are transparent to the client

  • *Control Components

  • *G.8080 Control Plane Constructs: Topological Entity Definitions

    Subnetwork Point (SNP): an abstraction of a G.805 CP or TCP; SNPs are associated to form a connection
    Subnetwork Point Pool (SNPP): a set of subnetwork points that are grouped for the purposes of routing
    SNPP link: a link associated with SNPPs in different subnetworks
    Routing Area: defined by a set of subnetworks, the SNPP links that interconnect them, and the SNPPs representing the ends of the SNPP links exiting that routing area. A routing area may contain smaller routing areas interconnected by SNPP links. The limit of subdivision results in a routing area that contains a single subnetwork.

  • *G.8080 Control Plane Constructs: Topological Entity Relationships

    SNP: Subnetwork Point
    SNPP: SNP Pool
    [Diagram: relationship between the architectural entities in the Transport Plane and the Control Plane - SNPs abstract the CPs/TCPs around trail termination and adaptation functions; SNPs are grouped into SNPPs joined by SNPP links]

  • *G.8080 Control Plane Constructs

    Control plane architecture described in terms of components and interfaces
      Represent logical functions (abstract entities) rather than physical implementations
      The actual location/distribution of the components is not constrained
    To facilitate the construction of different scenarios, leverages the Unified Modeling Language (UML)
    Not all of the reference points (UNI, E-NNI) need to be instantiated
    A single instantiation of a G.8080 control plane may control multiple layer networks, with an explicit definition of the interlayer interaction (including none)

  • *Introduction to ASON Components

    LRM - Link Resource Manager
    CCC - Calling/Called Party Call Controller
    NCC - Network Call Controller
    CC - Connection Controller
    RC - Routing Controller
    PC - Protocol Controller
    DA - Discovery Agent
    TAP - Termination & Adaptation Performer
    TP - Traffic Policing Component

    [Component diagram: each component exposes monitor, policy, and configuration ports]

  • *Link Resource Manager

    Responsible for the control-plane local link connection inventory
    Resources provided through configuration or discovery
    Receives requests for resources from the Connection Controller
    Provides information to Routing to facilitate topology advertisements

  • *Call Controller

    Responsible for providing a service across the network
    Orchestrates components to meet the service requested
    Different domains can have different policies
    Invoked by Management Request or by Signaling messages
    Interacts with peer Call Controllers via a Protocol Controller

  • *Connection Controller

    Responsible for establishing connections across a domain
    Requests the route to use from the Routing Controller
    Requests specific local link resources from the LRM
    Interacts with peer Connection Controllers via a Protocol Controller

  • *Routing Controller

    Responsible for providing paths between two points in the network
    Maintains the topology view
    Paths are calculated to meet service constraints
      Signal type
      Diversity
    Interacts with peer Routing Controllers via a Protocol Controller

  • *Protocol Controller(s)

    Responsible for providing protocol-specific behavior
    Can be a separate controller per client function, or a merged function (e.g., CCC/NCC and CC)
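The component interactions described on the preceding slides can be sketched as three cooperating classes. The class names follow the G.8080 roles, but the code is a hypothetical toy (breadth-first search stands in for constrained routing, and a counter stands in for the link connection inventory): the Connection Controller asks the Routing Controller for a path, then asks the Link Resource Manager for a link connection on each hop.

```python
class RoutingController:
    """Maintains a topology view and provides paths between two points."""
    def __init__(self, topology):
        self.topology = topology                  # adjacency: node -> neighbours
    def path(self, src, dst):
        frontier, seen = [[src]], {src}
        while frontier:                           # BFS stands in for real routing
            route = frontier.pop(0)
            if route[-1] == dst:
                return route
            for nxt in self.topology.get(route[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(route + [nxt])
        return None

class LinkResourceManager:
    """Tracks the local link connection inventory."""
    def __init__(self, capacity):
        self.capacity = dict(capacity)            # link -> free link connections
    def allocate(self, link):
        if self.capacity.get(link, 0) <= 0:
            raise RuntimeError("no free link connection on %s" % (link,))
        self.capacity[link] -= 1

class ConnectionController:
    """Establishes a connection: route from the RC, resources from the LRM."""
    def __init__(self, rc, lrm):
        self.rc, self.lrm = rc, lrm
    def setup(self, src, dst):
        route = self.rc.path(src, dst)
        for a, b in zip(route, route[1:]):
            self.lrm.allocate((a, b))
        return route

rc = RoutingController({"A": ["B"], "B": ["C"], "C": []})
lrm = LinkResourceManager({("A", "B"): 1, ("B", "C"): 1})
cc = ConnectionController(rc, lrm)
assert cc.setup("A", "C") == ["A", "B", "C"]
```

In a real implementation each class would talk to its peers through a Protocol Controller carrying the signaling and routing protocol messages; here they are simply method calls within one process.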

  • *Example Component Interactions

  • *Identifiers

  • *Identifiers: Names & Addresses

    An identifier provides a set of characteristics for an entity that makes it uniquely recognizable
    Name: identifies an entity
      Unique only if it is unique within the context, or namespace, it is being used in
      The same entity may have more than one name in different namespaces
    Address: identifies a position in a specific topology
      Unique for the topology
      Typically hierarchically composed; allows for address summarization for locations that are close together
    Addresses should reflect connectivity, not identity
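Hierarchical composition is what makes summarization work. The IP-style prefixes below are only an illustration borrowed from the data world (ASON addressing is not IP-specific): because close-together endpoints share a prefix, one summary advertisement covers all of them.

```python
import ipaddress

# Four "close together" endpoints (illustrative addresses):
endpoints = [ipaddress.ip_address("10.1.0.%d" % i) for i in range(1, 5)]

# One hierarchical summary covers them all, so a routing controller
# can advertise a single entry instead of four:
summary = ipaddress.ip_network("10.1.0.0/24")
assert all(ep in summary for ep in endpoints)
```

This is also why addresses should reflect connectivity rather than identity: summarization only helps if topological neighbours actually share address prefixes.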

  • *Categories of Identifiers

    Management plane identifiers
    Transport plane identifiers (G.805)
    Identifiers for transport resources that are used by the control plane
    Identifiers for Signaling & Routing Protocol Controllers (PCs)
    Identifiers for locating PCs in the SCN
    Identifiers to distinguish transport resources from, and among, signaling and routing control entities, and SCN addresses

  • *Identifier Spaces

    [Diagram: identifier spaces across planes - Management Plane (e.g., CTP, TTP); Data Plane (e.g., G.805 CP, TCP); Control Plane (SNPP and SNP IDs, Node ID, UNI/E-NNI TRI, signaling and routing PC/RC IDs, e.g., G.8080 CCC, NCC, CC, RC); DCN with MCN/SCN addresses]

  • *Relationship with GMPLS Architecture

  • *Relationship with GMPLS Architecture Models

    Differing terminology and descriptive techniques
      More classical MPLS terminology (e.g., LSP) as compared to transport functional modeling terminology
      Natural language architecture descriptions as compared to a formalized control plane component architecture
    Peer model (also called the integrated model): corresponds to the ASON architecture with no UNI or E-NNI interfaces instantiated
      Assumes a community of users with mutual trust and shared goals
      No inherent policy or security boundaries
      Routing and signaling protocols flow within the network without any filtering or other constraints imposed

  • *Relationship with GMPLS Architecture Models (cont.)

    Overlay model: most closely corresponds to the ASON architecture with UNI (and no E-NNI interfaces instantiated)
      Edge nodes are not aware of the topology of the core nodes (core nodes act more as a closed system)
      Core and edge nodes may have a routing protocol interaction for exchange of reachability information to other edge nodes
    Augmented model: most closely corresponds to an ASON architecture in which E-NNI interfaces have been instantiated
      Reflects the case of policy-driven exchange of routing and topology information between core and edge nodes

  • *ASON/GMPLS Tutorial Outline

    Introduction
    Requirements & Architecture
    Signaling
    Routing
    Control Plane Management
    OIF Interoperability Demonstrations
    Control Plane Applications / Use Cases
    Concluding remarks

  • *Signaling in Transport Networks

    Essentially a Management Plane function: Distributed Connection Management
    Signaling has existed for many years in telephony, ISDN, ATM, and MPLS
    Signalling is extended for transport networks due to:
      Fixed granularities defined by the multiplexing hierarchy
      Protection functions in the data plane
      Separation of the data plane from the control and management planes
      Addressing/Naming: separation of spaces between data plane and control plane
    Connection centric rather than protocol centric
      A connection exists even if the control plane ceases

  • *Protocols and Architectures

    Signaling capabilities are implemented in protocols, whose pieces can be combined according to different architectures
    Different SDOs (IETF, ITU-T, OIF) contribute pieces and architectures to Control Plane solutions

  • *Signaling in ASON Architecture

    Architectural concepts for ASON signaling include:
      Calls, connections, call/connection separation
      Reference points, addressing
    Signaling protocols are implemented at the UNI, I-NNI, and E-NNI reference points
    Call and connection setup are implemented in protocol with user/service and network addressing

    [Diagram: a CALL across Domains A and B, composed of UNI, Domain A, E-NNI, Domain B, and UNI call segments, each supported by CONNECTIONS]

  • *ASON Protocol-Neutral Signaling

    ITU-T Rec. G.7713/Y.1704, Distributed Call and Connection Management (DCM)
      First version Approved Nov. 01; several subsequent Amendments; first major revision Consented Feb. 06
    Protocol-neutral specifications encompassing UNI, I-NNI, and E-NNI, supporting both soft-permanent and switched connections
    Provides distributed call and connection management requirements
      Operations procedures, signaling network resilience to user and network defects, signal flow exception handling
      Restoration for single and multiple rerouting domains
    Includes attribute specifications, message specifications, state diagrams, Call and Connection Controller management
    Basis for mapping to specific protocol solutions (G.7713.x series)

  • *Protocol Specific Signaling

    ITU-T Recommendations for ASON signaling protocol extensions - Approved March 03
      Rec. G.7713.1, DCM Signaling Mechanism Using PNNI
      Rec. G.7713.2, DCM Signaling Mechanism Using GMPLS RSVP-TE
      Rec. G.7713.3, DCM Signaling Mechanism Using GMPLS CR-LDP
    IETF base GMPLS signaling protocol RFCs - Approved by the IESG, published Jan. 03
      RFC 3471, GMPLS Signaling Functional Description
      RFC 3472, GMPLS CR-LDP Extensions
      RFC 3473, GMPLS RSVP-TE Extensions
    IETF Informational RFCs containing ASON GMPLS signaling protocol extensions (aligned with G.7713.2 & G.7713.3) and IANA code point assignments - Approved by the IESG, published March 03
      RFC 3474, IANA Assignments for GMPLS RSVP-TE Usage and Extensions for ASON
      RFC 3475, IANA Assignments for GMPLS CR-LDP Usage and Extensions for ASON
      RFC 3476, IANA Assignments for LDP, RSVP, and RSVP-TE Extensions for Optical UNI Signaling

  • *OIF User Network Interface Signaling Specifications — Control plane work driven by Carrier Working Group requirements. Architecture consistent with ITU-T ASON Recs. G.8080, G.7713; signaling specifications in IAs based upon IETF GMPLS RFCs and ITU-T Recs. G.7713.2/3, specifying detailed usage of selected options in the protocols. OIF UNI 1.0 Signaling Specification, published Oct. 01: defines the signaling protocols and mechanisms implemented by client and transport network equipment from different vendors to invoke services; feature focus on SDH/SONET VC-3/STS-1 and higher. OIF UNI 1.0 R2: UNI 1.0 Signaling Specification, Release 2, published Feb. 04 (OIF-UNI-01.0-R2-Common, User Network Interface (UNI) 1.0 Signaling Specification, Release 2: Common Part; OIF-UNI-01.0-R2-RSVP, RSVP Extensions for User Network Interface (UNI) 1.0 Signaling, Release 2): updates UNI 1.0 but does not change UNI 1.0 functionality; reflects subsequent developments in other standards bodies; builds upon lessons learned from the OIF's multi-vendor interoperability event conducted at OFC 2003.

  • *OIF User Network Interface Signaling Specifications (cont.) — OIF UNI 2.0 incorporates architectural enhancements per ITU-T ASON Rec. G.8080 and G.7713 evolution. Base features: support of Ethernet services (almost complete); support of G.709 (complete); enhanced security (complete); call/connection separation (complete); support of sub-STS-1 granularity (complete).

  • *OIF External Network Node Interface Signaling Specifications — Control plane work driven by Carrier Working Group requirements. Architecture consistent with ITU-T ASON Recs. G.8080, G.7713, G.7715, G.7715.1; signaling specifications in IAs based upon IETF GMPLS RFCs and ITU-T Recs. G.7713.2/3, specifying detailed usage of selected options in the protocols. OIF E-NNI 1.0, Intra-Carrier E-NNI Signaling IA, published Feb. 04: enables end-to-end connection management by providing a uniform way for carriers to interconnect network domains; feature support consistent with UNI 1.0/1.0R2. OIF E-NNI 2.0, E-NNI Signaling IA, work in progress: updated with E-NNI Signaling 1.0 Principal Ballot comments (from Feb. 04); updated to reflect ITU-T Recommendation and IETF RFC progress; includes updates based upon lessons learned from the 2004 and 2005 OIF World Interoperability Demonstrations; includes features to support UNI 2.0.

  • *ITU-T/OIF and IETF Signaling Protocol Differences — Due to concerted effort, the signaling protocols are mostly the same: same RSVP-TE PATH/RESV processing; same RSVP-TE refresh mechanism; no change to defined RSVP objects; no new messages. What are the differences between the ITU-T/OIF and IETF ASON/GMPLS signaling protocols? Three new call-related objects, and some new C-Types associated with the UNI and E-NNI; need for usage of ResvTear/ResvErr (no change to procedures if used).

    [Figure: ITU-T G.7713.2, OIF UNI 1.0 R2 and OIF E-NNI 1.0 are consistent; both utilize the signaling protocols defined in RFC 3473 and other base IETF GMPLS RFCs, with the OIF IAs additionally specifying detailed usage of selected options in the protocols]

  • *Signaling Protocol Interworking Scenario — Dynamic signalling and routing control over the OTN/SONET/SDH network; dynamic signalling for Ethernet services using the ASON interlayer architecture. OIF signalling is based on G.7713, G.7713.2, G.7713.3; OIF E-NNI routing on G.7715, G.7715.1; Ethernet services on G.8010, G.8011, MEF 10. IETF counterparts: RFC 4139, RFC 4208, RFC 3472, RFC 3473, RFC 3946, RFC 4203. [Figure: Providers A, B and C with clients attached via OIF UNI and IETF UNI, domains interconnected via OIF E-NNI, with protocol interworking between the ITU-T/OIF and IETF sides]

  • *OIF ASON/GMPLS Interworking Project — OIF guideline document on signaling protocol interworking of ASON/GMPLS network domains. The document defines signaling protocol interworking methods between network domains utilizing OIF/ITU-T and IETF GMPLS: interworking of ASON UNI and E-NNI (based on GMPLS RSVP-TE with ASON extensions, per G.7713.2 and OIF IAs) and IETF interfaces (based on GMPLS RSVP-TE, per RFC 3473 and RFC 4208). Detailed interworking scenarios and functions, e.g.: required translation, resolution or re-mapping of address and identifier objects; list of messages or objects supported in one specification but not the other, along with the resultant behavior; list of objects which are examined or processed in one specification but are tunneled or opaque to the other. Describes pragmatic implementations of interoperable solutions.

  • *Interlayer Call Technology — The client makes an Ethernet call to the destination; the network triggers SONET/SDH calls to match the Ethernet service request; the control plane sets up the Ethernet and SONET/SDH connections, and controls GFP/VCAT. [Figure: client UNI-C attached to OXC UNI-N, with the Ethernet service carried over a SONET/SDH server-layer connection between OXCs]

  • *Interlayer Signaling — The interlayer architecture enables a business boundary between layers. Service separation between layers is at the interlayer NCC relationship. Note that VCAT is a separate layer. [Figure: ETH MAC clients and ETH NCCs at the Ethernet layer, VC-3 NCCs at the server layer; interlayer vs. within-layer relationships across the layer boundary]

  • *ASON/GMPLS Tutorial Outline — Introduction; Requirements & Architecture; Signaling; Routing; Control Plane Management; OIF Interoperability Demonstrations; Control Plane Applications / Use Cases; Concluding remarks

  • *Basics of IP Routing — An IP routing protocol is the exchange of information between IP routers that allows them to determine how to forward IP packets. There are different types of routing protocols: Distance Vector (RIP, IGRP), Path Vector (BGP), Link State (OSPF, IS-IS). Link State routing protocols in particular support distribution of the network topology as links and nodes. For IP, every router must have exactly the same network topology information (links, nodes, and link weights), and every router must run exactly the same path computation algorithm. Failure to ensure these last two requirements can result in routing loops and black holes.

  • *Operation of Link State Routing Protocols — Nodes establish routing adjacencies, exchange local link information, and forward received link/node information. [Figure: nodes D and E exchanging link information]

  • *Routing Topology Database — Link State Advertisements (LSAs) and other advertisements form the topology database. They identify a link by its remote link endpoint and carry link information, e.g., capacity and weight. Periodic or triggered updates are reliably flooded; neighbors keep identical topology databases, and each node ends up with the full topology of the network. [Figure: nodes A and B flooding advertisements]
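The flooding behaviour just described can be sketched in a few lines; a minimal illustrative model (node names and the ring topology are invented), not any particular OSPF implementation:

```python
# Minimal link-state flooding sketch: each node re-floods advertisements
# it has not yet seen, so all nodes converge on an identical database.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # routing adjacencies
        self.database = {}    # origin -> (sequence number, link info)

    def originate(self, seq, links):
        self.receive(self.name, seq, links, sender=None)

    def receive(self, origin, seq, links, sender):
        # Accept only newer advertisements (higher sequence number).
        if origin in self.database and self.database[origin][0] >= seq:
            return
        self.database[origin] = (seq, links)
        # Forward to all neighbors except the one we received it from.
        for n in self.neighbors:
            if n is not sender:
                n.receive(origin, seq, links, sender=self)

def connect(x, y):
    x.neighbors.append(y)
    y.neighbors.append(x)

# Four-node ring A-B-C-D-A; A and C each advertise their local links.
a, b, c, d = (Node(x) for x in "ABCD")
connect(a, b); connect(b, c); connect(c, d); connect(d, a)
a.originate(1, {"B": 1, "D": 1})
c.originate(1, {"B": 1, "D": 1})

# Every node ends up with the same topology database.
assert a.database == b.database == c.database == d.database
```

The sequence-number check is what makes the flooding terminate: a duplicate advertisement is dropped instead of being forwarded forever around the ring.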

  • *Shortest Path Calculation Determines Packet Forwarding — Shortest path techniques: links are characterized by a single link weight.
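With a single weight per link, shortest-path selection is plain Dijkstra; a small sketch over an invented four-node topology:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over {node: {neighbor: weight}}; returns (cost, path)."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph[node].items():
            if nbr not in visited:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return None  # destination unreachable

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}
cost, path = shortest_path(graph, "A", "D")
# A->B->C->D costs 1+1+1 = 3, cheaper than A->B->D (6) or A->C->D (5).
assert (cost, path) == (3, ["A", "B", "C", "D"])
```

Because every router runs this same algorithm over the same database, all of them agree hop by hop on the forwarding decision — which is exactly the consistency requirement the previous slide warned about.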

  • *How is this Useful for Transport Networks? — Basic network inventory: routing protocols provide a network link inventory, useful for operations and planning. Topology and resource utilization: required for distributed connection path selection/computation. Disaster recovery: want timely information of what's available in the network (nodes, links, spare capacity, etc.).

  • *Extended for Non-IP Networks in IETF GMPLS — New link and router advertisements in RFCs 3630, 4202 and 4203, kept separate from IP link information to avoid confusion (Opaque LSAs are kept out of the IP topology DB). Link switching type and metric for non-IP types, e.g., TDM, WDM. Link characteristics, e.g., protection: linear (1+1, 1:1, 1:N), ring, etc. Diverse routing information: Shared Risk Link Groups (SRLGs) and other non-IP link characteristics.
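A TE link record carrying these non-IP attributes, plus an SRLG diversity check over two candidate paths, might look as follows; the field names and values are illustrative, not the actual RFC 3630/4202 encodings:

```python
from dataclasses import dataclass, field

@dataclass
class TELink:
    """Illustrative TE link record with GMPLS-style attributes."""
    name: str
    switching_type: str           # e.g. "TDM", "LSC" (lambda)
    metric: int
    protection: str               # e.g. "1+1", "unprotected"
    srlgs: frozenset = field(default_factory=frozenset)

def srlg_disjoint(path_a, path_b):
    """Two paths are SRLG-diverse if no shared risk group is on both."""
    risks_a = set().union(*(link.srlgs for link in path_a))
    risks_b = set().union(*(link.srlgs for link in path_b))
    return not (risks_a & risks_b)

# Working and protection paths; links w1 and p1 share conduit SRLG 7,
# so the pair is not diverse even though the links themselves differ.
w1 = TELink("w1", "TDM", 10, "1+1", frozenset({7}))
w2 = TELink("w2", "TDM", 10, "1+1", frozenset({8}))
p1 = TELink("p1", "TDM", 20, "unprotected", frozenset({7, 9}))
p2 = TELink("p2", "TDM", 20, "unprotected", frozenset({9}))

assert not srlg_disjoint([w1, w2], [p1, p2])  # SRLG 7 appears on both
assert srlg_disjoint([w2], [p2])              # no common risk group
```

This is the point of advertising SRLGs: fiber-level diversity cannot be inferred from the IP-style topology alone, so the risk-group membership has to travel with the link advertisement.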

  • *ASON Routing Specifications and Activities — Protocol-neutral routing fundamentals (G.7715, G.7715.1, G.7715.2): function of routing and routing protocols in ASON; ASON link state routing; new routing protocol requirements for ASON. Protocol-specific routing (OIF): OSPF extensions based on ASON routing requirements; application of topology abstraction. Future work: PCE.

  • *ASON Routing — Routing Components. The primary function of ASON transport routing is to provide path computation to Connection Management (Control Plane). Path computation and the associated distribution of topology information are done by the Routing Controller (RC); conversion into a specific routing protocol and the associated protocol functions (e.g., state machines) are done by the Protocol Controller (PC). [Figure: key modules CC (Connection Controller), RC (Routing Controller), LRM (Link Resource Manager) and PC (Protocol Controller), with monitor, policy and config ports and the CCC/NCC relationship]

  • *ASON Routing — IP Routing and Transport Network Routing. The data planes of transport networks and classic IP networks differ: for classic IP, every packet is forwarded based on address translation; for label switching (generalized to TDM or WDM), once a cross connection is made, data flows without needing further path computation. [Figure: IP routing and forwarding — OSPF LSAs between IP router peers build a topology database, the shortest path algorithm (Dijkstra) fills the IP forwarding table (IP address to next hop) consulted per packet — contrasted with transport routing and forwarding, where a G.7715-compliant protocol between peer routing controllers builds an L1 bearer topology, a source route algorithm feeds signaling, and cross connects form the SDH path in the data plane]

  • *ASON RoutingSome differences between IP and Transport Network Routing

    Classic IP Routing vs. Transport Routing:
    - Distribution of routing protocol entities — IP: always distributed. Transport: domain-specific, may be distributed or centralized.
    - Path computation — IP: identical path computation algorithm at each node. Transport: may be different path computation algorithms at different nodes.
    - Forwarding process — IP: path computed for each packet at each node. Transport: path computed only at connection setup, usually only at the source.
    - Forwarding dependency — IP: data cannot be forwarded without a stable routing database. Transport: data can be forwarded on existing connections, but new connections cannot be created.
    - Looping — IP: potential problem any time the routing table changes. Transport: prevented by strict source routing.
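The loop-prevention point — strict source routing — can be sketched: the source computes the full explicit route once, and transit nodes only execute their own hop, so routing-database skew between nodes cannot create a loop. A toy model, not a real RSVP-TE ERO encoding:

```python
def setup_connection(explicit_route, cross_connects):
    """Walk a source-computed explicit route hop by hop, installing a
    cross-connect at each transit node. Transit nodes never recompute
    the path, so disagreeing databases cannot cause a forwarding loop;
    a malformed route that revisits a node is rejected outright."""
    seen = set()
    for here, nxt in zip(explicit_route, explicit_route[1:]):
        if here in seen:
            raise RuntimeError("loop in explicit route; reject at source")
        seen.add(here)
        cross_connects.setdefault(here, []).append((here, nxt))
    return cross_connects

xcs = {}
setup_connection(["A", "B", "C", "D"], xcs)
assert xcs == {"A": [("A", "B")], "B": [("B", "C")], "C": [("C", "D")]}

# A route that revisits a node is detected before any setup happens:
try:
    setup_connection(["A", "B", "A", "C"], {})
    assert False, "loop should have been rejected"
except RuntimeError:
    pass
```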

  • *ASON Routing Specifications — ITU-T Rec. G.7715, ASON Routing, approved in July 02. Applicable after the network has been subdivided into Routing Areas and the necessary network resources accordingly assigned; focus upon inter-domain routing supporting the optical transport networking application. Provides architecture, requirements, high-level attributes, messages, and state diagrams from a protocol-neutral perspective. Protocol-neutral routing requirements include support for, e.g.: hierarchically contained Routing Areas; non-congruent routing adjacency topology and transport network topology; independence from intra-domain protocol and control distribution choices; policy constraints on information exchange (e.g., imposed at the E-NNI); architectural evolution (levels, aggregation, segmentation); multiple links between nodes, allowing for link and node diversity. Encompasses different classes of protocols (e.g., link-state, path vector); facilitates comparison of specific inter-domain routing protocol proposals against quantifiable requirements.

  • *Link State Routing — Objective: disseminate and update a common network topology view across all nodes in a domain. Basic link state routing functions: Hello/link adjacency procedure; database synchronization procedure; periodic or event-driven link status updates. Link state routing protocols: OSPF, IS-IS, PNNI.

  • *ASON Routing Architecture & Requirements — Link State. ITU-T Rec. G.7715.1/Y.1706.1, ASON Routing Architecture and Requirements for Link State Protocols, approved Feb. 04. Based upon the ASON foundation Recommendations (G.8080, G.7715); further architectural analysis for link state routing. Encompasses exchange of routing information between hierarchical routing levels, including visibility regarding reachability and topology; node and link routing attributes. Path computation and routing are impacted by layer-specific, layer-independent, and client/server adaptation information elements. The routing protocol must be applicable to any transport layer network, and the representation of routing attributes should not preclude their applicability to other transport network layers; layer-specific characteristics are carried per link attribute.

  • *G.7715.1 Link Characteristics

    Layer Specific Characteristic | Capability | Usage
    Signal Type | Mandatory | Optional
    Link Weight | Mandatory | Optional
    Resource Class | Mandatory | Optional
    Local Connection Type | Mandatory | Optional
    Link Capacity | Mandatory | Optional
    Link Availability | Optional | Optional
    Diversity Support | Optional | Optional
    Local Client Adaptations Supported | Optional | Optional

  • *Comparison with IP Link State Routing Protocols — ASON link state routing relies on the basic link state functions: adjacency; database synchronization; periodic or event-driven advertisements. Differences: control plane and data plane topology may be different; automated discovery of routing peers cannot be done based on SCN topology, since data plane neighbors may not be neighbors in the SCN; optical routing advertisements are for traffic engineering rather than the IP routing table; optical link state advertisements are marked as opaque and not used for IP routing — instead, a separate transport topology database is created.

  • *Separation of Data and Control Plane — Pre-ASON, routing protocols have assumed a Label Switching Router: a single node with both data and control plane functions, and a single source for data, signaling and routing messages. ASON explicitly separates these: data plane entities can be separate from the control plane, and the routing entity can be separate from the signaling entity. Routing implication: it must be possible to identify the data plane entity (link or node) separately from the routing controller.

  • *Examples of Different Distributions — Possible distribution of control: fully distributed (1:1), each network element also participates in the control plane; fully centralized (1:n), only one network element or proxy participates in the control plane; variable (m:n), a small number of network elements or proxy servers participate in the control plane. Some potential applications: proxy for a legacy (management controlled) domain; centralization of interoperability/E-NNI translation functions for ease of administration.

  • *Client Reachability Advertisement — Routing protocols have assumed a peer model where the client is a full peer to network elements: clients are advertised as IP address reachability, and access links are part of the TE topology. ASON explicitly separates client and network address spaces: clients are identified by a separate namespace, and routing to clients needs to be supported by a separate mechanism — client reachability advertisement or a directory-type service.

  • *Layering in the Data Plane — Pre-ASON, optical routing specifications gave a single parameter for link capacity, assuming that any signal type can use the link subject to pure bandwidth availability; this does not take layering issues into account. ASON requires routing to advertise per-signal-type connection availability: it takes into account possible limitations (a link supports some signal types but not others) and blocking issues (a smaller signal type can block a larger signal type due to positioning in the frame).
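The blocking issue — small signals scattered across the frame blocking a larger signal type even though total bandwidth suffices — can be illustrated with a toy link; the slot layout and signal sizes are simplified for the example:

```python
def available_connections(slots, size):
    """Count how many connections of `size` contiguous, aligned timeslots
    still fit on the link. `slots` is a list of booleans (True = in use)."""
    count = 0
    for start in range(0, len(slots), size):   # aligned placement only
        if not any(slots[start:start + size]):
            count += 1
    return count

# 12-slot link with four single slots in use, scattered across the frame.
slots = [False] * 12
for used in (1, 4, 7, 10):
    slots[used] = True

# Plenty of raw bandwidth (8 free slots), yet every aligned 3-slot
# position is partially occupied: the small signals block the larger type.
assert slots.count(False) == 8
assert available_connections(slots, 1) == 8
assert available_connections(slots, 3) == 0
```

A single "total available bandwidth" number would report 8 free units here; only a per-signal-type advertisement reveals that no 3-slot connection can actually be placed.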

  • *Hierarchy in the Routing Architecture — Pre-ASON, routing protocols have had limited hierarchy support: OSPF and IS-IS have limited levels (see next slide for OSPF); PNNI has a richer hierarchy with up to 104 theoretical levels. ASON requires flexible hierarchy in the routing architecture: to match transport network organization, for greater scalability, and for greater policy control. Protocol extensions to support hierarchy are needed.

  • *Routing Hierarchy Compared to OSPF — Existing routing protocols need extension to meet ASON requirements. E.g., for OSPF, area boundaries fall within a router (vs. IS-IS area boundaries, which fall on links, so a router belongs to a single RA); OSPF needs extensions for more than two hierarchical routing levels, and requires operator intervention for re-definition of areas. Transport network architecture (G.805) allows more flexible partitioning and multiple levels.

  • *ASON Routing Hierarchy — In ASON, multiple levels of hierarchy are supported: domains at lower levels are encompassed by higher levels, and domains are organized as part of carrier administration. [Figure: three levels — RA at level 1; RA.1, RA.2, RA.3 at level 2; RA.1.1, RA.1.2, RA.2.1, RA.2.2 at level 3]
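The containment relationship can be expressed as a simple tree; the RA names follow the slide's figure, and the code is only a toy illustration of "lower levels encompassed by higher levels":

```python
# Routing Area containment: each RA lists the RAs it encompasses.
hierarchy = {
    "RA":     ["RA.1", "RA.2", "RA.3"],
    "RA.1":   ["RA.1.1", "RA.1.2"],
    "RA.2":   ["RA.2.1", "RA.2.2"],
    "RA.3":   [],
    "RA.1.1": [], "RA.1.2": [], "RA.2.1": [], "RA.2.2": [],
}

def level(ra, root="RA"):
    """Level 1 is the top RA; a child is one level deeper than its parent."""
    if ra == root:
        return 1
    for parent, children in hierarchy.items():
        if ra in children:
            return level(parent) + 1
    raise KeyError(ra)

assert level("RA") == 1
assert level("RA.2") == 2
assert level("RA.2.2") == 3
```

Unlike OSPF's fixed two-level area scheme, nothing in this containment model limits the depth — which is the flexibility the ASON requirements call for.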

  • *Protocol Extension Work in Standards — ITU-T: has defined requirements but not protocol at this point. IETF: has begun work through analysis of ASON requirements and evaluation of existing routing protocols; some initial proposals for extensions are in progress and will need review through the OSPF and IS-IS groups. OIF: has developed and tested prototype extensions to meet ASON requirements; working with IETF/ITU-T to extend the standards.

  • *OIF External Network Node Interface Routing Specifications — E-NNI Routing 1.0, Intra-Carrier E-NNI Routing using OSPF, approved by Q1/07 Principal Ballot. Consistent with ITU-T ASON Recs. G.8080, G.7715 and G.7715.1 architecture and requirements; prototypes an instantiation of a routing protocol addressing ASON routing requirements. Intended to enable interoperable multi-domain SPC and SC services similar to those implemented for the OIF Worldwide Interoperability Demonstrations in 2004 and 2005. Documents routing protocol requirements supporting the E-NNI 1.0 interface, and prototype encodings used in OIF interop testing. Will support services provided by OIF UNI 1.0 R2, UNI 2.0 and E-NNI Signaling 1.0.

  • *OIF E-NNI Prototype Extensions — Separation of Routing Controller and Node identifier: the Routing Controller is the control plane entity, while the Node ID identifies the transport plane entity; enabled by the addition of Local/Remote Node ID parameters in the link status update, which identify the link ends (data plane topology) separately from the advertising entity (control plane topology). Advertisement of TNA: TNA is the OIF's terminology for the client address; reachability to a TNA is advertised through an OSPF prototype extension. This supports a separate client namespace, which in theory could be non-IPv4.

  • *OIF E-NNI Prototype Extensions — Link bandwidth: the OIF extension specifies available connections for each signal type (e.g., STS-1/VC-3, STS-3c/VC-4, etc.), which is more detailed and accurate than a simple measure of total available bandwidth for the link.

    Routing hierarchy: currently not implemented but under study; leaking of information up and down levels, and protection from looping, are key elements.

  • *E-NNI Topology Advertisement — Each domain's Routing Controller (RC) advertises to its peers across the E-NNI boundary; an abstracted topology can be advertised. [Figure: client devices attached via OIF UNI to carrier network domains, with per-domain RCs exchanging advertisements over the SCN]

  • *Routing Domain Abstraction Models — Abstraction models: (1) abstract node — domain collapsed to a single node; most scalable, least accurate; (2) abstract link — series of interconnected edge nodes; less scalable, more accurate; (3) pseudo-node — variation of abstract link that also shows potential server layer connectivity. Abstraction must improve scalability, yet provide more than just reachability information. [Figure: a real domain topology and the corresponding abstract topology]
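The abstract-node model — collapsing a domain to a single advertised node — can be sketched as a graph transformation; the topology and names are invented for the example:

```python
def abstract_node(topology, domain, abstract_name):
    """Collapse all nodes of `domain` into one abstract node, keeping
    only inter-domain links. `topology` is a set of (node, node) edges."""
    def rename(n):
        return abstract_name if n in domain else n
    abstracted = set()
    for a, b in topology:
        a2, b2 = rename(a), rename(b)
        if a2 != b2:                 # drop now-internal (intra-domain) links
            abstracted.add(tuple(sorted((a2, b2))))
    return abstracted

# Domain B has internal nodes b1..b3 and borders on a1 (domain A)
# and c1 (domain C).
topo = {("a1", "b1"), ("b1", "b2"), ("b2", "b3"),
        ("b1", "b3"), ("b3", "c1")}
adv = abstract_node(topo, {"b1", "b2", "b3"}, "B")

# Externally, only two links remain: A<->B and B<->C.
assert adv == {("B", "a1"), ("B", "c1")}
```

Five internal edges shrink to two advertised ones, which is the scalability gain; the cost is that path diversity and capacity inside domain B are no longer visible to its peers — exactly the scalability/accuracy trade-off the slide describes.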

  • *ASON/GMPLS Tutorial Outline — Introduction; Requirements & Architecture; Signaling; Routing; Control Plane Management; OIF Interoperability Demonstrations; Control Plane Applications / Use Cases; Concluding remarks

  • *Motivation of Control Plane Management — To achieve and sustain automatic call & connection service management (service management), there are many things that need to be managed. Control plane (Cp) entity management: initialization, configuration, policy setting; ongoing monitoring, maintenance, recovery. Transport plane (Tp) management for ASON: ASON functionality installation & configuration; ASON resource provisioning (e.g., names) & hand-over (from Mp to Cp); ongoing monitoring, maintenance, & recovery. Control plane (Cp) & management plane (Mp) ongoing interaction: connection management by Mp as needed; centralized routing (i.e., Mp-calculated); call performance measurement; management of call admission control; transfer of calls/connections between Mp and Cp.

  • *Challenges of Control Plane Management — Ensure a consistent management policy across a multi-carrier environment, e.g., network-wide consistency of Cp configuration, such as time-out settings for timers. Balance between delegation (to Cp) and ultimate control (by Mp), i.e., centralized vs. distributed, e.g.: avoid duplication of data & process; maintain consistency between Mp and Cp databases; restore consistency without affecting active services. Smooth migration from Mp-driven service management (call/connection mgmt) to hybrid or Cp-driven SM. Fault correlation and root cause analysis across Cp and Tp in a multi-domain, multi-layer environment.

  • *Scope of Cp Management & Interactions — [Figure: the management plane directs, and receives reports from, the control plane and transport plane; the planes support one another]

  • *Transport Resources in Mp and Cp View — Relationship between the architectural entities in the Transport plane, Management plane, and Control plane. Management plane view: TTP (Trail Termination Point), CTP (Connection Termination Point), CP (Connection Point), TCP (Termination Connection Point). Control plane view: SNP (Subnetwork Point), SNPP (SNP Pool), SNPP Link. Transport entities: adaptation function, trail termination function. [Figure: mapping of TTPs/CTPs in the management plane view to SNPs/SNPPs in the control plane view]

  • *Standards for Control Plane Management — [Figure: EMS and network element spanning the management, control and transport planes; interfaces covered by TMF MTNM v3.5, ITU-T G.7718 and ITU-T G.7718.1]

  • *Architecture & Requirements — Rec. G.7718/Y.1709, Framework for ASON Management, approved Feb. 05; deemed essential for supporting viable network deployments. Addresses the management aspects of the ASON control plane and the interactions between the OSS (NMS, EMS) and the ASON control plane. Provides architecture and requirements context: a management perspective on control plane components and constructs, control-related services, domains, transport resources, and policy; management of restoration and protection; ASON management requirements (FCAPS). Heavy input from service providers.

  • *G.7718 ASON Management Requirements — Fundamental requirements: impact of Mp failure, Mp-Cp interface failure, and Cp failure. Configuration management: control plane resources (identifiers, addresses, protocol parameters for signaling & routing); routing areas (RA hierarchies, (dis)aggregation, assignment of Cp resources); transport resources in the control plane view ((de)allocation, names and identifiers, discovery, topology, resource and capacity inventory); call and connection setup (SPC)/modification/release; policy. Fault management: control plane components; resource/connection/call (service). Performance management: control plane components. Accounting management: usage and call detail records.

  • *TMF MTNM v3.5 Control Plane Management — MTNM for multi-technology management: TMF 513, requirements & use cases; TMF 608, protocol-neutral model (UML); TMF 814, CORBA solution; TMF 814A, implementation statement templates and guidelines. Version 3.5 addition: control plane & VLAN management. Key modeling approaches: re-use of the v3.0 multi-layer approach for Routing Area (ML-RA), SNPP (ML-SNPP), and SNPP Link (ML-SNPP Link); re-use of the Subnetwork Connection (SNC) object for the Cp connection. Scope: limited to retrieval of control plane resources, retrieval of network topology, and end-to-end call/connection management (provisioning of SPCs).

  • *OIF-CDR-01.0, Call Detail Records for OIF UNI 1.0 Billing, Approved April 02 — Implementation Agreement (IA) for the usage measurement functions that an Optical Switching System will need to perform in order to enable carriers to bill for OIF UNI 1.0 optical connections using their legacy billing systems. Usage measurement functions: Automatic Message Accounting (AMA); data generation (UNI 1.0 CDR information content, as generic as possible); data formatting, resulting in the CDR — Billing AMA Format (BAF), ASCII CDR (ACDR) format, XML CDR (XCDR) format; data transmission of the CDR, typically via FTP between the management system and the billing system.
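The data-generation-then-formatting flow could be sketched as below for the ASCII CDR case; the field names and layout are purely illustrative (the IA defines the actual record content), showing only how a usage record becomes a formatted line:

```python
def format_acdr(record):
    """Render a usage record as a semicolon-separated ASCII CDR line.
    Field names here are hypothetical, not the OIF-CDR-01.0 layout."""
    fields = ["call_id", "source_tna", "dest_tna", "signal_type",
              "start_time", "end_time"]
    return ";".join(str(record[f]) for f in fields)

# Hypothetical usage record for one switched connection.
record = {
    "call_id": "0001",
    "source_tna": "192.0.2.1",
    "dest_tna": "192.0.2.9",
    "signal_type": "STS-48c",
    "start_time": "2005-06-07T09:00:00Z",
    "end_time": "2005-06-07T10:30:00Z",
}
line = format_acdr(record)
assert line.startswith("0001;192.0.2.1;192.0.2.9;STS-48c;")
assert line.count(";") == 5
```

The same record dictionary would feed the BAF or XML renderers; keeping generation (content) separate from formatting is what lets one measurement function serve several legacy billing-system formats.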

  • *ASON/GMPLS Tutorial Outline — Introduction; Requirements & Architecture; Signaling; Routing; Control Plane Management; OIF Interoperability Demonstrations; Control Plane Applications / Use Cases; Concluding remarks

  • *Interoperability Demonstrations — Objectives / Goals. OIF perspective: member evaluation, validation, and proof of concept of current OIF draft specifications & IAs for interoperable network solutions; feedback assessment from the multi-vendor testing environment to standardization/specification work. Carrier perspective: early adoption and evaluation of interoperability testing results demonstrated in a multi-vendor environment; feedback to the vendor community on early implementations and integrations, based on practical experiences and lessons learned. Industry perspective: showcase OIF contributions, build market awareness of emerging technologies, services and networking solutions; public forums (optical conferences & exhibitions) utilized.

  • *Interoperability Demos — Role in Standards to Deployment. OIF performs and organizes the next major step towards implementation: interoperability evaluations of prototype implementations — proof of concept, feedback to standardization, fostering follow-up activities. OIF supports a close relationship between standardization, R&D, and early implementations. [Figure: pipeline from standards/specifications (OIF, ITU-T, IETF) through OIF interoperability tests & demonstrations to field trials at carrier sites and deployment, with feedback into standardization]

  • *Ethernet Switched Connection Characteristics — OIF UNI 2.0 support for Ethernet clients; OIF UNI 2.0 call control based on ASON specifications; transport devices integrate multi-layer functions at the control plane and data plane levels; Ethernet Private Line Service (E-Line Service Type) triggered by OIF UNI 2.0 connection requests and provisioned via the E-NNI. [Figure: Ethernet clients attached via UNI-C/UNI-N to Carrier A, B and C domains interconnected by OIF E-NNI; Ethernet layer call/connection flow carried over a SONET/SDH layer call/connection flow]

  • *2005 Worldwide Interoperability Demo — 7 participating carrier labs around the world: China, Japan, France, Germany, Italy and USA; 13 participating vendors. First multi-layer & multi-domain call/connection demonstration: orchestrates actions between client and server layers; integrates control plane (UNI 2.0 Ethernet, E-NNI) and NG-SONET/SDH (GFP-F/VCAT/LCAS) functions. Showed on-demand Ethernet Private Line service through the creation of end-to-end calls and connections across multiple network layers, network domains, multiple vendors' equipment, and multiple carrier labs. OIF IAs based on ITU-T ASON standards, including: requirements and architecture (G.8080, G.7713, G.7715, G.7715.1); signaling protocols (G.7713.2). World Interoperability Demonstration public observation: SUPERCOMM 2005 (June 7-9, 2005, Chicago, IL).

  • *Interoperability Demonstrations — Global Test Network Topology. [Figure: test network spanning the USA, Europe and Asia — carrier labs at Deutsche Telekom, France Telecom, Telecom Italia and China Telecom plus US and Japanese sites — interconnecting equipment from Alcatel, Avici, Ciena, Cisco, Fujitsu, Huawei, Lambda OS, Lucent, Mahi, Marconi, Nortel, Sycamore and Tellabs]

  • *OIF Interoperability Labs in 2005Beijing, ChinaBerlin, GermanyMusashino, JapanLannion, FranceMiddletown, NJ-USAWaltham, MA-USATorino, Italy


  • *2007 Worldwide Interoperability Demo — On-demand Ethernet services over multi-domain transport networks; 7 participating carrier labs around the world: China, Japan, France, Germany, Italy and USA. Public demonstration at ECOC2007, Sept 16-20, 2007: ECOC2007 workshop on Global Interoperability in Multi-Domain and Multi-Layer ASON/GMPLS Networks; ECOC2007 exhibition, with a live demonstration of the OIF Worldwide Interoperability Test results; ECOC2007 accompanying program, with lab tours to DT premises demonstrating live the ASON/GMPLS functions of the OIF Worldwide Test Network and the MUPBED European-scale network, and enabling hands-on experience of the real telecom world for the visitors.

  • *ASON/GMPLS Tutorial Outline — Introduction; Requirements & Architecture; Signaling; Routing; Control Plane Management; OIF Interoperability Demonstrations; Control Plane Applications / Use Cases; Concluding remarks

  • *Application 1: CP for Bandwidth Defragmentation — Scenario: after running the NG-SONET/SDH network for a while, available time slots over SONET/SDH links become fragmented (i.e., many discontinuous, small clusters of bandwidth). Network operations can invoke the control plane on a regular basis to (1) identify the clusters for each span in the network, and (2) run a defragmentation algorithm to pack in-use time slots into a contiguous space. Core Technologies: NG-SONET/SDH defragmentation over a single-vendor domain; OTN Control Plane (auto-discovery & self-inventory); OTN Mgmt Plane (EMS/NMS update).
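The packing step of such a defragmentation algorithm can be sketched as a compaction pass over one span's timeslots; this is a toy model of the packing computation only, not a hitless retuning procedure:

```python
def defragment(slots):
    """Pack in-use timeslots ('X') into a contiguous block at the start
    of the span. Returns (new_layout, moves), where each move is a
    (from_slot, to_slot) retuning the control plane would then signal."""
    in_use = [i for i, s in enumerate(slots) if s == "X"]
    moves = []
    for target, src in enumerate(in_use):
        if src != target:
            moves.append((src, target))
    packed = ["X"] * len(in_use) + ["-"] * (len(slots) - len(in_use))
    return packed, moves

# Fragmented span: free slots scattered in small clusters.
layout, moves = defragment(list("X--X-XX--X"))
assert "".join(layout) == "XXXXX-----"
assert moves == [(3, 1), (5, 2), (6, 3), (9, 4)]
```

After the pass, the five free slots form one contiguous region, so a large-granularity connection that previously could not be placed now fits — which is the operational payoff the scenario describes.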

  • *Application 2: A-Z Provisioning via EMS/NMS and Control Plane — Scenario: the NMS/EMS receives a service order for a SONET STS / SDH VC from an enterprise customer that has three sites in the region. The order specifies points A & Z (e.g., from Site 1 to Site 2), payload rate, transparency, protection class, and other constraints. The NMS/EMS issues a command to the source node (attached to Site 1), which then triggers the control plane to set up the SONET/SDH path to Site 2 according to the requirements specified in the order. Similarly, when the customer terminates the service, the NMS/EMS will invoke the control plane to tear down the path. Core Technologies: OTN Control Plane (E-NNI, I-NNI); OTN Mgmt Plane (EMS/NMS SPC support).


  • *Application 3: Bandwidth on Demand (BoD) in Transport Networks — Scenario: an enterprise customer with three sites subscribes to a BoD SONET/SDH service with a range of SONET/SDH payload rates. The service plan applies to all SONET/SDH connections between the sites. Based on business needs, the customer uses UNI signaling to dial up the service between any two sites, sends information over the SONET/SDH path for an unspecified period of time, then hangs up. Core Technologies: NG-SONET/SDH GFP/VCAT; OTN Control Plane (O-UNI, E-NNI, and I-NNI); OTN Mgmt Plane (EMS/NMS SC support, TMF814). Two sub-cases — Case 3a: with NG-SONET/SDH Virtual Concatenation (VCAT); Case 3b: without VCAT.


  • *Application 3 (cont.): Scheduled BoD — For customers with a highly predictable traffic profile. Service bandwidth is provisioned according to user-provided time-of-day and/or day-of-week schedules, with the capability to make bandwidth changes as needed: automatic tailoring of service bandwidth to the traffic profile.

  • *Application 4: GbE Service with Bandwidth Schedule — Scenario: an enterprise customer with three sites subscribes to a GbE service with customized bandwidth schedules for weekdays and weekends/holidays, as shown below. [Figure: weekday vs. weekend bandwidth profiles]

    Core Technologies: NG-SONET/SDH GFP/VCAT/LCAS; OTN Control Plane (E-NNI, I-NNI); OTN Mgmt Plane (EMS/NMS with scheduling support).
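A weekday/weekend schedule like the one in Application 4 could be mapped to VCAT group sizes as sketched below; the rates, hours, and per-member payload are invented for illustration, and LCAS would perform the actual hitless resizing:

```python
VC4_MBPS = 150  # approximate payload per VCAT member, for illustration

def members_for(schedule, day, hour):
    """Return the VCAT group size for a given day/hour. `schedule` maps
    period -> list of (start_hour, end_hour, rate_mbps); the rate is
    rounded up to whole members."""
    period = "weekend" if day in ("Sat", "Sun") else "weekday"
    for start, end, rate in schedule[period]:
        if start <= hour < end:
            return -(-rate // VC4_MBPS)   # ceiling division
    return 0

# Hypothetical GbE profile: full rate in business hours, reduced rate
# at night, low flat rate on weekends.
schedule = {
    "weekday": [(8, 18, 1000), (18, 24, 300), (0, 8, 300)],
    "weekend": [(0, 24, 150)],
}
assert members_for(schedule, "Mon", 10) == 7   # ceil(1000/150)
assert members_for(schedule, "Mon", 22) == 2   # ceil(300/150)
assert members_for(schedule, "Sun", 10) == 1
```

The management plane's scheduling support would evaluate this table at each transition time and ask the control plane to add or remove members accordingly.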

  • *Application 5: BoD - GbE ServiceScenarioAn enterprise customer with three sites subscribes to a BoD GbE service with a specified peak rate (P). The service plan applies to all GbE connections between the sites. Based on business needs, the customer uses UNI signaling to dial-up the service between any two sites, sends information at rates
  • *Application 6: OSS Simplification — [Figure: traditional OSS vs. NG-OSS. Traditional OSS functions: inventory (facility, equipment, service circuit, customer assignments); service activation (network topology, path computation, parameter mapping, CoS assignment); service assurance (fault isolation, fault correlations, exceptions, testing, protection & restoration); accounting & security (billing, admission control, resource access control) — all over the transport network. With the NG-OTN control plane, the NG-OSS retains passive roles for all control-plane-supported functions]

  • *Application 7: Control Plane for Auto-Discovery and Self-Inventory — Scenario: upon start-up of a CP-equipped network, all NEs will discover each other, identify resources, and create a high-quality network database containing the complete topological view of the network and a highly accurate resource map. During network operation, the database will be instantly updated to reflect any change of network state, such as resource usage/addition, path setup/tear-down, etc. A high-quality network database is essential to the high-quality OAM&P required for NG-OTN. Core Technologies: OTN Control Plane (I-NNI, E-NNI); OTN Mgmt Plane (EMS/OSS update).

  • *Application 8: Control Plane enabled Application/Network Interworking
    Applications communicate with the Adaptation Function through an API
    The Adaptation Function administrates access to the UNI
    The application integrates an API or manual control

    [Diagram: a Workstation application uses an API toward an Adaptation Function, which controls access to a UNI into Network Domain 1 (Control Plane over Data Plane); Domain 1 connects to Network Domain 2 via E-NNI, and a second UNI/Adaptation Function attaches a Cloud endpoint.]
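The application/adaptation-function split on this slide can be sketched as follows; the class and method names are illustrative assumptions, and the adaptation function's UNI interaction is reduced to a placeholder string:

```python
# Hedged sketch of the application/adaptation-function split: the
# application calls an API, and the adaptation function decides whether
# to forward the request to the UNI (admission/policy control).
# All names here are illustrative assumptions.

class AdaptationFunction:
    """Administrates access to the UNI on behalf of applications."""
    def __init__(self, authorized_apps):
        self.authorized = set(authorized_apps)

    def request_path(self, app_id: str, src: str, dst: str) -> str:
        if app_id not in self.authorized:
            return "denied"
        # In a real system this would translate the API call into a
        # UNI signaling request toward the network domain.
        return f"UNI call requested: {src} -> {dst}"

af = AdaptationFunction(authorized_apps=["grid-app"])
print(af.request_path("grid-app", "workstation", "cloud"))
print(af.request_path("rogue-app", "workstation", "cloud"))  # denied
```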

  • *ASON/GMPLS Tutorial Outline: Introduction; Requirements & Architecture; Signaling; Routing; Control Plane Management; OIF Interoperability Demonstrations; Control Plane Applications/Use Cases; Concluding remarks

  • *ASON Requirements & Architecture Recap
    Requirements are intended to enable support for business/commercial operating practices.
    A formalized specification technique utilizes components and interfaces that can be associated in various ways to describe actual control plane implementations.
    The actual location/distribution of the control plane components is not constrained, allowing for the full range from fully distributed to centralized implementations.
    The architecture does not require that the reference points always be instantiated as external interfaces (UNI, E-NNI); instantiation of interfaces and the degree of information sharing are based upon operator business model/policy.
    A single instantiation of an ASON control plane may control multiple layer networks, with an explicit definition of the interlayer interaction (including none).
    The reference point concepts are similar to those of the Resource and Admission Control Function (RACF) model.

  • *Standards Development Organizations (SDO) Interaction

    Goal - Evolution towards convergence of requirements & protocols

    1999/2000 MPLS: flat peer model, data/signaling congruent, IP only, data behavior (e.g., connection tear-down w/o request)

    2001: Carrier requirements across IETF, OIF, and ITU-T regarding the need for support of commercial business & operational practices

    2003: Evolution of GMPLS signaling protocol, used as normative base for ASON extensions

    2004-2006: Ongoing communications among all three SDOs on requirements and protocol work

  • *Network of the Future / Future Internet / Clean Slate Internet Design (FIND, GENI)
    Activities in Europe and the USA
    Goal: basic re-design of the (multi-layer) network architecture, including the Internet
    Paradigm shift: the customer view (business and residential) imposes a number of additional, mostly non-technical requirements
    The Internet has turned into a non-trusted business environment
    Service-centric design of architectures, protocols, and networks
    Usability/ease of use is a major aspect for future applications and services, requiring significant efforts in automation
    Fundamental technical changes in network functions imposed by clean-slate design: naming & addressing; routing & signaling; security functionality, especially authentication (advanced AAA); scalability; optimization of topologies and hierarchies; commercial role of the Internet (non-trusted environment); monitoring functionality (regarding network functionality)

  • *Technical Implications of Network Re-design
    Clean-slate design will shake the technical foundations of protocol design as well as network architectures and operations.
    Protocols: protocols and architectures are expected to change considerably (optics, slim modular protocol stack)
    Data plane: multi-technology environment for provisioning of end-to-end services
    Control and management plane: the Internet might actually look more telco-like, an intriguing thought!

  • *Thank you!! Q & A
    [email protected]

    www.oiforum.com

  • Backup

  • *OIF documents and links; Reference Material for ITU-T ASON and Transport Recommendations; Glossary

  • *OIF Documents
    OIF presentations and newsletters: www.oiforum.com

    OIF Implementation Agreements: http://www.oiforum.com/public/impagreements.html

    OIF workshops on ASON/GMPLS implementations in test and carrier networks:
    http://www.oiforum.com/public/meetOIW050806.html
    http://www.oiforum.com/public/meetOIW073106testbeds.html
    http://www.oiforum.com/public/meetOIW101606.html

  • *ITU-T Recommendations Accessibility Information
    Go to the publications link and choose download via the URL: http://www.itu.int/publications/EBookshop.html

    There is an explicit button on the download publications page where you can register up front for three free Recommendations.

  • *Some Key ITU-T ASON Recommendations
    Fundamental (Protocol-Neutral) Architecture & Requirements:
    G.8080, Architecture for the automatically switched optical network (ASON), 2006 revision to be published imminently
    G.7713, Distributed call and connection management (DCM), 2006 revision, to be published imminently
    G.7718, Framework for ASON management, February 2005
    G.7714, Generalized automatic discovery for transport entities, August 2005 revision
    G.7715/Y.1706, Architecture and requirements for routing in the automatically switched optical networks, July 2002
    G.7715.1/Y.1706.1, ASON routing architecture and requirements for link state protocols, February 2004
    G.7712/Y.1703, Architecture and specification of data communication network, March 2003
    G.7716, Control plane initialization, reconfiguration, and recovery, target Consent November 2006

  • *Textbooks covering ITU-T Architecture Aspects (e.g., Functional Modeling, ASON)
    Broadband Networking: ATM, SDH, and SONET; Michael Sexton and Andrew Reid; ISBN 0-89006-578-0 (see in particular Chapters 2-4)
    http://www.amazon.com/gp/product/0890065780/ref=sib_rdr_dp/103-2003697-9480609?%5Fencoding=UTF8&me=ATVPDKIKX0DER&no=283155&st=books&n=283155
    Achieving Global Information Networking; Varma and Stephant et al.; ISBN 0890069999 (see in particular Chapters 1-4)
    http://www.amazon.com/gp/product/0890069999/ref=dp_return_1/103-2003697-9480609?%5Fencoding=UTF8&n=283155&s=books
    SDH/SONET Explained in Functional Models: Modeling the Optical Transport Network; Huub van Helvoort; ISBN 0-470-09123-1
    http://www.amazon.com/gp/product/0470091231/ref=sib_rdr_dp/103-2003697-9480609?%5Fencoding=UTF8&me=ATVPDKIKX0DER&no=283155&st=books&n=283155
    Optical Networking Standards: A Comprehensive Guide for Professionals; Khurram Kazi; ISBN 0387240624 (to be published June 2006; see for example Chapters 2, 16)
    http://www.amazon.com/gp/product/0387240624/qid=1147161139/sr=1-1/ref=sr_1_1/103-2003697-9480609?s=books&v=glance&n=283155

  • *Some Key ITU-T Functional Modeling Recommendations
    Fundamental Architecture & Equipment:
    G.803, Architecture of transport networks based on the synchronous digital hierarchy (SDH), March 2003
    G.805, Generic functional architecture of transport networks, March 2000
    G.809, Functional architecture of connectionless layer networks, March 2003
    G.872, Architecture of optical transport networks, November 2001
    G.8010, Architecture of Ethernet layer networks, February 2004
    G.8110, MPLS layer network architecture, January 2005
    G.8110.1, Architecture of Transport MPLS (T-MPLS) layer network, publication imminent
    G.783, Characteristics of synchronous digital hierarchy (SDH) equipment functional blocks, March 2006
    G.8021, Characteristics of Ethernet transport network equipment functional blocks
    G.8121, Characteristics of Transport MPLS (T-MPLS) equipment functional blocks, publication imminent
    Etc.

  • *Glossary
    ACDR: ASCII CDR
    AMA: Automatic message accounting
    AP: Access point
    API: Application programming interface
    ASON: Automatically switched optical network
    BAF: Billing AMA Format
    BoD: Bandwidth on Demand
    CC: Connection controller
    CCC: Calling/called call controller
    CDR: Call detail record
    CORBA: Common object request broker architecture
    CP: Connection point
    Cp: Control plane
    DA: Discovery agent
    DCM: Distributed Call and Connection Management
    ECF: Equipment control function
    EMF: Equipment management function
    EMS: Element management system
    E-NNI: External NNI
    ETF: Equipment transport function
    FCAPS: Fault, Configuration, Accounting, Performance, Security
    FTP: File transfer protocol
    IA: Implementation agreement
    I-NNI: Internal NNI
    LCAS: Link capacity adjustment scheme
    LRM: Link resource manager
    MIB: Management information base
    MLRA: Multi-layer routing area
    MLSNPP: Multi-layer SNPP
    Mp: Management plane
    MTNM: Multi-technology network management
    NCC: Network call controller
    NE: Network element
    NMS: Network management system
    NNI: Network-network interface
    OH: Overhead
    OSF: Operations system function
    OSS: Operations support system
    OTN: Optical transport network
    PC: Protocol controller
    RA: Routing area
    RC: Routing controller
    SC: Switched connection
    SCN: Signaling communication network
    SNC: Subnetwork connection
    SNP: Subnetwork point
    SNPP: SNP Pool
    SPC: Soft permanent connection
    SRG: Shared risk group
    STM: Synchronous Transport Module
    TAF: Transport atomic function
    TAP: Termination & adaptation performer
    TCE: Transport capability exchange
    TCP: Termination connection point
    TNA: Transport network address
    TP: Termination point
    Tp: Transport plane
    TTP: Trail termination point
    UML: Unified modeling language
    UNI: User-network interface
    VC: Virtual container
    VCAT: Virtual concatenation
    VLAN: Virtual local area network
    WSF: Workstation function
    XCDR: XML CDR format
    XML: Extensible markup language

    *I am guessing what we would call the E-NNI 1.0 Routing IA; I've suggested OIF-ENNI-01.0-OSPF.
    *Boundaries of policy and information sharing allow selection of the best tool for the task rather than a lowest-common-denominator approach.
    *For management-initiated calls, Call Control would reside in the Management Plane.
    *Examples of the scope of connection control being limited to a single call segment: the service is realized in different ways within each domain; separate address spaces are used within each domain; there is a trust boundary; there is independence of survivability (protection/restoration) for each domain.

    *Allow for flexible distribution of functionality among network elements.
    *Functions are defined and characterized by the information processed between their inputs and outputs. Architectural components may be associated together in particular ways to form the equipment from which real networks are constructed. Application of this concept simplifies network description by keeping logical connections distinct from their actual routing in the network and the resources that physically support them. Thus, the logical pattern of interconnection of elements in the network is established without concern for the associated signal processing functions, and this allows an operator to easily establish connections as required. Within the topology domain, the two fundamental concepts that relate to the organization of the network are layering and partitioning.
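The partitioning concept mentioned in this note can be illustrated by a recursive subnetwork structure, which keeps the logical topology separate from the resources supporting it; all names below are hypothetical:

```python
# Sketch of the partitioning idea: a subnetwork can be recursively
# partitioned into smaller subnetworks, so the logical topology is
# described independently of the physical resources. Names are
# illustrative only.

class Subnetwork:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # recursive partitioning

    def leaf_count(self):
        """Count the lowest-level (unpartitioned) subnetworks."""
        if not self.children:
            return 1
        return sum(c.leaf_count() for c in self.children)

core = Subnetwork("core", [Subnetwork("core-east"), Subnetwork("core-west")])
net = Subnetwork("network", [core, Subnetwork("metro"), Subnetwork("access")])
print(net.leaf_count())  # 4
```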

    *Utilize multiple technologies supporting a wide range of bandwidth


    *CC: responsible for coordination among the Link Resource Manager, Routing Controller, and both peer and subordinate Connection Controllers, for the purpose of the management and supervision of connection set-ups, releases, and the modification of connection parameters for existing connections.
    RC: the role of the routing controller is to respond to requests from connection controllers for path (route) information needed to set up connections (this information can vary from end-to-end, e.g., source routing, to next hop), and to respond to requests for topology (SNPs and their abstractions) information for network management purposes.
    LRM: responsible for the management of an SNPP link, including the allocation and deallocation of SNP link connections, and providing topology and status information. Two LRM components are used, the LRMA and LRMZ. An SNPP link is managed by a pair of LRMA and LRMZ components, one managing each end of the link. Requests to allocate SNP link connections are only directed to the LRMA. The LRMA is responsible for the management of the A end of the SNPP link; this includes the allocation and deallocation of link connections and providing topology and status information. The LRMZ is responsible for the management of the Z end of the SNPP link; this includes providing topology information.
    TP: this component is a subclass of Policy Port, whose role is to check that the incoming user connection is sending traffic according to the parameters agreed upon. Where a connection violates the agreed parameters, the TP may instigate measures to correct the situation.
    Calls are controlled by means of call controllers. There are two types of call controller components:
    CCC: calling/called party call controller. This is associated with an end of a call and may be co-located with end systems or located remotely, acting as a proxy on behalf of end systems. This controller acts in one, or both, of two roles: one to support the calling party and the other to support the called party.
    NCC: provides two roles, one for support of the calling party and the other to support the called party. A calling party call controller interacts with a called party call controller by means of one or more intermediate network call controllers.
    PC: provides the function of mapping the parameters of the abstract interfaces of the control components into messages that are carried by a protocol to support interconnection via an interface.
    The monitor, policy, and configuration ports may be available on every system (and component) without further architectural specification. The monitor port allows management information to pass through the boundary relating to performance degradations, trouble events, failures, etc., for components, subject to policy constraints. The policy port allows for the exchange of policy information relating to components. The configuration port allows for the exchange of configuration, provisioning, and administration information relating to components (subject to policy constraints) that may dynamically adjust the internal behaviour of the system.
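As a rough sketch of these component interactions, the toy model below shows a Connection Controller asking the Routing Controller for a route and the A-end LRM for a link connection; the interfaces are hypothetical simplifications of the G.8080 component model, not a protocol implementation:

```python
# Illustrative sketch of the CC/RC/LRM interaction described above: the
# Connection Controller (CC) asks the Routing Controller (RC) for a route
# and the A-end Link Resource Manager (LRMA) for link-connection
# allocation. All interfaces and data are hypothetical simplifications.

class RoutingController:
    def __init__(self, topology):
        self.topology = topology  # node -> list of next hops

    def route(self, src, dst):
        # Trivial next-hop lookup standing in for real path computation
        return [src, dst] if dst in self.topology.get(src, []) else None

class LinkResourceManagerA:
    def __init__(self, free_link_connections):
        self.free = free_link_connections

    def allocate(self):
        """Allocate one SNP link connection at the A end, if available."""
        if self.free > 0:
            self.free -= 1
            return True
        return False

class ConnectionController:
    def __init__(self, rc, lrma):
        self.rc, self.lrma = rc, lrma

    def setup(self, src, dst):
        path = self.rc.route(src, dst)
        if path is None or not self.lrma.allocate():
            return "release"  # set-up fails; connection released
        return f"connected via {path}"

cc = ConnectionController(RoutingController({"A": ["Z"]}),
                          LinkResourceManagerA(free_link_connections=1))
print(cc.setup("A", "Z"))   # connected via ['A', 'Z']
print(cc.setup("A", "Z"))   # release (no free link connections)
```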
