
Transcript of Ericsson Review, Vol. 69, No. 3, 1992


ERICSSON REVIEW Number 3 • 1992 • Volume 69

Responsible publisher: Bo Hedfors

Editor: Per-Olof Thyselius

Editorial staff: Martti Viitaniemi

Subscription: Eva Karlstein

Subscription, one year: $30

Address: S-126 25 Stockholm, Sweden

Published in English with four issues per year

Copyright Telefonaktiebolaget L M Ericsson

Contents

An Introduction to the Ericsson Transport Network Architecture
Control and Operation of SDH Network Elements
AXD 4/1, a Digital Cross-Connect System

Cover: Ericsson Transport Network Architecture offers a complete network solution for a manageable Transport Network based on the Synchronous Digital Hierarchy

An Introduction to the Ericsson Transport Network Architecture

Stefan Danielsson, Ericsson Telecom AB

The telecommunications industry has formulated a number of new standards for a new transmission hierarchy, the Synchronous Digital Hierarchy, for the purpose of reducing operating costs and improving the service offered to users. Today's fixed transmission network will evolve into a managed and software-controlled Transport Network. The Transport Network will form the infrastructure of the future telecom network and so make for successful introduction of new services, such as broadband data.

The author describes the Ericsson Transport Network Architecture (ETNA), a complete system solution for a manageable synchronous Transport Network.

Keywords: digital communication systems, telecommunication networks, telecommunication network management

Today's digital transmission systems are based on the Plesiochronous Digital Hierarchy (PDH) standard, which was introduced step-by-step to cater for demands for higher transmission capacity in the voice traffic network.

This has resulted in a network that allows interconnection between different vendors' transmission equipment at certain standardised electrical interfaces, but also a network in which configuration changes are effected through hard-wiring and for which standards for interconnection at the optical level have been lacking.

The complex network is difficult to manage and costly and time-consuming to adapt to changing requirements for transmission capacity. It consists of a large number of multiplexers at different levels and volumes of cabling between them. The line systems are underutilised; often, less than 50 % of the capacity is used.

Until now, this approach has been acceptable, because the dominating voice-only traffic is predictable to some degree. But today, public network operators around the world are faced with increasing financial pressure, growing competition due to deregulated markets, and unprecedented demands from users who now view communications as a strategic business tool.

Business users, notably those requiring both data and voice communications, are seeking new levels of guaranteed quality and services. The time it takes to arrange leased-line services - several weeks, sometimes months - will no longer be tolerated. The network must provide bandwidth on demand.

Synchronous Digital Hierarchy

Fig. 1 Ericsson's centre for transmission research and development at Kungens Kurva, Stockholm, Sweden

Recognising these problems and demands, the telecommunications industry has formulated a number of new standards for a new transmission hierarchy, the Synchronous Digital Hierarchy, aimed at easing the difficulties for operators and improving the services offered to users. Included in SDH are standards for new transmission bit-rates, optical interfaces, information models and communication protocols for network management, and proposals for network structures.

Future Transport Networks

SDH offers many advantages and forms the foundation for the future transport network. SDH also underlies the Ericsson Transport Network Architecture (ETNA). ETNA is a single, open system concept that enables a public network operator to optimise his use of existing resources and make a smooth migration towards future broadband digital services.

Included in ETNA are all the transmission links, switching, routing and management facilities needed to deliver wideband and broadband services - data, voice, image and video. Network management is of the utmost importance. It is only through powerful network management that the cost and service benefits from SDH can be fully utilised.

ETNA consists of a family of Network Elements and one common Network Management system, FMAS (Facility Management System).

The Network Elements have basic functionality in common; they terminate electrical and optical signals, perform switching at various signal levels and are controlled from FMAS. Due to somewhat different applications and optimisation criteria, two product lines are defined: Digital Cross-Connect Systems (DXC) and SDH Transmission Systems (SMUX).

Digital Cross-Connect Systems

DXCs are transmission channel switches for semi-permanent connections. With totally transparent switching characteristics, the DXC can terminate any PDH or SDH signal for selection and rerouting at any lower-order level. DXCs provide extensive switching capabilities for network restoration and network configuration in central hubs with heavy concentrations of circuits.

SDH Transmission Systems

SDH transmission systems include a range of terminal multiplexers, intermediate regenerators and add/drop multiplexers based on SDH standards for transmission at 155 Mbit/s, 620 Mbit/s and 2.5 Gbit/s. The systems are built from a common set of modules to reduce the stockholding of spares and to simplify capacity upgrades. SMUXs are used in point-to-point, bus or ring configurations and provide a distributed type of network configuration with line or ring protection for network restoration.

Fig. 2 ETNA supports a layered-architecture approach to Transport Network configurations. Products shown: AXD 4/1 (DXC), AXD 1/0 (DXC), Add/Drop Multiplexer (SMUX), AXD 2500 (SMUX), AXD 620 (SMUX), AXD 155 (SMUX)


Fig. 3 AXD 155 system integration and verification test carried out at Kungens Kurva

Facility Management System

The FMAS provides a single system for the management of the complete transport network, including DXCs, SMUXs and PDH systems.

The Transport Network System

With the introduction of ETNA, the telecom network and its operation will change drastically. Today's fixed transmission network will evolve into a managed and software-controlled network which, with its transmission resources and operation and control capabilities, is defined as the Transport Network.

How can network operators make sure that their transport network will reduce operating costs in the short term, while at the same time meeting the demands of the future? The answer is complete network solutions.

By providing a complete family of complementary network elements, each one optimal for a specific application in the network and each one interworking with one common management system, functionality can be introduced at network level. End-to-end performance monitoring, automatic bandwidth provision and network restoration can be made available pending complete TMN standards.

Fig. 4 An example of the user interface of FMAS in Configuration Management mode

The Transport Network will reduce the costs for operation and maintenance of transmission resources, but will also form the infrastructure for the future telecom network and make for successful introduction of new services, such as broadband data.

Furthermore, the system solution will create a platform which can be upgraded with new functionality to follow the evolution of TMN standards and to meet demands for new features. In this way, a future-proof network solution can be achieved.

ETNA is a Transport Network system solution and a platform for future network development. Included in ETNA are all Network Elements and the Network Management system required to form all types of network structure and to meet the demand for software-controlled allocation and monitoring of bandwidth in a principally self-healing network.

The Network Elements are not only similar in terms of functionality but also have commonalities in design, thus contributing to the establishment of a platform for future development of new functionality and new network elements. Design commonalities have been achieved at all levels:

Use of the same interface to FMAS
TMN standard interfaces (Q) are used between all network elements and the common network management system, FMAS.

Use of the same information models
All network elements are based on the same information models for configuration management, fault management, performance management and security management.


References

1. Bergendahl, J. and Ekelund, S.: Transport Network Development. Ericsson Review 67 (1990):2, pp. 54-59.
2. Breuer, H.-J. and Hellstrom, B.: Synchronous Transmission Networks. Ericsson Review 67 (1990):2, pp. 60-71.
3. Andersson, J.-O.: Digital Cross-Connect Systems - a System Family for the Transport Network. Ericsson Review 67 (1990):2, pp. 72-83.
4. Tarle, H.: FMAS - an Operations Support System for Transport Networks. Ericsson Review 67 (1990):4, pp. 163-182.

Use of the same man-machine interface
When using a local-craft interface at a DXC or SMUX, the operator will see displayed the same types of symbol and use the same types of command, independent of equipment.

Use of common hardware modules
Common internal interfaces have been used for SMUX and DXC to ensure full compatibility of different access circuit boards within SMUXs and DXCs respectively. Such compatibility of circuit boards also exists across the two product lines.

Use of the same packaging system
The same equipment practice, including power distribution and alarm panels, can be used for DXCs and SMUXs.

Use of the same documentation structure
A standardised documentation structure will be used to facilitate administration, operation and maintenance of the products.

The Open Transport Network solution

Providing a system that can ensure multi-vendor compatibility is just as important as providing a complete and future-proof network solution. An absolute prerequisite is that the benefits from ETNA are achievable in a network supplied by several vendors. This has been solved through the choice of an open structure for ETNA. All existing TMN standards concerning Q-interfaces, embedded communication channels and information models will be supported from the first release of any ETNA product.

Since the TMN standards have not yet been completed, it is necessary to include Ericsson-specific additions. The TMN support is implemented in structured, object-oriented software, and it will thus be possible to follow the evolution of standards through regular releases of software updates.

Pending complete TMN standards, adaptations will be required for multi-vendor interoperability. Through the open and structured design of FMAS, other vendors' equipment can be handled by implementing the corresponding information model in one of the FMAS layers. Above this layer is the application layer, which means that the vendor-specific differences of the network elements are hidden: all network elements can be handled in the same manner.

Introduction to Network Element descriptions

Previous articles in Ericsson Review (No. 2 and No. 4, 1990) describe the development of the Transport Network, the Synchronous Digital Hierarchy, Digital Cross-Connects, and FMAS.

This edition of Ericsson Review goes one step further. Two articles exemplify the realisation of individual Network Elements and their Control Systems.

The design of the Digital Cross-Connect AXD 4/1 is described in one article. This network element is mainly used for configuration of the network in central nodes. The SDH Transmission System AXD 155, ideal for the construction of ring configurations in the access or local network, will be described in a coming issue of Ericsson Review.

Solutions for the local Control System and the communication with FMAS are described in the other article. The two Network Elements perform different tasks in the network, a condition which has influenced the design of the local Control System. However, what is visible to the operator is identical. The support for TMN standards regarding interfaces, information models and communication channels is also identical.

The result is a family of network elements, each optimal for its specific task in the network yet administered, operated and maintained in an identical way.


Control and Operation of SDH Network Elements

Johan Blume, Leif Hansson, Peder Hagg and Leif Sundin, Ericsson Telecom AB

Increasingly, flexible and powerful network management solutions allowing Network Elements and Operations Systems to work in a multi-vendor environment are becoming a key issue to network operators. The Synchronous Digital Hierarchy currently being standardised provides the required capabilities. The authors describe the main characteristics of SDH management and discuss some implementation aspects of control systems developed for Ericsson's SDH Network Elements.

Keywords: digital communication systems, telecommunication networks, telecommunication network management

Abbreviations

ACSE Association Control Service Element
API Application Programmer's Interface
AUI Attachment Unit Interface
CLNP Connectionless Network Protocol
CMISE Common Management Information Service Element
CP Central Processor
CPU Central Processor Unit
CS Control System
CSA Control System Application
CSP Control System Platform
DCC Data Communications Channel
DCN Data Communications Network
ECC Embedded Control Channel
ETNA Ericsson Transport Network Architecture
FMAS Facility Management System
GNE Gateway Network Element
GUI Graphical User Interface
ICN Internal Communications Network
IM Information Model
IPC Inter-Process Communication
ISDN Integrated Services Digital Network
LAPD Link Access Protocol on D-channel
LAN Local Area Network
LLC Logical Link Control
MAC Medium Access Control
MAU Medium Attachment Unit
MCF Message Communications Function
MIB Management Information Base
MO Managed Object
NE Network Element
OS Operations System
OSI Open Systems Interconnection
PDH Plesiochronous Digital Hierarchy
PI Physical Interface
PM Performance Monitoring
ROSE Remote Operations Service Element
SDH Synchronous Digital Hierarchy
SDXC SDH Digital Cross-Connect
SEMF Synchronous Equipment Management Function
SMS SDH Management Subnetwork
SMUX SDH Multiplexer
SNI Switching Network Interface
SNPA Sub-Network Point of Attachment
SU Support Unit
TAU Termination Access Unit
TMN Telecommunications Management Network
UP Unit Processor

Background and Driving Forces

Today's transmission network typically consists of inflexible equipment without provision for remote reconfiguration, and fixed hard-wired point-to-point connections. This means that each change of configuration - when supplying a 2 Mbit/s leased line, for example - requires hard-wiring, which is time-consuming and therefore costly.

The Synchronous Digital Hierarchy (SDH) eliminates these disadvantages by providing flexible Network Elements (NE) capable of being configured remotely. This makes it possible to provide new broadband services - such as 2 Mbit/s leased lines - to customers in a short time and at low cost.

Another characteristic of today's transmission networks is that each vendor has his own management system with proprietary interfaces and functionality. This situation often necessitates adaptations when new equipment is introduced on a market, which is costly in terms of time, resources and money, both for the supplier and the operator.

One of the major driving forces behind SDH is improved and standardised management interfaces and functions, allowing the SDH Network Elements and Operations Systems (OS) to interwork in a multi-vendor environment [2, 3].

Benefits of SDH and ETNA

Functional overview

Introducing SDH in the transport network will improve operation and maintenance, and so reduce the operational cost for the Telecom operator. SDH also enables the operator to control the network more efficiently, in comparison with the conditions afforded by existing transmission systems.

SDH makes it possible to set up new connections from a remote site within a few seconds. This enables a Telecom operator to respond quickly to customer demands for new or higher capacity. It also reduces the operational cost because less manpower is required.

Self-healing networks can be configured in such a way that faults in the network - e.g. cable breaks - will not affect the traffic for more than a few milliseconds, or seconds, depending on the size of the network and the restoration principle applied. At present it can take hours, or even days, to locate the fault and take appropriate actions.

Two different principles are applied to protect the network against the effects of a fault: protection switching and protection routing.

Protection switching is performed by the SDH NE without assistance from a central network management system, such as Ericsson's FMAS. This gives very fast restoration but utilises network resources quite inefficiently.

It is anticipated that rings will be a commonly used network topology in SDH networks, especially in local networks. In the event of a cable break, the traffic can be restored by sending it the opposite way round the ring.

Protection routing requires assistance from the FMAS and takes a somewhat longer time (5-10 seconds) but can be used for any type and size of network. In this case alarm information is sent to the FMAS from all affected NEs. The FMAS analyses the fault situation and calculates a new, optimised way through the network. Cross-connect commands are then sent to the NEs to set up the new connection. If desired, the operator can set conditions for rerouting: that the route should not pass through a particular node, for example.
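As a toy illustration of this style of protection routing, the C sketch below picks, from a set of precomputed candidate routes, the first one that avoids both the failed node and any node the operator has excluded. All names are invented for the example; real FMAS route calculation over live alarm data is of course far more involved.

#include <stdio.h>
#include <string.h>

/* Candidate routes through the network for one affected path, as an
 * FMAS-side illustration; NULL terminates each node list. */
struct route { const char *nodes[4]; };

static const struct route candidates[] = {
    { { "A", "B", "C", NULL } },
    { { "A", "D", "C", NULL } },
};

/* Pick the first candidate that avoids both the failed node and any
 * node excluded by the operator's rerouting conditions. */
static const struct route *reroute(const char *failed, const char *excluded)
{
    for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++) {
        int ok = 1;
        for (int j = 0; candidates[i].nodes[j]; j++)
            if (!strcmp(candidates[i].nodes[j], failed) ||
                (excluded && !strcmp(candidates[i].nodes[j], excluded)))
                ok = 0;
        if (ok)
            return &candidates[i];
    }
    return NULL;   /* no restoration route available */
}

int main(void)
{
    const struct route *r = reroute("B", NULL);
    if (r) {
        printf("new route:");
        for (int j = 0; r->nodes[j]; j++)
            printf(" %s", r->nodes[j]);
        printf("\n");   /* cross-connect commands would now be sent */
    }
    return 0;
}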

The performance monitoring parameters enable the operator to identify potential problems before they cause degradation of end-user service. They also offer a tool for verifying the quality of the connection. This is important since many customers require a high guaranteed quality level, which makes it necessary to be able to measure that level. The parameters used conform to CCITT Recs. G.821, G.82x and G.784, Box 1.

An important issue is protection of the SDH NEs from unauthorised access. This becomes important in cases where available functions may cause serious problems if used incorrectly. Each operator must have a unique Userid and password, issued by the system administrator. He is also assigned one of several 'user categories'. The user category determines what functions the user is allowed to access. User categories - of which some examples follow - can be configured as desired:
- System manager: handling of user categories and database management
- Read access: only read access to management information
- Configuration manager: access to installation and configuration functions
- Data communication manager: handling of data communication facilities.
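As an illustration only, the C sketch below gates management functions on a user category. It assumes a strictly ordered privilege scale, which is a simplification - the real NE lets categories be configured as desired - and the category names, function names and helper are invented for the example.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical privilege levels, here forced into a linear order. */
typedef enum {
    CAT_READ_ACCESS,        /* only read access to management information */
    CAT_CONFIG_MANAGER,     /* installation and configuration functions   */
    CAT_DATACOM_MANAGER,    /* data communication facilities              */
    CAT_SYSTEM_MANAGER      /* user categories and database management    */
} user_category;

typedef enum { FN_READ, FN_CONFIGURE, FN_DCN_ADMIN, FN_USER_ADMIN } nm_function;

/* Lowest category allowed to invoke each management function. */
static const user_category required[] = {
    [FN_READ]       = CAT_READ_ACCESS,
    [FN_CONFIGURE]  = CAT_CONFIG_MANAGER,
    [FN_DCN_ADMIN]  = CAT_DATACOM_MANAGER,
    [FN_USER_ADMIN] = CAT_SYSTEM_MANAGER
};

static bool may_access(user_category cat, nm_function fn)
{
    return cat >= required[fn];
}

int main(void)
{
    printf("config manager may configure: %d\n",
           may_access(CAT_CONFIG_MANAGER, FN_CONFIGURE));  /* prints 1 */
    printf("read-only user may configure: %d\n",
           may_access(CAT_READ_ACCESS, FN_CONFIGURE));     /* prints 0 */
    return 0;
}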

BOX 1 PERFORMANCE PARAMETERS

In the future, the demand for high-quality connections will be even greater than today. It is therefore important to use relevant and accepted parameters when measuring and verifying the quality of connections. The quality parameter currently in use is Bit Error Ratio (BER), with alarm thresholds normally at 10^-3 or 10^-6. This is not good enough for data traffic. Another drawback of BER is that it does not give any information as to how faults are distributed in the time domain. Normally, faults are not distributed uniformly, but "burstily".

To meet these new requirements, CCITT has defined quality parameters in Rec. G.821:

ES   Errored Seconds           No. of errors during 1 s > 0
SES  Severely Errored Seconds  BER, measured during 1 s, > 10^-3
DM   Degraded Minutes          BER, measured during 1 min., > 10^-6
UAS  Unavailable Seconds       10 consecutive SES give 10 UAS

These parameters are initially intended for 64 kbit/s connections. Annex D to Rec. G.821 therefore defines how to deal with higher bit rates. The G.821 parameters - after rather animated discussions - have not been found to be the solution to new requirements imposed on quality parameters. For example, they are still based on BER.

A draft, Rec. G.82x, defines new quality parameters for bit rates higher than 64 kbit/s. The G.82x parameters will be used within SDH when the recommendation has been approved. The G.82x parameters are based on Errored Blocks (EB) instead of BER. One EB is a block that contains one or more errored bits.

The following parameters were defined in the G.82x draft of June, 1992:

ES    Errored Seconds               >= 1 EB during 1 s
ESR   ES Ratio                      The ratio of ES to the total number of seconds in available time during a specified measurement interval
SES   Severely Errored Seconds      >= Y % EB during 1 s (Y > 30 provisionally)
SESR  SES Ratio                     The ratio of SES to the total number of seconds in available time during a specified measurement interval
BBER  Background Block Error Ratio  The ratio of errored blocks to the total number of blocks, excluding all blocks during SES and unavailable time
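To make the G.821 definitions in Box 1 concrete, here is a minimal C sketch that classifies a stream of per-second error counts into ES, SES, DM and UAS. The 64 kbit/s rate matches the box; the counting loop, names and the simplified treatment of unavailable time (only its onset is modelled, and seconds are not excluded from ES/DM as the full recommendation requires) are illustrative assumptions, not the NE's actual PM implementation.

#include <stdio.h>

#define BIT_RATE  64000.0   /* illustrative 64 kbit/s connection           */
#define SES_BER   1.0e-3    /* BER threshold for a Severely Errored Second */
#define DM_BER    1.0e-6    /* BER threshold for a Degraded Minute         */

struct g821 {
    unsigned long es, ses, dm, uas;
    unsigned consec_ses;        /* run length of consecutive SES            */
    unsigned long min_errors;   /* errors accumulated in the current minute */
    unsigned sec_in_min;
};

static void g821_second(struct g821 *st, unsigned long errors)
{
    if (errors > 0)
        st->es++;
    if ((double)errors / BIT_RATE > SES_BER) {
        st->ses++;
        /* Simplification: 10 consecutive SES yield 10 UAS; ending
         * unavailable time after 10 consecutive non-SES is omitted. */
        if (++st->consec_ses == 10)
            st->uas += 10;
        else if (st->consec_ses > 10)
            st->uas++;
    } else {
        st->consec_ses = 0;
    }
    st->min_errors += errors;
    if (++st->sec_in_min == 60) {            /* close the minute, check DM */
        if ((double)st->min_errors / (BIT_RATE * 60.0) > DM_BER)
            st->dm++;
        st->min_errors = 0;
        st->sec_in_min = 0;
    }
}

int main(void)
{
    struct g821 st = {0};
    for (int s = 0; s < 120; s++)
        g821_second(&st, s < 12 ? 200 : 0);  /* a 12 s error burst */
    printf("ES=%lu SES=%lu DM=%lu UAS=%lu\n", st.es, st.ses, st.dm, st.uas);
    return 0;
}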


These functions and a number of other functions are described in greater detail and from the point of view of the SDH NEs in the section 'SDH Management'.

Mode of operation

The SDH NE can be managed from:
- FMAS, the management system for the Transport Network
- A Local Operator Terminal (LOT).

The FMAS is necessary for the management of large transport networks. The LOT is required during installation, and can also be used to manage a single NE or a small transport network.

All SDH NEs use a standardised interface to the FMAS. The protocols used are TMN (Telecommunications Management Network) Q3 interfaces as defined in CCITT Recs. Q.811 and Q.812, Box 2.

All SDH NEs also use a common Information Model (IM). The IM defines the syntax of the messages that are sent between the SDH NEs and the FMAS.

Fig. A Gateway Network Elements (GNE) may be connected to an OS. The GNE has an attached subnetwork of SDH NEs

BOX 2 Q3 INTERFACE

The Q3 interface provides for standardised communication and exchange of management information between an NE and an Operations System. The protocol suite and the information model must be defined when specifying a Q3 interface.

Gateway Network Element (GNE)
A GNE is connected to an OS via a Q3 interface, Fig. A. The GNE has an attached subnetwork of SDH NEs and provides remote access to these NEs by means of Embedded Control Channels (ECC). The GNE performs Intermediate System (IS) network layer routing functions for ECC messages destined to any NE within the subnetwork. When considering implementation, there is no difference between a GNE and any other SDH NE. They simply perform different roles in the OSI environment.

Embedded Control Channel (ECC)
The ECCs provide a high-capacity data communication network between SDH NEs, utilising dedicated bytes (DCC) in the STM-N Section Overhead as the physical layer. Two types of ECC have been defined in the SDH standards:
- ECCr: a 192 kbit/s data communications channel accessible by all NEs, including the intermediate regenerators
- ECCm: a 576 kbit/s data communications channel accessible by all NEs, excluding the intermediate regenerators

The ECC network is logically created by defining ECC network routes in the SDH transport network. Network Protocol Data Units (NPDU) are then routed according to address and routing information held locally in the NEs as routing tables, or terminated within the NE.

Table 1 SDH GNE Local Routing Table

NPDU Destination Address    Next Hop (SNPA)
"SDH NE"                    "STM-N ECC"
"OS"                        "Q3"
"SDH GNE"                   "Own Agent"

In the absence of standards, a set of Ericsson proprietary DCN management functions has been defined for the purpose of managing the routing tables and DCN resources.
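Read as data, Table 1 is a small destination-to-next-hop map. The C sketch below implements such a lookup; the string-valued addresses and function names are illustrative assumptions, standing in for real CLNP addresses and the NE's routing code.

#include <stdio.h>
#include <string.h>

/* One entry of a local routing table in the spirit of Table 1:
 * an NPDU destination mapped to the next-hop attachment point (SNPA). */
struct rt_entry {
    const char *destination;
    const char *next_hop;
};

static const struct rt_entry table[] = {
    { "SDH NE",  "STM-N ECC" },  /* forward into the ECC subnetwork       */
    { "OS",      "Q3"        },  /* forward towards the Operations System */
    { "SDH GNE", "Own Agent" },  /* terminate locally in this NE's agent  */
};

static const char *next_hop(const char *dest)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].destination, dest) == 0)
            return table[i].next_hop;
    return NULL;   /* unknown destination: NPDU is discarded */
}

int main(void)
{
    const char *hop = next_hop("OS");
    printf("NPDU for \"OS\" -> %s\n", hop ? hop : "(discarded)");
    return 0;
}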


SDH Management

General

SDH Management is based on TMN and OSI principles to allow for the building of an open network architecture. The basic concept behind TMN is to provide an organised architecture to achieve interconnection between various types of Operations System (OS) and/or telecommunications equipment for the exchange of management information, using standardised protocols and interfaces.

From a management point of view, the SDH NE can be considered from three different perspectives:
- A functional perspective
- An information model perspective
- A data communication perspective
each of these perspectives defining some of the aspects necessary for standardised multi-vendor operations.

Functional Perspective

The Functional perspective defines the management services that a single SDH NE can provide to a local operator, or to a network management system. In the TMN context, these functions are referred to as TMN management functions.

The TMN management functions belong to different management functional areas. Those relevant for SDH NEs are: Configuration Management, Fault Management, Performance Management and Security Management. In addition, a set of Data Communication Network management functions dealing with configuration of data communication resources has been defined.

Within ETNA, the SDH NE-related services are used by the FMAS to provide network-related services such as trail provisioning, protection routing, path performance monitoring, etc. Some examples of the most important TMN management functions provided are summarised below.

Configuration Management (CM)

Compared with traditional transmission equipment, the SDH NEs also include a switch which provides for the set-up of broadband semi-permanent connections.

The main CM task is to control this switch, but CM also controls other aspects of the configuration of the NE:
- Termination Point Provisioning: The different types of termination point, i.e. physical interfaces, trail termination points and connection termination points, can be configured in different ways, e.g. assigned identities, alarm thresholds, enabling and disabling of the laser, etc. The termination points are automatically created when the related printed board assemblies are inserted
- Equipment Configuration: The SDH NEs are in many ways self-configurable after installation or extension of the hardware, e.g. when new access ports are installed. The equipment configuration functions keep track of the equipment currently installed, e.g. printed board assemblies and software, and report to the OS if changes have been made
- Cross-Connect: The cross-connect functions set up connections through the switch and keep an up-to-date list of the cross-connections currently ordered (see the sketch after this list)
- Protection Switching: The SDH NEs can be configured to perform different types of autonomous protection switching (fast restoration) of paths or sections to dedicated standby network resources following a network failure
- Synchronisation Configuration: Each NE must be synchronised from a valid synchronisation source, e.g. a 2 Mbit/s signal, a 2 MHz reference or an STM-N signal. The synchronisation source configuration functions define the synchronisation source to be used, and what actions should be taken when the primary source fails
- NE Recovery: The SDH NEs contain a lot of data which must be administered, e.g. regularly backed up. During certain trouble conditions it may also be necessary to perform restarts at different levels.
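A minimal C sketch of the cross-connect bookkeeping referred to above. The fixed-size table, the termination-point naming and the function names are invented for the example; ordering the actual switch hardware is reduced to a comment.

#include <stdio.h>

#define MAX_CONNECTIONS 64   /* illustrative capacity only */

/* Hypothetical record of one semi-permanent cross-connection:
 * a bidirectional association between two connection termination points. */
struct cross_connection {
    char from_ctp[16];
    char to_ctp[16];
    int  in_use;
};

static struct cross_connection mib[MAX_CONNECTIONS];

/* Set up a connection through the switch and record it in the
 * up-to-date list of currently ordered cross-connections. */
static int cross_connect(const char *from, const char *to)
{
    for (int i = 0; i < MAX_CONNECTIONS; i++) {
        if (!mib[i].in_use) {
            snprintf(mib[i].from_ctp, sizeof mib[i].from_ctp, "%s", from);
            snprintf(mib[i].to_ctp, sizeof mib[i].to_ctp, "%s", to);
            mib[i].in_use = 1;
            /* here the real system would order the switch hardware */
            return i;
        }
    }
    return -1;   /* switch fabric exhausted */
}

int main(void)
{
    int id = cross_connect("VC12-1-3-7", "VC12-2-0-1");
    if (id >= 0)
        printf("cross-connection %d: %s <-> %s\n",
               id, mib[id].from_ctp, mib[id].to_ctp);
    return 0;
}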

Fault Management (FM)

Fault management provides functions for detection, isolation and correction of abnormal states in the network. This includes both network-related faults, resulting from cable breaks or deteriorating line systems, and abnormal conditions within the NEs themselves. The main FM task is to report to the OS upon detection of a serious fault in the network, but it also controls diagnostic and test routines:
- Alarm Surveillance: The SDH NEs have the capability to send alarm reports to the OS upon detection of a failure, and to store the alarms in an event log
- Fault Localisation and Testing: The SDH NEs can be ordered to perform loopbacks, error injection, self-diagnostics, etc.


Performance Management (PM)

Performance management deals with the functions necessary for an NE to collect, store, threshold, and report performance data associated with its monitored PDH and SDH trail terminations. All application-specific and optional parameters, as specified in CCITT Rec. G.784, are supported.
- PM Data Attribute Setting: Basic PM data attributes, such as threshold values, can be defined
- PM Data Reporting: The SDH NEs can send PM data reports to an OS, either when a defined threshold is exceeded (degraded or unacceptable performance level) or in accordance with predefined schedules
- PM Data Logging: PM data can be stored in logs within the NEs and fetched by the OS when demanded.

Security Management (SM)

Security management functions deal with user access control to protect the network against unauthorised access to resources and services.
- Login/Logout: A local operator trying to access the NE is checked against a user identity and a password
- User Categories: A user category defines different levels of function access privileges that can be assigned to a user. The lowest privilege level is read access, and the highest level is the super-user category
- Users: New users can be defined and assigned a user identity, password and user category. Users can also be deleted.

DCN Management

DCN Management provides the functions necessary to control and configure the data communication resources which allow communication to take place within the SDH Management Network.
- Network Node Configuration: Each SDH NE can be configured as a data communication node in the OSI environment, which means that addresses, application entity titles and names used locally must be defined
- Network Route Configuration: Network routes in the OSI context can be defined by means of routing tables and route priorities.

The Information Model Perspective

Another, more formal way of describing the operation and control functionality is in terms of an Information Model (IM). An IM is an object-oriented description, independent of the actual physical realisation of the Network Element resources and how these are managed.

Fig. 1 Model of Duplex Cross-connection
Ptr: Pointer; CTP: Connection Termination Point; Bi: Bidirectional

The Information Model consists of a collection of object classes, e.g. equipment, software, trail termination point, SDH switch fabric. The characteristics of an object class are specified in terms of:
- read/write attribute values in objects, e.g. values of configuration parameters, and relations to other objects, represented as lines between objects in Fig. 1 [2, 3]
- create/delete operations of objects
- actions that can be performed on the object
- notifications (i.e. spontaneous messages) sent from the object.

In an executing system, manageable resources are represented as instances of these object classes. The collection of instantiated objects is referred to as the Management Information Base (MIB) [2]. There are two types of object in the MIB: Managed Objects and Support Objects. A Managed Object (MO) represents a physical or logical resource in the Network Element. A Support Object (SO) represents a log or an alarm filter, for example.

The link between the TMN management functions and the IM is the implementation of each management function as one or more operations, actions or notifications in the objects that build up the MIB.

A Network Element has what is called a TMN Agent, which can be seen as a process (function) acting on behalf of the managing system(s), relaying messages in both directions, Fig. 2.
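As a schematic of that link, the C sketch below models one object class with a read/write attribute and a notification hook through which the agent would relay spontaneous messages. The record layout, names and dispatch style are assumptions for illustration only, not the NE's actual object machinery.

#include <stdio.h>

/* Schematic managed-object class: attributes that can be read or written,
 * and notifications the object can emit towards the manager. */
struct managed_object;

typedef int  (*mo_get_fn)(const struct managed_object *mo, const char *attr);
typedef void (*mo_set_fn)(struct managed_object *mo, const char *attr, int v);
typedef void (*mo_notify_fn)(const struct managed_object *mo, const char *ev);

struct managed_object {
    const char  *klass;       /* e.g. "trailTerminationPoint"       */
    const char  *instance;    /* name of this particular instance   */
    int          alarm_threshold;
    mo_get_fn    get;
    mo_set_fn    set;
    mo_notify_fn notify;      /* spontaneous messages to the manager */
};

static int ttp_get(const struct managed_object *mo, const char *attr)
{
    (void)attr;               /* single attribute in this sketch */
    return mo->alarm_threshold;
}

static void ttp_set(struct managed_object *mo, const char *attr, int v)
{
    (void)attr;
    mo->alarm_threshold = v;
}

static void ttp_notify(const struct managed_object *mo, const char *ev)
{
    /* in the real NE this would be relayed by the TMN Agent to the OS */
    printf("notification from %s: %s\n", mo->instance, ev);
}

int main(void)
{
    struct managed_object ttp = {
        "trailTerminationPoint", "ttp=1-4-2", 6, ttp_get, ttp_set, ttp_notify
    };
    ttp.set(&ttp, "alarmThreshold", 3);               /* manager write     */
    printf("threshold = %d\n", ttp.get(&ttp, "alarmThreshold"));
    ttp.notify(&ttp, "communicationsAlarm");          /* agent emits event */
    return 0;
}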

A generic Information Model is essential for the generation of management standards concerning configuration, fault, performance and security management functions. A common network model, identifying the generic resources in the network - in this case the Transport Network - and their associated attribute types, events, actions and behaviour, provides a basis on which to explain the interrelationships between these resources and the network management system. Without this common view, a multi-vendor telecommunication network will not be achieved.

The Information Model provided by Ericsson and describing the SDH NE complies with the information model developed by CCITT's SG XV SDH Model (G.774) [6], and with the SG IV Generic Network Information Model (M.3100) [5]. In general, the configuration management part is derived from G.774, while the fault management and performance management parts are derived from M.3100 and Q.821. By and large, SG IV follows the CCITT X.700 series of recommendations.

The SDH Information Model is a subset of the ETNA Information Model common to all Ericsson transport network elements connected to the FMAS, including PDH equipment and AXD 1/0 NEs.

The Data Communication Perspective

As well as providing telecommunications services, the SDH NEs also provide powerful data communications and network layer routing functions. The TMN function block that performs these functions is called the Message Communication Function (MCF).

The MCF is based on the OSI reference model, which makes it possible for an SDH NE to work as a data communication node in an open network architecture [7].

Each of Ericsson's SDH NEs can be equipped with a Q-interface and provide for Embedded Control Channel (ECC) access, which means that each NE can be connected to any Operations System that conforms to OSI and TMN standards, without the need for additional Mediation Devices or Q-adapters.

The MCF performs network layer routing functions between the Q-interface and any ECC subnetwork, or between any of the ECC subnetworks, Boxes 2 and 3.

Fig. 2 Interworking between Manager, Agent and Managed Objects


BOX 3 THE SDH NE MANAGEMENT ORGANISATIONAL MODEL

Management of SDH NEs is based on the management organisational model as outlined in Ref. 5. The model consists of the following TMN function blocks and components, Fig. A: the Network Element Function (NEF) including the Management Information Base (MIB), the Message Communication Function (MCF), and the Management Application Function (MAF) including the agent.

In addition, an MAF functional component containing a manager for local control of the NE has been defined. The local manager is housed in the local operator's terminal.

Agent
Part of the MAF which is capable of responding to network management operations issued by a manager, and of issuing notifications, e.g. event reports, on behalf of the managed objects

Manager
Part of the MAF which is capable of issuing requests for network management operations, e.g. request performance data, set thresholds, receive event reports, etc.

Local Manager
A manager which is housed in a local operator's terminal and is capable of managing a single network element

Management Application Function (MAF)
An application process providing TMN services. The MAF includes an agent, or a manager, or both. The MAF is the origin and termination of all TMN messages

Managed Object (MO)
The manager's view of a resource within the telecommunications environment that may be managed via the agent. Examples of MOs residing in an SDH NE are equipment, software, trail termination point, SDH switch fabric, alarm log, etc.

Message Communication Function (MCF)
Provides facilities for the transport of TMN messages to and from the MAF, as well as network layer routing functions

Network Element Function (NEF)
The entity within an NE that supports transport network based services, e.g. multiplexing, cross-connection, regeneration, etc. The NEF is represented to the manager as a set of managed objects.

Fig. A SDH NE Management Organisational Model


Graphical User Interface

General

One of the most important aspects of SDH NE management is the ease with which it can be operated.

It is true that a prime driving force behind the deployment of the Transport Network is to facilitate the management of network resources by way of centralised management - that is, with Q-interfaces and FMAS - but NEs will nevertheless be operated and maintained from the local site. Some of the reasons for local operation are:
- Backup when the OS, or the communication link to the OS, is down
- The network operator may wish to adopt a more decentralised management philosophy
- Certain management functions are more easily performed on site because they require physical manipulation of the equipment.

For this purpose a local operator's terminal with a Graphical User Interface (GUI) can be connected to the NE. The GUI is a window-based and mouse-operated interface through which the operator has access to all the management functions.

In order for an operator to gain access to the functionality, he has to prove his legitimacy by supplying an identification code and a password. This falls under Security Management, which also includes the possibility of a Super-user assigning operators to particular user categories with extensive or more limited privileges.

Configuration Management covers both physical configuration in an SDH NE - typically Printed Board Assemblies such as the Termination Access Unit - and a logical configuration of the capacities for switching and multiplexing. These different types of Configuration Management are supported by different graphical views, e.g. a physical view and a logical view.

When there is a fault of some kind - e.g. a Transport Network related alarm such as Loss of Frame Alignment or excessive Bit Error Ratio, or a fault related to HW or SW in the actual NE - the details and possible consequences are reported to the GUI. Fault Management, which deals with these functions, also covers testing and diagnostics.

Performance is continuously monitored, and the GUI presents the statistics graphically and/or in tabular form. These functions fall under Performance Management.

The local GUIs conform to design standards developed for user interfaces by TMOS User Interface Design Standards (TUIDS), which means that the OPEN LOOK™ user interface style is currently used for the SDXCs, but Motif will also be included. For the SMUXs, Microsoft Windows is used.

ETNA Harmonisation

The current SDXC Control System contains a GUI implemented on an IPX-type Sun workstation, but a portable terminal will also be provided. The GUI for SMUX is implemented on a PC 386/486. The fact that two different types of graphical terminal are used will not prevent 'look and feel' similarity between the systems, however.

The Graphical User Interfaces for SDH NEs will be harmonised with each other and with the GUI for FMAS. Fig. 3 shows a typical detail of the DXC user interface.

Control System Architecture

Introduction

The control system has to control Network Elements which vary in size from only a few connected cables to over 8,000. The functionality associated with each connected cable makes heavy demands on the control system processing capacity. Another important factor to consider is cost, which has to be kept very low for small systems.

A distributed computer architecture has been chosen to meet these requirements.

Fig. 3 SDXC User Interface: a window from the Configuration Management functional area


Each unit in the system has a powerful microprocessor, and a central master computer co-ordinates all the unit processors and provides input/output.

Unit processors enable the unit itself to perform a large amount of processing and so reduce the load on the central processor.

All processors are connected to an Internal Communication Network (ICN). Depending on the size of the Network Elements, different internal communication network structures have been implemented, Fig. 4.

The central processor houses the MIB, on which the OS and the local operator perform management operations. It will also implement parts of the MCF functionality, such as a network layer routing function between the Q-interface and the ECC subnetworks.

The central processor can vary in size from small and inexpensive one-board microprocessor-based systems to high-capacity redundant computers.

Each of the different printed board assemblies under the control of the Central Processor contains its own Unit Processor. The Unit Processor is a building block consisting of a microprocessor, memories, communication controllers and A/D and D/A converters when required.

The Unit Processors perform routine tasks on each printed board assembly, such as alarm surveillance, collection of performance parameters, and self-diagnostics. The Unit Processors also control the lower layers of the ECC protocol suite.

SDXC CONTROL SYSTEM ARCHITECTURE

General

The SDXC control system may be composed of a central processor and up to a couple of hundred Unit Processors. It uses a packet-switched Internal Communication Network (ICN) which is integrated with the switch.

Purchased hardware and software are used; the Central Processor, for example, is a UNIX computer.

The control system software modularity is ensured through the use of a layered structure combined with object-oriented techniques.

Programs can be downloaded from an Operations System all the way down to a Unit Processor.

Control of Switching Network

Besides Unit Processors and the Central Processor, the control circuitry of the SDXC switch, the SNC, is also connected to the Internal Communication Network. This means that all processors can set up and release cross-connections in the SDXC. This facility comes into use, for example, when rapid switching has to be performed due to a transport network fault discovered by an access unit. In this case the Unit Processor on the access unit immediately reconfigures the switch according to a predefined configuration previously communicated to the Unit Processor by the Central Processor. The Central Processor is always informed of the resulting configuration.
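The sequence just described can be sketched in C as follows; the port numbers, function names and the ICN/SNC stand-ins are invented for the example, not the actual UP software.

#include <stdio.h>

/* Predefined protection configuration, communicated in advance by the
 * Central Processor to the Unit Processor on an access unit. */
struct protection_config {
    int working_port;     /* port normally carrying the traffic     */
    int standby_port;     /* dedicated standby network resource     */
    int valid;            /* has the CP loaded a configuration yet? */
};

static struct protection_config cfg;

/* Stand-ins for the switch control circuitry (SNC) and the ICN. */
static void snc_reconfigure(int to_port)
{ printf("SNC: traffic rerouted to port %d\n", to_port); }

static void icn_inform_cp(const char *msg)
{ printf("ICN -> CP: %s\n", msg); }

/* Called by the CP, typically long before any fault occurs. */
void up_load_protection(int working, int standby)
{
    cfg.working_port = working;
    cfg.standby_port = standby;
    cfg.valid = 1;
}

/* Called from the access unit's alarm detection, e.g. on loss of signal.
 * The UP reconfigures the switch itself - no CP round trip is needed -
 * but the CP is always informed of the resulting configuration. */
void up_on_transport_fault(int failed_port)
{
    if (cfg.valid && failed_port == cfg.working_port) {
        snc_reconfigure(cfg.standby_port);
        icn_inform_cp("protection switch to standby performed");
    }
}

int main(void)
{
    up_load_protection(3, 7);
    up_on_transport_fault(3);
    return 0;
}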

Unit Processors

Unit Processors typically occupy part of a board in the SDXC. A unit in the SDXC normally consists of only one board. A Unit Processor includes a microprocessor chip and memory. Its current implementation is a Motorola 68302.

A Unit Processor continuously performs tasks such as monitoring of hardware and calculation of bit errors, but at the same time it must react quickly to events such as incoming alarms from the transport network. The Unit Processor is therefore equipped with a real-time operating system kernel, OS 68, which gives response times on a microsecond level.

Central Processor

The Central Processor is the coordinator and master of the complete control system. It continuously monitors the Internal Communication Network to find new Unit Processors. Local operators have access to the SDXC system via graphical workstations. These are connected to the Central Processor via an Ethernet Local Area Network, Fig. 5. Both the Central Processor and the operator workstations are UNIX machines based on SPARC architecture, the Central Processor being a Sun SPARC2 and the operator terminals Sun IPX.


Fig. 4 Control System Architecture
CP: Central Processor; UP: Unit Processor; ICN: Internal Communication Network; SNC: Switching Network Controller (SDXC only)


Fig. 5 The CP is connected to the ICN by a separate Ethernet. All control signals are routed through a centralised, triplicated switch and embedded in the traffic signal
CTU: Control Termination Unit


The Central Processor is connected to the Internal Communication Network by a separate Ethernet. It is separate to ensure that there is adequate capacity and security for communication with the Unit Processors. Ethernet, being a standardised interface, enables a change of Central Processor supplier without any interface boards having to be redesigned.

The next version of the CP will consist of basic and optional plug-in units, forming a modular system connected to a high-speed VME backplane and mounted in a standard subrack suitable for telecommunication purposes.

Internal Communication Network

All processors are connected to the Internal Communication Network. It enables all processors to communicate with each other and with the switch control circuitry.

The Internal Communication Network employs packet-switching techniques which enable processors to communicate using different data rates. The Central Processor uses 2 Mbit/s and Unit Processors 0.5 Mbit/s.

The internal cabling that carries the SNI signals used for transport of traffic information within the SDXC is also used as a part of the Internal Communication Network. Thus, expansion of the SDXC with new equipment at the same time increases the capacity of the communication network to cater for new Unit Processors. Ongoing communication is not affected by this expansion.

By utilising the SNI signals, the Internal Communication Network benefits from the reliable triplicated structure of the Switch. A single failure cannot cause a complete failure of the Internal Communication Network.

All control signals are routed through a centralised packet switch. This is triplicated and each part resides on a switch plane, Fig. 5.

The Central Processor gives or refuses permission to communicate within the Internal Communication Network and keeps a record of all ongoing communication. In fault situations, all the processors involved stop communicating and the fault is cleared by the Central Processor.

Communication is normally between the Central Processor and the Unit Processors, but for time-critical operations direct communication between Unit Processors can be used, e.g. for fast protection switching.

Broadcast messages can be distributed over the Internal Communication Network. In this case a message is sent from a processor and then duplicated by the communication network and sent to all access points where a processor may be connected. This facility is utilised by the Central Processor to find out whether any new Unit Processors have been connected to the Internal Communication Network. This happens, for example, when a magazine is equipped with a new interface unit. The Central Processor sends a broadcast message which is answered by all new Unit Processors. Another possibility with the broadcast facility is for the Central Processor to distribute calendar date and time to all Unit Processors simultaneously.


Fig. 6 Software Architecture
CSA: Control System Application; CSP: Control System Platform


The use of broadcast greatly reduces the load on the Central Processor, since the single message sent by the Central Processor is multiplied and distributed by the packet-switched nodes within the Internal Communication Network.
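A toy model of the discovery sequence in C: one broadcast, fanned out by the ICN, answered only by UPs the CP has not yet registered. The data structures are invented; the real exchange runs over the packet-switched ICN protocols.

#include <stdio.h>

#define ACCESS_POINTS 8

/* State per ICN access point; a UP may or may not be plugged in. */
struct up_slot { int present; int registered; int id; };

static struct up_slot slots[ACCESS_POINTS] = {
    { 1, 1, 101 }, { 1, 0, 102 }, { 0, 0, 0 }, { 1, 0, 103 },
};

/* Models the fan-out done by the ICN nodes: every present but not yet
 * registered UP answers the CP's single broadcast message. */
static void broadcast_who_is_new(void)
{
    for (int i = 0; i < ACCESS_POINTS; i++)
        if (slots[i].present && !slots[i].registered) {
            printf("UP %d answers the broadcast\n", slots[i].id);
            slots[i].registered = 1;   /* CP records the new UP */
        }
}

int main(void)
{
    broadcast_who_is_new();   /* e.g. after a magazine gains a new unit */
    broadcast_who_is_new();   /* no new UPs this time: nobody answers   */
    return 0;
}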

A set of rules defines the way communication between the application programs in the Central Processor and the Unit Processors is accomplished. The rules specify the size of packages, priority handling between different messages, actions to be taken when messages or parts of messages are lost, etc.

All these rules are designed into four protocol layers. Together these four layers hide all Internal Communication Network implementation details from the application. The protocols are implemented as separate software modules, and one layer can therefore be modified without affecting the others. The same protocol software is used both in the Unit Processors and in the Central Processor. It is written in the C programming language.

Fig. 7 The CSA Architecture, with its layered structure

Software Architecture

General

By definition, all software in the SDXC belongs to the Control System. Its purpose is to provide Operation, Administration, Maintenance and Provisioning functions via the external control interfaces, namely the Graphical User Interface and the Q-interface. The software is physically distributed between the Central Processor and the Unit Processors.

There is a basic architectural distinction between platform-oriented and application-oriented software (referred to in Fig. 6 as CSP and CSA respectively).

Control System Platform

The Control System Platform (CSP), together with the computer hardware and its Operating System - currently UNIX SVR4.1.2 - provides a platform that offers the following services:
- Internal Communication: IPC, i.e. Inter-Process Communication (inter- as well as intra-processor) by way of 'sockets'
- External Communication: provision of an Application Programmer's Interface (API) for communication with external management applications. These services use open, standardised, 7-layer OSI stacks
- Inventory: services for checking consistency between software and hardware configuration
- Program loading: services for loading of software packages from the local sites as well as from the centralised network management system (FMAS)
- Process handling: functions to enable supervision of processes
- Run-time fault reporting: functions to enable the reporting of internal DXC Control System faults to a management system
- Restart: upon the detection of a Control System fault, the nature and seriousness of the fault are evaluated and the system is subsequently restarted from a well-defined point of execution.
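The socket-based IPC service can be pictured with plain POSIX primitives, as in the sketch below. This shows ordinary UNIX usage of socketpair and fork between two cooperating processes, not the CSP's actual API, whose calls are not documented here.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Two cooperating control-system processes exchanging a message over a
 * socket pair, in the spirit of the CSP's socket-based IPC service. */
int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }
    if (fork() == 0) {                       /* child: one CS process   */
        const char *msg = "crossConnect ttp=1 ttp=2";
        write(sv[1], msg, strlen(msg) + 1);  /* hypothetical request    */
        _exit(0);
    }
    char buf[64];                            /* parent: peer CS process */
    ssize_t n = read(sv[0], buf, sizeof buf);
    if (n > 0)
        printf("received request: %s\n", buf);
    return 0;
}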

Control System Application

The DXC CS application software constitutes all Transport Network oriented functionality. For a static description of the CSA, three aspects of the functionality are taken into account, thereby creating a layered structure, in order to isolate different dependencies and stimulate a modular design, Fig. 7.
- The User Layer contains the functionality of the Graphical User Interface. This layer of software will develop and change considerably due to different market needs and new tools for graphical presentation. It is therefore separated from the underlying software by an interface called the Controller Interface (CI), which can be said to represent the functionality provided by the TMN Layer and offered to management systems, such as a centralised OS or a local GUI
- The TMN Layer consists of functions for managing Network Elements specified in an object-oriented Information Model. An Information Model is becoming the standard way of specifying the manageable resources of a Network Element in the Transport Network. CCITT Recommendation G.774 describes the basis for the SDH NE Information Model, and thus forms the foundation for the DXC CS Telecommunication Management (TMN) Layer. It will necessarily develop over the years, particularly since different customers will require their SDH NE deployment in stages not coherent with CCITT Recommendation G.774 releases. For the purpose of isolating these dependencies, the Information Model aspects are singled out in the TMN Layer
- The System Layer: for SDH NEs, CCITT Recommendations G.781-G.783 specify - through the use of Functional Block descriptions - the functionality that must be provided. The descriptions are a reasonably stable set of requirements and are singled out in the DXC CS in the System Layer so as to be distinguished from the management aspects. The object-oriented approach is used here too, i.e. the System Layer consists of a number of objects.

The software described above is mapped onto the computer platform as shown in Fig. 8.

From an application programmer's point of view, the Q-interface and the F-interface are handled in a similar way. The protocols are specified in the Controller Interface. OSI standards for service specifications (CMISE) are utilised. The Q-interface is a full 7-layer OSI stack, while the F-interface uses the IPC mechanism provided by the Control System Platform.

Fig. 8 Mapping of the CSA software onto the computer platform


Functional requirements

Certain DXC Control System functional requirements must be taken into account when designing the software:
- Several users: A DXC may be equipped with more than one GUI plus a Q-interface, and all of these interfaces must be able to operate simultaneously. This requires concurrency, which is implemented through several processes working on the objects (the MIB)
- Real-time characteristics: The DXC Control System must be event-driven, in the sense that alarm detection mechanisms and subsequent evaluation and filtering are in some cases required to result in autonomous reconfigurations within a certain time
- Data storage and consistency: The information in the objects contained in the MIB - representing everything of relevance to the management system - adds up to a considerable amount of data, the volume of which requires the use of disk storage. The information on the disk is also used for backup purposes. Typically, 1 GByte is used in a normal configuration. Since the MIB is the image used by the management system to represent the Network Element's manageable resources, it is of course of extreme importance that the data is consistent with the current configuration
- Availability and robustness: It is through the DXC Control System that a management system exerts its network control. This control might for example involve the setting up of a digital path through the Transport Network to provide end-users with data communication capacity. Unless the DXC CS has very high availability - a system which is robust, i.e. resistant to faults - the end-users will not receive their services, which ultimately leads to reduced revenues for the network operator.

Since different operators are allowed to operate the system simultaneously, it is essential to ensure that a consistent MIB is maintained. The operations are regarded as one or several transactions. Should a system fault occur, preventing a transaction from being carried out, a roll-back to a well-defined state is performed. This means, in some cases, that part of the MIB must be locked during a transaction. What constitutes a transaction and what has to be locked is the responsibility of the TMN Agent functionality.
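The transaction discipline can be sketched as lock, work on a shadow copy, then commit or discard, as in the C example below. The MIB fragment and step function are invented, and the real TMN Agent's locking and roll-back are considerably more elaborate.

#include <stdbool.h>
#include <stdio.h>

/* A fragment of the MIB touched by one transaction. */
struct mib_part {
    char cross_connect[32];
    bool locked;
};

/* Apply an operation as a transaction: lock the MIB part, work on a
 * shadow copy, and commit only if every step succeeded. On failure the
 * shadow is discarded, i.e. a roll-back to a well-defined state. */
static bool transaction(struct mib_part *part, const char *new_value,
                        bool (*step)(const char *value))
{
    struct mib_part shadow = *part;
    part->locked = true;
    snprintf(shadow.cross_connect, sizeof shadow.cross_connect,
             "%s", new_value);
    bool ok = step(shadow.cross_connect);        /* may fail mid-way */
    if (ok)
        snprintf(part->cross_connect, sizeof part->cross_connect,
                 "%s", shadow.cross_connect);    /* commit */
    part->locked = false;
    return ok;
}

static bool always_fails(const char *value) { (void)value; return false; }
static bool succeeds(const char *value)     { (void)value; return true;  }

int main(void)
{
    struct mib_part p = { "old-connection", false };
    transaction(&p, "new-connection", always_fails);
    printf("after failed transaction:     %s\n", p.cross_connect);
    transaction(&p, "new-connection", succeeds);
    printf("after successful transaction: %s\n", p.cross_connect);
    return 0;
}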

Also, to allow for the shortest possible reaction times, i.e. to provide the event-driven real-time functionality, lengthy executions must be divided into Transactions.

Fig. 9 The SMUX Control System Architecture
MCF: Message Communications Function; SEMF: Synchronous Equipment Management Function; S: S-interfaces as specified in CCITT Recs. G.782-G.783


Data related to the Information Model can be stored by using one of several possible techniques:
- Object DataBase Management System, ODBMS: This solution is very attractive in the sense that it provides persistent objects, which is exactly what the Information Model specification suggests. The difference between the implementation of the system and the way it is described in an Information Model is less than in the other solutions. Also, the amount of design and implementation work is considerably less
- Relational DataBase Management System, RDBMS: This solution also limits the amount of design and implementation work because it provides mechanisms - as an ODBMS does - for data storage, transaction handling, roll-back, etc. However, a translation (design) must be made from the object-oriented specification to the world of tables in a relational database
- Ordinary files: This solution is 'the hacker's choice'. It is straightforward in that it uses only what the Operating System and the programming language provide
- Class library: This represents something of a compromise. It provides basic data persistence functionality for objects.

Implementation Structure

The network resource represented by a Managed Object is partly implemented in hardware, and the software is divided between the Central Processor and Unit Processors. To utilise the distributed processor structure, which in a DXC of normal size means one CP and some 100 UPs or more, as much as possible of the functionality is delegated to the UPs. The Internal Communication Network, ICN, allows UPs to communicate directly without involving the CP, which means that real-time functions such as protection switching and network synchronisation can be handled in the UPs.

The DXC Control System developed for the German PTT by Ericsson together with the German companies FUBA and DeTeWe - its partners in the FLEXNODE consortium, which will install AXD 4/1 and AXD 1/0 in 6 sites - is regarded as a prototype system and does not show all the characteristics mentioned as requirements above.

SMUX CONTROL SYSTEM ARCHITECTURE

General
The SMUX Control System is implemented mainly by programs executed on a Central Processor (the Support Unit, SU) as well as on Unit Processors (UPs) distributed on each transmission printed board assembly within the NE. The SU is a one-board processor common to the whole range of Ericsson SDH multiplexers and optimised for small NEs. The SU may be common to a number - normally two - of SDH multiplexers.

The SU has an overall responsibility for management of the NE and receives and evaluates management operations issued from an OS or a local operator. As a response to events detected in the network, the SU issues notifications, e.g. alarm messages, to the OS. The Q3 and ECC protocol suites, and the F-interface, are also controlled by the SU.

Functions of a simple nature but with a high repetition rate, e.g. scanning of binary indications, alarms, calculation of PM parameters and operations in close connection with transmission hardware, are performed by UPs.
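This division of labour can be pictured as follows; the sketch (hypothetical Python, with invented helper names read_alarm_register and report_to_su) shows a UP-style scan loop that condenses high-rate binary indications into PM parameters and only notifies the SU of changes:

import time

def up_scan_loop(read_alarm_register, report_to_su, period_s=0.1):
    """Unit Processor style scan loop (sketch): poll binary indications at a
    high repetition rate, count errored seconds as a PM parameter, and
    notify the SU only when something changes."""
    errored_seconds = 0
    previous = 0
    while True:
        bits = read_alarm_register()         # fast access to hardware registers
        if bits:
            errored_seconds += 1             # condense raw indications into PM data
        if bits != previous:                 # event-driven: report changes only
            report_to_su({"alarms": bits, "errored_seconds": errored_seconds})
            previous = bits
        time.sleep(period_s)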

The implementation of the SMUX Control System, mapped onto CCITT Recs. G.782-G.783, is shown in Fig. 9.

The SEMF is a function block which sends and receives data on low-level management functions to and from the transmission-oriented function blocks.

The MCF is implemented as a protocol machine on the SU, and the SEMF is implemented as software both on the SU and on the UPs. The Management Information Base (MIB) is located on the SU, while the S-interfaces, as specified in CCITT Recs. G.782-G.783, are implemented as an internal processor bus between the UP and hardware registers on the transmission printed board assemblies.

Commercially available products supplied by Retix are used to implement the Q3 and ECC protocol suites.


Fig. 10
SMUX Control System Hardware Implementation
TAU  Termination Access Unit
MAU  Medium Attachment Unit
AUI  Attachment Unit Interface
LAN  Local Area Network

Hardware Platform
The management subsystem hardware platform consists of processors at two different levels:
- Central Processor (SU)
- Unit Processors (UP).

In addition, the following equipment may be required for control and operation:
- IBM-compatible 386/486 PC (Local Operator's Terminal)
- MAU (Ethernet transceiver) for connection to a DCN of LAN type.

The SMUX Control System Hardware Architecture is illustrated in Fig. 10.

The SU communicates with the UPs via an internal ISO 8482 bus, which is similar to RS-485.

The SU implementation is mainly based on the following circuits:
- CPU
- Ethernet controller (Q3-interface)
- RS-232 Communication controller (F-interface)
- LAPD Communication controllers (internal communication)
- Relay contacts (Station alarm interfaces)
- Detection logic (External alarm interfaces)
- Program memory
- Data memory
- Backup memory for non-volatile data.

The UP is a general hardware building block common to all transmission printed board assemblies. The UP implementation is mainly based on the following circuits:
- CPU
- LAPD communication controllers (ECC and internal communication)
- A/D and D/A converters, for measurement of laser characteristics, such as input power and laser bias current
- Test interface (gives an authorised user access to the UP software)
- Program memory
- Data memory
- Backup memory for non-volatile data.

Not all UP circuits have to be present on every transmission printed board assembly.

The CPU used both for the SU and the UPs is the Motorola MC68302, a microprocessor optimised for data communication (ISDN) purposes.

An IBM-compatible 386/486 PC is used as the Local Operator's Terminal. The LOT provides the operator with a Network Element view. It is also possible to manage small networks from the LOT, although without a network view.

To manage a network from an LOT without assistance from the FMAS, communication over the ECCs is used, Box 2. By using the ECC it is possible for an LOT to exchange messages with any other SDH NE within the SDH network. This possibility (of accessing remotely located SDH NEs) is referred to as Remote Login. It will be a valuable feature, especially for early field trials and installations without a complete Network Management System.

Software Architecture
The SU and UP software in an SMUX forms a loosely coupled, distributed software system organised into a layered structure, where each layer has a well-defined task, Fig. 11.

Additionally, PC software which is not indicated in Fig. 11 is required for the Graphical User Interface.

The SU software layers and their tasks are as follows:


Fig. 11 SMUX Software Architecture

•   The SU application software communicates peer-to-peer with the UPs' application software
• » The SU communicates with the UPs by using an ISO 8482 backplane bus


- User Access layer
Provides external access for an OS or local operator to the management view of the SDH multiplexer. Contains the data communication services for the F, Q3 and ECC interfaces

- TMN layer
Provides the generalised TMN management view of the SDH multiplexer in the form of a Management Information Base with the managed objects, their attributes, actions, and emitted notifications

- SMUX layer
Contains a logical SDH multiplexer as specified in CCITT Recs. G.782-G.783. This logical multiplexer can be controlled from the TMN layer and has the standardised automatic behaviour for protection switching and change of synchronisation source

- Magazine layer
Manages all hardware units in the magazine so that they provide the logical SDH multiplexer transmission services requested by the SMUX layer, using the available units and signal interconnections in the magazine

- Unit layer
Manages each hardware unit in the magazine individually, ordering changes in the unit, receiving events from the unit and ensuring that the UP software is consistent with the SU software

- Base layer
Provides process management and communication, drivers for SU I/O ports and communication services between the SU and UPs

- Virtual machine layer
Provides a real-time, multi-task virtual machine on bare machine hardware. Contains the Operating System kernel and low-level hardware interfacing to the SU hardware.

The UP software layers and their tasks are as follows:

- Unit layer
Manages each hardware unit individually, making changes on the unit according to orders from the SU, and reporting events from the unit

- Base layer
Provides process management and communication, drivers for I/O ports on the unit, and communication services between the UP and the SU

- Virtual machine layer
Provides a real-time, multi-task virtual machine on bare machine hardware. Contains the Operating System kernel and low-level software interfacing to the transmission circuits.

The PC application software and its purpose are as follows:

- Graphical User Interface (GUI)
Provides the local operator with a graphical user interface.

Summary
The control and operation of SDH Network Elements is and will continue to be adapted to the evolving TMN standards. This facilitates their connection to centralised Operations Systems.

The implementation of the control system takes into consideration the various demands of Network Elements, ranging in size and complexity from small SMUXs up to large SDXCs.

References
1  Tarle, H.: FMAS - An Operations Support System for Transport Networks. Ericsson Review 67 (1990):2, pp. 163-182.
2  Widl, W.: CCITT Standardisation of Telecommunications Management Networks. Ericsson Review 68 (1991):2, pp. 34-51.
3  Widl, W. and Woldegiorgis, K.: In Search of Managed Objects. Ericsson Review 69 (1992):1/2, pp. 34-56.
4  Bergkvist, J. A., Evangelisti, G. and Hopfinger, J.: AXD 4/1, a Digital Cross-Connect System. Ericsson Review 69 (1992):3, pp. 78-88.
5  CCITT Draft Rec. M.3010: Principles for a Telecommunications Management Network.
6  CCITT Rec. G.774: SDH Network Information Model for TMN.
7  CCITT Rec. X.200: Reference Model of Open Systems Interconnection for CCITT Applications.


AXD 4/1, a Digital Cross-Connect System

Jan A Bergkvist, Giovanni Evangelisti and Jan Hopfinger

AXD 4/1 is one of the new cross-connect products in the ETNA - Ericsson Transport Network Architecture - concept. The system is designed to meet the different needs of an evolving network that will include both SDH and PDH equipment. The authors describe the system architecture and the present system implementation.

Fast service provisioning, supervision providing guaranteed high-quality leased lines with capacities ranging from 2 Mbit/s up to VC-4 (155 Mbit/s), fast network configuration and better network administration are some of the benefits of an AXD 4/1³.

digital communication systems telecommunication networks telecommunication network management

The AXD 4/1 is a digital Cross-Connect system which terminates digital signals and cross-connects these signals or their tributary parts. The extensive switching and supervision facilities of the AXD 4/1 system make it suitable for a wide range of applications involving network provision and network protection.

The AXD 4/1 is a vital component in the ETNA network solution, offering fast service provisioning and high availability.

The AXD is a switch that differs from an ordinary telephony switch in three ways:
- It is controlled by commands from an operating system or an operator, and not by embedded control information in the transmitted signals
- The holding times for a connection are days or weeks, as compared with minutes for a telephony switch
- The bandwidth of the switched signals is in the range 1.5-155 Mbit/s, compared with 64 kbit/s.

Functions
The AXD 4/1 system replaces the manual distribution frames and multiplexers used in the present network.

The AXD 4/1 cross-connects signals at all VC levels (VC-12 to VC-4) according to the SDH standard, corresponding to 2-155 Mbit/s. Both PDH and SDH signals can be terminated and cross-connected simultaneously in the same system, Table 1.

The specification of the system's internal interfaces allows all sixty-four 2 Mbit/s and all four 34 Mbit/s signals in a 140 Mbit/s signal to be used. This means that the introduction of SDH in the PDH network will entail no network restrictions.
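These figures follow from the CEPT multiplexing steps, in which each PDH level combines four tributaries of the level below; a small check in Python (illustrative only):

# CEPT/PDH multiplex hierarchy: 140 Mbit/s <- 4 x 34 <- 4 x 8 <- 4 x 2 Mbit/s
tribs_34_per_140 = 4
tribs_8_per_34 = 4
tribs_2_per_8 = 4

print(tribs_34_per_140)                                   # 4 x 34 Mbit/s signals
print(tribs_34_per_140 * tribs_8_per_34 * tribs_2_per_8)  # 64 x 2 Mbit/s signals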

In addition to the cross-connect functions, the AXD 4/1 adds extensive supervision functionality to the network. All terminated signals are continuously supervised for performance and faults.

Fig. 1 Operators working with the AXD system


Abbreviations

ASIC    Application Specific Integrated Circuit
AXD     Ericsson's DXC products
BCP     Basic Control Protocol
BiCMOS  Bipolar and CMOS
CEPT    European Telecom Standard
CMI     Code Mark Inversion
CP      Central Processor
CS      Control Store
CSB     Clock generation and Synchronisation Board
CTU     Control Termination Unit
DMB     Distribution Matrix Board
DXC     Digital Cross Connect
ECC     Embedded Control Channel
ESU     External Synchronisation Unit
ETNA    Ericsson Transport Network Architecture
FMAS    Facility Management System
FPGA    Field Programmable Gate Array
HCB     Horizontal Control Board
HCMOS   High Speed CMOS
HPB     Horizontal Power Board
HW      Hardware
ID      Identity
MV      Majority Vote
NAS     North American Standard
PBA     Printed Board Assembly
PCB     Printed Circuit Board
PDH     Plesiochronous Digital Hierarchy
PROM    Programmable Read Only Memory
PSU     Protection Switching Unit
RAM     Random Access Memory
SDH     Synchronous Digital Hierarchy
SMB     Switch Matrix Board
SN      Switching Network
SNI     Switching Network Interface
SS      Speech Store
STM     Synchronous Transport Module
SW      Software
SWC     Switching Cabinet
SWM     Switching Module
TAU     Terminal Access Unit
TCU     Terminal Connection Unit
TS      Time Space Switch
UP      Unit Processor
VC      Virtual Container

Combined with a central network management system, such as Ericsson's FMAS, the network can be strikingly improved.

Features
The AXD 4/1 is a powerful component for use in a variety of transport network applications. Fast delivery of broadband services as well as better utilisation of network resources are the most important benefits. Some paramount features of the AXD 4/1 are:
- Non-Blocking Synchronous Switching
- Network Access at Various Levels
- PDH/SDH Gateway Facility
- Network Synchronisation Transparency
- High Availability.

To ensure flexibility in the network, the AXD 4/1 system has full connectivity between all input and output ports by using a non-blocking switch structure. The non-blocking characteristic is guaranteed for all mixtures of signal levels and also when different mixtures of broadcast connections are handled. It is of particular importance to avoid restrictions in the network when introducing cross-connect systems.

The AXD 4/1 is based on synchronous switching, using the SDH concept, and is therefore capable of cross-connecting both PDH and SDH signals without introducing slips or violating the timing information. This makes for great flexibility in terms of cross-connect functionality. The internal format of the system allows the use of all 34 Mbit/s and all 2 Mbit/s tributaries in a 140 Mbit/s signal, or any mix of these tributaries. This means that the AXD 4/1 system is well suited to function as a gateway between the plesiochronous and the synchronous networks without introducing any restrictions in today's PDH network.

The large traffic capacity makes availability a key concept; a triplicated structure has been chosen to achieve extremely high availability.

JAN A BERGKVIST
JAN HOPFINGER
Ericsson Telecom AB, Sweden
GIOVANNI EVANGELISTI
Ericsson-FATME SpA, Italy

Table 1 Access and switch levels


Fig. 2 AXD 4/1, System Architecture

The triplicated structure includes both the Switch and the internal connections throughout the system. The triplication is terminated by majority vote at bit level and results in a system with efficient fault detection, efficient fault isolation and excellent availability. The cross-connected signals are completely unaffected by any single fault within the triplicated structure.

System Architecture
Unlike most switching systems, the AXD 4/1 is functionally divided into only two structural parts, Devices and the Switch, Fig. 2. In order to support this structure, the system architecture has been built on three corner-stones:
- Standardised interfaces
- Integrated control paths
- Triplicated Switch.

The purpose of the Devices is to interface external signals and produce a logical, switchable signal, whereas the Switch performs the task of switching this latter type of signal between different Devices. Thus, switching is transparent and does not involve any data processing, since this is handled by the Devices. The Central Processor of the AXD 4/1 is also connected as a Device to ensure full flexibility.

Internal Interfaces
The 184 Mbit/s SNI-4, Box 1, has been designed to function as the internal interface between the Devices and the Switch. This interface includes both traffic and control data.

The SNIs function as connecting devices to the Switch by carrying all information needed: traffic, timing, and control data. The transmission signals are carried and cross-connected in a circuit-oriented way, while the system's internal control data is transported and switched as packets.

One to eight columns of the SNI-4 can be configured to carry either packet-switched or circuit-switched data. The rest of the SNI-4 capacity, except one column, can be circuit-switched.

In order to effectively support Devices whose bandwidth demand is substantially lower than 155 Mbit/s, an interface version with lower bandwidth (SNI-3) is supported by the Switch. The SNI-3 is structured in the same way and carries the same type of information as the SNI-4. An adaptor, the Terminal Connection Unit (TCU), extends the architectural structure of the triplicated Switch, Fig. 4. The TCU provides multiplexing of four SNI-3 signals to a single SNI-4 interface.

Integrated Control Paths
The communication between different processors in the system (a common Central Processor for the whole AXD and one Unit Processor at each Device) uses capacity in the internal SNI interfaces. By using these integrated paths for distribution of control information, the same redundancy and maintenance as for traffic signals are obtained, which gives more reliable communication than any bus structure, regardless of redundancy structure. Since the control capacity of the SNI-4 can be adjusted, each Device will have exactly the capacity it needs. In addition, installation and internal cabling are easy. Only one connection has to be established to put a new Device into operation, and there is only one connection to supervise.

Triplicated Hardware Structure
The Switch is implemented by using a system structure consisting of three identical planes working in parallel. All three planes use the same input and perform the same functions in perfect synchronisation.

The triplication is originated and terminated at Devices and at all processors within the triplicated structure. Termination of the triplication is by majority vote at bit level, continuously for all signals.
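Bit-level majority voting over three planes reduces to a simple Boolean function: each output bit is 1 when at least two of the three plane bits are 1. A minimal sketch (illustrative Python, not the actual voter hardware):

def majority_vote(plane_a: int, plane_b: int, plane_c: int) -> int:
    """Bitwise 2-out-of-3 vote over three switch-plane words: a single
    faulty plane can never change the voted result."""
    return (plane_a & plane_b) | (plane_a & plane_c) | (plane_b & plane_c)

def faulty_plane_bits(plane: int, voted: int) -> int:
    """Bits where one plane disagrees with the vote, pointing out the faulty plane."""
    return plane ^ voted

# One plane (c) delivers a corrupted word; the voted output is unaffected:
a, b, c = 0b10110100, 0b10110100, 0b00111100
assert majority_vote(a, b, c) == 0b10110100
assert faulty_plane_bits(c, majority_vote(a, b, c)) == 0b10001000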

The triplicated Switch structure gives the following characteristics:
- 100% fault detection
- 100% fault localisation
- 100% fault isolation.

All single faults within the triplicated structure are immediately detected by the majority vote circuit, and the affected plane is directly indicated. The majority vote isolates the fault, and any disturbances originating from a plane are automatically filtered. This means that the switched signal will never be disturbed, which is of the utmost importance for high-quality network service to be obtained. When the faulty plane has been identified, a plane comparison/majority vote is performed on sections of the planes to indicate the faulty board.


Triplication results in a more robust system, with less complex maintenance functionality compared with that of a duplicated system, and makes it easier to upgrade functionality in the future.

Processor distribution
The Control System in AXD 4/1 consists of one common Central Processor (CP) for the whole AXD and one Unit Processor (UP) at each Device, i.e. one board or a Switch plane. The CP performs all control functions and handles the communication to/from FMAS via a Q-interface. The CP can be a redundant or single machine, with a single or redundant connection to the rest of the system, to suit different availability needs.


Box 1

Switching Network Interfaces

SNI - General
To minimise the number of internal interfaces in the AXD, only Switching Network Interfaces (SNI) are allowed between system parts. Two different SNIs are specified today:
SNI-4  the only format allowed between SN and High Speed Devices, and between SN and TCUs
SNI-3  the only format allowed between TCU and Low Speed Devices.

SNI-4 Characteristics

Data rate  163.84 Mbit/s
Bit rate   184.32 Mbit/s
Line code  8B9B, which means that each octet is completed with a ninth bit, which is an inversion of the eighth bit.

Physical realisation
SNI-4 contains data, timing and synchronisation information in one signal. One 50 ohm coaxial pair is used as transmission medium in each direction. SNI-4 is self-adjustable to the cable length delay between the SN and Device/TCU.

Logical format
The frame format of SNI-4 is called IVC-4, which stands for Internal Virtual Container at level 4. IVC-4 can carry both SDH and PDH signals at bit rates ranging from 1.5 Mbit/s to 155 Mbit/s, Fig. A.
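The bit rate follows from the data rate and the 8B9B code: nine line bits are sent for every eight data bits. The sketch below (illustrative Python; the bit ordering within the nine-bit word is an assumption, and the 8000 frames/s figure is derived from the rates and the frame size in Fig. A) reproduces the figures above:

def encode_8b9b(octet: int) -> int:
    """8B9B line code sketch: append a ninth bit that inverts the eighth bit,
    here assumed to be the octet's least significant bit."""
    eighth_bit = octet & 1
    return (octet << 1) | (eighth_bit ^ 1)

# Rate check: 2560 octets per frame (Fig. A) implies 8000 frames/s
data_rate = 2560 * 8 * 8000      # 163,840,000 bit/s = 163.84 Mbit/s
bit_rate = data_rate * 9 // 8    # 184,320,000 bit/s = 184.32 Mbit/s after 8B9B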

SNI-3 Characteristics

Data rate  40.96 Mbit/s
Bit rate   20.48 Mbit/s
Not line-coded.

Physical realisation
SNI-3 contains data, timing and synchronisation information in three signals. Two data wires and one clock wire with CMOS levels are used for transmission of SNI-3.

Logical format
The frame format of SNI-3 is called IVC-3, which stands for Internal Virtual Container at level 3. IVC-3 can carry PDH signals at bit rates ranging from 1.5 Mbit/s to 34 Mbit/s, Fig. B.

Fig. A
The IVC-4 frame consists of 2560 octets, which are divided into 9 rows and 284 columns, plus 4 octets for framing information. One column forms a switching entity. One SNI-4 can consist of four SNI-3s

Fig. B
The IVC-3 frame consists of 640 octets, which are divided into 9 rows and 71 columns, plus 1 octet for framing information. Four SNI-3s can be mapped into one SNI-4


Fig. 3 Time Space Switch structure

SS  Speech Store

The CP is described in another article in this issue of Ericsson Review.

The UPs handle:
- Communication between the CP and the Hardware (HW)
- Maintenance and control of the Device/Switch plane
- Alarms from the Device/Switch plane.

The UPs are equipped with so-called Flash-PROMs, which means that one part of the program memory can be write-protected and one part can be updated. This makes it possible to use central, loadable software (SW) - an advantage in future upgradings of the system - and still have non-volatile storage of the programs.

Program load and inventory are two important features of the processor block. New UP programs can be loaded either from the CP through the SNI interfaces or through a special test port located at the board front. Programs are loaded without disturbing traffic in progress. Thanks to the Flash-PROMs, the program will be kept during power-off, for example when shifting the board between two magazines.

The product number, revision state and individual serial number of each board are stored during production tests at the factory and can be read by the CP. Another function handled by the UP is the reading of the physical board position in the magazine and the magazine ID. This information is sent to the CP on request, to give a complete picture of the system's functional and physical configuration.
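The inventory function can be pictured as one record per board, with the factory-stored fields read from the board itself and the position fields read from the magazine backplane (a hypothetical Python sketch, not the actual message formats):

from dataclasses import dataclass

@dataclass
class BoardInventory:
    """Inventory record a UP can return to the CP on request."""
    product_number: str   # stored during production test
    revision_state: str   # stored during production test
    serial_number: str    # stored during production test
    slot_position: int    # read from the magazine backplane
    magazine_id: str      # read from the magazine backplane

def system_configuration(boards):
    """CP side: collect all records into one functional/physical picture."""
    return {(b.magazine_id, b.slot_position): b for b in boards}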

The control system is based on an open architecture using object-oriented system design and language. Motorola MC68302 computers are used as UPs. Sun or Stratus UNIX machines can be chosen as CP, depending on reliability requirements.

Switching
The switching network performs all connections necessary for communication between the Devices. The Switch provides two types of function for the Devices: cross-connection of transmission signals and switching of control information. 128 STM-1 equivalent ports, corresponding to 8192 2 Mbit/s signals, is the maximum capacity of the present AXD 4/1 Switch.

The AXD 4/1 is based on column switching, which allows all types of VC-n switching (n = 1, 2, 3 or 4) using the basic switching element of one SDH column, i.e. 9 x 64 kbit/s = 576 kbit/s. These columns are then combined to form the VC-ns that are to be cross-connected. The SDH and PDH signals that have pre-defined cross-connections are VC-4, VC-3, VC-2, VC-12 and VC-11, and 140, 34, 2, 45 and 1.5 Mbit/s.

A number of VC-ns, or columns, can be grouped together to form other cross-connections, e.g. concatenated VC-2s.
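With a 576 kbit/s column as the unit, the standard containers correspond to fixed column counts. The mapping below is an illustrative sketch; the column figures follow from the standard SDH mappings (CCITT G.707-G.709) rather than from this article:

COLUMN_KBITS = 9 * 64            # 576 kbit/s, the basic switching entity

# Columns occupied by the standard containers (per the SDH mappings)
columns = {
    "TU-11 (1.5 Mbit/s)": 3,     # 3 x 576 = 1,728 kbit/s
    "TU-12 (2 Mbit/s)": 4,       # 4 x 576 = 2,304 kbit/s
    "TU-2 (6 Mbit/s)": 12,       # 12 x 576 = 6,912 kbit/s
    "VC-3 (34/45 Mbit/s)": 85,   # 85 x 576 = 48,960 kbit/s
    "VC-4 (140/155 Mbit/s)": 261,
}

for name, n in columns.items():
    print(f"{name}: {n} columns = {n * COLUMN_KBITS} kbit/s")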

Different types of connection can be established in the AXD 4/1 Switch, regardless of the hierarchical level or mix of the connections:
- Simplex, one-way signals
- Duplex, bidirectional signals
- Broadcast, multi-destination signals.

In the AXD 4/1 Switch, the number of destinations in a broadcast connection - as well as the number of simultaneous broadcast connections - is unlimited. This means that an AXD 4/1 Switch is suitable for services on leased or switched circuits from 1.5 Mbit/s up to 140/155 Mbit/s.

SNI-4 column, the basic switching entity
An SNI-4 column, 9 x 64 kbit/s, is the smallest possible switchable entity. The bandwidth of one SNI-4 column is regarded as sufficiently small for internal applications too.

Each external signal terminated by the AXD 4/1 and each standard tributary of that signal is mapped into an integer number of columns. As a maximum, a whole STM-1 channel (155 Mbit/s) including overhead (OH) can be switched.

To be able to provide connections that use a number of columns, the integrity of the frame of columns is guaranteed, i.e. the time sequence integrity.

Non-blocking, full-connectivity Switch matrix
The Switch matrix is designed to provide circuit-switched connections without any internal congestion and to ensure full connectivity between any input and any output port, independently of the hierarchical level of the cross-connected signals and of the mix of hierarchies (CEPT, NAS, SDH, simplex, broadcast).

The Switch is a Time-Space switch (TS) consisting of a number of Speech Stores (SS), arranged in rows and columns: one row for each input and one column for each output, Fig. 3.

Every time a signal is to be fed to a certain output, it can be chosen from among the signals stored in one of the n Speech Stores associated exclusively with that output. The choice is made through the use of Control Stores (CS) associated with that same output.

No switch action will affect connections al­ready established.
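The TS principle can be modelled directly: one Speech Store per input holds the latest frame, and each output owns a Control Store that, column by column, selects which stored octet to read. A minimal sketch (illustrative Python, not the board design):

class TimeSpaceSwitch:
    """Single T-stage Time-Space switch: one Speech Store (SS) per input,
    one Control Store (CS) per output selecting (input, column) pairs."""
    def __init__(self, n_inputs, n_outputs, columns_per_frame):
        self.speech_stores = [[None] * columns_per_frame for _ in range(n_inputs)]
        self.control_stores = [[None] * columns_per_frame for _ in range(n_outputs)]

    def write_frame(self, inp, frame):
        """Inputs are written cyclically into their own Speech Store."""
        self.speech_stores[inp] = list(frame)

    def connect(self, inp, in_col, out, out_col):
        """Set up a connection; no established connection is rearranged, and
        the same source may feed many outputs (broadcast)."""
        self.control_stores[out][out_col] = (inp, in_col)

    def read_output(self, out):
        """Each output column reads the octet its Control Store points at."""
        frame = []
        for entry in self.control_stores[out]:
            if entry is None:
                frame.append(None)   # idle column
            else:
                inp, col = entry
                frame.append(self.speech_stores[inp][col])
        return frame

# Broadcast example: input 1, column 0 appears on two outputs simultaneously
switch = TimeSpaceSwitch(n_inputs=4, n_outputs=4, columns_per_frame=9)
switch.write_frame(1, [f"octet-{c}" for c in range(9)])
switch.connect(1, 0, out=0, out_col=0)
switch.connect(1, 0, out=2, out_col=5)

Because each output has its own Speech and Control Stores, writing a new Control Store entry for one output cannot disturb any other connection, which is exactly the property stated above.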

Characteristics of the TS structure
Since no internal routing is necessary and only one T-stage is involved, the TS switch has the following characteristics:
- No rearrangement of established connections is needed to permit the set-up of a new connection, which can also be used for broadcast
- The delay introduced by the switch is minimised.

This results in a switch with short set-up time and very low transmission delay, which is necessary in a large network with stringent service requirements.

Multicolumn switching
Multicolumn switching is characterised by the connection of a number of columns executed at the same time. The number of columns that can be connected in a multicolumn switching operation has no upper limit. As an extreme, the whole switch can be reconfigured at the same time.

Internal communication
The AXD 4/1 Switch is capable of handling packet-oriented signals. This function is used both for internal control communication and to handle Embedded Control Channels (ECC) in SDH signals. A special communication protocol has been developed to handle the internal communication - the Basic Control Protocol (BCP). BCP is a self-addressing protocol that uses the tree structure of the integrated control paths to give reliable internal communication. Information is carried in 'packet format'. The packet-handling equipment and the circuit switch are triplicated, so as to fulfil general requirements for failure immunity and fault detection capability.

The supervision of internal communication is handled by BCP functionality and by the triplication.

Devices
All types of equipment connected to the Switch are called Devices. The most common types are Terminal Access Units (TAU), which terminate transmission signals. The Central Processor, too, is connected as a Device.

In addition to the termination of signals, i.e. line signal access and multiplexing, a Device terminates the switching network. Majority vote is used to terminate the signals from the triplicated Switch. This is the basic function that forms a single-fault-tolerant system.


Fig. 4 AXD 4/1 Devices

Devices of AXD 4/1, Fig. 4, can be divided into four different groups:
- Terminal Access Units (TAU) interfacing plesiochronous transmission systems:
  • TAU 140, interfacing 140 Mbit/s signals
  • TAU 34, interfacing 34 Mbit/s signals
  • TAU 16x2, interfacing sixteen 2 Mbit/s signals
- TAUs interfacing transmission signals belonging to the Synchronous Digital Hierarchy (SDH) family:
  • TAU STM-1E, interfacing 155 Mbit/s electric signals
  • STM-1 and STM-4 optical TAUs will be provided later on
- Devices interfacing special external signals:
  • ESU, External Synchronisation Unit for 2 MHz synchronisation signals
- Devices for internal use:
  • CTU, Control Termination Unit, access unit for the Central Processor.

Power distribution
Power distribution in AXD 4/1 is decentralised. All boards containing electronic components are equipped with their own DC/DC converters, which produce the necessary voltages from the duplicated -48 V power source. This makes it possible to have a truly duplicated power supply all the way from the exchange battery to the PBAs.

Hardware Structure

Basic Technology and Methodology
The major part of the Hardware (HW) of AXD units is realised with Application Specific Integrated Circuits (ASIC). Texas BiCMOS is used for high-speed applications. BiCMOS is a 0.8 micron process whose available speed in AXD applications exceeds 200 MHz. Gate arrays with a maximum of 100,000 usable gates are available. Motorola HCMOS 0.7 micron circuits are used for other system parts. Their maximum speed in AXD applications is above 60 MHz, and arrays with 127,000 usable gates are used.

Structured HW design methodology has been used when designing the ASICs. This means that functions are described in a High Level Language (Verilog); the code is then synthesised to gates and flip-flops by a synthesis program (Synopsys). This procedure increases design efficiency and facilitates the structuring of large arrays.

The occurrence of a number of high-speed signals between different units places extremely stringent requirements on the analog parts. Analog parts handle the termination, generation and regeneration of line signals, and the phase and frequency control of signals. A library of analog blocks with schematics, component specifications and layouts has been created, which allows identical functions to be implemented in the same way in the different Devices. The use of one standard internal interface reduces the problems of analog design to an absolute minimum.

A new metric building practice conforming to the ETSI standard is used. Boards with 6 to 10 layers are placed in cabinets with a board spacing down to 16 mm. Circuit packages with a pin spacing down to 0.5 mm are used for some ASICs. Boards can be placed both vertically and horizontally. This allows the use of the crossboard technique: vertical boards interface horizontal boards in the same subrack.

The technology and methods described above offer a number of advantages:
- Compact design, which gives high functional density per volume unit
- Low power dissipation
- High testability
- Simple verification
- Short design time.

Switch Fabric
The Switch Fabric, which is the complete switch with all three planes, is implemented through single-type cabinets. The HW in one cabinet belongs to one Switch Fabric Plane.

Each Switch Cabinet (SWC) has a capacity of 64 x 128 ports. Two SWCs are required to arrange a 128 x 128-port Switch Fabric Plane. The minimum capacity is four ports, and extensions can be made in steps of four.

The maximum capacity of the Switch Fabric, in the present implementation, is 128 STM-1 equivalent ports.

Modularity
The AXD 4/1 features two types of mechanical modularity:
- Board modularity
- Cabinet modularity.

Board Functions

Switching Network
Fig. 5 shows the structuring of the main functions of the Switch Fabric on the different board types.

Fig. 5
Functional partitioning of Switching Cabinet. Only one plane is shown
CSB  Clock generation and Synchronisation Board
HCB  Horizontal Control Board
VCB  Vertical Control Board
DMB  Distribution Matrix Board
SMB  Switching Matrix Board
UP   Unit Processor
DP   Device Processor
CSG  Clock and Synchronisation Generator
SWP  Switch Port
Rx   Receiver
Tx   Transmitter

The Clock generation and Synchronisation Board (CSB) generates all the clock and synchronisation signals required in the system. Each CSB receives clock frequency and phase information from the other CSBs. Information from the selected synchronisation sources is also received.

The CSB boards of the three planes are interconnected to ensure synchronisation of the system. The three signals are also compared for supervisory purposes.

The Horizontal Control Board (HCB) is used for distribution of clock and control signals. The clock signal from the CSB is distributed through the HCB to all vertical boards.


The control signals, which come from the Vertical Control Board (VCB), are distributed to all SMBs. In addition, the HCB collects external alarms, e.g. temperature and fan alarms.

The VCB is the core of the control functions in the AXD 4/1. Packet handling and control of the Switch plane are the two main tasks performed by the VCB, which contains the Unit Processor (UP). The packet handling is performed by a Router. The Router continuously looks for a packet to be routed and keeps track of the packet load.

The UP handles the control of the entire Switch plane by controlling the Device Processors (DP) located on the respective boards in the Switching Cabinet.

The Distribution Matrix Board (DMB) terminates four incoming SNI-4s from Devices, and separates the traffic part of the signal from the control information carried in the internal overhead of the SNI-4. The traffic signals are distributed to all the Switching Matrix Boards (SMB) in the cabinet. The VCB handles the separated control information for routing. One SWC can be equipped with up to sixteen DMBs.

The SMB is the heart of the Switch. It contains the Switch for four outgoing SNI-4 signals. In addition, four outputs used for expansion are located at the SMB. These ports are used to interconnect two SWCs when expanding the switch. Each SMB is connected to all DMBs in the cabinet. Up to sixteen SMBs can be accommodated in one SWC.

Each board is equipped with a microprocessor for board supervision (DP).

Devices
TAU 140, Figs. 6 and 7, interfaces 140 Mbit/s plesiochronous line signals according to CCITT G.703. TAU 140 can be set to different modes, which allows for branching at the 2, 34, and 140 Mbit/s levels and makes switching at these levels possible.

TAU 140 contains five different types of ASIC. Together they handle CMI decoding of 140 Mbit/s line signals, framing and maintenance at the 140, 34, 8 and 2 Mbit/s levels, buffering and frequency justification between line signals, synchronisation, generation of the synchronous IVC-4 frame and the sending of SNI-4 to all three switch planes. TAU 140 has the corresponding functionality in the transmit direction, completed with majority vote for data and clock signals from the three switch planes.

TAU 140 contains approximately 1.5 million used gates, a processor block with 256 kWord PROM and 512 kWord RAM, DC/DC converters and a number of analog high-speed blocks.

Fig. 6
TAU 140

TAU STM-1E interfaces 155 Mbit/s electric line signals belonging to the Synchronous Digital Hierarchy. It performs demapping of an STM-1 line signal to VC-12s and maintenance of them, pointer adjustment, buffering, and mapping into IVC-4, which is sent to the three planes of the Switching Network. TAU STM-1E has the corresponding functionality in the transmit direction, completed with majority vote. TAU STM-1E contains six different types of ASIC.

TAU 140 and TAU STM-1E are placed in the same magazine, which can be equipped with up to sixteen TAUs, arbitrarily placed and mixed.

TAU 34 interfaces 34 Mbit/s plesiochronous line signals. Two different types of ASIC, common to TAU 34 and TAU 140, are used. The signal from TAU 34 in the direction towards the switch is a triplicated SNI-3.

In order to make optimal use of the capacity of the Switching Network, a unit called the Terminal Connection Unit (TCU), located between TAU 34 and the switch, has been designed, Fig. 4. This unit multiplexes four SNI-3s to one SNI-4. Each TCU board handles two SNI-3/SNI-4 multiplexers in the direction towards the Switch and two SNI-4/SNI-3 demultiplexers in the direction from the Switch. The TCU is triplicated, and all Devices with an SNI-3 interface can be connected to the Switch by means of a TCU.

TAU 16x2 interfaces sixteen 2 Mbit/s plesiochronous line signals. The same TAU 16x2 is used both in AXD and SDH systems. Since as many as sixteen line signals are handled by one board, a protection function may be used. For every four active TAU 16x2 boards it is possible to have an additional standby TAU 16x2. All sixteen signals can be switched to this board when any one of the ordinary boards is faulty. TAU 16x2 contains three different ASICs.

The protection function is controlled from the TCU and is performed by the Protection Switching Unit (PSU). The PSU has a minimum number of components and, hence, no UP. The PSU is supervised from the TCUs, Fig. 8.

Fig. 7
TAU 140 functionality
FS  Frame Synchronisation


ESU is used for synchronisation of the AXD with external 2 MHz synchronisation signals. The ESU compares the frequency of the incoming 2 MHz signal with the divided signal from the AXD's internal clock. The result of this comparison is sent to the Central Processor (CP) for control of the clock modules. The generation of the 2 MHz timing information is also performed by the ESU. The frequency division logic is implemented in a Field Programmable Gate Array (FPGA), and one ASIC is used for SNI-3 communication.

CTU is used for connection of the CP to the AXD. The CP is connected via Ethernet, and control information is converted to the internal Basic Control Protocol (BCP) embedded in the SNI interfaces. One ASIC is used for SNI-3 communication.

TAU 34, TAU 16x2, ESU and CTU are located in the same magazine. In this magazine there are three TCUs used by the SNI-3 Devices for communication to/from the Switching Network. Devices can be mixed arbitrarily, except for the CTUs, which have two specific slots. This flexibility permits efficient utilisation of available space when the exchange layout is made.

Conclusions
The AXD 4/1 system presented is a system with great flexibility and a functional distribution that makes it useful in a wide variety of network applications. High availability, thanks to the use of triplicated HW, as well as an open architecture, have been key concepts in the system design.

The system handles accesses ranging from 1.5 Mbit/s up to 155 Mbit/s, PDH (both NAS and CEPT) as well as SDH. Switching can be performed at all levels from 576 kbit/s (9 x 64 kbit/s) to 155 Mbit/s, in steps of 576 kbit/s, including all standardised signal levels.

References
1  CCITT Rec. G.703.
2  CCITT Recs. G.707-G.709.
3  Andersson, J. O.: Digital Cross-Connect Systems - a System Family for the Transport Network. Ericsson Review 67 (1990):2, pp. 72-83.
4  Breuer, H-J. and Hellstrom, B.: Synchronous Transport Networks. Ericsson Review 67 (1990):2, pp. 60-71.


Fig. 8 Protection Switch Unit structure The triplication of the Switch structure includes the TCUs



ISSN 0014-0171
Telefonaktiebolaget LM Ericsson
92434 Ljungföretagen, Örebro 1992