Single RAN - LT1203 - V1.1


  • www.wraycastle.com

    Course Code: LT1203 Duration: 2 days Technical Level: 2

    Single RAN

    LTE courses include:

    LTE Engineering Overview

    LTE Evolved Packet Core Network

    LTE Air Interface

    LTE Radio Access Network

    Cell Planning for LTE Networks

    LTE Parameters and Tuning

    LTE Voice Options and Operations

    LTE Technologies, Services and Markets

    4G Air Interface Technologies

  • SINGLE RAN

    Wray Castle Limited

    First published 2012

    WRAY CASTLE LIMITED
    BRIDGE MILLS
    STRAMONGATE
    KENDAL LA9 4UB
    UK

    Yours to have and to hold but not to copy

    The manual you are reading is protected by copyright law. This means that Wray Castle Limited could take you and your employer to court and claim heavy legal damages.

    Apart from fair dealing for the purposes of research or private study, as permitted under the Copyright, Designs and Patents Act 1988, this manual may only be reproduced or transmitted in any form or by any means with the prior

    permission in writing of Wray Castle Limited.

    All of our paper is sourced from FSC (Forest Stewardship Council) approved suppliers.


  • SINGLE RAN

    CONTENTS

    iii Wray Castle LimitedLT1203/v1.1

    Section 1 Single RAN Concepts

    Section 2 Multi-Standard Cell Sites

    Section 3 Single RAN Backhaul

    Section 4 Core Networks

    Section 5 Single RAN Implementation


  • Single RAN

    1.i Wray Castle LimitedLT1203/v1.1

    SECTION 1

    SINGLE RAN CONCEPTS


  • CONTENTS

    Single RAN Concepts

    1.iii Wray Castle LimitedLT1203/v1.1

    Defining the Single RAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1.1

    3GPP Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1.2

    Multi-RAT Single RAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1.3

    Multi-Operator Single RAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1.4

    Multi-RAT, Multi-Operator Single RAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1.5

    Core Network Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1.6

    Potential Benefits of Single RAN Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1.7

    Potential Dangers of Single RAN Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1.8


  • OBJECTIVES

    Single RAN Concepts

    1.v Wray Castle Limited LT1203/v1.1

    At the end of this section you will be able to:

    define the term Single RAN in a generally accepted way

    identify the key aspects of the evolution of 3GPP networks that have facilitated the development of Single RAN techniques

    describe the main features of the Multi-RAT and Multi-Operator Single RAN concepts

    outline ways in which core network resources may be shared in Multi-Operator environments

    identify some of the potential benefits and dangers attendant upon the deployment of a Single RAN solution


  • LT1203/v1.1 1.1 Wray Castle Limited

    Single RAN Concepts

    Defining the Single RAN

    The term Single RAN is one that has several interpretations.

    For many, the Single RAN concept provides a blueprint for multi-RAT (Radio Access Technology) access networks in which 2G, 3G and 4G cells and services are combined into a single access solution, with user equipment free to make use of 2G GERAN (GSM EDGE Radio Access Network), 3G UTRAN (UMTS Terrestrial Radio Access Network) and 4G E-UTRAN (Evolved UTRAN) connectivity as need and coverage dictates. Multi-RAT single RAN services are often implemented by deploying multi-standard base stations, which are capable of generating 2G, 3G and 4G cells simultaneously from a single base station node. Other terms often used to describe this concept are Multi-Standard RAN, Multi-Generation RAN and Combined RAN.

    An extension of the multi-RAT version of the Single RAN concept sees the combination of both fixed and mobile broadband services into one coherent access environment. An operator may therefore use the Single RAN concept as a way of converging their legacy fixed and mobile broadband access networks to support a UC (Unified Communications) service.

    The term is sometimes used in association with some form of network sharing agreement between operators. The concatenation of the multi-RAT and multi-operator concepts in the Single RAN environment most often occurs when network operators seek to undertake a rollout of new multi-standard base stations as part of the process of implementing a network sharing arrangement with a partner operator.

    Most equipment vendors use the term Single RAN to refer to their multi-standard RAN products. Huawei, for example, has a SingleRAN product range, NSN (Nokia Siemens Networks) has a product range called Single RAN Advanced and Ericsson has a product range named Evo RAN, all of which offer multi-standard (GSM, WCDMA, LTE) base stations and associated equipment.

    The multi-RAT definition of the term Single RAN will be discussed in this course.

    Further Reading:
    Huawei SingleRAN: http://www.huawei.com/en/products/radio-access/signleran/index.htm
    Ericsson Evo RAN: hugin.info/1061/R/1290464/291045.pdf
    NSN: http://www.nokiasiemensnetworks.com/portfolio/products/mobile-broadband/single-ran-advanced

  • LT1203/v1.11.2 Wray Castle Limited

    Single RAN

    3GPP Evolution

    A high level view of the generic architecture of 2G GSM/GPRS/EDGE, 3G UMTS/HSPA and 4G LTE networks is shown in the diagram to highlight the differences and similarities between the three generations of network.

    2G GSM/GPRS networks consist of a GERAN access component connected to separate CS (Circuit Switched) and PS (Packet Switched) core networks. GERAN RRM (Radio Resource Management) functions are performed by the BSC (Base Station Controller), each of which coordinates the activities of numerous base stations. The air interface technique employed by GSM/GPRS/EDGE networks was TDMA (Time Division Multiple Access), which supported low numbers of users per channel and offered limited data rates. Transmission and connectivity in GSM/GPRS networks is based on a mix of TDM (Time Division Multiplexing) and IP-based bearers.

    A similar architectural model was followed when designing 3G UMTS/HSPA networks. In these, user connectivity to the CS and PS core networks is provided via the UTRAN, in which RRM functionality is managed by an RNC (Radio Network Controller). The UMTS air interface was based on WCDMA (Wideband Code Division Multiple Access), which offers high capacity and high user data rates, and inter-node connectivity is based on a mix of ATM (Asynchronous Transfer Mode) and IP links.

    LTE networks, by contrast, employ a flatter, more simplified architecture. The LTE E-UTRAN (Evolved UTRAN) access network consists of only the eNB (E-UTRAN Node B) base stations and the backhaul links that connect them to the core network; there is no equivalent of the BSC or RNC controller node. This flatter architecture is designed to offer far lower latency levels for both user traffic and signalling, as there are fewer devices on the path between the user terminal and the data's destination. The LTE EPC (Evolved Packet Core) is an all-IP environment and does not support or replicate the functionality of the legacy CS core. CS-type services, such as voice or video calling, can be provided by a legacy CS core or via an IMS (IP Multimedia Subsystem). The LTE air interface is based on OFDMA, which can offer very high capacity and very high user data rates. All inter-node network connectivity is based on IP.
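    As a compact summary of the comparison above, the sketch below captures each generation's key attributes in a simple data structure. It is an illustrative restatement of this section only; the attribute names are informal shorthand rather than 3GPP terminology.

```python
# Illustrative summary of the generational comparison described in the text.
GENERATIONS = {
    "2G GSM/GPRS/EDGE": {
        "access_network": "GERAN",
        "rrm_controller": "BSC",
        "air_interface": "TDMA",
        "core": "separate CS and PS cores",
        "transport": "mix of TDM and IP",
    },
    "3G UMTS/HSPA": {
        "access_network": "UTRAN",
        "rrm_controller": "RNC",
        "air_interface": "WCDMA",
        "core": "separate CS and PS cores",
        "transport": "mix of ATM and IP",
    },
    "4G LTE": {
        "access_network": "E-UTRAN (eNBs only)",
        "rrm_controller": None,  # no BSC/RNC equivalent; RRM sits in the eNB
        "air_interface": "OFDMA",
        "core": "EPC (all-IP, PS only; CS-type services via IMS or a legacy CS core)",
        "transport": "IP",
    },
}

for generation, attributes in GENERATIONS.items():
    print(f"{generation}: {attributes['air_interface']} over {attributes['transport']}")
```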

    Further Reading: www.3gpp.org/Tutorials

  • LT1203/v1.1 1.3 Wray Castle Limited

    Single RAN Concepts

    Multi-RAT Single RAN

    As cellular services have evolved many operators have found themselves in a position where they have several generations of radio access network technologies deployed and in use simultaneously. Traditionally, each RAT required its own bespoke access solution, leading to networks often deploying separate 2G GSM/EDGE, 3G UMTS/HSPA and 4G LTE/LTE-Advanced base stations to the same sites.

    A number of recent developments have allowed equipment vendors to release multi-standard base stations, which are capable of generating 2G, 3G and 4G cells simultaneously, offering operators the opportunity to replace multiple separate base stations per site with a single, combined node. The main advances that have led to this include:

    Common base station design initiatives, such as OBSAI (Open Base Station Architecture Initiative) and CPRI (Common Public Radio Interface), which have produced a homogeneous set of functional blocks and internal interfaces for base stations irrespective of whether those nodes support 2G, 3G or 4G transmission.

    SDR (Software Defined Radio) techniques have been developed which move much of the bespoke radio signal processing effort undertaken by a base station away from technology-specific hardware units and onto standard DSP (Digital Signal Processing) chips. The difference between a base station generating a GSM signal and one generating WCDMA is now largely a matter of software and configuration rather than of hardware capabilities. The speed and capabilities of the processor and DSP chips employed in modern base station nodes allow each device to undertake a wider and more complex set of duties than traditional single-RAT nodes would have been capable of.

    Packet-based backhaul technologies, typically based on IP (Internet Protocol) and/or Ethernet, which are capable of carrying traffic for multiple radio access types, have largely replaced the specific backhaul technologies (E1/T1 TDM and ATM) employed by legacy 2G and 3G deployments.

    3GPP began specifying the radio characteristics of MSR (Multi-Standard Radio) devices such as base stations in the 37 series of specifications, published from Release 9 onwards.

    Further Reading: http://www.3gpp.org/ftp/Specs/html-info/37-series.htm

  • LT1203/v1.11.4 Wray Castle Limited

    Single RAN

    Multi-Operator Single RAN

    The less-commonly encountered definition of Single RAN relates to the sharing of a radio access environment between two or more operators. Other terms more commonly employed to describe this scenario include RAN Sharing and MORAN (Multi-Operator RAN).

    In a multi-operator environment, RAN and/or core network elements may be shared in a variety of ways.

    The simplest forms of RAN sharing involve sharing cell site locations and possibly cell site infrastructure, such as power feeds, towers and even backhaul connections.

    More complex forms of RAN sharing involve the use of combined base stations, which can be used to serve customers of the partnered networks. Shared base stations may operate in a separate frequency manner, in which separate cells are generated per operator, or in a shared frequency manner, in which the partnered operators share the same cells and frequencies.

    Site sharing schemes are often described as being passive or active. In a passive sharing scheme, each operator maintains their own base station elements (radio units, power amplifiers, signal processors) but shares key site infrastructure and may even share base station enclosures. In an active sharing scheme operators share sites, infrastructure and key base station elements such as signal processors, radio units and even radio frequencies.

  • LT1203/v1.1 1.5 Wray Castle Limited

    Single RAN Concepts

    Multi-RAT, Multi-Operator Single RAN

    One of the reasons why there are two definitions of the term Single RAN in use is that many networks encounter the concept as part of a combined RAN consolidation and RAN sharing process. In these situations partnered networks have elected to combine the process of deploying a single RAN (multi-RAT base station) solution with the implementation of a RAN or network sharing agreement with each other.

    In such a multi-RAT, multi-operator scenario, newly deployed multi-RAT base stations could be configured to serve multiple operators (via either separate or shared core networks), leading to a shared Single RAN.

    GSM, UMTS and LTE all have schemes that allow for RAN sharing by multiple (up to 6) operators.

    Further Reading: 3GPP TS 23.236

  • LT1203/v1.11.6 Wray Castle Limited

    Single RAN

    Core Network Sharing

    RAN sharing can be associated with two different kinds of core network sharing, known as MOCN (Multi-Operator Core Networks) and GWCN (Gateway Core Networks).

    In MOCN configurations, the shared RAN nodes (sharing is implemented at the BSC level in the GERAN, the RNC level in the UTRAN and the eNB level in the E-UTRAN) are connected to fully separate core networks. Supporting UEs (i.e. UEs (User Equipment) that support the additional control mechanisms) would perform PLMN (Public Land Mobile Network) Selection on Attach, and the Iu/S1-flex function performed by the RAN node would select an MSC-S/SGSN/MME (Mobile-services Switching Centre Server/Serving GPRS Support Node/Mobility Management Entity) in the chosen PLMN to which the Attach Request is forwarded. Non-supporting UEs will function using legacy techniques but may be redirected to a different core network element once they have Attached to the network.

    In GWCN configurations, the shared RAN nodes are connected to a set of shared MSC-S/SGSN/MMEs, which in turn connect to a set of separate core networks. Supporting UEs would again perform PLMN selection but the RAN node would perform Iu/S1-flex functions towards a single, combined set of MSC-S/SGSN/MMEs. The selected core network node would then perform separate core network gateway selection per PLMN and would support interfaces to a different set of HLR/HSS (Home Location Register/ Home Subscriber Server) nodes per PLMN.
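    A minimal sketch of the MOCN routing decision described above: the shared cell broadcasts the PLMN identities of the partner operators, a supporting UE selects its PLMN at Attach, and the RAN node's S1-flex function picks a core node belonging to that PLMN. The PLMN IDs and node names are invented for illustration.

```python
import random

# Hypothetical shared eNB configuration: core nodes (MMEs) grouped per PLMN (MOCN).
MME_POOLS = {
    "234-10": ["mme-a1", "mme-a2"],   # operator A's core network
    "234-20": ["mme-b1"],             # operator B's core network
}

def broadcast_plmn_list():
    """PLMN IDs the shared cell advertises in its system information."""
    return list(MME_POOLS.keys())

def ue_select_plmn(home_plmn, broadcast):
    """A supporting UE performs PLMN selection at Attach."""
    return home_plmn if home_plmn in broadcast else broadcast[0]

def s1_flex_select_mme(selected_plmn):
    """The RAN node forwards the Attach Request to an MME in the chosen PLMN."""
    return random.choice(MME_POOLS[selected_plmn])

plmn = ue_select_plmn("234-20", broadcast_plmn_list())
print("Attach routed to", s1_flex_select_mme(plmn), "in PLMN", plmn)
```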

    The diagram illustrates core network sharing arrangements for LTE networks; similar arrangements also exist for legacy CS and PS core networks which allow for MSC Server and SGSN pool sharing.

    Further Reading: 3GPP TS 23.251, 23.236 (Iu-Flex), 23.401 (S1-Flex)

  • LT1203/v1.1 1.7 Wray Castle Limited

    Single RAN Concepts

    Potential Benefits of Single RAN Implementation

    The potential benefits that can be expected from a Single RAN deployment could include:

    simplification - a single RAN is simpler to build than multiple RANs, leading to CAPEX (Capital Expenditure) savings

    organizational savings - a shared network often requires fewer people to run it than separate networks would, leading to rationalizations and OPEX (Operational Expenditure) savings

    cost savings - combined sites are less expensive to deploy and maintain than separate sites

    energy saving - a single combined base station can be expected to consume less energy due to the removal of duplication in areas such as processor units and transmission cards

    coverage improvements - combining multi-RAT resources can lead to better penetration and wider mobile broadband coverage, especially in rural areas, as lower frequency channels become available for 3G and 4G use

    capacity improvements - combining multi-RAT resources can lead to increases in capacity, especially in urban areas, as 3G and 4G technologies become available in smaller cells

    spectrum pooling - multi-RAT deployments can pool the operators' spectrum resources and lead to greater general capacity

    frequency evolution - operators are able to deploy multi-RAT services into the same MSR bands, allowing base stations to serve multiple RATs in the same frequency band

    convergence - a Single RAN solution will typically employ a shared, IP-based backhaul network, allowing operators to converge all generations of RAT onto a modern IP basis

    software simplification - a multi-standard base station can be expected to run from a single, combined software package that supports the functionality of all RATs, therefore reducing the development, testing and deployment load associated with software updates

    Each Single RAN deployment can be expected to be different, as different operators will be starting from their own specific architectures and will build their environment to meet their specific requirements, meaning that each operator might only benefit from a subset of the benefits mentioned above.

  • LT1203/v1.11.8 Wray Castle Limited

    Single RAN

    Potential Dangers of Single RAN Implementation

    The potential dangers that should be considered when implementing a Single RAN deployment could include:

    introduction of single points of failure into operators' RAN environments - if a multi-standard base station fails then all cells (and RATs) supported by that site also fail

    vendor dependency - in the past, when operators have sourced different generations of RAN from different suppliers, they have been able to play vendors off against each other to obtain the best deals. A Single RAN generally means a single supplier, meaning that operators run the risk of becoming dependent upon a single vendor

    antenna system complexity - depending upon the site configurations that operators select and deploy, they may find that the complexity of their cell site antenna systems increases as additional combiners, duplexers and/or splitters are added

    increased interference - co-location of transmitters serving different RATs could lead to an increase of interference (from spurious emissions and intermodulation) of each RAT to the others

    Each of these potential risks can be mitigated, if managed effectively. In urban areas the overlap created by neighbouring sites may nullify the single point of failure risk, whilst effective supplier-management and price benchmarking should overcome the vendor dependency issue. The antenna complexity risk may only be an issue if a site is being converted from single-RAT to multi-RAT operation and operators may equally find that the configuration of existing multi-RAT sites is simplified by the deployment of Single RAN equipment. 3GPP TS 37.104 contains strict guidelines related to the total amounts of inter-RAT interference that are permitted, allowing the risk of reduced quality due to interference and intermodulation to exist within predictable boundaries that allow for a reasonable degree of mitigation.

  • Single RAN

    2.i Wray Castle LimitedLT1203/v1.1

    SECTION 2

    MULTI-STANDARD CELL SITES


  • CONTENTS

    Multi-Standard Cell Sites

    2.iii Wray Castle LimitedLT1203/v1.1

    Typical Site Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.1

    MSR Base Stations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.2

    GSM Radio Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.3

    UMTS Radio Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.4

    LTE Radio Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.5

    Carrier Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.6

    Software Defined Radio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.7

    Multi-Standard Band Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.8

    MSR Band Category 1 (3G/4G) Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.9

    MSR Band Category 2 (2G/3G/4G) Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.10

    MSR Base Station Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.11

    OBSAI and CPRI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.12

    Localized vs Distributed Cell Sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.13

    Remote Radio Heads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.14

    Distributed Cell Site Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.15

    C-RAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.16

    Multi-RAT Deployment Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.17

    Infrastructure Sharing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.18

    Base Station Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.19

    MSR Base Station Sharing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.20

    Frequency Band Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.21

    Potential RF Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.22

    Single RAN Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.23


  • OBJECTIVES

    Multi-Standard Cell Sites

    2.v Wray Castle Limited LT1203/v1.1

    At the end of this section you will be able to:

    highlight the main similarities and differences between traditional cell site configurations and those employed to support Single RAN deployments

    describe the main features of an MSR base station

    outline the main features of the 2G TDMA, 3G WCDMA and 4G OFDMA air interface technologies

    describe the main features of the SDR (Software Defined Radio) concept and its applicability to MSR base stations

    identify the arrangements that have been developed to support multi-standard band sharing in 3GPP networks

    describe the basic architecture of a multi-standard base station

    outline the functionality supported by the OBSAI and CPRI initiatives and identify the role they play in enabling distributed base station architectures

    describe some of the multi-RAT deployment options that exist, including techniques that support passive and active base station sharing

    identify some of the potential RF issues that might be associated with MSR operation, including interference and intermodulation

    describe some basic Single RAN network architectures


  • LT1203/v1.1 2.1 Wray Castle Limited

    Multi-Standard Cell Sites

    Typical Site Configuration

    The typical configuration of a legacy single RAT base station site is outlined in the diagram.

    In this model each base station supports just one RAT and can be assumed to be using radio units whose hardware and/or software is dedicated to supporting that single radio technology.

    A network operator who followed this model but who also wished to deploy multi-RAT services would usually be required to deploy multiple base stations (one per required RAT) to each cell site.

  • LT1203/v1.12.2 Wray Castle Limited

    Single RAN

    MSR Base Stations

    3GPP, in TS 37.104, defines an MSR base station as being a Base Station characterized by the ability of its receiver and transmitter to process two or more carriers in common active RF (Radio Frequency) components simultaneously in a declared RF bandwidth, where at least one carrier is of a different RAT than the other carrier(s).

    To decode some of the more obscure parts of this definition: common active RF components means that signals belonging to two or more RATs (to GSM and UMTS, for example) are being processed or served by the same radio elements simultaneously; this is sometimes described as active sharing. A declared RF bandwidth could be a specific frequency band, like the 1800 MHz band, for example. The example MSR base station depicted in the diagram is simultaneously managing 2G, 3G and 4G cells as part of a Single RAN deployment.

    Traffic for all RATs/cells shares the same packet-based backhaul connection and is processed by a shared transmission card. Traffic (user plane, control plane and O&M (Operation and Maintenance)) for all RATs/cells is managed by the same shared processor/controller unit. Digital versions of the downlink RF signals to be transmitted in each RAT/cell are created in the same shared DSP unit (even if traffic for each specific RAT and cell is managed by a different logical part of the DSP array). Uplink traffic for all RATs/cells is also processed by the shared DSP resource. When multi-RAT carriers are sharing the same frequency band, the DSP can in theory create a single multi-carrier signal that carries the combined traffic of multiple cells of multiple RATs. Sites that employ different frequency bands for each RAT, or where the bandwidth allocations are in the same band but are widely non-contiguous, may require multiple radio units.

    Each physical radio sector generated by the site is served by a shared radio unit, which converts the digital versions of the downlink carriers to analogue RF signals, up-converts them to the appropriate band and amplifies them before passing them to the antennas for transmission. The radio units also handle the reception, down conversion and sampling of uplink signals.

    3GPP has defined a range of frequency bands that are available for MSR operation in which limited combinations of 2G, 3G and 4G carriers may be transmitted within the same band using MSR techniques.

    Further Reading: 3GPP TS 37.104

  • LT1203/v1.1 2.3 Wray Castle Limited

    Multi-Standard Cell Sites

    GSM Radio Interface

    A recap of some of the basic features of the GSM/EDGE air interface is provided in the diagram.

    Further Reading: 3GPP TS 45.001

  • LT1203/v1.12.4 Wray Castle Limited

    Single RAN

    UMTS Radio Interface

    A recap of some of the basic features of the UMTS/HSPA air interface is provided in the diagram.

    Further Reading: 3GPP TS 25 series

  • LT1203/v1.1 2.5 Wray Castle Limited

    Multi-Standard Cell Sites

    LTE Radio Interface

    A recap of some of the basic features of the LTE air interface is provided in the diagram.

    Further Reading: 3GPP TS 36 series

  • LT1203/v1.12.6 Wray Castle Limited

    Single RAN

    Carrier Aggregation

    CA (Carrier Aggregation) is the most prominent feature of Release 10 LTE-Advanced. It offers an inverse multiplexing facility that allows a UE to substantially increase the overall data rate it can achieve by allowing an eNB to schedule capacity for it on multiple cells (or carriers) simultaneously.

    Each carrier (either downlink or uplink) assigned for use by a UE is known as a CC (Component Carrier) and the set of CCs allocated to a UE at any one time forms a Carrier Aggregate. R10 CA permits up to two CCs to be bound into a Carrier Aggregate. However, the specifications will ultimately support five CCs, potentially providing a suitably-equipped UE with up to 100 MHz of bandwidth and an aggregate downlink data rate of over 3 Gbit/s.

    The lowest level of carrier aggregation allows a UE to connect via just one cell. The radio connectivity of this cell is described as the PCC (Primary Component Carrier) and the cellular service it offers is known as the PCell (Primary Serving Cell). The PCell carries NAS (Non-Access Stratum) and RRC (Radio Resource Control) services for a UE and is also the carrier measured by the UE to support functions such as quality feedback and handover measurements.

    A Release 8/9 UE (or an R10 UE that didn't require CA services) would just connect via the PCell and would not be assigned any additional carriers. An R10, CA-capable UE that did require a CA service would be scheduled with capacity on between one and four SCells (Secondary Serving Cells), each of which would be carried by an SCC (Secondary Component Carrier).

    The PCC and any SCCs aggregated to provide a CA service for a UE must all be under the control of the same eNB, but the terms primary and secondary used in relation to CA carriers are determined from the point of view of each UE; different UEs in the same area may have selected different cells to be their PCell and may therefore regard an assigned cell as an SCC which may be employed by another UE as a PCC.

    A PCell is always used in a bidirectional manner, as befits the cell that carries NAS and RRC traffic, but an SCell may be used in either a bidirectional or unidirectional manner depending upon local configuration and current requirements. If an SCell is used unidirectionally then it is only able to operate in downlink-only mode; there is no provision for cells to operate in uplink-only mode.
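    To make the aggregation arithmetic concrete, the short sketch below sums the bandwidth of a hypothetical carrier aggregate and derives an illustrative peak rate; the component carrier list and the peak-rate-per-MHz figure are assumed example values, not figures taken from the 3GPP specifications.

```python
# Hypothetical carrier aggregate for one UE: one PCell plus up to four SCells.
component_carriers = [
    {"role": "PCell", "bandwidth_mhz": 20},
    {"role": "SCell", "bandwidth_mhz": 20},
    {"role": "SCell", "bandwidth_mhz": 10},
]

# The specifications ultimately allow up to five CCs, exactly one of which is the PCell.
assert len(component_carriers) <= 5
assert sum(cc["role"] == "PCell" for cc in component_carriers) == 1

total_bw_mhz = sum(cc["bandwidth_mhz"] for cc in component_carriers)

# Assumed peak-rate density (Mbit/s per MHz), chosen only so that a full 100 MHz
# aggregate gives roughly the 3 Gbit/s figure quoted in the text.
assumed_peak_mbps_per_mhz = 30

print(f"Aggregate bandwidth: {total_bw_mhz} MHz")
print(f"Illustrative peak downlink rate: "
      f"{total_bw_mhz * assumed_peak_mbps_per_mhz / 1000:.2f} Gbit/s")
```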

  • LT1203/v1.1 2.7 Wray Castle Limited

    Multi-Standard Cell Sites

    Software Defined Radio

    The technical advance that lies at the heart of the Single RAN concept is the emergence of SDR (Software Defined Radio).

    Traditional HDR (Hardware Defined Radio) systems employed hardware-based techniques to create radio signals and encode data onto them. A GSM base station from the 1990s, for example, would have contained a separate radio unit (usually known as a TRX or transceiver) for each radio carrier managed by the site. The TRX would contain hardware elements that generated a carrier frequency and then performed the specific modulations that allowed the signal to carry digital data. Different designs of TRX would have been required to support different forms of radio signal, so the evolution of a basic GSM base station into one that also supported GPRS or EDGE would have required a new physical TRX to be fitted to the base station.

    SDR, in contrast, happens in software on a DSP (Digital Signal Processor) and the modulation and signal generation techniques required for a particular radio system are controlled by a bespoke mathematical algorithm. A change in radio techniques would typically require only a change in software.

    The SDR element in a cellular base station (or baseband) will accept downlink input in the form of traffic streams arriving from higher layers and will receive uplink input in the form of digital samples of the received radio channel taken by the associated radio unit. DSP functions are usually deployed to pools of FPGA (Field-Programmable Gate Array) elements and each FPGA will manage an instance of the required SDR algorithm for a given carrier. The SDR process will take downlink data and create a virtual, mathematical representation of that data as it passes through the stages of formatting, precoding, modulation, Fourier transform and up-conversion that would have been employed as physical processes in an HDR system.

    The result, on the downlink, is a stream of complex-valued symbols that can be passed to an RF frequency synthesizer in one of the base station's radio units, which will create a physical analogue signal that matches the virtual description provided by the DSP. The radio unit will also contain the power amplifier, filters and other physical elements required to generate a useable radio signal before passing it on to the antenna system. The uplink process works in reverse, starting with samples taken in the radio unit.

    In a Single RAN environment, the output of different SDR processes, one for each carrier and each RAT in use, can be summed into a combined signal within the DSP before being passed to a multi-standard radio unit, thus allowing all carriers on a sector to be generated by the same combined process.
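    A minimal numerical sketch of that final summation step: two baseband carriers (standing in for cells of different RATs) are generated as complex sample streams and summed in the digital domain before being handed to a single radio unit. The sample rate and frequency offsets are arbitrary illustrative values, not any vendor's DSP implementation.

```python
import numpy as np

fs = 30.72e6                        # sample rate of the combined digital signal (Hz)
t = np.arange(4096) / fs

def baseband_carrier(freq_offset_hz, amplitude=1.0):
    """Toy stand-in for one RAT's modulated carrier: a complex exponential
    at the carrier's offset from the radio unit's centre frequency."""
    return amplitude * np.exp(2j * np.pi * freq_offset_hz * t)

# e.g. one carrier 5 MHz below centre, another 2.5 MHz above (arbitrary offsets)
carrier_a = baseband_carrier(-5.0e6)
carrier_b = baseband_carrier(+2.5e6, amplitude=0.7)

composite = carrier_a + carrier_b   # summed in the DSP before the DAC/radio unit
print("peak magnitude of composite signal:", np.abs(composite).max())
```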

  • LT1203/v1.12.8 Wray Castle Limited

    Single RAN

    Multi-Standard Band Sharing

    3GPP has laid out the radio transmission and reception requirements for MSR and Multi-Carrier/Multi-RAT base stations in specifications 37.104 and 37.900 respectively.

    Amongst other technical information, these documents specify the frequency bands that are available for MSR operation, as shown in the diagram.

    MSR frequency band sharing is only specified for certain bands, and the level of multi-RAT sharing that can be supported is classified into three Band Categories (BC).

    BC1 specifies the frequency bands that are available for MSR sharing (meaning that the noted combinations of RATs may be simultaneously generated by the same MSR base station) by combined 4G LTE (E-UTRA-FDD (Evolved Universal Terrestrial Radio Access-Frequency Division Duplex)) and 3G UMTS (UTRA (Universal Terrestrial Radio Access)-FDD) base stations.

    BC2 specifies the frequency bands that are available for MSR sharing by combined 4G LTE (EUTRA-FDD), 3G UMTS (UTRA-FDD) and 2G GSM/EDGE base stations.

    BC3 specifies the sharing options for combined LTE TDD and UMTS TDD base stations, but is not detailed in the diagram.

    A Single RAN network that combines support for 2G, 3G and 4G services is therefore able to operate in four bands (MSR Bands 2, 3, 5 and 8), which fortunately coincide with the 1900, 1800, 850 and 900 MHz bands in which GSM is typically deployed anyway. This means that the operator would be able, if their licences permitted, to deploy a combination of GSM900, UMTS900 and LTE900, for example, simultaneously from the same MSR base stations.
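    The band-category rules described above can be captured in a small lookup structure, as sketched below; the mapping simply restates the description in this section and is not a full reproduction of TS 37.104.

```python
# Simplified view of the MSR Band Categories described in the text.
BAND_CATEGORIES = {
    "BC1": {"rats": ["E-UTRA-FDD", "UTRA-FDD"]},
    "BC2": {"rats": ["E-UTRA-FDD", "UTRA-FDD", "GSM/EDGE"],
            "msr_bands": [2, 3, 5, 8]},   # roughly the 1900/1800/850/900 MHz bands
    "BC3": {"rats": ["E-UTRA-TDD", "UTRA-TDD"]},
}

def allowed_rats(band_category):
    """RATs that may be generated simultaneously by one MSR base station."""
    return BAND_CATEGORIES[band_category]["rats"]

print("A 2G/3G/4G Single RAN needs", allowed_rats("BC2"),
      "and so operates in MSR bands", BAND_CATEGORIES["BC2"]["msr_bands"])
```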

    Further Reading: 3GPP TS 37.104, 37.900

  • LT1203/v1.1 2.9 Wray Castle Limited

    Multi-Standard Cell Sites

    MSR Band Category 1 (3G/4G) Example

    The example in the diagram shows a base station using MSR Band Category 1 to transmit LTE and UMTS cells on adjacent carriers in the 2100 MHz band, which is referred to as MSR Band 1.

    The LTE cell uses EUTRAN Band 1 and transmits a symmetrical 5 MHz FDD cell that is described using the appropriate EARFCNs.

    The UMTS cell uses UTRAN Band I and transmits an FDD cell that is described using the appropriate UARFCNs (UMTS Absolute Radio Frequency Channel Numbers).

    If SDR techniques are employed by the base station it is possible that the different downlink carriers were generated in the same DSP and may, depending upon power output requirements and power amplifier capabilities, be transmitted via the same radio unit and antenna.

    The total radio bandwidth utilized by the MSR base station is known as the Base Station RF Bandwidth, which is defined in 3GPP TS 37.104 as being the bandwidth in which a Base Station transmits and receives multiple carriers and/or RATs simultaneously.

    The bandwidth occupied by the transmitted LTE and UMTS carriers is termed a Sub-Block, which is defined in 37.104 as being one contiguous allocated block of spectrum for use by the same Base Station. There may be multiple instances of sub-blocks within an RF bandwidth.
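    To make the channel-number references in this example concrete, the sketch below converts example EARFCN and UARFCN values in Band 1/Band I into downlink centre frequencies using the general channel-raster formulas from TS 36.104 and TS 25.104; the specific channel numbers chosen are arbitrary examples rather than values taken from the diagram.

```python
def earfcn_to_dl_mhz(n_dl, f_dl_low=2110.0, n_offs_dl=0):
    """E-UTRA downlink: F_DL = F_DL_low + 0.1 * (N_DL - N_Offs-DL).
    Defaults are the Band 1 values (F_DL_low = 2110 MHz, N_Offs-DL = 0)."""
    return f_dl_low + 0.1 * (n_dl - n_offs_dl)

def uarfcn_to_dl_mhz(n_d):
    """UTRA general downlink formula: F_DL = 0.2 * N_D (MHz)."""
    return 0.2 * n_d

# Arbitrary example channel numbers within E-UTRA Band 1 / UTRA Band I
print("EARFCN 300   ->", earfcn_to_dl_mhz(300), "MHz")     # 2140.0 MHz
print("UARFCN 10700 ->", uarfcn_to_dl_mhz(10700), "MHz")   # 2140.0 MHz
```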

    Further Reading: 3GPP TS 25.104, 36.104, 37.104

  • LT1203/v1.12.10 Wray Castle Limited

    Single RAN

    MSR Band Category 2 (2G/3G/4G) Example

    In this example, a Single RAN base station is configured to manage 2G, 3G and 4G cells and must therefore utilize Band Category 2 resources in MSR Band 8.

    The GSM cell uses the GERAN E-GSM (900 MHz) band and transmits an FDD cell that is described using the appropriate ARFCN.

    The UMTS cell uses UTRAN Band VIII and transmits an FDD cell that is described using the appropriate UARFCNs.

    The LTE cell uses E-UTRAN Band 8 and transmits a symmetrical 10 MHz FDD cell that is described using the appropriate EARFCNs (E-UTRAN Absolute Radio Frequency Channel Number).

    The Base Station RF Bandwidth in this case hosts two Sub-Blocks, one that covers a single GERAN carrier and another that covers GERAN, UTRAN and EUTRAN carriers.

    Further Reading: 3GPP TS 45.005, 25.104, 36.104, 37.104

  • LT1203/v1.1 2.11 Wray Castle Limited

    Multi-Standard Cell Sites

    MSR Base Station Architecture

    Modern Single RAN base stations typically employ SDR techniques and follow 3GPP MSR guidelines, which allow them to operate in multi-standard, multi-RAT modes. A high-level view of the generic layout of a hypothetical MSR base station is shown in the diagram.

    The transmission element connects the base station to a shared, packet-based backhaul, which carries traffic for all supported RATs. The central processor/control element maintains signalling connections with the supported access controllers and core networks and routes traffic between backhaul channels and DSP Pools.

    In this example, a separate DSP instance from the baseband DSP Pool has been assigned to handle each RAT transmitted in each sector, so each sector has one DSP dedicated to handling a GSM cell, another for a UMTS cell and a third for an LTE cell. Each DSP will perform the SDR functions required to take the received user traffic and signalling and turn it into a transmitted downlink signal. The set of downlink signals created for a given radio unit may be summed. They will also take the signals received on the uplink and extract from them the user traffic and signalling to pass on to the appropriate access controller or core network.
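    The per-sector, per-RAT DSP allocation described above amounts to simple pool bookkeeping, sketched below as a hypothetical illustration rather than a real baseband scheduler.

```python
from itertools import product

SECTORS = ["sector-1", "sector-2", "sector-3"]
RATS = ["GSM", "UMTS", "LTE"]

# Hypothetical baseband pool: one DSP instance per (sector, RAT) combination.
dsp_pool = [f"dsp-{i}" for i in range(len(SECTORS) * len(RATS))]

assignments = {}
for dsp, (sector, rat) in zip(dsp_pool, product(SECTORS, RATS)):
    # This DSP runs the SDR algorithm for one cell of one RAT in one sector.
    assignments[(sector, rat)] = dsp

print(assignments[("sector-2", "LTE")])   # e.g. 'dsp-5'
```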

    The links between baseband and radio units carry digital traffic. This consists of complex-valued samples representing the carriers to be transmitted on the TX side and high-rate sample streams describing the received RF signal on the uplink side.

    Each sector, in this example, requires its own radio unit to manage the physical RF TX and RX functions. On the transmit side the radio unit performs DAC (Digital to Analogue Conversion) and turns the stream of complex-valued data symbols generated by the baseband DSPs into a physical analogue RF signal with the same characteristics. An LPA (Linear Power Amplifier) then boosts the RF signal to the required output power. In this example a single multi-carrier, multi-RAT LPA is employed to amplify all RATs simultaneously before the signal is passed through a TX filter to remove any out-of-band components.

    On the RX side, the received signal is passed through an RX bandpass filter to limit it to the required bandwidth and is then passed through an LNA (Low Noise Amplifier) which boosts the signal, partly to overcome the loss experienced on its journey from the antenna. The RF receiver and ADC (Analogue to Digital Convertor) receive and sample the incoming RF signal at a very high rate to be passed to the baseband.

  • LT1203/v1.12.12 Wray Castle Limited

    Single RAN

    OBSAI and CPRI

    A key feature of SDR-based architectures is that the link between the baseband and the radio units is typically carried by a digital interface, as opposed to the analogue RF interface that would have been used in a legacy base station. In the early 2000s two similar but competing initiatives, known as OBSAI and CPRI, were launched that specified an open-standards architecture for this interface.

    OBSAI (Open Base Station Architecture Initiative), was an industry-sponsored initiative that sought to define a standardized layout for cellular base stations, with a set of common component modules connected by open interfaces. An OBSAI base station consists of four main modules (Transport, Control & Clocking, Baseband and RF) linked by a set of RP (Reference Point) internal interfaces, with external interfaces that connect to UEs on one side and network controllers on the other.

    The main objective of OBSAI was to promote a design blueprint for base stations which would provide commonality between different suppliers' modules, allowing vendors to act more like integrators by assembling base stations from compatible modules provided by a variety of third parties.

    The backers of the CPRI (Common Public Radio Interface) put forward a more modest architectural model that defined a common interface between an REC (Radio Equipment Controller) and the RE (Radio Equipment). The REC equates to the transport/controller/baseband sections of a base station and the RE is the RF unit.

    From a deployment point of view, the most useful feature of both OBSAI and CPRI is the digital interface that exists between the baseband and the radio unit. The legacy analogue connections that used to serve this interface imposed limits on the design of base stations, in the sense that, due to the losses associated with analogue transmission, the RF units always had to be located within a few metres of the baseband unit. A digital baseband-RF link can, in theory, be any length the designers want it to be, as long as the digital information is carried over a medium such as optical fibre. Base stations with digital baseband-RF interfaces can be designed in a distributed way that allows RRH (Remote Radio Head) RF units to connect to a centralized baseband/control unit across distances of anything up to a few kilometres.
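    Because the digital baseband-RF link carries digitized I/Q samples, its required capacity can be estimated from the sample rate and sample width. The sketch below makes such an estimate for an assumed LTE-style carrier; the sample rate, sample width and line-coding overhead are illustrative assumptions rather than figures taken from the OBSAI or CPRI specifications.

```python
def fronthaul_rate_gbps(sample_rate_msps, bits_per_sample, antennas,
                        line_coding_overhead=10 / 8):
    """Rough digital baseband-to-RF link rate: I and Q samples per antenna,
    multiplied by an assumed 8b/10b line-coding overhead."""
    bits_per_second = sample_rate_msps * 1e6 * 2 * bits_per_sample * antennas
    return bits_per_second * line_coding_overhead / 1e9

# Assumed example: one 20 MHz LTE carrier sampled at 30.72 Msps with 15-bit
# I and Q samples, carried for two antennas on the same link.
print(f"{fronthaul_rate_gbps(30.72, 15, 2):.2f} Gbit/s")   # roughly 2.3 Gbit/s
```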

    Further Reading: www.obsai.com, www.cpri.info

  • LT1203/v1.1 2.13 Wray Castle Limited

    Multi-Standard Cell Sites

    Localized vs Distributed Cell Sites

    Base stations that follow the OBSAI/CPRI models, or that employ a similar vendor-specific solution, offer network planners two main site design choices that can be described as localized and distributed.

    A localized design employs traditional antenna connectivity in the form of analogue feeder cables. Typical configurations employ mast head amplifiers, which are commonly abbreviated as MHAs (Mast Head Amplifiers), TMAs (Tower Mounted Amplifiers) or some other vendor-specific acronym, and are used to boost the strength of received uplink signals to offset some of the losses that those signals would otherwise experience as they travel down the feeder. Power for an MHA can be fed up the feeder itself by installing a Bias T unit at the base station end of the cable. The loss associated with analogue feeders typically limits the maximum distance between the RF units and the antennas to something less than 120 m, even with MHAs.

    A distributed design makes use of RRH techniques and can greatly extend the maximum distance between base station and antenna. If the digital baseband-RF interface is carried by optical fibre cables (usually the less expensive multimode fibres are used) then the typical maximum distance the antennas can be from the base station goes up to something in the order of 20 km.

    The main challenges associated with a distributed model relate to the need to provide a local power feed for the RRH units at the antenna site and the difficulties associated with routing an optical fibre cable between the connected nodes.

  • LT1203/v1.12.14 Wray Castle Limited

    Single RAN

    Remote Radio Heads

    Different vendors have adopted a variety of designs for their particular RRH units, which may incorporate a range of different elements.

    Most RRH designs include at least the DAC/ADC elements that allow the digital baseband-RF interface to connect to the analogue RF components and all will incorporate an RF transceiver. Some RRH types include radio up/down conversion elements that position that radio signal into the correct part of the spectrum, although other designs handle this functionality in SDR back in the baseband.

    The transmitter side of the RRH includes the LPA and TX filter, while the receive side includes the RX filter and LNA. Both sides are typically connected to an RF combiner/diplexer, which allows the RRH to support both main and diverse TX/RX feeder cables, which would be used if the RRH was connected to, for example, a cross-polarized antenna panel.

    Many cellular antennas have some form of remote tilt control fitted, either electrical downtilt control or a motor that allows the antenna to be remotely panned or tilted. This allows the operator to optimize or adjust the orientation of the antenna without sending an engineer to the site.

    The RRH will typically need to be connected to a local power source.

  • LT1203/v1.1 2.15 Wray Castle Limited

    Multi-Standard Cell Sites

    Distributed Cell Site Examples

    Examples of some of the distributed cell site models made possible by the use of RRH units are shown in the diagram.

    The first example shows a central base station connected to a set of RRH units that are serving a city centre or a business district. Each antenna site, in this configuration, would typically occupy less space and would require less infrastructure than a traditional base station site would, so the benefits associated with the use of a distributed model would include lower site rental and power costs.

    The second example shows a central base station and a set of RRHs serving a section of motorway. Each mast along the route creates a traditional three-sector site but uses RRHs to achieve this, meaning that each site does not require a full base station deployment.

  • LT1203/v1.12.16 Wray Castle Limited

    Single RAN

    C-RAN

    A further evolution of the distributed base station and Single RAN concepts is provided by the so-called C-RAN (Cloud RAN or Centralized RAN).

    In this innovation a network operator is able to deploy their base stations at centralized sites, such as core network centres.

    A fibre-optic DWDM (Dense Wavelength Division Multiplexing) network can then be used to distribute CPRI signals to a network of RRHs (Remote Radio Heads).

    Some vendors deploy separate physical base stations at the central site, whereas other vendors specify the use of virtualized base stations, that is, base station applications running on virtual servers.

    The Cloud RAN concept has the potential benefit of further reducing the footprint and power requirements of each cell site, needing only enough space and power to handle the RRH and optical transmission equipment at each site. There can also be large savings made on deployment and maintenance costs, as repairs to base stations would not necessitate the use of field engineers.

    As all of the base stations (whether physical or virtual) serving an area are co-located at the same site, any X2 interfaces between LTE eNBs become low-latency local connections, which could significantly improve handover completion times.

    The C-RAN concept has been largely driven by China Mobile, with work also undertaken by the NGMN (Next Generation Mobile Network) Alliance.

    Further Reading:
    http://labs.chinamobile.com/cran/wp-content/uploads/C-RAN%20NGMN-GSMA-2012-Feb-Bill-v16(1).pdf (China Mobile)
    www.ngmn.org/workprogramme/centralisedran.html (NGMN)

  • LT1203/v1.1 2.17 Wray Castle Limited

    Multi-Standard Cell Sites

    Multi-RAT Deployment Options

    Multi-RAT operation can be supported in a number of different ways.

    In what could be termed the traditional deployment model, an operator might deploy different RATs at the same or different sites in a way that shares little if any cell site infrastructure between the various base stations. The model shown in the diagram, where the base stations on a site even have their own separate masts, can be seen as an exaggerated example of this concept.

    A more realistic example of separate RAN deployment can be seen in the Site Sharing infrastructure sharing option. In this model an operator would deploy different RATs to a site in separate base stations but would share site resources such as power, BBU (Battery BackUp) and transmission and would also share a single mast between the different services. Infrastructure sharing of this kind may or may not extend to antenna system sharing; in some models each RAT would have its own set of antennas, in others a system of duplexers and splitters would be used to share antennas between RATs.

    As multi-RAT deployments began to evolve towards the Single RAN model, operators began sharing not only site resources between RATs but even whole base stations. Base station sharing is often categorized into two options; passive and active.

    In passive sharing a single base station chassis could house equipment dedicated to serving different RATs. Common elements such as power supplies might be shared but radio and baseband elements would all be kept separate. Active sharing methods allow key components such as transmission links, baseband processors and even radio units to be shared between RATs; this is very much the deployment model made possible by the adoption of OBSAI/CPRI techniques, supported by the use of SDR, RRH units and distributed site architectures.

  • LT1203/v1.12.18 Wray Castle Limited

    Single RAN

    Infrastructure Sharing

    The diagram shows an example of passive infrastructure sharing.

    Separate 2G and 3G base stations have been co-located to a site and some resources are shared, such as site power and transmission. The mast and antennas are also shared, which assumes that the base stations are either sharing the same frequency band or that the antennas are capable of multi-band operation.

    To reduce the number of feeder cables that need to be run up the tower, the site configuration employs duplexers and/or diplexers at the top and bottom of the feeders. The duplexers/diplexers combine (at one end) and separate (at the other) the RF signals belonging to the two different RATs and allow a single feeder to be shared by both RATs per sector. Duplexers are used to combine/split signals from the same frequency band and diplexers are used to combine/split signals from different frequency bands.

    More complex arrangements may be required if the 3G base station employs MHA/TMA devices.

  • LT1203/v1.1 2.19 Wray Castle Limited

    Multi-Standard Cell Sites

    Base Station Sharing

    The diagram shows an example of Base Station sharing in which one physical base station has been configured to support two different RATs.

    The level of integration employed in this example is fairly low, as only some of the base station's elements are being actively shared by the RATs - the power, transmission, control and baseband elements are shared but each RAT still has its own separate RF unit per sector. Each RAT, in this example, has been deployed in a different frequency band, meaning that this could be an example of a single base station supporting separate GSM900 and UMTS2100 operation.

    Diplexers are still required to support antenna sharing between the RATs.

  • LT1203/v1.12.20 Wray Castle Limited

    Single RAN

    MSR Base Station Sharing

    The diagram shows a further example of Base Station sharing but this time one in which one physical base station has been configured to support three different RATs using a combination of separate RAT and MSR radio units.

    This example is based on the assumption that the operator has deployed LTE in the same band as their existing GSM service (an operator that uses GSM1800 and LTE1800, for example). The 2G and 4G cells can therefore share MSR radio units, whilst the 3G cells (based in this example on UMTS2100) use a separate single-RAT radio unit.

    Diplexers are still required to support antenna sharing between the 2G/4G and 3G signals.

  • LT1203/v1.1 2.21 Wray Castle Limited

    Multi-Standard Cell Sites

    Frequency Band Sharing

    In this example the operator has elected to deploy 2G, 3G and 4G cells in the same frequency band and so is able to take advantage of the full range of MSR techniques.

    A single baseband pool of DSP resources serves all three RATs and the transmitted signals for each sector are generated in a single, shared radio unit per sector.

    This configuration and real-world configurations like it sit at the heart of the Single RAN solutions being deployed by operators around the world.

  • LT1203/v1.12.22 Wray Castle Limited

    Single RAN

    Potential RF Issues

    Multi-RAT deployments, whether they follow traditional co-location models or more recent Single RAN models, all face a similar set of potential RF-related issues.

    All of these issues relate to the fact that a multi-RAT deployment necessarily entails generating radio signals belonging to different services at the same site or even within the same base station. The set of typical RF issues that may be caused by co-location and base station sharing include:

    Interference caused by spurious emissions - a spurious emission is any unwanted signal component generated by a transmitter and may include noise introduced by the transmitter, amplifier or antenna system, harmonics of the signal(s) being transmitted and intermodulation products.

    Harmonics are an unavoidable by-product of the carrier transmission and modulation process and cause peaks of noise to appear at predictable integer multiples of the carrier frequency.

    Intermodulation occurs when signals of different frequencies are combined, for example in an amplifier or duplexer/diplexer. Intermodulation products cause noise to appear at frequencies above and below the transmitted carriers but, unlike harmonics, intermodulation products are less predictable and are therefore more difficult to plan against.
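    To illustrate why intermodulation is harder to plan for than simple harmonics, the short calculation below lists the second harmonics and the third- and fifth-order intermodulation products for two example co-sited carriers; the carrier frequencies are arbitrary illustrative values.

```python
# Two example co-sited downlink carriers (MHz) - arbitrary illustrative values.
f1, f2 = 1805.0, 1880.0

harmonics = [2 * f1, 2 * f2]                        # predictable integer multiples
im3_products = [2 * f1 - f2, 2 * f2 - f1]           # third-order intermodulation
im5_products = [3 * f1 - 2 * f2, 3 * f2 - 2 * f1]   # fifth-order intermodulation

print("2nd harmonics:", harmonics)       # [3610.0, 3760.0]
print("IM3 products: ", im3_products)    # [1730.0, 1955.0]
print("IM5 products: ", im5_products)    # [1655.0, 2030.0]
```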

    Most forms of co-location-related interference can be mitigated by employing appropriate transmit and/or receive filters on the affected units.

    Further Reading: http://www.ericsson.com/mx/res/thecompany/docs/publications/ericsson_review/2003/2003024.pdf

  • LT1203/v1.1 2.23 Wray Castle Limited

    Multi-Standard Cell Sites

    Single RAN Architectures

    MSR or Single RAN base stations will logically support the functions of the legacy base stations that they have replaced, so each could perform the functions associated with a GSM BTS (Base Transceiver Station), a UMTS Node B and/or an LTE eNB. These combined base station devices are deployed within a wider combined access network environment.

    One of the benefits of the MSR/Single RAN approach is that all RATs served by a base station are able to share the same packet-based backhaul connectivity; this usually equates to backhaul connections that carry IP traffic over an Ethernet bearer. The shared backhaul connection will usually be terminated at a SeGW (Security Gateway), which will distribute each RAT's logical interface traffic to the appropriate nodes.

    GSM A-bis interface traffic will be passed from the SeGW to a BSC and UMTS Iub interface traffic will be passed to an RNC. LTE base stations support two interfaces that are typically carried via backhaul links; S1 interface traffic will be routed from the SeGW to the EPC network, while any X2 interfaces will be routed on to the backhaul connections that lead to the target eNBs.
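    The traffic separation performed at the SeGW, as described above, amounts to routing each logical interface to its own destination; the sketch below expresses that mapping in a few lines, with hypothetical node names and routing table.

```python
# Hypothetical mapping of logical RAN interfaces to their terminating nodes.
INTERFACE_ROUTES = {
    "A-bis": "bsc-1",        # GSM traffic onwards to the BSC
    "Iub":   "rnc-1",        # UMTS traffic onwards to the RNC
    "S1":    "epc-site-1",   # LTE traffic onwards to the EPC
    "X2":    "peer-enb",     # inter-eNB traffic back out over the backhaul
}

def segw_route(interface):
    """Return the next hop for traffic arriving on a given logical interface."""
    return INTERFACE_ROUTES[interface]

print("Iub traffic forwarded to", segw_route("Iub"))
```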

    In keeping with the combined nature of the Single RAN base station, many vendors produce combined multi-RAT controller platforms that perform the separate logical functions of BSC and RNC nodes within the same physical device. The benefits associated with the use of multi-RAT controller nodes include reduced space requirements (as a single node may be replacing multiple separate nodes), reduced power consumption, reductions in the numbers of transmission cables required and others.


  • Single RAN

    3.i Wray Castle LimitedLT1203/v1.1

    SECTION 3

    SINGLE RAN BACKHAUL


  • CONTENTS

    Single RAN Backhaul

    3.iii Wray Castle LimitedLT1203/v1.1

    Backhaul Networks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.1

    Backhaul Architectures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.2

    1st Mile Backhaul Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.3

    Aggregation Network Sharing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.4

    SeGW (Security Gateway) Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.5

    Backhaul Transmission Technologies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.6

    Carrier Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.7

    MPLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.8

    IP RAN Backhaul Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.9

    Single RAN Synchronization Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.10

    Redundancy and Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.11

    RAN QoS Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.12

    Backhaul Protocol Stacks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.13

    Single RAN Protocol Stacks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.14

    VLANs and Dot 1q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.15

    Backhaul VLANs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.16

    Backhaul VLAN Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.17

    Backhaul VLAN Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.18

    Single RAT VLAN Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.19

    Multi-RAT, Multi-RAN VLAN Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.20

    Multi-RAT, Single-RAN VLAN Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3.21


    OBJECTIVES

    Single RAN Backhaul

    At the end of this section you will be able to:

    describe the basic services provided by a backhaul network

    outline some of the architectural models commonly employed in backhaul networks, including the separation of backhaul into access and aggregation zones

    identify the key functions performed by the SeGW (Security Gateway) in IP-based backhaul networks

    list some of the transmission technologies commonly employed in modern backhaul networks

    outline the functionality of popular backhaul technologies such as Carrier Ethernet and MPLS

    describe some of the techniques employed to facilitate synchronization, redundancy and QoS in IP-based backhaul networks

    describe the layers of the protocol stacks employed by both legacy and Single RAN backhaul interfaces

    describe the operation of the VLAN (Virtual LAN) concept in relation to backhaul for Single RAN networks


    Backhaul Networks

    The backhaul service provided to a network's remote access nodes can be generically divided into several basic areas:

    The access node itself will generally be a cellular base station (BTS, Node B, eNode B); the backhaul link supplied to the access node is termed the first mile or last mile connection and forms part of the Access Transport Network.

    Unless a backhaul link operates in pure point-to-point mode, in which case it will connect directly to one of the operator's core network sites, the access link will connect to an aggregation point. These nodes are referred to differently by different operators but they are commonly known as transmission high sites, Point of Concentration (PoC) aggregation sites or even first aggregation sites. In networks that employ microwave access links the first aggregation node is often equipped with a tall tower or is located on a hill or tall building (hence the term high site); this allows it to act as a hub for microwave links emanating from local base station sites. First aggregation points generally aggregate traffic from multiple low-capacity access links onto a smaller number of high-capacity microwave or fibre connections that lead further back into the operator's network. Some network designs incorporate further levels of aggregation in the access network, leading to second aggregation points and second mile connections.

    In legacy 2G and 3G radio access networks backhaul links often connected to remote BSC or RNC sites, which served as the radio resource management nodes for an area of the access network. In addition to signalling and management functions, these sites provided a further aggregation point for access traffic, as all access connections for nodes in a given area would be routed to the controller site. In more modern network designs the access network controller has either been moved back to a core network site or doesn't exist at all, but many networks have kept the remote sites operational to continue to act as traffic aggregation points, with high-capacity fibre connections to the core network.

    The high-capacity connections established between the first/second aggregation points and the network controller site are often collectively termed the metro transport network. Connections between remote network controller sites and the core network can also be carried by the metro network.


    Backhaul Architectures

    Backhaul transmission links may be configured in many ways. Some methods offer a lower cost of deployment but little in the way of redundancy, whilst others trade capacity or cost for greater resilience. The choice of method is generally dictated by the type of network being built, by the importance of the individual sites being connected and by operator policy.

    A selection of the most common backhaul architecture types is shown in the diagram, as is an indication of the trade-off between capacity, resilience and cost inherent in each option.


    1st Mile Backhaul Sharing

    The architecture employed by an operator when implementing a Single RAN backhaul scheme can vary widely, especially when it comes to deciding where the sharing should stop.

    The example in the diagram shows a scenario in which an operator has elected to share the initial access backhaul or 1st Mile connection between the deployed RATs at each site.

    A common backhaul Ethernet connection carries traffic for all RATs to an aggregation node (which may also be acting as a SeGW), from where separate 2G, 3G and 4G traffic streams are forwarded over appropriate routes.

    2G traffic is forwarded via a BSC to the 2G core networks, 3G traffic is forwarded via an RNC to the 3G core networks and 4G traffic is forwarded directly to the EPC. All core network interfaces shown here share the same aggregation network but are carried as separate traffic streams.

    This approach may be particularly useful in networks that have previously decided to deploy remote BSC/RNC nodes. The aggregation node/SeGW in these cases might be co-located with the access controllers and could take advantage of the existing backhaul network site configuration.


    Aggregation Network Sharing

    In this example the operator has elected to keep the traffic for all RATs at a site combined within the same backhaul connection all the way back to the core network. Single RAN traffic will therefore be sharing logical connections across both the access and aggregation networks.

    The logical connections employed for this purpose could, for example, consist of a Carrier Ethernet virtual private line or an MPLS VPN (Multi-Protocol Label Switching Virtual Private Network).

    This configuration would be attractive in scenarios where the operator has previously elected to deploy BSC/RNC nodes at core network sites.


    SeGW (Security Gateway) Deployment

    The SeGW is a generic node that could be deployed in the backhaul environment to manage connection security functions. 3GPP abbreviates the Security Gateway as SeGW in some documentation and SEG in others, most notably in the specifications that deal with NDS (Network Domain Security), TS 33.210 and TS 33.310.

    The SeGW is mainly responsible for the creation and management of IPsec SA (Security Association) relationships with access nodes such as base stations. Packet-based cellular backhaul links are typically protected using the IPsec ESP (Encapsulating Security Payload) mode with Mutual Authentication enabled.

    A single backhaul IPsec tunnel is typically created per base station, carrying all backhaul traffic to the SeGW. Any access node-to-access node interfaces, such as the X2 interface in LTE networks and the Iur/Iurh/Iur-g interfaces optionally configured in UTRAN and GERAN networks, may be routed via the SeGW, as shown in the diagram.
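
    The single tunnel per base station can be thought of as wrapping every inner backhaul packet in an outer ESP structure addressed to the SeGW. The sketch below is a conceptual illustration of that encapsulation with placeholder encryption, integrity and address values; it is not a working IPsec implementation.

        # Conceptual sketch of ESP tunnel-mode encapsulation between a base
        # station and the SeGW. Cryptographic steps are placeholders only.
        from dataclasses import dataclass

        @dataclass
        class EspTunnelPacket:
            outer_ip: str       # outer header addressed base station <-> SeGW
            spi: int            # identifies the SA negotiated with the SeGW
            sequence: int       # anti-replay sequence number
            ciphertext: bytes   # encrypted inner IP packet (A-bis, Iub, S1 or X2 traffic)
            icv: bytes          # integrity check value covering the ESP payload

        def encapsulate(inner_packet: bytes, spi: int, seq: int) -> EspTunnelPacket:
            ciphertext = bytes(b ^ 0xAA for b in inner_packet)   # placeholder 'encryption'
            icv = bytes(8)                                       # placeholder integrity tag
            return EspTunnelPacket("bts-addr -> segw-addr", spi, seq, ciphertext, icv)

        pkt = encapsulate(b"inner IP packet carrying S1 traffic", spi=0x1001, seq=1)
        print(hex(pkt.spi), pkt.sequence, len(pkt.ciphertext))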

    The main benefit to be derived from the use of the SeGW is enhanced security for a network's backhaul connections. The main disadvantages are the potential traffic bottleneck that each SeGW could become and the additional access network latency that traversal of the security protocols could introduce into connections.

    Further Reading: 3GPP TS 33.210, 33.310


    Backhaul Transmission Technologies

    The backhaul transmission solutions employed in early mobile networks generally used systems based on PDH (Plesiochronous Digital Hierarchy) data rates. E1 (2 Mbit/s) was the primary rate PDH interface type in Europe and many other regions, whilst T1/JT1 (1.5 Mbit/s) was the base standard employed in the US, Japan and several other countries. Higher-order transmission was deployed as multiples of the primary rate, either as direct multiples (2xE1, 4xE1, 8xE1, etc.) or in the steps dictated by the PDH standards in use. In Europe these would have been E1 (2 Mbit/s), E2 (8 Mbit/s), E3 (34 Mbit/s) and E4 (140 Mbit/s) and in the US they would have been T1 (1.5 Mbit/s), T2 (6 Mbit/s), T3 (45 Mbit/s) and T4 (274 Mbit/s).

    Legacy TDM microwave and copper-based transmission solutions were typically scaled to fit in with the strictures of the PDH standards, although more modern systems were designed to use the higher data rates, more efficient multiplexing services and improved inter-working capabilities of SDH (Synchronous Digital Hierarchy), which is known as SONET (Synchronous Optical Network) in the USA. SDH offers transmission capacities calculated in units known as STMs (Synchronous Transport Modules); STM-1 operated at 155 Mbit/s and was capable of multiplexing up to 63 E1 tributaries or a similar quantity of data structured in some other format. Higher-order multiplexing options were STM-4 (622 Mbit/s), STM-16 (2.5 Gbit/s), STM-64 (10 Gbit/s) and STM-256 (40 Gbit/s). The highest-order multiplexing versions of SDH were only available over fibre connections. SDH microwave systems were available that offered STM-1 or STM-4, and copper-based SDH systems generally topped out at STM-1.

    Fibre optical transmission systems are available that operate in a variety of frequency/wavelength bands but generally conform to one of two physical fibre types: multi-mode fibres, which have a comparatively large core diameter of around 50–60 µm and are applicable to short-distance (up to a few hundred metres) communication using relatively inexpensive equipment; and single-mode fibres, which are thinner, at around 10 µm, but are able to operate over much longer distances using more expensive transmission equipment. Data rates of 100 Gbit/s or more are possible with optical fibre transmission, especially when WDM (Wavelength Division Multiplexing) techniques are employed.

    The graph in the diagram provides an indication of the popularity of the three basic backhaul physical layer options prior to the commencement of large-scale LTE rollouts.


    Carrier Ethernet

    Carrier Ethernet is the term used to describe one of the options that exist to turn Ethernet, which was initially designed to support LAN (Local Area Network) services, into a technology that can be employed to support WAN (Wide Area Network) functionality. Ethernet provides a relatively cheap, simple and well-understood transmission protocol that is flexible enough to carry just about any form of digital traffic. This makes it ideal for use as the bearer in a combined backhaul network that serves a Single RAN deployment.

    Carrier Ethernet functionality makes use of VLAN (Virtual LAN) techniques in which Ethernet frames are tagged to indicate the VLAN to which they belong; the tag values affect the switching decisions made by Ethernet nodes when attempting to deliver those frames. Source and destination customer networks assign C-Tag VLAN tags to frames for local switching purposes, whilst Carrier Ethernet providers assign S-Tags to frames to allow them to be switched to the appropriate destination node or network.

    In this example a mobile network operator has contracted with a Carrier Ethernet provider to receive backhaul aggregation services. The provider's Ethernet network offers connectivity between the operator's core network sites and their remote aggregation sites. Onward connectivity from the aggregation site to each base station site is carried by operator-owned Ethernet-based backhaul transmission such as GigE microwave.

    The base station in this example has been configured to belong to the operator's VLAN 3, so traffic destined for that site will be tagged by core network Ethernet switches with VLAN ID 3.

    When outbound Ethernet frames pass through the Carrier Ethernet service provider's gateway they are provided with an additional tag. The existing tag now becomes the C-Tag and the new tag pushed on at the gateway becomes the S-Tag. The service provider has assigned VLAN ID 1027 to this customer virtual connection, which is the value carried in the S-Tag.

    The S-Tag allows the frame to be switched through the provider's network to the appropriate cell site or aggregation site gateway, where the S-Tag is popped from the frame and the original, single-tagged frame is available to be forwarded to the destination cell site.
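
    The push and pop of the S-Tag can be followed step by step in the minimal sketch below, which reuses the example values from the text (customer VLAN ID 3, provider VLAN ID 1027) and deliberately simplifies the frame to a tag stack.

        # Minimal sketch of the C-Tag/S-Tag push and pop described above.
        frame = {"dst": "cell-site-gw", "src": "core-switch", "tags": [], "payload": b"IP"}

        def push_tag(f, vlan_id):
            f["tags"].insert(0, vlan_id)   # the outermost tag sits at the front of the stack

        def pop_tag(f):
            return f["tags"].pop(0)

        push_tag(frame, 3)      # operator's switch applies the C-Tag (VLAN 3)
        push_tag(frame, 1027)   # provider gateway pushes the S-Tag (VLAN 1027)
        print(frame["tags"])    # [1027, 3] while crossing the provider's network

        pop_tag(frame)          # provider's egress gateway pops the S-Tag
        print(frame["tags"])    # [3] - the single-tagged frame continues to the cell site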


    MPLS

    MPLS predates Carrier Ethernet and offers an alternative way of configuring Layer 2 services such as Ethernet to act as a bearer for WAN services.

    Whereas Carrier Ethernet inserts VLAN tags into Ethernet frames to identify the service flows to which particular frames belong, MPLS inserts one or more shim headers between the Ethernet frame header and the frame payload, which is typically an IP packet. The shim header carries a label that identifies the path over which the frame should be switched, known as an LSP (Label Switched Path). Shim headers may be stacked in a frame, allowing multiple layers of path switching and aggregation to take place and giving an MPLS network the ability to scale as required. LSPs are set up and cleared down using label distribution techniques that are similar in effect to the signalling protocols employed in legacy circuit-switched networks.

    MPLS introduces some new terminology for Layer 2 switching.

    An LSR (Label Switched Router) is an MPLS Layer 2 switch, which also has the appropriate control plane functions to participate in setting up and tearing down LSPs.

    An LSP is an end-to-end MPLS virtual connection between a pair of edge-LSRs, and across a network of LSRs. This is similar to ATM Virtual Circuits and Virtual Paths.

    An edge-LSR is a special LSR that originates or terminates LSPs, and can classify IP traffic for forwarding across the most appropriate LSP by placing the packet in an FEC (Forwarding Equivalence Class).

    An FEC is a collection of different packets that are treated identically by the MPLS network. So, for example, in a best-effort Internet, all packets destined for the same aggregate network prefix would typically be within the same FEC.
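
    In practice the forwarding decision reduces to two lookups: the edge-LSR maps each packet to an FEC (and so to an initial label), and each subsequent LSR swaps the incoming label for an outgoing label and next hop. The sketch below is a toy illustration with invented labels, prefixes and node names.

        # Toy sketch of edge-LSR FEC classification and core-LSR label swapping.
        import ipaddress

        FEC_TABLE = {                                  # edge-LSR: prefix -> initial label
            ipaddress.ip_network("10.1.0.0/16"): 100,  # e.g. traffic towards the EPC site
            ipaddress.ip_network("10.2.0.0/16"): 200,  # e.g. traffic towards an RNC site
        }

        LFIB = {                                       # core LSR: in-label -> (out-label, next hop)
            100: (101, "lsr-b"),
            200: (201, "lsr-c"),
        }

        def classify(dst_ip: str) -> int:
            """Edge-LSR behaviour: place a packet in an FEC and return its initial label."""
            addr = ipaddress.ip_address(dst_ip)
            for prefix, label in FEC_TABLE.items():
                if addr in prefix:
                    return label
            raise LookupError("no FEC matches this destination")

        label = classify("10.1.20.5")       # edge-LSR pushes label 100
        out_label, next_hop = LFIB[label]   # core LSR swaps 100 -> 101 and forwards to lsr-b
        print(label, out_label, next_hop)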


    IP RAN Backhaul Requirements

    Cellular networks were originally designed to make use of traditional backhaul transmission technologies, such as E1/T1 links, which provided a default set of services and characteristics.

    The migration to IP-based backhaul technologies has meant that those services and characteristics must now be provided by IP, by a Layer 2 bearer technology or by some additional protocols.

    The main functions that need to be provided in addition to the physical backhauling of RAN traffic include the following:

    Synchronization

    Security

    Scalability

    QoS (Quality of Service) Management

    Redundancy and Protection

    Multiple protocol options exist that enable each of these services to be configured in an IP-based environment.


    Single RAN Synchronization Options

    There are three generic methods available to operators to provide synchronization signals to base stations: synchronization by TDM backhaul, synchronization via satellite and synchronization via packet network.

    The default method of providing backhaul services to legacy base stations was via a TDM-based (E1, T1, PDH, SDH) wireline or microwave connection.

    The timing signals transmitted by navigational and other types of satellite are generally derived from G.811-compatible atomic clocks carried by the satellites and are therefore of an order of accuracy that can be used as a sync source by telecoms equipment.

    If a legacy synchronization service is unavailable, or if an operator has elected to employ next-generation techniques, then they may decide to employ a packet network-based timing solution. There are many packet network timing protocols available, including NTPv3/v4, IEEE 1588v2/PTP and Sync-E, which provide timing services in a number of ways.

    Synchronous Ethernet (Sync-E) provides wire-based timing in a way that is very similar to the way in which sync signals were delivered over copper E1 circuits. A timing signal is carried from node to node embedded in the electrical signal that operates over interconnecting cables. Most other forms of packet network timing carry timing messages in data packets. Some, like NTPv4 (Network Time Protocol version 4), carry accurate timestamp information to client nodes allowing them to adjust their local clocks, whilst others, like IEEE 1588v2/PTP (Precision Time Protocol), transmit a regular stream of synchronization packets to client devices to provide a timing signal.
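
    Both NTP and PTP ultimately recover time from exchanges of timestamped messages, relying on the assumption that the path delay is the same in both directions. The sketch below shows the standard two-way offset and delay calculation with invented timestamp values.

        # Standard two-way time-transfer calculation that underpins both NTP and
        # PTP, assuming forward and reverse path delays are equal.
        # t1: client/slave sends its request     t2: server/master receives it
        # t3: server/master sends its reply      t4: client/slave receives the reply
        t1, t2, t3, t4 = 1000.000, 1000.012, 1000.013, 1000.021   # seconds, illustrative

        offset = ((t2 - t1) - (t4 - t3)) / 2   # how far the local clock is from the master
        delay = ((t2 - t1) + (t4 - t3)) / 2    # estimated one-way path delay

        print(f"offset = {offset * 1e3:.1f} ms, delay = {delay * 1e3:.1f} ms")
        # Asymmetric backhaul paths break the equal-delay assumption and show up
        # directly as an error in the recovered offset.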

    All of the various packet network-based timing methods share some common features. Firstly, all of them at some point refer back to a central timing source, preferably a G.811 PRC (Primary Reference Clock). Secondly, all of them are able to reuse elements of traditional timing networks if required, allowing timing servers themselves to be synchronized via GPS, E1 or SDH timing inputs as well as by packet-based methods.

    All three methods of distributing synchronization assume that each network node incorporates a reasonably accurate internal clock of its own that can establish a PLL (Phase-Locked Loop) relationship with the recovered timing signal, which can then be adjusted by subsequent timing inputs. The more accurate the local PLL clock, the less frequently it will need to be updated.


    Redundancy and Protection

    Legacy backhaul transmission systems, such as E1/T1 and SDH/SONET, were provided with multiple forms of link redundancy to ensure that network failures could be overcome. Collectively these technologies provided what is known as APS (Automatic Protection Switching).

    E1/T1 systems had access to duplication nodes that could create a copy of each frame and route it over a separate redundant path, while SDH/SONET networks were provided with a sophisticated set of link, ring and mesh redundancy options.

    If legacy transmission technologies are not to be employed in a next-generation operator's network then other forms of APS may be required. Given that Ethernet in one form or another is the most likely data link layer protocol to be deployed in these networks, this section examines some of the protection and redundancy protocols available for that environment.

    STP (the Spanning Tree Protocol) works by examining the structure of a network on a link-by-link basis. It finds all redundant links between pairs of switches and nominates one link to be used to forward traffic; the others are blocked. If the nominated forwarding link fails then one of the blocked links is reactivated and takes over the traffic forwarding duties. The Rapid Spanning Tree Protocol (RSTP) is another IEEE protocol (standardized as 802.1w) which addresses the problem of slow convergence experienced with the original STP and is designed to work in multivendor switching environments; its main benefit is a much faster convergence time, typically a few seconds rather than the 30 seconds or more required by the original STP. Multiple STP (MSTP) was originally standardized as 802.1s and extends RSTP to work on a per-VLAN basis. Whereas the original STP permitted only one instance of the protocol to run in a domain, MSTP allows separate spanning tree instances to run for individual VLANs, or groups of VLANs, operational in a domain.
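
    The forward-one-and-block-the-rest behaviour can be illustrated with a very small sketch: among redundant links, one is elected to forward and the others remain blocked until it fails. The link names and costs below are invented; real STP elections also take bridge and port identifiers into account.

        # Minimal sketch of STP's forward-one-block-the-rest behaviour on a set
        # of redundant links between two switches.
        links = {
            "link-a": {"cost": 10, "up": True},
            "link-b": {"cost": 20, "up": True},
            "link-c": {"cost": 20, "up": True},
        }

        def elect_forwarding(link_table):
            """Pick the lowest-cost working link to forward traffic; the rest are blocked."""
            working = [name for name, link in link_table.items() if link["up"]]
            return min(working, key=lambda name: link_table[name]["cost"]) if working else None

        print(elect_forwarding(links))   # link-a forwards; link-b and link-c stay blocked

        links["link-a"]["up"] = False    # the forwarding link fails...
        print(elect_forwarding(links))   # ...and a previously blocked link takes over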

    Other commonly-employed options include G.8031 Ethernet Linear Protection Switching (ELPS) and G.8032 Ethernet Ring Protection Switching (ERPS), which provide sub-50 ms protection switching of Ethernet connections. ELPS is used to protect point-to-point links, while ERPS protects ring networks. Both use standard Ethernet learning, forwarding and filtering functions. APS messaging is carried by Ethernet OAM (Operations, Administration and Maintenance) functions residing on the Ethernet switches; in the ring case this uses R-APS (Ring Automatic Protection Switching) message formats.


    RAN QoS Requirements

    QoS, or at least configurable QoS, wasn't a topic that drew much attention in networks that employed TDM-based backhaul, as the bearers employed in those networks applied the same QoS to all traffic. User data was mapped into a timeslot and was carried across a switched circuit at a standard speed; there was no possibility of allowing some traffic to jump the queue as there was no opportunity for traffic to be queued.

    IP-based backhaul networks have great flexibility in terms of the QoS that can be configured and applied to traffic flows, which can be seen as both an advantage (as it allows operators to prioritize certain types of traffic over others) and a disadvantage (as it increases the complexity of the network's configuration).

    Quality of Service in an IP-based RAN can be applied at Layer 2 or at Layer 3 (or even at both layers, in the case of connections that traverse differently-configured network domains).

    QoS at the IP (Layer 3) level is usually managed by DiffServ (Differentiated Services), which provides a very simple but highly effective mechanism for flagging the relative priorities of different packets in a router's egress buffers. It works by applying a marker, known as a DSCP (DiffServ Code Point), to each IP packet created in or passing through a network.

    DSCPs are defined within a hierarchy which encompasses high, medium and low priority services and those that will receive a best effort service. The appropriate code point is marked on a packet by encoding it into the DS field, which replaced the ToS (Type of Service) field in the IPv4 header and occupies the Traffic Class field in IPv6 packets. A network operator can configure the DiffServ implementation in their routers to create different egress queues for each supported traffic class and, during busy periods, to forward traffic from these queues in a manner that expedites higher-priority traffic.
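
    The per-hop behaviour this produces amounts to sorting packets into egress queues by DSCP and serving the more important queues first. The sketch below uses a strict-priority scheduler and a hypothetical DSCP-to-queue mapping; deployed networks typically use weighted schedulers and operator-specific mappings.

        # Minimal sketch of DiffServ-style forwarding: classify packets into
        # egress queues by DSCP and serve them in strict priority order.
        from collections import deque

        QUEUE_OF_DSCP = {46: "high",      # EF, e.g. voice or synchronization traffic
                         34: "medium",    # AF41, e.g. signalling
                         0:  "best"}      # default / best effort

        queues = {"high": deque(), "medium": deque(), "best": deque()}

        def enqueue(packet):
            queues[QUEUE_OF_DSCP.get(packet["dscp"], "best")].append(packet)

        def dequeue():
            for name in ("high", "medium", "best"):   # strict priority service order
                if queues[name]:
                    return queues[name].popleft()
            return None

        for pkt in ({"dscp": 0, "id": 1}, {"dscp": 46, "id": 2}, {"dscp": 34, "id": 3}):
            enqueue(pkt)

        print([dequeue()["id"] for _ in range(3)])   # [2, 3, 1] - EF first, best effort last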

    IP-based QoS can also be applied at Layer 2. Both Carrier Ethernet and MPLS support methods that allow the CoS (Class of Service) being carried by a frame to be marked and for frames to receive prioritized service in line with those markings. Carrier Ethernet applies CoS markings to the PCP (Priority Code Point) field of a Dot 1q tag, whereas MPLS supports various methods of reusing DiffServ markings.


    Backhaul Protocol Stacks

    Each generation of cellular network had its own specific backhaul interfaces defined, each with its own set of protocol stacks.

    Traditionally the GSM A-bis backhaul interface employed a protocol stack that consisted of frame-based traffic and signalling formats transported over TDM transmission links, typically using E1/T1/JT1 transmission standards. GSM networks accumulated huge TDM transmission networks that typically supplied one or more E1/T1/JT1 links to each cell site.

    UMTS Iub backhaul interfaces were designed to be carried over ATM links and therefore consisted of traffic and signalling channels that sat on top of an AAL (ATM Adaptation Layer), which manipulated their data so that messages could be carried in an ATM cell stream. UMTS R99 networks generally built their backhaul environment on top of their existing GSM TDM-based networks and, as sites got busier and required more bandwidth, were obliged to employ additional techniques such as IMA (Inverse Multiplexing for ATM) to keep up with demand. Many networks took the decision to provide high-capacity SDH connections to busier sites, thus avoiding the need to employ TDM or IMA.

    LTE operates in an all-IP environment in which all traffic (signalling, user plane, O&M) is carried by IP. Operators typically employ packet-based backhaul networks that use technologies such as Carrier Ethernet or MPLS to provide connectivity for individual cell sites.


    Single RAN Protocol Stacks

    Single RAN backhaul traffic may be carried by a variety of Layer 2 and Layer 1 technologies.

    2G traffic will continue to be mapped to TDM (E1/T1) frames for transit across the RAN, but those frames will themselves be mapped, via a CES (Circuit Emulation Service) framing protocol, to an IP-based connection. User plane traffic will typically map to a UDP/IP (User Datagram Protocol) bearer, whilst signalling and administrative traffic might map to a TCP (Transmission Control Protocol) or SCTP (Stream Control Transmission Protocol) bearer.

    Similar functions will be performed for ATM-based traffic connecting to UMTS R99 Node Bs. Air interface traffic will continue to be mapped, via the Iub Frame Protocol and AAL2, to ATM cells. Those cell streams might be mapped via IMA to a set of TDM (E1/T1) framing functions which would then map to UDP/IP, or the ATM cell stream might map directly to UDP/IP. Control plane traffic will continue to map to SAAL (Signalling AAL, carried over AAL5) and into ATM cells, which then follow the same path as user plane traffic.

    Post-Release 7 UMTS Node Bs and LTE eNBs are able to connect directly to native IP bearers with no requirement for framing or emulation services.

    Below the IP layer, converged networks typically employ MPLS over Ethernet or Carrier Ethernet to transport IP traffic at Layer 2 and may employ a variety of Layer 1 transmission techniques to provide physical bearers for that traffic.
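
    One way to keep track of the layering described above is to write each stack out explicitly, top to bottom. The listing below is an illustrative summary of the options described in the text rather than a definitive protocol reference.

        # Illustrative summary of the Single RAN backhaul stacks described above,
        # written top (RAT traffic) to bottom (transport). A simplification only.
        STACKS = {
            "2G A-bis over IP":   ["A-bis traffic", "TDM frames (E1/T1)", "CES",
                                   "UDP", "IP", "Ethernet / MPLS", "physical bearer"],
            "3G R99 Iub over IP": ["Iub Frame Protocol", "AAL2", "ATM (optionally via IMA/TDM)",
                                   "UDP", "IP", "Ethernet / MPLS", "physical bearer"],
            "Post-R7 UMTS / LTE": ["user plane and signalling traffic", "UDP or SCTP",
                                   "IP", "Ethernet / MPLS", "physical bearer"],
        }

        for rat, stack in STACKS.items():
            print(f"{rat}: " + " / ".join(stack))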


    VLANs and Dot 1q

    Ethernet was originally designed to provide local area networking services to nodes connected to the same local physical medium or to devices connected to the same hub or switch. This came to be seen as a limitation to the way in which organizations created workgroups and to their ability to network between associated devices that were physically remote. Later iterations of Ethernet therefore added the ability to create virtual LANs or VLANs.

    A VLAN operates in the same way as a physical LAN; it provides an all-informed broadcast medium linking a set of devices together, with the added benefit that the members of a VLAN do not need to be connected to the same physical medium. VLAN functionality is managed by Ethernet switches. When a node is to be included in a VLAN, the Ethernet switch that serves it is configured with the appropriate VLAN ID on the port that connects to the node. Any frames transmitted by the node are automatically copied by the switch to ports configured with the same VLAN ID. When frames transmitted by a VLAN member are forwarded from the originating switch to other switches, a VLAN tag is added to the frame to indicate the VLAN to which it belongs. Receiving switches will then forward the received frame through any local ports configured as members of that VLAN.

    The ability to add and remove (or push and pop) VLAN tags, plus the signalling intelligence that allows switches to advertise details of the VLANs to which their locally connected nodes belong, is provided by VLAN trunking protocols. There are two main trunking protocols in common use: VTP (VLAN Trunking Protocol) is a proprietary Cisco protocol, whilst 802.1q (commonly known as dot 1q) is an open-standards IEEE protocol.

    802.1q adds tags to VLAN frames as they travel between switches. Dot 1q tags are inserted between the Source MAC (Medium Access Control) Address and EtherType fields and are removed before the frames are forwarded to end-user devices. A dot 1q tag is 4 bytes long and starts with a TPID (Tag Protocol ID), which defaults to a value of 0x8100. Two further fields, the PCP (Priority Code Point) and CFI (Canonical Format Indicator), are often left at their default values, leaving the VLAN ID field as the main substantive part of the tag. The VLAN ID field is 12 bits long, allowing a network (or a region of a network) to define up to 4096 separate VLAN values (of which 4094 are usable).
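
    The 4-byte layout lends itself to a short bit-packing sketch; the example below builds a tag for the VLAN 3 scenario used earlier in this section, with an assumed priority value, and is purely illustrative.

        # Minimal sketch: pack a 4-byte 802.1Q tag (TPID plus priority, CFI and
        # the 12-bit VLAN ID).
        import struct

        def dot1q_tag(vlan_id: int, priority: int = 0, cfi: int = 0) -> bytes:
            if not 0 <= vlan_id < 4096:
                raise ValueError("the VLAN ID is a 12-bit field")
            tci = (priority << 13) | (cfi << 12) | vlan_id   # Tag Control Information
            return struct.pack("!HH", 0x8100, tci)           # TPID defaults to 0x8100

        tag = dot1q_tag(vlan_id=3, priority=5)
        print(tag.hex())   # 8100a003 -> TPID 0x8100, priority 5, VLAN ID 3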

    Further Reading: www.ieee802.org/pages/802.1Q.html


    Backhaul VLANs

    VLANs can be and are employed to support packet-based backhaul deployments.

    In a generic Ethernet backhaul environment, VLAN management can be handled by Ethernet switches deployed at various points. In the example shown in the diagram the interface between the operator's core network and the access/backhaul network is managed by an IP router acting as a CNG (Core Network Gateway); this node may also