C-RAN: Moving baseband to the Cloud
C-RAN: Moving Baseband to the Cloud
Matthew C. Valenti, West Virginia University
Joint work with Peter Rost, Nokia Networks, and Aleksandra Checko, MTI Radiocomp
WVU Wireless Communications Research Lab

Faculty:
• Matthew C. Valenti, Founder and Director
• Brian Woerner, Professor and Dept. Chair
• Natalia Schmid, Professor
• Daryl Reynolds, Associate Professor
• Vinod Kulathumani, Associate Professor

Graduate Students:
• Salvatore Talarico (interference modeling)
• Terry Ferrett (physical-layer network coding)
• Mohammad Fanaei (distributed detection)
• Veeru Talreja (biometrics in the cloud)

Tools and Facilities:
• 400-core cluster computer
• Open source simulation software: CML
• Web portal for researchers: WebCML
• USRP software radio testbed

Research Topics:
• Wireless and cellular networks
• Cooperative communications
• Coding and modulation
• Spread spectrum systems
• Optimization of frequency hop systems
• Wireless sensor networks
• Distributed detection
• Cloud computing
• Cloud radio access networks
• Biometric and human activity detection
Nokia AirScale Radio Access
Cloud RAN / AirScale Wi-Fi / AirScale Base Station
Common software supports evolution towards a unified architecture and 5G
u High scalability: enabler for efficient resource scalability; meets the requirements of next-generation heterogeneous access
u Evolutionary: combining licensed & unlicensed spectrum for maximum utilization of radio assets
u 5G readiness: a foundation for the next leap of spectral efficiency, peak rates & latency optimizations
u Business agility: enables new services and business opportunities & agility to run shorter software and innovation cycles
MTI Radiocomp
u Primary focus: research and development of Remote Radio Heads (RRHs) for LTE, WCDMA, WiMAX and towards 5G
u Global pioneer in the development and manufacturing of software-configurable and remotely tunable radio head systems
u One of the world’s leading competence centers for CPRI, OBSAI, and JESD204A/B IPC controllers, core development, testing, and systems integration.
u Research on 5G networks, including C-RAN and fronthaul
u Danish site of MTI Mobile
u http://www.mti-mobile.com/
PART I: INTRO TO C-RAN
Traditional Base Stations
u Traditionally, all processing is done locally at the base station
ª Power Amplifier (PA)
ª RF
ª Baseband
ª Control Plane
ª Transport
[1]. A. Checko et al, “Cloud RAN for Mobile Networks – A Technology Overview,” IEEE Comm. Surveys & Tutorials, First Quarter 2015.
RF cabling (co-axial)
Separating the RRH from the BBU
u The modern trend is to separate the RF and baseband functionality
ª Remote Radio Head (RRH)
• RF and antenna (often integrated)
• Analog front-end
• Power amplifier
ª Baseband Unit (BBU)
• Baseband processing
• Implemented digitally
optical cable
The RRH / BBU Functional Split
Example RRH Systems
u Alcatel-Lucent (now Nokia) Light Radio
http://www.lgsinnovations.com/solutions/broadband-wireless-network-light-radio
Example RRH Systems
Telecom Italia Demo / MTI Radiocomp Demo
CPRI
u Common Public Radio Interface
u Features:
ª Constant bit-rate up to 12 Gbps
ª Transmission of I/Q samples
u Maximum distance between RRH and BBU
ª 40 km
ª 0.1 ms latency (speed of light in a fiber) and no repeater needed
u Other protocols for connecting RRH and BBU
ª Open Base Station Architecture Initiative (OBSAI)
ª Open Radio equipment Interface (ORI)
Required Rates for a CPRI-Based Fronthaul
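The often-quoted 2.5 Gbps CPRI rate for a 20 MHz LTE antenna-carrier can be reproduced from the line-rate arithmetic: twice the sample rate (separate I and Q) times bits per sample times antennas, inflated by the CPRI control-word (16/15) and 8b/10b line-coding (10/8) overheads. A minimal sketch, with typical parameter choices (15-bit samples, 8b/10b coding) taken as assumptions:

```python
def cpri_rate_bps(sample_rate_hz, bits_per_sample, num_antennas,
                  control_overhead=16 / 15, line_coding=10 / 8):
    """Approximate CPRI line rate for one fronthaul link.

    The factor of 2 covers separate I and Q samples; 16/15 covers the
    CPRI control word (one control word per 15 data words); 10/8 covers
    8b/10b line coding.
    """
    return (2 * sample_rate_hz * bits_per_sample * num_antennas
            * control_overhead * line_coding)

# 20 MHz LTE carrier: 30.72 Msps, 15-bit I/Q samples, 2 antennas
print(cpri_rate_bps(30.72e6, 15, 2) / 1e9)  # ~2.46 Gbps
```

Doubling the antennas or the bandwidth doubles the rate, which is why constant-bit-rate CPRI fronthaul scales so poorly with MIMO order and carrier count.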
Variability of Processing Load
u Each BBU has its own load
ª Depends on traffic in the cell
ª As a result, the load on the different BBUs is highly variable
The (Virtual) Baseband Hotel
u Several Baseband Units can be co-located
ª Creates a pool of BBUs, or BBU Hotel
ª BBUs could still be physically separated
ª However, combining BBUs into a shared resource provides a statistical multiplexing gain
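The statistical multiplexing gain can be illustrated with a toy simulation: provisioning each BBU for its own high-percentile load costs much more total capacity than provisioning one pool for the high-percentile aggregate, because independent per-cell peaks rarely coincide. The Gaussian load model and every parameter below are assumptions for illustration, not data from the talk:

```python
import random

def provisioning_comparison(num_cells=20, mean=10.0, sd=5.0,
                            q=0.99, n=20000, seed=1):
    """Capacity needed when every cell is provisioned for its own
    q-quantile load, vs. provisioning one pool for the q-quantile of
    the aggregate load across all cells."""
    rng = random.Random(seed)
    samples = [[max(0.0, rng.gauss(mean, sd)) for _ in range(num_cells)]
               for _ in range(n)]

    def quantile(values, q):
        values = sorted(values)
        return values[int(q * (len(values) - 1))]

    per_cell_total = sum(quantile([s[c] for s in samples], q)
                         for c in range(num_cells))
    pooled = quantile([sum(s) for s in samples], q)
    return per_cell_total, pooled

per_cell_total, pooled = provisioning_comparison()
print(per_cell_total / pooled)  # > 1: the statistical multiplexing gain
```

The gain grows with the number of cells pooled and with how uncorrelated their traffic is, which is exactly the tidal-traffic argument made later in the talk.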
Fronthauls and Backhauls
u The fronthaul connects the RRHs to the BBU Pool
u The backhaul connects the Pool to the network
C-RAN
u Aggressive use of virtual BBU pools is known as C-RAN
u RAN = Radio Access Network
u C = ?
ª Cloud
ª Central
ª Collaborative
ª Cooperative
ª Clean (i.e., Green)
[2]. China Mobile Research Institute, “C-RAN: The Road Towards Green RAN,” White Paper, 2010.
Benefits of C-RAN
u Centralized processing exploits statistical multiplexing gain
ª Processing resources provisioned for the average across the cloud group, not the peak requirement of any one cell
ª Adaptability to non-uniform and time-varying traffic
u Centralized management
ª Easier HW/SW upgrades (and they are decoupled)
ª Support for multiple standards
ª Facilitates self-organizing networks (SONs)
u Facilitates cooperative processing
ª CoMP and eICIC
ª Easier handoffs
u Other applications and services can be integrated into the network edge
Perceived Benefits
u Survey of Operator Motivations for C-RAN
ª 86%: Cost savings
ª 64%: Better interference management
ª 57%: More effective traffic management
ª 57%: Agility
ª 41%: Low power consumption
Statistical multiplexing gain (processing)
u Exploitation of temporal and spatial traffic fluctuations
u Efficiently use available resources, scaling resources according to needs (resource pooling, elasticity)
Customization benefits (flexibility)
u Optimisation based on purpose, deployment, …
u Using software implementation rather than configuration (SON)
u Flexible software assignment over time and space
Performance improvement (cooperation)
u PHY processing approaches: Uplink
ª Local processing in BSs
ª Distributed processing among BSs
ª Central processing
ª Hybrid local / distributed / central processing
[Figure: uplink processing chains at BS1 and BS2 (RF + A/D, FFT, RE demap, detection (DET), decoding (DEC), MAC/RLC/RRC) under local, distributed, central, and hybrid placements, with X2 links between BSs and a RAN-CN connection to the C-RAN]
Challenges of C-RAN
u Requires significant infrastructure for fronthaul
ª High throughput requirements
ª Can be alleviated by data compression techniques
u Timing pressures
ª Data must be quickly transmitted over the fronthaul
ª Processing must be done in real time and within a hard deadline
ª Characterizing the performance of a system where missing the deadline is oftentimes more limiting than the degradation of the channel itself (“computational” outage)
u Virtualization technologies
ª Need to efficiently share the computing resources in real time
C-RAN vs. DAS
u C-RAN technology is reminiscent of Distributed Antenna Systems (DAS)
ª DAS systems also connect multiple antennas to a centralized BBU.
ª Earlier DAS systems used the antennas to service a single cell, while in a C-RAN system, each antenna placement corresponds to a different logical cell.
ª More advanced DAS systems used antennas to service different sectors.
ª C-RAN can be considered an evolution of DAS, whereby the antennas serve different cells rather than merely different sectors, and optical interconnects are used between RRHs and BBUs.
[3]. A. A. M. Saleh, A. J. Rustako and R. S. Roman, “Distributed Antennas for Indoor Radio Communications,” IEEE Transactions on Commun., vol. 35, pp. 1245-1251, Dec. 1987.
Tidal Usage Patterns
u Daily load depends on the base station location
u Peak traffic can be more than 10X that of off-peak traffic
[Figures: DL data vs. time of day for Lower Manhattan and Ridgewood (traffic in New York, from MIT/Ericsson); load vs. time of day for office and residential sites (data from China Mobile)]
Sources: www.manycities.org, and [2] China Mobile Research Institute, “C-RAN: The Road Towards Green RAN,” White Paper, 2010.
Tidal Usage Patterns
u Example showing the migration of users at two different times.
u Statistical multiplexing gain is enhanced by joining cells with disparate traffic patterns into the same cloud group
[Figure: user locations at two times of day over a 30 km × 30 km area]
Switching off RRHs: Greener Networks
u Under-utilized RRHs can be switched off, thereby saving power
u This creates larger cells
u Particularly effective for dense deployments
[Figure: RRH coverage maps over a 30 km × 30 km area before and after switching off under-utilized RRHs]
Network Upgrades
u Upgrades can be done for two purposes
ª Increasing coverage
• Add more RRHs to the existing network
• Need to also add the corresponding fronthaul link
• RRHs are small and compact, since they have no baseband processing
ª Increasing capacity
• Add more BBUs to the pool
• If virtualized, translates to adding more computational resources
Upgrading and Maintaining BBUs
u A failed compute server is easily swapped out
u Software is easily upgraded when BBU hardware is centralized
ª Very little maintenance needed at the remote sites
u Compute hardware and software upgrades are decoupled
ª Different vendors for each?
u Additional comm standards supported through software update
ª New RRHs typically only needed for new RF bands
The Cloud in C-RAN Is Different from the Cloud in General
BBU Hardware Architectures
u Digital Signal Processor (DSP) vs. General-Purpose Processor (GPP)
u Tradeoff between using commodity GPP hardware vs. special-purpose DSP hardware.
LTE on a GPP?
u Intel has developed the C-RAN LTE Base Station Architecture
ª A complete LTE base station with Intel Core i7 processor and software
ª Parallelizes baseband processing across multiple cores
ª No need for special-purpose hardware
ª Leverages unused CPU cycles to run other network applications
http://www.intel.com/content/www/us/en/communications/communications-c-ran-solution-video.html
RRH / BBU Functional Split: Partial Centralization
u Rather than doing all L1 processing in the BBU Pool, some of the L1 processing can be done at the RRH.
u Reduces the load on the fronthaul.
u Increases computational load at the RRH.
Source: www.ict-ijoin.eu
RRH / BBU Functional Split: Partial Centralization
[Figure: in the “conventional” LTE implementation, RF, PHY, MAC, RRM, network management, and admission/congestion control all execute at the BS; in the C-RAN implementation (BB pooling), only RF executes at the RRH and the rest executes centrally. A flexible functional split allows intermediate options, e.g., partly centralised (inter-cell) RRM or joint decoding]
Source: www.ict-ijoin.eu
Tradeoffs
u Protocol stack, from the radio access point up to the EPC: RF processing; A/D conversion / pre-processing; Lower PHY (incl. DET); Upper PHY (incl. FEC); MAC (incl. MUX, HARQ); RLC (RB buffers, ARQ); PDCP (ciphering); RRC/C-plane; with scheduling shown spanning several layers.
u Candidate split points: Split A (lowest in the stack, most centralized) through Split B and Split C up to Split D (highest, least centralized).
u As the level of centralization increases, centralization and coordination gains improve (better), but backhaul and implementation requirements grow (worse): achievable gains come with efforts and cost.
Source: www.ict-ijoin.eu
Fronthaul technologies
[Figure: fronthaul technologies mapped against the throughput (Mbps, log scale) and per-hop RTT latency (s, log scale) requirements of Splits A-D: fiber (CWDM/dark), metro-optical, mmWave, μ-wave, sub-6 GHz, PON, and xDSL]
Fronthaul Compression
u To alleviate the demands on the fronthaul, compression can be used
u Standard compression techniques have limited success
ª Reduced sample rate and compressive sampling
ª Non-uniform and vector quantization
u Subcarrier compression
ª Requires FFT/IFFT at the RRH
ª Digital automatic gain control
u Special attention should be paid to the random access channel (RACH)
u Partial centralization can be considered a type of compression
Survey of Fronthaul Compression Techniques
Fronthaul optimization
u Compression: up to 3:1 or 4:1
u With load-dependent functional splits (Split B and above), higher bandwidth savings can be achieved on the fronthaul
ª Up to reaching the user-data level: compression of 2.5 Gbps / 150 Mbps ≈ 16:1
u Variable-bit-rate traffic fits packet-based transport solutions
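The quoted figure is simple arithmetic worth checking: moving the split all the way to the user-data level turns the raw 2.5 Gbps CPRI stream into roughly the 150 Mbps peak user rate.

```python
cpri_rate_mbps = 2500.0   # Split A: raw I/Q over CPRI (2.5 Gbps)
user_rate_mbps = 150.0    # user-data level at the highest split
ratio = cpri_rate_mbps / user_rate_mbps
print(f"{ratio:.1f}:1")   # 16.7:1, quoted on the slide as 16:1
```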
Shared Ethernet for cost-saving and flexibility
ü Widely deployed (reuse!)
ü Dedicated links
ü Shared links
ü Aggregation
ü Multiplexing gains on BBU and links
ü Switching
! Fronthaul cost savings vs. problems with delays and synchronization
! Synchronous CPRI vs. asynchronous Ethernet
! Data delay: 100-400 μs, ≈ constant
[Figure: two timing options: sync delivered together with data over native CPRI from the BBU pool (GPS/1588 slave) to the RRHs, vs. an IEEE 1588 master at the BBU pool and 1588 slaves at CPRI-to-Ethernet gateways across an Ethernet network of switches]
Timing in fronthaul
u Timing is really important
ª Frequency of transmission
ª Handover, coding, cooperative techniques, positioning
u Requirements (4G)
ª Frequency error, LTE-A TDD/FDD: ±50 ppb
ª Phase error, LTE-A with eICIC/CoMP: ±1.5-5 μs; MIMO: 65 ns; positioning: ±30 ns
u Current solutions for timing distribution
ª GPS
ª PHY-layer clock: Synchronous Ethernet (SyncE)
ª Packet-based timing
• IEEE 1588v2 (PTP)
ª Multiple
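IEEE 1588v2 (PTP), listed above, recovers the slave's clock offset from the four timestamps of one Sync/Delay_Req exchange, assuming the forward and reverse path delays are equal. A sketch of just that arithmetic:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Slave clock offset and mean path delay from one PTP exchange.

    t1: master sends Sync; t2: slave receives it; t3: slave sends
    Delay_Req; t4: master receives it. A symmetric path is assumed.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: slave clock 5 us ahead of the master, 2 us one-way delay.
# Master sends at 0; slave receives at true time 2 (reads 7); slave
# sends at its own reading 10 (true time 5); master receives at 7.
print(ptp_offset_and_delay(0.0, 7.0, 10.0, 7.0))  # (5.0, 2.0)
```

Path asymmetry breaks the symmetric-delay assumption directly into a phase error, which is why the fronthaul profile work (802.1CM, queueing, preemption) discussed on the next slides matters for meeting the ns-level budgets above.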
How to reduce queueing delays on a shared fronthaul?
u Preemption (switch upgrade required)
u Source scheduling
[Figure: without source scheduling, flow B's delay behind flow A at the BBU pool is non-deterministic; with scheduling, B is delayed initially but its delay is deterministic. With preemption, a high-priority frame A interrupts a low-priority frame B on the output link]
SDO activities on fronthaul
u IEEE 1904.3
ª Encapsulation and mapping of IQ data over Ethernet
ª Fields need to be compatible with the sync solution
u IEEE 1914 Next Generation Fronthaul Interface
ª Architectures for fronthaul transport
ª Requirements for the fronthaul network
ª Functional split
u IEEE 802.1, Time Sensitive Networking task force
ª Frame preemption (802.1Qbu)
ª Scheduled traffic (802.1Qbv)
ª Time-Sensitive Networking for Fronthaul (profile definition, 802.1CM)
u IEEE 1588v2 (-2008)
ª Packet-based solution for time reference delivery
ª A working group preparing a new version of the standard, with an expected completion date of 31 December 2017
→ cooperation between activities
u Study item in China Communications Standards Association (CCSA)
Multiplexing gains (transport network)
[Figure: number of supported iSCs vs. backhaul data rate (Mbps) for Split A.1, Split B.1, and Split C, each with and without multiplexing]
Sharing BBU Pools By Multiple Operators
u The BBU pool is not necessarily owned and maintained by the network operator.
ª A large metropolitan pool could handle baseband processing for multiple network operators.
ª Each operator would be a different tenant in the BB hotel.
ª BBU hosting is a new business model (RAN as a Service).
Enabling Tighter Inter-Cell Cooperation
u Inter-cell cooperation is standardized in LTE-A
ª eICIC: (enhanced) Inter-Cell Interference Coordination (R10)
ª CoMP: Coordinated Multi-Point (R11)
u These features, as well as handover, require inter-cell communications
ª In a traditional architecture, done using the X2 interface
u In a C-RAN environment, inter-cell coordination can be done without any additional latency
ª Opportunities for tighter coordination
PART II: FROM C-RAN TO VIRTUALIZED MOBILE NETWORKS
The Cloudification of the Network
u Cloud service models: SaaS, PaaS, IaaS
u Cloud characteristics: on-demand, broad access, pooling, elasticity, measured service
u In the radio access network: C-RAN, RAN sharing, SDN, SDR
u In the core network: NFV, soft-EPC, SDN
u Virtualisation / “cloudification” brings information technology into communication technology, across the mobile network: radio access, backhaul, and core
Cloud Technologies
u Cloud computing will have a telecommunication-specialized branch
ª Telco-grade clouds, telco over cloud, …
ª Self-healing systems
ª Carrier-grade operating systems and hypervisors
ª High-availability clouds
ª Quick-response control planes
u Container virtualization technology (virtualization of the application layer)
u HW-assisted virtualization
u Low-level, OS/hypervisor-internal enhancements to overcome hypervisor limitations (e.g., interrupt latency optimization)
Integration in 3GPP LTE Architecture
Source: www.ict-ijoin.eu
[Figure: left, a conventional E-UTRAN with eNBs connected by X2 and to MME/S-GW by S1; right, RRHs over a transport network to a virtual eNB in a CRAN data center, with a network controller, X2 to a conventional eNB, and S1 to the MME/S-GW. Legend: physical link, logical interface]
Integration in 5G Architecture
Source: 5gnorma.5g-ppp.eu
[Figure: edge and network cloud nodes, each with compute/storage/networking resources and a virtualization layer (hypervisor) hosting VNFs; physical NFs alongside; VIM agents and MANO-F management & orchestration; services composed from libraries of network functions and network services; control and data planes]
Software Resources
u Software libraries
ª functions
ª services
u Hardware resources
ª computing, storage, networking
ª virtualized and physical NF
u Edge and network clouds operate according to ETSI NFV principles
u Physical network functions
ª SW and HW cannot be decoupled
ª SW highly embedded in the HW
Integration in 5G Architecture
Source: 5gnorma.5g-ppp.eu
[Figure: deployment example mapping VNFs and MANO-F components to locations: bare-metal edge clouds and a central cloud, interconnected by SDN-enabled networks, with VIM agents, NF managers, an orchestrator, and SDM controllers]
u Concrete illustration of network deployment and function placement
u Shows mapping of functions to geographical locations
u May be used to display per-instance / per-location properties
ª functions allocated
ª space demand
ª link capacity and latency
Functional Split Options
[Figure: the protocol stack from RF processing up to RRC/C-plane, with candidate Splits A-D, and the RRM schemes enabled at each level: CoMP, INP, joint detection, joint decoding, ICIC, scheduling, load balancing, handover (HO), admission control (AC)]
Decoupled Uplink and Downlink
u Uplink and downlink don't have to be serviced by the same RRH.
ª Uplink serviced by the closest RRH
ª Downlink serviced by the link with the best SINR
ª Especially important in HetNet deployments
[4] J. Andrews, “Seven Ways that HetNets Are a Cellular Paradigm Shift,” IEEE Comm. Mag., 2013
Decoupling the Control and Data Planes
u Use a high-powered microwave RRH for control information
u Use a lower-powered mmWave RRH for data transfer
u Device-Centric Networks (“Cell-Less”)
Source: www.ict-ijoin.eu
Exemplary deployments: Stadium
Physical Architecture Description:
u Architecture based on a number of Rings equipped with several RAPs. Each Ring has a BH interface toward the EPC.
u All the Rings and the macro eNB are coordinated by the EMS and CRAN.
[Figure: three stadium rings placed on the roof top, connected via iTN and J1/J2 interfaces to the RANaaS, macro eNB, and the EPC (MME, S-GW, P-GW); small cells and the macro eNB serve the spectators]
Use Case Description:
u Typical stadium: wide area (on the order of 50,000 m2)
u Average number of spectators: 40,000
• Very high density of RAPs, connected through optical fibre in multiple rings
• Local processing capabilities are provided in order to allow breaking out local traffic, thereby reducing the traffic towards the core network
General Parameters:
ª UE density: ~1 UE/m2
ª Number of small cells: 50-300
ª Experienced UE DL+UL throughput: 1.5-10 Mbps
ª Traffic density: 0.5-10 Mbps/m2
Source: www.ict-ijoin.eu
Exemplary deployments: Plaza
Physical Architecture Description:
u Random small cell deployment
u Random user deployment
u Static/nomadic users
u Heterogeneous backhaul
Use Case Description:
u Exemplary typical town square of 50,000 m2
u Number of users: from 1000 (off-peak hours) to 5000 (busy hours)
u Overall throughput in the square: from 1.5 Gbps to 50 Gbps (assuming 5% active UEs with 1.5-10 Mbps total throughput demand)
General Parameters:
ª Number of small cells: 4-10 (sparse to dense deployment)
ª Number of UEs: 15-30 (lightly to highly loaded scenarios)
ª Radius for small cell dropping in a cluster: 50 m, random dropping (3GPP TR 36.872)
ª Radius for UE dropping in a cluster: 70 m, random dropping (3GPP TR 36.872)
ª Minimum distances (3GPP TR 36.872): RAP-RAP 20 m; UE-RAP 5 m; macro eNB-iSC cluster center 105 m
ª Backhaul capacity / latency: ~50-100 Mbps with 1-10 ms, or 10 Gbps with 5 μs
[Figure: iSCs, an eNodeB, a local RANaaS, the S-GW, and the RANaaS platform connected via S1, X2, J1, and the iTN]
Source: www.ict-ijoin.eu
Exemplary deployments: Inner-city
Physical Architecture Description:
u Regular small cell deployment
u Random user deployment with slow/high mobility
u Heterogeneous backhaul
Use Case Description:
u Existence of hotspots and high-demand areas, or coverage holes of the macro-cell layer
u Throughput: 10-15 Mbps (busy hour), assuming 10 MHz bandwidth
u Layout: regular small cell deployment in a hexagonal grid covering 1 km2
• Urban scenario where high capacity is continuously provided through RAPs
• Challenging scenario, as each RAP needs to be connected to the backhaul network
• The number of aggregation points and CRAN data center instances must be optimised
General Parameters:
ª Number of small cells: 19
ª Number of UEs: 1-3 per small cell (avg.)
ª Small cell dropping: regular, on a hexagonal grid
ª ISD: 50 m
ª UE dropping in a cluster: random
ª Minimum distance UE-RAP: 5 m
ª Backhaul capacity / latency: ~50-100 Mbps with 1-10 ms, or 10 Gbps with 5 μs
Source: www.ict-ijoin.eu
Exemplary deployments: Indoor
Physical Architecture Description:
u Regular small cell deployment
u Random user deployment
u Nomadic users
u Wireline backhaul (optical fibre and ADSL)
Use Case Description:
u Large number of active users (200-500) in one hall, with high communication activity
u System throughput: from 200 Mbps to 5 Gbps (assuming throughput demand of 1-10 Mbps)
[Figure: indoor deployment with iSCs and iTNs connected via J1/J3 and S1/X2 to the iNC, RANaaS/iLGW, and MME/S-GW; wired and wireless physical links]
General Parameters:
ª Number of areas per floor: 16; 1 or 2 floors
ª Floor height: 6 m
ª Area size: 15 m × 15 m
ª Hall size: 120 m × 20 m
ª Number of small cells: 2 (sparse) / 4 (dense) per floor
ª Small cell dropping: regular
ª Number of UEs: 10 per small cell (sparse); 5/10 per small cell (dense)
ª UE dropping: random
ª ISD: 60 m (sparse) / 30 m (dense)
ª Minimum distance UE-RAP: 3 m
ª Backhaul capacity / latency: >100 Mbps per RAP-EPC link; <1, 10, and 50 ms
Enabling Higher-Layer Applications
u With C-RAN, applications and services can move from the core network to the network edge
ª RAN caching
u Examples
ª Voice-over-LTE
ª LTE-Broadcast
ª Content caching
[5] X. Wang et al, “Cache in the Air: Exploiting Content Caching and Delivery Techniques for 5G,” IEEE Comm. Mag., 2014.
C-RAN Plays Well with Mobile Edge Computing
u Hotspots: zonal applications
• E.g., special services in stadiums, exhibitions, malls, enterprise campuses
• Deployed in combination with small cell and macro BTS (RRH, DAS)
u Cities: city applications
• E.g., IoT applications deployed as part of Smart City initiatives, or services for city residents and visitors
• Deployed at metro aggregation sites and baseband hotels
u Network-wide: network-wide applications
• E.g., essential network functions, and ubiquitous services that require a consistent experience/performance
• Deployment in combination with the Radio Cloud, or specific deployment patterns (e.g., Car2X along roads)
LTE Broadcast and Content Caching
u LTE-Broadcast
ª aka enhanced Multimedia Broadcast Multicast Services (eMBMS), using a Multicast-Broadcast Single Frequency Network (MBSFN)
ª Special LTE subframes reserved for delivery of broadcast data
ª The same signal is transmitted from multiple base stations in an eMBMS area
ª C-RAN facilitates the transmission of identical signals
[6] S. Talarico and M.C. Valenti, “An accurate and efficient analysis of an MBSFN network,” in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), (Florence, Italy), May 2014.
Summary of C-RAN Deployment Scenarios
PART III: LTE CASE STUDY
LTE Case Study
LTE Releases
[7]. E. Dahlman, S. Parkvall, and J. Skold, 4G LTE / LTE-Advanced for Mobile Broadband, second edition 2014.
LTE Protocol Stack
LTE Time-Frequency Grid
[Figure: LTE time-frequency grid: the base station sends a downlink data stream to the UE over successive 1 ms subframes; each Resource Block (RB) spans 12 subcarriers and is made up of Resource Elements (REs)]
LTE Frame Structure
Downlink Arrangement
u Downlink subframe divided into control and data regions.
ª Control region is the first 1 to 3 OFDM symbols of the subframe.
ª Physical Downlink Shared Channel (PDSCH) is in the region following the control region.
Baseband Processing for the Downlink
Code Block Segmentation
u Maximum code block size is 6144 bits
ª Maximum TB size is 75,376 bits.
ª May need to segment into as many as 13 code blocks.
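The 13-code-block figure follows from the segmentation rule in 3GPP TS 36.212: a 24-bit CRC is appended to the transport block, and if the result exceeds 6144 bits it is split into segments that each carry their own 24-bit CRC. A sketch of just the block count (filler-bit and per-segment size selection from the interleaver table are omitted):

```python
import math

Z = 6144  # maximum turbo code block size in bits

def num_code_blocks(tb_size):
    """Code blocks after segmentation (3GPP TS 36.212 counting rule).

    tb_size excludes the 24-bit transport-block CRC, which is appended
    first; if segmentation is needed, each segment carries its own
    24-bit CRC, leaving Z - 24 information bits per block.
    """
    b = tb_size + 24
    if b <= Z:
        return 1
    return math.ceil(b / (Z - 24))

print(num_code_blocks(75376))  # 13, for the maximum LTE TB size
print(num_code_blocks(6120))   # 1, the largest unsegmented TB
```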
Turbo Coding
Rate Matching
Adaptive Modulation and Coding
u The base station selects the highest Modulation and Coding Scheme (MCS) that can be supported at a 10% BLER
MCS Table
u The selected MCS is one of 27 options
u Indicated by a 5-bit quantity
u Here,
ª Imcs = MCS index
ª Qm = modulation order
ª C = number of code blocks
ª Kr = number of data bits per block
ª η = spectral efficiency
ª An allocation of 45 RBs is assumed
MCS Adaptation
[Figure: a cloud RANaaS hosting a global scheduler, connected over the fronthaul interface to RAPs hosting local schedulers and serving UEs; the eNB's UL and DL information flows pass through PHY at the RAP and PHY+/MAC (HARQ, multiplexing, priority handling, FEC, modulation, etc.), RLC, and PDCP]
Scheduling / LA
u Limited capacity and latency affect link adaptation conditions
ª At the CRAN: outdated but global channel state information (CSI)
ª At the RAP: recent but local CSI
Hybrid-ARQ
Multiple Parallel HARQ Processes
u 8 stop-and-wait processes run in parallel
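The number 8 is not arbitrary: with 1 ms subframes and an 8 ms round trip (4 ms until the ACK/NACK plus 4 ms until the retransmission opportunity), eight stop-and-wait processes in round robin keep every subframe usable. A sketch of the synchronous (uplink-style) process mapping:

```python
HARQ_RTT_SUBFRAMES = 8  # 1 ms subframes, 8 ms HARQ round trip

def harq_process_id(subframe):
    """Synchronous round-robin HARQ: which process serves a subframe."""
    return subframe % HARQ_RTT_SUBFRAMES

# Every subframe is served, and a process is reused only once its
# 8 ms round trip (ACK/NACK plus retransmission timing) has elapsed.
schedule = [harq_process_id(sf) for sf in range(16)]
print(schedule)  # [0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7]
```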
Baseband Processing for the Uplink
HARQ Timing
u Downlink
ª HARQ retransmissions are asynchronous
ª A retransmission can occur at any time
ª Must be flagged with a 3-bit HARQ process ID
u Uplink
ª Retransmissions are synchronous
ª Processes are serviced in round-robin fashion: sent once every 8 ms
ª Transmissions flagged with a 1-bit new-data indicator
ª ACK/NACK indicator received within 4 ms
u Key uplink timing constraint:
ª Uplink must be processed and the downlink ACK sent within 4 ms.
HARQ Timing
[Figure: HARQ timing breakdown across UE, RAP, and CRAN: UL data crosses the air interface and the backhaul (BH latency each way), RANaaS processing and frame building occur at the CRAN, and the DL ACK/NACK is expected in subframe n+4; the HARQ RTT timer spans the full exchange]
Source: http://www.ict-ijoin.eu/
RAN Timing Constraints

Layer | Timer | Purpose | Min | Max | Default
PHY | Subframe | Physical subframe length | 1 ms (fixed) | |
PHY | Frame | Physical frame length | 10 ms (fixed) | |
MAC | HARQ RTT Timer | When an HARQ process is available | In FDD: 8 ms; in TDD: FFS | |
RLC | t-PollRetransmit | For AM RLC, poll for retransmission at tx side (if no status report received) | 5 ms | 500 ms | 45 ms
RLC | t-Reordering | For UM/AM RLC, RLC PDU loss detection at rx side | 0 ms | 200 ms | 35 ms
RLC | t-StatusProhibit | Prohibit generation of a status report at rx side | 0 ms | 500 ms | 0 ms
PDCP | discardTimer | At UE side in UL; starts at reception of a PDCP SDU from the upper layer; discard the PDCP SDU/PDU on expiration or successful transmission | 50 ms | Infinity |
RRC | T300 | RRCConnectionRequest; if it expires, reset MAC & signal RRC connection failure | 100 ms | 2 s |
RRC | T301 | RRCConnectionReestablishmentRequest; if it expires, go to RRC_IDLE | 100 ms | 2 s |
RRC | T304 | RRCConnectionReconfiguration | 50 ms / 100 ms | 2 s / 8 s |
RRC | T310 | Detection of physical problem (successive out-of-sync from lower layers) | 0 ms | 2 s | 1 s
RRC | T311 | RRC connection reestablishment (E-UTRA or another RAT) | 1 s | 3 s | 1 s
Turbo Decoding is Iterative
u On the uplink, around 50% of the compute load is due to turbo decoding.
u Because a CRC is used to halt decoding, the load is directly proportional to the number of iterations.
u Operating with a higher SNR margin reduces the number of iterations.
[8]. M.C. Valenti, S. Talarico, and P. Rost, “The role of computational outage in dense cloud-based centralized radio access networks,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), (Austin, TX), Dec. 2014.
Computational Load for Turbo Decoding
u The load to decode a given transport block is:
C(γ) = Σ_{r=1}^{C} K_r · I_r   [bit-iterations]
u Where:
ª The load depends on the SINR γ and the selected MCS
ª C is the number of code blocks after segmentation
ª K_r is the number of information bits in the r-th code block
ª I_r is the number of decoding iterations for the r-th code block
u Load is in units of bit-iterations
ª The relation between bit-iterations and CPU cycles is implementation dependent, but fixed for a given architecture
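The load expression is a direct sum over code blocks; in code it is one line. The block sizes and iteration counts below are illustrative values, not measurements from [8]:

```python
def decoding_load(block_sizes, iterations):
    """Turbo-decoding load of one transport block, in bit-iterations:
    C(gamma) = sum over code blocks r of K_r * I_r."""
    return sum(k * i for k, i in zip(block_sizes, iterations))

# Illustrative: three 6120-bit code blocks needing 2, 3, and 8
# decoding iterations (assumed values for the example)
print(decoding_load([6120, 6120, 6120], [2, 3, 8]))  # 79560
```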
Computational Outage
u If a transport block is not decoded before the deadline, then a computational outage occurs
u From a systems perspective, a computational outage is no different than any other kind of outage (e.g., due to fading or interference)
u For a conventional (locally processed / non-pooled) system, a computational outage occurs when C(γ) > Cmax, where Cmax is the maximum number of bit-iterations that can be supported within the deadline
u The computational outage probability is the probability of this event
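The outage probability P[C(γ) > Cmax] can be estimated by Monte Carlo: draw a fading state, map the resulting SNR to an iteration count, and check the bit-iteration budget. The SNR-to-iterations mapping below is an invented placeholder with the right qualitative behavior (fewer iterations at higher SNR), not the empirical decoder model of [8]:

```python
import math
import random

def iterations_needed(snr_db, base=8):
    """Placeholder decoder model: iterations fall as SNR rises,
    floored at 1 and capped at a practical maximum of 16."""
    return max(1, min(16, round(base - 0.5 * snr_db)))

def computational_outage_prob(mean_snr_db, k_bits, c_max,
                              trials=20000, seed=3):
    """Fraction of Rayleigh-fading trials whose bit-iteration load
    k_bits * I exceeds the budget c_max."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        gain = rng.expovariate(1.0)  # exponential power gain (Rayleigh)
        snr_db = mean_snr_db + 10 * math.log10(max(gain, 1e-12))
        if k_bits * iterations_needed(snr_db) > c_max:
            misses += 1
    return misses / trials

# Loosening the budget drives the computational outage rate to zero
print(computational_outage_prob(10.0, 6120, 6120 * 4))
print(computational_outage_prob(10.0, 6120, 6120 * 16))  # 0.0
```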
A Few Details About Computational Outage
u The deadline must account for
ª The time from receiving the uplink transmission until the HARQ ACK/NACK signal must be sent: 3 ms
ª Delays in the fronthaul: 0.1 ms each way
ª Other processing delays
u Cmax should characterize
ª The amount of computing resources available for turbo decoding, which has the most variability
ª Other operations (typically of fixed complexity) should not be incorporated into Cmax
86
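A back-of-envelope budget for the decode deadline, using the figures quoted on this slide; the allowance for the remaining fixed-complexity processing is an assumed placeholder, not a number from the slide.

```python
# Time available for turbo decoding within the HARQ deadline.
harq_deadline_ms = 3.0      # uplink receipt until ACK/NACK must be sent
fronthaul_ms = 2 * 0.1      # 0.1 ms each way
other_processing_ms = 1.0   # FFT, demodulation, etc. (assumption)

decode_budget_ms = harq_deadline_ms - fronthaul_ms - other_processing_ms
# decode_budget_ms is about 1.8 ms; Cmax is whatever bit-iteration
# throughput the servers can deliver within that window.
```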
Scheduling Policy Influences the Load
u MRS = max-rate scheduling
 ª Targets 10^-1 BLER after 8 iterations
u CAS = computationally aware scheduling
 ª Targets 10^-1 BLER after just 2 iterations
87
Conservative Scheduling Helps if Compute Limited
u Comparison
 ª Unlimited compute power
 ª Compute limited
u Channel model
 ª Block Rayleigh fading
 ª Perfect T-CSI
 ª No interference
u Outages can be due to channel or compute effects
u Outage probability with CAS is much lower when compute limited
88
Influence on Throughput
u Throughput is the rate of correct data transfer
u Even though CAS has a lower peak rate, its throughput is better due to the reduced occurrence of computational outage
89
Influence on the Required Complexity
u Complexity required to correctly decode a transmission
u MRS can be computationally costly because each retransmission must be processed
90
Computational Outage in a C-RAN Environment
u Computational resources are shared by the pool
u Let Ncloud be the number of RRHs serviced by the pool
u A computational outage occurs when
   Σ_{i=1}^{Ncloud} C(γ_i) > Ncloud · Cmax
u where γ_i is the SINR at the ith RRH and Cmax is the available computing per RRH
u By exploiting the statistical multiplexing gain, it may be possible to reduce Cmax --- but by how much?
91
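The "by how much" question can be probed with a toy Monte Carlo. The exponential load distribution below is an illustrative assumption, not the load model of [8]; the point is only that the pooled constraint (total load vs. Ncloud · Cmax) fails far less often than the per-BBU constraint at the same provisioning.

```python
import random

random.seed(1)
N_CLOUD = 8        # RRHs sharing one BBU pool
TRIALS = 20_000

def per_rrh_load():
    # Toy load model (assumption): mostly light, occasionally heavy
    # when a poor SINR forces many decoder iterations.
    return random.expovariate(1.0)

def outage_rates(c_max_per_rrh):
    local = pooled = 0
    for _ in range(TRIALS):
        loads = [per_rrh_load() for _ in range(N_CLOUD)]
        # Local processing: outage if any single BBU overflows.
        if any(c > c_max_per_rrh for c in loads):
            local += 1
        # Centralized pool: outage only if the total exceeds the pool.
        if sum(loads) > N_CLOUD * c_max_per_rrh:
            pooled += 1
    return local / TRIALS, pooled / TRIALS

local_p, pooled_p = outage_rates(c_max_per_rrh=3.0)
```

Under this toy model the pooled outage probability is orders of magnitude below the local one at the same per-RRH provisioning; that headroom is exactly what C-RAN can trade for a smaller Cmax.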
Role of Interference
u The uplink SINR in a multi-cell network is
   γ_i = g_{i,i} ||X_i − Y_i||^{−α(1−s)} / ( Γ^{−1} + Σ_{j≠i} g_{j,i} ||X_j − Y_i||^{−α} ||X_j − Y_j||^{αs} )
u Where
 ª Y_j is the jth RRH and X_j is the mobile served by it
 ª g_{i,j} is the fading gain between X_i and Y_j
 ª α is the path-loss exponent
 ª s is the partial power-control compensation factor (s = 1 for full PC)
 ª Γ is the SNR at the RRH
92
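The fractional power-control SINR can be transcribed directly into code. The function follows the symbol definitions on this slide; the geometry, gains, and example values are made up for illustration.

```python
import math

ALPHA = 3.7              # path-loss exponent
S = 0.1                  # partial power-control compensation factor
GAMMA = 10 ** (20 / 10)  # SNR at the RRH (20 dB)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def uplink_sinr(i, rrhs, mobiles, g):
    """SINR at RRH Y_i; mobile X_j is served by RRH Y_j and partially
    compensates its path loss toward Y_j with exponent s."""
    signal = g[i][i] * dist(mobiles[i], rrhs[i]) ** (-ALPHA * (1 - S))
    interference = sum(
        g[j][i]
        * dist(mobiles[j], rrhs[i]) ** (-ALPHA)     # path loss to Y_i
        * dist(mobiles[j], rrhs[j]) ** (ALPHA * S)  # X_j's power-control boost
        for j in range(len(rrhs)) if j != i)
    return signal / (1 / GAMMA + interference)
```

With a single cell (no interferers) the expression reduces to the SNR scaled by the partially compensated path gain.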
Local Processing vs. Centralized Processing
u Example scenario
 ª N = 129 base stations (actual locations from the UK)
 ª The Ncloud = 8 in the center are considered
 ª Can be processed centrally (CP) or locally (LP)
u Simulation parameters
 ª Mobile devices placed according to a Poisson Point Process (PPP) with density λ devices per km²
 ª Just one device serviced per cell (TDMA scheduling)
 ª α = 3.7 and s = 0.1
 ª Γ = 20 dB
93
Sum Throughput as a Function of Compute Power
u Fixed density of mobiles λ =0.1 per km2
u Central Processing always outperforms Local Processing
u CAS scheduling better than MRS when compute resources are constrained
94
Effect of Mobile Density
u Centrally processed
u Variable density of mobile devices
u When compute constrained, MRS degrades with increasing user density
95
Considerations for LTE-Advanced
96
Carrier Aggregation
• Can combine up to 5 carriers to achieve 100 MHz of useful bandwidth.
Coordinated Multipoint
• Neighboring base stations simultaneously transmit to the same UE.
HetNets & Relaying
• Better support for small cells.
• Support for relays.
Enhanced Spatial Multiplexing
• Up to 8-layer MIMO on the downlink.
• Up to 4-layer MIMO on the uplink.
BBU Pool
Towards a Theory for Computational Outage
u The complexity of decoding can be modeled statistically
u Similar to modeling the channel statistically
u By using the statistical model, analytical insight can be obtained without resorting to simulation
97
[9] P. Rost, S. Talarico, and M.C. Valenti, “The complexity-rate tradeoff of centralized radio access networks,” IEEE Transactions on Wireless Communications, vol. 14, no. 11, pp. 6164-6176, Nov. 2015.
Outage Complexity
u Outage complexity is the amount of computing power required to achieve a desired computational outage probability
u Analogous to outage capacity
u Useful to plot as a function of the cloud group size Ncloud
u Can be used to rapidly determine the compute power needed
98
Optimal Scheduling
u The scheduling problem can be formulated as
   maximize Σ_k r_k subject to Σ_k C_k ≤ Cserver
   where r_k is the rate of the kth user, C_k is its offered computational load, and Cserver is the total available computing power
u The optimal solution results in a water-filling algorithm
u A heuristic alternative is to simply pick the user with the highest complexity and back off its rate until the complexity constraint is satisfied
99
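The complexity-cutoff heuristic described above can be sketched as follows. For simplicity it assumes a user's offered load scales linearly with its rate (the real mapping goes through the MCS table), so halving the rate halves the load:

```python
def complexity_cutoff_schedule(rates, loads, c_server):
    """Heuristic sketch: while the summed computational load exceeds
    the server budget, halve the rate (and hence, by assumption, the
    load) of the user currently offering the highest load."""
    rates, loads = list(rates), list(loads)
    while sum(loads) > c_server:
        k = max(range(len(loads)), key=lambda i: loads[i])
        rates[k] /= 2
        loads[k] /= 2
    return rates, loads
```

For example, with rates [4, 2], loads [6, 2], and a budget of 5, a single back-off of the first user brings the pool within budget.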
Benefits of Optimal Scheduling
u Three approaches
 ª max-rate scheduling (MRS)
 ª scheduling with complexity cutoff (SCC)
 ª scheduling with water filling (SWF)
u Cellular network ª 129 actual base stations ª λ = 1 device per km2
u Variable Ncloud
100 [10] P. Rost, A. Maeder, M.C. Valenti, and S. Talarico, “Computationally aware sum-rate optimal scheduling for centralized radio access networks,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), (San Diego, CA), Dec. 2015.
Scheduling More Important As Network Densifies
u Fix Ncloud = 10
u Vary the user density
u Optimal scheduling provides 20% higher throughput for highly dense networks
101
The Economics of C-RAN
u The costs of C-RAN are determined by ª Cost per RRH ª Cost per BBU server ª Cost for fronthaul / backhaul (per km)
102 [11] P. Rost, I. Berberana, A. Maeder, H. Paul, V. Suryaprakash, M.C. Valenti, D. Wubben, A. Dekorsy, and G. Fettweis, “Benefits and challenges of virtualization in 5G radio access networks,” IEEE Communications Magazine, vol. 53, no. 12, Communications Standards Supplement, pp. 75-82, Dec. 2015.
Economic Analysis: An Example
103
[Figure: CAPEX in MUSD per km² versus data-center intensity (per km²), comparing D-RAN, full C-RAN, and Cloud-RAN with 8 iterations.]
Industry/research interest
u NGMN u Small Cell Forum u Several EU projects
ª Mobile Cloud Networking (MCN) ª iJOIN ª HARP ª And many others
u Major equipment vendors u Ongoing prototyping and field trials by operators
104
Concluding remarks u C-RAN is coming
ª It can be implemented with current technology ª Trials are underway, products to follow
u C-RAN is beneficial ª Creates opportunities for enhanced collaboration
u C-RAN is green ª Reduces the computational load ª Opportunities to selectively turn off sites
u C-RAN is challenging
 ª Trades benefits on the wireless side for increased demands on the wired fronthaul
105
QUESTIONS?
106
107
Cited References
[1] A. Checko, H.L. Christiansen, Y. Yan, L. Scolari, G. Kardaras, M.S. Berger, and L. Dittmann, “Cloud RAN for Mobile Networks – A Technology Overview,” IEEE Comm. Surveys & Tutorials, vol. 17, no. 1, pp. 405-426, First Quarter 2015.
[2] China Mobile Research Institute, “C-RAN: The Road Towards Green RAN,” White Paper, 2010.
[3] A. A. M. Saleh, A. J. Rustako, and R. S. Roman, “Distributed Antennas for Indoor Radio Communications,” IEEE Transactions on Commun., vol. 35, pp. 1245-1251, Dec. 1987.
[4] J. Andrews, “Seven ways HetNets are a cellular paradigm shift,” IEEE Communications Magazine, 2013.
[5] X. Wang et al., “Cache in the Air: Exploiting Content Caching and Delivery Techniques for 5G,” IEEE Comm. Mag., 2014.
[6] S. Talarico and M.C. Valenti, “An accurate and efficient analysis of an MBSFN network,” in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), (Florence, Italy), May 2014.
[7] H. Paul, B.-S. Shin, D. Wübben, and A. Dekorsy, “In-Network-Processing for Small Cell Cooperation in Dense Networks,” CLEEN 2013, Las Vegas, USA, Sept. 2013.
[8] M.C. Valenti, S. Talarico, and P. Rost, “The role of computational outage in dense cloud-based centralized radio access networks,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), (Austin, TX), Dec. 2014.
[9] P. Rost, S. Talarico, and M.C. Valenti, “The complexity-rate tradeoff of centralized radio access networks,” IEEE Transactions on Wireless Communications, vol. 14, no. 11, pp. 6164-6176, Nov. 2015.
[10] P. Rost, A. Maeder, M.C. Valenti, and S. Talarico, “Computationally aware sum-rate optimal scheduling for centralized radio access networks,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), (San Diego, CA), Dec. 2015.
[11] P. Rost, I. Berberana, A. Maeder, H. Paul, V. Suryaprakash, M.C. Valenti, D. Wubben, A. Dekorsy, and G. Fettweis, “Benefits and challenges of virtualization in 5G radio access networks,” IEEE Communications Magazine, vol. 53, no. 12, Communications Standards Supplement, pp. 75-82, Dec. 2015.
108