The Cloud is the future Internet: How do we engineer a cloud?
Transcript of The Cloud is the future Internet: How do we engineer a cloud?
Jim Roberts, Inria, France
The role of the engineer
• to quantify the three-way relationship between demand, capacity and performance
[figure: triangle linking demand, capacity and performance]
The role of the engineer
• to quantify the three-way relationship between demand, capacity and performance
[figure: the network at the centre of the demand / capacity / performance triangle]
The role of the engineer
• an example from the telephone network: the Erlang formula
[figure: demand, capacity (number of trunks, N) and performance (call blocking probability, B) linked by the network]
Traffic variations and stationarity
[figure: mean number of calls over one day and over one week; busy-hour demand is A, and within the busy hour calls form a stationary stochastic process (Poisson call arrivals) with mean A]
The role of the engineer
• an example from the telephone network: the Erlang formula
• insensitivity (of performance to detailed traffic characteristics) facilitates engineering
[figure: demand (Poisson call process of intensity A), capacity (number of trunks, N) and performance (call blocking probability, B) linked by the network]
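The Erlang formula in question gives the blocking probability B when Poisson demand of intensity A erlangs is offered to N trunks. It is convenient to compute with the standard recursion, sketched here in Python (the function name is mine):

```python
def erlang_b(A, N):
    """Erlang B blocking probability for demand A (erlangs) on N trunks,
    via the numerically stable recursion
    B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    B = 1.0
    for n in range(1, N + 1):
        B = A * B / (n + A * B)
    return B

# e.g. 2 erlangs offered to 2 trunks: about 40% of calls are blocked
print(erlang_b(2, 2))
```

The recursion avoids the overflow-prone factorials of the closed form B = (A^N/N!) / Σ A^k/k!.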
The role of the engineer
• what about the Internet? what about the Cloud?
[figure: demand (a stationary arrival process...), capacity (bandwidth and how it is shared) and performance (loss rates, response times,...) linked by the network, marked with a question mark]
Outline
• the future Internet as a network of data centers
• a survey of data center network research
• lessons from Internet bandwidth sharing
• how do we engineer a cloud?
A network of data centers
• most traffic in an ISP network originates in a data center
  – Google, Facebook,..., Akamai, Limelight,...
Towards an information-centric future Internet
• eg, equip routers with a “content store”
• cache popular content to realize a more favourable memory-bandwidth tradeoff
Towards a network of data centers
• router content stores are limited in size by technology while effective traffic reduction requires VERY large caches
• and content delivery is a business needing compute power
• and data centers can also do routing...
Evaluating the memory-bandwidth tradeoff
• assuming a stationary content request process (the “independent reference model”)
[figure: requests for content served from a big cache require low bandwidth]
Evaluating the memory-bandwidth tradeoff
• assuming a stationary content request process (the “independent reference model”)
• accounting for a very large content catalogue...
  – web pages, user-generated content, file sharing, video
  – petabytes of content
• and highly dispersed content item popularities
  – Zipf-like behaviour with exponent < 1
• an example from BitTorrent trackers... [Dan & Carlsson, IPTPS 2010]
[figure: requests for content served from a small cache require high bandwidth]
A popularity law for torrents
[figure: torrent popularity versus rank, with regions ∝ 1/x^0.6 and ∝ 1/x^0.8, and mass split roughly 10% / 60% / 30%]
LRU hit rate versus cache size
• only 25% traffic reduction for a 10 TB cache
• 90% traffic reduction needs 300 TB
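The shape of this curve is easy to reproduce with a toy simulation: an LRU cache fed by requests drawn from a Zipf-like law with exponent < 1 only achieves a high hit rate once it holds a large fraction of the catalogue. A sketch (the catalogue size, exponent and cache sizes are illustrative, far smaller than the petabyte-scale figures above):

```python
import random
from collections import OrderedDict

def lru_hit_rate(requests, cache_size):
    """Fraction of requests served from an LRU cache of the given size."""
    cache = OrderedDict()
    hits = 0
    for item in requests:
        if item in cache:
            hits += 1
            cache.move_to_end(item)        # most recently used
        else:
            cache[item] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(requests)

random.seed(1)
CATALOGUE = 10_000
# Zipf-like popularity with exponent 0.8 (< 1, as on the slide above)
weights = [1 / (rank ** 0.8) for rank in range(1, CATALOGUE + 1)]
requests = random.choices(range(CATALOGUE), weights=weights, k=100_000)

for size in (100, 1_000, 5_000):
    print(f"cache {size}: hit rate {lru_hit_rate(requests, size):.2f}")
```

Because the exponent is below 1, the popularity mass is highly dispersed: the hit rate grows only slowly with cache size, mirroring the "25% for 10 TB, 90% needs 300 TB" numbers.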
Large storage, low bandwidth
• using best-guess cost data suggests that large (~petabyte) stores capturing more than 90% of content traffic are cost-effective
• instead of “routers that do caching”, we have “data centers that do routing”!
Outline
• the future Internet as a network of data centers
• a survey of data center network research
• lessons from Internet bandwidth sharing
• how do we engineer a cloud?
Data centers are built from commodity devices
• most data centers today have a hierarchical structure with multiple alternative paths between thousands (and thousands) of servers
[figure: hierarchy from servers through top of rack switches and aggregation switches to core routers and the Internet]
Single user and multi-tenant data centers
• single user networks (eg, Facebook)
  – can introduce proprietary protocols
  – servers and network shared by different “services”
• multi-tenant data centers (eg, Amazon)
  – must ensure isolation and meet SLAs
[figure: virtual machines allocated to a tenant]
Data center traffic characteristics
• data center flows
  – a sparse traffic matrix with pronounced locality
  – highly variable flow sizes: query traffic (< 2 KB), updates (50 KB to 1 MB), fresh data (1 MB to 50 MB)
  – bursty flow arrivals
[figure: traffic matrix intensity, from server × to server, from [Kandula, IMC 2009]]
Data center congestion control
• TCP proves inadequate for bandwidth sharing
  – big flows beat up little flows in small shared switch buffers
[figure: from [Alizadeh, Sigcomm 2010]]
Data center congestion control
• TCP proves inadequate for bandwidth sharing
  – big flows beat up little flows in small shared switch buffers
  – exacerbated by the “incast problem” – ie, many flows converge on one receiver
[figure: from [Chen, USENIX 2012]]
Data center congestion control
• TCP proves inadequate for bandwidth sharing
  – big flows beat up little flows in small shared switch buffers
  – exacerbated by the “incast problem” – eg, many flows “shuffled” at the same time to one “reducer”
• many proposals for new congestion control protocols
  – DCTCP, limits delays by a refined ECN scheme [Sigcomm 2010]
  – D3, uses explicit deadlines for flows to complete [Sigcomm 2011]
  – D2TCP, combines aspects of the previous two [Sigcomm 2012]
  – PDQ, explicit rates accounting for deadlines [Sigcomm 2012]
  – HULL, low delay by “phantom queues” and ECN [NSDI 2012]
• as effectiveness relies on universal adoption, this is not obviously practical in a multi-tenant data center
Data center congestion control
• multipath forwarding to alleviate congestion
  – using MPTCP, “packet spraying”,...
[figure: from [Raicu, Sigcomm 2011]]
Data center congestion control
• use OpenFlow to route or re-route flows to avoid congestion
[figure: from [Al-Fares, NSDI 2010]]
Data center congestion control
• recap: many proposals for new congestion control protocols, for multipath forwarding, for flow routing using OpenFlow,...
• but almost all proposals are evaluated using static flows, ignoring real traffic characteristics!
  – eg, the “permutation traffic matrix”: every server sends to one other server chosen at random
  – pessimistic link load, optimistic bandwidth sharing
Sharing the data center network
• reserve “virtual data centers” from a given traffic matrix, with or without over-booking (SecondNet, Oktopus, Gatekeeper)
[figure: from [Ballani, Sigcomm 2011]]
Sharing the data center network
• reserve “virtual data centers” from a given traffic matrix, with or without over-booking (SecondNet, Oktopus, Gatekeeper)
• perform weighted fair sharing between “entities”, using congestion control and/or weighted fair scheduling (SeaWall, NetShare, FairCloud)...
Weighted fair shares
• NetShare proposes weighted fair link sharing with weight equal to the min of the “upstream” and “downstream” VM weights
• FairCloud proposes weighted fair link sharing with weight equal to the sum of the upstream and downstream VM weights
• in fact, both NetShare and FairCloud are more complicated than this...
[figure: VM instances upstream and downstream of the shared links between servers]
Sharing the data center network
• recap: sharing the network
  – reserve “virtual data centers” (SecondNet, Oktopus, Gatekeeper)
  – perform weighted fair sharing between entities (SeaWall, NetShare, FairCloud)
• although data center traffic characterization reveals bursty arrivals of flows of highly variable size, all the above are evaluated assuming fixed patterns of flows
• our experience of Internet traffic control under stationary random traffic suggests
  – bandwidth reservation doesn’t meet user requirements and is generally unnecessary
  – service differentiation by weighted sharing has unpredictable performance and is also generally unnecessary
  – it is not difficult to ensure excellent quality of service for all
Outline
• the future Internet as a network of data centers
• a survey of data center network research
• lessons from Internet bandwidth sharing
• how do we engineer a cloud?
Internet traffic control
• my “mantra” (for more than 10 years!): routers should impose per-flow fair sharing and not rely on end-system implemented congestion control
• fair queuing is feasible and scalable... and realizes implicit service differentiation... for network neutral traffic control
• fairness is an expedient, not a socio-economic objective
[figure: a fair queuing scheduler serving every flow at the fair rate]
Statistical bandwidth sharing
• consider a network link handling flows between users, servers, data centers,... (that may be sources or sinks)
• define: link load = flow arrival rate × mean flow size / link rate = packet arrival rate × mean packet size / link rate = link utilization
[figure: sources and sinks exchanging flows over a shared link]
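A numeric illustration of this definition (the figures are invented for the example):

```python
# link load = flow arrival rate x mean flow size / link rate
flow_arrival_rate = 1_000      # flows per second
mean_flow_size = 8e6           # 1 MB expressed in bits
link_rate = 10e9               # a 10 Gb/s link
load = flow_arrival_rate * mean_flow_size / link_rate
print(load)                    # 0.8, i.e. 80% utilization
```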
Traffic variations and stationarity
[figure: mean link utilization over one day and over one week; busy-hour demand; within the busy hour, a stationary stochastic process with a well-defined mean]
Statistical bandwidth sharing
• in the following simulation experiments, assume flows
  – arrive as a Poisson process
  – have an exponential size distribution
  – instantaneously share link bandwidth fairly
• results apply more generally thanks to insensitivity
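With these assumptions the number of flows on the link evolves as a simple birth-death process: flows arrive at rate λ and, with exponential sizes and fair sharing, some flow completes at total rate μ whenever the link is busy. The experiments can therefore be mimicked in a few lines; a sketch with illustrative parameters and names of my choosing:

```python
import random

def mean_active_flows(load, steps, seed=0):
    """Time-average number of flows on a fairly shared link at the given
    load, simulated as a birth-death chain with arrival rate `load` and
    unit total service rate."""
    rng = random.Random(seed)
    lam, mu = load, 1.0
    n = 0                # flows currently in progress
    t = area = 0.0
    for _ in range(steps):
        rate = lam + (mu if n > 0 else 0.0)
        dt = rng.expovariate(rate)   # holding time in the current state
        area += n * dt
        t += dt
        # next event: an arrival with probability lam/rate, else a departure
        n = n + 1 if rng.random() < lam / rate else n - 1
    return area / t

# at load 0.5 the long-run mean is close to load / (1 - load) = 1
print(mean_active_flows(0.5, 200_000))
```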
Performance of fair shared link
[figure: number of active flows over time, and flow performance (mean rate, duration) as a function of load (arrival rate × mean size / link rate), shown for a sequence of increasing loads]
Observations
• the number of flows using a fairly shared link is small until load approaches 100% (for any link capacity)
• therefore, fair queuing schedulers are feasible and scalable
• our simulations make Markovian assumptions but the results for the number of active flows are true for much more general traffic [Ben-Fredj et al, Sigcomm 2001]
[figure: sessions arrive as a Poisson process; each session alternates flows (flow 1, flow 2,..., flow n) and think times before departing]
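Under the Markovian assumptions just mentioned, the fairly shared link is an M/M/1 processor-sharing queue, so the number of flows in progress has the geometric distribution P(n) = (1 - rho) * rho^n at load rho. The first observation is then easy to check numerically (a small sketch; the function names are mine):

```python
def mean_flows(load):
    """Mean number of flows in progress on a fairly shared link:
    load / (1 - load) for the geometric law P(n) = (1-load)*load**n."""
    return load / (1 - load)

def prob_more_than(load, n):
    """P(more than n flows in progress) = load**(n+1)."""
    return load ** (n + 1)

for load in (0.5, 0.8, 0.9, 0.99):
    print(f"load {load:.2f}: mean {mean_flows(load):6.1f}, "
          f"P(>100 flows) {prob_more_than(load, 100):.2e}")
```

Whatever the link capacity, the number of concurrent flows a fair queuing scheduler must track stays small until the load gets very close to 100%.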
More simulations
• on Internet core links (≥ 10 Gbps), the vast majority of flows cannot use all available capacity; their rate is constrained elsewhere on their path (eg, ≤ 10 Mbps)
• consider a link shared by flows whose maximum rate is only 1% of the link rate
  – conservatively assume these flows emit packets as a Poisson process at a rate proportional to the number of flows in progress
Performance with rate limited flows
[figure: number of flows in progress and number of active flows over time; “active” flows have ≥ 1 packet in the queue]
Observations 2
• most flows are not elastic and emit packets at their peak rate
• these flows are “active”, and need to be scheduled, only when they have a packet in the queue
• the number of active flows is small until load approaches 100%
• fair queuing is feasible and scalable, even when the number of flows in progress is very large
More simulations
• links may be shared by many rate limited flows and a few elastic flows
• consider a link where 50% of the traffic comes from flows whose peak rate is 1% of the link rate and the other 50% is elastic traffic
Performance of link with elastic and rate limited flows
[figure: number of flows in progress and number of active flows over time]
Observations 3
• the number of active flows is small (< 100) with high probability until load approaches 100%
• therefore, fair queuing is feasible and scalable
• fair queuing means packets of limited peak rate flows see negligible delay:
  – they are delayed by at most 1 round robin cycle
  – this realizes implicit service differentiation since conversational and streaming flows are in the low rate category
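The "round robin cycle" above is what a scheduler like deficit round robin (DRR) provides: one queue per active flow, each visited once per round with a byte quantum, approximating per-flow fair sharing with O(1) work per packet. A minimal sketch (the flow names, packet sizes and quantum are illustrative):

```python
from collections import deque

def drr_schedule(flows, quantum=1500):
    """Deficit round robin over `flows` (flow id -> list of packet sizes,
    in bytes); returns the order in which packets are transmitted."""
    queues = {f: deque(pkts) for f, pkts in flows.items()}
    deficit = {f: 0 for f in flows}
    active = deque(f for f in flows if queues[f])
    order = []
    while active:
        f = active.popleft()
        deficit[f] += quantum                     # one quantum per round
        while queues[f] and queues[f][0] <= deficit[f]:
            pkt = queues[f].popleft()
            deficit[f] -= pkt
            order.append((f, pkt))
        if queues[f]:
            active.append(f)                      # still backlogged
        else:
            deficit[f] = 0                        # idle flows keep no credit
    return order

# a low rate flow's packets are served within one round rather than
# queuing behind the big flow's backlog
order = drr_schedule({"big": [1500] * 6, "small": [100, 100]})
print(order[:3])
```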
Yet more simulations
• weighted fair queuing is proposed for service differentiation
  – eg, users have a share proportional to the price they pay
• consider a link shared by two types of flow, where type 1 flows get 10 times the rate of type 2 flows
Performance of weighted fair shared link
[figure: number of active flows over time; type 1 flows get 10 times as much bandwidth as type 2 flows]
Observations 4
• weighted fair sharing hardly favours the high weight class until load approaches 100% (when all flows suffer!)
• it’s not worth the (considerable) effort to account for weights
• the results also show that quality is OK even when sharing is not perfectly fair
Recommendation for bandwidth sharing
• implement per-flow fair queuing in router queues
  – this is scalable and feasible (though more complex than FIFO)
  – view fairness as an expedient, not a socio-economic objective
• apply traffic engineering to ensure load is not too close to 100%, and overload controls in case this fails
• there is then an equivalent of the Erlang formula for the Internet [Bonald & Roberts, CCR 2012]
  – by insensitivity, the only significant traffic characteristic is link load, implying simple network engineering
• access networks need more than fair sharing - but see “bufferbloat”, where fair queuing is the preferred solution
• so what about the Cloud?
Outline
• the future Internet as a network of data centers
• a survey of data center network research
• lessons from Internet bandwidth sharing
• how do we engineer a cloud?
Turning the Cloud into the future Internet
• define the network architecture
  – name-based routing, receiver control, chunking,...
• elaborate the network structure
  – eg, bring the highly popular VoD catalogue closer to users
  – concentrate compute, distribute content storage
Designing a better data center
• instead of networks built with legacy switches and routers, seek an original design that maximizes performance, minimizes energy, facilitates content retrieval,...
• using software routers to perform dynamic bandwidth allocation on WDM lightpaths...
Sharing data center bandwidth
• avoid bandwidth reservation since users are unable to predict highly variable demand
• avoid complicated weighted bandwidth sharing that does not in fact bring expected service differentiation
• apply dynamic sharing algorithms that are simple to implement and yield robust network engineering, like per-flow fair sharing
• evaluate proposals using a realistic model of demand
Last slide
• the Cloud is the future Internet
• where nodes are assemblies of ubiquitous CPU, memory and storage devices that do routing among other things
• enough network engineering research challenges for another 40 years!
[figure: the demand / capacity / performance triangle]