From control of networks to networked control
Wing Shing Wong
Department of Information Engineering
The Chinese University of Hong Kong
Objective: Explore the impact of recent Internet developments on network control and networked control problems
Partially based on joint work with: LO Yuan-Hsun, ZHANG Yijin, CHEN Yi, LIU Zhongchang, LIU Fang, LUO Jingjing, TAN Cheng, and others.
Talk outline
1. Recent Internet developments
2. Routing and scheduling on the fat-tree topology
3. Network control on fat-tree networks
4. Implication on networked control
Networked control over an open network
Physical network
Network of decision-makers (agents)
Open communication network
Two recent trends in the Internet
• SDN – Software Defined Networks
• Growth of mega-sized data center networks
Good ideas don’t die: Clos Networks
C. Clos, “A study of non-blocking switching networks,” The Bell System Technical Journal, 1953.
Three-stage network, intended for crossbar switches
Western Electric 100-point six-wire Type B crossbar switch
From C. Clos, “A study of non-blocking switching networks,” BSTJ, 1953.
A typical three-layer Clos network
[Figure: a three-stage Clos network with 𝑛² input links and 𝑛² output links; the ingress stage has 𝑛 switches of size 𝑛 × 𝑟, the middle stage 𝑟 switches of size 𝑛 × 𝑛, and the egress stage 𝑛 switches of size 𝑟 × 𝑛.]
Rearrangeably non-blocking condition: 𝑟 ≥ 𝑛
Fat-tree, a popular architecture for data center networks
• The fat-tree is a folded version of a Clos network: an example involving 4 PODs
Based on Mohammad Al-Fares et al., “A Scalable, Commodity Data Center Network Architecture,” SIGCOMM’08, August 17–22, 2008, Seattle, Washington, USA
[Figure: fat-tree with core, aggregation, and edge switch layers and hosts, organized into PODs 1–4.]
• 𝐓𝑛: 𝑛-ary fat-tree network; each switch/router has 2𝑛 ports and there are 2𝑛 PODs.
• 𝑐𝑖,𝑗: the 𝑗-th core switch in the 𝑖-th core group. Number of core switches: 𝐶𝑛 = 𝑛².
• 𝑎𝑖,𝑗: the 𝑗-th aggregation switch in the 𝑖-th pod. 𝐴𝑛 = 2𝑛².
• 𝑒𝑖,𝑗: the 𝑗-th edge switch in the 𝑖-th pod. 𝐸𝑛 = 2𝑛².
• ℎ𝑖,𝑗: the 𝑗-th host attached to an edge switch in the 𝑖-th pod. 𝐻𝑛 = 2𝑛³.
• With 256 ports per switch (𝑛 = 128), the system can support 2𝑛³ = 4,194,304 hosts
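The switch and host counts above follow directly from 𝑛; a minimal sketch computing them from the port count (function name is illustrative, not from the talk):

```python
# Switch and host counts for an n-ary fat-tree T_n (2n ports per switch),
# using the slide's formulas: C_n = n^2, A_n = E_n = 2n^2, H_n = 2n^3.
def fat_tree_counts(ports_per_switch):
    n = ports_per_switch // 2
    return {"core": n**2, "aggregation": 2 * n**2,
            "edge": 2 * n**2, "hosts": 2 * n**3}

print(fat_tree_counts(256)["hosts"])  # 4194304 hosts with 256-port switches
```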
Fat-tree network architecture
Then → Now
Switch fabric → Interconnect
Circuit switch → Circuit/packet switch
No inside buffering → Buffering inside network
Blocking → Blocking/queueing delay
Single rate → Single and multi-rate
Undivided connection → Undivided and divided connections
Static, centralized control → Dynamic, centralized/distributed control
No priority class → Priority class possible
Difference in application scenarios
Basic routing and scheduling
issues on fat-tree networks
Global packing number of a network
• A basic issue is to understand “bandwidth” requirement for a given traffic pattern.
• If two or more connections use the same link at the same time then there will be blocking or queuing unless the link can be shared by techniques such as multiple wavelengths (under WDMA) or multiple time slots (under TDMA).
• Roughly, the minimal number of wavelengths or time slots required in order to satisfy a given traffic demand without blocking or queueing is the global packing number (GPN).
A toy example
• 𝑁 = {1, 2, 3, 4}; Φ(𝐺, 𝑁) = 3
• Assume uniform traffic
• The global packing number can be understood as the minimum number of:
– wavelengths required in an optical network
– time slots to ensure zero queueing delay
• Theorem (Lo Zhang Chen Wong Fu preprint): For any integer 𝑛 > 1, the global packing number for uniform traffic is 2𝑛³ − 1.
• The construction makes use of Latin squares.
• Consider a general integer-valued matrix 𝐴 ∈ ℤ^(2𝑛³ × 2𝑛³), for example:
GPN for two types of traffic
𝐴 = [  0   2  ⋯  7 ]
    [ 13   0  ⋯  6 ]
    [  ⋮   ⋮  ⋱  ⋮ ]
Result for a general traffic matrix
• Define an induced bipartite multigraph 𝐵(𝐴) = (𝑋 ∪ 𝑌, 𝐸), where |𝑋| = |𝑌| = 2𝑛³ and the multiplicity of the edge from node 𝑋𝑖 to node 𝑌𝑗 is 𝐴𝑖,𝑗.
• Nodes represent the hosts and edges represent the traffic between host pairs.
• Use different edge colors to represent the aggregation switches used in a POD.
Result for a general traffic matrix
• Theorem (Chen Wong Lo Zhang preprint): The global packing number for traffic 𝐴 is
𝜙(𝐓𝑛, 𝐻𝑛, 𝐴) = 𝜒(𝐵(𝐴)),
where 𝜒 is the chromatic index. In particular,
𝜙(𝐓𝑛, 𝐻𝑛, 𝐴) = max{ max𝑖 Σ_{𝑗=1}^{𝐻𝑛} 𝐴𝑖𝑗 , max𝑗 Σ_{𝑖=1}^{𝐻𝑛} 𝐴𝑖𝑗 }.
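The closed form above is easy to evaluate; a minimal sketch (the example matrix is hypothetical, not from the talk):

```python
# Global packing number for a traffic matrix A on the fat-tree, using the
# theorem's closed form: phi = max(max row sum, max column sum). For a
# bipartite multigraph the chromatic index equals the maximum degree,
# which is exactly this quantity.
def global_packing_number(A):
    max_row = max(sum(row) for row in A)
    max_col = max(sum(col) for col in zip(*A))
    return max(max_row, max_col)

A = [[0, 2, 1],
     [3, 0, 2],
     [1, 1, 0]]
print(global_packing_number(A))  # 5: the second row sums to 3 + 0 + 2 = 5
```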
Network control on fat-tree
networks:
Stability and delay analysis
Analytical results for switches
• Extensively studied for switches, Input-Queued Switches and Output-Queued Switches
• Throughput is limited by Head-of-line (HOL) problem which can be addressed by Virtual Output Queueing (VOQ)
• Scheduling is mapped to a bipartite graph matching problem, solved by maximum size matching (𝑂(𝑁^{5/2}) complexity) or by maximum weight matching
• Only maximum weight matching is throughput optimal (Tassiulas, Ephremides, Kumar, Meyn, McKeown, Anantharam, Walrand, and others.)
Analytic model and throughput optimality
• At each time slot, either 0 or 1 packet arrives at each input
• Stationary, ergodic arrivals with input node 𝑖 to output node 𝑗 traffic rate = 𝜆𝑖,𝑗 .
• Each node or link can route at most one packet per slot
• The arrivals are admissible if Σ𝑗 𝜆𝑖,𝑗 < 1 for every input 𝑖 and Σ𝑖 𝜆𝑖,𝑗 < 1 for every output 𝑗
• A schedule is throughput optimal if it stabilizes all admissible rates
[Figure: input nodes with virtual output queues, one per output node; arrival rate 𝜆𝑖,𝑗 from input 𝑖 to output 𝑗.]
A short digression: classification of network control models using the choice-based control perspective
Google’s Project Loon (“Loon for All”). Image from http://www.google.com/loon/
Locations requiring attention
A motivating example
• Assuming a simple linear dynamic model
𝑑𝑥/𝑑𝑡 = 𝐴𝑥(𝑡) + Σ_{𝑖=1}^{𝐿} 𝑏𝑖 𝑢𝑖(𝑡),  𝑥(𝑡₀) ∈ ℝ⁴
• Agent i can select a point to preferentially serve from a given set:
{𝐏𝑖,1, 𝐏𝑖,2, …𝐏𝑖,𝑁𝑖}
• Assume there is a pre-agreed target location depending on the selected choices. Is it possible to find distributed controls that steer the satellite according to the agents’ choices without a central coordinator?
Satellite positioning problem
Choice-based action system • Consider a distributed agent system with L agents:
𝑙 ∈ 𝐼 = {1, … , 𝐿}
• Agent l has 𝑁𝑙 choices 𝑖𝑙 ∈ 𝐶𝑙 = {1,… , 𝑁𝑙}
• The choice combination of all agents: 𝑖 = 𝑖1, 𝑖2, … , 𝑖𝐿 ∈ 𝐶1 × 𝐶2 ×⋯× 𝐶𝐿 ≡ 𝐶
• For each choice combination there is a target state to be reached: 𝐻𝑖 ∈ ℝ𝑛
• The number of targets: 𝑁1 × 𝑁2 ×⋯×𝑁𝐿
• Represent all the targets in a tensor 𝐻 = (𝐻𝑖) of dimensions 𝑁₁ × 𝑁₂ × ⋯ × 𝑁𝐿
The basic questions
• Assume agents select their choices with a known (uniform) distribution and the choices remain unchanged.
• Can any target be reached under joint control of the agents without explicit communication (a central coordinator or agent-to-agent communication)?
• If not, how much information is needed among the agents?
• Two extreme cases: no communication versus full communication
– Is it possible to realize a target from a set of multiple choices with no communication between agents?
– If full communication is provided, the problem is equivalent to a collection of single-target problems.
Systems with linear dynamics
• Deterministic linear system with L agents:
𝑥̇𝑖(𝑡) = 𝐴𝑥𝑖(𝑡) + Σ_{𝑙=1}^{𝐿} 𝐵𝑙 𝑢𝑙(𝑡, 𝑥𝑖, 𝑖𝑙),  𝑥𝑖(0) ∈ ℝ^𝑘  (∗)
• Minimize the cost function
𝐽 = Σ_{𝑖∈𝐶} [ 𝛼 𝑒𝑖(𝑡𝑓)ᵀ 𝑒𝑖(𝑡𝑓) + 𝛽 Σ_{𝑙=1}^{𝐿} ∫₀^{𝑡𝑓} 𝑢𝑙ᵀ(𝑡) 𝑢𝑙(𝑡) 𝑑𝑡 ],
where 𝑒𝑖(𝑡𝑓) = 𝑥𝑖(𝑡𝑓) − 𝐻𝑖 is the target error for choice combination 𝑖, and 𝛼, 𝛽 are positive normalization factors.
• Definition: A target matrix is reachable if, for 𝛽 = 0, there exist controls such that 𝐽 = 0.
Result for linear systems
Theorem (Guo Liu Wong): If (∗) is individually controllable, then there is an explicit solution for the above problem. Any target is reachable with open-loop control if and only if for any agents 𝑙 and 𝑚, any choices 𝑖𝑙, 𝑖𝑙′ in the choice set of agent 𝑙, and any choices 𝑖𝑚, 𝑖𝑚′ in the choice set of agent 𝑚:
𝐻_{𝑖₁⋯𝑖𝑙⋯𝑖𝑚⋯𝑖𝐿} − 𝐻_{𝑖₁⋯𝑖𝑙⋯𝑖𝑚′⋯𝑖𝐿} = 𝐻_{𝑖₁⋯𝑖𝑙′⋯𝑖𝑚⋯𝑖𝐿} − 𝐻_{𝑖₁⋯𝑖𝑙′⋯𝑖𝑚′⋯𝑖𝐿}
In the two-agent case this reads: for any choices (𝑖, 𝑗, 𝑘, 𝑙),
𝐻𝑖,𝑙 − 𝐻𝑖,𝑘 = 𝐻𝑗,𝑙 − 𝐻𝑗,𝑘
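The two-agent condition can be checked mechanically; a minimal sketch for scalar targets (function name and example matrices are illustrative assumptions, not from the talk):

```python
# Two-agent reachability (compatibility) check from the theorem: a target
# matrix H is reachable without communication iff for all rows i, j and
# columns k, l: H[i][l] - H[i][k] == H[j][l] - H[j][k], i.e. H decomposes
# into additive per-agent contributions.
from itertools import combinations

def is_compatible(H, tol=1e-9):
    rows, cols = range(len(H)), range(len(H[0]))
    for i, j in combinations(rows, 2):
        for k, l in combinations(cols, 2):
            if abs((H[i][l] - H[i][k]) - (H[j][l] - H[j][k])) > tol:
                return False
    return True

print(is_compatible([[1, 2], [3, 4]]))  # True: additive, H[i][j] = a_i + b_j
print(is_compatible([[1, 2], [3, 5]]))  # False: 2 - 1 != 5 - 3
```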
[Figure: agents made choices at t₀, t₁, and t₂. Compatible case: arithmetic-mean targets. Incompatible target matrix case (non-reachable without communication): the target is the center of the minimum covering circle, a nonlinear function of the choices.]
Target reaching control with signaling
• For incompatible H, signaling is necessary to ensure the targets can be reached.
• Two-round solution:
– First round: target signaling
– Second round: target control
• Definition: A code-tensor
encodes a target matrix if
1. It is a tensor with indices in and entries in ,
2. It is compatible,
3. For any two distinct indices,
whenever
[Figure: planned trajectories of the sensor’s position. o: code states, *: target states.]
Scheduling problems from the choice-based perspective
• Traffic demands of the hosts are regarded as choices
• Switches also hold partial information
• Scheduling algorithm design is based on information exchange arrangement assumptions:
– Centralized set-up
• Maximum weight matching
• Modified Q-CSMA
– Partially distributed
– Distributed
Virtual output queueing model for fat-tree
• Each host-to-host source-destination pair (𝑖, 𝑗) has a virtual queue at the source link, with queue size 𝑄𝑖,𝑗(𝑡) at the 𝑡-th iteration (𝑂(𝑛⁶) such pairs)
• Let 𝒫 be the set of paths for all host-to-host communication in a fat-tree network (𝑂(𝑛⁸) paths)
• Let 𝐐(𝑛) be the 2𝑛³ × 2𝑛³ queue-length matrix at time 𝑛
[Figure: hosts 1–4, each maintaining one virtual output queue (VOQ) per destination host.]
Non-Conflicting Routing for Fat-Tree
Adopting McKeown et al.’s Longest Queue First (LQF) for fat-tree
• At each time slot 𝑛, solve
max_𝐏 tr(𝐐ᵀ(𝑛)𝐏),
where 𝐐(𝑛) is the queue state matrix at time 𝑛 and 𝐏 is a permutation matrix.
• Routing solution always exists for a permutation traffic.
→ Zero-queueing inside the network
→ The solution is throughput optimal
• The solution requires a centralized controller with access to all queue length information. The selected schedule has to be conveyed to all hosts.
• Not scalable
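The LQF objective above can be illustrated on a tiny example by brute force over permutations; real switches use maximum-weight-matching algorithms instead. The queue matrix below is a hypothetical example:

```python
# LQF-style schedule for a small switch: choose the permutation matrix P
# maximizing tr(Q^T P), i.e. the input-output matching with the largest
# total queue length. Brute force -- a sketch for tiny examples only.
from itertools import permutations

def lqf_schedule(Q):
    n = len(Q)
    best_perm, best_weight = None, -1
    for perm in permutations(range(n)):   # perm[i] = output matched to input i
        weight = sum(Q[i][perm[i]] for i in range(n))
        if weight > best_weight:
            best_weight, best_perm = weight, perm
    return best_perm, best_weight

Q = [[3, 0, 1],
     [0, 5, 2],
     [4, 1, 0]]
print(lqf_schedule(Q))  # ((2, 1, 0), 10): serve queues (0,2), (1,1), (2,0)
```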
Applying Q-CSMA to fat-tree
• Originally developed by Ni, Tan, Srikant for wireless network Medium Access Control (MAC)
• Motivated by the Glauber dynamics in physics, where multiple links (in our case paths) can update their states in each slot
• Aim to obtain non-conflicting schedules (in our case zero-queueing delay schedules) and guarantee throughput-optimality
• A non-conflicting schedule is a path set whose elements do not have overlapping links. Let ℳ₀ denote the collection of all non-conflicting schedules; it satisfies
⋃_{𝑴 ∈ 𝓜₀} 𝑴 = 𝓟.
[Figure: host pairs with one or multiple candidate paths each; the red paths are selected to transmit packets.]
Path scheduling algorithm
[Figure: time slots t, t+1, …, each consisting of a control phase and a data phase; the schedule in slot t is merged with a candidate schedule M to produce the schedule in slot t+1.]
Operations in a control phase
• A time-reversible discrete-time Markov chain (DTMC) is defined that operates in the control phase to find the routing schedule.
• The DTMC transition probability is defined by randomly selecting a non-conflicting candidate schedule M ∈ ℳ₀ and merging it with the schedule defined by the current Markov state.
Throughput optimality
• The merge algorithm guarantees that, at equilibrium, the Markov state 𝑋 is selected with probability proportional to
(1/𝑍) ∏_{(𝑖,𝑗)} α^{𝑄𝑖,𝑗(𝑡)},
where the product is over all (𝑖, 𝑗) node pairs in 𝑋.
• It can be argued, as in the original Q-CSMA model, that the scheduling algorithm is throughput optimal.
• However, the complexity is still high:
– A central controller has to select the candidate schedule
– Merging operations depend on queue-length information (𝑂(𝑛³))
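The stationary distribution above can be illustrated numerically; a minimal sketch with hypothetical queues and schedules, showing that schedules serving longer queues dominate at equilibrium:

```python
# Stationary weights of Q-CSMA-style dynamics: each non-conflicting
# schedule X gets weight prod_{(i,j) in X} alpha**Q[(i,j)]; dividing by
# the normalization constant Z gives its equilibrium probability.
def schedule_probs(schedules, Q, alpha):
    weights = []
    for X in schedules:
        w = 1.0
        for pair in X:
            w *= alpha ** Q[pair]
        weights.append(w)
    Z = sum(weights)                 # normalization constant
    return [w / Z for w in weights]

Q = {(0, 1): 5, (1, 0): 1}           # queue lengths per host pair
schedules = [[(0, 1)], [(1, 0)]]     # two candidate non-conflicting schedules
p = schedule_probs(schedules, Q, alpha=2.0)
print(p)  # p[0] = 32/34: the schedule serving the longer queue dominates
```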
[Figure: a 3-ary fat-tree with core groups 0–2, pods 0–5, and aggregation, edge, and host layers; this topology is reused in the following slides.]
Using buffers in the network
• Buffers are available
Aiming for more scalable solution
Assume each POD has an individual controller which has access to all queue information in the POD
Allowing buffering inside network
Focusing on each POD: by applying the LQF algorithm on the uplink, we can ensure the uplink edge queues are stable, and there is no need for queueing at the uplink aggregate queues.
Contention at the downlink of the core switches may occur, and packets may be queued there. If the queue sizes at these downlink queues are announced to all PODs, it is possible to design throughput-optimal scheduling for the network by means of the Tassiulas-Ephremides Lyapunov argument. The amount of message exchange is on the order of 𝑂(𝑛²).
Distributed load-balancing via packing and BIBD
• Basic idea: Core switches (servers) periodically announce the queue size information to aggregation switches (users) which use pre-assigned sequences to obtain load-balancing effects.
• Assume 𝑣 servers and 𝑏 users; each user has 𝑘 jobs to transmit at each time slot with probability 𝑝
• Set a threshold 𝑆: if a server’s queue size is strictly greater than 𝑆, the server will not accept new jobs until its queue size falls below the threshold (it changes from an available server to a non-available server)
• The set of available servers at each time slot is known to all users via broadcast
[Figure: aggregation switches 0–2 acting as users and core switches 0–2 acting as servers.]
Job scheduling sequence
• Pre-assign job scheduling sequences to users, one sequence per user for each possible number of available servers
• Definition [1]: A balanced incomplete block design (BIBD) is a pair (𝑋, ℬ), where |𝑋| = 𝑣 and ℬ is a collection of 𝑘-subsets of 𝑋 (blocks) such that each element of 𝑋 is contained in exactly 𝑟 blocks and any 2-subset of 𝑋 is contained in exactly 𝜆 blocks. It follows that: (1) 𝑣𝑟 = 𝑏𝑘 and (2) 𝑟(𝑘 − 1) = 𝜆(𝑣 − 1), where 𝑏 = |ℬ|.
• Definition [1]: A 𝑡-(𝑣, 𝑘, 𝜆) packing is a pair (𝑋, ℬ), where |𝑋| = 𝑣 and ℬ is a collection of 𝑘-subsets of 𝑋 (blocks) such that every 𝑡-subset of 𝑋 is a subset of at most 𝜆 blocks.
[1] C. Colbourn and J. Dinitz, Handbook of Combinatorial Designs, Chapman and Hall/CRC, 2007
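The BIBD definition can be verified directly for the 2-(7,3,1) design used below; a minimal sketch using the standard Fano plane block list (the specific block labeling is an assumption, any Fano plane works):

```python
# Verify the 2-(7,3,1) BIBD (Fano plane): every point lies in exactly
# r = 3 blocks and every pair of points in exactly lambda = 1 block.
from itertools import combinations

blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7},
          {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
points = set().union(*blocks)

r_ok = all(sum(p in B for B in blocks) == 3 for p in points)
lam_ok = all(sum(set(pair) <= B for B in blocks) == 1
             for pair in combinations(sorted(points), 2))
print(r_ok, lam_ok)  # True True

# The counting identities vr = bk and r(k-1) = lambda(v-1) also hold:
v, b, k, r, lam = 7, 7, 3, 3, 1
print(v * r == b * k, r * (k - 1) == lam * (v - 1))  # True True
```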
Examples of packing and BIBD. Example: given 7 users, 7 servers, and every user transmitting 3 jobs per time slot, we have the following scheme:
[Figure: designs for 𝑣 = 7 (2-(7,3,1) BIBD), 𝑣 = 6, 𝑣 = 5, 𝑣 = 4 (packings), and 𝑣 = 3 (2-(3,3,1) BIBD).]
Numerical study 1: 7 servers, 7 users, 3 jobs each time slot
(Random: a random 3-subset is selected for each user at each round)

User-prob  Service rate  Threshold  Wait-avg (Random)  Wait-avg (BIBD)  Wait-max (Random)  Wait-max (BIBD)
100%       2             2          1.500              0.500            3                  1
100%       2             3          1.999              0.999            4                  2
100%       2             10         5.499              4.499            7                  5
100%       3             3          0.984              0                2                  0
100%       3             10         2.740              0                5                  0
80%        2             2          0.998              0.910            3                  3
80%        2             3          1.480              1.398            4                  4
80%        2             10         4.972              4.893            7                  7
80%        3             3          0.203              0                2                  0
80%        3             10         0.300              0                5                  0
60%        2             2          0.467              0.347            3                  3
60%        2             3          0.647              0.470            4                  4
60%        2             10         1.305              0.698            7                  7
60%        3             3          0.075              0                2                  0
60%        3             10         0.081              0                4                  0
Numerical study 2: 15 servers, 35 users, 3 jobs each time slot

User-prob  Service rate  Threshold  Wait-avg (Random)  Wait-avg (BIBD)  Wait-max (Random)  Wait-max (BIBD)
100%       5             5          3.413              0.600            7                  2
100%       5             7          3.827              0.999            8                  2
100%       5             10         4.453              1.600            8                  3
100%       7             7          1.073              0                5                  0
100%       7             10         1.371              0                6                  0
80%        5             5          1.906              1.597            7                  7
80%        5             7          2.305              1.995            8                  8
80%        5             10         2.904              2.595            8                  8
80%        7             7          0.123              0                3                  0
80%        7             10         0.134              0                3                  0
60%        5             5          0.229              0.096            3                  2
60%        5             7          0.274              0.099            4                  2
60%        5             10         0.314              0.101            4                  3
60%        7             7          0.024              0                2                  0
60%        7             10         0.024              0                2                  0
Implications on networked control
General trends
• Better control of network delays enables more sophisticated, massively parallel, time-critical applications over open networks
– Remote surgery, interactive virtual reality games, remote control of UAVs/UGVs, etc.
• More integrated models for control and network communication
• E.g. application to time sampled system with network delay and packet loss
Integrated communication and control system
• In such a system, network control is part of the overall system control consideration
Time sampled System
Prior result on sampled systems
• Assumptions:
– (H1) 𝐴̄ is unstable and nonsingular; 𝐵̄ has full column rank.
– (H2) (𝐴̄, 𝐵̄) is a stabilizable pair.
• Theorem [Tan and Zhang]: Under the above assumptions, and with a fixed packet dropout rate, the system
𝑥_{𝑘+1} = 𝐴̄𝑥_𝑘 + 𝛾_𝑘 𝐵̄𝑢_{𝑘−𝑑}
is stabilizable if and only if the packet dropout rate is strictly less than 𝑝_max, where 𝑝_max is obtained by solving
𝑝_max = sup_{𝑆>0, 𝑌} 𝑝 subject to 𝑆 > 0, 0 < 𝑝 < 1, and
[ −𝑆                      ∗    ∗  ]
[ 𝐴̄𝑆 + (1 − 𝑝)𝐵̄𝑌        −𝑆   ∗  ]  < 0.
[ √(𝑝(1 − 𝑝)) 𝐴̄^𝑑 𝐵̄𝑌     0   −𝑆 ]
• For scalar systems,
𝑝_max = 1 / (|𝐴̄|^{2𝑑+2} − |𝐴̄|^{2𝑑} + 1)
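The scalar closed form is easy to evaluate; a minimal sketch (function name is illustrative):

```python
# Maximum tolerable packet dropout rate for a scalar sampled system
# x_{k+1} = A x_k + gamma_k B u_{k-d}, using the slide's closed form:
# p_max = 1 / (|A|**(2d+2) - |A|**(2d) + 1).
def p_max_scalar(A, d):
    a = abs(A)
    return 1.0 / (a ** (2 * d + 2) - a ** (2 * d) + 1)

print(p_max_scalar(2.0, 0))  # 0.25: with no delay this reduces to 1/A^2
print(p_max_scalar(2.0, 1))  # 1/13: delay shrinks the tolerable loss rate
```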
Multiple paths to improve delay performance
Time sampled System
Control under SDN (CuSDN)
Sampling period: ℎ; 𝐴̄ = 𝑒^{𝐴ℎ}; system state 𝑥_𝑘 = 𝑥(𝑘ℎ) ∈ ℝ^𝑙; control delay 𝑑ℎ; control 𝑢_{𝑘−𝑑} = 𝑢(𝑥_{𝑘−𝑑})
Delay for the 𝑘-th sample: 𝑑_𝑘 = 𝑑_{𝑠𝑐}^𝑘 + 𝑑_{𝑐𝑎}^𝑘
Assume delays are i.i.d. with distribution 𝐹(𝑡) = prob(𝑑 ≤ 𝑡). Packet loss probability: prob(𝛾_𝑘 = 0) = 1 − 𝐹(𝑑ℎ). For the multiple-path case, assuming path independence: prob(𝛾_𝑘 = 0) = (1 − 𝐹(𝑑ℎ))^𝑚
• Theorem: (Wong and Tan) Consider a scalar CuSDN system satisfying assumptions H1-H2 with the packet dropout distribution satisfying 𝐹 𝑡 = 1 − 𝑒−𝑟𝑡 for some 𝑟 > 0.
(1) If 𝑟𝑚 > 2𝐴, the system is stabilizable for any sampling period ℎ.
(2) If 𝑟𝑚 = 2𝐴, the system is stabilizable if and only if 0 < ℎ < ℎ_max, where ℎ_max = ln 2 / (2𝐴).
(3) If 𝑟𝑚 < 2𝐴, the system is stabilizable if and only if 0 < ℎ < ℎ_max, where
ℎ_max = 𝑡 ⌈ 2𝐴𝑡 / ln(𝑒^{(𝑚𝑟−2𝐴)𝑡} − 𝑒^{−2𝐴𝑡} + 1) ⌉^{−1},
𝑡 = (ln(2𝐴) − ln(2𝐴 − 𝑚𝑟)) / (𝑚𝑟).
Consider 𝐴 = 0.25, 𝐵 = 1, 𝑑 = 2, 𝑚 = 2, 𝑥₀ = 2, 𝑢₋₂ = 𝑢₋₁ = 0.
If 𝑟₁ = 0.5, then 𝑟₁𝑚 > 2𝐴 and the system is stabilizable for any sampling period ℎ > 0.
If 𝑟₂ = 0.2, then 𝑟₂𝑚 < 2𝐴. Then 𝑡 = (ln(2𝐴) − ln(2𝐴 − 𝑚𝑟₂)) / (𝑚𝑟₂) = 4.0236, ⌈2𝐴𝑡 / ln(𝑒^{(𝑚𝑟₂−2𝐴)𝑡} − 𝑒^{−2𝐴𝑡} + 1)⌉ = ⌈4.6947⌉ = 5, and ℎ_max = 𝑡/5 = 0.8047.
[Figure: simulation with ℎ = 1.]
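The numeric example above can be reproduced directly from the case (3) formula; a minimal sketch (function name is illustrative):

```python
# Reproduce the slide's numeric example for the CuSDN bound (case rm < 2A):
# t = (ln(2A) - ln(2A - mr)) / (mr) and
# h_max = t / ceil(2At / ln(exp((mr - 2A) * t) - exp(-2A * t) + 1)).
import math

def h_max(A, r, m):
    mr = m * r
    t = (math.log(2 * A) - math.log(2 * A - mr)) / mr
    n = math.ceil(2 * A * t / math.log(math.exp((mr - 2 * A) * t)
                                       - math.exp(-2 * A * t) + 1))
    return t, t / n

t, h = h_max(A=0.25, r=0.2, m=2)
print(round(t, 4), round(h, 4))  # 4.0236 0.8047, matching the slide
```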
Future directions for CuSDN
• Extension to more complex sampled systems:
– High dimensional with stochastic noise
• Dealing with quantized packets
• Control signals with target activation time
– Controller designs controls with a targeted application time
– Plant implements the control with simple logic
– Target time can adapt to network congestion
THANK YOU