IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning: Concepts, Methods & Applications in Wireless Networks Part 3: Distributed & Federated Learning in Wireless Networks

Transcript of IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Page 1: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning:

Concepts, Methods & Applications in Wireless Networks

Part 3: Distributed & Federated Learning in Wireless Networks

Page 2: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Tutorial Outline
• Introduction (DG)
• Distributed & Federated Learning: Concepts & Methods (WS)
  • Basic concepts of federated and distributed learning
  • Reducing communication overhead
  • Clustered federated learning
• Distributed & Federated Learning in Wireless Networks (DG)
  • Distributed inference
  • Distributed training
  • Resource optimization for federated learning
  • Over-the-air distributed learning
• Distributed Learning & Neural Network Compression (WS)
  • Conceptual similarities and differences between these two problems
  • Standardization activities: ITU FG ML5G, MPEG AHG CNNMCD
• Conclusions

Page 3: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Distributed Machine Learning
• Distributed inference

• Distributed training

Page 4: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

• Most IoT devices have limited computational resources and cannot deploy complex DNN models.

Local Inference on Device

• Alternatives:
  • Shallow networks
    • Design and train simpler networks for edge devices
    • Knowledge distillation
  • Custom hardware designs for FPGAs (Microsoft Brainwave, Xilinx Everest) or ASICs (Google TPUs, Intel Nervana, IBM TrueNorth)
  • Quantization: Reduce the precision of weights (a minimal quantization sketch follows below)
    • Quantization of both weights and activations is possible with below 1% accuracy loss
    • Better results with layer-wise optimized precision
  • Network pruning: Remove unnecessary weights
    • Pruning individual weights or whole channels
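A minimal sketch of the quantization idea, assuming simple symmetric, per-tensor post-training quantization in NumPy; the bit-width and rounding scheme are illustrative choices, not the specific schemes used in the works referenced above.

```python
import numpy as np

# Symmetric, per-tensor post-training quantization to b bits (illustrative only).
# Layer-wise optimized precision amounts to choosing a different b per layer.
def quantize(weights, bits=8):
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)   # map max |w| to max int
    q = np.clip(np.round(weights / scale),
                -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q.astype(np.int8 if bits <= 8 else np.int32), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 128).astype(np.float32)   # placeholder weight tensor
q, s = quantize(w, bits=8)
print(np.abs(w - dequantize(q, s)).max())           # worst-case quantization error
```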

Page 5: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

• Most IoT devices have limited computational resources: local inference not possible

Joint Device – Edge Inference

• Split the DNN: front layers run at the edge device, remaining layers at the edge server
• Feature size of intermediate layers can be larger than the input (up to 10x in VGG16)

[Figure: output feature tensor size of each residual block in ResNet-50]

A. E. Eshratifar et al., “Bottlenet: A deep learning architecture for intelligent mobile cloud computing services,” IEEE/ACM ISLPED, 2019.

Device-edge co-inference by splitting deep neural networks
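A minimal PyTorch sketch of such a split, assuming a toy CNN and an arbitrary split point (placeholders, not the architectures from the cited work): the device runs the front layers and transmits the intermediate feature tensor, and the edge server completes the inference.

```python
import torch
import torch.nn as nn

# Toy CNN used only for illustration; the split point is arbitrary.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

split = 4                          # layers [0:split] run on the device
device_part = model[:split]        # deployed on the IoT/edge device
server_part = model[split:]        # deployed at the edge server

x = torch.randn(1, 3, 64, 64)      # input captured on the device
features = device_part(x)          # intermediate feature tensor to transmit
print(features.shape)              # size of the tensor sent over the uplink
logits = server_part(features)     # inference completed at the edge server
```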

Page 6: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Joint Feature Encoding and Communication

• Consider feature encoding and communication jointly in an end-to-end fashion
• Prune the feature encoder network to reduce complexity

A. E. Eshratifar, A. Esmaili, and M. Pedram, "BottleNet: A deep learning architecture for intelligent mobile cloud computing services," IEEE/ACM ISLPED, 2019.
J. Shao and J. Zhang, "BottleNet++: An end-to-end approach for feature compression in device-edge co-inference systems," arXiv preprint arXiv:1910.14315, 2019.
M. Jankowski, D. Gündüz, and K. Mikolajczyk, "Deep joint transmission-recognition for power-constrained IoT devices," IEEE SPAWC, Jul. 2020.

[Figure: performance comparison of BottleNet, BottleNet++, and Pruning + JSCC]

Page 7: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Distributed Inference at the Edge

• Edge devices typically have low computational capacity and power: local inference not possible
• Decisions may rely on data (e.g., terrain information) available at an edge server (e.g., a base station), or on signals from other devices
• Latency becomes a major challenge: in self-driving cars, immediate detection of obstacles is critical to avoid accidents

Page 8: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Image Retrieval at the Edge

Standard approach:
- Transmit images to the cloud
- Determine the features most relevant for re-identification over the image database
- Re-ID baseline: deep convolutional neural network, e.g., ResNet-50

M. Jankowski, D. Gunduz, and K. Mikolajczyk, Wireless image retrieval at the edge, arXiv:2007.10915 [cs.IT].

Goal: match a pedestrian’s image from a wireless camera with another image in a large database

[Figure: pedestrian image mapped to a feature vector]

Page 9: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Retrieval-based Compression

[Figure: retrieval-based compression pipeline. Original images → Re-Identification Baseline → feature vector → Feature encoding → latent representation → Quantization → Arithmetic encoding → bitstream → Arithmetic decoding → Fully-Connected Classifier → ID predictions vs. ground truth; an Entropy model supplies the rate loss and the classifier the cross-entropy loss]

M. Jankowski, D. Gunduz, and K. Mikolajczyk, Deep joint source-channel coding for wireless image retrieval, ICASSP 2020.

Page 10: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Joint Compression and Channel Coding

E. Bourtsoulatze, D. Burth Kurka and D. Gündüz, Deep joint source-channel coding for wireless image transmission, IEEE Trans. on Cognitive Comms. and Networking, Sep. 2019.

[Figure: joint source-channel coding pipeline. Original images → Re-Identification Baseline → Feature encoding → channel symbols → AWGN channel → noisy channel symbols → Feature decoding → Fully-Connected Classifier (cross-entropy loss) → ID predictions vs. ground truth]

• Feature transmission is a joint source-channel coding problem

• Separation is suboptimal in general

• Joint approach provides better performance as well as "graceful degradation"

Page 11: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Person Re-identification Over Noisy Channels

• CUHK03 dataset: 14096 images of 1467 identities taken from two camera views
• 256x128 coloured images

[Figure: retrieval performance over noisy channels, B = 128 channel uses]

Page 12: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Distributed Training

• With the increase in data size and model complexity, ML tasks cannot be handled on a single machine
  • DeepMind's AlphaGo ran on 1920 CPUs and 280 GPUs
• Mobile edge devices employ the computation power of edge servers
• Data can be huge with limited information density
• Communication links are power- and bandwidth-limited
• Privacy concerns

[Figure: distributed learning/computing vs. federated learning: a master server coordinating Workers 1-4]

Page 13: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Distributed Training at the Edge

• Computationally limited edge devices can exploit edge servers for training large models
  • Increase computation speed
  • Introduce privacy and security

Challenges:
• Straggling servers
• Colluding servers
• Communication bottleneck

Page 14: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Distributed Linear Regression

• N labeled data points $(\mathbf{x}_i, y_i)$, $i = 1, \ldots, N$, collected in $\mathbf{X} \in \mathbb{R}^{N \times d}$ and $\mathbf{y} \in \mathbb{R}^{N}$
• Minimize the mean squared error over the parameter vector $\boldsymbol{\theta}$:
  $F(\boldsymbol{\theta}) = \frac{1}{2N} \| \mathbf{X}\boldsymbol{\theta} - \mathbf{y} \|^2$
• Gradient Descent:
  $\boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_t - \frac{\eta}{N} \left( \mathbf{X}^\top \mathbf{X}\, \boldsymbol{\theta}_t - \mathbf{X}^\top \mathbf{y} \right)$
  where $\mathbf{X}^\top \mathbf{X}$ (and $\mathbf{X}^\top \mathbf{y}$) remains constant over iterations, while $\mathbf{X}^\top \mathbf{X}\, \boldsymbol{\theta}_t$ is to be computed at each iteration.
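A minimal NumPy sketch of this recursion on toy data (step size and iteration count are arbitrary), highlighting which quantities stay fixed and which must be recomputed every iteration.

```python
import numpy as np

# Toy gradient descent for linear regression: X^T X and X^T y are computed once,
# while X^T X @ theta must be recomputed at every iteration; that matrix-vector
# product is the part to be distributed across workers.
rng = np.random.default_rng(0)
N, d = 1000, 20
X, y = rng.standard_normal((N, d)), rng.standard_normal(N)

XtX, Xty = X.T @ X, X.T @ y            # remain constant over iterations
theta, lr = np.zeros(d), 1e-3
for _ in range(200):
    grad = (XtX @ theta - Xty) / N     # to be computed at each iteration
    theta -= lr * grad
```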

Page 15: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Distributed Matrix-Vector Multiplication

Page 16: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Distributed Coded Matrix-Vector Multiplication

K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, “Speeding up distributed machine learning using codes,” IEEE Trans. Inform. Theory, Mar. 2018.
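To illustrate the idea, here is a toy (n, k) MDS-coded matrix-vector multiplication in NumPy; the real-valued Vandermonde encoding matrix and the dimensions are assumptions for this sketch only (such encodings are numerically stable only for small n), not the exact construction of the cited paper.

```python
import numpy as np

# Toy (n, k) MDS-coded matrix-vector multiplication: any k of the n worker
# results suffice to recover A @ x, so up to n - k stragglers are tolerated.
n, k = 5, 3
A = np.random.randn(k * 4, 6)            # data matrix, split row-wise into k blocks
x = np.random.randn(6)
blocks = np.split(A, k)                  # A_1, ..., A_k

G = np.vander(np.linspace(1, 2, n), k, increasing=True)    # n x k encoding matrix
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Worker i computes coded[i] @ x; suppose only these workers finish in time:
survivors = [0, 2, 4]
results = {i: coded[i] @ x for i in survivors}

# The master decodes from any k results by inverting the corresponding rows of G.
Gs = G[survivors]                                   # k x k and invertible (MDS)
stacked = np.stack([results[i] for i in survivors])
decoded = np.linalg.solve(Gs, stacked)              # recovers each A_j @ x
assert np.allclose(decoded.reshape(-1), A @ x)
```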

Page 17: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Distributed Matrix-Matrix Multiplication

• $\mathbf{X}^\top \mathbf{X}$ remains constant over iterations, while $\mathbf{X}^\top \mathbf{X}\, \boldsymbol{\theta}_t$ is to be computed at each iteration
• $\mathbf{X}^\top \mathbf{X}$ can be computed once in a distributed manner

Page 18: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Distributed Matrix-Matrix Multiplication

[Figure: master server distributing the computation across workers]

Page 19: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Gradient Coding

[Figure: gradient coding example with a master server; computation load r = 2]

R. Tandon, Q. Lei, A. G. Dimakis, and N. Karampatziakis, “Gradient coding,” arXiv preprint: arXiv:1612.03301, 2016.
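The classic three-worker example with computation load r = 2, sketched in NumPy with the coding and decoding coefficients from Tandon et al.; the partial gradients are random placeholders standing in for gradients computed on data partitions.

```python
import numpy as np

# Three workers, computation load r = 2: each worker computes two of the three
# partial gradients and sends one linear combination; the sum g1 + g2 + g3 is
# recoverable from ANY two workers (one straggler tolerated).
g1, g2, g3 = (np.random.randn(4) for _ in range(3))   # placeholder partial gradients

sent = {
    1: 0.5 * g1 + g2,      # worker 1 computed g1 and g2
    2: g2 - g3,            # worker 2 computed g2 and g3
    3: 0.5 * g1 + g3,      # worker 3 computed g1 and g3
}

# Decoding coefficients for every possible pair of non-straggling workers.
decode = {(1, 2): (2, -1), (1, 3): (1, 1), (2, 3): (1, 2)}

for (i, j), (a, b) in decode.items():
    recovered = a * sent[i] + b * sent[j]
    assert np.allclose(recovered, g1 + g2 + g3)
```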

Page 20: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Coded Computation and Learning
• Distributed polynomial codes
  • S. Li et al., "Polynomially coded regression: Optimal straggler mitigation via data encoding," 2018.
  • E. Ozfatura, Ulukus, and DG, "Straggler-aware distributed learning: Communication-computation latency trade-off," 2020.
  • Hasircioglu, Gomez-Vilardebo, and DG, "Bivariate polynomial coding for exploiting stragglers in heterogeneous coded computing systems," 2020.
• Coded computing for privacy/security
  • Yu et al., "Lagrange coded computing: Optimal design for resiliency, security and privacy," 2019.
  • D'Oliveira et al., "GASP codes for secure distributed matrix multiplication," 2018.
  • Mital, Ling, and DG, "Secure distributed matrix computation with discrete Fourier transform," submitted, Jul. 2020.
• Computation scheduling
  • Mohammadi Amiri and DG, "Computation scheduling for distributed machine learning with straggling workers," 2019.
• Partial recovery
  • E. Ozfatura, S. Ulukus, and DG, "Coded distributed computing with partial recovery," submitted, 2020.
• Dynamic coded computation
  • Buyukates et al., "Gradient coding with dynamic clustering for straggler mitigation," 2020.

Page 21: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Federated Edge Learning (FEEL)

• Wireless devices with their own data
• FL provides not only privacy but also communication efficiency
• Channels are time-varying and inhomogeneous across devices
• Devices interfere with each other

B. McMahan et al., "Communication-efficient learning of deep networks from decentralized data," Int'l Conf. on Artificial Int. and Stat., 2017.


Page 22: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Device Scheduling

• Device selection should take channels into account
• Let each device compress its update to d bits
• Allocate bandwidth across users

[Figure: devices communicating with the base station (PS)]

• Minimize the delay in each round (a toy bandwidth-allocation sketch follows below)
• Alternatively, fix a deadline and maximize the number of participating devices
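A toy sketch of channel-aware bandwidth allocation, under the assumed model that device k uploads d bits at spectral efficiency r_k over its allocated bandwidth b_k; this simple model is an illustration, not necessarily the exact formulation on the slide. Minimizing the per-round delay then equalizes the upload times.

```python
import numpy as np

# Device k uploads d bits at spectral efficiency r_k over bandwidth b_k, so its
# upload time is t_k = d / (b_k * r_k). Minimizing the round delay max_k t_k
# subject to sum(b_k) = B equalizes all upload times: b_k is proportional to 1/r_k.
def allocate_bandwidth(rates, B, d):
    rates = np.asarray(rates, dtype=float)   # bits/s/Hz of the scheduled devices
    weights = 1.0 / rates
    b = B * weights / weights.sum()          # Hz allocated to each device
    delay = d / (b * rates)                  # identical for every device
    return b, delay[0]

b, round_delay = allocate_bandwidth(rates=[2.0, 0.5, 1.0], B=1e6, d=1e5)
print(b, round_delay)
```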

Page 23: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Channel-Aware Device Scheduling

Choose 20 out of 100 devices randomly distributed in a 500m radius cell

CIFAR-10 training

Page 24: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Age-based Device Scheduling

Yang et al., Age-based scheduling policy for federated learning in mobile edge networks, 2019.

MNIST classification with SVM

Page 25: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Hierarchical FEEL
• Performance limited by the worst user's channel

M. Abad, E. Ozfatura, D. Gunduz, and O. Ercetin, Hierarchical federated learning across heterogeneous cellular networks, ICASSP 2020.

[Figure: hierarchical architecture: mobile users (MUs) connected to small-cell base stations (SBSs), which connect to the macro base station (MBS); separate uplink and downlink phases]

• Intra-cluster gradient aggregation at the SBSs, inter-cluster model averaging at the MBS (a toy sketch of this two-level aggregation follows below)
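A toy NumPy sketch of this two-level update; all gradients are random placeholders standing in for local SGD steps, and the cluster sizes and learning rate are arbitrary.

```python
import numpy as np

# Each of the C clusters runs H rounds of intra-cluster gradient aggregation over
# its D devices, then the macro BS averages the resulting cluster models.
rng = np.random.default_rng(3)
C, D, d, H, lr = 7, 4, 50, 2, 0.1   # clusters, devices/cluster, model dim, local rounds, step size
global_model = np.zeros(d)

for _ in range(10):                  # global rounds
    cluster_models = []
    for c in range(C):
        w = global_model.copy()
        for _ in range(H):           # intra-cluster gradient aggregation
            local_grads = rng.standard_normal((D, d))   # placeholder local gradients
            w -= lr * local_grads.mean(axis=0)
        cluster_models.append(w)
    global_model = np.mean(cluster_models, axis=0)      # inter-cluster model averaging
```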

Page 26: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Hierarchical FEEL over Cellular Networks
• 28 devices uniformly distributed in a circular area with radius 750 m, 7 clusters
• 600 subcarriers with subcarrier spacing of 30 kHz
• Max transmit power of BS, SBSs, and devices: 20 W, 6.3 W, and 0.2 W, respectively
• CIFAR-10 image classification
• ResNet-18 architecture

M. Abad, E. Ozfatura, D. Gunduz, and O. Ercetin, Hierarchical federated learning across heterogeneous cellular networks, arXiv preprint, Sep. 2019.

H: number of intra-cluster SGD iterations per global model update

Reduction in latency (pathloss exponent 3.5):
  H = 2: 35x | H = 4: 39.5x | H = 6: 41.5x

Page 27: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Update-aware Device Scheduling
• Scheduled devices quantize their model updates based on their link qualities

Mohammadi Amiri, DG, Kulkarni and Poor, Convergence of update aware device scheduling for federated learning at the wireless edge, 2020.

• Best Channel (BC): Allocate channel resources so that each device transmits the same number of bits

• Best l2 norm (BN2): Schedule based on the l2 norm of the local updates; allocate channel resources proportionally to the l2 norms

• Best Channel - Best l2 norm (BC-BN2): Choose the top Kc channels, then the top K among those based on their updates; bandwidth allocation as in BN2 (a toy sketch of this policy follows below)

• Best l2 norm - Channel (BN2-C): Devices compute quantized updates assuming the full channel bandwidth; choose the top K devices based on the l2 norm of the quantized updates; bandwidth allocation as in BN2
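A toy sketch of the BC-BN2 selection step with random placeholder channel gains and update norms; Kc and K are arbitrary here.

```python
import numpy as np

# First keep the Kc devices with the best channels, then among those schedule the
# K devices whose updates have the largest l2 norm.
rng = np.random.default_rng(2)
M, Kc, K = 40, 20, 10
channel_gains = rng.rayleigh(size=M)                               # placeholder gains
update_norms = np.linalg.norm(rng.standard_normal((M, 100)), axis=1)

best_channels = np.argsort(channel_gains)[-Kc:]                    # top-Kc channels
scheduled = best_channels[np.argsort(update_norms[best_channels])[-K:]]
print(sorted(scheduled.tolist()))                                  # scheduled device indices
```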


Page 28: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Update-aware Device Scheduling

- MNIST
- 3 local iterations
- M = 40 devices
- i.i.d. data distribution

Better to schedule only one device!

• BN2 and BC-BN2 give the best performance: it is important to consider updates as well as channels when scheduling devices

Page 29: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Update-aware Device Scheduling: Non-iid Data

- MNIST dataset, multi-layer perceptron

- 3 local iterations
- M = 40 devices
- Non-i.i.d. data distribution

Better to schedule 10 devices!

BN2-C policy

Page 30: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Federated Learning at the Wireless Edge
• Mobile devices connected to the PS through wireless links: a bandlimited multiple access channel (MAC)
• Digital approach: separate learning and communication
• Analog approach: let the channel do the gradient averaging

Mohammadi Amiri and DG, Machine learning at the wireless edge: Distributed stochastic gradient descent over-the-air, 2020.

[Figure: Devices 1 to K transmitting over a noisy multiple access channel to the parameter server]

Page 31: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Analog Distributed Gradient Descent (A-DSGD)
• All workers transmit simultaneously
• Gradient dimension typically >> bandwidth
  • (A 50-layer ResNet has ~26 million weight parameters)
• Thresholding to sparsify gradient estimates
• Pseudo-random linear projection to reduce bandwidth (a minimal sketch follows after the figure below)

[Figure: each device applies the same pseudo-random projection Φ to its sparsified gradient before transmitting to the parameter server over the noisy channel]
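A minimal NumPy sketch of the sparsify-and-project step, with toy dimensions in place of the slide's d = 7850 and sparsity/bandwidth d/2; the common seed stands in for the pseudo-random projection shared by the devices and the PS, and the reconstruction step at the PS (e.g., via a sparse recovery algorithm) is omitted.

```python
import numpy as np

# Sparsify the local gradient by keeping its largest-magnitude entries, then
# compress it with a pseudo-random projection Phi shared by all devices and the PS.
d, s, m = 1000, 500, 500                  # toy gradient dim, sparsity level, bandwidth
rng = np.random.default_rng(0)            # common seed -> identical Phi everywhere
Phi = rng.standard_normal((m, d)) / np.sqrt(m)

def compress(grad):
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-s:]   # keep the s largest-magnitude entries
    sparse[idx] = grad[idx]
    return Phi @ sparse                   # m < d real symbols to transmit

# Each device transmits compress(g_k); over the air the PS observes the sum of
# the projections plus noise and reconstructs a sparse aggregate from it.
g = np.random.default_rng(1).standard_normal(d)
print(compress(g).shape)
```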

Page 32: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

Over-the-air Federated Image Classification
• Distributed MNIST classification (single layer with 10 neurons, ADAM)
• Parameter vector size d = 28 x 28 x 10 + 10 = 7850
• Bandwidth: d/2 symbols
• Sparsity level: d/2

[Figure: results for i.i.d. and non-i.i.d. data]

Page 33: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

FEEL over Fading Wireless Channels
• Channel gains known at the transmitters: requires channel inversion (a toy sketch follows after the figure below)

M. Mohammadi Amiri and D. Gündüz, Federated learning over wireless fading channels, IEEE Transactions on Wireless Communications, 2020.
G. Zhu et al., Broadband analog aggregation for low-latency federated edge learning, IEEE Trans. Wireless Commun., 2020.

[Figure: Devices 1 to K with fading channel gains h1, h2, ..., hK transmitting to the parameter server over a noisy channel]
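A toy sketch of analog aggregation with channel inversion, assuming real-valued channel gains known at the transmitters and ignoring transmit-power constraints (both simplifications for illustration).

```python
import numpy as np

# Each device pre-scales its (real) update by 1/h_k so that the superimposed
# signal at the PS is the plain sum of the updates plus noise.
rng = np.random.default_rng(4)
K, d = 5, 100
updates = rng.standard_normal((K, d))
h = rng.uniform(0.5, 1.5, size=K)              # placeholder channel gains

tx = updates / h[:, None]                      # channel inversion pre-scaling
rx = (h[:, None] * tx).sum(axis=0) + 0.1 * rng.standard_normal(d)   # over-the-air sum
estimate = rx / K                              # PS estimate of the average update
print(np.abs(estimate - updates.mean(axis=0)).max())
```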

Page 34: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

FEEL with Over-the-Air Majority Voting
• Analog over-the-air aggregation may not be applicable to legacy systems
• Instead, use single-bit quantization à la signSGD and over-the-air majority voting:

G. Zhu, Y. Du, DG, K. Huang, One-bit over-the-air aggregation for communication-efficient federated edge learning: Design and convergence analysis, 2020.

[Figure: devices transmitting one-bit updates to the parameter server over a noisy channel]

• Map each bit to a BPSK symbol (or QPSK)
• Truncated channel inversion: remain silent in poor channel conditions with probability α
• PS sends back the sign of the aggregated received signal, i.e., the majority vote (see the toy sketch below)
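A toy NumPy sketch of the one-bit aggregation and majority-vote decision; the local gradients are random placeholders, and the fading channel and truncation step are omitted for simplicity.

```python
import numpy as np

# Each device sends the sign of its local gradient as BPSK symbols, the channel
# sums them, and the PS broadcasts the sign of the noisy aggregate (majority vote).
rng = np.random.default_rng(1)
K, d = 10, 1000
local_grads = rng.standard_normal((K, d)) + 0.5     # placeholder local gradients

tx = np.sign(local_grads)                           # one-bit quantization -> BPSK
rx = tx.sum(axis=0) + rng.standard_normal(d)        # superposition over the MAC + noise
majority_vote = np.sign(rx)                         # decision broadcast by the PS

# Devices then update: theta <- theta - lr * majority_vote
```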

Page 35: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

FEEL with Over-the-Air Majority Voting

G. Zhu, Y. Du, D. Gunduz, K. Huang, One-bit over-the-air aggregation for communication-efficient federated edge learning: Design and convergence analysis, 2020.

[Figure: performance comparison with analog aggregation]

Page 36: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

FEEL with Blind Transmitters

• Channel gains unknown at the transmitters

M. Amiri, T. Duman, and D. Gunduz, "Collaborative machine learning at the wireless edge with blind transmitters," IEEE GlobalSIP, Nov. 2019.

[Figure: Users 1 to K transmitting over a noisy channel to a multi-antenna base station; M = 20 devices]

Page 37: IEEE GLOBECOM 2020 Tutorial on Distributed Deep Learning ...

For more information about our work: Information Processing and Communications Lab

www.imperial.ac.uk/ipc-lab

IEEE JSAC Special Series on Machine Learning in Communications and Networks, Jan 2021

IEEE Communications Magazine, Special Issue on Communication Technologies for Efficient Edge Learning, Dec. 2020

D. Gündüz, D. Burth Kurka, M. Jankowski, M. Mohammadi Amiri, E. Ozfatura, and S. Sreekumar, Communicate to learn at the edge, IEEE Communications Magazine, 2020.

D. Gunduz, P. de Kerret, N. Sidiropoulos, D. Gesbert, C. Murthy, and M. van der Schaar, Machine learning in the air, IEEE Journal on Selected Areas in Communications, Oct. 2019.