
Elements of Network Information Theory

Abbas El Gamal and Young-Han Kim

Stanford University and UC San Diego

Tutorial, ISIT 2011

Slides available at http://isl.stanford.edu/˜abbas


Introduction

Networked Information Processing System

Communication network

System: Internet, peer-to-peer network, sensor network, . . .

Sources: Data, speech, music, images, video, sensor data

Nodes: Handsets, base stations, processors, servers, sensor nodes, . . .

Network: Wired, wireless, or a hybrid of the two

Task: Communicate the sources, or compute/make decision based on them


Introduction

Network Information Flow Questions

Communication network

What is the limit on the amount of communication needed?

What are the coding schemes/techniques that achieve this limit?

Shannon (1948): Noisy point-to-point communication

Ford–Fulkerson, Elias–Feinstein–Shannon (1956): Graphical unicast networks


Introduction

Network Information Theory

A simplistic model of the network as a graph with point-to-point links and forwarding nodes does not capture many important aspects of real-world networks:

• Networked systems have multiple sources and destinations

• The network task is often to compute a function or to make a decision

• Many networks allow for feedback and interactive communication

• The wireless medium is a shared broadcast medium

• Network security is often a primary concern

• Source–channel separation does not hold for networks

• Data arrival and network topology evolve dynamically

Network information theory aims to answer the information flow questions while capturing some of these aspects of real-world networks


Introduction

Brief History

First paper: Shannon (1961) “Two-way communication channels”

• He didn't find the optimal rates (capacity region)

• The problem remains open

Significant research activity in the 70s and early 80s produced many new results and techniques, but

• Many basic problems remained open

• Little interest from information and communication theorists

Wireless communications and the Internet revived interest in the mid-90s

• Some progress on old open problems and many new models and problems

• Coding techniques such as successive cancellation, superposition, Slepian–Wolf, Wyner–Ziv, successive refinement, writing on dirty paper, and network coding are beginning to impact real-world networks


Introduction

Network Information Theory Book

The book provides a comprehensive coverage of key results, techniques, and open problems in network information theory

The organization balances the introduction of new techniques and new models

The focus is on discrete memoryless and Gaussian network models

We discuss extensions (if any) to many users and large networks

The proofs use elementary tools and techniques

We use clean and unified notation and terminology


Introduction

Book Organization

Part I. Preliminaries (Chapters 2, 3): Review of basic information measures, typicality, Shannon's theorems. Introduction of key lemmas

Part II. Single-hop networks (Chapters 4 to 14): Networks with single-round, one-way communication

• Independent messages over noisy channels

• Correlated (uncompressed) sources over noiseless links

• Correlated sources over noisy channels

Part III. Multihop networks (Chapters 15 to 20): Networks with relaying and multiple communication rounds

• Independent messages over graphical networks

• Independent messages over general networks

• Correlated sources over graphical networks

Part IV. Extensions (Chapters 21 to 24): Extensions to distributed computing, secrecy, wireless fading channels, and information theory and networking


Introduction

Tutorial Objectives

Focus on an elementary and unified approach to coding schemes

• Typicality and simple "universal" lemmas for DM models

Lossless source coding as a corollary of lossy source coding

Extending achievability proofs from DM to Gaussian models

Illustrate the approach through proofs of several classical coding theorems


Introduction

Outline

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

→ 10-minute break

→ 10-minute break


Typical Sequences

Typical Sequences

Empirical pmf (or type) of xn ∈ Xn:

π(x|xn) = |{i: xi = x}| / n for x ∈ X

Typical set (Orlitsky–Roche 2001): For X ∼ p(x) and є > 0,

T(n)є(X) = {xn: |π(x|xn) − p(x)| ≤ є ⋅ p(x) for all x ∈ X} = T(n)є

Typical Average Lemma

Let xn ∈ T(n)є(X) and g(x) ≥ 0. Then

(1 − є) E(g(X)) ≤ (1/n) ∑ni=1 g(xi) ≤ (1 + є) E(g(X))
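These definitions are easy to check numerically. A minimal sketch in Python (the helper names, the small alphabet, and the choice of g are illustrative assumptions, not from the slides):

```python
import numpy as np

def empirical_pmf(xn, alphabet):
    """Type pi(x | x^n): fraction of positions i with x_i = x."""
    xn = np.asarray(xn)
    return np.array([np.mean(xn == a) for a in alphabet])

def is_typical(xn, p, alphabet, eps):
    """x^n in T_eps^(n)(X) iff |pi(x|x^n) - p(x)| <= eps * p(x) for all x."""
    pi = empirical_pmf(xn, alphabet)
    return bool(np.all(np.abs(pi - p) <= eps * p))

rng = np.random.default_rng(0)
alphabet = np.array([0, 1, 2])
p = np.array([0.5, 0.3, 0.2])
xn = rng.choice(alphabet, size=10_000, p=p)
print(is_typical(xn, p, alphabet, eps=0.1))      # True w.h.p. (LLN)

# Typical average lemma: for a nonnegative g, the empirical average of
# g(x_i) over a typical x^n is within a factor (1 +/- eps) of E g(X)
g = np.array([1.0, 4.0, 9.0])                    # an arbitrary g >= 0
avg, Eg = g[xn].mean(), (p * g).sum()
print((1 - 0.1) * Eg <= avg <= (1 + 0.1) * Eg)   # True when x^n is typical
```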


Typical Sequences

Properties of Typical Sequences

Let xn ∈ T(n)є(X) and p(xn) = ∏ni=1 pX(xi). Then

2−n(H(X)+δ(є)) ≤ p(xn) ≤ 2−n(H(X)−δ(є)),

where δ(є) → 0 as є → 0 (notation: p(xn) ≐ 2−nH(X))

|T(n)є(X)| ≐ 2nH(X) for n sufficiently large

Let Xn ∼ ∏ni=1 pX(xi). Then by the LLN, limn→∞ P{Xn ∈ T(n)є} = 1

[Figure: the typical set T(n)є(X) inside Xn — a typical xn satisfies p(xn) ≐ 2−nH(X), |T(n)є| ≐ 2nH(X), and P{Xn ∈ T(n)є} ≥ 1 − є]


Typical Sequences

Jointly Typical Sequences

Joint type of (xn, yn) ∈ Xn × Yn:

π(x, y|xn, yn) = |{i: (xi, yi) = (x, y)}| / n for (x, y) ∈ X × Y

Jointly typical set: For (X, Y) ∼ p(x, y) and є > 0,

T(n)є(X, Y) = {(xn, yn): |π(x, y|xn, yn) − p(x, y)| ≤ є ⋅ p(x, y) for all (x, y)} = T(n)є((X, Y))

Let (xn, yn) ∈ T(n)є(X, Y) and p(xn, yn) = ∏ni=1 pX,Y(xi, yi). Then

• xn ∈ T(n)є(X) and yn ∈ T(n)є(Y)

• p(xn) ≐ 2−nH(X), p(yn) ≐ 2−nH(Y), and p(xn, yn) ≐ 2−nH(X,Y)

• p(xn|yn) ≐ 2−nH(X|Y) and p(yn|xn) ≐ 2−nH(Y|X)


Typical Sequences

Conditionally Typical Sequences

Conditionally typical set: For xn ∈ Xn,

T(n)є(Y|xn) = {yn: (xn, yn) ∈ T(n)є(X, Y)}

|T(n)є(Y|xn)| ≤ 2n(H(Y|X)+δ(є))

Conditional Typicality Lemma

Let (X, Y) ∼ p(x, y). If xn ∈ T(n)є′(X) and Yn ∼ ∏ni=1 pY|X(yi|xi), then for є > є′,

limn→∞ P{(xn, Yn) ∈ T(n)є(X, Y)} = 1

If xn ∈ T(n)є′(X) and є > є′, then for n sufficiently large,

|T(n)є(Y|xn)| ≥ 2n(H(Y|X)−δ(є))

Let X ∼ p(x), Y = g(X), and xn ∈ T(n)є(X). Then

yn ∈ T(n)є(Y|xn) iff yi = g(xi), i ∈ [1 : n]
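A matching numerical sketch for joint typicality and the conditional typicality lemma (the alphabet, channel pY|X, and helper names are illustrative assumptions):

```python
import numpy as np

def joint_type(xn, yn, X, Y):
    """pi(x, y | x^n, y^n) as an |X| x |Y| matrix."""
    xn, yn = np.asarray(xn), np.asarray(yn)
    return np.array([[np.mean((xn == x) & (yn == y)) for y in Y] for x in X])

def jointly_typical(xn, yn, pxy, X, Y, eps):
    """(x^n, y^n) in T_eps^(n)(X, Y)."""
    return bool(np.all(np.abs(joint_type(xn, yn, X, Y) - pxy) <= eps * pxy))

# Conditional typicality lemma, numerically: fix a typical x^n, draw
# Y^n ~ prod p(y_i | x_i); then (x^n, Y^n) is jointly typical w.h.p.
rng = np.random.default_rng(1)
X = Y = np.array([0, 1])
px = np.array([0.5, 0.5])
py_x = np.array([[0.9, 0.1],      # p(y | x = 0)
                 [0.2, 0.8]])     # p(y | x = 1)
pxy = px[:, None] * py_x
n = 20_000
xn = rng.choice(X, size=n, p=px)
yn = np.array([rng.choice(Y, p=py_x[x]) for x in xn])
print(jointly_typical(xn, yn, pxy, X, Y, eps=0.1))   # True w.h.p.
```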


Typical Sequences

Illustration of Joint Typicality

[Figure: on the xn axis, T(n)є(X) with |⋅| ≐ 2nH(X); on the yn axis, T(n)є(Y) with |⋅| ≐ 2nH(Y); inside their product, the jointly typical set T(n)є(X, Y) with |⋅| ≐ 2nH(X,Y), with slices T(n)є(Y|xn) and T(n)є(X|yn)]


Typical Sequences

Another Illustration of Joint Typicality

[Figure: a typical xn ∈ T(n)є(X) is mapped by the channel into the conditionally typical set T(n)є(Y|xn) ⊆ T(n)є(Y), of size |⋅| ≐ 2nH(Y|X)]


Typical Sequences

Joint Typicality for Random Triples

Let (X, Y, Z) ∼ p(x, y, z). The set of typical sequences is

T(n)є(X, Y, Z) = T(n)є((X, Y, Z))

Joint Typicality Lemma

Let (X, Y, Z) ∼ p(x, y, z) and є′ < є. Then for some δ(є) → 0 as є → 0:

If (xn, yn) is arbitrary and Zn ∼ ∏ni=1 pZ|X(zi|xi), then

P{(xn, yn, Zn) ∈ T(n)є(X, Y, Z)} ≤ 2−n(I(Y;Z|X)−δ(є))

If (xn, yn) ∈ T(n)є′ and Zn ∼ ∏ni=1 pZ|X(zi|xi), then for n sufficiently large,

P{(xn, yn, Zn) ∈ T(n)є(X, Y, Z)} ≥ 2−n(I(Y;Z|X)+δ(є))


Typical Sequences

Summary

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

→ Typical average lemma

→ Conditional typicality lemma

→ Joint typicality lemma


Point-to-Point Communication

Discrete Memoryless Channel (DMC)

Point-to-point communication system:

M → Encoder → Xn → p(y|x) → Yn → Decoder → M̂

Assume a discrete memoryless channel (DMC) model (X, p(y|x), Y)

• Discrete: finite alphabets

• Memoryless: when used over n transmissions with message M and input Xn, p(yi|xi, yi−1, m) = pY|X(yi|xi); when used without feedback, p(yn|xn, m) = ∏ni=1 pY|X(yi|xi)

A (2nR, n) code for the DMC:

• Message set [1 : 2nR] = {1, 2, . . . , 2⌈nR⌉}

• Encoder: a codeword xn(m) for each m ∈ [1 : 2nR]

• Decoder: an estimate m̂(yn) ∈ [1 : 2nR] ∪ {e} for each yn


Point-to-Point Communication

M → Encoder → Xn → p(y|x) → Yn → Decoder → M̂

Assume M ∼ Unif[1 : 2nR]

Average probability of error: P(n)e = P{M̂ ≠ M}

Assume a cost b(x) ≥ 0 with b(x0) = 0

Average cost constraint: ∑ni=1 b(xi(m)) ≤ nB for every m ∈ [1 : 2nR]

R achievable if ∃ (2nR, n) codes that satisfy the cost constraint with limn→∞ P(n)e = 0

Capacity–cost function C(B) of the DMC p(y|x) with average cost constraint B on X is the supremum of all achievable rates

Channel Coding Theorem (Shannon 1948)

C(B) = maxp(x): E(b(X))≤B I(X; Y)
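The theorem reduces capacity to a finite-dimensional optimization over p(x). A sketch of the classical Blahut–Arimoto iteration for the cost-unconstrained case (B large enough that the constraint is inactive); the algorithm is standard but not part of the slides:

```python
import numpy as np

def blahut_arimoto(py_x, iters=200):
    """max_{p(x)} I(X; Y) for a DMC given as an |X| x |Y| row-stochastic matrix."""
    nx, _ = py_x.shape
    p = np.full(nx, 1.0 / nx)                    # current input pmf p(x)
    for _ in range(iters):
        joint = p[:, None] * py_x                # p(x, y)
        q = joint / joint.sum(axis=0)            # posterior q(x | y)
        logr = (py_x * np.log(q + 1e-300)).sum(axis=1)
        p = np.exp(logr - logr.max())            # p(x) ~ exp E[log q(x|Y) | X = x]
        p /= p.sum()
    joint = p[:, None] * py_x
    py = joint.sum(axis=0)
    C = (joint * (np.log2(py_x + 1e-300) - np.log2(py))).sum()
    return C, p

# BSC(0.1): capacity is 1 - H(0.1) ~ 0.531 bits, attained by uniform p(x)
C, p = blahut_arimoto(np.array([[0.9, 0.1], [0.1, 0.9]]))
print(round(C, 3), p)
```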


Point-to-Point Communication

Proof of Achievability

We use random coding and joint typicality decoding

Codebook generation:

• Fix p(x) that attains C(B/(1 + є))

• Randomly and independently generate 2nR sequences xn(m) ∼ ∏ni=1 pX(xi), m ∈ [1 : 2nR]

Encoding:

• To send message m, the encoder transmits xn(m) if xn(m) ∈ T(n)є
  (by the typical average lemma, ∑ni=1 b(xi(m)) ≤ nB)

• Otherwise it transmits (x0, . . . , x0)

Decoding:

• The decoder declares that m̂ was sent if it is the unique message such that (xn(m̂), yn) ∈ T(n)є

• Otherwise it declares an error
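A toy Monte Carlo sketch of this scheme on a BSC(0.1), where joint typicality of (xn, yn) reduces to the empirical flip fraction being close to the crossover probability; the parameters and the absolute typicality margin delta are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, R = 0.1, 60, 0.25          # BSC(0.1): C = 1 - H(0.1) ~ 0.53 > R
M = 2 ** int(n * R)              # 2^{nR} = 32768 messages
delta = 0.08                     # typicality margin (illustrative)

# Codebook generation: x^n(m) i.i.d. Bern(1/2), m = 0, ..., M - 1
codebook = rng.integers(0, 2, size=(M, n), dtype=np.int8)

def decode(yn):
    """Joint typicality decoding: for a BSC, (x^n(m), y^n) typical iff the
    empirical flip fraction is within delta of the crossover probability p."""
    flips = (codebook ^ yn).mean(axis=1)
    hits = np.flatnonzero(np.abs(flips - p) <= delta)
    return hits[0] if hits.size == 1 else -1     # unique message, else error

errors, trials = 0, 100
for _ in range(trials):
    m = rng.integers(M)
    yn = codebook[m] ^ (rng.random(n) < p)       # send x^n(m) over BSC(p)
    errors += int(decode(yn) != m)
print(f"empirical P(error) ~ {errors / trials:.2f}")  # small, since R < C
```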


Point-to-Point Communication

Analysis of the Probability of Error

Consider the probability of error P(E) averaged over M and codebooks

Assume M = 1 (by the symmetry of codebook generation)

The decoder makes an error iff one or both of the following events occur:

E1 = {(Xn(1), Yn) ∉ T(n)є}

E2 = {(Xn(m), Yn) ∈ T(n)є for some m ≠ 1}

Thus, by the union of events bound,

P(E) = P(E|M = 1) = P(E1 ∪ E2) ≤ P(E1) + P(E2)


Point-to-Point Communication

Analysis of the Probability of Error

Consider the first term

P(E1) = P{(Xn(1), Yn) ∉ T(n)є}
= P{Xn(1) ∈ T(n)є, (Xn(1), Yn) ∉ T(n)є} + P{Xn(1) ∉ T(n)є, (Xn(1), Yn) ∉ T(n)є}
≤ ∑xn∈T(n)є ∏ni=1 pX(xi) ∑yn∉T(n)є(Y|xn) ∏ni=1 pY|X(yi|xi) + P{Xn(1) ∉ T(n)є}
≤ ∑(xn,yn)∉T(n)є ∏ni=1 pX(xi) pY|X(yi|xi) + P{Xn(1) ∉ T(n)є}

By the LLN, each term → 0 as n → ∞


Point-to-Point Communication

Analysis of the Probability of Error

Consider the second term

P(E2) = P{(Xn(m), Yn) ∈ T(n)є for some m ≠ 1}

For m ≠ 1, Xn(m) ∼ ∏ni=1 pX(xi), independent of Yn ∼ ∏ni=1 pY(yi)

[Figure: the wrong codewords Xn(2), . . . , Xn(m), . . . in Xn are independent of Yn, which lies in T(n)є(Y) ⊆ Yn]

To bound P(E2), we use the packing lemma


Point-to-Point Communication

Packing Lemma

Let (U, X, Y) ∼ p(u, x, y)

Let (Ũn, Ỹn) ∼ p(ũn, ỹn) be arbitrarily distributed

Let Xn(m) ∼ ∏ni=1 pX|U(xi|ũi), m ∈ A, where |A| ≤ 2nR, be pairwise conditionally independent of Ỹn given Ũn

Packing Lemma

There exists δ(є) → 0 as є → 0 such that

limn→∞ P{(Ũn, Xn(m), Ỹn) ∈ T(n)є for some m ∈ A} = 0,

if R < I(X; Y|U) − δ(є)


Point-to-Point Communication

Analysis of the Probability of Error

Consider the second term

P(E2) = P{(Xn(m), Yn) ∈ T(n)є for some m ≠ 1}

For m ≠ 1, Xn(m) ∼ ∏ni=1 pX(xi), independent of Yn ∼ ∏ni=1 pY(yi)

Hence, by the packing lemma with A = [2 : 2nR] and U = ∅, P(E2) → 0 if

R < I(X; Y) − δ(є) = C(B/(1 + є)) − δ(є)

Since P(E) → 0 as n → ∞, there must exist a sequence of (2nR, n) codes with limn→∞ P(n)e = 0 if R < C(B/(1 + є)) − δ(є)

By the continuity of C(B) in B, C(B/(1 + є)) → C(B) as є → 0, which implies the achievability of every rate R < C(B)


Point-to-Point Communication Gaussian Channel

Gaussian Channel

Discrete-time additive white Gaussian noise channel

Y = gX + Z

• g: channel gain (path loss)

• {Zi}: WGN(N0/2) process, independent of M

Average power constraint: ∑ni=1 x2i(m) ≤ nP for every m

• Assume N0/2 = 1 and label the received power g2P as S (SNR)

Theorem (Shannon 1948)

C = maxF(x): E(X2)≤P I(X; Y) = (1/2) log(1 + S)
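A quick numerical reading of the formula (assuming S is the received SNR in linear scale; the snippet is illustrative):

```python
import numpy as np

def gaussian_capacity(snr):
    """C = (1/2) log2(1 + S) bits per transmission."""
    return 0.5 * np.log2(1 + snr)

for snr_db in (0, 10, 20, 30):
    S = 10 ** (snr_db / 10)
    print(f"{snr_db:2d} dB: C = {gaussian_capacity(S):.3f} bits/transmission")
# at high SNR, every extra 6 dB of S buys about one more bit per transmission
```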


Point-to-Point Communication Gaussian Channel

Proof of Achievability

We extend the proof for the DMC using a discretization procedure (McEliece 1977)

First note that the capacity is attained by X ∼ N(0, P), i.e., I(X; Y) = C

Let [X]j be a finite quantization of X such that E([X]2j) ≤ E(X2) = P and [X]j → X in distribution

Let Yj = g[X]j + Z and let [Yj]k be a finite quantization of Yj

By the achievability proof for the DMC, I([X]j; [Yj]k) is achievable for every j, k

By the data processing inequality and the maximum differential entropy lemma,

I([X]j; [Yj]k) ≤ I([X]j; Yj) = h(Yj) − h(Z) ≤ h(Y) − h(Z) = I(X; Y)

By weak convergence and the dominated convergence theorem,

lim infj→∞ limk→∞ I([X]j; [Yj]k) = lim infj→∞ I([X]j; Yj) ≥ I(X; Y)

Combining the two bounds, I([X]j; [Yj]k) → I(X; Y) as j, k → ∞


Point-to-Point Communication Gaussian Channel

Summary

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

→ Random coding

→ Joint typicality decoding

→ Packing lemma

→ Discretization procedure for Gaussian


Multiple Access Channel

DM Multiple Access Channel (MAC)

Multiple access communication system (uplink)

M1 → Encoder 1 → Xn1 and M2 → Encoder 2 → Xn2 → p(y|x1, x2) → Yn → Decoder → (M̂1, M̂2)

Assume a 2-sender DM-MAC model (X1 × X2, p(y|x1, x2), Y)

A (2nR1, 2nR2, n) code for the DM-MAC:

• Message sets: [1 : 2nR1] and [1 : 2nR2]

• Encoder j = 1, 2: xnj(mj)

• Decoder: (m̂1(yn), m̂2(yn))

Assume (M1, M2) ∼ Unif([1 : 2nR1] × [1 : 2nR2]): xn1(M1) and xn2(M2) independent

Average probability of error: P(n)e = P{(M̂1, M̂2) ≠ (M1, M2)}

(R1, R2) achievable: if ∃ (2nR1, 2nR2, n) codes with limn→∞ P(n)e = 0

Capacity region: closure of the set of achievable (R1, R2)


Multiple Access Channel

Theorem (Ahlswede 1971, Liao 1972, Slepian–Wolf 1973b)

Capacity region of DM-MAC p(y|x1 , x2) is the set of rate pairs (R1 , R2) such that

R1 ≤ I(X1; Y|X2, Q),
R2 ≤ I(X2; Y|X1, Q),
R1 + R2 ≤ I(X1, X2; Y|Q)

for some pmf p(q)p(x1|q)p(x2|q), where Q is an auxiliary (time-sharing) r.v.

[Figure: the capacity region is a pentagon in the (R1, R2) plane, with R1 ≤ C1, R2 ≤ C2, and sum-rate face R1 + R2 ≤ C12]

Individual capacities: C1 = maxp(x1), x2 I(X1; Y|X2 = x2) and C2 = maxp(x2), x1 I(X2; Y|X1 = x1)

Sum-capacity: C12 = maxp(x1)p(x2) I(X1, X2; Y)
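As an illustration, these three bounds can be evaluated for the binary adder MAC Y = X1 + X2 with independent uniform inputs and constant Q (a standard example, assumed here; it is not taken from the slides):

```python
import numpy as np

def H(p):
    """Entropy in bits of a pmf given as a numpy array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Binary adder MAC: Y = X1 + X2, X1, X2 ~ Bern(1/2) independent, Q constant
p = np.zeros((2, 2, 3))                     # p(x1, x2, y)
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 + x2] = 0.25

Hy = H(p.sum(axis=(0, 1)))                  # H(Y)
Hy_x2 = sum(0.5 * H(p[:, x2, :].sum(axis=0) / 0.5) for x2 in (0, 1))  # H(Y|X2)
Hy_x1 = sum(0.5 * H(p[x1, :, :].sum(axis=0) / 0.5) for x1 in (0, 1))  # H(Y|X1)

# Y is a deterministic function of (X1, X2), so H(Y | X1, X2) = 0
print("R1      <= I(X1;Y|X2) =", Hy_x2)     # 1.0
print("R2      <= I(X2;Y|X1) =", Hy_x1)     # 1.0
print("R1 + R2 <= I(X1,X2;Y) =", Hy)        # 1.5
```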


Multiple Access Channel

Proof of Achievability (Han–Kobayashi 1981)

We use simultaneous decoding and coded time sharing

Codebook generation:

• Fix p(q)p(x1|q)p(x2|q)

• Randomly generate a time-sharing sequence qn ∼ ∏ni=1 pQ(qi)

• Randomly and conditionally independently generate 2nR1 sequences xn1(m1) ∼ ∏ni=1 pX1|Q(x1i|qi), m1 ∈ [1 : 2nR1]

• Similarly generate 2nR2 sequences xn2(m2) ∼ ∏ni=1 pX2|Q(x2i|qi), m2 ∈ [1 : 2nR2]

Encoding:

• To send (m1, m2), transmit xn1(m1) and xn2(m2)

Decoding:

• Find the unique message pair (m̂1, m̂2) such that (qn, xn1(m̂1), xn2(m̂2), yn) ∈ T(n)є


Multiple Access Channel

Analysis of the Probability of Error

Assume (M1, M2) = (1, 1)

Joint pmfs induced by different (m1, m2), where ∗ denotes a message ≠ 1:

m1  m2  Joint pmf
1   1   p(qn) p(xn1|qn) p(xn2|qn) p(yn|xn1, xn2, qn)
∗   1   p(qn) p(xn1|qn) p(xn2|qn) p(yn|xn2, qn)
1   ∗   p(qn) p(xn1|qn) p(xn2|qn) p(yn|xn1, qn)
∗   ∗   p(qn) p(xn1|qn) p(xn2|qn) p(yn|qn)

We divide the error event into the following 4 events:

E1 = {(Qn, Xn1(1), Xn2(1), Yn) ∉ T(n)є}

E2 = {(Qn, Xn1(m1), Xn2(1), Yn) ∈ T(n)є for some m1 ≠ 1}

E3 = {(Qn, Xn1(1), Xn2(m2), Yn) ∈ T(n)є for some m2 ≠ 1}

E4 = {(Qn, Xn1(m1), Xn2(m2), Yn) ∈ T(n)є for some m1 ≠ 1, m2 ≠ 1}

Then P(E) ≤ P(E1) + P(E2) + P(E3) + P(E4)


By the LLN, P(E1) → 0 as n → ∞

By the packing lemma (A = [2 : 2nR1], U ← Q, X ← X1, Y ← (X2, Y)), P(E2) → 0 as n → ∞ if R1 < I(X1; X2, Y|Q) − δ(є) = I(X1; Y|X2, Q) − δ(є)

Similarly, P(E3) → 0 as n → ∞ if R2 < I(X2; Y|X1, Q) − δ(є)


By the packing lemma (A = [2 : 2nR1] × [2 : 2nR2], U ← Q, X ← (X1, X2)), P(E4) → 0 as n → ∞ if R1 + R2 < I(X1, X2; Y|Q) − δ(є)

Remark: the pairs (Xn1(m1), Xn2(m2)), m1 ≠ 1, m2 ≠ 1, are not mutually independent, but each of them is pairwise independent of Yn (given Qn)


Multiple Access Channel

Summary

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

→ Coded time sharing

→ Simultaneous decoding

→ Systematic procedure for decomposing the error event


Broadcast Channel

DM Broadcast Channel (BC)

Broadcast communication system (downlink)

(M1, M2) → Encoder → Xn → p(y1, y2|x) → Yn1 → Decoder 1 → M̂1 and → Yn2 → Decoder 2 → M̂2

Assume a 2-receiver DM-BC model (X, p(y1, y2|x), Y1 × Y2)

A (2nR1, 2nR2, n) code for the DM-BC:

• Message sets: [1 : 2nR1] and [1 : 2nR2]

• Encoder: xn(m1, m2)

• Decoder j = 1, 2: m̂j(ynj)

Assume (M1, M2) ∼ Unif([1 : 2nR1] × [1 : 2nR2])

Average probability of error: P(n)e = P{(M̂1, M̂2) ≠ (M1, M2)}

(R1, R2) achievable: if ∃ (2nR1, 2nR2, n) codes with limn→∞ P(n)e = 0

Capacity region: closure of the set of achievable (R1, R2)

Page 39: Elements of Network Information Theory - Stanford …isl.stanford.edu/~abbas/presentations/enit.pdf · Elements of Network Information Theory ... Network Information Theory Book ...

Broadcast Channel

Superposition Coding Inner Bound

Capacity region of the DM-BC is not known in general

There are several inner and outer bounds tight in some cases

Superposition Coding Inner Bound (Cover 1972, Bergmans 1973)

A rate pair (R1, R2) is achievable for the DM-BC p(y1, y2|x) if

R1 < I(X; Y1|U),
R2 < I(U; Y2),
R1 + R2 < I(X; Y1)

for some pmf p(u, x), where U is an auxiliary random variable (a numerical sketch of the bound follows below)

This bound is tight for several special cases, including:

• Degraded: X → Y1 → Y2 physically or stochastically

• Less noisy: I(U; Y1) ≥ I(U; Y2) for all p(u, x)

• More capable: I(X; Y1) ≥ I(X; Y2) for all p(x)

• Degraded ⇒ Less noisy ⇒ More capable
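A sketch evaluating the inner bound above for the (degraded) binary symmetric BC with Y1 = X ⊕ Bern(p1), Y2 = X ⊕ Bern(p2), p1 < p2, and the standard choice U ∼ Bern(1/2), X = U ⊕ Bern(α). This example and the closed forms are assumptions of the sketch, not content from the slides:

```python
import numpy as np

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else float(-p*np.log2(p) - (1-p)*np.log2(1-p))

conv = lambda a, p: a * (1 - p) + (1 - a) * p    # binary convolution a * p

p1, p2 = 0.1, 0.2                # Y2 is a degraded version of Y1
for a in np.linspace(0.0, 0.5, 6):               # X = U xor Bern(a)
    R1 = h(conv(a, p1)) - h(p1)                  # I(X; Y1 | U)
    R2 = 1.0 - h(conv(a, p2))                    # I(U; Y2)
    print(f"alpha = {a:.1f}: (R1, R2) = ({R1:.3f}, {R2:.3f})")
# alpha = 0 serves only receiver 2; alpha = 0.5 gives (1 - h(p1), 0);
# for this choice the sum-rate constraint R1 + R2 < I(X; Y1) holds automatically
```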


Broadcast Channel

Proof of Achievability

We use superposition coding and simultaneous nonunique decoding

Codebook generation:

• Fix p(u)p(x|u)

• Randomly and independently generate 2nR2 sequences (cloud centers) un(m2) ∼ ∏ni=1 pU(ui), m2 ∈ [1 : 2nR2]

• For each m2 ∈ [1 : 2nR2], randomly and conditionally independently generate 2nR1 sequences (satellite codewords) xn(m1, m2) ∼ ∏ni=1 pX|U(xi|ui(m2)), m1 ∈ [1 : 2nR1]

[Figure: cloud centers un(m2) in Un, each with its satellite codewords xn(m1, m2) in Xn]


Encoding:

• To send (m1, m2), transmit xn(m1, m2)

Decoding:

• Decoder 2 finds the unique message m̂2 such that (un(m̂2), yn2) ∈ T(n)є
  (by the packing lemma, P(E2) → 0 as n → ∞ if R2 < I(U; Y2) − δ(є))

• Decoder 1 finds the unique message m̂1 such that (un(m2), xn(m̂1, m2), yn1) ∈ T(n)є for some m2


Broadcast Channel

Analysis of the Probability of Error for Decoder 1

Assume (M1, M2) = (1, 1)

Joint pmfs induced by different (m1, m2), where ∗ denotes a message ≠ 1:

m1  m2  Joint pmf
1   1   p(un, xn) p(yn1|xn)
∗   1   p(un, xn) p(yn1|un)
∗   ∗   p(un, xn) p(yn1)
1   ∗   p(un, xn) p(yn1)

The last case does not result in an error

So we divide the error event into the following 3 events:

E11 = {(Un(1), Xn(1, 1), Yn1) ∉ T(n)є}

E12 = {(Un(1), Xn(m1, 1), Yn1) ∈ T(n)є for some m1 ≠ 1}

E13 = {(Un(m2), Xn(m1, m2), Yn1) ∈ T(n)є for some m1 ≠ 1, m2 ≠ 1}

Then P(E1) ≤ P(E11) + P(E12) + P(E13)


By the packing lemma (A = [2 : 2nR1]), P(E12) → 0 as n → ∞ if R1 < I(X; Y1|U) − δ(є)

By the packing lemma (A = [2 : 2nR1] × [2 : 2nR2], U ← ∅, X ← (U, X)), P(E13) → 0 as n → ∞ if R1 + R2 < I(U, X; Y1) − δ(є) = I(X; Y1) − δ(є)

Remark: P(E14) = P{(Un(m2), Xn(1, m2), Yn1) ∈ T(n)є for some m2 ≠ 1} → 0 as n → ∞ if R2 < I(U, X; Y1) − δ(є) = I(X; Y1) − δ(є)

Hence, the inner bound continues to hold when decoder 1 is also required to recover M2


Broadcast Channel

Summary

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

→ Superposition coding

→ Simultaneous nonunique decoding


Lossy Source Coding

Lossy Source Coding

Point-to-point compression system

Xn → Encoder → M → Decoder → (X̂n, D)

Assume a discrete memoryless source (DMS) (X, p(x)) and a distortion measure d(x, x̂), (x, x̂) ∈ X × X̂

Average per-letter distortion between xn and x̂n:

d(xn, x̂n) = (1/n) ∑ni=1 d(xi, x̂i)

A (2nR, n) lossy source code:

• Encoder: an index m(xn) ∈ [1 : 2nR) := {1, 2, . . . , 2⌊nR⌋}

• Decoder: an estimate (reconstruction sequence) x̂n(m) ∈ X̂n


Lossy Source Coding

Xn → Encoder → M → Decoder → (X̂n, D)

Expected distortion associated with the (2nR, n) code: D = E[d(Xn, X̂n)] = ∑xn p(xn) d(xn, x̂n(m(xn)))

(R, D) achievable if ∃ (2nR, n) codes with lim supn→∞ E(d(Xn, X̂n)) ≤ D

Rate–distortion function R(D): infimum of R such that (R, D) is achievable

Lossy Source Coding Theorem (Shannon 1959)

R(D) = minp(x̂|x): E(d(X,X̂))≤D I(X; X̂) for D ≥ Dmin = E[minx̂(x) d(X, x̂(X))]
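For a Bern(p) source with Hamming distortion, the theorem evaluates to the standard closed form R(D) = H(p) − H(D) for 0 ≤ D ≤ min(p, 1 − p), and R(D) = 0 otherwise (this example is assumed as an illustration, not stated on the slides):

```python
import numpy as np

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else float(-p*np.log2(p) - (1-p)*np.log2(1-p))

def R_bernoulli(p, D):
    """R(D) = H(p) - H(D) for a Bern(p) source under Hamming distortion."""
    return 0.0 if D >= min(p, 1 - p) else h(p) - h(D)

for D in (0.0, 0.05, 0.1, 0.2):
    print(f"D = {D:.2f}: R(D) = {R_bernoulli(0.3, D):.3f} bits")
# D = 0 recovers the lossless rate R(0) = H(0.3) ~ 0.881 (see the lossless
# source coding corollary later in this section)
```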


Lossy Source Coding

Proof of Achievability

We use random coding and joint typicality encoding

Codebook generation:

• Fix p(x̂|x) that attains R(D/(1 + є)) and compute p(x̂) = ∑x p(x)p(x̂|x)

• Randomly and independently generate sequences x̂n(m) ∼ ∏ni=1 pX̂(x̂i), m ∈ [1 : 2nR]

Encoding:

• Find an index m such that (xn, x̂n(m)) ∈ T(n)є

• If more than one, choose the smallest index among them

• If none, choose m = 1

Decoding:

• Upon receiving m, set the reconstruction sequence x̂n = x̂n(m)


Lossy Source Coding

Analysis of Expected Distortion

We bound the expected distortion averaged over codebooks

Define the "encoding error" event

E = {(Xn, X̂n(M)) ∉ T(n)є} = {(Xn, X̂n(m)) ∉ T(n)є for all m ∈ [1 : 2nR]}

X̂n(m) ∼ ∏ni=1 pX̂(x̂i), independent of each other and of Xn ∼ ∏ni=1 pX(xi)

[Figure: the reconstruction codewords X̂n(1), . . . , X̂n(m), . . . in X̂n; encoding succeeds when some X̂n(m) is jointly typical with Xn ∈ T(n)є′(X)]

To bound P(E), we use the covering lemma


Lossy Source Coding

Covering Lemma

Let (U, X, X̂) ∼ p(u, x, x̂) and є′ < є

Let (Un, Xn) ∼ p(un, xn) be arbitrarily distributed such that

limn→∞ P{(Un, Xn) ∈ T(n)є′(U, X)} = 1

Let X̂n(m) ∼ ∏ni=1 pX̂|U(x̂i|ui), m ∈ A, where |A| ≥ 2nR, be conditionally independent of each other and of Xn given Un

Covering Lemma

There exists δ(є) → 0 as є → 0 such that

limn→∞ P{(Un, Xn, X̂n(m)) ∉ T(n)є for all m ∈ A} = 0,

if R > I(X; X̂|U) + δ(є)


By the covering lemma (with U = ∅), P(E) → 0 as n → ∞ if

R > I(X; X̂) + δ(є) = R(D/(1 + є)) + δ(є)

Now, by the law of total expectation and the typical average lemma,

E[d(Xn, X̂n)] = P(E) E[d(Xn, X̂n)|E] + P(Ec) E[d(Xn, X̂n)|Ec] ≤ P(E) dmax + P(Ec)(1 + є) E(d(X, X̂))

Hence, lim supn→∞ E[d(Xn, X̂n)] ≤ D and there must exist a sequence of (2nR, n) codes that satisfies the asymptotic distortion constraint

By the continuity of R(D) in D, R(D/(1 + є)) + δ(є) → R(D) as є → 0


Lossy Source Coding Lossless Source Coding

Lossless Source Coding

Suppose we wish to reconstruct Xn losslessly, i.e., X̂n = Xn

R achievable if ∃ (2nR, n) codes with limn→∞ P{X̂n ≠ Xn} = 0

Optimal rate R∗: infimum of achievable R

Lossless Source Coding Theorem (Shannon 1948)

R∗ = H(X)

We prove this theorem as a corollary of the lossy source coding theorem

Consider the lossy source coding problem for a DMS X, X̂ = X, and the Hamming distortion measure (d(x, x̂) = 0 if x̂ = x, and d(x, x̂) = 1 otherwise)

At D = 0, the rate–distortion function is R(0) = H(X)

We now show operationally that R∗ = R(0) without using the fact that R∗ = H(X)


Lossy Source Coding Lossless Source Coding

Proof of the Lossless Source Coding Theorem

Proof of R∗ ≥ R(0):

• First note that
  limn→∞ E(d(Xn, X̂n)) = limn→∞ (1/n) ∑ni=1 P{Xi ≠ X̂i} ≤ limn→∞ P{Xn ≠ X̂n}

• Hence, any sequence of (2nR, n) codes with limn→∞ P{X̂n ≠ Xn} = 0 achieves D = 0

Proof of R∗ ≤ R(0):

• We can still use random coding and joint typicality encoding!

• Fix p(x̂|x) = 1 if x̂ = x and 0 otherwise (so that p(x̂) = pX(x̂))

• As before, generate a random code x̂n(m), m ∈ [1 : 2nR]

• Then P(E) = P{(Xn, X̂n) ∉ T(n)є} → 0 as n → ∞ if R > I(X; X̂) + δ(є) = R(0) + δ(є)

• Now recall that if (xn, x̂n) ∈ T(n)є, then xn = x̂n (equivalently, if xn ≠ x̂n, then (xn, x̂n) ∉ T(n)є)

• Hence, P{X̂n ≠ Xn} → 0 as n → ∞ if R > R(0) + δ(є)


Lossy Source Coding Lossless Source Coding

Summary

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

→ Joint typicality encoding

→ Covering lemma

→ Lossless as a corollary of lossy


Wyner–Ziv Coding

Lossy Source Coding with Side Information at the Decoder

Lossy compression system with side information

Xn → Encoder → M → Decoder → (X̂n, D), with side information Yn available at the decoder

Assume a 2-DMS (X × Y, p(x, y)) and a distortion measure d(x, x̂)

A (2nR, n) lossy source code with side information available at the decoder:

• Encoder: m(xn)

• Decoder: x̂n(m, yn)

Expected distortion, achievability, rate–distortion function: defined as before

Theorem (Wyner–Ziv 1976)

RSI-D(D) = min [I(X; U) − I(Y; U)] = min I(X; U|Y),

where the minimum is over all p(u|x) and x̂(u, y) such that E(d(X, X̂)) ≤ D


Wyner–Ziv Coding

Proof of Achievability

We use binning in addition to joint typicality encoding and decoding

[Figure: codewords un(1), . . . , un(l), . . . , un(2nR̃) partitioned into bins B(1), . . . , B(m), . . . , B(2nR); the encoder covers xn with some un(l) and sends its bin index m, and the decoder looks in B(m) for the codeword with (un, yn) ∈ T(n)є(U, Y)]


Codebook generation:

• Fix p(u|x) and x̂(u, y) that attain RSI-D(D/(1 + є))

• Randomly and independently generate 2nR̃ sequences un(l) ∼ ∏ni=1 pU(ui), l ∈ [1 : 2nR̃]

• Partition [1 : 2nR̃] into bins B(m) = [(m − 1)2n(R̃−R) + 1 : m2n(R̃−R)], m ∈ [1 : 2nR]

Encoding:

• Find l such that (xn, un(l)) ∈ T(n)є′

• If more than one, pick one of them uniformly at random

• If none, choose l ∈ [1 : 2nR̃] uniformly at random

• Send the index m such that l ∈ B(m)

Decoding:

• Upon receiving m, find the unique l̂ ∈ B(m) such that (un(l̂), yn) ∈ T(n)є, where є > є′

• Compute the reconstruction sequence as x̂i = x̂(ui(l̂), yi), i ∈ [1 : n]
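The bin partition in the codebook generation above is plain integer arithmetic. A minimal sketch (writing R̃ as Rt and picking integer exponents for clarity; the numbers are illustrative):

```python
n = 10
nR, nRt = 3, 5                 # n*R and n*R~ with R~ > R, chosen integer here
L = 2 ** nRt                   # 2^{n R~} codewords u^n(l), l = 1, ..., L
B = 2 ** nR                    # 2^{nR} bins, one per message m
size = L // B                  # 2^{n(R~ - R)} codewords per bin

bin_index = lambda l: (l - 1) // size + 1               # the m with l in B(m)
bin_range = lambda m: range((m - 1) * size + 1, m * size + 1)

assert all(bin_index(l) == m for m in range(1, B + 1) for l in bin_range(m))
print(f"{L} codewords split into {B} bins of {size}")   # 32 codewords, 8 bins of 4
```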


Wyner–Ziv Coding

Analysis of Expected Distortion

We bound the distortion averaged over the random codebook and encoding

Let (L, M) denote the chosen indices and L̂ the index estimate at the decoder

Define the "error" event

E = {(Un(L̂), Xn, Yn) ∉ T(n)є}

and consider

E1 = {(Un(l), Xn) ∉ T(n)є′ for all l ∈ [1 : 2nR̃]}

E2 = {(Un(L), Xn, Yn) ∉ T(n)є}

E3 = {(Un(l̃), Yn) ∈ T(n)є for some l̃ ∈ B(M), l̃ ≠ L}

The probability of "error" is bounded as

P(E) ≤ P(E1) + P(Ec1 ∩ E2) + P(E3)


By the covering lemma, P(E1) → 0 as n → ∞ if R̃ > I(X; U) + δ(є′)

Since Ec1 = {(Un(L), Xn) ∈ T(n)є′}, є > є′, and

Yn | {Un(L) = un, Xn = xn} ∼ ∏ni=1 pY|U,X(yi|ui, xi) = ∏ni=1 pY|X(yi|xi),

by the conditional typicality lemma, P(Ec1 ∩ E2) → 0 as n → ∞

To bound P(E3), it can be shown that

P(E3) ≤ P{(Un(l̃), Yn) ∈ T(n)є for some l̃ ∈ B(1)}

Since each Un(l̃) ∼ ∏ni=1 pU(ui), independent of Yn, by the packing lemma, P(E3) → 0 as n → ∞ if R̃ − R < I(Y; U) − δ(є)

Combining the bounds, we have shown that P(E) → 0 as n → ∞ if

R > I(X; U) − I(Y; U) + δ(є) + δ(є′) = RSI-D(D/(1 + є)) + δ(є) + δ(є′)


Wyner–Ziv Coding Lossless Source Coding with Side Information

Lossless Source Coding with Side Information

What is the minimum rate R∗SI-D needed to recover X losslessly?

Theorem (Slepian–Wolf 1973a)

R∗SI-D = H(X|Y)

We prove the Slepian–Wolf theorem as a corollary of the Wyner–Ziv theorem

Let d be the Hamming distortion measure and consider the case D = 0

Then RSI-D(0) = H(X|Y)

As before, we can show operationally that R∗SI-D = RSI-D(0):

• R∗SI-D ≥ RSI-D(0) since (1/n) ∑ni=1 P{Xi ≠ X̂i} ≤ P{Xn ≠ X̂n}

• R∗SI-D ≤ RSI-D(0) by Wyner–Ziv coding with X̂ = U = X


Wyner–Ziv Coding Lossless Source Coding with Side Information

Summary

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

→ Binning

→ Application of the conditional typicality lemma

→ Channel coding techniques in source coding


Gelfand–Pinsker Coding

DMC with State Information Available at the Encoder

Point-to-point communication system with state

M → Encoder → Xn → p(y|x, s) → Yn → Decoder → M̂, where the state Sn ∼ p(s) is available at the encoder

Assume a DMC with DM state model (X × S, p(y|x, s)p(s), Y)

• DMC: p(yn|xn, sn, m) = ∏ni=1 pY|X,S(yi|xi, si)

• DM state: (S1, S2, . . .) i.i.d. with Si ∼ pS(si)

A (2nR, n) code for the DMC with state information available at the encoder:

• Message set: [1 : 2nR]

• Encoder: xn(m, sn)

• Decoder: m̂(yn)


Gelfand–Pinsker Coding

M → Encoder → Xn → p(y|x, s) → Yn → Decoder → M̂, where the state Sn ∼ p(s) is available at the encoder

Expected average cost constraint: ∑ni=1 E[b(xi(m, Sn))] ≤ nB for every m ∈ [1 : 2nR]

Probability of error, achievability, capacity–cost function: defined as for the DMC

Theorem (Gelfand–Pinsker 1980)

CSI-E(B) = maxp(u|s), x(u,s): E(b(X))≤B [I(U; Y) − I(U; S)]


Gelfand–Pinsker Coding

Proof of Achievability (Heegard–El Gamal 1983)

We use multicoding

[Figure: codewords un(1), . . . , un(l), . . . , un(2nR̃) partitioned into subcodebooks C(1), . . . , C(m), . . . , C(2nR); to send m, the encoder looks in C(m) for a un(l) with (un, sn) ∈ T(n)є(U, S)]


Codebook generation:

• Fix p(u|s) and x(u, s) that attain CSI-E(B/(1 + є′))

• For each m ∈ [1 : 2nR], generate a subcodebook C(m) consisting of 2n(R̃−R) randomly and independently generated sequences un(l) ∼ ∏ni=1 pU(ui), l ∈ [(m − 1)2n(R̃−R) + 1 : m2n(R̃−R)]

Encoding:

• To send m ∈ [1 : 2nR] given sn, find un(l) ∈ C(m) such that (un(l), sn) ∈ T(n)є′

• Then transmit xi = x(ui(l), si) for i ∈ [1 : n]
  (by the typical average lemma, ∑ni=1 b(xi(m, sn)) ≤ nB)

• If no such un(l) exists, transmit (x0, . . . , x0)

Decoding:

• Find the unique m̂ such that (un(l), yn) ∈ T(n)є for some un(l) ∈ C(m̂), where є > є′


Gelfand–Pinsker Coding

Analysis of the Probability of Error

Assume M = 1

Let L denote the index of the chosen Un sequence for M = 1 and Sn

The decoder makes an error only if one or more of the following events occur:

E1 = {(Un(l), Sn) ∉ T(n)є′ for all Un(l) ∈ C(1)}

E2 = {(Un(L), Yn) ∉ T(n)є}

E3 = {(Un(l), Yn) ∈ T(n)є for some Un(l) ∉ C(1)}

Thus, the probability of error is bounded as

P(E) ≤ P(E1) + P(Ec1 ∩ E2) + P(E3)


By the covering lemma, P(E1) → 0 as n → ∞ if R̃ − R > I(U; S) + δ(є′)

Since є > є′, Ec1 = {(Un(L), Sn) ∈ T(n)є′} = {(Un(L), Xn, Sn) ∈ T(n)є′}, and

Yn | {Un(L) = un, Xn = xn, Sn = sn} ∼ ∏ni=1 pY|U,X,S(yi|ui, xi, si) = ∏ni=1 pY|X,S(yi|xi, si),

by the conditional typicality lemma, P(Ec1 ∩ E2) → 0 as n → ∞

Since each Un(l) ∉ C(1) is distributed according to ∏ni=1 pU(ui), independent of Yn, by the packing lemma, P(E3) → 0 as n → ∞ if R̃ < I(U; Y) − δ(є)

Remark: Yn is not i.i.d.

Combining the bounds, we have shown that P(E) → 0 as n → ∞ if

R < I(U; Y) − I(U; S) − δ(є) − δ(є′) = CSI-E(B/(1 + є′)) − δ(є) − δ(є′)


Gelfand–Pinsker Coding

Multicoding versus Binning

Multicoding (a channel coding technique):

• Given a set of messages

• Generate many codewords for each message

• To communicate a message, send a codeword from its subcodebook

Binning (a source coding technique):

• Given a set of indices (sequences)

• Map the indices into a smaller number of bins

• To communicate an index, send its bin index


Gelfand–Pinsker Coding

Wyner–Ziv versus Gelfand–Pinsker

Wyner–Ziv theorem: rate–distortion function for a DMS X with side information Y available at the decoder:

RSI-D(D) = min [I(U; X) − I(U; Y)]

We proved achievability using binning, covering, and packing

Gelfand–Pinsker theorem: capacity–cost function of a DMC with state information S available at the encoder:

CSI-E(B) = max [I(U; Y) − I(U; S)]

We proved achievability using multicoding, covering, and packing

Dualities:

min ⇔ max
binning ⇔ multicoding
covering rate − packing rate ⇔ packing rate − covering rate


Gelfand–Pinsker Coding Writing on Dirty Paper

Writing on Dirty Paper

Gaussian channel with additive Gaussian state available at the encoder

M → Encoder (with Sn) → Xn → Yn = Xn + Sn + Zn → Decoder → M̂

• Noise Z ∼ N(0, N)

• State S ∼ N(0, Q), independent of Z

Assume an expected average power constraint: ∑ni=1 E(x2i(m, Sn)) ≤ nP for every m

With no state information, C = (1/2) log(1 + P/(N + Q))

With state information at both encoder and decoder, CSI-ED = (1/2) log(1 + P/N) = CSI-D

Writing on Dirty Paper (Costa 1983)

CSI-E = (1/2) log(1 + P/N)
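Plugging in numbers makes the comparison concrete (the values are illustrative; alpha is the inflation factor from the proof on the next slide):

```python
import numpy as np

P, N, Q = 1.0, 1.0, 10.0                          # signal, noise, state powers
C_state_as_noise = 0.5 * np.log2(1 + P / (N + Q)) # encoder ignores S
C_dirty_paper    = 0.5 * np.log2(1 + P / N)       # Costa: as if S were absent
alpha = P / (P + N)                               # choice U = X + alpha * S
print(f"state treated as noise: {C_state_as_noise:.3f} bits/transmission")
print(f"writing on dirty paper: {C_dirty_paper:.3f} bits/transmission "
      f"(alpha = {alpha})")
```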


Gelfand–Pinsker Coding Writing on Dirty Paper

Proof of Achievability

The proof involves a clever choice of F(u|s) and x(u, s), and a discretization procedure

Let X ∼ N(0, P) independent of S and U = X + αS, where α = P/(P + N). Then

I(U; Y) − I(U; S) = (1/2) log(1 + P/N)

Let [U]j and [S]j′ be finite quantizations of U and S

Let [X]jj′ = [U]j − α[S]j′ and let [Yjj′]k be a finite quantization of the corresponding channel output Yjj′ = [U]j − α[S]j′ + S + Z

We use Gelfand–Pinsker coding for the DMC with DM state p([yjj′]k|[x]jj′, [s]j′)p([s]j′)

• Joint typicality encoding: R̃ − R > I(U; S) ≥ I([U]j; [S]j′)

• Joint typicality decoding: R̃ < I([U]j; [Yjj′]k)

• Thus R < I([U]j; [Yjj′]k) − I(U; S) is achievable for any j, j′, k

Following arguments similar to the discretization procedure for Gaussian channel coding,

limj→∞ limj′→∞ limk→∞ I([U]j; [Yjj′]k) = I(U; Y)


Gelfand–Pinsker Coding Writing on Dirty Paper

Summary

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

→ Multicoding

→ Packing lemma with non-i.i.d. Yn

→ Writing on dirty paper


Wiretap Channel

DM Wiretap Channel (WTC)

Point-to-point communication system with an eavesdropper

M → Encoder → Xn → p(y, z|x) → Yn → Decoder → M̂, with Zn observed by the eavesdropper

Assume a DM-WTC model (X, p(y, z|x), Y × Z)

A (2nR, n) secrecy code for the DM-WTC:

• Message set: [1 : 2nR]

• Randomized encoder: Xn(m) ∼ p(xn|m) for each m ∈ [1 : 2nR]

• Decoder: m̂(yn)


Wiretap Channel

M → Encoder → Xn → p(y, z|x) → Yn → Decoder → M̂, with Zn observed by the eavesdropper

Assume M ∼ Unif[1 : 2nR]

Average probability of error: P(n)e = P{M̂ ≠ M}

Information leakage rate: R(n)L = (1/n) I(M; Zn)

(R, RL) achievable if ∃ (2nR, n) codes with limn→∞ P(n)e = 0 and lim supn→∞ R(n)L ≤ RL

Rate–leakage region R∗: closure of the set of achievable (R, RL)

Secrecy capacity: CS = max{R: (R, 0) ∈ R∗}

Theorem (Wyner 1975, Csiszár–Körner 1978)

CS = maxp(u,x) [I(U; Y) − I(U; Z)]
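For the degraded BSC wiretap channel (Y over a BSC(p1), Z over a BSC(p2), p1 < p2 < 1/2), the choice U = X ∼ Bern(1/2) gives I(X; Y) − I(X; Z) = H(p2) − H(p1). This standard example is assumed here as an illustration, not stated on the slides:

```python
import numpy as np

def h(p):
    """Binary entropy in bits."""
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

p1, p2 = 0.1, 0.3              # main channel BSC(p1), eavesdropper BSC(p2)
Cs = (1 - h(p1)) - (1 - h(p2)) # I(X;Y) - I(X;Z) = h(p2) - h(p1) with U = X
print(f"secrecy rate: {Cs:.3f} bits/transmission")   # ~ 0.412
```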


Wiretap Channel

Proof of Achievability

We use multicoding and two-step randomized encoding

Codebook generation:
é Assume CS > 0 and fix p(u, x) that attains it (so I(U; Y) − I(U; Z) > 0)

é For each m ∈ [1 : 2nR], generate a subcodebook C(m) consisting of 2n(R̃−R) randomly and independently generated sequences un(l) ∼ ∏ni=1 pU(ui), l ∈ [(m − 1)2n(R̃−R) + 1 : m2n(R̃−R)]

[Subcodebook diagram: indices l = 1, …, 2nR̃ partitioned into subcodebooks C(1), C(2), C(3), …, C(2nR), each of size 2n(R̃−R)]

Encoding:
é To send m, choose an index L ∈ [(m − 1)2n(R̃−R) + 1 : m2n(R̃−R)] uniformly at random
é Then generate Xn ∼ ∏ni=1 pX|U(xi | ui(L)) and transmit it

Decoding:
é Find the unique m̂ such that (un(l̂), yn) ∈ Tє(n) for some un(l̂) ∈ C(m̂)

By the LLN and the packing lemma, P(E) → 0 as n → ∞ if R̃ < I(U; Y) − δ(є)
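The subcodebook index bookkeeping in the encoding and decoding steps above can be made explicit; a minimal sketch (our own illustration, assuming nR and n(R̃ − R) are integers, with Rt playing the role of R̃):

```python
import random

def pick_index(m, n, R, Rt):
    """Randomized encoding: to send m, pick L uniformly from C(m)'s index
    range [(m-1)*2^{n(Rt-R)} + 1 : m*2^{n(Rt-R)}]."""
    size = 2 ** (n * (Rt - R))  # sequences per subcodebook
    lo = (m - 1) * size + 1
    return random.randint(lo, lo + size - 1)

def message_of(l, n, R, Rt):
    """Decoder side: the subcodebook (message) that index l belongs to."""
    size = 2 ** (n * (Rt - R))
    return (l - 1) // size + 1

n, R, Rt = 4, 1, 2  # 2^{nR} = 16 messages, 16 sequences per subcodebook
l = pick_index(3, n, R, Rt)
assert message_of(l, n, R, Rt) == 3
```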

Wiretap Channel

Analysis of the Information Leakage Rate

For each C(m), the eavesdropper has ≐ 2n(R̃−R−I(U;Z)) sequences un(l) jointly typical with zn


If R̃ − R > I(U; Z), the eavesdropper has roughly the same number of such sequences in each subcodebook, providing it with no information about the message

Let M be the message sent and L be the randomly selected index

Every codebook C induces a pmf of the form

p(m, l, un, zn | C) = 2−nR 2−n(R̃−R) p(un | l, C) ∏ni=1 pZ|U(zi | ui)

In particular, p(un, zn) = ∏ni=1 pU,Z(ui, zi)


Wiretap Channel

Analysis of the Information Leakage Rate

Consider the amount of information leakage averaged over codebooks:

I(M; Zn | C) = H(M | C) − H(M | Zn, C)
= nR − H(M, L | Zn, C) + H(L | Zn, M, C)
= nR − H(L | Zn, C) + H(L | Zn, M, C)

The first equivocation term:

H(L | Zn, C) = H(L | C) − I(L; Zn | C)
= nR̃ − I(L; Zn | C)
= nR̃ − I(Un, L; Zn | C)
≥ nR̃ − I(Un, L, C; Zn)
(a)= nR̃ − I(Un; Zn)
= nR̃ − nI(U; Z)

(a) (L, C) → Un → Zn form a Markov chain


Wiretap Channel

Analysis of the Information Leakage Rate

Consider the amount of information leakage averaged over codebooks:

I(M; Zn | C) ≤ nR − nR̃ + nI(U; Z) + H(L | Zn, M, C)

The remaining equivocation term can be upper bounded as follows

Lemma

If R̃ − R ≥ I(U; Z), then

lim sup n→∞ (1/n) H(L | Zn, M, C) ≤ R̃ − R − I(U; Z) + δ(є)

Substituting (recall that R̃ < I(U; Y) − δ(є) for decoding), we have shown that

lim sup n→∞ (1/n) I(M; Zn | C) ≤ δ(є)

if R < I(U; Y) − I(U; Z) − δ(є)

Thus, there must exist a sequence of (2nR, n) codes such that Pe(n) → 0 and RL(n) ≤ δ(є) as n → ∞

Wiretap Channel

Summary

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

è Randomized encoding

Bound on equivocation (list size)


Relay Channel

DM Relay Channel (RC)

Point-to-point communication system with a relay

[Block diagram: M → Encoder → Xn1 → p(y2, y3|x1, x2) → Yn3 → Decoder → M̂; the relay encoder maps Yn2 to Xn2]

Assume a DM-RC model (X1 × X2, p(y2, y3|x1, x2), Y2 × Y3)

A (2nR, n) code for the DM-RC:

é Message set: [1 : 2nR]
é Encoder: xn1(m)
é Relay encoder: x2i(y2i−1), i ∈ [1 : n]
é Decoder: m̂(yn3)

Probability of error, achievability, capacity: defined as for the DMC


Relay Channel


Capacity of the DM-RC is not known in general

There are upper and lower bounds that are tight in some cases

We discuss two lower bounds: decode–forward and compress–forward


Relay Channel Multihop

Multihop Lower Bound

The relay recovers the message received from the sender in each block and retransmits it in the following block

[Diagram: M → X1 → relay (Y2 : X2) → Y3 → M̂]

Multihop Lower Bound

C ≥ max p(x1)p(x2) min{I(X2; Y3), I(X1; Y2 | X2)}

Tight for a cascade of two DMCs, i.e., p(y2, y3|x1, x2) = p(y2|x1)p(y3|x2):

C = min{max p(x2) I(X2; Y3), max p(x1) I(X1; Y2)}

The scheme uses block Markov coding, where codewords in a block can depend on the message sent in the previous block

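For the cascade case above, a short numerical sketch (our own illustration, modeling the two hops as BSCs with assumed crossover probabilities p1 and p2):

```python
import numpy as np

def h(p):
    """Binary entropy in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Cascade of two DMCs, here BSC(p1) followed by BSC(p2); uniform inputs are
# optimal for a BSC, so C = min{1 - h(p1), 1 - h(p2)}.
p1, p2 = 0.05, 0.11
C = min(1 - h(p1), 1 - h(p2))
print(C)  # the weaker hop is the bottleneck
```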

Relay Channel Multihop

Proof of Achievability

Send b − 1 messages in b blocks using independently generated codebooks

[Timeline: b blocks of n transmissions each, carrying m1, m2, m3, …, mb−1, and the dummy message 1 in block b]

Codebook generation:
é Fix p(x1)p(x2) that attains the lower bound
é For each j ∈ [1 : b], randomly and independently generate 2nR sequences xn1(m j) ∼ ∏ni=1 pX1(x1i), m j ∈ [1 : 2nR]
é Similarly, generate 2nR sequences xn2(m j−1) ∼ ∏ni=1 pX2(x2i), m j−1 ∈ [1 : 2nR]
é Codebooks: C j = {(xn1(m j), xn2(m j−1)) : m j−1, m j ∈ [1 : 2nR]}, j ∈ [1 : b]

Encoding:
é To send m j in block j, transmit xn1(m j) from C j

Relay encoding:
é At the end of block j, find the unique m̃ j such that (xn1(m̃ j), xn2(m̃ j−1), yn2(j)) ∈ Tє(n)
é In block j + 1, transmit xn2(m̃ j) from C j+1

Decoding:
é At the end of block j + 1, find the unique m̂ j such that (xn2(m̂ j), yn3(j + 1)) ∈ Tє(n)


Relay Channel Multihop

Analysis of the Probability of Error

We analyze the probability of decoding error for M j averaged over codebooks

Assume M j = 1

Let M̃ j be the relay's decoded message at the end of block j

Since {M̂ j ≠ 1} ⊆ {M̃ j ≠ 1} ∪ {M̂ j ≠ M̃ j}, the decoder makes an error only if one of the following events occurs:

Ẽ1(j) = {(Xn1(1), Xn2(M̃ j−1), Yn2(j)) ∉ Tє(n)}
Ẽ2(j) = {(Xn1(m j), Xn2(M̃ j−1), Yn2(j)) ∈ Tє(n) for some m j ≠ 1}
E1(j) = {(Xn2(M̃ j), Yn3(j + 1)) ∉ Tє(n)}
E2(j) = {(Xn2(m j), Yn3(j + 1)) ∈ Tє(n) for some m j ≠ M̃ j}

Thus, the probability of error is upper bounded as

P(E(j)) = P{M̂ j ≠ 1} ≤ P(Ẽ1(j)) + P(Ẽ2(j)) + P(E1(j)) + P(E2(j))


Relay Channel Multihop


By the independence of the codebooks, M̃ j−1, which is a function of Yn2(j − 1) and codebook C j−1, is independent of the codewords Xn1(1), Xn2(M̃ j−1) in C j

Thus by the LLN, P(Ẽ1(j)) → 0 as n → ∞

By the packing lemma, P(Ẽ2(j)) → 0 as n → ∞ if R < I(X1; Y2 | X2) − δ(є)

By the independence of the codebooks and the LLN, P(E1(j)) → 0 as n → ∞

By the same independence and the packing lemma, P(E2(j)) → 0 as n → ∞ if R < I(X2; Y3) − δ(є)

Thus we have shown that under the given constraints on the rate, P{M̂ j ≠ M j} → 0 as n → ∞ for each j ∈ [1 : b − 1]


Relay Channel Coherent Multihop

Coherent Multihop Lower Bound

In the multihop coding scheme, the sender knows what the relay transmits in each block


Hence, the multihop coding scheme can be improved via coherent cooperation between the sender and the relay

Coherent Multihop Lower Bound

C ≥ max p(x1,x2) min{I(X2; Y3), I(X1; Y2 | X2)}


Relay Channel Coherent Multihop

Proof of Achievability

We again use a block Markov coding scheme

é Send b − 1 messages in b blocks using independently generated codebooks

Codebook generation:

é Fix p(x1, x2) that attains the lower bound
é For j ∈ [1 : b], randomly and independently generate 2nR sequences xn2(m j−1) ∼ ∏ni=1 pX2(x2i), m j−1 ∈ [1 : 2nR]
é For each m j−1 ∈ [1 : 2nR], randomly and conditionally independently generate 2nR sequences xn1(m j | m j−1) ∼ ∏ni=1 pX1|X2(x1i | x2i(m j−1)), m j ∈ [1 : 2nR]
é Codebooks: C j = {(xn1(m j | m j−1), xn2(m j−1)) : m j−1, m j ∈ [1 : 2nR]}, j ∈ [1 : b]


Relay Channel Coherent Multihop

Block  1          2           3           . . .  b − 1            b
X1     xn1(m1|1)  xn1(m2|m1)  xn1(m3|m2)  . . .  xn1(mb−1|mb−2)   xn1(1|mb−1)
Y2     m̃1         m̃2          m̃3          . . .  m̃b−1
X2     xn2(1)     xn2(m̃1)     xn2(m̃2)     . . .  xn2(m̃b−2)        xn2(m̃b−1)
Y3                m̂1          m̂2          . . .  m̂b−2             m̂b−1

Encoding:

é In block j, transmit xn1(m j | m j−1) from codebook C j

Relay encoding:

é At the end of block j, find the unique m̃ j such that (xn1(m̃ j | m̃ j−1), xn2(m̃ j−1), yn2(j)) ∈ Tє(n)
é In block j + 1, transmit xn2(m̃ j) from codebook C j+1

Decoding:

é At the end of block j + 1, find the unique message m̂ j such that (xn2(m̂ j), yn3(j + 1)) ∈ Tє(n)


Relay Channel Coherent Multihop

Analysis of the Probability of Error

We analyze the probability of decoding error for M j averaged over codebooks

Assume M j−1 = M j = 1

Let M̃ j be the relay's decoded message at the end of block j

The decoder makes an error only if one of the following events occurs:

Ẽ(j) = {M̃ j ≠ 1}
E1(j) = {(Xn2(M̃ j), Yn3(j + 1)) ∉ Tє(n)}
E2(j) = {(Xn2(m j), Yn3(j + 1)) ∈ Tє(n) for some m j ≠ M̃ j}

Thus, the probability of error is upper bounded as

P(E(j)) = P{M̂ j ≠ 1} ≤ P(Ẽ(j)) + P(E1(j)) + P(E2(j))

Following the same steps as in the multihop coding scheme, the last two terms → 0 as n → ∞ if R < I(X2; Y3) − δ(є)

Relay Channel Coherent Multihop

Analysis of the Probability of Error

To upper bound P(Ẽ(j)) = P{M̃ j ≠ 1}, define

Ẽ1(j) = {(Xn1(1 | M̃ j−1), Xn2(M̃ j−1), Yn2(j)) ∉ Tє(n)}
Ẽ2(j) = {(Xn1(m j | M̃ j−1), Xn2(M̃ j−1), Yn2(j)) ∈ Tє(n) for some m j ≠ 1}

Then

P(Ẽ(j)) ≤ P(Ẽ(j − 1)) + P(Ẽ1(j) ∩ Ẽc(j − 1)) + P(Ẽ2(j))

Consider the second term:

P(Ẽ1(j) ∩ Ẽc(j − 1)) = P{(Xn1(1 | M̃ j−1), Xn2(M̃ j−1), Yn2(j)) ∉ Tє(n), M̃ j−1 = 1}
≤ P{(Xn1(1|1), Xn2(1), Yn2(j)) ∉ Tє(n) | M̃ j−1 = 1},

which, by the independence of the codebooks and the LLN, → 0 as n → ∞

By the packing lemma, P(Ẽ2(j)) → 0 as n → ∞ if R < I(X1; Y2 | X2) − δ(є)

Since M̃0 = 1, by induction, P(Ẽ(j)) → 0 as n → ∞ for every j ∈ [1 : b − 1]

Thus we have shown that under the given constraints on the rate, P{M̂ j ≠ M j} → 0 as n → ∞ for every j ∈ [1 : b − 1]

Relay Channel Decode–Forward

Decode–Forward Lower Bound

Coherent multihop can be further improved by combining the information through the direct path with the information from the relay


Decode–Forward Lower Bound (Cover–El Gamal 1979)

C ≥ max p(x1,x2) min{I(X1, X2; Y3), I(X1; Y2 | X2)}

Tight for a physically degraded DM-RC, i.e.,

p(y2, y3 | x1, x2) = p(y2 | x1, x2) p(y3 | y2, x2)

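The bound is easy to evaluate numerically for a toy DM-RC; the sketch below (our own illustration) computes it at one fixed input pmf — uniform independent X1, X2, with Y2 = X1 ⊕ Bern(p2) and Y3 = X1 ⊕ X2 ⊕ Bern(p3) — so the result is only a lower bound on the maximum:

```python
import numpy as np

def H(p, axes):
    """Entropy (bits) of the marginal of joint pmf array p on `axes`."""
    drop = tuple(i for i in range(p.ndim) if i not in axes)
    q = np.atleast_1d(p.sum(axis=drop)).ravel()
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

def cmi(p, a, b, c=()):
    """Conditional mutual information I(A; B | C) in bits."""
    return H(p, a + c) + H(p, b + c) - H(p, c) - H(p, a + b + c)

# Joint pmf over (x1, x2, y2, y3); axes 0..3
p2, p3 = 0.1, 0.2
p = np.zeros((2, 2, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        for y2 in range(2):
            for y3 in range(2):
                f2 = p2 if y2 != x1 else 1 - p2
                f3 = p3 if y3 != (x1 ^ x2) else 1 - p3
                p[x1, x2, y2, y3] = 0.25 * f2 * f3

df = min(cmi(p, (0, 1), (3,)),      # I(X1, X2; Y3)
         cmi(p, (0,), (2,), (1,)))  # I(X1; Y2 | X2)
print(df)
```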

Relay Channel Decode–Forward

Proof of Achievability (Zeng–Kuhlmann–Buzo 1989)

We use backward decoding (Willems–van der Meulen 1985)

Codebook generation, encoding, relay encoding:

é Same as coherent multihop
é Codebooks: C j = {(xn1(m j | m j−1), xn2(m j−1)) : m j−1, m j ∈ [1 : 2nR]}, j ∈ [1 : b]

Block  1          2           3           . . .  b − 1            b
X1     xn1(m1|1)  xn1(m2|m1)  xn1(m3|m2)  . . .  xn1(mb−1|mb−2)   xn1(1|mb−1)
Y2     m̃1 →       m̃2 →        m̃3 →        . . .  m̃b−1
X2     xn2(1)     xn2(m̃1)     xn2(m̃2)     . . .  xn2(m̃b−2)        xn2(m̃b−1)
Y3                m̂1 ←        m̂2          . . .  ← m̂b−2           ← m̂b−1

Decoding:

é Decoding at the receiver is done backwards, after all b blocks are received
é For j = b − 1, …, 1, the receiver finds the unique message m̂ j such that (xn1(m̂ j+1 | m̂ j), xn2(m̂ j), yn3(j + 1)) ∈ Tє(n), successively with the initial condition m̂ b = 1


Relay Channel Decode–Forward

Analysis of the Probability of Error

We analyze the probability of decoding error for M j averaged over codebooks

Assume M j = M j+1 = 1

The decoder makes an error only if one or more of the following events occur:

Ẽ(j) = {M̃ j ≠ 1}
Ê(j + 1) = {M̂ j+1 ≠ 1}
E1(j) = {(Xn1(M̂ j+1 | M j), Xn2(M j), Yn3(j + 1)) ∉ Tє(n)}
E2(j) = {(Xn1(M̂ j+1 | m j), Xn2(m j), Yn3(j + 1)) ∈ Tє(n) for some m j ≠ M j}

Thus, the probability of error is upper bounded as

P(E(j)) = P{M̂ j ≠ 1}
≤ P(Ẽ(j) ∪ Ê(j + 1) ∪ E1(j) ∪ E2(j))
≤ P(Ẽ(j)) + P(Ê(j + 1)) + P(E1(j) ∩ Ẽc(j) ∩ Êc(j + 1)) + P(E2(j))

As in the coherent multihop scheme, the first term → 0 as n → ∞ if R < I(X1; Y2 | X2) − δ(є)

Relay Channel Decode–Forward


P(E(j)) ≤ P(Ẽ(j)) + P(Ê(j + 1)) + P(E1(j) ∩ Ẽc(j) ∩ Êc(j + 1)) + P(E2(j))

The third term is upper bounded as

P(E1(j) ∩ {M̂ j+1 = 1} ∩ {M̃ j = 1})
= P{(Xn1(1|1), Xn2(1), Yn3(j + 1)) ∉ Tє(n), M̂ j+1 = 1, M̃ j = 1}
≤ P{(Xn1(1|1), Xn2(1), Yn3(j + 1)) ∉ Tє(n) | M̃ j = 1},

which, by the independence of the codebooks and the LLN, → 0 as n → ∞

By the same independence and the packing lemma, the fourth term P(E2(j)) → 0 as n → ∞ if R < I(X1, X2; Y3) − δ(є)

Finally, for the second term, since M̂ b = Mb = 1, by induction, P{M̂ j ≠ M j} → 0 as n → ∞ for every j ∈ [1 : b − 1] if the given constraints on the rate are satisfied


Relay Channel Compress–Forward

Compress–Forward Lower Bound

In the decode–forward coding scheme, the relay recovers the entire message

[Diagram: M → X1 → relay (Y2 : X2), which compresses Y2 into Ŷ2 → Y3 → M̂]

If the channel from the sender to the relay is worse than the direct channel to the receiver, this requirement can reduce the rate below that of direct transmission (i.e., the relay is not used)

In the compress–forward coding scheme, the relay helps communication by sending a description of its received sequence to the receiver

Compress–Forward Lower Bound (Cover–El Gamal 1979, El Gamal–Mohseni–Zahedi 2006)

C ≥ max p(x1)p(x2)p(ŷ2|y2,x2) min{I(X1, X2; Y3) − I(Y2; Ŷ2 | X1, X2, Y3), I(X1; Ŷ2, Y3 | X2)}

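The compress–forward bound can be evaluated the same way as in the decode–forward sketch earlier, now with a test channel; below is a self-contained sketch (our own illustration) for the same toy DM-RC with Ŷ2 = Y2 ⊕ Bern(q), evaluated at one fixed choice of p(x1)p(x2)p(ŷ2|y2, x2):

```python
import numpy as np

def H(p, axes):
    """Entropy (bits) of the marginal of joint pmf array p on `axes`."""
    drop = tuple(i for i in range(p.ndim) if i not in axes)
    q = np.atleast_1d(p.sum(axis=drop)).ravel()
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

def cmi(p, a, b, c=()):
    """Conditional mutual information I(A; B | C) in bits."""
    return H(p, a + c) + H(p, b + c) - H(p, c) - H(p, a + b + c)

# Joint pmf over (x1, x2, y2, y3, yh); yh is the compressed output Y2-hat
p2, p3, q = 0.1, 0.2, 0.05
p = np.zeros((2,) * 5)
for x1 in range(2):
    for x2 in range(2):
        for y2 in range(2):
            for y3 in range(2):
                for yh in range(2):
                    f2 = p2 if y2 != x1 else 1 - p2
                    f3 = p3 if y3 != (x1 ^ x2) else 1 - p3
                    fq = q if yh != y2 else 1 - q
                    p[x1, x2, y2, y3, yh] = 0.25 * f2 * f3 * fq

cf = min(cmi(p, (0, 1), (3,)) - cmi(p, (2,), (4,), (0, 1, 3)),
         cmi(p, (0,), (4, 3), (1,)))  # I(X1; Y2-hat, Y3 | X2)
print(cf)
```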

Relay Channel Compress–Forward

Proof of Achievability

We use block Markov coding, joint typicality encoding, binning, and simultaneous nonunique decoding


é At the end of block j, the relay chooses a reconstruction sequence ŷn2(j) of the received sequence yn2(j)
é Since the receiver has side information yn3(j), we use binning to reduce the rate
é The bin index is sent to the receiver in block j + 1 via xn2(j + 1)
é At the end of block j + 1, the receiver recovers the bin index, and then m j and the compression index simultaneously


Relay Channel Compress–Forward

Proof of Achievability


Codebook generation:

é Fix p(x1)p(x2)p(ŷ2 | y2, x2) that attains the lower bound
é For j ∈ [1 : b], randomly and independently generate 2nR sequences xn1(m j) ∼ ∏ni=1 pX1(x1i), m j ∈ [1 : 2nR]
é Similarly generate 2nR2 sequences xn2(l j−1) ∼ ∏ni=1 pX2(x2i), l j−1 ∈ [1 : 2nR2]
é For each l j−1 ∈ [1 : 2nR2], randomly and conditionally independently generate 2nR̂2 sequences ŷn2(k j | l j−1) ∼ ∏ni=1 pŶ2|X2(ŷ2i | x2i(l j−1)), k j ∈ [1 : 2nR̂2]
é Codebooks: C j = {(xn1(m j), xn2(l j−1)) : m j ∈ [1 : 2nR], l j−1 ∈ [1 : 2nR2]}, j ∈ [1 : b]
é Partition the set [1 : 2nR̂2] into 2nR2 equal-size bins B(l j), l j ∈ [1 : 2nR2]

Relay Channel Compress–Forward

Block  1              2              3              . . .  b − 1                   b
X1     xn1(m1)        xn1(m2)        xn1(m3)        . . .  xn1(mb−1)               xn1(1)
Y2     ŷn2(k1|1), l1  ŷn2(k2|l1), l2 ŷn2(k3|l2), l3 . . .  ŷn2(kb−1|lb−2), lb−1
X2     xn2(1)         xn2(l1)        xn2(l2)        . . .  xn2(lb−2)               xn2(lb−1)
Y3                    l̂1, k̂1, m̂1     l̂2, k̂2, m̂2     . . .  l̂b−2, k̂b−2, m̂b−2        l̂b−1, k̂b−1, m̂b−1

Encoding:

é In block j, transmit xn1(m j) from codebook C j

Relay encoding:

é At the end of block j, find an index k j such that (yn2(j), ŷn2(k j | l j−1), xn2(l j−1)) ∈ Tє′(n)
é In block j + 1, transmit xn2(l j), where l j is the bin index of k j

Decoding:

é At the end of block j + 1, find the unique l̂ j such that (xn2(l̂ j), yn3(j + 1)) ∈ Tє(n)
é Find the unique m̂ j such that (xn1(m̂ j), xn2(l̂ j−1), ŷn2(k̂ j | l̂ j−1), yn3(j)) ∈ Tє(n) for some k̂ j ∈ B(l̂ j)
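The bin partition used by the relay has a simple explicit form; a minimal sketch (our own illustration, assuming nR2 and nR̂2 are integers with R̂2 ≥ R2, with R2hat standing for R̂2):

```python
def bin_of(k, n, R2, R2hat):
    """Bin index of compression index k when [1 : 2^{n*R2hat}] is partitioned
    into 2^{n*R2} equal-size contiguous bins B(l)."""
    bin_size = 2 ** (n * (R2hat - R2))
    return (k - 1) // bin_size + 1

n, R2, R2hat = 2, 1, 2
assert bin_of(1, n, R2, R2hat) == 1
assert bin_of(2 ** (n * R2hat), n, R2, R2hat) == 2 ** (n * R2)
```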

Relay Channel Compress–Forward

Analysis of the Probability of Error

Assume M j = 1 and let L j−1, L j, K j denote the indices chosen by the relay

The decoder makes an error only if one or more of the following events occur:

Ẽ(j) = {(Xn2(L j−1), Ŷn2(k j | L j−1), Yn2(j)) ∉ Tє′(n) for all k j ∈ [1 : 2nR̂2]}
Ẽ1(j − 1) = {L̂ j−1 ≠ L j−1}
Ẽ1(j) = {L̂ j ≠ L j}
E2(j) = {(Xn1(1), Xn2(L j−1), Ŷn2(K j | L j−1), Yn3(j)) ∉ Tє(n)}
E3(j) = {(Xn1(m j), Xn2(L j−1), Ŷn2(K j | L j−1), Yn3(j)) ∈ Tє(n) for some m j ≠ 1}
E4(j) = {(Xn1(m j), Xn2(L j−1), Ŷn2(k j | L j−1), Yn3(j)) ∈ Tє(n) for some k j ∈ B(L j), k j ≠ K j, m j ≠ 1}

Thus, the probability of error is bounded as

P(E(j)) = P{M̂ j ≠ 1}
≤ P(Ẽ(j)) + P(Ẽ1(j − 1)) + P(Ẽ1(j)) + P(E2(j) ∩ Ẽc(j) ∩ Ẽ1c(j − 1)) + P(E3(j)) + P(E4(j) ∩ Ẽ1c(j − 1) ∩ Ẽ1c(j))


Relay Channel Compress–Forward


By the independence of the codebooks and the covering lemma (with U ← X2, X ← Y2, X̂ ← Ŷ2), the first term → 0 as n → ∞ if R̂2 > I(Y2; Ŷ2 | X2) + δ(є′)

As in the multihop coding scheme, the next two terms P{L̂ j−1 ≠ L j−1} → 0 and P{L̂ j ≠ L j} → 0 as n → ∞ if R2 < I(X2; Y3) − δ(є)

The fourth term ≤ P{(Xn1(1), Xn2(L j−1), Ŷn2(K j | L j−1), Yn3(j)) ∉ Tє(n) | Ẽc(j)} → 0 as n → ∞ by the independence of the codebooks and the conditional typicality lemma


Relay Channel Compress–Forward

Covering Lemma

Let (U, X, X̂) ∼ p(u, x, x̂) and є′ < є

Let (Un, Xn) ∼ p(un, xn) be arbitrarily distributed such that

lim n→∞ P{(Un, Xn) ∈ Tє′(n)(U, X)} = 1

Let X̂n(m) ∼ ∏ni=1 pX̂|U(x̂i | ui), m ∈ A, where |A| ≥ 2nR, be conditionally independent of each other and of Xn given Un

Covering Lemma

There exists δ(є) → 0 as є → 0 such that

lim n→∞ P{(Un, Xn, X̂n(m)) ∉ Tє(n) for all m ∈ A} = 0,

if R > I(X; X̂ | U) + δ(є)



Relay Channel Compress–Forward


By the same independence and the packing lemma, P(E3(j)) → 0 as n → ∞ if R < I(X1; X2, Ŷ2, Y3) − δ(є) = I(X1; Ŷ2, Y3 | X2) − δ(є)

As in Wyner–Ziv coding, the last term

≤ P{(Xn1(m j), Xn2(L j−1), Ŷn2(k j | L j−1), Yn3(j)) ∈ Tє(n) for some k j ∈ B(1), m j ≠ 1},

which, by the independence of the codebooks, the joint typicality lemma, and the union bound, → 0 as n → ∞ if R + R̂2 − R2 < I(X1; Y3 | X2) + I(Ŷ2; X1, Y3 | X2) − δ(є)

Relay Channel Compress–Forward

Summary

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

è

Block Markov coding

Coherent cooperation

Decode–forward

Backward decoding

Compress–forward

El Gamal & Kim (Stanford & UCSD) Elements of NIT Tutorial, ISIT 2011 100 / 118

Page 104: Elements of Network Information Theory - Stanford …isl.stanford.edu/~abbas/presentations/enit.pdf · Elements of Network Information Theory ... Network Information Theory Book ...

Multicast Network

DM Multicast Network (MN)

Multicast communication network

[Network diagram: node 1 (source with message M) and nodes 2, 3, …, j, k, …, N communicating over p(y1, …, yN | x1, …, xN); each destination node k ∈ D outputs an estimate M̂k]

Assume an N-node DM-MN model (⨉Nj=1 Xj, p(yN | xN), ⨉Nj=1 Yj)

Topology of the network is defined through p(yN | xN)

A (2nR, n) code for the DM-MN:

é Message set: [1 : 2nR]
é Source encoder: x1i(m, y1i−1), i ∈ [1 : n]
é Relay encoder j ∈ [2 : N]: x ji(y ji−1), i ∈ [1 : n]
é Decoder k ∈ D: m̂k(ynk)


Multicast Network


Assume M ∼ Unif[1 : 2nR]

Average probability of error: Pe(n) = P{M̂k ≠ M for some k ∈ D}

R achievable if there exists a sequence of (2nR, n) codes with lim n→∞ Pe(n) = 0

Capacity C: supremum of achievable R

Special cases:
é DMC with feedback (N = 2, Y1 = Y2, X2 = ∅, and D = {2})
é DM-RC (N = 3, X3 = Y1 = ∅, and D = {3})
é Common-message DM-BC (X2 = ⋅⋅⋅ = XN = Y1 = ∅ and D = [2 : N])
é DM unicast network (D = {N})


Multicast Network Network Decode–Forward

Network Decode–Forward

Decode–forward for RC can be extended to MN

[Diagram: line network M → X1 → (Y2 : X2) → (Y3 : X3) → Y4 → M̂; message estimates propagate one hop per block]

Network Decode–Forward Lower Bound (Xie–Kumar 2005, Kramer–Gastpar–Gupta 2005)

C ≥ max p(xN) min k∈[1:N−1] I(Xk; Yk+1 | XNk+1)

For N = 3 and X3 = ∅, this reduces to the decode–forward lower bound for the DM-RC

Tight for a degraded DM-MN, i.e., p(yNk+2 | xN, yk+1) = p(yNk+2 | xNk+1, yk+1)

Holds for any D ⊆ [2 : N]

Can be improved by removing some relay nodes and relabeling the nodes


Multicast Network Network Decode–Forward

Proof of Achievability

We use block Markov coding and sliding window decoding (Carleial 1982)

We illustrate this scheme for DM-RC

Codebook generation, encoding, and relay encoding: same as before

Block  1          2           3           ⋅ ⋅ ⋅  b − 1            b
X1     xn1(m1|1)  xn1(m2|m1)  xn1(m3|m2)  ⋅ ⋅ ⋅  xn1(mb−1|mb−2)   xn1(1|mb−1)
Y2     m̃1         m̃2          m̃3          ⋅ ⋅ ⋅  m̃b−1
X2     xn2(1)     xn2(m̃1)     xn2(m̃2)     ⋅ ⋅ ⋅  xn2(m̃b−2)        xn2(m̃b−1)
Y3                m̂1          m̂2          ⋅ ⋅ ⋅  m̂b−2             m̂b−1

Decoding:

é At the end of block j + 1, find the unique m̂ j such that (xn1(m̂ j | m̂ j−1), xn2(m̂ j−1), yn3(j)) ∈ Tє(n) and (xn2(m̂ j), yn3(j + 1)) ∈ Tє(n) simultaneously


Multicast Network Network Decode–Forward

Analysis of the Probability of Error

Assume that M j−1 = M j = 1

The decoder makes an error only if one or more of the following events occur:

Ẽ(j − 1) = {M̃ j−1 ≠ 1}
Ẽ(j) = {M̃ j ≠ 1}
Ê(j − 1) = {M̂ j−1 ≠ 1}
E1(j) = {(Xn1(M j | M̃ j−1), Xn2(M̃ j−1), Yn3(j)) ∉ Tє(n) or (Xn2(M̃ j), Yn3(j + 1)) ∉ Tє(n)}
E2(j) = {(Xn1(m j | M̂ j−1), Xn2(M̂ j−1), Yn3(j)) ∈ Tє(n) and (Xn2(m j), Yn3(j + 1)) ∈ Tє(n) for some m j ≠ M j}

Thus, the probability of error is upper bounded as

P(E(j)) ≤ P(Ẽ(j − 1) ∪ Ẽ(j) ∪ Ê(j − 1) ∪ E1(j) ∪ E2(j))
≤ P(Ẽ(j − 1)) + P(Ẽ(j)) + P(Ê(j − 1)) + P(E1(j) ∩ Ẽc(j − 1) ∩ Ẽc(j) ∩ Êc(j − 1)) + P(E2(j) ∩ Ẽc(j))

By the independence of the codebooks, the LLN, the packing lemma, and induction, the first four terms tend to zero as n → ∞ if R < I(X1; Y2 | X2) − δ(є)

Multicast Network Network Decode–Forward

For the last term, consider

P(E2(j) ∩ Ẽc(j))
= P{(Xn1(m j | M̂ j−1), Xn2(M̂ j−1), Yn3(j)) ∈ Tє(n), (Xn2(m j), Yn3(j + 1)) ∈ Tє(n) for some m j ≠ 1, and M̃ j = 1}
≤ ∑ m j≠1 P{(Xn1(m j | M̂ j−1), Xn2(M̂ j−1), Yn3(j)) ∈ Tє(n), (Xn2(m j), Yn3(j + 1)) ∈ Tє(n), and M̃ j = 1}
(a)= ∑ m j≠1 P{(Xn1(m j | M̂ j−1), Xn2(M̂ j−1), Yn3(j)) ∈ Tє(n) and M̃ j = 1} ⋅ P{(Xn2(m j), Yn3(j + 1)) ∈ Tє(n) | M̃ j = 1}
≤ ∑ m j≠1 P{(Xn1(m j | M̂ j−1), Xn2(M̂ j−1), Yn3(j)) ∈ Tє(n)} ⋅ P{(Xn2(m j), Yn3(j + 1)) ∈ Tє(n) | M̃ j = 1}
(b)≤ 2nR ⋅ 2−n(I(X1;Y3|X2)−δ(є)) ⋅ 2−n(I(X2;Y3)−δ(є))

→ 0 as n → ∞ if R < I(X1; Y3 | X2) + I(X2; Y3) − 2δ(є) = I(X1, X2; Y3) − 2δ(є)

(a) {(Xn1(m j | M̂ j−1), Xn2(M̂ j−1), Yn3(j)) ∈ Tє(n)} and {(Xn2(m j), Yn3(j + 1)) ∈ Tє(n)} are conditionally independent given M̃ j = 1 for m j ≠ 1

(b) by the independence of the codebooks and the joint typicality lemma


Multicast Network Noisy Network Coding

Noisy Network Coding

Compress–forward for DM-RC can be extended to DM-MN

Theorem (Noisy Network Coding Lower Bound)

C ≥ max min k∈D min S: 1∈S, k∈Sc [I(X(S); Ŷ(Sc), Yk | X(Sc)) − I(Y(S); Ŷ(S) | XN, Ŷ(Sc), Yk)],

where the maximum is over all ∏Nk=1 p(xk) p(ŷk | yk, xk), Ŷ1 = ∅ by convention, X(S) denotes the inputs in S, and Ŷ(Sc) denotes the compressed outputs in Sc

Special cases:

é Compress–forward lower bound for the DM-RC (N = 3 and X3 = ∅)
é Network coding theorem for graphical MN (Ahlswede–Cai–Li–Yeung 2000)
é Capacity of deterministic MN with no interference (Ratnakar–Kramer 2006)
é Capacity of wireless erasure MN (Dana–Gowaikar–Palanki–Hassibi–Effros 2006)
é Lower bound for general deterministic MN (Avestimehr–Diggavi–Tse 2011)

Can be extended to Gaussian networks (giving the best known gap result) and to multiple messages (Lim–Kim–El Gamal–Chung 2011)


Multicast Network Noisy Network Coding

Proof of Achievability

We use several new ideas beyond compress–forward for DM-RC

é The source node sends the same message m ∈ [1 : 2nbR] over b blocks

é Relay node j sends the index of the compressed version Ŷnj of Ynj without binning

é Each receiver node performs simultaneous nonunique decoding of the message and compression indices from all b blocks

We illustrate this scheme for DM-RC

Codebook generation:

é Fix p(x1)p(x2)p(ŷ2 | y2, x2) that attains the lower bound
é For each j ∈ [1 : b], randomly and independently generate 2nbR sequences xn1(j, m) ∼ ∏ni=1 pX1(x1i), m ∈ [1 : 2nbR]
é Randomly and independently generate 2nR2 sequences xn2(l j−1) ∼ ∏ni=1 pX2(x2i), l j−1 ∈ [1 : 2nR2]
é For each l j−1 ∈ [1 : 2nR2], randomly and conditionally independently generate 2nR2 sequences ŷn2(l j | l j−1) ∼ ∏ni=1 pŶ2|X2(ŷ2i | x2i(l j−1)), l j ∈ [1 : 2nR2]
é C j = {(xn1(j, m), xn2(l j−1), ŷn2(l j | l j−1)) : m ∈ [1 : 2nbR], l j, l j−1 ∈ [1 : 2nR2]}, j ∈ [1 : b]


Multicast Network Noisy Network Coding

Block  1              2               3               ⋅ ⋅ ⋅  b − 1                    b
X1     xn1(1, m)      xn1(2, m)       xn1(3, m)       ⋅ ⋅ ⋅  xn1(b − 1, m)            xn1(b, m)
Y2     ŷn2(l1|1), l1  ŷn2(l2|l1), l2  ŷn2(l3|l2), l3  ⋅ ⋅ ⋅  ŷn2(lb−1|lb−2), lb−1     ŷn2(lb|lb−1), lb
X2     xn2(1)         xn2(l1)         xn2(l2)         ⋅ ⋅ ⋅  xn2(lb−2)                xn2(lb−1)
Y3                                                    ⋅ ⋅ ⋅                           m̂

Encoding:

é To send m ∈ [1 : 2nbR], transmit xn1(j, m) in block j

Relay encoding:

é At the end of block j, find an index l j such that (yn2(j), ŷn2(l j | l j−1), xn2(l j−1)) ∈ Tє′(n)
é In block j + 1, transmit xn2(l j)

Decoding:

é At the end of block b, find the unique m̂ such that (xn1(j, m̂), xn2(l j−1), ŷn2(l j | l j−1), yn3(j)) ∈ Tє(n) for all j ∈ [1 : b], for some l1, l2, …, lb


Multicast Network Noisy Network Coding

Analysis of the Probability of Error

Assume M = 1 and L1 = L2 = ⋅⋅⋅ = Lb = 1

The decoder makes an error only if one or more of the following events occur:

E1 = {(Yn2(j), Ŷn2(l j | 1), Xn2(1)) ∉ Tє′(n) for all l j, for some j ∈ [1 : b]}
E2 = {(Xn1(j, 1), Xn2(1), Ŷn2(1 | 1), Yn3(j)) ∉ Tє(n) for some j ∈ [1 : b]}
E3 = {(Xn1(j, m), Xn2(l j−1), Ŷn2(l j | l j−1), Yn3(j)) ∈ Tє(n) for all j ∈ [1 : b], for some (l1, …, lb) and some m ≠ 1}

Thus, the probability of error is upper bounded as

P(E) ≤ P(E1) + P(E2 ∩ E1c) + P(E3)

By the covering lemma and the union of events bound (over the b blocks), P(E1) → 0 as n → ∞ if R2 > I(Ŷ2; Y2 | X2) + δ(є′)

By the conditional typicality lemma and the union of events bound, P(E2 ∩ E1c) → 0 as n → ∞


Multicast Network Noisy Network Coding

Define E j(m, l j−1, l j) = {(Xn1(j, m), Xn2(l j−1), Ŷn2(l j | l j−1), Yn3(j)) ∈ Tє(n)}

Then

P(E3) = P(⋃ m≠1 ⋃ lb ⋂ bj=1 E j(m, l j−1, l j))
≤ ∑ m≠1 ∑ lb P(⋂ bj=1 E j(m, l j−1, l j))
= ∑ m≠1 ∑ lb ∏ bj=1 P(E j(m, l j−1, l j))
≤ ∑ m≠1 ∑ lb ∏ bj=2 P(E j(m, l j−1, l j))

If l j−1 = 1, then by the joint typicality lemma, P(E j) ≤ 2−n(I1−δ(є)), where I1 = I(X1; Ŷ2, Y3 | X2)

Similarly, if l j−1 ≠ 1, then P(E j) ≤ 2−n(I2−δ(є)), where I2 = I(X1, X2; Y3) + I(Ŷ2; X1, Y3 | X2)

Thus, if lb−1 = (l1, …, lb−1) has k 1s, then

∏ bj=2 P(E j(m, l j−1, l j)) ≤ 2−n(kI1+(b−1−k)I2−(b−1)δ(є))

Multicast Network Noisy Network Coding

Continuing with the bound,

∑ m≠1 ∑ lb ∏ bj=2 P(E j(m, l j−1, l j)) = ∑ m≠1 ∑ lb ∑ lb−1 ∏ bj=2 P(E j(m, l j−1, l j))
≤ ∑ m≠1 ∑ lb ∑ b−1 j=0 (b−1 choose j) 2n(b−1−j)R2 ⋅ 2−n(jI1+(b−1−j)I2−(b−1)δ(є))
= ∑ m≠1 ∑ lb ∑ b−1 j=0 (b−1 choose j) 2−n(jI1+(b−1−j)(I2−R2)−(b−1)δ(є))
≤ ∑ m≠1 ∑ lb ∑ b−1 j=0 (b−1 choose j) 2−n((b−1)(min{I1, I2−R2}−δ(є)))
≤ 2nbR ⋅ 2nR2 ⋅ 2b ⋅ 2−n(b−1)(min{I1, I2−R2}−δ(є)),

which → 0 as n → ∞ if R < ((b − 1)(min{I1, I2 − R2} − δ′(є)) − R2)/b

Finally, by eliminating R2 > I(Ŷ2; Y2 | X2) + δ(є′), substituting I1 and I2, and taking b → ∞, we have shown that P(E) → 0 as n → ∞ if

R < min{I(X1; Ŷ2, Y3 | X2), I(X1, X2; Y3) − I(Ŷ2; Y2 | X1, X2, Y3)} − δ′(є) − δ(є′)

This completes the proof of achievability for noisy network coding
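The elimination step deserves spelling out; in LaTeX form (our own restatement, not from the slides), substituting R2 = I(Ŷ2; Y2 | X2) and using the Markov chain Ŷ2 → (Y2, X2) → (X1, Y3):

```latex
\begin{align*}
I_2 - R_2 &= I(X_1, X_2; Y_3) + I(\hat{Y}_2; X_1, Y_3 \mid X_2) - I(\hat{Y}_2; Y_2 \mid X_2) \\
          &= I(X_1, X_2; Y_3) + I(\hat{Y}_2; X_1, Y_3 \mid X_2) - I(\hat{Y}_2; Y_2, X_1, Y_3 \mid X_2) \\
          &= I(X_1, X_2; Y_3) - I(\hat{Y}_2; Y_2 \mid X_1, X_2, Y_3)
\end{align*}
```

Here the second equality holds because the Markov chain gives I(Ŷ2; X1, Y3 | Y2, X2) = 0, and the third follows from the chain rule; hence min{I1, I2 − R2} is exactly the compress–forward expression in the theorem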

Multicast Network Noisy Network Coding

Summary

1. Typical Sequences

2. Point-to-Point Communication

3. Multiple Access Channel

4. Broadcast Channel

5. Lossy Source Coding

6. Wyner–Ziv Coding

7. Gelfand–Pinsker Coding

8. Wiretap Channel

9. Relay Channel

10. Multicast Network

è Network decode–forward

Sliding window decoding

Noisy network coding

Sending the same message multiple times using independent codebooks

Beyond packing lemma


Conclusion

Conclusion

Presented a unified approach to achievability proofs for DM networks:

é Typicality and elementary lemmas

é Coding techniques: random coding, joint typicality encoding/decoding, simultaneous (nonunique) decoding, superposition coding, binning, multicoding

Results can be extended to Gaussian models via discretization procedures

Lossless source coding is a corollary of lossy source coding

Network Information Theory book:

é Comprehensive coverage of this approach

é More advanced coding techniques and analysis tools

é Converse techniques (DM and Gaussian)

é Open problems

Although the theory is far from complete, we hope that our approach will

é Make the subject accessible to students, researchers, and communication engineers

é Help in the quest for a unified theory of information flow in networks


References

References

Ahlswede, R. (1971). Multiway communication channels. In Proc. 2nd Int. Symp. Inf. Theory, Tsahkadsor, Armenian SSR, pp. 23–52.

Ahlswede, R., Cai, N., Li, S.-Y. R., and Yeung, R. W. (2000). Network information flow. IEEE Trans. Inf. Theory, 46(4), 1204–1216.

Avestimehr, A. S., Diggavi, S. N., and Tse, D. N. C. (2011). Wireless network information flow: A deterministic approach. IEEE Trans. Inf. Theory, 57(4), 1872–1905.

Bergmans, P. P. (1973). Random coding theorem for broadcast channels with degraded components. IEEE Trans. Inf. Theory, 19(2), 197–207.

Carleial, A. B. (1982). Multiple-access channels with different generalized feedback signals. IEEE Trans. Inf. Theory, 28(6), 841–850.

Costa, M. H. M. (1983). Writing on dirty paper. IEEE Trans. Inf. Theory, 29(3), 439–441.

Cover, T. M. (1972). Broadcast channels. IEEE Trans. Inf. Theory, 18(1), 2–14.

Cover, T. M. and El Gamal, A. (1979). Capacity theorems for the relay channel. IEEE Trans. Inf. Theory, 25(5), 572–584.

Csiszár, I. and Körner, J. (1978). Broadcast channels with confidential messages. IEEE Trans. Inf. Theory, 24(3), 339–348.

Dana, A. F., Gowaikar, R., Palanki, R., Hassibi, B., and Effros, M. (2006). Capacity of wireless erasure networks. IEEE Trans. Inf. Theory, 52(3), 789–804.


References

References (cont.)

El Gamal, A., Mohseni, M., and Zahedi, S. (2006). Bounds on capacity and minimum energy-per-bit for AWGN relay channels. IEEE Trans. Inf. Theory, 52(4), 1545–1561.

Elias, P., Feinstein, A., and Shannon, C. E. (1956). A note on the maximum flow through a network. IRE Trans. Inf. Theory, 2(4), 117–119.

Ford, L. R., Jr. and Fulkerson, D. R. (1956). Maximal flow through a network. Canad. J. Math., 8(3), 399–404.

Gelfand, S. I. and Pinsker, M. S. (1980). Coding for channel with random parameters. Probl. Control Inf. Theory, 9(1), 19–31.

Han, T. S. and Kobayashi, K. (1981). A new achievable rate region for the interference channel. IEEE Trans. Inf. Theory, 27(1), 49–60.

Heegard, C. and El Gamal, A. (1983). On the capacity of computer memories with defects. IEEE Trans. Inf. Theory, 29(5), 731–739.

Kramer, G., Gastpar, M., and Gupta, P. (2005). Cooperative strategies and capacity theorems for relay networks. IEEE Trans. Inf. Theory, 51(9), 3037–3063.

Liao, H. H. J. (1972). Multiple access channels. Ph.D. thesis, University of Hawaii, Honolulu, HI.

Lim, S. H., Kim, Y.-H., El Gamal, A., and Chung, S.-Y. (2011). Noisy network coding. IEEE Trans. Inf. Theory, 57(5), 3132–3152.

McEliece, R. J. (1977). The Theory of Information and Coding. Addison-Wesley, Reading, MA.


References

References (cont.)

Orlitsky, A. and Roche, J. R. (2001). Coding for computing. IEEE Trans. Inf. Theory, 47(3), 903–917.

Ratnakar, N. and Kramer, G. (2006). The multicast capacity of deterministic relay networks with no interference. IEEE Trans. Inf. Theory, 52(6), 2425–2432.

Shannon, C. E. (1948). A mathematical theory of communication. Bell Syst. Tech. J., 27(3), 379–423, 27(4), 623–656.

Shannon, C. E. (1959). Coding theorems for a discrete source with a fidelity criterion. In IRE Int. Conv. Rec., vol. 7, part 4, pp. 142–163. Reprint with changes (1960). In R. E. Machol (ed.) Information and Decision Processes, pp. 93–126. McGraw-Hill, New York.

Shannon, C. E. (1961). Two-way communication channels. In Proc. 4th Berkeley Symp. Math. Statist. Probab., vol. I, pp. 611–644. University of California Press, Berkeley.

Slepian, D. and Wolf, J. K. (1973a). Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory, 19(4), 471–480.

Slepian, D. and Wolf, J. K. (1973b). A coding theorem for multiple access channels with correlated sources. Bell Syst. Tech. J., 52(7), 1037–1076.

Willems, F. M. J. and van der Meulen, E. C. (1985). The discrete memoryless multiple-access channel with cribbing encoders. IEEE Trans. Inf. Theory, 31(3), 313–327.

Wyner, A. D. (1975). The wire-tap channel. Bell Syst. Tech. J., 54(8), 1355–1387.


References

References (cont.)

Wyner, A. D. and Ziv, J. (1976). The rate–distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theory, 22(1), 1–10.

Xie, L.-L. and Kumar, P. R. (2005). An achievable rate for the multiple-level relay channel. IEEE Trans. Inf. Theory, 51(4), 1348–1358.

Zeng, C.-M., Kuhlmann, F., and Buzo, A. (1989). Achievability proof of some multiuser channel coding theorems using backward decoding. IEEE Trans. Inf. Theory, 35(6), 1160–1165.
