Integer-Forcing Source Coding
Or Ordentlich (MIT), joint work with Uri Erez (TAU)
July 6th, 2016, Wireless and Number Theory Workshop
York, England
Or Ordentlich Integer-Forcing Source Coding
The Communication Problem
Source coding: representing an information source with minimum # bits
Channel coding: sending i.i.d. Bernoulli(1/2) bits over a noisy channel
Shannon
For i.i.d. information sources and discrete memoryless channels there is no loss (asymptotically) in solving the source and channel coding problems separately
Most talks in the workshop deal with channel coding
This talk is about source coding
watch out!
Channel coding: large rate is good
Source coding: small rate is good
Lossy Compression
Many information sources of interest are analog in nature: audio, video, pictures, EM waves
There are many good reasons to represent them digitally: resilience to noise/aging, cheap storage, fast access, efficient processing, ...
Analog-to-Digital Conversion
Continuous-to-discrete (sampling): any band-limited signal can be represented by a discrete sequence (Nyquist's Theorem)
Quantization of samples to a finite alphabet (w.l.o.g. bits): storage is finite, so the average number of bits/sample is limited
Something is always lost in the conversion from analog-to-digital
The more bits we allocate for storing the signal, the smaller the loss
Rate-Distortion Theory
We can restrict attention to discrete signals
Let x[k], k = 1, …, n, be the discrete-time sampled signal
Assume further that we have nR bits for storing x = (x[1], …, x[n])
This is done using two functions:
Encoder: E : ℝⁿ → {1, …, 2^{nR}}
Decoder: D : {1, …, 2^{nR}} → ℝⁿ
We would like x and x̂ ≜ D(E(x)) to be as close as possible
Need to specify distortion metric. We will use squared loss
D ≜ (1/n) ∑_{k=1}^{n} (x[k] − x̂[k])²
Goal: design E and D to minimize D
When designing E and D the signal x is not known
Need to assume distribution x ∼ P and design w.r.t. P
Goal is to minimize
D = (1/n) E_{x∼P} ∑_{k=1}^{n} (x[k] − x̂[k])²
where x̂ = D(E(x)).
Common assumption: x[k] ∼ N(0, σ²) i.i.d.
Rate-distortion theorem (Shannon)
The minimum number of bits/sample for attaining avg. distortion D is
R(D) = ½ log(σ²/D)
Scalar Uniform Quantization 1
Achieving optimal R(D) requires n → ∞
In practice, delay and computational complexity are limited
Ideally, we would like to work with n = 1
This is also the case for Analog-to-Digital Converters (ADCs)
Simplest choice is a uniform scalar quantizer
Q(x) = (∆ − 1)/2     if x > ∆/2
       round(x)       if −∆/2 ≤ x ≤ ∆/2
       −(∆ − 1)/2    if x < −∆/2
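A minimal Python sketch of this quantizer (the function name is mine; note Python's `round` uses round-half-to-even, which only affects tie inputs):

```python
def uniform_quantizer(x, R):
    """Uniform scalar quantizer with unit step and rate R bits.
    No-overload region is [-Delta/2, Delta/2] with Delta = 2**R;
    inputs beyond it are clipped to the extreme levels +-(Delta - 1)/2."""
    delta = 2 ** R
    if x > delta / 2:
        return (delta - 1) / 2
    if x < -delta / 2:
        return -(delta - 1) / 2
    return round(x)
```

For R = 3 the quantizer has 2³ = 8 levels and saturates outside [−4, 4].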
[Figure: staircase plot of Q(x) with 2^R unit-step levels; inputs outside [−∆/2, ∆/2] fall in the overload region]
No Overload Region: [−∆/2, ∆/2], with ∆ = 2^R
Dynamic range = [−∆/2, ∆/2]
Overload probability = Pr(|X| > ∆/2) = 2 erfc(∆/(2σ))
Dithered quantization
The quantization error e = Q(x)− x is a deterministic function of x
⇒ complicates performance analysis
This can be mitigated by adding randomization to the quantizer
In practice, this randomization is seldom needed
Let u ∼ Uniform(−½, ½), u independent of x
Set x̂ = Q(x + u) − u
Assuming no overload, we have
e = x̂ − x = Q(x + u) − (x + u) = round(x + u) − (x + u)
It is easy to see that round(x + u) − (x + u) is Uniform(−½, ½) and independent of x
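A quick empirical check of this claim (a sketch with my own parameter choices): the dithered error stays in [−½, ½] with mean ≈ 0 and variance ≈ 1/12, whatever the input distribution.

```python
import random

def dithered_quantize(x, u):
    """Subtractive dithered quantization with unit step (no overload):
    x_hat = round(x + u) - u, so the error is round(x + u) - (x + u)."""
    return round(x + u) - u

random.seed(0)
errors = []
for _ in range(100_000):
    x = random.gauss(0.0, 1.0)       # source sample
    u = random.uniform(-0.5, 0.5)    # dither, independent of x
    errors.append(dithered_quantize(x, u) - x)

mean = sum(errors) / len(errors)
var = sum(e * e for e in errors) / len(errors)
# Empirically: mean ~ 0, var ~ 1/12 (the moments of Uniform(-1/2, 1/2))
```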
Scalar Uniform Quantization 2
[Figure: staircase plot of Q(x) with 2^R levels of step √(12d); inputs outside the no-overload region [−∆/2, ∆/2], ∆ = 2^R √(12d), fall in the overload region]
Under no overload + dithered quantization:
X_n → Q(·) → X̂_n = X_n + N_n, with N_n ∼ Uniform(−√(12d)/2, √(12d)/2)
E(X̂_n − X_n)² = d
Pr(OL) = 2 erfc(∆/(2σ)) ≤ e^{−∆²/(8σ²)} = e^{−(3/2)·2^{2(R − ½ log(σ²/d))}}
Conclusion:
For x ∼ N (0, σ2), a scalar uniform quantizer with rate
R = δ + ½ log(σ²/d)
(the ½ log(σ²/d) term is the Shannon rate-distortion function)
achieves distortion ≈ d whenever overload does not occur, and
Pr(OL) ≤ e^{−(3/2)·2^{2δ}}
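A Monte Carlo sanity check of this conclusion (my own sketch; σ² = 1, d = 0.01, δ = 1 are arbitrary choices):

```python
import math
import random

random.seed(1)
sigma2, d, delta_bits = 1.0, 0.01, 1.0
R = delta_bits + 0.5 * math.log2(sigma2 / d)   # rate from the conclusion above
step = math.sqrt(12 * d)   # quantizer step: uniform error variance step^2/12 = d
Delta = (2 ** R) * step    # dynamic range is [-Delta/2, Delta/2]

overloads, sq_err, n = 0, 0.0, 200_000
for _ in range(n):
    x = random.gauss(0.0, math.sqrt(sigma2))
    u = random.uniform(-step / 2, step / 2)    # dither
    if abs(x + u) > Delta / 2:
        overloads += 1                         # overload event
        continue
    x_hat = step * round((x + u) / step) - u   # dithered quantization
    sq_err += (x_hat - x) ** 2

distortion = sq_err / (n - overloads)  # ~ d on the no-overload samples
p_overload = overloads / n             # <= exp(-(3/2) * 2**(2*delta_bits)) ~ 0.0025
```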
Main Goal:
Can we get close-to-optimal performance using scalar quantizers also when the source is a Gaussian vector x ∼ N(0, Kxx)?
Quantization of Gaussian Vectors
[src1; src2] ∼ N(0, [σ², ρσ²; ρσ², σ²])
src1 → E1 (rate R), src2 → E2 (rate R) → D → (ŝrc1, d), (ŝrc2, d)
Naive Approach (typically used in practice)
Ignore correlation and use scalar quantizers with ∆ ∝ σ so that with high probability both src1, src2 ∈ [−∆/2, ∆/2]
⇒ R ∝ ½ log(σ²/d)
Fundamental IT Limits (Berger-Tung, Wagner et al)
With unlimited delay and computational complexity, it is possible to achieve* R ≈ ½ · ½ log det(I + (1/d) Kxx)
Goal 1 - Universal Quantization
[x1; x2] ∼ N(0, Kxx)
x1 → E1 (rate R), x2 → E2 (rate R) → D → (x̂1, d), (x̂2, d)
Goal:
Simple, identical, universal, non-cooperating quantizers E1, E2
Simple decoder D that can depend on Kxx
Good performance for all Kxx with the same log det(I + (1/d) Kxx)
Extreme cases:
K1_xx = [1 0; 0 1],   K2_xx = [a 0; 0 0],   K3_xx = [b b; b b]
Willing to apply a universal linear transformation before quantization
Goal 2 - Distributed Lossy Compression
x1 → E1 (rate R1), …, xK → EK (rate RK) → D → (x̂1, d1), …, (x̂K, dK)
Fundamental limits understood in some cases
Inner and outer bounds known
Some applications require
Extremely simple encoders/decoder
n = 1
We restrict attention to:
Gaussian sources x ∼ N (0,Kxx )
One-shot compression - block length is 1
Symmetric rates R1 = · · · = RK = R
Symmetric distortions d1 = · · · = dK = d
MSE distortion measure: E(xk − x̂k)² ≤ d
Goal and Means
Goal
Simple encoders: uniform scalar quantizers
Decoupled decoding
Performance close to best known inner bounds (Berger-Tung)
Binning:
Well understood for large blocklengths, less for short blocks
Requires sophisticated joint decoding techniques
Scalar Modulo
A simple 1-D binning operation
Allows for efficient decoding using integer-forcing
Integer-Forcing Source Coding: Overview
Basic Idea: Rather than solving the problem
x1 → E1 (rate R), …, xK → EK (rate R) → D → x̂1, …, x̂K
First solve
x1 → E1 (rate R), …, xK → EK (rate R) → D → ∑_{m=1}^{K} a_{1m} x_m, …, ∑_{m=1}^{K} a_{Km} x_m
and then invert the equations to get x̂1, …, x̂K
Problem reduces to simultaneous distributed compression of K linear combinations
Can be efficiently solved with small rates for certain choices of coefficients
Equation coefficients can be chosen to optimize performance
Distributed Compression of Integer Linear Combination
x1 → E1 (rate R), …, xK → EK (rate R) → D → a^T x
Scalar Quantization
xi → Q(·) → x̂i   [uniform quantizer with step √(12d)]
High resolution/dithered quantization:
x̂i = xi + ui, where ui ∼ Uniform([−√(12d)/2, √(12d)/2)), ui independent of xi
E(x̂i − xi)² = d
Modulo Scalar Quantization
xi → Q(·) → mod ∆ → x*i   [the quantized value is folded into a single cell of width ∆]
∆ = 2^R √(12d) ⟹ compression rate is R
High resolution/dithered quantization: x*i = [xi + ui]*
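A sketch of the modulo scalar quantizer in Python (helper names are mine; [·]* denotes reduction to [−∆/2, ∆/2)):

```python
import math

def mod_star(t, Delta):
    """The [.]* operation: reduce t to the interval [-Delta/2, Delta/2)."""
    return (t + Delta / 2) % Delta - Delta / 2

def modulo_quantize(x, u, d, R):
    """Modulo scalar quantizer: dithered uniform quantization with step
    sqrt(12 d), then reduction mod Delta = 2**R * sqrt(12 d).
    The output lives in one cell of width Delta, hence costs R bits."""
    step = math.sqrt(12 * d)
    Delta = (2 ** R) * step
    x_hat = step * round((x + u) / step) - u   # ordinary dithered quantization
    return mod_star(x_hat, Delta)
```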
Encoders
Each encoder is a modulo scalar quantizer with rate R: produces x*k
Simple modulo property
For any set of integers a1, …, aK and real numbers x1, …, xK:
[∑_{k=1}^{K} ak xk]* = [∑_{k=1}^{K} ak x*k]*
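This identity is easy to confirm numerically (a sketch; mod_star implements [·]*):

```python
import random

def mod_star(t, Delta):
    """Reduce t to [-Delta/2, Delta/2)."""
    return (t + Delta / 2) % Delta - Delta / 2

random.seed(2)
Delta = 3.0
for _ in range(1000):
    a = [random.randint(-5, 5) for _ in range(4)]    # integer coefficients
    x = [random.uniform(-10, 10) for _ in range(4)]  # arbitrary reals
    lhs = mod_star(sum(ak * xk for ak, xk in zip(a, x)), Delta)
    rhs = mod_star(sum(ak * mod_star(xk, Delta) for ak, xk in zip(a, x)), Delta)
    # Equal modulo Delta; floating point can wrap a value sitting on a cell edge
    diff = abs(lhs - rhs)
    assert min(diff, Delta - diff) < 1e-9
```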
Decoder
Gets: x*1, …, x*K
Outputs the estimate
\widehat{a^T x} = [∑_{k=1}^{K} ak x*k]* = [∑_{k=1}^{K} ak x̂k]* = [a^T(x + u)]*
Compression of Integer Linear Combination - Pe
\widehat{a^T x} = [a^T(x + u)]*
\widehat{a^T x} = a^T x + a^T u  if a^T(x + u) ∈ [−∆/2, ∆/2);  error otherwise
Pe is small if ∆ / √(Var(a^T(x + u))) is large
∆ grows exponentially with R
Pe ≤ 2 exp{−(3/2)·2^{2(R − ½ log(a^T(Kxx + dI)a / d))}}
For a with small Var(a^T(x + u)) we can take small R
Integer-Forcing Source Coding
x1 → E1 (rate R), …, xK → EK (rate R) → D → ∑_{m=1}^{K} a_{1m} x_m, …, ∑_{m=1}^{K} a_{Km} x_m
Need to estimate K linearly independent integer linear combinations
If all combinations estimated without error, can compute
x̂ = A⁻¹ \widehat{Ax} = A⁻¹(Ax + Au) = x + u
Pe ≤ 2K exp{−(3/2)·2^{2(R − ½ log(max_{m=1,…,K} a_m^T(Kxx + dI)a_m / d))}}
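The whole pipeline for K = 2 fits in a short simulation (entirely my own sketch: the correlated source model, the coefficient matrix A = [[1, 0], [1, −1]], and the 2-bit rate margin δ are illustrative choices):

```python
import math
import random

def mod_star(t, Delta):
    """Reduce t to [-Delta/2, Delta/2)."""
    return (t + Delta / 2) % Delta - Delta / 2

random.seed(3)
d = 1e-4                            # target per-source MSE
Kxx = [[1.0, 1.0], [1.0, 1.0025]]   # model: x2 = x1 + N(0, 0.05^2)
A = [[1, 0], [1, -1]]               # integer coefficient matrix (invertible)

def quad(a, M):
    return sum(a[i] * M[i][j] * a[j] for i in range(2) for j in range(2))

# Rate from the bound above: R = delta + max_m (1/2) log2(a_m^T (Kxx+dI) a_m / d)
KdI = [[Kxx[i][j] + (d if i == j else 0.0) for j in range(2)] for i in range(2)]
R = 2.0 + 0.5 * math.log2(max(quad(row, KdI) for row in A) / d)

step = math.sqrt(12 * d)
Delta = (2 ** R) * step

max_err, n = 0.0, 20_000
for _ in range(n):
    x1 = random.gauss(0.0, 1.0)
    x2 = x1 + random.gauss(0.0, 0.05)
    u = [random.uniform(-step / 2, step / 2) for _ in range(2)]
    # Encoders: independent modulo scalar quantizers, R bits each
    xs = [mod_star(step * round((xi + ui) / step) - ui, Delta)
          for xi, ui in zip((x1, x2), u)]
    # Decoder: recover the integer combinations via the modulo identity ...
    c = [mod_star(sum(a * s for a, s in zip(row, xs)), Delta) for row in A]
    # ... then invert A = [[1, 0], [1, -1]]: x1 = c1, x2 = c1 - c2
    x1_hat, x2_hat = c[0], c[0] - c[1]
    max_err = max(max_err, abs(x1_hat - x1), abs(x2_hat - x2))

# With no wraparound errors, every estimate is within step/2 of its source
```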
Integer-Forcing Source Coding - Performance
Let
R_IF(A, d) ≜ ½ log( max_{m=1,…,K} a_m^T (I + (1/d) Kxx) a_m )
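A brute-force sketch of evaluating and minimizing this quantity for K = 2 (my own illustration; finding good A is a lattice problem, but exhaustive search over small integer entries suffices here). The covariance below is rank one (x2 = 0.6·x1), so a good A buys a visibly smaller rate than A = I:

```python
import itertools
import math

def rif(A, Kxx, d):
    """R_IF(A, d) = (1/2) log2( max_m a_m^T (I + Kxx/d) a_m )."""
    K = len(Kxx)
    M = [[(1.0 if i == j else 0.0) + Kxx[i][j] / d for j in range(K)]
         for i in range(K)]
    quad = lambda a: sum(a[i] * M[i][j] * a[j] for i in range(K) for j in range(K))
    return 0.5 * math.log2(max(quad(row) for row in A))

def best_rif_2x2(Kxx, d, c=2):
    """Exhaustive search over invertible 2x2 integer A with entries in [-c, c]."""
    best_r, best_A = float('inf'), None
    rows = list(itertools.product(range(-c, c + 1), repeat=2))
    for r1 in rows:
        for r2 in rows:
            if r1[0] * r2[1] - r1[1] * r2[0] == 0:   # singular, skip
                continue
            r = rif([list(r1), list(r2)], Kxx, d)
            if r < best_r:
                best_r, best_A = r, [list(r1), list(r2)]
    return best_r, best_A

Kxx = [[1.0, 0.6], [0.6, 0.36]]   # rank one: x2 = 0.6 * x1
d = 0.01
r_identity = rif([[1, 0], [0, 1]], Kxx, d)   # ~3.33 bits
r_best, A_best = best_rif_2x2(Kxx, d)        # ~2.08 bits
```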
Theorem
Let R = R_IF(A, d) + δ. IF source coding produces estimates with average MSE distortion d for all x1, …, xK with probability > 1 − 2K exp{−(3/2)·2^{2δ}}
Can minimize compression rate by minimizing RIF(A, d) w.r.t. A
Integer-Forcing Source Coding: Example
x ∼ N(0, Kxx), Kxx = I + SNR·HH^T, SNR = 20 dB and H ∈ ℝ^{8×2}
[Figure: E(R) in bits vs. 1/d in dB, comparing Naive Compression, Symmetric Successive Wyner-Ziv Coding, R_IF(d), and the Berger-Tung Benchmark]
Back to Motivation 1
How close is RIF(d) to the optimal performance?
Usually very close to the performance of the Berger-Tung inner bound.
But... the gap can be arbitrarily large.
However, if we change the setting...
this obstacle can be overcome.
[x1; x2] ∼ N(0, Kxx)
x1, x2 → P → E1 (rate R), E2 (rate R) → D → (x̂1, d), (x̂2, d)
Requirements
Universal precoding matrix P (does not depend on Kxx)
R_IF(d) ≤ const + (1/(2K)) log det(I + (1/d) Kxx) for all Kxx
Price of universality - need to jointly encode K realizations
Space-Time Source Coding
x11, x12, x21, x22 → P → IF enc 1, …, IF enc 4 (rate R each) → IF Decoder → (x̂11, d), (x̂12, d), (x̂21, d), (x̂22, d)
Space-Time Source Coding - Performance Guarantees
Let P be a generating matrix of a "perfect" linear dispersion space-time code, with minimum determinant δmin(C^ST_∞)
Theorem
For any source with covariance matrix Kxx, the rate-distortion function of space-time integer-forcing source coding with precoding matrix P is bounded by
R_IF(d) < (1/(2K)) log det(I + (1/d) Kxx) + Γ(K, δmin(C^ST_∞))
where Γ(K, δmin(C^ST_∞)) ≜ 2K² log(2K²) + K log(1/δmin(C^ST_∞))
Remark: For K = 2, the golden-code precoding matrix has δmin(C^ST_∞) = 1/5
Example
K1_xx = [1 0; 0 1],   K2_xx = [a 0; 0 0],   K3_xx = [b b; b b]
a and b are chosen such that
(1/(2K)) log det(I + (1/d) K1_xx) = (1/(2K)) log det(I + (1/d) K2_xx) = (1/(2K)) log det(I + (1/d) K3_xx)
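Indeed, a and b can be solved for in closed form; a small check (d = 0.25 is an arbitrary choice, log base 2):

```python
import math

def quarter_logdet(K, d):
    """(1/(2K)) log2 det(I + K/d) for a 2x2 covariance K, i.e. (1/4) log2 det."""
    m = [[(1.0 if i == j else 0.0) + K[i][j] / d for j in range(2)]
         for i in range(2)]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return 0.25 * math.log2(det)

d = 0.25
# det(I + K1/d) = (1 + 1/d)^2
# det(I + K2/d) = 1 + a/d    => a = d * ((1 + 1/d)**2 - 1)
# det(I + K3/d) = 1 + 2b/d   => b = a / 2
a = d * ((1 + 1 / d) ** 2 - 1)   # a = 6 for d = 0.25
b = a / 2                        # b = 3
K1 = [[1.0, 0.0], [0.0, 1.0]]
K2 = [[a, 0.0], [0.0, 0.0]]
K3 = [[b, b], [b, b]]
r1, r2, r3 = (quarter_logdet(K, d) for K in (K1, K2, K3))
# All three equal 0.25 * log2(25)
```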
[Figure: R in bits vs. 1/d in dB, showing R1_IF(d), R2_IF(d), R3_IF(d) and the common benchmark (1/(2K)) log det(I + (1/d) Kxx)]
Further Applications of Integer-Forcing Source Coding:
Compress-and-forward for relay networks (C-RAN)
Analog-to-digital converters
Thanks for your attention!