Entropy Coding of Video Encoded by Compressive Sensing
Yen-Ming Mark Lai, University of Maryland, College Park, MD([email protected])
Razi Haimi-Cohen, Alcatel-Lucent Bell Labs, Murray Hill, NJ([email protected])
August 11, 2011
[System diagram] Input video → break video into blocks → take compressed sensed measurements → quantize measurements → arithmetic encode → channel → arithmetic decode → L1 minimization → deblock → output video
[Figure: toy example. Input pixel values (integers between 0 and 255) are combined with ± signs to form a CS measurement (an integer between -1275 and 1275).]
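The quoted ranges can be sanity-checked with a toy sketch (the five pixel values and the ±1 signs here are illustrative assumptions, not from the slides):

```python
import random

# One +/-1 compressive measurement of five 8-bit pixels. Since each pixel
# lies in [0, 255], a 5-pixel measurement is an integer in
# [-5*255, 5*255] = [-1275, 1275].
pixels = [5, 3, 4, 3, 1]                          # example pixel values
signs = [random.choice((-1, 1)) for _ in pixels]  # one row of a +/-1 sensing matrix
measurement = sum(s * p for s, p in zip(signs, pixels))
print(measurement)
```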
Since CS measurements contain noise from pixel quantization, quantize at most to the standard deviation of this noise:

σ = √(N/12)

where σ is the standard deviation of the noise from pixel quantization in the CS measurements, and N is the total number of pixels in the video block.
CS measurements are “democratic” **: each measurement carries the same amount of information, regardless of its magnitude.
** “Democracy in Action: Quantization, Saturation, and Compressive Sensing,” Jason N. Laska, Petros T. Boufounos, Mark A. Davenport, and Richard G. Baraniuk (Rice University, August 2009)
What to do with values outside the range of the quantizer?
Discard them; the PSNR loss is small since such values occur rarely.
[Figure: quantizer range marked on the measurement distribution.]
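A minimal sketch of this discard policy, assuming a uniform quantizer centered on the block mean (the function name and parameters are hypothetical, not the authors' code):

```python
# Uniform quantizer covering +/- R around the mean; measurements outside
# the range are discarded rather than saturated.
def quantize(measurements, mean, step, R):
    """Return (indices kept, quantized bin indices); drop out-of-range values."""
    kept, bins = [], []
    for i, m in enumerate(measurements):
        d = m - mean
        if abs(d) > R:
            continue  # discard: occurrence is rare, small PSNR loss
        kept.append(i)
        bins.append(round(d / step))
    return kept, bins

kept, bins = quantize([10.0, 3.0, -200.0, 7.5], mean=0.0, step=2.0, R=50.0)
print(kept, bins)  # the -200.0 measurement is discarded
```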
[Plots: PSNR and bit rate vs. the ratio of CS measurements to number of input values (0.15, 0.25, 0.35, 0.45); vs. the “normalized” quantization step multiplier (1, 10, 100, 200, 500); and PSNR, bit rate, and processing time vs. the range of the quantizer (1.0, 1.5, 2.0, 3.0 standard deviations of the measured Gaussian distribution).]
Processing Time
• 6 cores, 100 GB RAM
• 80 simulations (5 ratios, 4 steps, 4 ranges)
• 22 hours total
– 17 minutes per simulation
– 8.25 minutes per second of video
How often do large values occur in practice?
Out of 2.7 million CS measurements, 0.037% fell outside the quantizer range, versus 0.135% predicted theoretically.
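As a guess at where the theoretical figure may come from (an assumption, not stated on the slide): 0.135% is the one-sided Gaussian tail beyond 3 standard deviations.

```python
import math

def tail(k):
    """P(X > k*sigma) for a standard normal X."""
    return 0.5 * math.erfc(k / math.sqrt(2))

print(f"{tail(3):.5f}")  # 0.00135, i.e. 0.135%
```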
Discarding values comes at bit rate cost
[Figure: encoded bitstreams with the discarded out-of-range values marked.]
Best Compression (Entropy) of Quantized Gaussian Variable X

H(X̂) = h(X) − log₂ √(N/12) = ½ log₂(2πeσ²) − ½ log₂(N/12) ≈ 9.4 bits

Arithmetic coding is a viable option!
At a fixed bit rate, which quantization should we choose?
18.5 minutes, 121 bins
2.1 minutes, 78 bins
2.1 minutes, 78 bins
Future Work
• Tune decoder to take quantization noise into account
• Make use of out-of-range measurements
• Improve computational efficiency of the arithmetic coder
For each block, the following are sent over the channel: 1) the output of the arithmetic encoder, 2) the mean and variance, 3) the DC value, and 4) a sensing matrix identifier.
“News” Test Video Input
• Block specifications
– 64 width, 64 height, 4 frames (16,384 pixels)
– Input: 288 width, 352 height, 4 frames (30 blocks)
• Sampling Matrix
– Walsh Hadamard
• Compressed Sensed Measurements
– 10% of total pixels = 1638 measurements
Given a discrete random variable X, the fewest number of bits (entropy) needed to encode X is given by:

H(X) = Σᵢ p(xᵢ) log₂(1/p(xᵢ))
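The definition above can be sketched directly:

```python
import math

# H(X) = sum_i p(x_i) * log2(1 / p(x_i)), skipping zero-probability symbols
def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits for a uniform 4-symbol source
```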
For a continuous random variable X, the differential entropy is given by:

h(X) = ∫ p(x) log₂(1/p(x)) dx
Differential Entropy of Gaussian

h(X) = ½ log₂(2πeσ²)

• a function of the variance only
• the Gaussian maximizes entropy for fixed variance, i.e. h(X′) ≤ h(X) for all X′ with the same variance
Approximate the quantization noise as i.i.d. with a uniform distribution:

q ~ U(−w/2, w/2), Var(q) = w²/12

where w is the width of the quantization interval. Then

Var(M) = weightedVar(X) + (w²/12) Σᵢ₌₁ᴺ wᵢ²

where wᵢ are the measurement weights and the second term is the variance from the initial quantization noise.
How much should we quantize?

Var(M) = weightedVar(X) + (w²/12) Σᵢ₌₁ᴺ wᵢ² = weightedVar(X) + (1/12) N

since the input pixels are integers (w = 1) and the measurement matrix is Walsh-Hadamard (wᵢ = ±1, so Σ wᵢ² = N).
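A Monte Carlo sketch of the N/12 term above (the pixel distribution, block size, and trial count are arbitrary assumptions): rounding each pixel adds a ~U(−1/2, 1/2) error, and the ±1 weights accumulate these into measurement noise of variance N/12.

```python
import random

N = 512        # pixels per (toy) block
trials = 2000
signs = [random.choice((-1, 1)) for _ in range(N)]  # one +/-1 measurement row

noise = []
for _ in range(trials):
    x = [random.uniform(0, 255) for _ in range(N)]  # hypothetical pixel values
    # measurement error caused by rounding pixels to integers
    noise.append(sum(s * (round(v) - v) for s, v in zip(signs, x)))

m = sum(noise) / trials
var = sum((n - m) ** 2 for n in noise) / trials
print(var, N / 12)  # empirical vs. theoretical measurement-noise variance
```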
What compression to expect from arithmetic coding?
Entropy of Quantized Random Variable

H(X̂) ≈ h(X) − log₂ Δ

where X is a continuous random variable, X̂ is X uniformly quantized (discrete), Δ is the quantization step size, and h(X) is the differential (continuous) entropy.

With σ² = 5667² (the average variance of the video blocks) and N = 16,384 pixels per video block:

h(X) = ½ log₂(2πeσ²) ≈ 14.5 bits

Quantizing with step Δ = √N gives log₂ Δ = 7, so H(X̂) ≈ 7.5 bits — a 7-bit savings from quantization.
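The bit budget above, reproduced numerically:

```python
import math

sigma = 5667.0   # average std dev of the video blocks
N = 16384        # pixels per video block

h = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)  # differential entropy
savings = math.log2(math.sqrt(N))                       # log2 of step sqrt(N)
print(round(h, 1), round(h - savings, 1))  # 14.5 7.5
```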
What is the penalty for using the wrong probability distribution?

Assume random variable X has probability distribution p but we encode with distribution q. Then

E[L(X)] < H(X) + 1 + D(p‖q)

where H(X) is the entropy of the random variable, E[L(X)] is the expected length of the (wrong) codeword, and D(p‖q) is the Kullback-Leibler divergence (the penalty).

Worst case scenario for video blocks of “News”:

p ~ N(340, 2800²), q ~ N(360, 11200²)

D(p‖q) = log₂(σq/σp) + [(σp² + (μp − μq)²)/(2σq²) − ½]/ln 2 ≈ 1.33 bits

E[L(X)] < H(X) + 1 + D(p‖q) ≈ H(X) + 2.33 bits
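The worst-case penalty can be reproduced with the closed-form KL divergence between two Gaussians, converted to bits:

```python
import math

def kl_gauss_bits(mu_p, sig_p, mu_q, sig_q):
    """D(p||q) in bits for p ~ N(mu_p, sig_p^2), q ~ N(mu_q, sig_q^2)."""
    nats = (math.log(sig_q / sig_p)
            + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sig_q ** 2)
            - 0.5)
    return nats / math.log(2)

print(kl_gauss_bits(340, 2800, 360, 11200))  # ~1.3 bits
```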
What are the statistics of the measurements?
Approximately Gaussian, with different means and variances from block to block.
How much should we quantize?
By the square root of the total number of pixels in the video block (quantization step Δ = √N).
What compression to expect from arithmetic coding?
• 14.5 bits/measurement (integer quantization)
• 7.5 bits/measurement (quantization with step √N)