Ip Image Compression
Transcript of Ip Image Compression
-
8/3/2019 Ip Image Compression
1/23
Module IV (13 hours)
Image restoration - image observation models - inverse filtering - Wiener filtering
Image compression - pixel coding - predictive coding - transform coding - basic ideas
-
Image observation models
Refer to Anil K. Jain, pages 268-275
1. Image formation models
2. Noise models
3. Detector and recorder models
4. Sampled image observation models
-
Image compression
Concerned with minimizing the number of bits required to represent an image
-
The objective of an image coding method is to represent images with as small a data size as possible.
Typically you start with a bitmap image, i.e., a trivial image coding method where each pixel (image element) is represented individually with one or several bytes. This is often called an un-coded image.
Image coding is often called compression, which means that you compare with the original un-coded image.
-
Two categories of image coding methods (for bitmap images):
Lossless coding
Lossy coding
With lossless coding no information is lost. You only try to find the best (smallest) data representation for that information.
Ex: GIF, PNG, BMP, TIFF.
With lossy coding, you will always lose some information. However, the compression ratio is generally much better than for lossless coding (smaller files).
Ex: JPG, JPEG 2000, (MPEG).
-
Image data compression techniques
Pixel coding - each pixel is processed independently
PCM / quantization
Entropy coding
Huffman coding
Run-length coding
Bit plane coding
-
PCM
The incoming video signal is sampled, quantized and coded, generally by a fixed-length binary code having B bits.
B - average bit rate of the original data
For monochrome: 8 bits/pixel; for color: 10-12 bits/pixel
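As an illustrative sketch (not from the slides), a uniform B-bit PCM quantizer and its midpoint reconstruction might look like this in Python; the function names and the [0, vmax] signal range are assumptions:

```python
def pcm_encode(sample, bits=8, vmax=255.0):
    """Uniform B-bit PCM: map a sample in [0, vmax] to one of 2**bits codes."""
    levels = 2 ** bits
    code = int(sample / vmax * levels)
    return min(code, levels - 1)  # clamp the top edge of the range

def pcm_decode(code, bits=8, vmax=255.0):
    """Reconstruct at the midpoint of the quantization interval."""
    levels = 2 ** bits
    return (code + 0.5) * vmax / levels
```

With bits=8 every code represents an interval of width vmax/256, so the reconstruction error is at most half an interval.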
-
Entropy coding
The basic principle of entropy coding is that pixel values that occur often should be represented with fewer bits than those that occur seldom.
Here a block of M pixels having MB bits is encoded by assigning to each value of probability pi a codeword of about -log2 pi bits.
Huffman coding is an optimal algorithm for creating a data representation with minimal size.
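A minimal Huffman coder, sketched in Python to illustrate the principle above (the tie-breaking and the 0/1 branch labels are arbitrary choices, so the exact codewords may differ between implementations):

```python
import heapq
from collections import Counter

def huffman_codes(freqs):
    """Build a Huffman code table from a {symbol: count} mapping.
    Frequent symbols end up with shorter codewords."""
    heap = [(w, i, s) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    if counter == 1:  # degenerate case: a single symbol
        return {heap[0][2]: "0"}
    while len(heap) > 1:  # merge the two least-frequent subtrees
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, counter, (t1, t2)))
        counter += 1
    codes = {}
    def walk(tree, prefix):  # label left branches 0, right branches 1
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

For example, on the pixel stream "AAAAABBBCCD" the most frequent value A gets the shortest codeword and the rare value D the longest.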
-
Run-length coding
Run-length coding is an example of entropy encoding.
Images with repeating grey values along rows (or columns) can be compressed by storing "runs" of identical grey values in the format:
row #, column # run 1 begins, column # run 1 ends, column # run 2 begins, column # run 2 ends, ...
-
Uncompressed, a character run of 15 A characters would normally require 15 bytes to store:
AAAAAAAAAAAAAAA
The same string after RLE encoding would require only two bytes:
15A
The 15A code generated to represent the character string is called an RLE packet. Here, the first byte, 15, is the run count and contains the number of repetitions. The second byte, A, is the run value and contains the actual repeated value in the run.
A new packet is generated each time the run character changes, or each time the number of characters in the run exceeds the maximum count. Assume that our 15-character string now contains four different character runs:
AAAAAAbbbXXXXXt
Using run-length encoding this could be compressed into four 2-byte packets:
6A3b5X1t
Thus, after run-length encoding, the 15-byte string would require only eight bytes of data to represent the string, as opposed to the original 15 bytes. In this case, run-length encoding yielded a compression ratio of almost 2 to 1.
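The packet scheme above can be sketched as follows (a maximum run count of 255 is assumed so that each count fits in one byte):

```python
def rle_encode(data, max_run=255):
    """Produce (run count, run value) packets; a new packet starts when
    the character changes or the run reaches max_run."""
    packets = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < max_run:
            j += 1
        packets.append((j - i, data[i]))
        i = j
    return packets

def rle_decode(packets):
    """Expand the packets back into the original string."""
    return "".join(value * count for count, value in packets)
```

On the slide's example, rle_encode("AAAAAAbbbXXXXXt") yields the four packets (6, 'A'), (3, 'b'), (5, 'X'), (1, 't').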
-
-
The run-length coding is
0 3 5 9 9
1 1 7 9 9
3 4 4 6 6 8 8 10 10 12 14
-
Bit plane coding
Consider a 256-level image as a set of 8 one-bit planes; each plane can be run-length coded.
Here compression ratios of 1.5 to 2 can be achieved.
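A sketch of the bit-plane split in Python (a flat list of 8-bit pixel values is assumed; each resulting 0/1 plane could then be run-length coded):

```python
def to_bit_planes(pixels, bits=8):
    """Split 8-bit pixel values into `bits` one-bit planes (plane 0 = LSB).
    Each plane is a list of 0/1 values suitable for run-length coding."""
    return [[(p >> b) & 1 for p in pixels] for b in range(bits)]

def from_bit_planes(planes):
    """Reassemble pixel values from their bit planes."""
    return [sum(plane[i] << b for b, plane in enumerate(planes))
            for i in range(len(planes[0]))]
```

The split is lossless: recombining the 8 planes reproduces the original pixel values exactly.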
-
Predictive coding with Quantization
Consider: high correlation between successive samples
Predictive coding
Basic principle: Remove redundancy between successive pixels and only encode the residual between actual and predicted values.
The residual usually has a much smaller dynamic range.
Allow fewer quantization levels for the same MSE => get compression.
Compression efficiency depends on intersample redundancy.
UMCP ENEE408G Slides (created by M. Wu & R. Liu, 2002)
-
(First attempt)
Encoder: prediction uP(n) = f[u(n-1)]; prediction error e(n) = u(n) - uP(n); quantizer output eQ(n)
Decoder: prediction uP(n) = f[uQ(n-1)]; reconstruction uQ(n) = uP(n) + eQ(n)
-
Problem with the 1st try: the inputs to the predictor are different at the encoder and the decoder
- the decoder doesn't know u(n)!
Mismatch error could propagate to future reconstructed samples.
Solution: Differential PCM (DPCM) - use the quantized sequence uQ(n) for prediction at both the encoder and the decoder.
Simple predictor: f[x] = x
Prediction error: e(n)
Quantized prediction error: eQ(n)
Distortion: d(n) = e(n) - eQ(n)
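A minimal DPCM sketch with the simple predictor f[x] = x and a uniform quantizer (the step size is an illustrative assumption). Note that the encoder predicts from the reconstructed sample uQ(n-1), exactly so that it stays in sync with the decoder:

```python
def quantize(e, step=4):
    """Uniform quantizer: round the error to the nearest multiple of step."""
    return step * round(e / step)

def dpcm_encode(samples, step=4):
    """DPCM encoder: predict each sample from the previous *reconstructed*
    sample, and transmit only the quantized prediction error eQ(n)."""
    eq = []
    u_rec = 0  # initial predictor state, assumed known to both sides
    for u in samples:
        e = u - u_rec          # prediction error e(n) = u(n) - uP(n)
        q = quantize(e, step)  # eQ(n)
        eq.append(q)
        u_rec = u_rec + q      # uQ(n) = uP(n) + eQ(n)
    return eq

def dpcm_decode(eq, step=4):
    """DPCM decoder: mirror of the encoder's reconstruction loop."""
    out, u_rec = [], 0
    for q in eq:
        u_rec = u_rec + q
        out.append(u_rec)
    return out
```

Because e(n) is quantized to the nearest multiple of step, the per-sample reconstruction error |e(n) - eQ(n)| stays at most step/2 and does not accumulate.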
-
Predictive Coding (cont'd)
Encoder: prediction uP(n) = f[uQ(n-1)]; error e(n) = u(n) - uP(n); quantizer output eQ(n); reconstruction uQ(n) = uP(n) + eQ(n)
Decoder: prediction uP(n) = f[uQ(n-1)]; reconstruction uQ(n) = uP(n) + eQ(n)
UMCP ENEE408G Slides (created by M. Wu & R. Liu, 2002)
Note: The predictor contains a one-step buffer as input to the prediction.
-
Transform Coding theory
Use a transform to pack the energy into only a few coefficients.
How many bits should be allocated for each coefficient?
More bits for coefficients with high variance, to keep the total MSE small.
Also determined by perceptual importance.
From Jain's Fig. 11.15
-
Zonal Coding and Threshold Coding
Zonal coding: only transmit a small predetermined zone of transformed coefficients.
Threshold coding: transmit coefficients that are above certain thresholds.
Compare: threshold coding is inherently adaptive - it introduces smaller distortion for the same number of coded coefficients.
Threshold coding needs overhead in specifying the indices of the coded coefficients.
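The two selection rules can be sketched on a block of (already transformed) coefficients; the zone size and the threshold below are illustrative choices:

```python
def zonal_select(coeffs, zone=2):
    """Zonal coding: keep only a fixed zone x zone low-frequency corner
    (top-left) of the coefficient block; zero out everything else."""
    n = len(coeffs)
    return [[coeffs[i][j] if i < zone and j < zone else 0
             for j in range(n)] for i in range(n)]

def threshold_select(coeffs, t):
    """Threshold coding: keep coefficients with magnitude above t.
    The (row, col) index must be sent too - that is the overhead."""
    return [((i, j), v)
            for i, row in enumerate(coeffs)
            for j, v in enumerate(row) if abs(v) > t]
```

Zonal coding needs no index overhead (the zone is predetermined), while threshold coding adapts to each block at the cost of transmitting the indices.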
-
Determining Block Size
Why block-based? High transform computation complexity for large blocks:
roughly O(m^2 log m) per m x m block, over (MN/m^2) blocks, plus the complexity of bit allocation.
A block transform captures local info better than a global transform.
Rate and complexity vs. block size: commonly used block size ~ 8x8.
From Jain's Fig. 11.16
-
Block Diagram of Transform Coding
Encoder:
Step 1: Divide an image into m x m blocks and perform the transform
Step 2: Determine bit allocation for the coefficients
Step 3: Design the quantizer and quantize the coefficients (lossy!)
Step 4: Encode the quantized coefficients
Decoder: apply the inverse steps (decode, dequantize, inverse transform, reassemble the blocks)
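Steps 1-3 (and the decoder's inverse transform) can be sketched on a single block with an orthonormal DCT-II; the step size is an illustrative assumption, and the entropy-coding step (Step 4) is omitted:

```python
import math

def dct_mat(n):
    """Orthonormal DCT-II matrix: row k is the k-th cosine basis vector."""
    return [[math.sqrt((1 if k == 0 else 2) / n) *
             math.cos((2 * i + 1) * k * math.pi / (2 * n))
             for i in range(n)] for k in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

def transform_code_block(block, step=8):
    """Transform an m x m block (Y = C X C^T), quantize the coefficients
    (the lossy step), then reconstruct (X' = C^T Yq C)."""
    c = dct_mat(len(block))
    coeff = matmul(matmul(c, block), transpose(c))
    quantized = [[step * round(v / step) for v in row] for row in coeff]
    return matmul(matmul(transpose(c), quantized), c)
```

A flat block survives essentially unchanged, since all its energy sits in the DC coefficient; textured blocks lose the small high-frequency coefficients that quantize to zero.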
-
How to Encode Quantized Coefficients in Each Block
Basic tools: entropy coding (Huffman, etc.) and run-length coding
Predictive coding ~ especially for the DC coefficient
Ordering: zig-zag scan for block-DCT to better achieve run-length coding gain
[Figure: an 8x8 coefficient block with DC at the top-left, AC01..AC07 along the horizontal-frequency axis and AC70..AC77 along the vertical-frequency axis; the scan visits low-frequency coefficients first, then high-frequency coefficients]
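One way to sketch the zig-zag ordering: sort the (row, col) indices by anti-diagonal (row + col), alternating the traversal direction along each diagonal; this reproduces the usual JPEG-style scan, DC first:

```python
def zigzag_order(n=8):
    """Return the (row, col) indices of an n x n block in zig-zag order:
    DC (0, 0) first, then AC coefficients from low to high frequency."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],                # anti-diagonal
                                 p[0] if (p[0] + p[1]) % 2   # odd: top-down
                                 else p[1]))                 # even: bottom-up
```

Quantized high-frequency coefficients are usually zero, so this ordering groups the zeros into long runs that run-length coding exploits.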
-
Advantages
Transform coding achieves relatively larger compression than predictive methods.
Here, any distortions due to quantization and channel errors get distributed, during the inverse transformation, over the entire range.