Post on 05-Apr-2018
8/2/2019 Image Compression Shabbir
IMAGE COMPRESSION
TECHNIQUES
BY: C.MD.SHABBIR
(04098077)
INTRODUCTION
WHAT IS AN IMAGE?
WHAT IS IMAGE COMPRESSION?
TYPES OF REDUNDANCIES
WHY DO WE NEED IMAGE COMPRESSION?
IMAGE COMPRESSION TECHNIQUES
SUMMARY
CONCLUSION
PIXELS
This example shows an image with a portion
greatly enlarged, in which the individual pixels
are rendered as little squares and can easily be
seen.
Memory requirements

Pixels       Size (uncompressed)
1.3 Mpixel   3.7 MB
2.1 Mpixel   6.0 MB
5 Mpixel     14.3 MB
8 Mpixel     22.8 MB
The number of bytes needed to store an uncompressed image can be very large, so we need image compression techniques.
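The table above is consistent with 3 bytes per pixel (24-bit RGB) and 1 MB = 2^20 bytes, an assumption not stated on the slide; a quick check:

```python
def uncompressed_size_mb(megapixels, bytes_per_pixel=3):
    """Uncompressed image size in MB (1 MB = 2**20 bytes),
    assuming 24-bit RGB, i.e. 3 bytes per pixel."""
    return megapixels * 1e6 * bytes_per_pixel / 2**20

for mp in (1.3, 2.1, 5, 8):
    print(f"{mp} Mpixel -> {uncompressed_size_mb(mp):.1f} MB")
# 1.3 -> 3.7 MB, 2.1 -> 6.0 MB, 5 -> 14.3 MB, 8 -> 22.9 MB
# (the table rounds the last entry to 22.8 MB)
```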
Overview of Image Compression
- The process of reducing the size of image data files while still retaining the important information.
- The compressed file is used to reconstruct the image.
- The relationship between the uncompressed and the compressed file is expressed as the compression ratio:

Compression Ratio = Uncompressed file size / Compressed file size = SIZE_U / SIZE_C
COMPRESSION ALGORITHMS:
Image compression algorithms can be divided into two branches:
Lossless algorithms: the information content is not modified.
Lossy algorithms: the information content is reduced and is not recoverable.
Lossless Compression Methods
Lossless compression methods guarantee that the decompressed image is absolutely identical to the image before compression.
1) Run-length coding
2) Huffman coding
3) Predictive coding
A simple example
Suppose we have a message of 10 symbols drawn from an alphabet of 5 distinct symbols.
How can we code this message using 0/1 so that the coded message has minimum length (for transmission or storage)?
With a fixed-length code, 5 symbols require at least 3 bits each.
For this simple encoding, the length of the coded message is 10 × 3 = 30 bits.
A simple example (cont.)
Intuition: symbols that occur more frequently should have shorter codes; but since the code lengths are then not all the same, there must be a way of distinguishing each code.
For a Huffman code, the length of the encoded message is
3×2 + 3×2 + 2×2 + 1×3 + 1×3 = 22 bits
(compared with 30 bits for the fixed-length code).
Lossless compression techniques
Run-length coding
Huffman coding
Lossless predictive coding
Run-length coding
Replaces long sequences of the same value with a code indicating the value that is repeated and the number of times it occurs in the sequence. In the zero-run variant shown below, each pair is (number of zeros preceding the value, the value itself).
Input sequence:
0,0,-3,5,1,0,-2,0,0,0,0,2,-4,3,-2,0,0,0,1,0,0,-2
Run-length sequence:
(2,-3) (0,5) (0,1) (1,-2) (4,2) (0,-4) (0,3) (0,-2) (3,1) (2,-2)
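A minimal sketch of the zero-run encoding used in this example (a sequence ending in trailing zeros would need an extra end-of-block marker, which is omitted here):

```python
def run_length_encode(seq):
    """Zero-run-length coding: emit (number of preceding zeros, value)
    for each nonzero value in the sequence."""
    pairs = []
    zeros = 0
    for v in seq:
        if v == 0:
            zeros += 1
        else:
            pairs.append((zeros, v))
            zeros = 0
    return pairs

def run_length_decode(pairs):
    """Expand (zero-run, value) pairs back into the original sequence."""
    seq = []
    for zeros, v in pairs:
        seq.extend([0] * zeros)
        seq.append(v)
    return seq

data = [0,0,-3,5,1,0,-2,0,0,0,0,2,-4,3,-2,0,0,0,1,0,0,-2]
print(run_length_encode(data))
# → [(2, -3), (0, 5), (0, 1), (1, -2), (4, 2), (0, -4), (0, 3), (0, -2), (3, 1), (2, -2)]
```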
Lossless compression techniques
Run-length coding
Huffman coding
Lossless predictive coding
Huffman Coding
When coding the symbols of an information
source individually, Huffman coding yields
the smallest possible number of code
symbols per source symbol.
The resulting code is optimal for a fixed value
of n, subject to the constraint that the source
symbols be coded one at a time.
Huffman Coding Steps
(i) Arrange the symbol probabilities in decreasing order and consider them as leaf nodes of a tree.
(ii) While there is more than one node:
Merge the two nodes with the smallest probability to form a new node whose probability is the sum of the two merged nodes.
Arbitrarily assign 1 and 0 to each pair of branches merging into a node.
(iii) Read each symbol's code sequentially from the root node to the leaf node where the symbol is located.
Example
A = {a1, a2, a3, a4, a5}
P(a1) = P(a3) = 0.2
P(a2) = 0.4
P(a4) = P(a5) = 0.1
Example
Ax = {a, b, c, d, e}
Px = {0.25, 0.25, 0.2, 0.15, 0.15}

[Huffman tree: d (0.15) and e (0.15) merge into a node of 0.3; that node merges with a (0.25) into 0.55; b (0.25) and c (0.2) merge into 0.45; the root (1.0) joins 0.55 and 0.45, with 0 and 1 assigned at each merge.]

Resulting codes: a = 00, b = 10, c = 11, d = 010, e = 011
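The merging steps above can be sketched with a heap; note that tie-breaking between equal probabilities may yield different (but equally optimal) codes than the tree shown, so only the code lengths are checked here:

```python
import heapq
from itertools import count

def huffman_code_lengths(probs):
    """Build a Huffman tree over {symbol: probability} and return
    {symbol: code length}. A counter breaks probability ties so the
    heap never has to compare dicts."""
    tiebreak = count()
    # Heap entries: (probability, tiebreaker, {symbol: depth so far})
    heap = [(p, next(tiebreak), {s: 0}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)  # two least probable nodes
        p2, _, d2 = heapq.heappop(heap)
        # Merging pushes every symbol under both nodes one level deeper.
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

probs = {'a': 0.25, 'b': 0.25, 'c': 0.2, 'd': 0.15, 'e': 0.15}
lengths = huffman_code_lengths(probs)
avg = sum(probs[s] * lengths[s] for s in probs)
print(lengths, avg)  # a, b, c get 2-bit codes; d, e get 3 bits; average ≈ 2.3 bits/symbol
```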
Lossless compression techniques
Run-length coding
Huffman coding
Lossless predictive coding
Lossless Predictive Coding
Predicting the next pixel value based on the
previous value
Encoding the difference between the
predicted value and the actual value
Differential pulse code modulation (DPCM)
Lossless Predictive Coding
A lossless predictive coding model: encoder
[Block diagram: input image → predictor and subtractor → symbol encoder → compressed image]
Each successive pixel of the input image is denoted f_n. The output of the predictor is rounded to the nearest integer, denoted f̂_n, and the prediction error e_n = f_n − f̂_n is passed to the symbol encoder.
Lossless Predictive Coding
A lossless predictive coding model: decoder
[Block diagram: compressed image → symbol decoder; the decoded error e_n is added to the predictor output f̂_n to reconstruct each pixel of the decompressed image]
Lossless Predictive Coding
In most cases, however, the prediction is formed by a linear combination of m previous pixels. That is,

f̂_n = round( Σ_{i=1}^{m} a_i · f_{n−i} )

where m is the order of the linear predictor, round denotes the rounding or nearest-integer operation, and the a_i for i = 1, 2, ..., m are the prediction coefficients.
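A minimal sketch of this scheme (DPCM) with a first-order predictor, a_1 = 1, chosen here only for illustration; the small errors it produces on smooth image rows are what make the subsequent entropy coding effective:

```python
def dpcm_encode(pixels, m=1, a=(1.0,)):
    """Lossless predictive coding: predict each pixel as a rounded linear
    combination of the m previous pixels and store the prediction error.
    The first m pixels are stored unchanged."""
    errors = list(pixels[:m])
    for n in range(m, len(pixels)):
        pred = round(sum(a[i] * pixels[n - 1 - i] for i in range(m)))
        errors.append(pixels[n] - pred)
    return errors

def dpcm_decode(errors, m=1, a=(1.0,)):
    """Rebuild the pixels by repeating the prediction and adding the error."""
    pixels = list(errors[:m])
    for n in range(m, len(errors)):
        pred = round(sum(a[i] * pixels[n - 1 - i] for i in range(m)))
        pixels.append(pred + errors[n])
    return pixels

row = [100, 102, 103, 103, 101, 98, 98, 100]   # a smooth row of pixel values
enc = dpcm_encode(row)
print(enc)                      # → [100, 2, 1, 0, -2, -3, 0, 2]
assert dpcm_decode(enc) == row  # decompressed row is identical: lossless
```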
Lossy Compression
In order to achieve higher rates of compression, we give up complete reconstruction and consider lossy compression techniques.
So we need a way to measure how good the compression technique is: how close the reconstructed data is to the original data.
Lossy Compression Techniques
Vector quantization
Transformation coding
Fractal coding
Outline of Vector Quantization
The basic idea in this technique is to develop a dictionary of fixed-size vectors, called code vectors. A vector is usually a block of pixel values. A given image is then partitioned into non-overlapping blocks (vectors) called image vectors.
Then, for each image vector, the closest code vector in the dictionary is determined, and its index in the dictionary is used as the encoding of the original image vector.
Thus, each image is represented by a sequence of indices that can be further entropy coded.
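A minimal sketch of the encode/decode loop, assuming a hypothetical hand-picked codebook of flattened 2×2 blocks; real codecs train the codebook from representative images (e.g. with the LBG algorithm):

```python
def nearest_code_vector(block, codebook):
    """Index of the code vector with minimum squared Euclidean
    distance to the given image block."""
    return min(range(len(codebook)),
               key=lambda i: sum((b - c) ** 2 for b, c in zip(block, codebook[i])))

def vq_encode(blocks, codebook):
    """Each image vector is replaced by the index of its closest code vector."""
    return [nearest_code_vector(b, codebook) for b in blocks]

def vq_decode(indices, codebook):
    """Decoding is a simple table lookup (lossy: blocks are approximated)."""
    return [codebook[i] for i in indices]

# Hypothetical 4-entry codebook of 2x2 blocks, flattened to length-4 vectors
codebook = [
    (0, 0, 0, 0),          # flat dark
    (255, 255, 255, 255),  # flat bright
    (0, 0, 255, 255),      # horizontal edge
    (0, 255, 0, 255),      # vertical edge
]
blocks = [(10, 12, 9, 11), (250, 240, 255, 248), (3, 5, 250, 251)]
indices = vq_encode(blocks, codebook)
print(indices)  # → [0, 1, 2]
```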
[Figure: outline of vector quantization of images]
Lossy Compression Techniques
Vector quantization
Transformation coding
Outline of Transformation Coding
In this coding scheme, transforms such as the DFT (Discrete Fourier Transform) and DCT (Discrete Cosine Transform) are used to change the pixels in the original image into frequency-domain coefficients (called transform coefficients).
These coefficients have several desirable properties. One is the energy compaction property, which results in most of the energy of the original data being concentrated in only a few significant transform coefficients.
Only those few significant coefficients are selected and the remaining ones are discarded.
The selected coefficients are considered for further quantization and entropy encoding.
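Energy compaction can be seen in a minimal sketch: a hand-rolled orthonormal 1-D DCT-II applied to a short row of pixel values chosen here for illustration (real codecs such as JPEG use a fast 2-D DCT on 8×8 blocks):

```python
import math

def dct(x):
    """Orthonormal 1-D DCT-II."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append(s * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)))
    return out

def idct(X):
    """Inverse (orthonormal DCT-III) of the transform above."""
    N = len(X)
    return [X[0] / math.sqrt(N) +
            sum(X[k] * math.sqrt(2 / N) * math.cos(math.pi * (n + 0.5) * k / N)
                for k in range(1, N))
            for n in range(N)]

row = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of 8-bit pixel values
coeffs = dct(row)
# The transform is orthonormal, so coefficient energy equals pixel energy;
# almost all of it sits in the first (DC) coefficient.
energy = sum(c * c for c in coeffs)
print(coeffs[0] ** 2 / energy)           # DC coefficient alone holds ~99% of the energy
```

Discarding all but the few largest-magnitude coefficients before inverting gives a close approximation of the original row, which is exactly the selection step described above.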
Conclusion
References