Digital Image Processing
Lecture 15 (Image Compression)
Bu-Ali Sina University, Computer Engineering Dept.
Fall 2017
Image Compression
Reducing the amount of data required to represent a digital image. The basis of the reduction process is the removal of redundant data:
– Coding redundancy
– Interpixel redundancy
– Psychovisual redundancy
Fidelity Criteria
– Objective
– Subjective
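Objective criteria are typically quantified with the root-mean-square error and the peak signal-to-noise ratio between the original and reconstructed images. A minimal NumPy sketch (function names are illustrative, not from the slides):

```python
# Objective fidelity criteria: RMS error and PSNR between an original
# image f and its compressed-then-reconstructed approximation f_hat.
import numpy as np

def rms_error(f: np.ndarray, f_hat: np.ndarray) -> float:
    e = f.astype(np.float64) - f_hat.astype(np.float64)
    return float(np.sqrt(np.mean(e ** 2)))

def psnr(f: np.ndarray, f_hat: np.ndarray, max_val: float = 255.0) -> float:
    rmse = rms_error(f, f_hat)
    return float("inf") if rmse == 0 else 20.0 * np.log10(max_val / rmse)
```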
Image compression model
• Source Encoder is used to remove redundancy in the input image.
• Channel Encoder is used to introduce redundancy in a controlled fashion to help combat noise. Example: parity bit.
• This provides a certain level of immunity from the noise that is inherent in any storage/transmission system.
• The Channel could be a communication link or a storage/retrieval system.
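As a toy illustration of the parity-bit example (not from the slides), a single even-parity bit appended to each byte lets the receiver detect any single-bit error:

```python
# Even-parity channel coding for one byte: append a bit so the 9-bit
# codeword always has an even number of 1s; any single-bit error flips
# the parity and is therefore detectable (though not correctable).
def add_even_parity(byte: int) -> int:
    parity = bin(byte).count("1") % 2   # 1 if the data bits have odd weight
    return (byte << 1) | parity         # parity bit stored in the LSB

def parity_ok(codeword: int) -> bool:
    return bin(codeword).count("1") % 2 == 0
```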
• What is information, and how do we quantify it?
• What is the minimum amount of data sufficient to represent an image without loss of information?
• What is theoretically the best compression possible?
• What is the theoretically best possible transmission rate for reliable communication over a noisy channel?
• Information theory provides answers to these and other related fundamental questions.
• The fundamental premise of information theory is that the generation of information can be modeled as a probabilistic process.
• A discrete source of information generates one of N possible symbols from a source alphabet set A = {a1, a2, …, aN} in unit time.
Information theory
Example:
• The source output can be modeled as a discrete random variable E, which can take values in the set A = {a1, a2, …, aN},
• with corresponding probabilities P(E = ai) = p(ai), for i = 1, 2, …, N.
• We will denote the symbol probabilities by the vector z = [p(a1), p(a2), …, p(aN)]^T.
• Naturally, p(a1) + p(a2) + … + p(aN) = 1.
• The information source is characterized by the pair (A, z).
Information theory
Observing an occurrence of the random variable E results in some gain of information denoted by I(E). This gain of information was defined (by Shannon) to be I(E) = log(1/P(E)) = −log P(E).
The base of the logarithm determines the unit of information. Usually we use base 2, which gives the information in units of "binary digits," or "bits." Using a base-10 logarithm would give the information in units of decimal digits.
The amount of information attributed to an event E is inversely related to theprobability of that event.
Information theory
· Examples:
– Certain event: P(E) = 1. In this case I(E) = log(1/1) = 0. This agrees with intuition: if the event E is certain to occur (has probability 1), knowing that it has occurred brings no gain of information.
– Coin toss: P(E = Heads) = 0.5. In this case I(E) = log(1/0.5) = log(2) = 1 bit.
– Rare event: P(E) = 0.001. In this case I(E) = log(1/0.001) = log(1000) ≈ 9.97 bits. This again agrees with intuition: knowing that a rare event has occurred brings a significant gain of information.
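The three cases can be checked directly; a small sketch using base-2 logarithms:

```python
# Self-information I(E) = log2(1 / P(E)) for the three slide examples.
import math

def self_information(p: float) -> float:
    return math.log2(1.0 / p)

for p in (1.0, 0.5, 0.001):
    print(f"P(E) = {p}: I(E) = {self_information(p):.2f} bits")
# Prints 0.00, 1.00, and 9.97 bits, matching the examples above.
```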
Information theory
· The entropy H(z) of a source is defined as the average amount of information gained by observing a single source symbol: H(z) = −∑ p(ai) log p(ai), with the sum taken over i = 1, …, N.
· By convention, in the above formula, we set 0·log 0 = 0.
· The entropy of a source quantifies the “randomness” of a source.
· The higher the source entropy, the greater the uncertainty associated with the source output, and the greater the information associated with the source.
· For a fixed number of source symbols, the entropy is maximized if all thesymbols are equally likely (recall uniform histogram).
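A first-order estimate of a source's entropy can be computed from an image's gray-level histogram; a sketch assuming an 8-bit NumPy image:

```python
# First-order entropy estimate from the gray-level histogram, using
# the convention 0 * log 0 = 0 (zero-count levels are dropped).
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A uniform histogram maximizes this at log2(256) = 8 bits/pixel.
```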
Information theory
[Slide diagram: self-information of an event E; average self-information of k events; average information, or entropy (uncertainty); the zero-memory noiseless source model; coding efficiency]
Information theory
· Given that a source produces symbols with known probabilities, how do we represent them using binary strings?
Compression Methods
Error-free (lossless)
– Run-length coding
– Huffman coding
– Arithmetic coding
– LZW coding
Lossy
– Transform coding
Lossless compression
Variable-length coding
– Huffman coding
Example: the encoded string 010100111100 decodes to the symbol sequence a3 a1 a2 a2 a6.
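A minimal Huffman coder built on Python's heapq. The probabilities below follow the standard textbook example this slide appears to use (an assumption), and tie-breaking may yield different, but equally optimal, codewords than the slide's table:

```python
# Huffman coding: repeatedly merge the two least probable nodes and
# prepend a 0/1 to the codewords on each side of the merge.
import heapq
from itertools import count

def huffman_code(probs: dict) -> dict:
    tie = count()  # tie-breaker so equal probabilities never compare dicts
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

probs = {"a1": 0.1, "a2": 0.4, "a3": 0.06, "a4": 0.1, "a5": 0.04, "a6": 0.3}
print(huffman_code(probs))
```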
Lossless compression
LZW (Lempel-Ziv-Welch) coding
– Removes coding and interpixel redundancy.
– Assigns fixed-length codes to variable-length sequences of source symbols, without needing a priori knowledge of the symbol probabilities.
– Is used in the GIF, TIFF, and PDF formats.
Lossless compression
LZW coding
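The slide's worked dictionary table is not reproduced in this transcript; a minimal encoder sketch over 8-bit values (decoder omitted):

```python
# LZW encoding: emit a fixed code for the longest phrase already in
# the dictionary, then add that phrase extended by one symbol.
def lzw_encode(data: bytes) -> list:
    dictionary = {bytes([i]): i for i in range(256)}  # single-symbol entries
    next_code = 256
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in dictionary:
            w = wc                        # keep growing the current phrase
        else:
            out.append(dictionary[w])     # longest known phrase
            dictionary[wc] = next_code    # learn the new phrase
            next_code += 1
            w = bytes([b])
    if w:
        out.append(dictionary[w])
    return out

print(lzw_encode(bytes([39, 39, 126, 126] * 2)))  # repeated pixels compress
```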
Bit-plane coding
Bit-plane decomposition
– Polynomial
– Gray code
Binary coding
– Constant area coding
– Run-length coding
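A sketch of Gray-coded bit-plane decomposition; the Gray code g = m XOR (m >> 1) makes neighboring gray levels differ in only one plane, which reduces bit-plane speckle:

```python
# Gray-code bit-plane decomposition of an 8-bit image.
import numpy as np

def gray_code_planes(img: np.ndarray) -> np.ndarray:
    g = img ^ (img >> 1)          # binary code -> Gray code
    # Stack the 8 binary planes, most significant bit first.
    return np.stack([(g >> b) & 1 for b in range(7, -1, -1)])
```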
Predictive coding
Example
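The slide's example figure is not reproduced here; a minimal sketch of lossless predictive coding with the simplest predictor, the previous pixel in the row:

```python
# Lossless predictive coding: transmit prediction errors e = f - f_hat
# with f_hat(x, y) = f(x, y-1); errors cluster near 0, lowering entropy.
import numpy as np

def predictive_encode(img: np.ndarray) -> np.ndarray:
    f = img.astype(np.int16)
    e = f.copy()
    e[:, 1:] = f[:, 1:] - f[:, :-1]   # first column is sent unpredicted
    return e

def predictive_decode(e: np.ndarray) -> np.ndarray:
    return np.cumsum(e, axis=1).astype(np.uint8)  # undo the differencing
```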
Lossy Compression
Lossy predictive compression
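A sketch of the lossy case (DPCM): the prediction error is quantized, and the encoder must form its predictions from the reconstructed values so it stays in lockstep with the decoder. The step size q is an illustrative parameter:

```python
# One row of DPCM with a uniform quantizer of step q.
import numpy as np

def dpcm_row(row: np.ndarray, q: int = 8) -> np.ndarray:
    prev = 0.0                                     # decoder starts from 0 too
    recon = np.empty(len(row))
    for i, f in enumerate(row.astype(np.float64)):
        e_q = q * round((f - prev) / q)            # quantized prediction error
        prev = float(np.clip(prev + e_q, 0, 255))  # shared reconstruction
        recon[i] = prev
    return recon                                   # per-step error <= q/2
```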
Transform Coding
– Forward transform: T(u,v) = ∑x ∑y f(x,y) g(x,y,u,v)
– Inverse transform: f(x,y) = ∑u ∑v T(u,v) h(x,y,u,v)
– g and h are the forward and inverse transformation kernels; the T(u,v) are the transform coefficients.
Transform coding
– Fourier kernel
– Walsh-Hadamard kernel (WHT)
– WHT basis functions
– DCT transform
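A sketch of the 2-D DCT of an 8×8 subimage using SciPy's dctn/idctn (the tooling is an assumption; the slides present the kernel formula instead):

```python
# Orthonormal 2-D DCT-II of an 8x8 block and its exact inverse.
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float64)
T = dctn(block - 128, norm="ortho")    # level-shift, then forward transform
rec = idctn(T, norm="ortho") + 128     # inverse transform restores the block
assert np.allclose(rec, block)
```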
Subimage size selection
Bit allocation
– Truncating: zonal coding (maximum variance) or threshold coding (maximum magnitude)
– Quantizing
– Coding
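Of the two truncation strategies, threshold coding adapts to each subimage by keeping only its largest-magnitude coefficients; a sketch (ties at the cutoff may keep slightly more than n values):

```python
# Threshold coding: retain the n largest-magnitude transform
# coefficients of a block and zero out the rest.
import numpy as np

def threshold_coefficients(T: np.ndarray, n: int) -> np.ndarray:
    cutoff = np.sort(np.abs(T).ravel())[-n]   # n-th largest magnitude
    return np.where(np.abs(T) >= cutoff, T, 0)
```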
JPEG compression
– DCT computation (8×8 blocks)
– Quantization
– Variable-length coding
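A JPEG-flavored sketch tying the first two steps together: blockwise DCT, then division by the standard JPEG luminance quantization table with rounding. Variable-length (entropy) coding of the quantized coefficients is omitted:

```python
# 8x8 DCT + quantization round trip for one block; Q is the standard
# JPEG luminance quantization table.
import numpy as np
from scipy.fft import dctn, idctn

Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
              [12, 12, 14, 19, 26, 58, 60, 55],
              [14, 13, 16, 24, 40, 57, 69, 56],
              [14, 17, 22, 29, 51, 87, 80, 62],
              [18, 22, 37, 56, 68, 109, 103, 77],
              [24, 35, 55, 64, 81, 104, 113, 92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def jpeg_block(block: np.ndarray) -> np.ndarray:
    T = dctn(block.astype(np.float64) - 128, norm="ortho")
    Tq = np.round(T / Q)                        # most high-frequency terms -> 0
    return idctn(Tq * Q, norm="ortho") + 128    # lossy reconstruction
```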
Assignment: 8-3, 8-4, 8-13, 8-15