Image Compression


Description

This computer vision presentation is about image compression: the different types of image compression we can use, how the data of an image is compressed, and what makes one method better than another.

Transcript of Image Compression

Page 1: Image Compression


Page 2: Image Compression

Image Compression (Chapter 8)

CS474/674 – Prof. Bebis

Page 3: Image Compression

Goal of Image Compression

• Digital images require huge amounts of space for storage and large bandwidths for transmission.
– A 640 x 480 color image requires close to 1 MB of space (640 x 480 x 3 bytes ≈ 0.9 MB).

• The goal of image compression is to reduce the amount of data required to represent a digital image.
– Reduce storage requirements and increase transmission rates.

Page 4: Image Compression

Approaches

• Lossless
– Information preserving
– Low compression ratios

• Lossy
– Not information preserving
– High compression ratios

• Trade-off: image quality vs compression ratio

Page 5: Image Compression

Data ≠ Information

• Data and information are not synonymous terms!

• Data is the means by which information is conveyed.

• Data compression aims to reduce the amount of data required to represent a given quantity of information while preserving as much information as possible.

Page 6: Image Compression

Data vs Information (cont’d)

• The same amount of information can be represented by various amounts of data, e.g.:

Ex1: Your wife, Helen, will meet you at Logan Airport in Boston at 5 minutes past 6:00 pm tomorrow night.

Ex2: Your wife will meet you at Logan Airport at 5 minutes past 6:00 pm tomorrow night.

Ex3: Helen will meet you at Logan at 6:00 pm tomorrow night.

Page 7: Image Compression

Data Redundancy

• compression: original data ($n_1$ units) → compressed data ($n_2$ units), both representing the same information.

• Compression ratio: $C = \frac{n_1}{n_2}$

Page 8: Image Compression

Data Redundancy (cont’d)

• Relative data redundancy: $R = 1 - \frac{1}{C}$ (the fraction of the data that is redundant)

Example:
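The slide's worked numbers did not survive extraction, so here is a hypothetical instance of the two formulas above. Suppose an image occupies $n_1 = 1{,}000{,}000$ bits uncompressed and $n_2 = 100{,}000$ bits compressed; then

$$C = \frac{n_1}{n_2} = 10 \qquad \text{and} \qquad R = 1 - \frac{1}{C} = 0.9$$

i.e., 90% of the data in the uncompressed representation is redundant.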

Page 9: Image Compression

Types of Data Redundancy

(1) Coding

(2) Interpixel

(3) Psychovisual

• Compression attempts to reduce one or more of these redundancy types.

Page 10: Image Compression

Coding Redundancy

• Code: a list of symbols (letters, numbers, bits, etc.)

• Code word: a sequence of symbols used to represent a piece of information or an event (e.g., gray levels).

• Code word length: number of symbols in each code word

Page 11: Image Compression

Coding Redundancy (cont’d)

For an N x M image:

– rk: k-th gray level
– P(rk): probability of rk
– l(rk): # of bits for rk

Expected value: $E(X) = \sum_{x} x \, P(X = x)$

Average code word length: $L_{avg} = E(l(r_k)) = \sum_{k=0}^{L-1} l(r_k)\, P(r_k)$ bits/pixel
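The slide's example table was lost in extraction; the following minimal Python sketch shows the $L_{avg}$ computation with an assumed 8-level probability distribution and two assumed coding schemes:

```python
# Average code word length: L_avg = sum_k l(r_k) * P(r_k)
probs = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]  # assumed P(r_k)

fixed_lengths    = [3, 3, 3, 3, 3, 3, 3, 3]   # 3-bit fixed-length code
variable_lengths = [2, 2, 2, 3, 4, 5, 6, 6]   # shorter codes for likelier levels

def l_avg(lengths, probs):
    return sum(l * p for l, p in zip(lengths, probs))

print(l_avg(fixed_lengths, probs))     # 3.0 bits/pixel
print(l_avg(variable_lengths, probs))  # 2.7 bits/pixel
```

The variable-length scheme spends fewer bits on the common gray levels, cutting the average from 3.0 to 2.7 bits/pixel.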

Page 12: Image Compression

Coding Redundancy (con’d)

• l(rk) = constant length for all gray levels (fixed-length coding)

Example:

Page 13: Image Compression

Coding Redundancy (cont’d)

• l(rk) = variable length: assign shorter code words to the more probable gray levels.

• Consider the probability of the gray levels:

Page 14: Image Compression

Interpixel redundancy

• Interpixel redundancy implies that any pixel value can be reasonably predicted by its neighbors (i.e., correlated).

correlation: $f(x) \circ g(x) = \int f(a)\, g(x + a)\, da$

autocorrelation: f(x) = g(x)

Page 15: Image Compression

Interpixel redundancy (cont’d)

• To reduce interpixel redundancy, the data must be transformed into another format (i.e., through a transformation).
– e.g., thresholding, differences between adjacent pixels, DFT

• Example: original image vs. thresholded image (gray-level profile of line 100 shown); each run in the thresholded image is coded using (1+10) bits/pair.

Page 16: Image Compression

Psychovisual redundancy

• The human eye does not respond with equal sensitivity to all visual information.

• It is more sensitive to the lower frequencies than to the higher frequencies in the visual spectrum.

• Idea: discard data that is perceptually insignificant!

Page 17: Image Compression

Psychovisual redundancy (cont’d)

Example: quantization

• 256 gray levels (8 bits/pixel) → 16 gray levels (4 bits/pixel): C = 8/4 = 2:1

• A second 16-level result reduces false contouring, i.e., add to each pixel a small pseudo-random number prior to quantization.

Page 18: Image Compression

How do we measure information?

• What is the information content of a message/image?

• What is the minimum amount of data that is sufficient to completely describe an image without loss of information?

Page 19: Image Compression

Modeling Information

• Information generation is assumed to be a probabilistic process.

• Idea: associate information with probability!

A random event E with probability P(E) contains $I(E) = \log\frac{1}{P(E)} = -\log P(E)$ units of information.

Note: I(E) = 0 when P(E) = 1 (a certain event carries no information)

Page 20: Image Compression

How much information does a pixel contain?

• Suppose that gray level values are generated by a random variable; then rk contains $I(r_k) = -\log P(r_k)$ units of information!

(e.g., if all 256 gray levels are equally likely, each pixel carries $\log_2 256 = 8$ bits of information)

Page 21: Image Compression

How much information does an image contain?

• Average information content of an image (its Entropy):

$$H = E[I(r_k)] = \sum_{k=0}^{L-1} I(r_k)\Pr(r_k) = -\sum_{k=0}^{L-1} P(r_k)\log_2 P(r_k) \quad \text{units/pixel}$$

(assumes statistically independent random events)
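As a minimal sketch (assuming an 8-bit grayscale image stored as a NumPy array), the entropy formula above can be computed directly from the image histogram:

```python
import numpy as np

def entropy(image):
    # H = -sum_k P(r_k) * log2 P(r_k), in bits/pixel
    _, counts = np.unique(image, return_counts=True)
    p = counts / counts.sum()          # P(r_k) from the histogram
    return -np.sum(p * np.log2(p))

# A constant image carries no information; uniform noise over 256
# levels approaches the maximum of 8 bits/pixel.
flat  = np.zeros((64, 64), dtype=np.uint8)
noise = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(entropy(flat), entropy(noise))   # -0.0 and roughly 8.0
```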

Page 22: Image Compression

Redundancy (revisited)

• Redundancy: $R = 1 - \frac{1}{C}$, where $C = \frac{L_{avg}}{H}$

Note: if $L_{avg} = H$, then R = 0 (no redundancy)

Page 23: Image Compression

Entropy Estimation

• It is not easy to estimate H reliably!


Page 24: Image Compression

Entropy Estimation (cont’d)

• First-order estimate of H: use the relative frequencies (normalized histogram) of the individual gray levels in the entropy formula.

Page 25: Image Compression

Estimating Entropy (cont’d)

• Second-order estimate of H:
– Use relative frequencies of pixel blocks (e.g., pairs of adjacent pixels), as in the sketch below.
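A sketch of one way to form a second-order estimate, using horizontally adjacent pixel pairs (the block shape is an assumption; the slide's figure did not survive):

```python
import numpy as np
from collections import Counter

def entropy2(image):
    # Relative frequencies of horizontal pixel pairs; dividing the
    # pair entropy by 2 expresses the estimate in bits/pixel.
    pairs = Counter(zip(image[:, :-1].ravel(), image[:, 1:].ravel()))
    total = sum(pairs.values())
    p = np.array(list(pairs.values())) / total
    return -np.sum(p * np.log2(p)) / 2
```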

Page 26: Image Compression

Estimating Entropy (cont’d)

• The first-order estimate provides only a lower bound on the compression that can be achieved.

• Differences between higher-order estimates of entropy and the first-order estimate indicate the presence of interpixel redundancy!

Need to apply transformations!

Page 27: Image Compression

Estimating Entropy (cont’d)

• For example, consider the differences between adjacent pixels:

Page 28: Image Compression

Estimating Entropy (cont’d)

• Entropy of the difference image: better than before (i.e., H = 1.81 for the original image).

• However, a better transformation could still be found.
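As a sketch of the transformation just described (reusing the `entropy` helper from the earlier estimate), the difference image can be computed as follows:

```python
import numpy as np

def difference_image(image):
    # Keep the first column; replace every other pixel by its
    # difference from the left neighbor (int16 avoids uint8 overflow).
    diff = image.astype(np.int16)
    diff[:, 1:] -= image[:, :-1].astype(np.int16)
    return diff

# For natural images, entropy(difference_image(img)) is typically
# well below entropy(img): small differences dominate the histogram.
```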

Page 29: Image Compression

Huffman Coding (coding redundancy)

• A variable-length coding technique.

• Optimal code (i.e., minimizes the number of code symbols per source symbol).

• Assumption: symbols are encoded one at a time!

Page 30: Image Compression

Huffman Coding (cont’d)

• Forward Pass
1. Sort probabilities per symbol.
2. Combine the two lowest probabilities.
3. Repeat Step 2 until only two probabilities remain.

Page 31: Image Compression

Huffman Coding (cont’d)

• Backward Pass
Assign code symbols going backwards.

Page 32: Image Compression

Huffman Coding (cont’d)

• Lavg using Huffman coding:

• Lavg assuming binary codes:

Page 33: Image Compression

Huffman Coding/Decoding

• After the code has been created, coding/decoding can be implemented using a look-up table.

• Note that decoding is unambiguous because Huffman codes are prefix-free: no code word is a prefix of another.
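A minimal Huffman coder sketch (a standard heap-based construction rather than the slides' table-driven walkthrough; the symbols and probabilities below are made up):

```python
import heapq

def huffman_code(probs):
    # probs: dict symbol -> probability; returns symbol -> bit string.
    # Forward pass: repeatedly combine the two least probable nodes.
    # Backward pass is implicit: codes grow by prefixing 0/1 on merge.
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)                 # tie-breaker so dicts never compare
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_code({"a": 0.4, "b": 0.3, "c": 0.1, "d": 0.1, "e": 0.1})
print(codes)   # prefix-free, so table look-up decoding never backtracks
```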

Page 34: Image Compression

Run-length coding (RLC) (interpixel redundancy)

• Used to reduce the size of a repeating string of characters (i.e., runs):

1 1 1 1 1 0 0 0 0 0 0 1 → (1,5) (0,6) (1,1)

a a a b b b b b b c c → (a,3) (b,6) (c,2)

• Encodes a run of symbols into two bytes: (symbol, count)

• Can compress any type of data but cannot achieve high compression ratios compared to other compression methods.
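A minimal run-length coder sketch matching the (symbol, count) pairs above:

```python
from itertools import groupby

def rle_encode(seq):
    # Each run of identical symbols becomes one (symbol, count) pair.
    return [(sym, len(list(run))) for sym, run in groupby(seq)]

def rle_decode(pairs):
    return [sym for sym, count in pairs for _ in range(count)]

print(rle_encode([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1]))  # [(1, 5), (0, 6), (1, 1)]
print(rle_encode("aaabbbbbbcc"))                          # [('a', 3), ('b', 6), ('c', 2)]
```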

Page 35: Image Compression

Bit-plane coding (interpixel redundancy)

• An effective technique to reduce interpixel redundancy is to process each bit plane individually.

(1) Decompose an image into a series of binary images.

(2) Compress each binary image (e.g., using run-length coding)
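A sketch of step (1) for an 8-bit image (NumPy assumed; each resulting binary plane could then be run-length coded as in step (2)):

```python
import numpy as np

def bit_planes(image):
    # Decompose an 8-bit image into 8 binary images, LSB first.
    return [(image >> b) & 1 for b in range(8)]

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
planes = bit_planes(img)

# Sanity check: weighting each plane by its bit position rebuilds the image.
recon = sum(p.astype(np.uint8) << b for b, p in enumerate(planes))
assert np.array_equal(recon, img)
```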

Page 36: Image Compression

Combining Huffman Coding with Run-length Coding

• Assuming that a message has been encoded using Huffman coding, additional compression can be achieved using run-length coding.

e.g., (0,1)(1,1)(0,1)(1,0)(0,2)(1,4)(0,2)

Page 37: Image Compression

Lossy Compression

• Transform the image into a domain where compression can be performed more efficiently (i.e., reduce interpixel redundancies).

• The image is divided into ~ (N/n)² subimages of size n x n, each of which is transformed and encoded separately.

Page 38: Image Compression

Example: Fourier Transform

• The magnitude of the FT decreases as u, v increase, so the image can be approximated from only its K x K lowest-frequency coefficients:

$$\hat{f}(x,y) = \sum_{u=0}^{K-1} \sum_{v=0}^{K-1} T(u,v)\, e^{j2\pi(ux+vy)/N}, \qquad K \ll N$$
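A sketch of this truncation with NumPy (keeping exactly the slide's K x K corner of coefficients; for a real-valued image one would normally also keep the conjugate-symmetric coefficients):

```python
import numpy as np

def truncate_dft(image, K):
    # Keep only T(u,v) for 0 <= u, v < K, zero the rest, then invert.
    T = np.fft.fft2(image)
    kept = np.zeros_like(T)
    kept[:K, :K] = T[:K, :K]
    return np.real(np.fft.ifft2(kept))
```

Storing only K² of the N² coefficients gives a compression factor of roughly (N/K)² at the cost of some reconstruction error.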

Page 39: Image Compression

Transform Selection

• T(u,v) can be computed using various transformations, for example:– DFT

– DCT (Discrete Cosine Transform)

– KLT (Karhunen-Loeve Transformation)

Page 40: Image Compression

DCT

forward: $C(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\cos\left[\frac{(2x+1)u\pi}{2N}\right]\cos\left[\frac{(2y+1)v\pi}{2N}\right]$

inverse: $f(x,y) = \sum_{u=0}^{N-1}\sum_{v=0}^{N-1} \alpha(u)\,\alpha(v)\,C(u,v)\cos\left[\frac{(2x+1)u\pi}{2N}\right]\cos\left[\frac{(2y+1)v\pi}{2N}\right]$

where $\alpha(u) = \sqrt{1/N}$ if u = 0 and $\alpha(u) = \sqrt{2/N}$ if u > 0 (and similarly for $\alpha(v)$).
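A minimal sketch using SciPy (the orthonormal DCT-II/DCT-III pair implements the forward and inverse formulas above, applied separably):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # Separable 2-D forward DCT: transform columns, then rows.
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

block = np.random.rand(8, 8)
assert np.allclose(idct2(dct2(block)), block)   # perfect inversion
```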

Page 41: Image Compression

DCT (cont’d)

• Basis set of functions for a 4 x 4 image (i.e., cosines of different frequencies).

Page 42: Image Compression

DCT (cont’d)

• 8 x 8 subimages, 64 coefficients per subimage, 50% of the coefficients truncated:

Transform:  DFT   WHT   DCT
RMS error:  2.32  1.78  1.13

Page 43: Image Compression

DCT (cont’d)

• DCT minimizes "blocking artifacts" (i.e., boundaries between subimages do not become very visible).

• DFT: the implied n-point periodicity gives rise to discontinuities at subimage boundaries!

• DCT: the implied 2n-point periodicity prevents such discontinuities!

Page 44: Image Compression

DCT (cont’d)

• Subimage size selection:

(comparison: original image and reconstructions using 2 x 2, 4 x 4, and 8 x 8 subimages)